Ian Miell

Recent posts by Ian Miell

Greg Horie wrote:Hi Ian

To clarify, 'running your own private cloud' means a private OpenStack instance. It's a lot to manage and challenging to do well.



As ever, it depends, and there's no right answer for everyone. I have seen environments where OpenStack does make more sense, but it's very context-specific.

One reason (as mentioned above) to pursue an OpenStack or VMware solution is regulatory compliance. If you work in the financial sector, you may already have an OpenStack environment that's maintained by another team, and this can bring great benefits.

As also mentioned above, however, building a new OpenStack environment from scratch is relatively unlikely to be the way to go, as most of the industry is moving to cloud-based solutions.

If you are moving to container-based deployment strategies, then taking on OpenStack (rather than paying for the equivalent by consuming cloud services) on top of all the other challenges is going to increase the risk. That risk needs to be balanced against whatever risks cloud services carry for you, e.g. regulatory constraints or a lack of experience in that domain.

Other factors that may come into play are cost (cloud can get expensive at scale), lock-in (some organisations fear dependence on a single external cloud provider), and the quality of vendor support versus that of cloud support.

Hope that helps!
2 years ago

Greg Horie wrote:Hi Pini, Jamie, Michelle

Do you see a benefit in running a private cloud for VMs (e.g. OpenStack) when your infrastructure can run on containers? I know there are benefits, but do these outweigh all the operational costs of running your own private cloud for production operations?

Cheers,
Greg



Greg,

Hi, Ian here; I work with Jamie and Pini. I just wanted to first clarify what you meant by 'running your own private cloud' - do you mean a vendor's private cloud solution (e.g. VMware or similar), something self-rolled, or something else entirely?

Thanks,

Ian
2 years ago

Amandeep Singh Ghai wrote:Hello Ian,

I was too quick to post that question without much detail. I essentially meant:
* Data dependencies: usually involving sharing of data via web services / FTP, etc.
* Library dependencies: we also have a framework supporting several applications that use common libraries, and those are updated quite often.

What kind of challenges are presented under such situations?

Thanks, Amandeep



In terms of data dependencies, you can use volumes to share data between different containers, or abstract the data behind a service running in another container.
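As a quick illustration, here's a minimal sketch of the volume approach using the Docker SDK for Python ('pip install docker'); the volume name, image and file paths are purely hypothetical:

# Minimal sketch: two containers sharing data via a named volume.
import docker

client = docker.from_env()
client.volumes.create(name="shared-data")

# One container writes into the shared volume...
client.containers.run(
    "alpine",
    command=["sh", "-c", "echo 'hello from the producer' > /data/msg.txt"],
    volumes={"shared-data": {"bind": "/data", "mode": "rw"}},
    remove=True,
)

# ...and another reads it back.
output = client.containers.run(
    "alpine",
    command=["cat", "/data/msg.txt"],
    volumes={"shared-data": {"bind": "/data", "mode": "ro"}},
    remove=True,
)
print(output.decode().strip())  # hello from the producer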

Library dependencies are arguably easier to manage in Docker, since each image can be managed individually, or a common base image can be used to manage dependencies shared between different containers.
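To make the base-image idea concrete, here's a sketch (again via the Python SDK) in which shared libraries are pinned once in a base image and application images derive from it; 'myorg/base', 'myorg/app-a' and the pinned library are hypothetical:

# Sketch: shared dependencies live in one base image.
import io
import docker

client = docker.from_env()

# Pin common libraries once, in the base...
base = b"""
FROM python:3.11-slim
RUN pip install --no-cache-dir requests==2.31.0
"""
client.images.build(fileobj=io.BytesIO(base), tag="myorg/base:1.0")

# ...and derive each application image from it. Updating a shared
# library then means bumping the base and rebuilding the children.
app = b"""
FROM myorg/base:1.0
CMD ["python", "-c", "import requests; print(requests.__version__)"]
"""
client.images.build(fileobj=io.BytesIO(app), tag="myorg/app-a:1.0")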

One of the challenges is identifying and updating all your images when a change (e.g. for security) is required, and this is to an extent an unsolved problem. It is a tractable one, however, and arguably simpler than making a fundamental update to numerous VMs, where the outcome is less certain.
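For instance, a first step towards tracking this might be flagging running containers whose image has since been superseded. A rough, hypothetical sketch with the Python SDK:

# Flag containers running a stale build of their image reference.
import docker

client = docker.from_env()
for c in client.containers.list():
    ref = c.attrs["Config"]["Image"]  # the reference used at 'docker run' time
    try:
        current = client.images.get(ref)
    except docker.errors.ImageNotFound:
        continue
    if current.id != c.image.id:
        print(f"{c.name} is running a stale build of {ref}")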

Ian
6 years ago
Hi!

Can you elaborate on what you mean by 'dependencies between the applications'?

Are you talking about library dependencies, data dependencies, or logical dependencies?

Thanks,

Ian
6 years ago
Thanks Paul, we hope it is!

BTW are you referring to OpenShift v2 or v3? v3 is Docker/Kubernetes-based and is generally very easy to get going on. I'd be interested to know of other easier platforms.

Ian
6 years ago

Palak Mathur wrote:

Ian Miell wrote:

This is why our book contains a lot of material considered 'impure' Docker - i.e. not microservices, but monolithic and not highly orchestrated 'data centre as an operating system' type work.

Docker helped us turn our monolithic application into a single unit which could be easily shipped for testing or support or dev purposes. We wrote our own simple automation tool to help achieve this - ShutIt

Ian Miell



I think we are also doing some impure "Docker" work. Some of our stuff is not microservices at all. Docker is really interesting in the opportunities it seems to provide for all sorts of applications.



Indeed - it's a very broad and open tool - which makes writing a book on it very challenging!
6 years ago

Will Myers wrote:Hi,
How does Docker In Practice differ from Docker In Action?
Does it delve deeper into the topics and maybe skip some that are less important?
As a beginner, would Docker In Practice be a good starting point to learn the technology?



Hi Will,

Docker in Action is a more standard 'introduction' to Docker and overview of its features.

Docker in Practice was conceived far more as a handbook for people 'in the field' using Docker in various contexts. We try to be empathetic towards the harassed and busy reader by giving practical advice and tools. We also hope that along the way you will pick up lots of other useful tools and techniques for software engineering work. We strove to make it informal and pragmatic.

Chapter 2 covers the core of Docker in a 'hands-on' way and is available free here: https://www.manning.com/books/docker-in-practice. It gives a good flavour of our approach: very much not an expansion of the core documentation, but rather a guided way of getting your hands dirty with Docker in order to understand it fully. If you find it a little too hands-on, then https://www.manning.com/books/docker-in-action will more likely be for you!

Ian Miell

6 years ago

paul nisset wrote:
Hi ,
Thank you for your responses.
I was thinking of using it more in the way Palak mentioned


You can set Docker to configure the container the way you want and carry the same container across platforms - from your on-premises network to an EC2 instance.




simulating the interaction between 20 different servers

seems like an interesting use for testing a service .
If I were to do this, would I create/copy a new Docker instance for each server that would be calling the service?

Thanks,
Paul



With respect to carrying the container across platforms, Docker is very useful, but it's worth being aware of the following caveats:

- You can run into obscure kernel-related issues, especially if you're using unusual or 'bleeding edge' software (we talk about this in Chapter 12 of our book: https://www.manning.com/books/docker-in-practice)
- Data persistence can cause issues related to 'data gravity'. While Docker containers can easily be 'lifted and shifted', and volumes offer a means of managing data, very often the real challenge with your deployments is managing the large amounts of data associated with instances, and that data can be much harder to move.
- Related to the previous comment, Docker encourages an ephemeral architecture. If your application is not designed for this, it can limit the benefits of using Docker.
- Some Docker platforms (e.g. OpenShift) make subtle changes to the way applications are run (for security purposes, OpenShift by default does not allow containers to run as the user specified in the image, assigning an arbitrary user ID instead), and this can cause challenges

Regarding your second point: yes, that would be a great (and cheap, and quick-to-run) use case for Docker, i.e. modelling your multi-server set-up. Be aware that Docker containers are not VMs: while they can be treated as such, you may run into issues (e.g. the absence of an init process).

We discuss this extensively in the book, as it's an area of great contention; many feel this is an 'impure' approach. Our view is that Docker is a tool that can save your organisation money, and if microservices don't make sense for you then you shouldn't be forced to go that route - software is a means to a (usually business) end, not the other way around. We cover these issues in chapters 3 (using Docker as a VM) and 8 (Network Simulation: Realistic Environment Testing Without the Pain), and in places throughout the book.
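To give a flavour, here's a rough sketch of modelling a 20-server set-up with the Python SDK; the network name and the choice of nginx are just for illustration:

# Spin up twenty lightweight 'servers' on one user-defined network.
import docker

client = docker.from_env()
net = client.networks.create("simnet", driver="bridge")

servers = [
    client.containers.run(
        "nginx:alpine", name=f"server-{i}", network="simnet", detach=True
    )
    for i in range(20)
]

# Containers on a user-defined network resolve each other by name,
# so a test client can exercise any 'server' by hostname.
print(client.containers.run(
    "alpine",
    command=["wget", "-qO-", "http://server-7/"],
    network="simnet",
    remove=True,
).decode()[:60])

# Tear the simulation down.
for c in servers:
    c.remove(force=True)
net.remove()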
6 years ago

Luca Botti wrote:Hi all,

I am just asking because Docker, while simple at the start, requires a completely new mindset around deployment. And, to me, it looks like a complete DevOps infrastructure needs to be in place for it to be "production ready".
While experimenting from the command line is quite simple, things can get out of control fast, so the complete "assemble-deploy-manage" pipeline needs to be in place - not only Kubernetes or Swarm, but also tooling on the dev side (whatever you use).

Just my thinking.

Regards



Great question!

In my experience, most organisations don't have much of a DevOps infrastructure. Indeed, the company where we introduced Docker had relatively little.

This is why our book contains a lot of material considered 'impure' Docker - i.e. not microservices, but monolithic and not highly orchestrated 'data centre as an operating system' type work.

Docker helped us turn our monolithic application into a single unit which could be easily shipped for testing or support or dev purposes. We wrote our own simple automation tool to help achieve this - ShutIt

Once this was in place, the concepts of phoenix deployment and reproduction of state were more easily digested by devs, test and support alike.

And finally, this could be placed in a DevOps pipeline more easily, as state was more easily recovered.

The book covers all these steps, and is structured to go from 'command line' to fully-orchestrated production deployments.

In terms of things getting 'out of control', this is why Platforms as a Service were created. The one I'm most familiar with (origin/OpenShift) takes care of a lot of the heavy lifting of managing the build and deployment of your images, leaving the dev or ops team to get on with curating this environment.

Ian Miell
6 years ago

Tim Cooke wrote:Welcome to the Ranch guys! I hope you enjoy your stay and have fun answering all our questions



Thanks both!
6 years ago