
Cody Bumgardner
since May 03, 2016

Recent posts by Cody Bumgardner

[Image: "why not both" meme]

In all seriousness, the overlap of OpenStack and Cloud Foundry is mostly in the application orchestration layer. So why not learn OpenStack, at least enough to provide the underlying infrastructure, and use it to host a Cloud Foundry instance [1]? This way you can work with both, and you get the added benefit of two application orchestration platforms.


5 years ago
+1 for Jeanne's response.
5 years ago
Congratulations Peter, Tim, Magnus, and Chaminda!

I hope you enjoy the book.

5 years ago
I certainly agree with Tim's points.

I don't know what Amazon has going on "under the hood" and can only speculate. In terms of OpenStack, there is no single type of networking [1]. The majority of people (based on user surveys) use Neutron and OVS, but others use the Nova plugin directly. Within Neutron alone there are dozens of drivers, ranging from generic SDN control to vendor-specific integrations. There are even network interface vendors [2] with hardware specifically for OVS/OpenStack.

You have a great deal of flexibility when it comes to OpenStack and networking. For instance, I know of a major cloud provider that has a custom network management solution that conforms to the Neutron API, so as far as OpenStack components are concerned they are working directly with Neutron.


6 years ago
There is certainly overlap between what OpenStack and other, what I would call regional or data-center-scale, orchestration tools do, but many aspects of each framework are distinct. Architecturally, only you can determine whether you can live with one, the other, or both.

Fundamentally, OpenStack provides lower-level control of infrastructure (VM, container, and bare-metal provisioning; vendor-specific integration) than any container-only framework. I would say that OpenStack is fundamentally a "cloud operating system," while container-based frameworks are more "application delivery systems." Depending on your requirements you might not need OpenStack for this; in fact, you might not need Kubernetes or Swarm either, since you can acquire resources from Amazon.

Professionally I have deployed OpenStack instances to control underlying infrastructure (network, storage, etc.) and run containers within OpenStack.

Some additional thoughts:

Distributed system scheduling and orchestration is an area of my research and as a result I have spent a great deal of time thinking about such things.

I generally think of orchestration systems based on the scope of their control and break them down into three groups:

Cluster:
-High performance cluster (HPC)
-Hadoop, Spark, etc.

Data center / Region:
-OpenStack clouds
-Amazon EC2
-Microsoft Quincy [9] and Apollo [10]
-Google Borg [11] and Omega [12]
-Kubernetes [1]

Application:
-Typically application specific

Google Borg [11]: large-scale cluster management software which, until recently*, was considered "Google's Secret Weapon" [13].
-Two-phase scheduling: find a suitable node, score and schedule best suitable node.
-High (service) and low (batch) priority scheduling, with independent resource quotas.
-Typical scheduling time is 25s. However, global (cluster) optimality is not attempted when making scheduling decisions.
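The two-phase (filter, then score) pattern described above can be sketched in a few lines of Python. The node/task fields and the best-fit scoring rule here are illustrative assumptions for the sketch, not Borg's actual policy:

```python
# Toy two-phase scheduler: filter feasible nodes, then score and pick the best.
# Node/task shapes and the scoring rule are illustrative, not Borg's real policy.

def feasible(node, task):
    """Phase 1: keep only nodes with enough free CPU and memory."""
    return node["free_cpu"] >= task["cpu"] and node["free_mem"] >= task["mem"]

def score(node, task):
    """Phase 2: prefer the node left with the least slack (best fit)."""
    return -((node["free_cpu"] - task["cpu"]) + (node["free_mem"] - task["mem"]))

def schedule(nodes, task):
    candidates = [n for n in nodes if feasible(n, task)]
    if not candidates:
        return None  # task stays pending, analogous to Borg's pending queue
    return max(candidates, key=lambda n: score(n, task))

nodes = [
    {"name": "n1", "free_cpu": 4, "free_mem": 8},
    {"name": "n2", "free_cpu": 2, "free_mem": 4},
]
task = {"cpu": 2, "mem": 4}
print(schedule(nodes, task)["name"])  # n2: the tighter fit wins
```

Note that, as in Borg, each decision is local to one task; no attempt is made at cluster-wide optimality.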

Apache Mesos [14]: an open-source cluster manager providing resource isolation and sharing across distributed resources.
-Mesos began as a research project [15] in the UC Berkeley RAD Lab by then PhD student Benjamin Hindman*
-Mesos has been adopted [16] by Twitter, eBay, Airbnb, Apple and at least 50 other organizations.
-“Mesos is a distributed systems kernel that stitches together a lot of different machines into a logical computer. It was born for a world where you own a lot of physical resources to create a big static computing cluster.”[17]

Kubernetes by Google [18]: is an open-source platform for automating deployment, scaling, and operations of application containers across clusters of hosts.
-Kubernetes is based [19] on Google's Borg and "The Datacenter as a Computer” [20] papers.
-Kubernetes partners include Microsoft, RedHat, VMware, IBM, HP, Docker, CoreOS, Mesosphere, and OpenStack*.
-“Kubernetes is an open source project that brings 'Google style' cluster management capabilities to data centers.” [17]
-“Kubernetes' goal is to become the standard way to interact with computing clusters. Their idea is to reproduce the patterns that are needed to build cluster applications based on experiences at Google.”[17]

Note: From a scheduling and orchestration level, these are not global (multi-zone) schedulers!

-Most data center scheduling is based on bin packing optimization of CPU, memory, and network bandwidth resources, where resources are assumed to be uniform (by value).
-“Kubernetes cluster is not intended to span multiple availability zones. Instead, we recommend building a higher-level layer to replicate complete deployments of highly available applications across multiple zones”
-Application-centric schedulers like Fenzo* [22] (for Mesos) are designed to manage ephemerality aspects unique to the cloud, such as reactive stream processing systems for real-time operational insights and managed deployments of container-based applications.
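The bin-packing view of data center scheduling mentioned above can be illustrated with a simple first-fit-decreasing sketch in Python. The single CPU dimension and the capacities are made up for illustration; real schedulers pack several dimensions (CPU, memory, network bandwidth) at once:

```python
# First-fit-decreasing bin packing over a single (illustrative) CPU dimension.

def first_fit_decreasing(requests, capacity):
    """Place each request (largest first) on the first node with room."""
    nodes = []  # each node is a list of placed requests
    for r in sorted(requests, reverse=True):
        for node in nodes:
            if sum(node) + r <= capacity:
                node.append(r)
                break
        else:
            nodes.append([r])  # no node fits: open a new one
    return nodes

placements = first_fit_decreasing([5, 4, 3, 2, 2], capacity=8)
print(len(placements))  # prints 2: [5, 3] and [4, 2, 2]
```

The heuristic packs tightly in one dimension; once resources stop being uniform by value, this simple greedy picture breaks down, which is part of why application-centric schedulers exist.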

[9] M. Isard, V. Prabhakaran, J. Currey, U. Wieder, K. Talwar, and A. Goldberg. Quincy: fair scheduling for distributed computing clusters. In Proc. ACM Symp. on Operating Systems Principles (SOSP), 2009.
[10] E. Boutin, J. Ekanayake, W. Lin, B. Shi, J. Zhou, Z. Qian, M. Wu, and L. Zhou. Apollo: scalable and coordinated scheduling for cloud-scale computing. In Proc. USENIX Symp. on Operating Systems Design and Implementation (OSDI), Oct. 2014.
[11] Verma, Abhishek, et al. "Large-scale cluster management at Google with Borg." Proceedings of the Tenth European Conference on Computer Systems. ACM, 2015.
[12] Schwarzkopf, Malte, et al. "Omega: flexible, scalable schedulers for large compute clusters." Proceedings of the 8th ACM European Conference on Computer Systems. ACM, 2013.
[15] Hindman, Benjamin, et al. "A common substrate for cluster computing." Workshop on Hot Topics in Cloud Computing (HotCloud). Vol. 2009. 2009.
[20] Barroso, Luiz André, Jimmy Clidaras, and Urs Hölzle. "The datacenter as a computer: An introduction to the design of warehouse-scale machines." Synthesis lectures on computer architecture 8.3 (2013): 1-154.
6 years ago
For home testing you can nest hypervisors (a hypervisor inside a hypervisor), allowing a multi-node deployment without much hardware. The performance will not be very good, but it will be good enough to help you learn. From a learning perspective, you can also do a single-node deployment using DevStack.
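For the single-node DevStack route, the quickstart is roughly the following provisioning fragment; the password values are placeholders, and you should check the DevStack documentation for current details:

```shell
# Clone DevStack and run it as a non-root user with sudo rights.
git clone https://opendev.org/openstack/devstack
cd devstack

# Minimal local.conf; the passwords here are placeholders.
cat > local.conf <<'EOF'
[[local|localrc]]
ADMIN_PASSWORD=secret
DATABASE_PASSWORD=secret
RABBIT_PASSWORD=secret
SERVICE_PASSWORD=secret
EOF

./stack.sh   # installs and starts a single-node OpenStack
```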

In terms of bare-metal provisioning there is the Ironic project, which is used to deploy OS images on physical servers, typically for high performance applications where virtualization is not appropriate.

There is support [1] for VMware ESX, but not all features are available [2]. I venture to say there are more instances of OpenStack (testing and such) running in VirtualBox on a laptop than anywhere else.

6 years ago
The OpenStack framework is constantly evolving, but some foundational component choices have remained popular for some time. For instance most people [1] use:

Hypervisor: KVM
Networking: OVS
Database: MySQL
Storage: Ceph

While popularity does not guarantee superiority, it does typically mean that more development and testing have gone into these areas. I personally think that VMware ESX and IBM PowerVM are the most advanced hypervisors, but I don't use either of them due to limited or lacking community support.

I would say that, when in doubt, go with the default choices/settings.
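As a concrete example of where the KVM default shows up, the hypervisor choice is set via the standard virt_type option in nova.conf (this excerpt just illustrates that one setting; a real file has much more in it):

```ini
# /etc/nova/nova.conf (excerpt): the common KVM-via-libvirt choice
[libvirt]
virt_type = kvm
```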

6 years ago
There should be no question that core OpenStack components in and of themselves are ready for serious production use cases. That being said, you need either a strong internal team and/or a good vendor partnership for a major OpenStack deployment.

As Tim said, Rackspace contributed the early object store (Swift) components and NASA contributed Nebula (Nova), which later became the foundation for OpenStack.

Here is a nice historical OpenStack timeline:

There are tons of notable and very serious OpenStack use cases, including Walmart, CERN, and many others across many verticals.

6 years ago
Thanks! Glad to be there.
6 years ago