
OpenStack Summit Report

2015-11-11

by Donovan Jones

The Catalyst Cloud is built on an open source platform called OpenStack. Every six months OpenStack developers, operators, vendors and users get together for a summit where they plan out the next release. The latest summit was held in Tokyo, and a number of Catalyst Cloud team members were there to take part.

The OpenStack summit comprises two events: the design summit and the main conference. The design summit is where operators and developers plan out the features for the next release. Running in parallel to the design summit is a regular conference with many tracks and talks covering all aspects of OpenStack.

Most of our team's time was spent at the design summit, collaborating with other operators and developers on the next release of OpenStack. In this post I will discuss not the design summit but the themes and trends evident at the wider conference: Scale, Containers and Integration.

Scale

OpenStack is huge!

I knew that OpenStack was big, but this summit really brought it home.

The summit itself was enormous: over 5,000 attendees and 14 different tracks in the main conference alone, spread over many rooms in a very large convention complex.

The number of companies involved is large, with almost all of the biggest technology companies represented, including IBM, HP, Red Hat, Rackspace, Canonical, EMC, VMware, Cisco and many others. Even companies that do not contribute directly to OpenStack, like Microsoft and Google, were represented.

The resources being put into the project are very substantial. A good example was Intel and Rackspace announcing the "OpenStack Innovation Center", which will be staffed by developers dedicated to upstream development. They are also building two dedicated 1,000-node clusters for use in OpenStack development.

The scale of some organisations' presence was impressive, with HP, Red Hat, Canonical, IBM and others each having dozens of people on the ground.

In the Asia region the scale of some OpenStack deployments is massive; there were numerous examples from Japan, South Korea and China of extremely large production deployments handling huge traffic volumes. NTT, Yahoo Japan, Huawei, SK Telecom and NEC all have large and growing OpenStack deployments.

In addition to these massive deployments for more traditional workloads, some deployments in the HPC space are potentially very large in terms of node count. OpenStack will be a critical component of the Square Kilometre Array, which has impressive geographical scale.

Containers

Containers were a major theme, with everyone talking about them. There are three areas where containers matter for OpenStack: users want to run containerised applications on top of OpenStack, operators want to run OpenStack itself in containers, and operators would like to use containers as an alternative to full virtualisation.

Running containerised applications on OpenStack

Until now, users who wanted to run containerised applications on top of OpenStack needed to build their own solutions using compute instances. With the Magnum project being declared production ready, it is now possible to integrate container workloads more natively into OpenStack. There were some interesting talks from people who have gone down the build-it-yourself route; clearly, though, there are many users who would like to run containers as a service without building everything from scratch.

Magnum allows you to provision the standard Linux/Docker/Kubernetes stack within an OpenStack tenant. In Magnum the top layer of this stack (where Kubernetes sits) has been named the "Container Orchestration Engine", which I think is a useful term. The COEs currently supported in Magnum are Kubernetes and Docker Swarm, with Apache Mesos being worked on. Magnum makes no attempt to wrap or abstract the Docker or Kubernetes APIs; once provisioned, the user interacts with them directly.

Work is ongoing on provisioning networking and storage for containers. The Kuryr project bridges Docker and OpenStack networking by connecting Docker's new libnetwork with OpenStack's Neutron, allowing Docker networks to be provisioned directly by OpenStack without requiring double encapsulation of packets. The other area being worked on is backing Docker volumes with OpenStack block storage.
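
To make that workflow concrete, here is a minimal sketch of provisioning a Kubernetes COE by talking to Magnum's REST API directly from Python with the requests library. This is not the official client, and the endpoint URL, token, image, flavor and keypair values are placeholders for whatever your own cloud provides.

    # Hedged sketch: drive Magnum's REST API with plain HTTP. The endpoint,
    # token and the image/flavor/keypair names are all placeholders.
    import requests

    MAGNUM = "https://cloud.example.com:9511/v1"  # hypothetical endpoint URL
    HEADERS = {
        "X-Auth-Token": "<keystone-token>",       # obtained from Keystone first
        "Content-Type": "application/json",
    }

    # A "baymodel" is a template: which COE to run, and what image, flavor
    # and networking the cluster nodes should be built from.
    resp = requests.post(MAGNUM + "/baymodels", headers=HEADERS, json={
        "name": "k8s-template",
        "coe": "kubernetes",             # or "swarm"; Mesos support in progress
        "image_id": "fedora-21-atomic",  # placeholder Glance image
        "flavor_id": "m1.small",         # placeholder Nova flavor
        "keypair_id": "my-keypair",
        "external_network_id": "public",
    })
    resp.raise_for_status()
    baymodel_id = resp.json()["uuid"]

    # A "bay" is an actual cluster built from that template. Once it is
    # ACTIVE you talk to its Kubernetes API directly; Magnum does not wrap
    # or proxy the COE's own API.
    resp = requests.post(MAGNUM + "/bays", headers=HEADERS, json={
        "name": "demo-bay",
        "baymodel_id": baymodel_id,
        "node_count": 2,
    })
    resp.raise_for_status()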

Running OpenStack in containers

The Kolla project containerises OpenStack itself. The ability of containers to encapsulate an application runtime, making it portable, scalable and consistent, looks to be a promising approach to deploying OpenStack: it has the potential to decrease the pain of managing Python dependencies and to make packaging easier for operators deploying and upgrading OpenStack clouds.

Running containers to provide compute resources

The third interesting development in the containerisation field is the re-emergence of system-level containers. These are containers designed to be drop-in replacements for KVM and other full virtualisation technologies as a way of providing compute resources. Technologies like LXC/LXD and OpenVZ look to be close to production readiness within OpenStack. Because they do not provide full virtualisation, there is less abstraction and therefore better performance. From a service provider perspective this means greater instance density is possible on the same hardware; increases of a factor of 10 or more are possible. In addition, instances start faster and have better network and disk I/O, which is of direct benefit to end users. A lot of work has been put into this area, particularly around isolation and security.
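
To give a feel for how drop-in this swap is meant to be for an operator, here is an illustrative fragment of nova.conf on a compute node using libvirt's LXC backend instead of KVM. LXD and OpenVZ ship their own separate drivers with their own settings, so treat this as a sketch rather than a recipe.

    # nova.conf (compute node): serve instances as system containers via
    # libvirt's LXC backend rather than full KVM virtualisation.
    [DEFAULT]
    compute_driver = libvirt.LibvirtDriver

    [libvirt]
    # "kvm" gives full virtualisation; "lxc" runs instances as containers
    # on the shared host kernel, trading isolation for density and speed.
    virt_type = lxc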

Integration

Increasingly OpenStack is being positioned as an integration engine that allows organisations to tie disparate parts of their existing infrastructure together with new infrastructure, providing elastic resources for the organisation as a whole. It is billed as a single interface for running bare metal, virtualised and containerised workloads. Does it live up to the hype? Given what I experienced at the summit, it seems clear to me that for many organisations OpenStack has become a central and growing part of their infrastructure; as such they want to use it as an integration engine, but they are hitting obstacles and use cases where OpenStack is not a perfect fit.

One of the biggest areas of integration activity is networking, as demonstrated by the fact that Neutron (OpenStack networking) had the most commits of any project during the last cycle. There was also a large amount of noise about SDN and NFV; integrating these technologies with OpenStack is high on many organisations' agendas. Catalyst is currently more interested in some less glamorous features, like full IPv6 support and BGP integration with external networks.

The challenge for many of these organisations now is how to integrate OpenStack more closely with the non-OpenStack parts of their infrastructure, and how to use OpenStack to manage things like bare metal and containers that have traditionally not been part of it. There is a huge amount of activity from vendors, operators, developers and users all working to integrate OpenStack with other systems, and a lot of the talks and products I saw at the summit addressed this in some way. There is clearly still much to do, and OpenStack is not perfect, but the effort many organisations are putting into integration is substantial.

Videos

Nearly all of the talks are available online.

Here are a few that the team has highlighted as worth watching: