OpenStack: The Good and Not-So-Good Bits

OpenStack holds a great deal of promise as a cloud platform built on open standards, and has support from the major players in cloud services. It has the potential to allow organizations to set up their own private cloud services that are designed to interoperate. Is it ready yet for companies that want the convenience of cloud solutions, but with more control, and without the large subscription fees? Robert Sheldon finds out.

You hear a lot about OpenStack these days, particularly when discussions around the cloud arise. Rackspace, Red Hat, Ubuntu, and VMware are just some of the companies that freely namedrop the OpenStack moniker in relation to their own offerings. What’s often lacking in these discussions, however, is a concise explanation of what OpenStack is and how it fits into the overall scheme of things.

That’s not unusual with technology, of course, but few initiatives have caught the imagination of cloud pundits quite like the OpenStack platform. Yet those outside the inner cloud circle continue to ask, “What exactly is OpenStack?”

The brief answer goes something like this. OpenStack is a framework for controlling the compute, storage, and networking resources necessary to support a large cloud-focused data center. An OpenStack infrastructure comprises the hardware, software, APIs, user interfaces, virtualization mechanisms, and countless other pieces that deliver the components and interoperability needed to make a comprehensive cloud platform possible. With OpenStack, administrators can manage the resources through a single dashboard, and users can provision resources through a web interface.

That’s the quick answer. And for some of you, that might be enough. But if you need a bit more substance, like I usually do, read on. Here I dive deeper into the OpenStack Foundation and its platform and the pieces that make it all work.

The OpenStack Foundation

In 2010, Rackspace and NASA launched OpenStack, an open source project aimed at enabling organizations to offer their own cloud-computing services, while fostering open standards and cloud interoperability. Rackspace donated the code that powered its Cloud Files and Cloud Server services. The project also incorporated technology used to drive the NASA Nebula Cloud platform.

The OpenStack Foundation was formed to promote the development, adoption, and distribution of the OpenStack platform, providing a home for the OpenStack software, independent of any one participating organization. Since then, the OpenStack project has continued to grow and evolve and now represents a global collaboration effort. By July 2014, according to a survey, OpenStack was the most popular open source cloud project out there, followed by Docker and KVM (Kernel-based Virtual Machine).

The Foundation’s goal is to deliver a standardized platform that can be used to support all types of clouds, with components that are simple to implement and scale, as well as rich in features. All OpenStack code is freely available under the Apache 2.0 license, and all roadmaps and code reviews are public. Every six months, the foundation holds a design summit for gathering requirements and writing specifications. The summits too are open to the public.

The OpenStack community includes developers, users, organizations, researchers, and service providers from around the world. Foundation membership is free and open to anyone, although members are expected to participate through technical contributions or community building. The foundation bylaws define a number of bodies that govern the OpenStack project. Organizations such as BMW, Disney, Go Daddy, Cigna, Lenovo, CERN, UCLA, and the Wikimedia Foundation have all implemented OpenStack in one form or another.

That Still Doesn’t Tell Me What OpenStack Is

OpenStack refers to all that software that the foundation is giving away for free. The OpenStack platform comprises a set of interrelated components that make it possible for organizations and service providers to set up a cloud infrastructure that offers on-demand computing resources across large networks of virtual machines. Organizations ranging from single-node SMBs to those with massive data centers can take advantage of OpenStack technologies.

The foundation organizes the OpenStack project around compute, storage, and networking resources as well as the shared services necessary to tie them together.

The compute technologies focus on managing virtualized commodity server resources, including CPUs, memory, disks, and network interfaces, making it possible to provision large networks of virtual machines. OpenStack has also been making inroads into bare metal implementations, although historically the focus has been on virtual environments. To that end, you can use one of a number of supported hypervisors, such as KVM or XenServer, in conjunction with OpenStack. In addition, OpenStack compute systems support Linux container technologies such as LXC.

The compute architecture follows a distributed, asynchronous model for supporting massive scalability and highly available systems. You can scale resources horizontally, with no proprietary hardware or software requirements. OpenStack also provides a set of APIs that enable applications to access compute resources directly in order to manage and secure them. You can, for example, programmatically allocate IP addresses and virtual LANs (VLANs).
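To make the idea of programmatic resource allocation concrete, here is a small conceptual sketch in Python. It is not the OpenStack compute API itself (the class and method names are hypothetical); it simply models what an application does when it allocates IP addresses and VLAN IDs from pools on demand:

```python
import ipaddress

class AddressAllocator:
    """Toy allocator modeling programmatic IP and VLAN assignment.
    A conceptual sketch only, not the actual OpenStack API."""

    def __init__(self, cidr, vlan_range):
        # Build pools of free host addresses and VLAN IDs.
        self._free_ips = list(ipaddress.ip_network(cidr).hosts())
        self._free_vlans = list(range(*vlan_range))
        self.assignments = {}  # instance name -> (ip, vlan)

    def allocate(self, instance):
        # Hand the next free IP address and VLAN ID to the instance.
        ip = self._free_ips.pop(0)
        vlan = self._free_vlans.pop(0)
        self.assignments[instance] = (ip, vlan)
        return ip, vlan

    def release(self, instance):
        # Return the instance's resources to the pools for reuse.
        ip, vlan = self.assignments.pop(instance)
        self._free_ips.append(ip)
        self._free_vlans.append(vlan)

alloc = AddressAllocator("10.0.0.0/28", (100, 110))
ip, vlan = alloc.allocate("web-01")
print(ip, vlan)  # → 10.0.0.1 100
```

In a real deployment, the same allocate-and-release pattern happens through authenticated calls to the OpenStack REST APIs rather than an in-process object.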

Within the storage category, you’ll find that OpenStack supports both block storage and object storage, with a number of deployment options available to each. Block storage, as the name suggests, stores chunks of data in blocks, each with its own address that applications can use to access the data. With object storage, data is treated as individual objects (typically files). Bundled with each object is metadata that identifies the object as well as optional contextual information. Applications can access an object’s data by referencing its globally unique identifier.
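The difference in how the two models address data can be sketched in a few lines of Python. This is purely illustrative (the classes are hypothetical, not how Swift or Cinder are implemented): block storage addresses fixed-size blocks by number, while object storage stores each object whole, with metadata, behind a globally unique identifier:

```python
import uuid

class ObjectStore:
    """Conceptual object storage: whole objects plus metadata,
    retrieved by a globally unique identifier. Illustrative only."""

    def __init__(self):
        self._objects = {}

    def put(self, data, metadata=None):
        oid = str(uuid.uuid4())  # globally unique identifier
        self._objects[oid] = (data, metadata or {})
        return oid

    def get(self, oid):
        return self._objects[oid]

class BlockDevice:
    """Conceptual block storage: fixed-size blocks, each reachable
    by its own address (the block number). Illustrative only."""

    def __init__(self, num_blocks, block_size=512):
        self.block_size = block_size
        self._blocks = [bytes(block_size)] * num_blocks

    def write(self, address, data):
        # Pad the payload to a full block and store it at the address.
        self._blocks[address] = data.ljust(self.block_size, b"\x00")

    def read(self, address):
        return self._blocks[address]

store = ObjectStore()
oid = store.put(b"backup.tar.gz contents", {"content-type": "application/gzip"})
data, meta = store.get(oid)

disk = BlockDevice(num_blocks=8)
disk.write(3, b"hello")
```

The object model’s flat, identifier-based addressing is what makes it easy to distribute and replicate across many servers, which is why it suits archives and backups, while block addressing suits workloads that need disk-like performance.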

OpenStack storage can leverage commodity hardware, while providing data redundancy and self-healing reliability. With block storage, you can connect storage devices to compute instances to expand your storage and achieve better performance. The block storage systems are fully integrated into OpenStack and support such platforms as NetApp and SolidFire. If you need a more cost-effective solution, you can use object storage to distribute and scale out your storage, which can be ideal for operations such as archiving and backing up data that require less critical performance.

The OpenStack networking technologies are scalable and pluggable and provide a rich set of APIs for IP and network management. You can use the networking technologies to manage IP addresses, support floating addresses, create networks, control traffic, and perform a number of other operations. OpenStack networking also includes an extensive framework that supports integration with other services, such as firewalls, load balancers, intrusion detection systems, and virtual private networks (VPNs). As with other OpenStack components, administrators can control networking resources through the dashboard or by using OpenStack APIs to automate operations.

Integral to the OpenStack platform is a set of shared services that span the compute, storage, and networking components. The services are integrated with both OpenStack components and external services and can perform a wide range of operations. For example, the image service supports discovery, delivery, and registration operations, while the telemetry service lets you aggregate usage and performance data. There’s even an orchestration service that helps you automate infrastructure deployment.

One of the up-and-coming components that’s been making a splash in cloud circles is the identity service, which provides a central directory for mapping OpenStack users to the services they can access. The identity service includes a common authentication mechanism that enables distributed identity capabilities across the OpenStack landscape, while supporting multiple forms of authentication. The identity service also lets you configure centralized policies and can be integrated with backend services such as LDAP-based solutions. A number of providers are already leveraging the identity services, including Cisco, Rackspace, HP, Ubuntu, IBM, and EasyStack.
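The core idea, a central directory that maps users to services and issues tokens for authentication, can be sketched as follows. This is a deliberately simplified model (all names are hypothetical); Keystone’s real design adds projects, roles, domains, and pluggable backends such as LDAP:

```python
import secrets

class IdentityService:
    """Toy identity service: a central directory mapping users to the
    services they may access, with token-based authentication.
    A conceptual sketch only, not Keystone's actual implementation."""

    def __init__(self):
        self._credentials = {}   # user -> password
        self._grants = {}        # user -> set of permitted services
        self._tokens = {}        # opaque token -> user

    def register(self, user, password, services):
        self._credentials[user] = password
        self._grants[user] = set(services)

    def authenticate(self, user, password):
        # Issue an opaque token on successful login.
        if self._credentials.get(user) != password:
            raise PermissionError("invalid credentials")
        token = secrets.token_hex(16)
        self._tokens[token] = user
        return token

    def authorize(self, token, service):
        # Validate the token, then check the user's service grants.
        user = self._tokens.get(token)
        return user is not None and service in self._grants[user]

idp = IdentityService()
idp.register("alice", "s3cret", ["compute", "object-storage"])
token = idp.authenticate("alice", "s3cret")
```

Every other OpenStack service then only has to validate a token against this one directory, which is what makes centralized policies across the whole cloud possible.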

Putting Together the OpenStack Pieces

The components that make up the OpenStack platform provide the building blocks for creating a cloud environment. You can pick and choose from the selection of software depending on the needs of your overall design.

Currently, the OpenStack Foundation offers 12 components, or programs, that the foundation considers to be integrated and ready for production, with more programs under development. The following table provides an overview of each of the integrated programs.

Compute (Nova): Program for managing and automating pools of compute resources that supports a wide range of virtualization technologies, bare metal configurations, and high-performance computing.
Networking (Neutron): API-driven, scalable system for managing networks and IP addresses, based on a pluggable backend architecture that can be used with commodity hardware.
Object Storage (Swift): Distributed storage platform for providing redundant storage across clustered servers, scalable to handle petabytes of data.
Block Storage (Cinder): System for exposing and connecting block storage devices to compute instances and allowing cloud users to manage their own storage needs.
Identity (Keystone): Directory service with a central repository that maps users to the OpenStack services they can access, serving as an authentication system across the cloud environment.
Image Service (Glance): Service for discovering, registering, and delivering disk and server images, which can be stored in a variety of back-end systems.
Dashboard (Horizon): Graphical interface for accessing, provisioning, and automating the various OpenStack resources.
Telemetry (Ceilometer): Service that aggregates usage and performance information collected from the services deployed in an OpenStack cloud.
Orchestration (Heat): Template-driven engine for describing and automating the infrastructure’s deployment and post-deployment operations.
Database (Trove): Service that allows users to provision and manage one or more relational databases, including patching the databases and performing backups.
Data Processing (Sahara): Service for provisioning a data-intensive application cluster on Hadoop or Spark within the OpenStack environment.
Bare-Metal Provisioning (Ironic): Program for provisioning bare metal machines rather than virtual machines, providing a hypervisor-like API and set of plug-ins.

You can swap out any OpenStack component for a non-OpenStack one, depending on your organization’s needs. For example, an OpenStack component might not support all the features you need, or you might be integrating your OpenStack solution with a legacy system and want to take advantage of the resources at hand, rather than incur additional costs or unnecessary development efforts, such as trying to engineer drivers for specific types of hardware.

It should be noted, however, that drivers are becoming less of an issue all the time. More OpenStack components than ever are shipping with the ability to communicate with third-party systems. For example, Cinder block storage comes with drivers for NetApp Data ONTAP, Pure Storage FlashArray, and IBM Storwize.

The OpenStack Choice

The latest release of OpenStack, Kilo, contains the most recent updates of the integrated components, which include nearly 400 new features. According to the OpenStack Foundation, 169 organizations contributed to the Kilo release, representing almost 1,500 individual participants. The most notable addition is Ironic, the first integrated OpenStack component for provisioning bare metal machines rather than virtual environments.

Ironic was the only new component to be added to the integrated stack with this release. All other updates and added features occurred within components that already existed. For example, the Keystone identity service includes enhancements to better support hybrid workloads in multi-cloud environments, and the Neutron networking component includes new features related to VLAN transparency, Open vSwitch port security, and maximum transmission unit API extensions. In addition, Cinder block storage now allows users to attach a volume to multiple compute instances.

Organizations considering a Kilo implementation have several options. They can download the software and do it themselves, go through a service provider and let them do the bulk of the work, or acquire a packaged distribution, which falls somewhere in between the other two in terms of effort.

Rackspace is perhaps the best known in the OpenStack provider sector, offering both public and private cloud services, all hosted within the Rackspace data centers. Blue Box is another company providing OpenStack as a cloud service, offering its own take on the private cloud as a service (PCaaS). Not surprisingly, such services eliminate many of the headaches and upfront costs that come with trying to build your own private cloud, in exchange for ongoing subscription fees and loss of control over many of the implementation specifics.

Companies such as Red Hat and Ubuntu take a different approach from service providers by offering OpenStack distributions that support enterprise-grade private cloud solutions. A distribution package removes many of the pain points you’d find in a do-it-yourself solution, ensuring a stable and production-ready cloud environment that remains in control of the organization implementing it. At the same time, a distribution lets you avoid the long-term subscription fees that come with a service provider, as well as the limitations often imposed by such a service. Under the distribution model, you’re essentially purchasing the expertise you would otherwise have to acquire in house or find through a service provider.

There is no one right answer when it comes to choosing among these three options for implementing an OpenStack cloud. Large organizations with the necessary expertise and resources might benefit from the flexibility and long-term savings that come from doing it themselves. Smaller organizations with a limited IT staff will likely not have the time or resources it would take to build an OpenStack cloud from scratch. If it comes down to purchasing a distribution or subscribing to a service, organizations will have to determine what exactly they want out of the cloud and what their long-range plans are for those cloud services. Each approach has its advantages and disadvantages, although an organization that decides to strike out on its own should be especially wary.

The Other Side of the Silver Lining

Although the approach is not for everybody, a number of organizations have taken the do-it-yourself route to implementing an OpenStack cloud, and they have good reason for doing so. Its open source nature makes the software free and customizable, and adopters benefit from a global community of collaborators. Organizations also have the flexibility to pick and choose which components to implement and which to replace or augment.

As enticing as all this sounds, the OpenStack cloud is not without its dark side. To begin with, OpenStack is notoriously complex to implement. An organization requires engineering skills and resources to navigate the hundreds of configuration options as well as the variety of interoperability issues that can arise. Organizations that come to OpenStack expecting to find a vanilla cloud platform, ready to implement into production out-of-the-box, are likely to be both disappointed and frustrated. There is a steep learning curve that comes with OpenStack, and those new to the technology had better be prepared.

Fortunately, there’s a large user community out there, which can help mitigate some of the pitfalls, but it doesn’t remove the need for highly trained, dedicated personnel to make all the pieces fit together and work as they should.

Another common complaint from organizations trying to implement OpenStack is the lack of accurate and comprehensive documentation to ease the implementation process. Although this is a familiar gripe with many open source projects, it remains a problem nonetheless. When users come across documents that are inaccurate or outdated, the OpenStack experience becomes even more difficult and frustrating, sometimes leading to an organization abandoning the project altogether.

OpenStack is a work in progress that is evolving continuously. Some components are more mature than others, and some offer a greater degree of interoperability. A scaled-out, high-performing enterprise implementation requires a great deal of customization and skilled personnel with the know-how necessary to fill in the gaps.

Decision-makers should be wary of becoming so enchanted with the free nature of open source licensing that they fail to take into account the true costs of implementing such a solution. You must consider the resources and infrastructure necessary to implement an OpenStack cloud, maintain it over the long haul, and be able to update and upgrade components when needed. You will need highly skilled personnel dedicated to the project at every phase of its lifecycle.

Acquiring and keeping the necessary personnel can present yet another challenge. Qualified OpenStack developers and administrators are in short supply and consequently don’t come cheap. You can, of course, invest in the training needed to prepare your in-house staff for an OpenStack implementation, assuming you have an IT staff large enough to warrant such a decision, but you risk losing them to the highest bidder once they have such marketable skills.

Given the challenges that OpenStack presents, it’s not surprising that many organizations, especially smaller ones, turn to services or distributions for their OpenStack solutions. Even so, some organizations will choose to stick with the do-it-yourself approach, in no small part because they hope to avoid vendor lock-in.

Unfortunately, even this issue is not so simple. You might be able to reduce vendor lock-in, but it would be difficult to avoid it altogether, particularly when it comes to implementing OpenStack at scale. Despite the various OpenStack components that are available, organizations often need to supplement or replace some of those pieces with ones that better meet their needs, which often means locking into proprietary solutions such as DNS or software-defined networking products. Then there are the firmware and operating systems running on devices such as network switches, which force yet another degree of lock-in. Even if you’re able to minimize your vendor dependencies, once you commit to even a few, it’s difficult to change course.

The OpenStack Solution

Until the OpenStack platform becomes more mature, organizations that don’t have massive engineering resources will likely need to turn to a service or distribution for their OpenStack solution. In this way, they can leverage the expertise offered by vendors and providers, while mitigating their own risks. If an organization does have the technical resources to make its own OpenStack implementation feasible, then those making the decisions must be sure to take into account the full costs of implementing, maintaining, and supporting an OpenStack cloud over the long term.

Despite concerns over OpenStack, the foundation has come a long way in delivering a credible cloud platform. Its open nature invites collaboration and innovation at a global scale, and OpenStack comes at a time when the industry at large is moving toward more open and interoperable technologies. Even companies such as Microsoft and IBM are now embracing the movement, at least to some degree. In fact, IBM is a Platinum member of the OpenStack Foundation, along with other big players such as HP, Intel, Rackspace, and AT&T. Clearly, they see OpenStack as well worth the investment in time and resources. If the platform can reach the level of scalability, interoperability, and flexibility that the foundation hopes to achieve, OpenStack will indeed become a force to be reckoned with.