Windows Containers and Docker

Windows Server 2016 features support for containers. These are not Linux-based, but containers that run on Windows and run Windows on the inside.
These conform to the Open Container Initiative (OCI). They let you run applications isolated from the rest of the system, within portable containers that include everything an application needs to be fully functional. As they did with Linux, containers will change the nature of the software supply chain for Windows users.

Containers have been a mainstay of Linux computing for a number of years now. Google, for example, has long implemented container-based solutions throughout its data empire, delivering massive distributed applications to millions of users.

Yet Google hasn’t been alone in its enthusiasm for container computing. Any organization with the necessary resources has been able to participate in the container shuffle, with necessary resources being the operative phrase. It wasn’t until Docker standardized container delivery and management a few years back that the technology made inroads into humbler realms. Docker’s open-source nature and relative ease of implementation have made it possible for just about anyone to benefit from the speed, flexibility, and simplified deployment that containers offer.

The Docker revolution has become so significant that even Microsoft has plowed forward into container territory, first through Docker/Linux support in Azure and now through integration in Windows Server 2016, currently at Technical Preview 5. What’s most interesting is that the Windows Server containers are not Linux-based, but rather something entirely new. Windows containers: containers that run on Windows and run Windows on the inside.

Microsoft is so serious about containers, in fact, that it now actively participates in the Open Container Initiative (OCI) and has embraced the collaborative mindset as if it came up with it on its own, promising seamless integration with the Docker ecosystem, despite that ecosystem’s open-source, community-minded, Linux-based heritage.

The Windows container

The Windows container shares many similarities with its Linux counterpart. Both provide an isolated environment for running applications without affecting the rest of the system and without being affected by that system. The containers use advanced isolation techniques to provide discrete and portable environments that include almost everything an application needs to be fully functional.

A container looks a lot like a virtual machine (VM), and is often considered a type of virtualization, but the two are distinctly different. True, each runs an operating system (OS), provides a local file system, and can be accessed over a network, just like a physical computer. However, with a VM, you’re dealing with a full and independent OS, along with virtualized device drivers, memory management, and other components that add to the overhead.

A container shares more of the host’s resources than a VM and consequently is more lightweight, quicker to deploy, and easier to scale across data centers. In this way, the container can offer a more efficient mechanism for encapsulating an application, while providing the necessary interface to the host system, all of which leads to more effective resource usage and greater portability.

Microsoft plans to offer two types of containers in Windows Server 2016: the Windows Server container and the Hyper-V container. The two types function in the same way and can be created and managed identically. Where they differ is in the level of isolation each one provides.

The Windows Server container shares the kernel with the OS running on the host machine, which means all containers running on that machine share the same kernel. At the same time, each container maintains its own view of the OS, registry, file system, IP address, and other components, with isolation provided to each container through process, namespace, and resource control technologies.

The Windows Server container is well suited for situations in which the host OS and containerized applications all lie within the same trust boundary, such as applications that span multiple containers or make up a shared service. However, Windows Server containers are also subject to an OS/patch dependency with the host system, which can complicate maintenance and interfere with operations. For example, a patch applied to the host can break an application running in a container. Even more importantly, in situations such as multitenant environments, the shared kernel model can open up a system to application vulnerabilities and cross-container attacks.

The Hyper-V container addresses these issues by providing a VM in which to run the Windows container. In this way, the container no longer shares the host machine’s kernel or has an OS/patch dependency with that machine. Of course, taking this approach means sacrificing some of the speed and packing efficiency you get with the basic Windows Server container, but you gain by having a more isolated and secure environment.

Regardless of the type of container you implement, you now have a way to use containers with Windows technologies such as .NET or PowerShell, something that was not possible before. The Windows container provides everything you need to implement your application on any machine running Windows Server 2016, giving you a level of portability that for most of Windows’ history has been unavailable. You can create your containers locally, make them available for testing and QA, and then send them off to the production team, without having to worry about complex installations and configurations every step of the way.

Inside the world of Windows containers

A number of components go into creating and implementing containers, starting with a host on which to operate the containers. The host can be a physical computer or a VM running Windows Server 2016, as long as the Windows container feature is enabled.

You can host containers on either the Windows Server Full UI edition or on the Core edition, which is the one installed by default. Microsoft is also introducing the Windows Server 2016 Nano edition, a minimal headless version of the OS that includes no local GUI or console.

Microsoft has also added nested virtualization to Windows Server 2016 so you can run Hyper-V containers even when the host is itself a VM. If you plan to run this type of container, you must enable the Hyper-V feature on the host OS. Microsoft is also adding container support to Windows 10, though only for Hyper-V containers. (The container feature is currently available in Windows 10 Insider builds, version 14352 and up.)
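For instance, on a Windows Server host, enabling both features from an elevated PowerShell session might look like the following sketch (these are standard Windows Server cmdlets, but the exact steps can vary from preview build to preview build):

    # Enable the container feature on the host.
    Install-WindowsFeature -Name Containers

    # If the host will run Hyper-V containers, enable Hyper-V as well.
    Install-WindowsFeature -Name Hyper-V -IncludeManagementTools

    # Both features require a reboot to take effect.
    Restart-Computer -Force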

As with other Docker containers, you deploy Windows containers from images. Each image starts with the container OS image, a base image that includes the OS that will run inside the container. Microsoft currently provides two base images: the Server Core image and the Nano Server image. You must download at least one of these OS images from Microsoft before you can deploy a container.
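The delivery mechanism has changed between previews, so treat the following as a sketch of the current TP5-era approach, in which the base images come down through a PowerShell package provider:

    # Install the package provider that carries the container base images,
    # then download the Server Core base image (Nano Server works the same way).
    Install-PackageProvider -Name ContainerImage -Force
    Install-ContainerImage -Name WindowsServerCore

    # Restart the Docker service so the engine picks up the new image.
    Restart-Service docker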

Microsoft restricts which image you can use with each container type, based on the host OS, as outlined in the following table.

Host OS                  | Windows Server Container | Hyper-V Container
Windows Server Full UI   | Server Core image        | Nano Server image
Windows Server Core      | Server Core image        | Nano Server image
Windows Server Nano      | Nano Server image        | Nano Server image
Windows 10               | N/A                      | Nano Server image

As you can see, Hyper-V containers currently support only the Nano Server image, but your choice for Windows Server containers depends on which edition of Windows Server you’re running.

For Windows Server containers, the OS image must also match the host system with regard to build and patch level. A mismatch can result in unpredictable behavior for either the container or the host, which means you must update the container base OS image whenever you update the host OS. It also means you won’t be able to run a Linux-based container on a Windows-based machine, or vice versa, although that particular limitation applies to Hyper-V containers as well.
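If you’re not sure whether host and image line up, you can read the host’s build string from the registry and compare it against the version tag on the base image:

    # Read the host OS build string (for example, 14300.rs1_release...).
    (Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion').BuildLabEx

    # List the local images; the base image's tag carries its build number.
    docker images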

Images provide a high degree of flexibility when it comes to deploying containers. You can create images based on an existing image and update the new images as often as necessary. You can then deploy one or more containers from that image.

For example, suppose you create an image based on the Server Core image. Into the new image, you install an application currently in development, along with any application dependencies. You can then deploy one or more containers from the image. Each container acts as a sandbox that includes all the components necessary for the application to be fully functional.

An image can be deployed as often as necessary and be shared by any number of containers. You create the containers as you need them and then dispose of them when you’re done. Best of all, you can update and redeploy an image at any time and then create new containers that contain the latest modifications.
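Here’s a minimal sketch of that lifecycle using the Docker tooling. The application, image name, and paths are hypothetical, and the local base image name reflects the current previews:

    # Define the new image as code: a Dockerfile layered on the Server Core base image.
    'FROM windowsservercore',
    'COPY app/ C:/app/',
    'CMD ["C:\\app\\myapp.exe"]' |
        Set-Content -Encoding Ascii Dockerfile

    # Build the image, then deploy a container from it.
    docker build -t myapp:dev .
    docker run -d --name myapp-test myapp:dev

    # Dispose of the container when you're done with it...
    docker rm -f myapp-test

    # ...then rebuild and redeploy whenever the image needs updating.
    docker build -t myapp:dev .
    docker run -d --name myapp-test myapp:dev

Writing the Dockerfile as plain ASCII is deliberate: a byte-order mark at the top of the file can trip up the engine’s parser.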

You do not need to choose the container type (Windows Server or Hyper-V) until you’re ready to implement the actual container. The container type has no bearing on how you assemble your images. The images are stored in a repository and are available on demand for deploying containers wherever and whenever they’re needed, whether Windows Server or Hyper-V containers.
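In recent preview builds, that late-binding choice surfaces as an isolation flag on the Docker client (the flag’s syntax could still change before release), so the hypothetical image from the earlier sketch can be deployed either way:

    # Deploy as a Windows Server container (shared kernel; the default).
    docker run -d --name svc-shared myapp:dev

    # Deploy the same image as a Hyper-V container (kernel-isolated).
    docker run -d --name svc-isolated --isolation=hyperv myapp:dev

Because the isolation level is just a deployment-time flag, the image repository stays agnostic about container type.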

To help automate container management for both Windows Server and Hyper-V containers, Microsoft has been providing a PowerShell module in the Windows Server 2016 technical previews. Such a module can also be useful for integrating containers with native tools. However, Microsoft recently announced that it would be deprecating this module and replacing it with a new one that builds directly on top of the Docker engine’s REST interface, not a surprising move given the pivotal role Docker plays in the unfolding container drama.
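For reference, the preview module’s flow has looked roughly like this; these cmdlet names come from the technical previews and presumably won’t survive the move to the Docker-backed replacement:

    # List the container base images installed on the host (preview module).
    Get-ContainerImage

    # Create a container from a base image, then start it.
    New-Container -Name demo -ContainerImageName WindowsServerCore
    Start-Container -Name demo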

The Docker connection

In addition to being a company, Docker is also an open source project that facilitates the process of deploying and managing containers. Windows containers are now part of that project, with Docker working to fully integrate Windows containers into the Docker ecosystem. As part of this initiative, Docker is now offering Docker Engine for Windows and Docker Client for Windows.

The Docker engine provides the functionality necessary to manage your Docker environment. For example, the engine makes it possible to automate the creation of container images. Although you can create images manually, the engine offers a number of benefits, such as the ability to store image definitions as code, easily re-create those images, and incorporate them into a continuous integration cycle.

However, the Docker engine is not part of the Windows installation. You must download, install, and configure the engine separately from Windows. The engine runs as a Windows service, which you can configure using the engine’s configuration file or the Windows Service Control Manager (SCM). For example, you can set the default debug and log options or configure how the engine accepts network requests. Microsoft recommends using the configuration file rather than SCM, but notes that not every option within the file applies to Windows containers.
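As a sketch, turning on debug logging and remote TCP access means dropping a daemon.json file into the engine’s documented configuration directory and restarting the service (which keys apply can vary by build):

    # Ensure the engine's configuration directory exists.
    New-Item -ItemType Directory -Force C:\ProgramData\docker\config | Out-Null

    # Write a minimal configuration file: debug logging plus remote
    # access on Docker's conventional unsecured TCP port.
    '{ "debug": true, "hosts": ["tcp://0.0.0.0:2375"] }' |
        Set-Content -Encoding Ascii C:\ProgramData\docker\config\daemon.json

    # Restart the Docker service so the engine picks up the settings.
    Restart-Service docker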

The Docker engine essentially does all the container-management grunt work for you, while exposing the API necessary for the Docker client to interface with the engine. The client is a command-line interface that provides a set of commands for managing images and containers. These are the same commands that allow you to create and run Docker containers on Linux. Although you cannot run a Windows container on Linux or a Linux container on Windows, you can use the same client to manage both Linux and Windows containers, whether Windows Server or Hyper-V containers.

As with the Docker engine, you must download and install the Docker client yourself. The client can run on either Windows 10 or Windows Server 2016; you need only point it at the Docker service to take control of the engine. Keep in mind, though, that the Docker for Windows components are still in preview and not yet feature-complete. Feature parity should be coming soon, along with the new PowerShell container module, which will also let you manage your Windows containers.
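Pointing the client at an engine is simply a matter of supplying the engine’s address, either per command or once per session (containerhost is a placeholder for your own container host):

    # Address a remote engine explicitly on a single command...
    docker -H tcp://containerhost:2375 info

    # ...or set it once for the session via the standard environment variable.
    $env:DOCKER_HOST = 'tcp://containerhost:2375'
    docker images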

A world of Windows

Microsoft and Docker still have plenty of work left to do before Windows containers are fully functional, but what we’ve seen so far represents a significant step forward. Windows users will finally get to take advantage of the benefits, flexibility, and portability that containers have been offering the Linux world for the better part of a decade. Given the extent to which Docker has taken hold in recent years, the Windows-Docker integration makes the picture even brighter, especially for those who want to work with both Linux and Windows containers.

The degree to which Docker will rock the Windows world is yet to be seen, and there’s no telling whether Windows containers will bring diehard Linux fans over to the dark side. But for those already invested in the Windows ecosystem, containers could prove a big boon, and they could play a significant role in convincing organizations to upgrade to Windows Server 2016, a factor Microsoft has no doubt been considering since the first Docker sonic boom.