Working with Windows Containers and Docker: Into your Stride

So far, in this series, Nicolas has shown how to get simple container instances up and running with just some basic background information. Now we need to understand the differences between Linux containers, Windows Server containers and Hyper-V containers. We can then define, create and run multi-container Docker applications, and port existing Windows workloads from virtual machines to Docker.

In this article I’ll be explaining some of the differences between the types of Windows containers and how they work with Docker. To allow us to do more with Windows Containers and Docker, I’ll need to explain a couple of new techniques: using Docker-Compose to build a multi-container application, and using Image2Docker to port existing Windows application workloads from virtual machines to Docker images. I’ll then go on to explain and demonstrate Hyper-V isolation, and what you need in place before you can run a Hyper-V container on Windows Server.

Linux Containers on Windows: Bridging the Gap

Before going further with this article, I should demonstrate what happens if I try to run a Linux container on my Windows container host: the attempt fails because the Windows and Linux kernels are fundamentally different. We can’t currently run Linux-based containers on a Windows container host:
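
For example, trying to run a standard Linux image (ubuntu here is just an example) is refused with an error along these lines:

    docker run --rm ubuntu
    # Docker responds with an error similar to:
    # "image operating system "linux" cannot be used on this platform"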

This problem is being solved, and developers will soon be able to run Linux containers natively on Windows Server using Hyper-V isolation technology. Microsoft announced this during the DockerCon 2017 conference, which took place in Austin, Texas. When this capability is released, it will remove the need for separate infrastructures and development tools for the two operating systems. At the same event, the Docker team announced LinuxKit, a secure and portable Linux subsystem for containers. LinuxKit provides the tooling to build custom Linux subsystems that include just the components required by the runtime platform. The project was officially launched at the event, and Docker will be working with Microsoft to integrate the LinuxKit subsystem with Hyper-V isolation.

Docker-Compose

Although Docker provides us with a container platform that allows simple and fast deployment, the process of setting up a new environment can be time-consuming, especially if you have more than one service to deploy. Docker-Compose simplifies this to a single deployment command. It is a tool that greatly reduces the time and effort required to define and run multi-container Docker applications. With Docker-Compose, you use a docker-compose.yml file to configure your application’s services; then, with a single command, you create and start all the services from your configuration.

There are two steps to using Docker-Compose:

  • Define the services that make up your app in docker-compose.yml to be run together in an isolated environment
  • Run docker-compose up to start your entire app

You must install the Docker-Compose executable using this command:
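
A sketch of that installation, based on the approach documented by Docker; adjust the release number in the URL to the version you want:

    Invoke-WebRequest "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-Windows-x86_64.exe" `
        -UseBasicParsing -OutFile "$Env:ProgramFiles\Docker\docker-compose.exe"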

If you already have a Docker-Compose.yml file, then you just have to run the following:
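
From the folder that contains the docker-compose.yml file:

    docker-compose up -d
    # -d starts all the defined services in the background (detached mode)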

That’s it. Your App is deployed!

What’s happening here?

When I run the docker-compose command, Docker Compose will create a Windows container for each service defined in the docker-compose.yml file. Instead of writing one or more Dockerfiles, I can deploy, for example, an entire application made up of a web server and a database. Below is an example of a docker-compose.yml file, written in YAML:
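
Here is a minimal sketch of such a file, matching the description that follows; the web image name and the connection-string variable are placeholders for my own registry and website:

    version: '3'

    services:
      db:
        image: microsoft/mssql-server-windows-express
        environment:
          - sa_password=Password1
          - ACCEPT_EULA=Y
        ports:
          - "1433:1433"

      web:
        image: myregistry/my-iis-website    # placeholder for the custom IIS image
        environment:
          - DB_CONNECTION_STRING=Server=db;Database=MyApp;User Id=sa;Password=Password1
        ports:
          - "5000:5000"

    networks:
      default:
        external:
          name: nat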

The “services:” section defines two services: “db” and “web”. The “db” service is based on Microsoft’s SQL Server Express image. This service includes some parameters:

  • The password for the SA account is set to “Password1”
  • Port 1433 on the host is mapped to the exposed port 1433 in the container

Next, the second service, named “web”, is built from my own repository, using my custom IIS image which contains a custom website. An environment variable then defines where the database is and how to connect to it. Finally, port 5000 on the host is mapped to the exposed port 5000 in the container. Both services are attached to an existing network named nat.

Docker-Compose is a particularly good way of managing multi-container applications that combine a database with a web front end.

Image2Docker

It is difficult to migrate apps out of Virtual Machines, especially distributed apps with multiple components. Image2Docker may be the simplest way of getting your older applications working on newer operating systems.

What is Image2Docker?

Image2Docker is a PowerShell module that ports existing Windows application workloads from virtual machines to Docker images. It supports multiple application types, but the initial focus is on IIS. You can use Image2Docker to extract ASP.NET websites from a VM, so you can then run them in a Docker container with no application changes. You will need Windows Server 2016 or Windows 10 in order to use Image2Docker.

How does it work?

Image2Docker first inspects the artifacts in a Windows Server 2003, 2008, 2012 or 2016 VM image, in WIM, VHD or VHDX format. It then extracts either the entire VM or specific artifacts from the image, and generates a Dockerfile which you can build into a Docker image. The PowerShell module requires PowerShell 5.0 or later.

I will now describe the steps that are needed to extract IIS artifacts. In the screenshot below, I have deployed a Windows Server 2016 virtual machine named “IIS01” with the IIS role installed. This VM hosts two IIS websites:

First, install the Image2Docker PowerShell module on your container host:
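
A sketch, assuming you install the module from the PowerShell Gallery:

    # Install and load the Image2Docker module from the PowerShell Gallery
    Install-Module -Name Image2Docker -Force
    Import-Module -Name Image2Docker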

Next, you can use the ConvertTo-Dockerfile cmdlet. This cmdlet scans your source image (e.g. a VHDX or WIM file) to discover the artifacts it contains. Image2Docker currently supports discovery of the following artifacts:

Before scanning your image, you must power off the virtual machine. Then, to scan the image, you just need to call the ConvertTo-Dockerfile cmdlet and specify the -ImagePath parameter, which points to the VHDX file. The output folder will contain the generated Dockerfile.
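
A sketch; the VHDX path and the output folder are placeholders for your own environment:

    ConvertTo-Dockerfile -ImagePath 'C:\VMs\IIS01.vhdx' `
                         -Artifact IIS `
                         -OutputPath 'C:\i2d\IIS01'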

You can also extract a single website from an IIS virtual machine using the -ArtifactParam parameter followed by the IIS website name:
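
For example, to extract just one of the two websites (the website name and paths below are placeholders):

    ConvertTo-Dockerfile -ImagePath 'C:\VMs\IIS01.vhdx' `
                         -Artifact IIS `
                         -ArtifactParam 'MyWebSite' `
                         -OutputPath 'C:\i2d\MyWebSite'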

Now, you can go to the output folder, and you’ll notice that a Dockerfile has been created.
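
From there you can build the image; the tag is just an example:

    cd C:\i2d\MyWebSite
    docker build -t mywebsite:v1 .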

If you only have a VMDK file, you can use the Microsoft Virtual Machine Converter Tool to convert VMDK images to VHD images.
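
A sketch, assuming the Microsoft Virtual Machine Converter 3.0 cmdlets are installed in their default location; the paths are placeholders:

    Import-Module 'C:\Program Files\Microsoft Virtual Machine Converter\MvmcCmdlet.psd1'
    ConvertTo-MvmcVirtualHardDisk -SourceLiteralPath 'C:\VMs\IIS01.vmdk' `
                                  -DestinationLiteralPath 'C:\VMs' `
                                  -VhdType DynamicHardDisk -VhdFormat Vhdx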

Hyper-V Isolation

When you are running on Windows 10, you can only work with Hyper-V containers, but when you are running on Windows Server 2016, you can choose between Hyper-V and Windows Server containers. By default, when you use the docker run command, it will start a Windows Server container, but you can specify that you want to run a Hyper-V container by using the --isolation=hyperv parameter.

Before running a Hyper-V container, you must install the Hyper-V role on your container host:
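
On Windows Server 2016, a single PowerShell command installs the role:

    Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart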

When running the previous command, you will probably get the following error message if your container host is a virtual machine:

It means that Nested Virtualization is not enabled on the system. Nested Virtualization allows you to run a hypervisor inside a virtual machine that is itself running on a hypervisor. To enable Nested Virtualization in Hyper-V, you must first shut down the container host virtual machine and then run the following PowerShell command on the physical Hyper-V host:
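
A sketch; replace the VM name with the name of your container host VM:

    Set-VMProcessor -VMName 'ContainerHost01' -ExposeVirtualizationExtensions $true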

This feature is currently Intel-only: Intel VT-x is required. Once the Hyper-V role is installed, you can run your first Hyper-V container using the following script on your container host:
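
A sketch matching the description in the next paragraph; microsoft/nanoserver was the Nano Server image available at the time of writing:

    docker run -it --name MyNanoHYPV --isolation=hyperv microsoft/nanoserver cmd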

In this example, I deploy a Hyper-V container named “MyNanoHYPV”, based on the Nano Server image from Microsoft, by using the --isolation=hyperv parameter.

As you can see, the Hyper-V container boots in seconds; much faster than a virtual machine. To confirm that you are running a Hyper-V container, you can run a very simple check. Open the PowerShell console and use the Get-Process cmdlet to list the process named “vmwp”, which is the Hyper-V Virtual Machine Worker Process:
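
For example:

    Get-Process -Name vmwp
    # vmwp.exe is the Virtual Machine Worker Process; Hyper-V starts one
    # instance for each running VM or Hyper-V container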

We can see from this result that there are two processes running on the container host. Now, just type the “exit” command inside your Hyper-V container and rerun the previous command:

There is now only one process. Why? A Hyper-V container is like a virtual machine in some ways but different in others. When you run a Hyper-V container, Windows creates what looks like a VM, but it isn’t actually a virtual machine: it’s a Hyper-V container! Of course, you can’t see this VM in Hyper-V Manager; the only thing you can see is the worker process. A Hyper-V container provides kernel-mode isolation instead of user-mode isolation.

Ok, now let’s examine another example to understand the difference between Windows Server containers and Hyper-V containers. Run the following command:
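
A sketch of such a command; the container name is just an example:

    docker run -d --name MyNanoWSC microsoft/nanoserver ping -t localhost
    # -d runs the container in the background; ping -t pings indefinitely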

This command deploys a Windows Server container based on the Nano Server image and runs a permanent ping inside it. Now, use the docker top command to display the list of processes running inside this container:
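
A sketch of both checks, assuming the container was named MyNanoWSC as above:

    docker top MyNanoWSC
    # Then, on the container host itself:
    Get-Process -Name PING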

We can see that the last process, called “PING.EXE”, corresponds to the ping command inside the container. But if you run the Get-Process cmdlet on the container host, you will notice that the same process exists there, with the same PID! This means that our Windows Server container uses the kernel resources of the container host. OK, now stop this container with the docker stop command and do the same thing again, this time appending the --isolation=hyperv parameter:
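
For example (again, the new container name is just a placeholder):

    docker stop MyNanoWSC
    docker run -d --name MyNanoHYPV2 --isolation=hyperv microsoft/nanoserver ping -t localhost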

And run the Docker Top command:
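
Using the names from the previous examples:

    docker top MyNanoHYPV2
    # Then check for the process on the container host:
    Get-Process -Name PING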

PowerShell reports an error: no PING process exists on the container host. This means that, due to Hyper-V isolation and in particular its kernel-mode isolation, kernel resources are not shared between the container host and Hyper-V containers.

For those of you who are getting the following error:

It means that the Hyper-V role is not installed on your system, so you can’t run Hyper-V containers. If you have trouble with a specific container, you can use the docker logs command to troubleshoot. The docker logs command shows information logged by a running container. The information that is logged, and the format of the log, depends almost entirely on the container’s endpoint command:
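
For example, to display the output from the Hyper-V container started earlier:

    docker logs MyNanoHYPV2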

Conclusion

In this part, we discussed Docker-Compose. To build a multi-container application, Docker has developed Docker-Compose, which makes it easier to configure and run applications made up of multiple containers. Docker-Compose starts all the required containers with a single command.

Next, we used Image2Docker, which allows you to take a virtualized web server in a Hyper-V VM and extract a Docker image for each website in the virtual machine. It looks at the disk for known artifacts, compiles a list of what is installed on the VM, and generates a Dockerfile to package the artifacts.

Finally, we described the Hyper-V container concept. Hyper-V containers provide kernel-mode isolation instead of user-mode isolation: each container instance runs inside an automatically generated, lightweight Hyper-V VM.