Windows Containers: Understanding Docker Build Files, Networks, and Repositories


In Part 1 we looked at the basics of Docker: the core concepts and how to deploy your first container using an image from the Docker repository.

If you haven’t already, I highly recommend you read through Part 1 and complete the steps outlined in that post prior to continuing here as it will make far more sense.

In Part 2, we will dive into the nuts and bolts of creating a custom build file, capturing an image and pushing it to a repository so it can be deployed to other hosts, and connecting your container to the network.

 

Install Docker on Windows Server 2016

In Part 1 we used Docker installed on a Windows 10 laptop to deploy our first container. 

For Part 2, we’ll be using both Windows 10 and Windows Server 2016 Server Core, as in the real world we won’t be hosting production workloads and services on Windows 10.

My Docker host for this post is Windows Server 2016 Datacentre Core running as a Virtual Machine on Hyper-V 2016, with the virtualisation extensions enabled on the VM. Exposing the extensions is a requirement if you plan to use the Hyper-V isolation option for Docker. We won’t be using it in this post, but we will in later instalments, so we’ll do the configuration now.

You can download a copy of Windows Server 2016 in VHDX or ISO format from the Microsoft Evaluation Centre. In my case, I’ve used the ISO to install Windows Server 2016 Datacentre Core Evaluation.

  1. Create the Virtual Machine in Hyper-V. You’ll need to assign the memory statically to enable the virtualisation extensions in the next step. In my case I’ve gone for a single CPU with 2 GB of RAM. I’ve allocated the standard 127GB for the OS Disk which is plenty for what we’re planning. Finally, I’ve selected the option to install from an ISO image (which I downloaded from the evaluation centre).
  2. Before you power on the VM, open a PowerShell window and run the following command to expose the virtualisation extensions to the Virtual Machine. The VM must be powered off when you do this, and you cannot use Dynamic Memory:
    Set-VMProcessor -VMName docker01 -ExposeVirtualizationExtensions $true
  3. Power on the virtual machine and complete the installation process. For testing scenarios, I tend to disable the Windows Firewall and enable WinRM.
  4. Open PowerShell and run the following commands to install the Docker provider and engine (we’ll verify the installation after the reboot, as shown below):
    Install-Module -Name DockerMsftProvider -Repository PSGallery -Force
    Install-Package -Name docker -ProviderName DockerMsftProvider
    Restart-Computer -Force
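
Once the server comes back up, it’s worth a quick sanity check that everything installed correctly. A minimal sketch, assuming the default installation above:

# Confirm the Docker service is installed and running
Get-Service docker

# Confirm the client can reach the engine and report both versions
docker version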

 

Networks

Let’s talk a little bit about networking in Docker. I’m not going to head down into the weeds just yet, but it is important that we understand the fundamentals of how Docker networking works; otherwise we’ll have these awesome lightweight deployable application containers that no one can connect to.

 

 

Windows container networking uses the Hyper-V Virtual Switch, host network interfaces and the WinNAT service to provide connectivity to the outside world. The following network types are available:

  • NAT – containers attached to a network created with the 'nat' driver will receive an IP address from the user-specified (--subnet) IP prefix. Port forwarding is configured at runtime to map services from the host IP address to the container; alternatively, the NAT address can be used from the container host itself to access the service. Using NAT is recommended for testing purposes only (see the port-forwarding sketch after this list).
  • Transparent – the transparent network driver provides the container with direct connectivity to the host’s physical network and is akin to the External Virtual Switch in Hyper-V. The container will receive an IP address from a DHCP server on the same subnet as the host OS (unless one is assigned statically).  This is also referred to as Bridged in some documentation.
  • Overlay – overlay networks use the Windows SDN stack to communicate with containers in the same network spread across multiple hosts. This network mode is used in Swarm mode, which we’ll cover in a later post.
  • L2 Bridge – the Layer 2 bridge network uses the same subnet as the container hosts to communicate with containers on the same subnet across physical hosts. In this mode, containers share the same MAC address as the host unless traversing a subnet boundary, where MAC addresses are re-written and routed by the SDN stack.  Kubernetes, for example, uses L2 Bridge networking.
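
As a quick illustration of the NAT driver described above, here’s a minimal sketch that publishes a container port to the host at runtime. The container name and image are just examples, and it uses the default 'nat' network because older Windows builds only support a single NAT network per host:

# Run a container on the default 'nat' network and map host port 8080
# to port 80 inside the container
docker run -d --name natdemo01 --network nat -p 8080:80 microsoft/iis

# The published service is then reachable on the container host's IP
# address at port 8080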

In this blog post, we’ll be working with a single host using a transparent network. To create the network, issue the following docker commands in PowerShell:

  1. Locate the physical network adapter on your VM by running Get-NetAdapter. In my case I only have one, aptly named “Ethernet”.
  2. Create the Docker transparent network using the following command: docker network create -d transparent transp (the network name is a positional argument, and -d specifies the driver).

That’s it – we can now connect our Docker containers to the local network.
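
To confirm the network was created, or to bind it to a specific adapter on a multi-NIC host, the following sketch may help (the adapter-binding option comes from the Windows network driver and is worth verifying against your Docker version):

# List the Docker networks on the host and inspect the new one
docker network ls
docker network inspect transp

# On a host with multiple NICs, the transparent network can be bound to a
# specific adapter at creation time:
# docker network create -d transparent -o com.docker.network.windowsshim.interface="Ethernet" transp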

 

Container Registries and Repositories

At their core, registries give us a central repository that we can deploy images from, control our builds with, and ultimately use to share those builds with other team members for verification. In a nutshell, the distinct types of registries are defined as follows:

  • Local repositories – a local repository is on your laptop; it’s where any builds that we perform on our local device are stored. We would typically use this for the initial development of a containerised application.  We also store our platform-level images (Linux, Windows Server Core and Nano Server) here for use in local builds.
  • Docker Hub – Docker Hub is a free registry where we can host our repositories. It requires that we sign up for a Docker ID and create a repository, either public or private.  These work a lot like Git repositories: public repositories can be accessed by anyone, while private repositories require a user name and password.  We’ll be using a private repository for this blog post.
  • Azure Container Registry – Azure Container Registry provides a cloud-hosted, managed registry that can be used to host your images and share them within your enterprise.

Here are the basics regarding Registries and Repositories:

  • Naming and versioning – repositories, local and remote, work on a naming convention of “registry/repository:version”. When we’re working with container images, it’s important to understand that if we want to share these images for use on other systems, we need to name them correctly.  If you want to use version control, simply change the tag after the “:” to either a number or a distinctive name. Key point: if you want to use a remote repository, you must use a naming standard that matches that repository.  If you don’t, pushing the image will not be possible (see the tagging sketch after this list).
  • Push – Pushing an image allows us to build images on one docker host and push them to the repository for deployment to other container hosts, testing by peers, and other continuous integration and deployment tasks.
  • Pull – pulling an image is the process used to bring a container image down to the local container host; this applies to base images and custom builds. When pulling a Docker image from a repository, Docker will satisfy its own dependencies and download only what it needs to make the container function.  For example, if you have a build based on microsoft/nanoserver and the custom image in the repository is built from that particular base image, Docker will only pull the changes to the base image (provided you have the base image present on your local system).
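
As a minimal sketch of the naming convention in practice (the local image name below is just an example), an existing image can be re-tagged to match the remote repository and then pushed:

# Re-tag a local image so its name matches the remote repository
docker tag mylocalimage:latest themadfitz/blog:v1

# Push the correctly named image to the remote repository
docker push themadfitz/blog:v1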

 

Signing up for Docker Cloud and Creating a Registry

 

To create a Docker ID you can right-click the Docker icon and select Sign in from the list of options. From there you can open the link (cloud.docker.com) to sign up for an account.

 

Enter your details and confirm that you’re not a robot. You’ll then receive a confirmation email to verify your account. Click the link and you’re basically ready to go. You can create the repository from the web portal, or you can sign in to Docker locally and create it using the Docker tray application.

 

Clicking Create will open the Docker Cloud web site so you can log in, and you’ll find yourself at the Create Repository page.

 

Once you’ve verified your email address, you can then create the repository.  For this example, I’ll be using themadfitz/blog.  The last step is to log in to your new repository, which is as simple as opening PowerShell, typing docker login and entering your Docker Cloud credentials.  That’s it for this section, but we’ll be using this repository in later steps.
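
For reference, the login step looks like this (a minimal sketch; you’ll be prompted for the password when it isn’t supplied on the command line):

# Log in to the registry with your Docker ID
docker login -u themadfitz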

 

Build Files

Docker build files are the instructions we provide to the Docker daemon to create a new image based on changes made to a base or previous image.  The example I’ll use in this instance is the creation of a custom MySQL server image based on the latest version of the Windows Nano Server container image.

We’ll get into the details of the Docker build file we’ll be using to customise the base server image in a minute, but first a couple of prerequisite activities to keep everything running smoothly.

 

 

When creating Docker images, I like to set up a working folder structure: the Dockerfile we use for the build must sit in a folder alongside the prerequisites it references, so I keep one build folder per image with the Dockerfile and its artefacts side by side.  Where does this folder structure live? On my laptop. Just because we’re deploying the image to a Windows Server 2016 VM doesn’t mean we can’t build it locally and deploy it there later; in fact, that’s one of the biggest advantages of Docker.

For our MySQL Build we have the following items:

  • Dockerfile – this is our build file that contains the instructions required to build the image. When using Windows containers, this file has no extension and is placed in the same folder as your build artefacts.
  • mysql-5.7.20-winx64.zip – the binaries for MySQL (grab it here), placed in the build directory. Make sure you grab the x64 edition; Nano Server doesn’t include the 32-bit libraries.
  • mysql-init.ini – contains the post-configuration steps for MySQL (a sketch of its contents follows this list).
  • vcruntime140.dll – this is the Visual C++ Redistributable library, which cannot be installed using an MSI on Nano Server.  I put this in the image just in case we need it.
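
The exact contents of mysql-init.ini will depend on your requirements, but as a minimal sketch (assuming the Welcome123 root password used later in this post, and a root account that’s reachable from other hosts), it could be generated from PowerShell like this:

# A minimal sketch of mysql-init.ini, generated from PowerShell.
# Assumes the Welcome123 root password used later in this post.
@"
ALTER USER 'root'@'localhost' IDENTIFIED BY 'Welcome123';
CREATE USER 'root'@'%' IDENTIFIED BY 'Welcome123';
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;
FLUSH PRIVILEGES;
"@ | Set-Content -Path .\mysql-init.ini -Encoding Ascii

With the prerequisites in place, the complete Dockerfile looks like this:
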
# Build this image from the microsoft/nanoserver base image
FROM microsoft/nanoserver
# Copy the MySQL binaries from the build folder to the image
ADD mysql-5.7.20-winx64.zip /
# Set the MYSQL environment variable
ENV MYSQL C:\\MySQL
# Add the MySQL\bin directory to the existing Path environment variable
RUN setx /M PATH "%PATH%;C:\MySQL\bin"

# Use PowerShell to extract the MySQL Zip File
RUN C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -Command Expand-Archive -Path C:\mysql-5.7.20-winx64.zip -DestinationPath . -verbose

# Rename the default folder to C:\MySQL
RUN ren mysql-5.7.20-winx64 MySQL
# Create the Data folder under the C:\MySQL folder
RUN C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -Command mkdir -Path C:\MySQL\data
# Copy the mysql-init.ini file to the MySQL directory on the server
ADD mysql-init.ini /MySQL/mysql-init.ini
# Initialise the MySQL installation
RUN mysqld.exe --initialize-insecure --console --explicit_defaults_for_timestamp
# Install MySQL as a service
RUN mysqld.exe --install --verbose
# Start the MySQL Service
RUN C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -Command Start-Service *mysql* -verbose
# Run the following command to configure the mysql instance with the information contained in the init file
RUN mysql.exe -u root --skip-password < C:\MySQL\mysql-init.ini

 

If we’ve covered the prerequisites and downloaded the software that we need, we can jump to a PowerShell prompt and build the image based on the build file.

 

Putting it all together

In this section we’re going to put all of these pieces together to do the following:

  • Build an image based on the build file we walked through above
  • Push the image to the remote repository (themadfitz/blog) that we created earlier
  • Pull the image to our Windows Server 2016 Container VM
  • Deploy the container and connect to the MySQL instance using the MySQL workbench client.
  1. Open a PowerShell console as an administrator and head to the directory containing your build file and associated prerequisites.
  2. We’ll now run docker build -t themadfitz/blog:latest .
    What does this do?  The docker build command reads the Dockerfile in the directory to build the image; it also tags the resulting image as themadfitz/blog:latest; and the “.” at the end of the command tells Docker to use the current working directory as the build context.
  3. Once the above step is complete, we’ll have a local copy of the themadfitz/blog container image that’s ready to be used on other servers in our environment. Before we can do that we’ll need to push the image to our repository. Because we’ve named our container in line with our repository, we can issue the command: docker push themadfitz/blog:latest. The image will start uploading to the repository and once completed, be available to pull from the repository we created earlier on another machine.
  4. Log on to the Windows Server 2016 virtual machine we created earlier and open a PowerShell console.
  5. Run docker login and enter the credentials we created earlier – in my case they are themadfitz and my super complex password
  6. Now that we’re logged in, we can pull the image from the repository using the following command: docker pull themadfitz/blog:latest
  7. Now that we have a copy of the image locally, we can deploy it using the following command: docker run --name wincontpt2 --network transp --ip 172.26.2.66 -dit themadfitz/blog:latest
  8. At this point the instance is up and running, if you’re connected to your local network (in my case 172.26.2.0/24) you can ping the container and it will respond accordingly.
  9. Now we’ll connect to the MySQL instance using MySQL Workbench (note that you can use any MySQL client you like to connect). As part of this build, we set the root password to Welcome123 in the init file. Simply enter the connection details and click Test Connection, and you’ll have a working MySQL instance that you can connect to. The best part is that the application doesn’t even know it’s running in a container. A quick command-line verification sketch follows these steps.
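
If you’d rather verify from the command line, here’s a minimal sketch. It assumes the container name, IP address and root password used above, and that a MySQL client is installed on the machine you’re testing from:

# Check that the MySQL service is running inside the container
docker exec wincontpt2 C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -Command "Get-Service *mysql*"

# Connect to the containerised instance over the transparent network
mysql -h 172.26.2.66 -u root -pWelcome123 -e "SELECT VERSION();"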

 

Windows Containers Part 2: Wrap-up

In this post we worked through the key building blocks for deploying a Windows Containerised application. 

Repositories allow us to store, deploy and version control our base images, build files allow us to define the base container configuration and its applications, and finally configuring a transparent network allows us to deploy our container application to the network for consumption by other applications and services.

In summary, we completed the following:

  • Installed Docker on a Windows Server 2016 Virtual Machine
  • Configured a transparent (bridged) network to allow our containers to connect to the local network
  • Created a Docker Registry and Repository to store our custom images
  • Defined a build file for a basic MySQL server container based on the Nano server platform image and pushed it to the repository from our Windows 10 Laptop
  • Deployed the container to our Windows Server 2016 Container Host
  • Verified the MySQL container using MySQL Workbench.

That’s all for this post. Stay tuned for Part 3, where we will continue down the automation path and look at some of the more advanced build and configuration options available to us.

 
