In this post we want to explore the main ingredients of working with the Docker engine to containerize your applications. If you need to know what Docker is, take a look at my post or the official website.

What this post will cover

  • Building blocks of the Docker engine
  • Working with the building blocks

What this post will not cover

  • Details on Docker Hub
  • Details on the internal workings of the Docker engine
  • Advanced features like Compose, networks, volumes, or swarms

The main building blocks for working with docker are the following three:

  1. Docker files
  2. Docker images
  3. Docker containers

In short, the Docker file describes how to build an image, and an image is needed to create and start a container. You can think of it like building a program in general: the Docker file resembles the source code, the Docker image is the compiled binary, and the container is the running, in-memory representation of the application.

Docker images can also be built on top of each other, so preparing an image for an application is quite simple. This is done by using a base image (like ASP.NET Core) and extending it with a custom image for your application.

As with compiled binaries, from one single image you can create multiple (identical) container instances that are all isolated from each other. Images are immutable (like binaries); you have to “recompile” them to change the outcome. Containers, on the other hand, can be changed at runtime, and you can create new images from those changes.

As mentioned above, you create a Docker image from a Docker file with the docker build command. To create a container from an image you use the docker create command. You then start that container with docker start and stop it with docker stop.
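As a sketch, the whole lifecycle looks like this (the image and container names are placeholders):

```shell
docker build -t mycompany/myapp .            # Dockerfile -> image
docker create --name myapp mycompany/myapp   # image -> container
docker start myapp                           # run the container
docker stop myapp                            # stop it again
```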

This all sounds simple enough. Yet it can become quite complex for applications with lots of different configurations and parts. This can also be a time consuming process.

Instead of the docker create/docker start commands you can also simply use the docker run command, which does those two steps in one. Sometimes it still makes sense to use create/start, for example if you need to add the container to an SDN (software-defined network, a feature of Docker), attach a volume, or apply any other manual configuration before the container starts.
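For a simple case without such extra configuration, a single run command could look like this (names are placeholders):

```shell
docker run --name myapp -p 3000:80 mycompany/myapp
```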

Now we want to look closer at how those building blocks work on their own.

Working with Docker files

Docker files are made up of the instructions to create a Docker image and ultimately a container. This is where the power of Docker comes into play, because a Docker file allows for individual composition of an application with different commands. We will look at a simple definition for an ASP.NET Core app. For this we assume we have an MVC application called MyApp, created with the dotnet template, which only contains an Index view and a simple HomeController.
In the root directory of your application create a file conventionally called Dockerfile with the following contents:
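A minimal sketch of such a Dockerfile; the publish path assumes the default dotnet publish output for a netcoreapp2.1 project:

```dockerfile
FROM microsoft/aspnetcore:2.1
# assumes the default Release publish output path of the template
COPY bin/Release/netcoreapp2.1/publish/ /app
WORKDIR /app
EXPOSE 80
ENTRYPOINT ["dotnet", "MyApp.dll"]
```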

The FROM command simply defines which base image to build on. In this case it is the microsoft/aspnetcore:2.1 image, where :2.1 is the tag that specifies the exact version of the image.
COPY works like the Linux cp <source> <destination> command and copies the binaries built by dotnet publish into the app folder.
WORKDIR sets the working directory inside the container's isolated file system; subsequent commands run relative to it.
EXPOSE makes it possible to map a port for the container; in this case the container accepts TCP requests on its port 80.
The ENTRYPOINT command describes the application that should be run in this container, which in this case is the MyApp application executed with the dotnet runtime.

To prepare the app for the image go to your application directory and execute the following commands from the command line:
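Assuming the default template layout, the commands could be:

```shell
dotnet restore
dotnet publish -c Release
```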

The most important one is the publish command, which compiles the app so that it can be used in Linux Docker containers.

With this done you can build an image from the Dockerfile.
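For example (the tag is a placeholder following the <company>/<application> convention):

```shell
docker build -t mycompany/myapp -f Dockerfile .
```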

. sets the build context to the current directory
-t adds a tag to the image, which is conventionally <company>/<application>
-f points to the Dockerfile to use

These files can get pretty complex and can contain a lot more, but you get the basic idea from this example.

Working with Images

As mentioned above, images are templates that are used to create containers. To get an image you have two options: pull an existing one from a repository (like Docker Hub, or a private one) or create an image from your own Docker file.

  1. Pulling from a repository
    To get ready-to-use images for creating containers you simply use the docker pull command:
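    The general form is:

    ```shell
    docker pull <imageName>:<tag>
    ```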

    Where the <tag> can be omitted and then defaults to latest.
  2. Creating your own image
    Build one from your own Docker file (see Working with Docker files above).

To list all images you use
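The command is simply:

```shell
docker images
```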

Deleting images
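Deletion is done with docker rmi; combined with the image list it can remove everything at once:

```shell
docker rmi $(docker images -q)
```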

(removes all images in list)

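Listing images with the -q (quiet) flag:

```shell
docker images -q
```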
This gives only the image IDs.
The -f argument forces removal of images even if they are used by containers.

Working with containers

Containers bring images to life. The hosting OS can run multiple instances of one image as isolated containers.
If you already have an image you can create a container by using
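For example (container and image names are placeholders):

```shell
docker create --name myapp_instance -p 3000:80 mycompany/myapp
```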

We applied the --name argument for easier handling of the container in subsequent commands.
The -p 3000:80 argument maps OS port 3000 to container port 80, where the app listens for HTTP requests.
The final argument tells the Docker daemon which image to take as a template for the container.

Now if you want a second instance of this image you simply create a new container from the same image, but specify another name and network port like so:
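For example:

```shell
docker create --name myapp_instance2 -p 3001:80 mycompany/myapp
```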

Listing containers

To find all containers currently in use you can utilize
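The command is:

```shell
docker ps -a
```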

This shows the names, IDs, and status of all containers. If you simply want to see the running containers you can also use:
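The plain ps subcommand lists only running containers:

```shell
docker ps
```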

Starting containers
Use docker start <containerName> to start a container and docker stop <containerName> to stop it again.

Getting container output

By default docker does not display output from containers, which can be desirable. Although it does not show the output directly, docker stores the output in a log file per container.
These log files can be inspected by using
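The command takes the container name or ID:

```shell
docker logs <containerName>
```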

E.g. ASP.NET Core applications by default write out a message for each HTTP request they receive. So a log could look like this:
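An illustrative excerpt (the exact format depends on the configured logger and framework version):

```
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[1]
      Request starting HTTP/1.1 GET http://localhost:3000/
info: Microsoft.AspNetCore.Hosting.Internal.WebHost[2]
      Request finished in 5.8ms 200 text/html; charset=utf-8
```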

For running containers you can use the -f argument to follow incoming messages; for stopped containers you see the latest messages.

Modifying containers

As mentioned above, images, like binaries, are immutable, yet containers are not. So you can create two containers from the same image which will initially have identical files, but during the application lifetime those independent file systems will diverge.
You can also deliberately change a container and then create a new image from this changed container, which can be quite useful for manual configuration tasks and the like.

To change a container this way you start it from any given image. Then you can, for example, copy a configuration file into the container, like so:
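A sketch, assuming a running container named myapp_instance:

```shell
docker cp myconfigfile.config myapp_instance:/app
```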

This copies myconfigfile.config into the app directory of the container created in the previous section from this post's image. Note the colon between the name of the running container and the directory you want to copy to.

The docker cp command can also be used to copy files from the container to the outside like so:
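For example, copying the same file back out of a container named myapp_instance into the current directory:

```shell
docker cp myapp_instance:/app/myconfigfile.config .
```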

After modifying the container you can then run
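The diff subcommand takes the container name:

```shell
docker diff <containerName>
```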

which in turn shows you the changes with prefixed annotations:

C /app
A /root/.aspnet
...
A stands for a file or directory added to the container, and C stands for a changed file or directory. There is also D for deleted directories or files.

Executing commands in containers

You can also interact directly with the container by executing commands in the container:
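The general form, here with a hypothetical ls example:

```shell
docker exec <containerName> ls /app
```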

You specify the name of the container in which you want to run the command, then the name of the command and all the arguments it requires.
You can even start a shell in a container:
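For example, assuming the image ships with bash (Alpine-based images may only have sh):

```shell
docker exec -it <containerName> /bin/bash
```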

Now you can move around like in a normal Linux file system and use Linux commands to alter the container.
A common gotcha: Linux containers often use a slim version of Linux like Alpine Linux, which does not have any editors preinstalled.
To install one you can simply execute inside the container (this is for Ubuntu):
apt-get update; apt-get install vim
or for Alpine:
apk add --update nano

With
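```shell
# the container name and the "configured" tag are placeholders
docker commit <containerName> mycompany/myapp:configured
```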

you create a new image from the edited container. Then use docker images to see that the newly tagged image is now part of the list.

Summary

In this post we looked at the main ingredients of Docker, namely Docker files, Docker images, and Docker containers, how they are connected, and how one is used to create the next.

We also explicitly looked at how containers can be modified to create new images from them so you can capture runtime configurations.
