In this post we want to explore the main ingredients of working with the Docker engine to containerize your applications. If you want to know what Docker is, take a look at my earlier post or the official website.
What this post will cover
- Building blocks of docker engine
- Working with the building blocks
What this post will not cover
- Details on Docker Hub
- Details on internal working of Docker engine
- Advanced features like compose, networks, volumes or swarms
The main building blocks for working with docker are the following three:
- Dockerfiles
- Docker images
- Docker containers
In short, the Dockerfile describes how to build an image, and an image is needed to create and start a container. You can think of it like building a program in general: the Dockerfile resembles the source code, the Docker image is the compiled binary, and the container is the in-memory representation of the running application.
Docker images can also be built on top of each other, so preparing an image for an application is quite simple. You start from a base image (like the ASP.NET Core image) and extend it with the custom image for your application.
As with compiled binaries, from a single image you can create multiple (identical) container instances that are all isolated from each other. Images are immutable (like binaries): you have to “recompile” them to change the outcome. Containers, on the other hand, can be changed at runtime, and you can create new images from those changes.
As mentioned above, you create a Docker image from a Dockerfile with the docker build command. To create a container from an image you use the docker create command. You then start that created container with docker start and end it with docker stop.
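As a minimal sketch of that lifecycle (the image and container names here are placeholders, and the commands assume a Dockerfile in the current directory):

```shell
# Build an image from the Dockerfile in the current directory
docker build -t myapp .

# Create a container from the image (this does not start it yet)
docker create --name myapp-instance myapp

# Start the created container, and stop it again later
docker start myapp-instance
docker stop myapp-instance
```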
This all sounds simple enough. Yet it can become quite complex for applications with lots of different configurations and parts. This can also be a time consuming process.
Instead of the docker create/docker start pair you can also simply use the docker run command, which does both in one step. Sometimes it still makes sense to use create/start separately, for example when you need to add the container to an SDN (software-defined network, a feature of Docker), attach a volume, or apply some other manual configuration before the container starts.
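For the common case, the two steps collapse into one. This sketch mirrors the create example used later in this post; -d detaches the container so it runs in the background:

```shell
# Create and start a container in one step, detached, with a port mapping
docker run -d -p 3000:80 --name mydockerapp3000 theiten/mydockerapp
```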
Now we want to look closer at how those building blocks work on their own.
Working with Docker files
Dockerfiles are made up of the instructions to create a Docker image and ultimately a container. This is where much of the power of Docker comes into play, because a Dockerfile allows for individual composition of an application from different commands. Let us look at a simple definition for an ASP.NET Core app. We assume an MVC application called MyApp, created from the dotnet template, which only contains an Index view and a simple HomeController.
In the root directory of your application create a file conventionally called Dockerfile with the following contents:
FROM microsoft/aspnetcore:2.1
COPY dist /app
WORKDIR /app
EXPOSE 80/tcp
ENTRYPOINT ["dotnet", "MyApp.dll"]
The FROM instruction defines which base image to build on. In this case it is the microsoft/aspnetcore:2.1 image, where :2.1 is the tag that specifies the exact version of the image.
COPY works much like the Linux cp &lt;source&gt; &lt;destination&gt; command and copies the binaries produced by dotnet publish into the /app folder of the image.
WORKDIR sets the working directory inside the container's isolated file system; subsequent instructions and the entry point are executed relative to it.
EXPOSE documents that the container listens on port 80/tcp, making it possible to map that container port to an OS port when the container is run.
The ENTRYPOINT instruction describes the application that should be run in this container, which in this case is the MyApp application executed with .NET Core.
To prepare the app for the image go to your application directory and execute the following commands from the command line:
dotnet restore; dotnet publish --framework netcoreapp2.1 --configuration Release --output dist
The most important one is the publish command, which compiles the app into the dist folder so it can be used in Linux Docker containers.
With this done you can build an Image from the Dockerfile.
docker build . -t theiten/mydockerapp -f Dockerfile
. sets the build context to the current directory
-t adds a tag to the image, conventionally in the form &lt;company&gt;/&lt;application&gt;
-f points to the Dockerfile to use
These files can get pretty complex and contain many more instructions, but this example gives you the basic idea.
Working with Images
As mentioned above images are templates that are used to create containers. To get an image you have two options, pull an existing one from a Repository (like the DockerHub, or a private one) or create an image from your own Docker file.
- Pulling from a Repository
To get ready-to-use images for creating containers you simply use
docker pull <image-name>:<tag>
where the &lt;tag&gt; can be omitted, in which case it defaults to latest.
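As a concrete sketch of the default-tag behavior, using the base image from the Dockerfile above:

```shell
# These two commands pull the same image: an omitted tag defaults to :latest
docker pull microsoft/aspnetcore:latest
docker pull microsoft/aspnetcore
```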
- Creating your own Image
Create a Dockerfile for your custom image as shown above (see Working with Docker files).
To list all images you use
docker images
Adding the -q argument gives only the image ids:
docker images -q
To remove all images in that list you can combine the two:
docker rmi -f $(docker images -q)
The -f argument removes images even if they are used by containers.
Working with containers
Containers bring images to life. The hosting OS can run multiple instances of one image as isolated containers.
If you already have an image you can create a container by using
docker create -p 3000:80 --name mydockerapp3000 theiten/mydockerapp
We applied the --name argument for easier handling of the container in subsequent commands.
The -p 3000:80 argument maps OS port 3000 to container port 80, so HTTP requests to port 3000 on the host reach the container.
The final argument tells the Docker daemon which image to take as a template for the container.
Now if you want a second instance of this image you simply create a new docker container from the same image but specify another name and network port like so:
docker create -p 3100:80 --name mydockerapp3100 theiten/mydockerapp
To find all containers currently in use you can utilize
docker ps -a
This shows the names, ids and status of all containers, including stopped ones. If you only want to see the running containers you can also use:
docker container list
Use docker start &lt;containerName&gt; to start a container and docker stop &lt;containerName&gt; to stop it again.
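Applied to the two containers created above, that looks like this:

```shell
# Start both instances created earlier, then stop one of them again
docker start mydockerapp3000
docker start mydockerapp3100
docker stop mydockerapp3100
```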
Getting container output
By default docker does not display output from containers, which is usually what you want. Although it does not show the output directly, docker stores the output in a log.
These logs can be inspected by using
docker logs <containername>
For example, an ASP.NET Core application by default writes out a message for each HTTP request it receives, so a log could look like this:
...
Hosting environment: Production
Content root path: /app
Now listening on: http://+:80
Application started. Press Ctrl+C to shut down.
info: Microsoft.AspNetCore.Hosting.Internal.WebHost
      Request starting HTTP/1.1 GET http://localhost:3000/
info: Microsoft.AspNetCore.Mvc.Internal.ControllerActionInvoker
      Executing action method ExampleApp.Controllers.HomeController.Index
...
For running containers you can add the -f argument to follow new messages as they arrive; for stopped containers, docker logs shows the latest messages.
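A short sketch of both variants, using the container created earlier:

```shell
# Stream new log output from a running container (Ctrl+C to detach)
docker logs -f mydockerapp3000

# Show only the most recent lines of a (possibly stopped) container
docker logs --tail 20 mydockerapp3000
```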
As mentioned above, images, like binaries, are immutable, but containers are not. You can create two containers from the same image that initially have identical file systems, yet during the application's lifetime those independent file systems will diverge.
You can also deliberately change a container to then create a new image from this changed container which can be quite useful for manual configuration tasks and such.
To change a container this way you start it from any given image. And then you can for example copy a configuration file into the container, like so:
docker cp ./myconfigfile.config mydockerapp3100:/app/
This copies the myconfigfile.config into the root directory of the app that I created in the previous section from the image of this post. Note the colon between the name of the running container and the directory you want to copy to.
The docker cp command can also be used to copy files from the container to the outside like so:
docker cp <containerID>:/path/to/dir/ /path/on/host/
After modifying the container you can then run
docker diff mydockerapp3100
which shows you the changes, each prefixed with an annotation:
A stands for a file or directory added to the container, C stands for a changed file or directory, and D for a deleted one.
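After copying the config file above, the output of docker diff might look like this (illustrative only, the exact entries depend on what changed in the container):

```
C /app
A /app/myconfigfile.config
```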
Executing commands in containers
You can also interact directly with the container by executing commands in the container:
docker exec mydockerapp3100 cat /app/Views/Home/Index.cshtml
You specify the container name in which you want to run the command, then the name of the command and all the arguments it requires.
You can even start a shell in a container:
docker exec -it mydockerapp3100 /bin/bash
Now you can navigate as in a normal Linux file system and use Linux commands to alter the container.
A common gotcha: Linux containers often use a slim Linux distribution like Alpine Linux, which does not have any editors preinstalled.
To install one you can simply execute the following inside the container (this is for Ubuntu/Debian-based images):
apt-get update; apt-get install vim;
or for alpine:
apk add --update nano
By running
docker commit mydockerapp3100 theiten/mydockerapp:edited
you create a new image from the edited container. Then use docker images to see that the newly tagged image is now also part of the list.
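To check that the committed image works, you can start a fresh container from the new tag (port 3200 here is an arbitrary free host port):

```shell
# Run a container from the committed image under its new tag
docker run -d -p 3200:80 --name mydockerapp3200 theiten/mydockerapp:edited
```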
In this post we looked at the main ingredients of Docker, namely Dockerfiles, Docker images and Docker containers, how they are connected, and how one is used to create the next.
We also looked explicitly at how containers can be modified to create new images from them, so you can capture runtime configuration.