In this post, I will focus on good practices by looking at how a proper repository is built. Indeed, my example in my previous post was a little bit trivial, and if you want to create your own images through a Dockerfile, you will surely bump into difficulties: how do I manage interactive installations that ask for user input during install? How should I configure my application after installation? And many others…
Trusted Builds: a good way to learn
When you commit your image to a local repository, or push it to a remote repository, you only push the built image, as a file. Trusted Builds are a mechanism to automatically build an image from its sources: the Docker index builds the image each time a commit is pushed to the public GitHub repository corresponding to the Docker image. This is a great way to study popular images and see how their maintainers manage the difficulties you can have with the settings of some images.
You can browse and search the official Docker index of repositories from the website, or interact with it on the command line with docker search and docker pull.
Let’s have a look at the mysql image of the tutum repository (provided by tutum.co), available here. As it is a Trusted Build, you have access to the GitHub page from which the image is built.
Erratum: the repository has changed since the date of this post. I leave the information available here, but you may find differences with the sources hosted on GitHub.
Let’s have a look at the Dockerfile, and the good practices it includes.
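Since the sources have changed since this post was written, here is only a sketch of what such a Dockerfile can look like; the tag, maintainer, package and file names below are illustrative assumptions, not the actual tutum sources:

```dockerfile
# Pin a tagged release of the base image for reproducible builds
FROM ubuntu:trusty

# Author of the image and a way to contact them
MAINTAINER Jane Doe <jane.doe@example.com>

# Install the server without any interactive prompt
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y mysql-server

# Ship a working default configuration and a startup script
ADD my.cnf /etc/mysql/conf.d/my.cnf
ADD run.sh /run.sh
RUN chmod +x /run.sh

# Expose the MySQL port and launch the wrapper script by default
EXPOSE 3306
CMD ["/run.sh"]
```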
The FROM instruction defines the base image this image is built upon.
You can see that the maintainer used a tag to define precisely which version of Ubuntu to use. You should always define a tagged version of your base image to precisely define which release of the distribution your image relies on.
The MAINTAINER instruction is a vital tag that defines the author of the image and a way to contact them.
You can add comments with the # character. You should always add comments to explain the goal of each block of instructions.
After each RUN instruction, the image is committed, and the following RUN instruction is executed on the newly committed image.
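The installation step in question was along these lines (a sketch only; the exact package list is an assumption):

```dockerfile
# Refresh the package index, then install without asking for confirmation
RUN apt-get update && \
    DEBIAN_FRONTEND=noninteractive apt-get install -y mysql-server
```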
We can notice several practices here:
No usage of apt-get upgrade: indeed, you just want to add to this layer what is needed for the container. If you want to upgrade the system, you should upgrade the base image, as it is its role to offer the system environment.
Avoid interaction: the RUN command is executed in a non-interactive way. As a consequence, you don’t want to be asked for confirmation when installing packages: you must use apt-get install -y <packages>. Moreover, some packages ask questions during installation about account creation, default configuration, etc. It is the case for mysql-server, for instance. That’s why the author sets the DEBIAN_FRONTEND variable to noninteractive, in order to inform the package manager that there won’t be any interaction during installation.
Finally, there is the apt-get update. It is the most contentious command. If you install your packages via apt-get, you have no choice but to update the index of packages before the install: you don’t want to have an outdated cache and broken links during installation. Nevertheless, it also means that the Dockerfile can create slightly different images depending on the date of the build. Indeed, the version of a package, or of one of its dependencies, may have changed in the repository. The only workaround is to install your packages via a direct link to a binary package, or from source. Anyway, the consequence can be neglected, as you don’t want to rebuild your image every day: you should build it once and use and share it for as long as you want to keep the same environment.
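This part of the Dockerfile pulls in the external files with the ADD instruction; a sketch (the file names and paths are assumptions) looks like this:

```dockerfile
# Working default configuration, which the user can override
ADD my.cnf /etc/mysql/conf.d/my.cnf

# Startup script that wraps the server launch and setup actions
ADD run.sh /run.sh
RUN chmod +x /run.sh
```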
The maintainer has added two types of external files here:
Configuration files: you want your image to be operational immediately after the build. You have to provide working default configurations that allow the package to be used in normal conditions. You must of course allow the user to override the configuration settings (by adding their own configuration files, or with environment variables).
Script files: as you want to automate all setup steps, it is a good idea to wrap your launcher inside scripts that can execute some checks and actions for you (database creation, account creation, etc.).
The Dockerfile defines the ports you want to expose to the host system, to access the service you will run in the container, with the EXPOSE instruction.
Even if you can tell on which host port you want to map the container port, it is good practice to let the Docker framework dynamically map the port to the host. Indeed, if you map the port yourself, you won’t be able to launch several containers from the same Dockerfile, as the first one will lock the port for itself.
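For a MySQL image, for instance, this simply exposes the server’s standard port (the number below is just MySQL’s default):

```dockerfile
# Expose MySQL's default port; let "docker run -P" pick the host port
EXPOSE 3306
```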
Finally, you can tell Docker which process it should execute by default when launching a container, with the CMD instruction. Here, as the maintainer wrapped the process in a script to automate account creation, the image launches the script instead of the process.
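Assuming the wrapper script is shipped as /run.sh (an illustrative path, not the actual tutum sources), the default command is declared like this:

```dockerfile
# Launch the wrapper script rather than the server binary directly
CMD ["/run.sh"]
```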
If you study the scripts, you may have noticed that the maintainer doesn’t launch the mysql server directly, but instead launches a process called supervisord. Supervisor is a process control system, a little bit like init, that allows you to manage the execution of several processes.
Indeed, I told you in my previous article that a Docker container can only run one job: there isn’t any running init instance to manage the lifecycle of several process executions in a Docker container. Nevertheless, you will certainly want to be able to manage several processes or jobs in the same container: for example, running an SSH server at the same time as another kind of server. You can use Supervisor to do that.
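A minimal sketch of such a Supervisor configuration, assuming you want an SSH daemon running alongside MySQL (the program names and paths are illustrative):

```ini
[supervisord]
; keep Supervisor in the foreground so it stays the container's main process
nodaemon=true

[program:sshd]
command=/usr/sbin/sshd -D

[program:mysqld]
command=/usr/bin/mysqld_safe
```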
Better, as explained in this article, you can use inheritance to include the Supervisor configuration files from your base image, to launch the services the base image already provides in parallel with the jobs you define in your own configuration.
Finally, the author of the image included a README.md explaining how to build, launch and configure the container. It is really handy, and you should always include one if you create your own Trusted Build. The README is displayed on the Docker index website when you look for the image.
You have seen how a popular Docker image is built. If you want to create your own Docker image, you should search for similar images in the Docker index and analyze their Dockerfiles. All Trusted Builds are available through their GitHub page, so it’s a really easy task.
I hope this post will help you to create great images for yourself and the community!