Deploying Websites with Docker

Depending on where you hang out, mentioning the virtualisation tool Docker[1] can get you very different responses. Some think it is an anti-trend in systems administration, destroying all that is good in nixland along with systemd and whatever else Poettering is doing. Others think that Docker is only useful if you are a Silicon Valley startup orchestrating thousands of service nodes on your path to unicorn status. While both of these viewpoints have some justification, I have found Docker to be incredibly useful for managing my services. In this post I will talk about how I use Docker in my toolchain to deploy this website. But first, a little intro to containers on Linux for the uninitiated.

What are Containers?

Linux containers are a relatively new development in OS virtualisation techniques. They utilize a couple of Linux kernel capabilities, namely namespaces and cgroups, to isolate processes from the host machine. Cgroups allow computing resources such as memory or CPU usage to be limited.[2] Namespaces are wrappers around a system resource, such as network information, that provide a virtual abstraction of it to a process.[3] These technologies were combined to form a container implementation known as Linux Containers (LXC). Compared to virtual machines such as KVM, containers are lighter weight: they rely on the host system's kernel and other resources instead of creating an entirely new system on top of the host.
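
For the curious, these kernel pieces can be poked at directly from a shell. The commands below are a rough sketch, assuming a cgroup v2 system with the util-linux tools installed; the cgroup path is only illustrative.

    # unshare(1) starts a process in new namespaces; with its own PID namespace
    # and a freshly mounted /proc, ps only sees processes inside that namespace:
    sudo unshare --fork --pid --mount-proc ps aux
    # lsns(8) lists the namespaces currently in use on the host:
    lsns --type pid
    # cgroup v2 exposes resource limits as plain files, e.g. a memory cap for a unit:
    cat /sys/fs/cgroup/system.slice/memory.max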

Docker started out as an abstraction for managing LXC containers; however, it has since developed its own container backend, libcontainer. Compared to standalone LXC, Docker makes container management easy. Docker images are built from a list of instructions called a Dockerfile. When building an image, Docker creates several image layers stacked on top of each other using a union file system.[4] This reduces disk usage and allows for faster build times, as only changes on top of the base image(s) need to be processed. Docker also provides tools for networking your containers together, methods to monitor your containers, and ways to easily deploy images across multiple hosts.
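
To make the layering concrete, here is a hypothetical minimal Dockerfile for a static site (the site/ directory is just an example path). Each instruction produces a layer, so rebuilding after a content change only recreates the final COPY layer while the base image is reused.

    # Base image layer(s), pulled once and shared between builds.
    FROM nginx:alpine
    # Only this layer is rebuilt when the site content changes.
    COPY site/ /usr/share/nginx/html/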

Docker treats containers slightly differently than other managers. Its containers are designed to be ephemeral, that is, disposable and rebuilt on the fly. This is in contrast to the more traditional systems administration viewpoint, where multiple apps run on one system and uptimes of years are not unheard of. Docker containers are also designed to run a single process per container. In practice this means a multi-component site traditionally installed on a single system is now split up into multiple containers: one for the web server, one for the database, one for the site backend, etc. This may seem insane at first, but it makes it easier to change any single piece of a service, or move pieces to different hosts as needed.[5]
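
As a sketch of what that split can look like (the image and container names here are made up), each piece runs in its own container, joined on a user-defined bridge network so they can reach each other by name:

    # One process per container: database, site backend, and web server each get their own.
    docker network create site-net
    docker run -d --name site-db      --network site-net postgres:16
    docker run -d --name site-backend --network site-net my-backend-image
    docker run -d --name site-web     --network site-net -p 80:80 nginx:alpine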

How I Utilize Docker

Time to talk about how I put all of this to use. For my websites I have four containers running.

An nginx web proxy sits in front of them all: it takes all incoming traffic and directs it to the correct container through an internal Docker bridge network.
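
A hypothetical fragment of that proxy config might look something like the following; the domain and container name are placeholders. Each site gets a server block that forwards requests to its container by name, which Docker's embedded DNS resolves on the user-defined bridge network.

    server {
        listen 80;
        server_name example.com;                 # placeholder domain

        location / {
            # Forward to the site's container over the internal bridge network.
            proxy_pass http://blog:8080;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }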

What drew me to using Docker is that it makes it easy to both develop and deploy an application. Because the containers it creates are independent of the host, I can develop an application on my Arch desktop and deploy it to a Debian server without having to mess around with dependency installations on the server to make it work. It also makes it easy to add new services to the host: all I have to do is add some lines to my nginx proxy config, rebuild the container, and I am good to go. If I want to do some load balancing, I can easily deploy a container to another host.

To manage my Dockerfiles between development and deployment, I check them into the app's git repo. When I need to deploy the app on the server, it is just a matter of pulling the changes, building the new image, and starting the container. Something I am thinking of setting up in the future is a private Docker image registry, so I can build the images on my local dev machine, push them to the registry, and then pull the image onto my hosting server. That way I don't have to take up resources building on the host server.
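
That registry workflow would look roughly like this; registry.example.com and the image name are placeholders for whatever I end up running.

    # On the dev machine: build, tag, and push the image to the private registry.
    docker build -t mysite:latest .
    docker tag mysite:latest registry.example.com/mysite:latest
    docker push registry.example.com/mysite:latest
    # On the hosting server: pull the prebuilt image instead of building locally.
    docker pull registry.example.com/mysite:latest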

To deploy my site, I first do a clean build of the website to generate the necessary HTML and other assets. I then move this directory over to a secondary git repository that serves as my deployment repo. This could all be done with git hooks or by pushing to a separate production repository, but at this point I do not deploy frequently enough, or have a complex enough environment, to need that. The secondary repository contains my Dockerfile and all the configuration needed to build the website into an nginx web server. I then pull the new code on my server, rebuild the image, kill the old container, and start a new one. I am working on automating this with Ansible, but that is a topic for another blog post.
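
The manual deploy boils down to a handful of commands, roughly as below; the image, container, and network names are placeholders rather than my exact setup.

    # In the deployment repository on the server:
    git pull
    docker build -t mysite:latest .
    # Replace the running container with one based on the fresh image.
    docker stop mysite && docker rm mysite
    docker run -d --name mysite --network proxy-net mysite:latest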


Overall I am glad that I took the time to learn the ins and outs of Docker. It has proven to be a powerful tool that has made my life a little easier. I will continue to experiment with it, and I look forward to eventually hosting something where I can test its ability to scale horizontally. There are some things, however, where I feel that Docker is not the correct tool for the job. I have avoided dockerizing applications that rely on nixland tools (I'm looking at you, cron), as well as anything with persistent storage needs (i.e. a database).[6] These applications are still on the bare server, or in a VM. Something else I have been looking into is systemd-nspawn. This is another container implementation on Linux, with the benefit that you can design your containers either as single-app disposable services or as a more robust VM-like environment running multiple components.