I recently switched hosting providers. In the process, I decided to switch to running all my services inside of containers.
Let’s take a look at my previous server environment: I had
- Some version of nginx serving static files and acting as a reverse proxy for gogs and ghost
- Let’s Encrypt client running periodically and copying certificates for nginx
- Init or cron would run this, maybe both?
- Some version of Ghost running under some version of node.js
- Some version of gogs running
- Some copy-pasted/handrolled init scripts to manage everything
- Probably out-of-date system software
- Probably other forgotten pieces of software installed
- Data stored in various, unknown locations on disk
There are a lot of unknowns here. Because of this, it’s easy to accidentally break things and difficult to reproduce a setup that has worked. Containers can help with this. They allow us to use immutable (unchanging), versioned images in a sandboxed environment to make sure that we run the exact same code in the exact same environment. This is huge. Everything in the container is always the same. Just pass the same command line flags every time, and everything should be identical. To modify some local state, just mount a local directory into the container.
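As a rough sketch of what that looks like in practice (the image name, tag, and paths here are illustrative, not from my actual setup), running a pinned, versioned image where the only mutable state is a single mounted host directory might look like:

```shell
# Hypothetical sketch: the container filesystem is read-only, the image
# version is pinned (not :latest), and the only writable, persistent
# state is one host directory mounted into the container.
docker run --read-only \
  -v /srv/app/state:/var/lib/app \
  example/app:1.2.3
```

Run the same command with the same image tag, and you get the same code in the same environment every time; only the mounted directory carries state between runs.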
To push things as far into the immutable, self-managing realm as possible, I decided on the following setup:
- I used CoreOS Container Linux as my host system. In Container Linux, all system software lives in /usr, which is mounted read-only. The OS itself auto-updates, so even if you forget to update your containers, the host OS should have the latest security patches, which should hopefully prevent vulnerabilities in one container from affecting another.
- Everything would run inside containers. I already discussed the benefits of immutable software above. However, instead of running containers with Docker, I used rkt.
- All config files (including start-up scripts/systemd unit files) would be checked into a git repository.
This allows me to reproduce any previous setup exactly by backing up any directories that are referenced in my systemd unit files, copying them to a new host, and running my unit files there. No apt-get required.
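Concretely, restoring a setup on a fresh host reduces to something like the following (the repository URL, unit names, and paths are illustrative):

```shell
# Hypothetical sketch of reproducing the setup on a new host.
# The git repo holds the systemd unit files; the data directories are
# whatever host paths those unit files mount into the containers.
git clone https://example.com/server-config.git
sudo cp server-config/units/*.service /etc/systemd/system/

# Restore the backed-up data directories referenced by the unit files.
sudo rsync -a backup/ghost/data/ /home/ghost/data/

# Pick up the new units and start them.
sudo systemctl daemon-reload
sudo systemctl enable --now ghost.service
```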
rkt vs Docker
Docker/Moby is the poster child and de facto runtime for containers in the open-source world. I have a great deal of appreciation for Docker for popularizing containers and making them easy to use. However, Docker's container runtime reimplements a lot of functionality that existing Linux tools (e.g. systemd) already provide. In contrast, rkt uses existing facilities where possible. When the Docker daemon crashes, so does your app; rkt has no daemon, and instead lets systemd own your app. Because I wanted to start my containers and trust that they'd keep running without issue, I went with rkt.
Using a container-based setup has been quite nice. For the most part, things are easy to figure out, dead easy to reproduce, and do what you want. I will talk about two things specifically though: finding docker images and configuration.
Because rkt has no trouble launching Docker images, and because Docker images are so much easier to find than ACI (rkt-native) images, I didn't even bother looking for ACI images. I needed images for three pieces of software: an HTTP server and reverse proxy, Ghost, and a git server. Because containers are still relatively new, I occasionally ran into issues. I had a difficult time figuring out how to make Let's Encrypt work with nginx inside Docker, so I gave up and decided to use Caddy instead. Beyond that, setting things up wasn't any more difficult than a non-container-based installation.
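For context on why Caddy made this easy: it obtains and renews Let's Encrypt certificates automatically for any site in its Caddyfile. A minimal Caddyfile of the sort I'm describing (the hostname, paths, and port are illustrative, and the exact directives depend on the Caddy version) looks roughly like:

```
# Hypothetical Caddyfile sketch: Caddy serves this host over HTTPS and
# manages the Let's Encrypt certificate for it automatically.
example.com {
    root /srv/www                 # static files
    proxy /blog localhost:2368    # reverse-proxy to Ghost
}
```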
I decided to create systemd unit files to launch and manage all of my services. Had I needed multiple machines, I probably would have used kubernetes, but that would have been overkill for a single server. Once you learn the basic syntax, creating systemd unit files is really simple. Here’s an example one I used to launch ghost:
```
[Unit]
Description=Ghost
Requires=network-online.target

[Service]
ExecStart=/usr/bin/rkt run \
  --volume data,kind=host,source=/home/ghost/data,readOnly=false \
  --mount volume=data,target=/var/lib/ghost/content \
  --set-env=url=https://uluyol.xyz/blog \
  --net=host \
  --insecure-options=image \
  docker://ghost:1.5-alpine
ExecStopPost=/usr/bin/rkt gc --mark-only
KillMode=mixed
Restart=always

[Install]
WantedBy=multi-user.target
```
ExecStart is the command used to launch ghost. You can see that I start rkt, and tell it to mount
/home/ghost/data read-write to
/var/lib/ghost/content in the container. I also have it run on the host machine's network (as opposed to behind a local NAT), and tell it not to check the signature of the ghost image (signatures aren't supported for Docker images). If the container crashes, systemd will restart it. When the service is stopped, rkt marks any dead containers for garbage collection.
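Day-to-day management then goes through the usual systemd and rkt commands; for example (assuming the unit above is installed as ghost.service):

```shell
systemctl status ghost.service         # current state and recent log lines
journalctl -u ghost.service -f         # follow the containerized app's logs
sudo systemctl restart ghost.service   # restart the container
rkt list                               # list the pods rkt knows about
```

Because systemd owns the process, logs, restarts, and startup ordering all come for free instead of being reimplemented by a container daemon.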
On the whole, switching to container-based hosting has been a big win. I don’t need to worry about updating Container Linux, and reproducing old setups is trivial. I know how my services are configured (systemd unit files checked into a git repository), and I know exactly what files are important to me. For anyone looking to simplify their deployment setups, I highly recommend starting with containers.