How I ended up with Dokku and how to deploy Docker images with a GitLab pipeline

The road so far

About 2 years ago I started a bit of an odyssey to find a convenient, affordable and production-ready way to deploy my web application stack. My goal was to meet those criteria for a product in a segment that would normally end up on a shared webhost, with clients who are used to shared-webhosting pricing. First I discovered sloppy.io and started using their lowest tier. About two months in, their entire infrastructure went down for 24 hours and I was left with angry clients. I took responsibility and found a better way: now.sh. That seemed like the best of both worlds. Almost zero configuration, regional deployments, integrated DNS. It seemed too good to be true - easy and affordable? Count me in!

Turns out it was too good to be true. 4 months in, ZEIT (now known as Vercel) decided that Docker was over and serverless was the future, and basically deprecated their Docker deployment options overnight. At that point I lost all confidence in the service and tried my luck with sloppy again: it was simply the only comparable service with the kind of pricing and tooling I was looking for.

And again: 8 months later, right around Christmas 2019, I got an email from sloppy that they would shut down their service with a few months' notice. Apparently their investors decided to pull out or something like that. For fuck's sake...

At this point I decided to turn my back on all the shiny stuff and to go "homebrew" for my infrastructure. For better or worse, I had to build my own solution and luckily, I found Dokku.

On a cold winter night, some time between Christmas and New Year, I decided to set up Dokku on a Hetzner Cloud instance and rewrote my GitLab pipeline. I ended up with, in essence, a solution similar to all of the above, deploying in a "herokuish" manner on my own infrastructure. At a fraction of the cost, and knowing that if things go down, I can at least do something about it. I was actually surprised how easy all of that was. Within 2 days, I had moved all my projects to Dokku.

Before I started using Dokku, I checked out Rancher and pondered whether I should start investing my time into Kubernetes for my infrastructure. But it just felt, let's be honest, way too overengineered for a couple of websites with low traffic. Dokku nicely fits into the gap between static, shared webhosting and managed solutions on AWS or Google Cloud that cost a metric shit ton per month for each single project.

Hindsight is 20/20 – I fully acknowledge that I might have been a bit too eager to use the shiny new thing, and I paid the price for my naïveté. Switching providers costs time, introduces a new learning curve each time and can cause downtime for client projects. But with a modern web stack based on Node.js, Express and Docker, there's no common infrastructure pattern like there is with LAMP. Everything is a bit more hands-on, technical and sometimes frustrating.

At the same time, this is something I like about the whole thing. Sounds masochistic, I know. But in a way, using the same thing I used for the past 20 years seems to be so utterly boring that I happily accept the potential frustration that comes with being a bit more "on the edge".

Lessons learned

Over the past 4 months, I refined the deployment and build processes and learned a few things along the way.

  • GitLab's free runner offering is nice. But sometimes slow. And sometimes not available at all. Since my home office has more than enough spare CPU cycles just craving to be used, I put them to work as runners.
  • If you update Dokku using the "dokku-update" package available via apt-get, and you deployed your apps the "herokuish" way by git-pushing to your Dokku host, prepare to wait a while: "dokku-update" will rebuild all apps. During this time, the load balancer will seemingly randomly serve different apps for different domains, because Dokku falls back to the alphabetically first available app for a domain pointing at it. This is a bit of a weird way of handling things, and it gave me a sweaty forehead during my first Dokku maintenance window. To prevent it, you can run an app named something like "0a-default-app", or configure Dokku's NGINX to return an HTTP status message or forward to a default site instead of the next available app.
  • If you run 3-4 projects on a small Hetzner Cloud instance, Dokku build jobs included, CPU quickly becomes a bottleneck during builds. That made me switch to building Docker images in the pipeline and deploying them via Dokku tags, instead of the regular "herokuish" way of doing things. If you only run one app per instance: don't bother.
  • If you play around with your pipeline, run it a dozen times and create Let's Encrypt certificates along the way each time: be aware of LE's rate limits on issuance and on failed attempts. I was locked out of LE 2-3 times during the process because I was too stupid to realise that.
  • For additional services in my stack (like Commento or Statping) I don't use Dokku but the smallest Hetzner cloudlet for each, without any fancy automation - just scp and a volume linked into the containers for persistent storage. It also helps to distribute concerns.
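Putting home-office machines to work as runners is a one-time setup per machine. Roughly like this on Debian/Ubuntu - the registration token comes from the project's CI/CD settings, and the description is obviously just an example:

```shell
# Install GitLab's runner package (Debian/Ubuntu repository setup)
curl -L https://packages.gitlab.com/install/repositories/runner/gitlab-runner/script.deb.sh | sudo bash
sudo apt-get install gitlab-runner

# Register the runner non-interactively against gitlab.com
sudo gitlab-runner register \
  --non-interactive \
  --url "https://gitlab.com/" \
  --registration-token "PROJECT_REGISTRATION_TOKEN" \
  --executor "docker" \
  --docker-image "docker:stable" \
  --description "home-office-runner"
```

Once registered, jobs tagged for that runner are picked up by the home machine instead of the shared fleet.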
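For the "alphabetically first app" problem during updates, one option is a catch-all NGINX server block that answers unknown hosts before Dokku's vhosts are consulted. A minimal sketch, assuming a stock NGINX install with a conf.d include directory:

```shell
# Catch-all vhost: close the connection for any host Dokku doesn't know.
# The 00- prefix makes NGINX pick this file up first.
sudo tee /etc/nginx/conf.d/00-default-vhost.conf > /dev/null <<'EOF'
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    return 444;  # NGINX-specific: drop the connection without a response
}
EOF

# Validate and reload
sudo nginx -t && sudo systemctl reload nginx
```

With this in place, a rebuild window shows visitors nothing instead of a random other client's app.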
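The tag-based flow mentioned above, roughly: build the image in the pipeline, ship it to the host, and let Dokku deploy that exact tag. App name, host and tag are placeholders here:

```shell
# Build in CI and name the image the way Dokku expects: dokku/<app>:<tag>
docker build -t dokku/myspecialapp:v1.0.3 .

# Transfer the image to the Dokku host without needing a registry
docker save dokku/myspecialapp:v1.0.3 | ssh root@mydokkuhost.com "docker load"

# Deploy that tag - no build happens on the Dokku host itself
ssh root@mydokkuhost.com "dokku tags:deploy myspecialapp v1.0.3"
```

The host only runs the container; the CPU-hungry build work stays on the runner.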
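To avoid burning through LE's production rate limits while iterating, the dokku-letsencrypt plugin can be pointed at the staging endpoint (app name is a placeholder; check the plugin's README for the exact variable name in your version):

```shell
# Staging CA: certificates are untrusted by browsers, but rate limits are generous
dokku config:set --no-restart myspecialapp DOKKU_LETSENCRYPT_SERVER=staging
dokku letsencrypt myspecialapp

# Once the pipeline is stable, switch back and issue a real certificate
dokku config:set --no-restart myspecialapp DOKKU_LETSENCRYPT_SERVER=default
dokku letsencrypt myspecialapp
```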
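For those side services, "deployment" really is just a docker run with a bind-mounted host directory. A sketch of the pattern - image, port and mount path are illustrative, not a recommendation:

```shell
# One small container per service; persistent state lives on the host,
# so the container can be recreated or updated without losing data.
docker run -d --name statping \
  --restart unless-stopped \
  -p 8080:8080 \
  -v /srv/statping:/app \
  statping/statping
```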

The Pipeline

Here's the full .gitlab-ci.yml. Some explanations are included as comments below.

Dokku plugins used:

(The plugin list and the pipeline listing were lost in the export of this post.)
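Since the original listing didn't survive, here is a minimal sketch of what a tag-based pipeline along these lines can look like. App name, host and stage layout are illustrative, not the original file:

```yaml
stages:
  - build
  - deploy-staging
  - deploy-production

variables:
  IMAGE_TAG: dokku/myspecialapp:$CI_COMMIT_SHORT_SHA

build:
  stage: build
  script:
    - docker build -t $IMAGE_TAG .
    # ship the image to the Dokku host without a registry
    - docker save $IMAGE_TAG | ssh root@mydokkuhost.com "docker load"

deploy-staging:
  stage: deploy-staging
  script:
    # retag for the staging app, then deploy that tag
    - ssh root@mydokkuhost.com "docker tag $IMAGE_TAG dokku/myspecialapp-staging:$CI_COMMIT_SHORT_SHA"
    - ssh root@mydokkuhost.com "dokku tags:deploy myspecialapp-staging $CI_COMMIT_SHORT_SHA"

deploy-production:
  stage: deploy-production
  when: manual
  only:
    - master
  script:
    - ssh root@mydokkuhost.com "dokku tags:deploy myspecialapp $CI_COMMIT_SHORT_SHA"
```

The manual `when: manual` gate on production makes the deploy a button in the GitLab UI rather than an automatic step.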

If the stages run successfully, the apps will be available at "myspecialapp-{PIPELINE_NAME}.mydokkuhost.com" and at the configured domain for the production container, respectively. DNS needs to be managed manually for now. I use Cloudflare - but anything goes, really. Cloudflare is nice because it supports CNAME flattening at the root ("@") level. It might also come in handy in DDoS situations or when a quick fix is needed for a high-traffic/load situation. But I'll cross that bridge when I reach it.

Of course, this is all very specific to my needs. Yours might be different. The included shell functions, though, are applicable to a lot of build and deployment contexts with Dokku.

Things I want to look into when the need arises:

  • Integrating the Cloudflare API via some CLI utility to automate subdomain creation
  • Deploying feature branches dynamically to subdomains (requires Cloudflare API integration)
  • Rolling back production to a specific tag/commit that has successfully passed all stages, offered as a stage in the GitLab UI

In terms of scaling - well, I have yet to find that out. So far all my projects have been chugging along nicely and haven't encountered any load issues. If a project reaches a certain limit, I'll just scale up the Hetzner cloudlet or extract the project to its own cloudlet/Dokku instance. That will be good enough for 99% of all my projects.

Photo by Mak on Unsplash

Published: 04/22/2020

© 2020 genox.ch