
How I ended up with Dokku and how to deploy Docker images with a GitLab pipeline

The road so far

About two years ago I started a bit of an odyssey to find a convenient, affordable and production-ready way to deploy my web application stack. My goal was to tick all three of those boxes, especially for the kind of project that would normally end up on a shared webhost, with clients who are used to shared-webhosting pricing. First I discovered sloppy.io and started using their lowest tier. About two months in, their entire infrastructure went down for 24 hours and I was left with angry clients. I took responsibility and found a better way: now.sh. That seemed like the best of both worlds: almost zero configuration, regional deployments, integrated DNS. It seemed too good to be true - easy and affordable? Count me in!

Turns out it was too good to be true. Four months in, ZEIT (now known as Vercel) decided that Docker was over and serverless was the future, and basically deprecated their Docker deployment options overnight. At that point I lost all confidence in the service and tried my luck again with sloppy: it was simply the only service with comparable tooling and the kind of pricing I was looking for.

And again: 8 months later, right around Christmas 2019, I got an email from Sloppy that they would shut down their service with a few months' notice. Apparently their investors decided to pull out or something like that. For fuck's sake...

At this point I decided to turn my back on all the shiny stuff and to go "homebrew" for my infrastructure. For better or worse, I had to build my own solution and luckily, I found Dokku.

On a cold winter night, some time between Christmas and New Year, I decided to set up Dokku on a Hetzner Cloud instance and rewrote my GitLab pipeline. I ended up with, in essence, a solution similar to all of the above, deploying in a "herokuish" manner on my own infrastructure - at a fraction of the cost, and knowing that if things go down, I can at least do something about it. I was actually surprised how easy all of that was. Within two days, I had moved all my projects to Dokku.

Before I started using Dokku, I checked out Rancher and pondered whether I should start investing my time into Kubernetes for my infrastructure. But let's be honest: it just felt way too overengineered for a couple of websites with low traffic. Dokku nicely fits into the gap between static, shared webhosting and managed solutions on AWS or Google Cloud that cost a metric shit ton per month for each single project.

Hindsight is 20/20 – I fully acknowledge that I might have been a bit too eager to use the shiny new thing, and I paid the price for my naiveté. Switching providers costs time, introduces a new learning curve each time and might cause downtime for client projects. But with a modern web stack based on NodeJS, Express and Docker, there's no common infrastructure pattern like there is with LAMP. Everything is a bit more hands-on, technical and sometimes frustrating.

At the same time, this is something I like about the whole thing. Sounds masochistic, I know. But in a way, using the same thing I used for the past 20 years seems to be so utterly boring that I happily accept the potential frustration that comes with being a bit more "on the edge".

Lessons learned

Over the past 4 months, I refined the deployment and build processes and learned a few things along the way.

  • GitLab's free offering of shared runners is nice. But sometimes slow. And sometimes not available at all. Since I have more than enough spare CPU cycles at my home office that just crave being used, I run my own runner there and use them for exactly that.
  • If you update Dokku using the "dokku-update" package available via apt-get and you deployed your apps the "herokuish" way by git pushing to your Dokku host, prepare to wait a while, as "dokku-update" will rebuild all apps. During this time, the load balancer will seemingly randomly serve different apps for different domains, because Dokku falls back to the alphabetically first available app for any domain it can't match. This is a bit of a weird way of handling things, and it gave me a sweaty forehead while updating Dokku during the first maintenance window. To prevent this behaviour, you can run an app named something like "0a-default-app", or configure Dokku's NGINX to show an HTTP status message or forward to a default site instead of the next available app (see the sketch after this list).
  • If you run 3-4 projects on a small Hetzner Cloud instance, including Dokku build jobs, CPU quickly becomes a bottleneck during builds. That made me switch to building Docker images in the CI pipeline and deploying them via dokku tags, instead of the regular "herokuish" way of doing things. If you only run one app per instance - don't bother.
  • If you play around with your pipeline, run it a dozen times and create Let's Encrypt certificates along the way each time: be aware of LE's rate limits for certificate creation and for failed creation attempts. I locked myself out of LE 2-3 times during the process because I was too stupid to realise that.
  • For additional services in my stack (like Commento or Statping) I don't use Dokku but the smallest Hetzner cloudlet for each, without any fancy automation: just scp and a volume mounted into the container for persistent storage. It also helps to distribute concerns.
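
Regarding the "default app" behaviour mentioned above: a catch-all NGINX server block on the Dokku host prevents the fallback to the alphabetically first app. Here's a minimal sketch, assuming a stock NGINX install that includes /etc/nginx/conf.d/*.conf; the file name and the 410 status are my own arbitrary choices, not something Dokku ships with.

sudo tee /etc/nginx/conf.d/00-default-vhost.conf > /dev/null <<'EOF'
server {
    # Answer requests for any hostname that no Dokku app claims with a bare status
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    return 410;
}
EOF
sudo nginx -t && sudo systemctl reload nginx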

The Pipeline

Here's the full .gitlab-pipeline.yml. Some explanations are included as comments below.

Dokku plugins used:

dokku plugin:install https://github.com/mbreit/dokku-monit.git
dokku plugin:install https://github.com/dokku/dokku-letsencrypt.git
dokku plugin:install https://github.com/dokku/dokku-http-auth.git
dokku plugin:install https://github.com/dokku/dokku-redirect.git
dokku plugin:install https://github.com/ribot/dokku-slack.git
dokku plugin:install https://github.com/josegonzalez/dokku-docker-direct.git
.gitlab-pipeline.yml (updated: 2020/05/12)
# GitLab Variables on Group level

# DOKKU_USER - usually the default user, "dokku"
# DOKKU_DOCKER_PORT - the default internal port of the containers
# DOKKU_PRIVATE_KEY_FOR_DOKKUHOST - private key for all SSH tasks run as DOKKU_USER
#                                   and DOKKU_CI_USER
# DOKKU_DOMAIN - the default domain of the dokku host

variables:
  APP_NAME: "myspecialappcom" # A slug that will be used in creating the Dokku app and tags
  APP_DOMAIN: "myspecialapp.com" # Primary domain that will be linked to the Dokku app
  APP_DOMAIN_ALTERNATIVE: "www.myspecialapp.com" # An optional, secondary domain
  DOKKU_HOST: "dokkuhost.mydomain.io" # The Dokku host we're deploying to
  DOKKU_MEMORY_LIMIT: "256MB" # Limiting the Dokku app's memory on small cloudlet instances
  STAGING_LOGIN: "staging staging" # HTTP auth user and password for the staging step
  SSH_PRIVATE_KEY: $DOKKU_PRIVATE_KEY_FOR_DOKKUHOST # SSH private key
  USE_LETSENCRYPT: "true" # Whether or not Dokku should add SSL certs during the deployment
  USE_APP_DOMAIN: "false" # Whether or not Dokku should set up domains other than the automatically created subdomains

# My workflow includes an E2E stage using Cypress. Production can only be deployed if
# staging and e2e finished without errors.


stages:
  - staging
  - e2e
  - production

.init_functions: &init_functions |
  function init() {
    export SSH_BINARY=$(which ssh)
    export SSH="$SSH_BINARY -qtt $DOKKU_USER@$DOKKU_HOST --"
    export APP="$APP_NAME-$CI_ENVIRONMENT_SLUG"
    export DOKKU_APP_EXISTS="true"
  }

  # Preparing openSSH client
  function ssh_setup() {
    eval $(ssh-agent -s)
    ssh-add <(echo "$SSH_PRIVATE_KEY")
    mkdir -p ~/.ssh && touch ~/.ssh/known_hosts
    ssh-keyscan $DOKKU_HOST >> ~/.ssh/known_hosts
    chmod 700 ~/.ssh && chmod 644 ~/.ssh/known_hosts
  }

  # Isolated LE handling with error catcher
  function letsencrypt() {
    init
    if [[ $USE_LETSENCRYPT == "false" ]]; then
      echo -e "Skipping SSL certificate provisioning"
      exit 0
    fi
    echo -e "Provisioning SSL certificates"
    # Letsencrypt errors don't exit the pipeline properly, we need to parse the output
    # for errors manually and "exit 1" to trigger a failed deployment
    LE_RESULT=$($SSH letsencrypt $APP 2>&1)
    echo -e "$LE_RESULT"
    if [[ $LE_RESULT == *"ERROR"* ]]; then
      echo "ERROR: Could not generate SSL certs"
      exit 1
    fi
  }

  # Dokku App config if app does not exist
  function create_basic_app_if_not_exists() {
    $SSH apps:exists $APP || export DOKKU_APP_EXISTS="false"
    if [[ $DOKKU_APP_EXISTS == "false" ]]; then
      cfcli --activate false -t CNAME add $APP.$DOKKU_DOMAIN $DOKKU_HOST || true
      $SSH apps:create $APP
      $SSH slack:set $APP $DOKKU_SLACK_WEBHOOK  
      $SSH resource:limit --memory $DOKKU_MEMORY_LIMIT $APP
      $SSH resource:reserve --memory $DOKKU_MEMORY_LIMIT $APP
      $SSH config:set --no-restart $APP NODE_ENV=production
      $SSH config:set --no-restart $APP ENVIRONMENT=$CI_ENVIRONMENT_SLUG
      $SSH proxy:ports-add $APP http:80:$DOKKU_DOCKER_PORT
    fi
  }

  function build_container_and_push_to_gitlab() {
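    # Build the image on the GitLab runner, using the registry's "latest" tag as a layer cache, and push it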
    init
    docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    docker pull $CI_REGISTRY_IMAGE:latest || true
    docker build --cache-from $CI_REGISTRY_IMAGE:latest --tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA --tag $CI_REGISTRY_IMAGE:latest .
    docker push $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
    # Also push "latest" so the next pipeline run can reuse it as a build cache
    docker push $CI_REGISTRY_IMAGE:latest
  }

  function pull_container_from_dokkuserver() {
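    # Pull the commit-tagged image onto the Dokku host and retag it as dokku/$APP:latest for tags:deploy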
    init
    $SSH docker-direct login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
    $SSH docker-direct pull $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
    $SSH docker-direct tag $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA dokku/$APP:latest
    $SSH docker-direct image rm $CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA
  }

  function deploy_to_staging() {
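    # Create the staging app if needed, deploy the "latest" tag and protect it with HTTP auth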
    init
    create_basic_app_if_not_exists
    $SSH tags:create $APP latest
    $SSH tags:deploy $APP latest
    letsencrypt
    $SSH http-auth:on $APP $STAGING_LOGIN
  }

  function deploy_to_production() {
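    # Create the production app if needed (incl. custom domains), deploy the "latest" tag and enable monit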
    init
    create_basic_app_if_not_exists
    if [[ $DOKKU_APP_EXISTS == "false" ]]; then
      if [[ $USE_APP_DOMAIN == "true" ]]; then
        $SSH domains:add $APP "$APP_DOMAIN" "$APP_DOMAIN_ALTERNATIVE"
      fi
    fi
    $SSH tags:create $APP latest
    $SSH tags:deploy $APP latest
    letsencrypt
    $SSH monit:enable $APP
  }

  function cleanup() {
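    # Prune stopped containers and unused images on the Dokku host to free disk space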
    $SSH docker-direct container prune -f
    $SSH docker-direct image prune -af
  }

before_script:
  - *init_functions

staging:
  stage: staging
  image: zweitakt/nextjs-gitlab-runner:latest
  only:
    - master
  environment:
    name: staging
    url: https://$APP_NAME-$CI_ENVIRONMENT_SLUG.$DOKKU_DOMAIN/
  script:
    - ssh_setup
    - build_container_and_push_to_gitlab
    - pull_container_from_dokkuserver
    - deploy_to_staging

# E2E tests are testing against deployed instances,
# we don't need to worry about checking out code.
e2e:
  stage: e2e
  image: cypress/base:13.6.0
  only:
    - master
  environment:
    name: e2e
  variables:
    CYPRESS_CACHE_FOLDER: "$CI_PROJECT_DIR/cache/Cypress"
  cache:
    key: ${CI_COMMIT_REF_SLUG}
    paths:
      - cache/Cypress
  script:
    - yarn install
    - yarn workspace @zweitakt/e2e run test-staging

production:
  stage: production
  image: zweitakt/nextjs-gitlab-runner:latest
  when: manual
  only:
    - master
  environment:
    name: production
    url: https://$APP_DOMAIN/
  script:
    - ssh_setup
    - pull_container_from_dokkuserver
    - deploy_to_production
    - cleanup

If the stages run successfully, the apps will be available at "myspecialappcom-staging.mydokkuhost.com" (i.e. $APP_NAME-$CI_ENVIRONMENT_SLUG.$DOKKU_DOMAIN) and at the configured domain for the production container respectively. DNS for the custom production domains still needs to be managed manually for now; the pipeline only creates the CNAME for the Dokku subdomain via cfcli. I use Cloudflare - but anything goes, really. Cloudflare is nice because it supports CNAME flattening at the root ("@") level. It also might come in handy in DDoS situations or when a quick fix is needed for a high-traffic/load situation. But I'll cross that bridge when I get to it.
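
Since the staging step already creates its subdomain via cfcli, the production records could be added from the shell the same way. A sketch using the example values from the variables block (the root-level CNAME relies on Cloudflare's flattening, and cfcli has to be configured with your Cloudflare credentials first):

# same invocation as in create_basic_app_if_not_exists, just with the production domains
cfcli --activate false -t CNAME add myspecialapp.com dokkuhost.mydomain.io
cfcli --activate false -t CNAME add www.myspecialapp.com dokkuhost.mydomain.io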

Of course, this is all very specific to my needs; yours might be different. The included shell functions should translate to a lot of other build and deployment contexts with Dokku, though.
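
For example, the $SSH pattern from init() is handy for ad-hoc maintenance from a local shell; a quick sketch using the example host and app names from this post:

export SSH="ssh -qtt dokku@dokkuhost.mydomain.io --"
$SSH ps:report myspecialappcom-production   # container/process status of the production app
$SSH logs myspecialappcom-production -t     # tail its logs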

Things I want to look into when the need arises:

  • Integrating the Cloudflare API via some CLI utility to automate subdomain creation
  • Deploying feature branches dynamically to subdomains (requires Cloudflare API integration)
  • Rolling back production to a specific tag/commit that has successfully passed all stages, offered as a manual stage in the GitLab UI

In terms of scaling - well, I have yet to find that out. So far all my projects have been chugging along nicely without any load issues. If a project reaches a certain limit, I'll just scale up the Hetzner cloudlet or move it to its own cloudlet/Dokku instance. That will be good enough for 99% of my projects.


Published: 04/22/2020
DevOps


