Gitlab CI to Droplet with Docker

The meta post

Gitlab recently went and did a nice thing for everyone. They now offer CI and a Docker registry at no cost. That means we can do big build jobs like compiling and testing code, push the resulting images to a Docker registry, store the containers, and deploy, all for free. Better still, they've provided a sane interface into the CI system via yaml files.

This site is built with Hakyll, a static site generator written in Haskell in the tradition of Ruby's Jekyll. The artifacts of the Hakyll build are served with Nginx and deployed with Docker. So the goal is something like this:

On the Gitlab side we have our project's Git repo, a CI runner, and the Docker registry; on the Digital Ocean side, a Droplet running Docker. The flow:

  1. A push to master triggers the CI runner.
  2. The CI runner compiles Hakyll, generates the static assets, and copies the assets into an Nginx image.
  3. The runner pushes the container to the registry, which stores it.
  4. CI SSHes into the Droplet and stops the existing server container.
  5. The Droplet pulls the new copy from the registry and starts the new container.

Step 1, Build

Luckily, the kind Mr. Done's fpco has a Docker image up on hub.docker.com called fpco/stack-build with all the batteries included for compiling Haskell with Stack.

Gitlab's Docker integration is fantastic.

Let's take a look at .gitlab-ci.yml. When this file is present in a repo, Gitlab's CI will automatically detect it and start a build process called a "Pipeline".

image: fpco/stack-build

build:
  script:
    - stack setup
    - stack install
    - /root/.local/bin/blog build

That's it! When this file is committed to the Hakyll project and pushed to the master branch, Gitlab CI will detect the file and start a pipeline.

The pipeline will:

  1. Pull the fpco/stack-build image from hub.docker.com
  2. Mount the project directory in the docker image
  3. Execute stack setup inside the image, which will download GHC and prepare the stack build tool
  4. Execute stack install inside the image, which will compile the Hakyll project, producing a binary called blog
  5. Run blog build, which will process all the present .md and related files, outputting the blog into a _site directory.

So far so good! We got a lot done in very few lines of yml.
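If you want to sanity-check the build before pushing, the same three commands run fine locally, assuming you have stack installed (outside the CI image the binary lands in your own ~/.local/bin rather than /root/.local/bin):

stack setup               # download and set up the right GHC for the project
stack install             # compile the project and copy the blog binary to ~/.local/bin
~/.local/bin/blog build   # generate the static site into _site/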

Step 2, Dockerize

Nginx

Now I want to serve this site with nginx when all is done, so we are going to need a tiny nginx.conf file to serve our _site directory in an appropriate fashion.

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;

    sendfile           on;
    keepalive_timeout  65;

    tcp_nopush on;
    tcp_nodelay off;
    client_header_timeout 10s;
    client_body_timeout 10s;
    client_max_body_size 128k;
    reset_timedout_connection on;

    gzip on;
    gzip_types
        text/css
        text/plain
        application/javascript
        font/truetype
        font/opentype
        image/svg+xml;

    server {
        server_name  $hostname;
        listen       80;
        root /_site;
    }

}

That's it! This tells nginx that we are going to serve /_site, to handle MIME types for us by file name, and even to gzip responses for performance. Now it's easy for us to produce a Docker container holding this blog, served with nginx, by basing it on the official nginx container.
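If you want to check the config before baking it into an image, nginx can lint it for you. Here's a quick sketch using a throwaway container, mounting the config and _site the same way the image in the next step will lay them out:

docker run --rm \
  -v "$PWD/nginx.conf:/etc/nginx/nginx.conf:ro" \
  -v "$PWD/_site:/_site:ro" \
  nginx:1.9 nginx -t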

Docker

FROM nginx:1.9
COPY nginx.conf /etc/nginx
COPY _site /_site

The Dockerfile simply copies in our nginx.conf and whatever is in _site. When we start the container, we should see the blog served at port 80.
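Before involving CI at all, we can build and smoke-test the image locally (blog-test is just a throwaway tag I'm using for the example):

docker build -t blog-test .
docker run --rm -p 8080:80 blog-test
# the blog should now be browsable at http://localhost:8080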

Step 3, Register

Docker in Docker

Now that we've dockerized the blog with the production server, let's get it building in Gitlab CI. First we are going to need to switch images from fpco/stack-build to gitlab/dind (short for "Docker in Docker"), which will allow us to build our own Docker images inside Gitlab CI.

image: gitlab/dind

build:
  script:
    - docker run -v ${PWD}:/work fpco/stack-build bash -c 'cd /work && stack setup && stack install && /root/.local/bin/blog build'
    - docker build .

The above yml will get us pretty far, but not all the way. We still use fpco/stack-build, but now we use it via the docker command rather than implicitly as the image. Here's what's going on:

  1. Gitlab CI will pull the gitlab/dind image.
  2. CI will mount the project directory inside that image.
  3. CI will execute docker run -v ${PWD}:/work fpco/stack-build ... inside the gitlab/dind image, which will in turn pull the fpco/stack-build image and execute some bash inside it. The bash is just our earlier script of
    1. stack setup
    2. stack install
    3. blog build
  4. Last, CI will actually build our Docker image inside of gitlab/dind.

But we need to register the image with Gitlab's automatic Docker registry for this repo. Head over to the "Registry" tab of the project to see the location for the repo you are using. Each repo receives a registry endpoint, meaning we can only tag one container (of whatever version) for a given project.

In my case it's registry.gitlab.com/<you>/<image name>.
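Nothing stops you from trying the endpoint from your own machine first, logging in with your regular Gitlab credentials instead of a build token. This is a sketch of the same tag-and-push dance CI will do for us shortly:

docker login registry.gitlab.com          # your Gitlab username and password
docker build -t registry.gitlab.com/<you>/<image name> .
docker push registry.gitlab.com/<you>/<image name>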

Build Tokens

In order to build our image tagged to the registry and push it there, we need CI to log in to Gitlab's registry. Luckily they've done a great job with that integration as well.

image: gitlab/dind

build:
  script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
    - docker run -v ${PWD}:/work fpco/stack-build bash -c 'cd /work && stack setup && stack install && /root/.local/bin/blog build'
    - docker build -t registry.gitlab.com/<you>/<image name> .
    - docker push registry.gitlab.com/<you>/<image name>

Now CI will log in to registry.gitlab.com from inside gitlab/dind using a special build token, build the image, and push it to the registry. Vunderbratski! We've built a custom container that serves the blog with nginx and registered it in our Docker registry. Better yet, it will all happen automatically on git push origin master!

Step 4, Deploy

Now it's time to deploy the blog somewhere. Since all the nice things used so far have been provided for free on Digital Ocean's servers, we might as well publish to a Droplet.

Droplet

First click the Create Droplet button in the upper left after you’ve signed into Digital Ocean. Then just create a Docker Droplet with one click.

This should just generate a new droplet with Docker ready to go. We are going to need to shell into the droplet, log in to registry.gitlab.com, stop any running container, pull the latest version, and start the container. The last step we need to do in Digital Ocean is reset the root password for the droplet.
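Before handing the droplet over to CI, it's worth a quick manual check that SSH and Docker are both working (here and below, x.x.x.x stands in for your droplet's IP):

ssh root@x.x.x.x 'docker version && docker ps'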

Authentication

Authenticating with our droplet can be done in one of two ways: either with RSA SSH keys, or with a simple password.

Gitlab has some excellent documentation for SSH Keys in CI.

docs.gitlab.com/ee/ci/ssh_keys

before_script:
  # Install ssh-agent if not already installed, it is required by Docker.
  # (change apt-get to yum if you use a CentOS-based image)
  - 'which ssh-agent || ( apt-get update -y && apt-get install openssh-client -y )'

  # Run ssh-agent (inside the build environment)
  - eval $(ssh-agent -s)

  # Add the SSH key stored in SSH_PRIVATE_KEY variable to the agent store
  - ssh-add <(echo "$SSH_PRIVATE_KEY")

  # For Docker builds disable host key checking. Be aware that by adding that
  # you are susceptible to man-in-the-middle attacks.
  # WARNING: Use this only with the Docker executor, if you use it with shell
  # you will overwrite your user's SSH config.
  - mkdir -p ~/.ssh
  - '[[ -f /.dockerenv ]] && echo -e "Host *\n\tStrictHostKeyChecking no\n\n" > ~/.ssh/config'

But that's kind of a lot of work. Here is a lazier solution.

before_script:
  - apt-get update -y && apt-get install sshpass -y

This will allow us to ssh with our password in “plain text”. We won't actually hardcode the password, though; we will use a Gitlab Secret Variable, which is built in. You can add your own variables under the project settings in Gitlab and access them as environment variables in CI jobs.
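For example, with a secret variable named PASS (the name is arbitrary; it's the one the final .gitlab-ci.yml below assumes), a CI script can read it like any other environment variable:

# $PASS is injected by Gitlab CI from the project's Secret Variables
sshpass -p "$PASS" ssh -o StrictHostKeyChecking=no root@x.x.x.x 'echo connected'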

Running!

Since we are sshing into the droplet, it's much easier to store this portion of the process in a bash script. We need to disable StrictHostKeyChecking so the script won't pause and wait for user input to add the host to the known_hosts file. The script also uses bash's positional variables ($1, $2) so we can pass arguments from CI.

sshpass -p "$1" ssh -o StrictHostKeyChecking=no root@x.x.x.x <<-'ENDSSH'
   echo "here we will do work on the production server"
ENDSSH

We will need to log in to the Gitlab Docker registry to pull the image, and we can easily do that by using $2 to accept the build token as a second argument. Note that the heredoc delimiter must now be unquoted, so that $2 is expanded locally before the commands are sent to the droplet (the remote shell has no $2 of its own).

sshpass -p "$1" ssh -o StrictHostKeyChecking=no root@x.x.x.x <<-ENDSSH
   docker login -u gitlab-ci-token -p $2 registry.gitlab.com
ENDSSH

Now we kill the currently running copy (if any).

sshpass -p "$1" ssh -o StrictHostKeyChecking=no root@x.x.x.x <<-ENDSSH
   docker login -u gitlab-ci-token -p $2 registry.gitlab.com
   docker stop <image name> || true   # don't fail if nothing is running
   docker rm <image name> || true
ENDSSH

Pull the latest container from the registry, and run it.

sshpass -p "$1" ssh -o StrictHostKeyChecking=no root@x.x.x.x <<-ENDSSH
   docker login -u gitlab-ci-token -p $2 registry.gitlab.com
   docker stop <image name> || true
   docker rm <image name> || true
   docker pull registry.gitlab.com/<you>/<image name>
   docker run --name <image name> -p 80:80 -d registry.gitlab.com/<you>/<image name>
ENDSSH

Putting it all together

.gitlab-ci.yml

image: gitlab/dind

stages:
  - build
  - deploy

build:
  stage: build
  script:
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com
    - docker run -v ${PWD}:/work fpco/stack-build bash -c 'cd /work && stack setup && stack install && /root/.local/bin/blog build'
    - docker build -t registry.gitlab.com/<you>/<image name> .
    - docker push registry.gitlab.com/<you>/<image name>

deploy:
  stage: deploy
  environment: production
  script:
    - apt-get update -y && apt-get install sshpass -y
    - ./deploy.sh $PASS $CI_BUILD_TOKEN

deploy.sh

#!/bin/bash
# $1 = droplet root password, $2 = Gitlab CI build token
sshpass -p "$1" ssh -o StrictHostKeyChecking=no root@x.x.x.x <<-ENDSSH
   docker login -u gitlab-ci-token -p $2 registry.gitlab.com
   docker stop <image name> || true
   docker rm <image name> || true
   docker pull registry.gitlab.com/<you>/<image name>
   docker run --name <image name> -p 80:80 -d registry.gitlab.com/<you>/<image name>
ENDSSH

Notice that the pipeline is now split into stages. This way, if one part fails, it can be retried independently.
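One gotcha: since the deploy stage calls ./deploy.sh directly, the script needs its executable bit committed to git, or the job fails with a permission error. If you hit that, something like this sorts it out:

chmod +x deploy.sh
git update-index --chmod=+x deploy.sh   # make sure git records the exec bit
git commit -m "Make deploy.sh executable"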

In the end, that's it! Two files and we have CI with automated Docker deployment, at no cost, provided by Gitlab, to a Droplet with almost zero configuration! So nice!

I couldn’t find a tutorial on this, and there was plenty of stumbling around to put this together. Hopefully this helps!