Deploying Websites with GitLab CI and Uberspace

Published Friday, April 28th 2023 · 6min read

This is one of those blog posts that I write more for myself than for others: today I once again found myself wanting to automate deployments of a website via rsync, using GitLab CI and Uberspace as a host, and I spent far too long re-creating a workflow that I have used a bunch of times before, but apparently not often enough to commit it to memory. 😅

Nonetheless, I hope this can still be of use to some of you! So without further ado, here’s how you can deploy a website (or anything really) to Uberspace via rsync after a push to a specific branch. This example is specifically about deploying a site built with Astro, but I’m sure the information holds true for any other kind of project with a little tweaking.

Prerequisites

You’ll need a couple of things if you want to follow along:

  • An Uberspace to host the website on
  • A GitLab account
  • A computer with SSH installed
  • A fresh ed25519 keypair with no passphrase (generate it with ssh-keygen -t ed25519 -a 100)
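If you don't have such a keypair yet, generating one could look like this (the file name deploy_key and the comment are just placeholders; any names work):

```shell
# Create a passphrase-less ed25519 keypair dedicated to CI deployments.
# -a 100 hardens the key derivation, -N "" sets an empty passphrase,
# -f writes the pair to ./deploy_key and ./deploy_key.pub
ssh-keygen -t ed25519 -a 100 -N "" -f deploy_key -C "gitlab-ci-deploy"
```

Keep deploy_key (the private half) for the GitLab CI variable later on, and deploy_key.pub for the Uberspace.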

Preparing the Uberspace

The first step is to SSH into the Uberspace and create a new file: logssh.sh with the following content:

#!/bin/sh
if [ -n "$SSH_ORIGINAL_COMMAND" ]
then
  echo "`/bin/date`: $SSH_ORIGINAL_COMMAND" >> $HOME/ssh-command-log
  exec $SSH_ORIGINAL_COMMAND
fi
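You can see what the script will do without involving SSH at all by setting SSH_ORIGINAL_COMMAND yourself; a quick local sanity check might look like this (the script is recreated here so the snippet is self-contained):

```shell
# Recreate logssh.sh from above so this snippet stands on its own,
# then simulate a forced command the way sshd would.
cat > logssh.sh <<'EOF'
#!/bin/sh
if [ -n "$SSH_ORIGINAL_COMMAND" ]
then
  echo "`/bin/date`: $SSH_ORIGINAL_COMMAND" >> $HOME/ssh-command-log
  exec $SSH_ORIGINAL_COMMAND
fi
EOF
chmod +x logssh.sh

# sshd sets SSH_ORIGINAL_COMMAND to the client's requested command;
# here we fake it. The command is logged, then executed via exec.
HOME="$PWD" SSH_ORIGINAL_COMMAND="echo hello" ./logssh.sh
# → prints "hello" and appends a dated entry to ./ssh-command-log
```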

The code for it is based on this tutorial by Gerhard Gappmeier. What it does is help us restrict what the new key we’re going to register on the Uberspace is allowed to do, since we wouldn’t want to hand GitLab a private key that could do whatever it wanted on our Uberspace. In a nutshell, it logs the exact command that was run on the server before executing it, so we can later use that logged output to restrict what the key can do.

For that purpose, you can prepend a command="xyz" option to an SSH key in .ssh/authorized_keys, just like we’re going to do in order to capture the command in the first place. But first, we need to make our script executable with chmod +x logssh.sh.

Then open .ssh/authorized_keys and add command="/path/to/logssh.sh" followed by the public key of the keypair you prepared earlier.
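The resulting line in .ssh/authorized_keys then looks roughly like this (the path, key material, and comment are placeholders; keep everything on a single line):

```
command="/home/username/logssh.sh" ssh-ed25519 AAAA... gitlab-ci-deploy
```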

Save the file and switch back to your local computer.

Preparing GitLab

Log into your GitLab account and open the repository that you would like to deploy on push. In the settings menu on the left, there is a submenu entry called “CI/CD”; expand the “Variables” section there. You will have to set up two variables:

  1. SSH_PRIVATE_KEY: the private key of the keypair you prepared earlier
  2. SSH_HOST_KEY: the host key of your Uberspace, which you can find out by running ssh-keyscan -H -t ed25519 followed by the domain of your Uberspace (ssh-keyscan queries the server directly, so no prior SSH connection is required). You can also find the public key in your Uberspace’s datasheet

This is sensitive information that you would not want to have within your code, so that’s why we are setting it up as CI variables. It is important to realise that you’re essentially giving GitLab access to your Uberspace this way, since you’re passing them the private key. This is why it is essential you use a new keypair that is only used for this single purpose, and why we are restricting what commands it can run in the first place!

Setting up a .gitlab-ci.yml File

With these preparations in place, it’s time to set up a .gitlab-ci.yml file, which tells GitLab that you want to use GitLab CI to execute some code when certain conditions are met. I’ll include the following example for completeness’ sake, but note that this is the part that will be specific to every project, so it’ll likely not work for you if you just take it as-is.

image: node:18-alpine

build:
  stage: deploy
  before_script:
    - apk update && apk add openssh-client rsync # apk is the alpine package manager
    - eval $(ssh-agent -s) # start the ssh agent
    - echo "$SSH_PRIVATE_KEY" | ssh-add - # add private key
    - mkdir -p ~/.ssh
    - echo "$SSH_HOST_KEY" > ~/.ssh/known_hosts # add host key to known hosts
  script:
    - npm ci
    - npm run build
    - find dist -type f -regex '.*\.\(htm\|html\|txt\|text\|js\|css\)$' -exec gzip -f -k {} \;
    - rsync -avz --delete --progress dist/ username@your.uberspace.de:/var/www/virtual/username/html
  rules:
    - if: '$CI_PIPELINE_SOURCE == "push" && $CI_COMMIT_BRANCH == "main"'
      changes:
        - content/**/*
        - src/**/*
        - public/**/*
    - when: manual
      allow_failure: true

Here’s a quick rundown of what this particular file does:

  • The project depends on Node 18 and needs some dependencies, which is why we’re specifying the node:18-alpine Docker image. Using Alpine is slightly less taxing on your resources than using Debian, in my experience.
  • The CI job is called “build” and runs in the “deploy” stage of the project
  • Before executing the actual CI script, we pull in the openssh-client and rsync packages, add our private key (using our variable) and our host key to the respective files
  • Then we’re ready to run the actual script, which in this case installs dependencies from npm and builds the project, compresses the output and finally transfers it over to the Uberspace with rsync. If you wanted to make this script reusable, you should probably also store the target URI in a variable, but I left it in the code this time. You should obviously use your own Uberspace address here. By default, there’s only one virtual host on your Uberspace, so syncing the files into the /var/www/virtual/username/html directory is fine, but if you have multiple sites running on one Uberspace, you should take care to transfer the files into the right folder.
  • Last, but not least, we set some rules into place to restrict when the CI pipeline runs, in this case:

    • If the trigger is a push to the “main” branch of the repo and only if content in the listed directories was changed
    • or if the pipeline was manually triggered – which in this case also requires that it is allowed to fail, otherwise GitLab would wait for it to be triggered manually, which we don’t want

Add and commit your .gitlab-ci.yml file to your project’s repository and trigger the pipeline for the first time. If everything is configured correctly, the job should succeed and your first deployment should be live. But we’re not done yet!

Limiting the CI’s Access

SSH back into your Uberspace and you should find an ssh-command-log file in your home directory. Within that file, you’ll see the exact command that was executed on the server when the CI job triggered the rsync command. It’ll likely be something like rsync --server -options path/to/target.

Copy that command and replace the quoted string in command="/path/to/logssh.sh" with it. To add some further restrictions, you can also append ,no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty after the closing quote. You can learn more about what these options do here.
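Putting it all together, the final authorized_keys line could look roughly like this (the rsync part is whatever your log file shows, reproduced here only as the same placeholder used above, and the key material is again a placeholder):

```
command="rsync --server -options path/to/target",no-port-forwarding,no-X11-forwarding,no-agent-forwarding,no-pty ssh-ed25519 AAAA... gitlab-ci-deploy
```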

Wrapping Up

Save, exit and test the configuration by triggering another run of the pipeline or job. It should run perfectly and continue to do so whenever the conditions in .gitlab-ci.yml are met, so in this specific case whenever a push to main changes a file in one of the listed folders. 🎉

And that’s it! I hope this can be useful to you, and I’d be especially curious if you see any improvements that could be made, especially security-wise. So if you have any thoughts on the matter, feel free to reach out to me over on Mastodon.

As always, thank you for reading! 😊