Website deployment process

How do posts get from my brain to your eyes?


This process is now out of date, since I rewrote my website.


My website is a very important project to me. I’ve written a lot of content over the years, useful both to me and other internet folks. Currently, my website is a static site, powered by Hugo. Because it’s static, the content is served extremely quickly, and it handles load spikes like a champ (not that any have happened).

Unlike platforms such as WordPress, Ghost, or something more custom, static site generators don’t really document how to deploy sites in production. With that said, here’s how mine works:

#Local changes

I do all my content writing locally, in a variety of different tools depending on my mood. Because it’s a static site, I can just write the content in markdown anywhere, then bring it into the site repository once it’s ready for publishing and do any clean up to make sure it’s displayed properly.

And again, because it’s a static site, spinning up the whole thing locally is a breeze. I use the dev server both to check the content is rendering properly and to make any non-content changes like styling.
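As a sketch of what "spinning up the whole thing locally" looks like with Hugo (assuming Hugo is installed; the flags here are standard Hugo CLI options, not necessarily my exact invocation):

```shell
# Serve the site locally with live reload; -D also renders draft posts
hugo server -D

# The dev server listens on http://localhost:1313 by default,
# rebuilding and refreshing the browser on every file change.
```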

#Push to GitLab

The source for the site lives in a git repository, which makes versioning and syncing incredibly simple. At the moment, the canonical repository lives on my GitLab server which is mirrored to GitHub, so yes you can go see all the source (and judge me all you want).

As a developer, I use git quite a lot, and know how to do anything I could realistically need to with it. It can be quite a complex tool, but it’s incredibly powerful.

#Continuous integration / Continuous delivery

Whenever code is pushed, the site is automatically run through CI/CD. For this I use GitLab CI, as it’s nicely integrated with the rest of my GitLab. During CI, the site is built and the formatting is checked to satisfy my OCD nature. This makes sure the site works perfectly before it’s deployed, so what you read is always perfect (ish).
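A pipeline like this can be sketched in a `.gitlab-ci.yml` along these lines (a minimal sketch, not my actual pipeline - the job names, images and deploy command are all assumptions):

```yaml
# Hypothetical .gitlab-ci.yml for a Hugo site
stages:
  - build
  - deploy

build:
  stage: build
  image: klakegg/hugo:latest   # any image with the hugo binary works
  script:
    - hugo --minify            # build the site into ./public
  artifacts:
    paths:
      - public

deploy:
  stage: deploy
  image: rclone/rclone:latest
  script:
    - rclone sync public site-webdav:my-site
  only:
    - master                   # only deploy from the main branch
```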

#Upload to server

Once the site is built, it’s not much use sitting in the CI artifacts - it needs deploying to the world. Static sites are by their nature stateless: all you need are the files. Given it’s me, the site itself is hosted on my own server.

There are quite literally hundreds of ways to move files between servers. A lot of people like using SSH and rsync, but I’d rather not: key management is annoying, and I normally reject all SSH traffic that isn’t over a VPN, which I’d have to change. I previously used the AWS CLI to upload to MinIO, but found MinIO far heavier than I really needed, not to mention that the performance really wasn’t great (over a minute to upload my site).

Once the site is built, I use rclone to upload it via WebDAV to nginx. WebDAV is a beautifully simple protocol with minimal overhead, rclone is a powerful sync tool, and nginx is incredibly lightweight. The same process is used for my notes and a couple of other sites. The upload takes just a few seconds, a huge improvement over the previous MinIO-based approach - I don’t know whether the slowness was down to MinIO or the AWS CLI, but I’m happy with how things work now.
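The rclone side of this can be sketched roughly as follows (the remote name, URL, user and paths are illustrative assumptions, not my real configuration):

```shell
# One-off: define a WebDAV remote pointing at the nginx container
rclone config create site-webdav webdav \
    url https://dav.example.com \
    vendor other \
    user deploy \
    pass "$WEBDAV_PASSWORD"

# Each deploy: sync the built site (Hugo outputs to ./public by default).
# --checksum skips files whose contents haven't changed.
rclone sync ./public site-webdav:my-site --checksum
```

`rclone sync` makes the destination match the source, deleting remote files that no longer exist locally - exactly what you want for a static site deploy.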

Because the files are uploaded in place, the deployment isn’t blue-green, and it’s theoretically possible for a request mid-deploy to see a mix of old and new content. But given the number of requests I get, it’s unlikely to happen. I’ve also not had any reports of it, so it’s not really worth looking at yet.
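For reference, the usual way to avoid that race is an atomic switch: upload the new build to a fresh directory, then repoint a symlink at it in one rename. A minimal Python sketch of the idea (directory layout and function name are my own illustration, not part of my deploy):

```python
import os
import tempfile
from pathlib import Path


def deploy_atomically(build_dir: str, releases_dir: str, live_link: str) -> None:
    """Copy the new build into a fresh release directory, then atomically
    repoint a symlink at it. Readers only ever see a complete old site or
    a complete new one - never a mix."""
    releases = Path(releases_dir)
    releases.mkdir(parents=True, exist_ok=True)

    # Fresh directory for this release, e.g. releases/tmpab12cd
    new_release = Path(tempfile.mkdtemp(dir=releases))

    # Stand-in for the real upload step: copy the built files across
    for src in Path(build_dir).rglob("*"):
        dest = new_release / src.relative_to(build_dir)
        if src.is_dir():
            dest.mkdir(parents=True, exist_ok=True)
        else:
            dest.parent.mkdir(parents=True, exist_ok=True)
            dest.write_bytes(src.read_bytes())

    # Create the new symlink under a temporary name, then rename it over
    # the live one - rename() is atomic on POSIX filesystems.
    tmp_link = Path(live_link + ".tmp")
    if tmp_link.is_symlink() or tmp_link.exists():
        tmp_link.unlink()
    tmp_link.symlink_to(new_release)
    os.replace(tmp_link, live_link)
```

The web server is pointed at the symlink, so the switch is a single filesystem operation and old releases can be cleaned up lazily.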

The nginx container is one of mine, designed just to be a WebDAV server. Simple, lightweight, secure and protected with basic auth.
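The core of such a WebDAV-enabled nginx config looks roughly like this (a sketch using nginx’s standard `ngx_http_dav_module` directives; the paths, port and credentials file are assumptions, not my actual config):

```nginx
server {
    listen 8080;

    # Uploads are protected with basic auth
    auth_basic "Uploads";
    auth_basic_user_file /etc/nginx/htpasswd;

    location / {
        root /srv/sites;

        # Enable the WebDAV methods rclone needs to write files
        dav_methods PUT DELETE MKCOL COPY MOVE;
        create_full_put_path on;
        dav_access user:rw group:r all:r;

        client_max_body_size 50m;
    }
}
```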


Whilst the uploads go to nginx, nginx isn’t used to serve the site - my setup is far more over-engineered than that! TLS is terminated by Traefik, my reverse proxy of choice. Unfortunately, Traefik doesn’t natively support serving files the way nginx does. So I did a thing…

Requests are served using traefik-pages, a tool I wrote to serve files from a directory, which hooks into Traefik to advertise domains and apply some powerful middleware. It’s quite a complex project that solves a rather niche problem, but I think it’s super useful - hence writing it. And the icing on the cake: it’s written in Rust! 🎉

In the past I’ve also used a custom nginx container, my own GitLab pages, and even nginx on the host.


I’m not one to sit still and keep things the way they are. I’ve not been especially happy with Hugo, or my website in general, for a while. I’m likely to completely rebuild it this year, but I don’t know exactly how, or with what. Given I now professionally build sites with Wagtail, I suspect that could play a part.

Until that time, when you see a new post deployed, and get notified about it through RSS, this is how it happens.
