
Self-hosting is a big hobby for me (and a big source of content for my website). Not only is it a great source of entertainment and fun, but I find it incredibly interesting, vaguely relevant to my job, and a good way to regain a little privacy.

Over the last year, not much has changed, but that's given me the time to spend with my servers, better understand how I use them, what I need out of them, and where I can improve. 2023 has involved a lot more research than actual doing, so start preparing now for the "State of the Server 2025".

#Applications

As with any of these listicle-style posts, most people are just here to see what I run, and don't care huge amounts about why I run them. So, for all you people, here they are (in alphabetical order):

Much of that list is the same from last year, but some services have come and gone, as my server is in a relatively-constant state of flux. I'm not the kind of person who changes things unnecessarily. Everything I use is because I like it, and it's working well "enough".

I'm still mixing VMs, Docker and LXC depending on the need, and it's working well for me. The only VM I run is HomeAssistant, so I can use their OS like an appliance, but in time even that may succumb to the low-overhead and easy pass-through benefits of LXC. I will admit I sometimes miss the easy upgrade and stateless nature of deploying apps with Docker, but LXC is working out great for certain uses (mostly Jellyfin at the moment).

#Hardware

The core hardware of my home server hasn't really changed since I built it. A CPU upgrade last year nicely dropped my power bill, and a more recent PSU swap dropped it again (more on that in an upcoming post). Its biggest drawback at the moment is the size - the Define R7 isn't exactly small. I'd love to some day transplant it all into a slightly smaller case which doesn't stick out on a shelf, but the R7 is a great case for noise, airflow and expansion - so it might stay a while.

#Storage

A server can't do much if it can't store data (yes yes, I realise that's a feature for people like Mullvad, but not me). The storage setup on my servers hasn't changed for quite a while, and I'm not especially happy with that.

My storage has taken the shape of 3 main tiers:

  • A 2-drive ZFS pool of SSDs for Proxmox (OS and VM roots)
  • A 2-drive ZFS pool of 4TB HDDs, for app data and most of my files (tank)
  • A 2-drive snapraid "pool" of 6TB HDDs, with bulk storage and my media library (bulk)

That mostly makes sense, and is unchanged from last year, but there are a few other "special" drives:

  • A single HDD, used for ephemeral storage (in progress downloads etc) (scratch)
  • A single ZFS-formatted SSD, for databases (speed)
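
For anyone wanting to picture the layout, here's a rough sketch of how you could recreate something similar with ZFS and snapraid - the device names, mount points and pool options are purely illustrative, not my actual disks:

```bash
# "tank": mirrored ZFS pool of 4TB HDDs for app data and most files
zpool create tank mirror /dev/disk/by-id/hdd-4tb-a /dev/disk/by-id/hdd-4tb-b

# "speed": single-SSD ZFS pool for databases
zpool create speed /dev/disk/by-id/ssd-db

# "scratch": plain ext4 on an old HDD for ephemeral downloads
mkfs.ext4 /dev/disk/by-id/hdd-old
mount /dev/disk/by-id/hdd-old /mnt/scratch

# "bulk": two 6TB drives (one data, one parity) tracked by snapraid, e.g. in snapraid.conf:
#   parity /mnt/bulk-parity/snapraid.parity
#   data   d1 /mnt/bulk-data/
```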

The "scratch" HDD has been in my servers for a while. I'm always concerned (whether reasonable or otherwise) on putting too much wear on my drives unnecessarily. For that reason, I've tended to have a sacrificial drive around to handle all those, before the content is moved to its permanent home. It's worked out well, and has given me a somewhat-reasonable use for the pile of working but old drives in my house.

The unnecessary separation of app data and files from bulk storage is far from ideal, and doesn't provide much value. Sure, I don't need my media collection protected by ZFS, but equally I don't need completely separate drives for it just to avoid that (and introduce the weird snapraid setup I mentioned before). Instead, I'd really like to merge them together into a single pool, probably onto the smaller disks for now, allowing for expansion as time goes on. The data on there is a mixture of "annoying to lose" and "irreplaceable", so it'd definitely be ZFS. During this merge, I'd also like to move the app data over to SSDs, for a bit of a performance boost for all my apps (I have SSDs, might as well use them).

#Backups

Not having my data backed up would make me physically uncomfortable. But fortunately, I do.

<warning>

If you don't, stop reading here and go back up your data. On second thoughts, read the rest of this section to get some ideas for how to do it.

</warning>

The bulk of my data is backed up with Restic to Backblaze B2. It's been absolutely rock-solid for a few years now, and I have no intention of changing either part. Backblaze reports I'm storing around 350GB (including snapshots), which costs me approximately $2.30/mo (including VAT / taxes / whatever). At that price, it's an afterthought for me, and because it's all completely encrypted at rest, in transit and before upload, it's not going anywhere. I'd like to separate out the repositories per server some day, but at the moment I'm not losing sleep over them all being in one place.
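
A restic + B2 setup really is as simple as it sounds. A minimal sketch - the bucket name, paths and retention policy here are placeholders, not my actual config:

```bash
# Credentials for B2 and the restic repository (placeholders)
export B2_ACCOUNT_ID="..."
export B2_ACCOUNT_KEY="..."
export RESTIC_REPOSITORY="b2:my-backup-bucket:server1"
export RESTIC_PASSWORD_FILE="/root/.restic-password"

# Everything is encrypted client-side before it ever leaves the server
restic backup /tank/appdata /tank/files --exclude-caches

# Expire old snapshots on a rolling schedule
restic forget --keep-daily 7 --keep-weekly 4 --keep-monthly 6 --prune
```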

Most of my Proxmox guests are also backed up locally, and included in the restic backup. The important data is already captured, but grabbing the entire OS can't hurt either. I can't do that with my Docker LXCs, as something about how they work doesn't play nicely with Proxmox's backup tooling.

Backing up databases shouldn't be done by just copying the folder structure - you need to make sure everything in memory is committed too. For that, I use my docker-db-auto-backup tool.
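
The principle is simple enough to sketch by hand, even if you don't use my tool: ask the engine for a consistent dump rather than copying files out from underneath it. The container and database names here are hypothetical:

```bash
# PostgreSQL: pg_dump gives a consistent snapshot, even while the app is running
docker exec postgres pg_dump -U app appdb | gzip > /tank/backups/appdb.sql.gz

# MariaDB/MySQL equivalent
docker exec mariadb mysqldump --single-transaction -u root -p"$MYSQL_ROOT_PASSWORD" appdb \
  | gzip > /tank/backups/mysql-appdb.sql.gz
```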

#speed

For years, a part of my job has involved being shown slow parts of an application, and being told to make it quicker / better - and I like to think I'm quite good at it. In almost all cases, the performance bottleneck in an application is its database (or I guess the CPU for apps like Immich and Jellyfin), and the same is true for the apps we self-host, too.

Wherever I can, I prefer to use PostgreSQL for applications, as it's the engine I'm most comfortable with managing, and it's pretty damn fast. SQLite can be faster for many workloads, especially with some recent developments, but I like keeping the database separate from the application, which also makes backups much easier.

<warning>

Copying an SQLite file is not a backup!

</warning>

Previously, my databases were on my tank pool, in subvolumes specifically tuned for the engines (mostly the record size). This worked great, and allowed them to grow easily whilst being backed by redundant storage. However, this pool is all hard drives, and if there's one thing database engines love, it's fast storage. So I moved the databases off tank and onto a single-SSD ZFS pool, speed. I can't feel a huge amount of difference day-to-day, but there almost certainly is one. It's not the fastest SSD in the world, but it's much faster than an HDD, especially for the random reads and writes that databases are known for.
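
Tuning ZFS for a database mostly comes down to matching the record size to the engine's page size. A sketch of the kind of datasets I mean - the values below are the commonly-recommended ones, not necessarily exactly what I run:

```bash
# PostgreSQL writes 8K pages, so an 8K recordsize avoids read/write amplification
zfs create -o recordsize=8K -o compression=lz4 speed/postgres

# MySQL/InnoDB uses 16K pages
zfs create -o recordsize=16K -o compression=lz4 speed/mysql
```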

It being on a single SSD may make some people uncomfortable, as I've gone from redundant storage to a single point of failure - and you're absolutely right. However, I've been backing up my databases nightly to plain .sql dumps for a while now, those backups are still kept on tank, and that's redundant enough for me.

#Hypervisor

At the heart of my server is still Proxmox. It's fine, but could definitely be better. I'd rather something a little less "magic hosting appliance", so I can dig under the hood a little further. LXD (or, more likely Incus) is likely where I'll go, but a complete migration like that is going to result in a lot of downtime, and take up a lot of time, so it's a can I've kicked down the entire road of 2023, and may do the same for 2024.

Proxmox does everything I need it to, and right now that's good enough for me. If you're just starting out with self-hosting, it's still where I'd recommend you start. Even if you think you'll only need a single VM, at least having the option of another is a great way to start playing around with stuff without needing to buy new hardware.

#Notifications

For the last 6 months or so, I've had an internal door camera powered by an ESP32 camera, and it's worked great for being nosy on my home when I'm not there. When the Zigbee contact sensor on my door opens, the camera takes a picture and, so long as I'm not at home, sends me a notification, which works great. What doesn't work so great is the HomeAssistant app's notification deliverability.

In just the last few weeks, I've been playing around with ntfy, a tiny application designed solely for notifications. It's fast, easy to integrate with, and seems to take deliverability far more seriously. I considered Gotify, as I've used it before, but the community and support around ntfy is much stronger, and it's easy to give other people access to certain topics.
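
Part of what makes ntfy so easy to integrate with is that publishing a notification is just an HTTP request. A minimal example against a self-hosted instance - the hostname and topic are made up:

```bash
# Publish a message to a topic; anything subscribed to "doorbell" receives it
curl \
  -H "Title: Front door opened" \
  -H "Priority: high" \
  -d "Someone's at the door" \
  https://ntfy.example.com/doorbell
```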

I'm planning a separate post about integrating ntfy and HomeAssistant, as whilst it's possible, some of the documentation online skips some useful features on both sides.

Desktop notifications for ntfy are possible using WebPush, but you have to subscribe each device to each topic separately. I wish there was a way to auto-subscribe certain apps and sessions to certain topics. I really don't want to write a desktop client...

#VPN

When people hear "VPN", they usually think of a service to disguise your location and improve your privacy, much like all those NordVPN adverts keep telling us. Whilst that's true, a VPN is more about protecting yourself from a malicious public hotspot than it is about magically stopping your information being exposed in a data breach. For that, I'm a more-than-happy Mullvad user. They're reasonably priced, have decent client support, and are doing some really cool things to ensure they know nothing about what I'm doing (something which has been sort-of proved).

#Remote access

Whilst I currently work from home, I do every once in a while venture into the outside world, and I often need access to my services when I do.

For regular web access, I've been running my Wireguard Gateway setup since I originally wrote the article, and I absolutely love it. Sure, it's complex, but it has so many benefits and almost no drawbacks: firewalling and rate limiting at the edge, privacy around my home IP, and full end-to-end encryption so no one else can see (I'm looking at you, Cloudflare).

Anything more than that has been using Nebula, and for the most part I've been happy with it. It's a little fiddly to get set up (especially compared to WireGuard), but now I have a mesh VPN network between all my servers, which communicate directly rather than through a central server (keeping traffic lightning fast over my LAN if it can), all encrypted and proven at scale. None of my servers expose SSH, instead relying on Nebula to get me past the firewall first before I can connect.

However, Nebula isn't quite perfect, and it's resulted in months of research and testing to find an approach I like. I need services accessible only over a VPN, with TLS, using the shortest path, and in a few cases carrying more than just HTTP traffic. I've tried a lot of different approaches: native WireGuard and some DNS magic, static routes with IPv6, overriding DNS, and more. None quite covered everything (mostly being limited by also needing to work on Android).

To pre-empt the people yelling at their screens, yes, Tailscale does actually work very nicely to solve my issues, but regardless of their "lock" feature, I just don't want to trust the security of my network to someone else's infrastructure. Over the last few months, I've taken a few different looks into Headscale, an open-source re-implementation of Tailscale's control plane, but which still uses all the Tailscale client apps (much like Vaultwarden does for Bitwarden). It's not as polished as Tailscale, and doesn't support all the features, but it's completely self-hosted.

I'm still looking into this, and there's a lot more research and testing to do, but I suspect over the next few months, I'll be decommissioning my Nebula lighthouse and moving over to Headscale.

#Reverse proxy

I've been a big fan and fairly early adopter of Traefik (however you want to pronounce it). However, whilst it's incredibly simple to get something basic working, adding custom behaviour to Traefik can be quite a pain. So, earlier this year, I decided to replace it with ol' faithful nginx.

A few months ago, I swapped Traefik out for nginx on the server running this website, and I don't know if it's just me, but the site does feel a little more snappy. The configuration is far more verbose, but with Ansible I can at least share the common bits. Adding a simple rewrite went from 3 lines of YAML to a single nginx statement.

The applications themselves still run in Docker, but nginx runs on the host (so running non-docker applications is simpler). I point nginx to Docker's internal DNS (which I proxy using CoreDNS), and nginx references the containers by their hostname, and it all works perfectly. I've gone over that intentionally fast, but I'm working on a longer post about this change, so stay tuned.
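
To give a flavour of what that looks like before the full post, here's a minimal sketch of the nginx side. The CoreDNS listen address, container name and port are assumptions for illustration:

```bash
# Hypothetical site config; CoreDNS proxies Docker's internal DNS (127.0.0.11) onto the host
cat > /etc/nginx/conf.d/example-app.conf <<'EOF'
server {
    listen 80;
    server_name app.example.com;

    # Resolve container names via CoreDNS rather than /etc/hosts
    resolver 127.0.0.1:5353 valid=10s;

    location / {
        # Using a variable forces nginx to re-resolve the name as containers restart
        set $upstream http://example-app:8000;
        proxy_pass $upstream;
        proxy_set_header Host $host;
    }
}
EOF
```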

#DNS

At home, I run my own DNS server. Not because I want a local cache of queries, or even for network-wide ad blocking, but to handle a single line of dnsmasq config which forces traffic to my home server over my local network.
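
That single line is the classic dnsmasq trick of overriding an entire domain. The domain and IP below are placeholders for my real ones:

```bash
# Answer anything under home.example.com with the server's LAN address,
# instead of the public IP the rest of the internet sees
echo 'address=/home.example.com/192.168.1.10' >> /etc/dnsmasq.d/02-local.conf
```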

Because it's what most people seem to use, I deployed pi-hole. Not only does it have all the nice ad-blocking functionality built-in, and pretty graphs for DNS traffic, but it's also just dnsmasq in a trenchcoat, so configuring it for my DNS alias was incredibly simple (it's not just a custom record).

My SO isn't the happiest with pi-hole, mostly because she's the kind of person who clicks on Google adverts in search and uses TikTok, but I'm sure glad the ad blocking is there because of that.

Pi-hole, by name, runs great on a Pi, and I've had it running on a Pi 1 Model A. Most of the time it's fine (so long as you don't touch the dashboard too much), but occasionally I get incredible latency spikes in DNS requests, slowing all my devices to a halt. I can't quite tell whether it's the Pi being overworked, but it wouldn't surprise me.

For Christmas, my SO bought me a Dell Optiplex 3040 (powered by an Intel 6100T) - a compact mini PC which runs circles round the Pi and draws just 7.5W during normal use (DNS isn't an intensive role, after all). I soon intend to swap the Pi out for the Dell to increase performance, switching to AdGuardHome in the process, which I hope will completely remove my DNS latency issues.

<ipv6>

In the last few days, I took out the Pi, and I'm just running stock DNS for the time being. Sure, my privacy is gone, but the IPv6 issues which have plagued my network for months have also gone with it. The Pi was also my DHCP server (doing IPv6 SLAAC), so perhaps this swap will fix that too - I'll have to see.

</ipv6>

#nftables

For my internet-facing servers, I prefer using a firewall defined by my hosting provider (Vultr, Linode, etc.). Whatever underlying implementation they're using is probably fairly sensible, I can configure it easily with Terraform, and importantly it keeps any weird port hammering from even touching my server (useful against the occasional SSH brute-force).

However, sometimes there's no way around adding a firewall, or some other networking configuration, directly on a server. In this case, most people would turn to iptables. Personally, I'm no networking expert, and I hate the user experience of iptables. Over the last year, I've found myself needing a little more from a server's networking configuration, and after some recommendations from people who do know networking, I discovered nftables. nftables is the new firewall implementation in the Linux kernel, but has a much nicer interface (in fact, iptables often uses nftables constructs in the kernel).

Previously, there was a single iptables statement in my entire infrastructure, and that's too many. Now, I've replaced it with an nftables config, which includes a simple firewall and the NAT rules I need. Not only does it make more sense, but it's far easier to configure, test, and read years from now when I've forgotten how I set it up.
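
As a rough idea of what replaced it, an nftables config along these lines covers a basic stateful firewall plus a NAT rule. The interface names and ports are illustrative, not my actual rules:

```bash
cat > /etc/nftables.conf <<'EOF'
#!/usr/sbin/nft -f
flush ruleset

table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept
        iif "lo" accept
        tcp dport { 22, 80, 443 } accept
        ip protocol icmp accept
    }
}

table ip nat {
    chain postrouting {
        type nat hook postrouting priority 100;
        # Masquerade traffic leaving via the VPN interface
        oifname "wg0" masquerade
    }
}
EOF
nft -f /etc/nftables.conf
```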

#Git(ea|lab)

A lot of what I do runs through git. My infrastructure, dotfiles, website, random test projects, all run through git at some point. Sure, I could store these repos on GitHub, but some repos contain sensitive content (for example, my CV). For that reason, I've run my own git server for many years now, hosting both the primary versions of many of my public repositories, but also private projects I don't want public.

It's been a sort of sad running joke for many years now, but I've found myself switching between GitLab and Gitea quite a few times. Both have their merits, and I guess sometimes those merits draw me more than the drawbacks push me away. GitLab is incredibly powerful and feature-rich, but hides many basic features behind a paywall, is far larger and more complex to manage, and isn't especially fast. Gitea, on the other hand, is small, lightweight and completely free, but doesn't support quite as much, especially when it comes to CI. In 2023, I moved back to Gitea, after they brought out Actions. Gitea Actions aren't quite as featureful as GitHub Actions (even if they're designed to be API-compatible), and are a little more fiddly to set up and self-host, but it's nice having lightweight CI with a lightweight git server.

No matter how many times I switch, I still find myself with a soft spot for cgit. cgit takes lightweight to a new level, but is entirely read-only and has minimal features. There's just something about the minimal UI and information-dense interface which appeals to me. To satisfy myself, the homepage for my Gitea server is the explore page, listing all repositories much like cgit does.

#RSS

For as long as I can remember, I've been a TT-RSS user. RSS has been the way I keep up with what companies, people and the general tech community have been doing for years, without anyone else getting in the way or promoting things I don't care about.

However, TT-RSS as an application is starting to show its age. I'm a sucker for the minimal and productive interface, but it's always felt a little clunky and unpolished.

Earlier this year, I tried out Miniflux, on the recommendation of a few people. Miniflux is much smaller and more lightweight than TT-RSS, and as a result feels a lot faster. If you just want a single combined feed of everything you subscribe to (like a simple aggregator), it's great. Unfortunately, it takes a few too many clicks to view the items from just a single feed, and I need a little more than that.

More recently, I tried out FreshRSS. FreshRSS is what I started out using a very long time ago, but I switched to TT-RSS for the nicer interface. TT-RSS's default interface has a large list of items, with the preview opening on the side. FreshRSS, on the other hand, has a single list which expands with each item you want to read - which just doesn't meld with my brain. In the many years since I used it, there's finally an extension to add that 3rd pane. FreshRSS is a much more modern codebase (for a PHP app), has a single-container LSIO deployment, and is maintained by a great community (as opposed to TT-RSS, whose core maintainer is known to be obtuse).

FreshRSS is incredibly tempting, but I need a little more time playing around with it before I consider switching. The number of unique benefits of TT-RSS though is slowly decreasing with time.

#Email

Oh how I wish email wasn't as important as it is. Everything runs through email: Online shopping receipts, service notifications, important notifications, and more and more even physical shop receipts.

<tip>

If you just ask for a paper receipt, most places will give you one - no need to give out an email address.

</tip>

Service-wise, I'm still a fairly happy Fastmail user. They're a great company with a great product at a reasonable price. However, in the last few weeks, I've started to get concerned. Australia (where Fastmail are based) is drafting a new "e-safety" standard, which seems to be another privacy-invading bill disguised behind terrorism and CSAM prevention (both incredibly important, don't get me wrong). I've not followed it closely, but if the Proton Mail CEO is prepared to fight it in court, it can't be great. I'm not planning on moving soon, but it's on my radar.

Self-hosting email is almost universally a bad idea. The point of email is being able to send and receive messages, both of which are hindered if "Big Email" don't like your IP address (which they don't if you're on a residential address, or Linode apparently). Services like MailRoute make it easier, by dealing with IP reputation for you, but it's still not perfect.

<help>

If someone has good experiences with a privacy-respecting email platform, let me know!

</help>

#Monitoring

Much like how data that isn't backed up doesn't exist, a service that isn't monitored doesn't exist either. Over the last year, I've changed monitoring tools, and then swapped back.

For a short while, I switched to Grafana Cloud. Given all my local monitoring uses Grafana and Prometheus, it makes sense to configure everything in the same way with the same stack. Grafana Cloud has a great free tier, and includes its Synthetic Monitoring service, which pings endpoints and yells when they don't respond correctly (like every other kind of monitoring service), from multiple locations around the world. As a bonus, it's automatically connected to Grafana for fancy graphs, and all managed by someone else's infrastructure team - not me.

The downside to Grafana Cloud is that it's not the smallest or simplest application in the world, as you'd expect. It's possible to Terraform it all, but I've had some experiences in the past of completely breaking the install, requiring support to intervene. Similarly, the notifications of an outage, which come from Alertmanager, aren't the clearest (and customizing them is where the breakages occurred for me). Whilst Grafana Cloud supports multiple check locations, it queries them all at each check time rather than round-robining, and won't check any less frequently than every 2 minutes, which means I was hitting my sites far more often than I really cared for.

Instead, I'm back with UptimeRobot, ol' faithful. Their free tier is more than usable, I trust their platform, and the emails and notifications are clear and robust. Historically, Terraforming UptimeRobot has been incredibly difficult due to their strict rate limiting, but even that's been improved since my last attempt. For now, it's configured by hand - not that there's much to configure.

#Telegraf

When I first set up Proxmox, I used Telegraf to monitor various aspects of the host, and expose them to Grafana via InfluxDB for monitoring. The draw to Telegraf for me was that it could monitor almost anything in a single application and configuration file. I eventually migrated to Prometheus, which Telegraf can also talk to natively, but it always felt a little wrong.

This year, after getting yet more exposure to Prometheus-native ways of doing things, I elected to swap Telegraf for prometheus-node-exporter. Node-exporter supports all the metrics I was actually using (alongside the Proxmox exporter), and required no configuration.
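
The whole swap amounted to installing the exporter and pointing Prometheus at it. A sketch of what that looks like - the hostnames are made up:

```bash
# Debian-ish install; node-exporter listens on :9100 and needs no configuration
apt install prometheus-node-exporter

# Then add a job like this under scrape_configs in prometheus.yml
#   - job_name: node
#     static_configs:
#       - targets: ["proxmox.lan:9100", "dns.lan:9100"]
```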

#Terraform

I like to automate as much as possible with Terraform. Infrastructure as code is one thing, but configuring everything with Terraform makes for much simpler versioning of complete applications and consistent repetition.

Terraform relies on a state file to track the currently deployed configuration. To support multiple editors, the state file has to be stored somewhere. My core infrastructure state lives in AWS S3. I know that's not self-hosted, but it means that if everything of mine breaks terribly, I still have the state to recover from (or use as a reference). However, for smaller projects, I don't need S3 - that state I can self-host. Terraform doesn't support many storage backends, but the S3 backend supports anything S3-compatible, so I elected to deploy Minio. I've deployed Minio in the past and it's worked fine, but I've heard many horror stories of deployments completely failing after a seemingly-simple upgrade. To mitigate that, I just cross my fingers and take good backups!
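
For reference, pointing the S3 backend at a Minio instance ends up looking something like this. The bucket, endpoint and key are placeholders, and the exact argument names have shifted between Terraform versions, so treat it as a sketch:

```bash
cat > backend.tf <<'EOF'
terraform {
  backend "s3" {
    bucket                      = "terraform-state"
    key                         = "homelab/terraform.tfstate"
    region                      = "us-east-1"             # ignored by Minio, but required
    endpoint                    = "https://minio.example.com"
    force_path_style            = true
    skip_credentials_validation = true
    skip_metadata_api_check     = true
    skip_region_validation      = true
  }
}
EOF
terraform init
```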

<tangent>

Whatever license issues Terraform has faced in 2023 aside, I'm still going to use Terraform until it becomes problematic.

</tangent>

#Dokku

Last year, I discussed how I didn't need Dokku in my infrastructure, and was going to remove it - which I did. However, Dokku is a great platform for easily deploying services I wrote myself (rather than a replacement for docker-compose), so it lives on, on my home server. I don't need it enough to justify a dedicated server, so this saves a few pennies, and I still have the ability to host some tiny tools and services should I want / need to.

Dokku has been around for a while, and it's pretty stable and robust, but it's also showing its age. Dokku takes over the entire server it's running on, and makes running other tools alongside it fairly risky. In the last few releases, there's been a footnote at the end of the release notes about changes to development momentum, which concerns me a little. Dokku is still the best and easiest way I've found to deploy a simple application to a server without the faff of a full Ansible setup - just git push and you're done. Dokku has a native blue-green deployment system, simple deployment from git, and will even build the project for you should you wish.

I'd love to find a tool out there better than Dokku (that isn't a bunch of Kubernetes operators tied together with string and prayer), but I might be asking for too much. The main developer of Dokku has taken a job at Porter, which sits on top of Kubernetes and achieves something similar to Dokku, but not everything. It's a shame gitkube is fairly unmaintained at this point.

The closest I've come up with is accepting I'll need a docker-compose file, and using a webhook to automatically pull and restart the container, but that's neither blue-green (without Swarm) nor super simple to set up. Unless you are reading this and have some other ideas, Dokku may be here to stay.
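
The webhook half of that compromise is simple enough. A sketch of the handler script it would run - the paths and project names are hypothetical:

```bash
#!/usr/bin/env bash
# Called by the webhook listener whenever a new image has been pushed
set -euo pipefail

cd /opt/stacks/example-app

# Pull the new image and recreate only the containers that changed
docker compose pull
docker compose up -d --remove-orphans
```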

#Video

I watch a lot of TV, whether on my lunch break, or winding down in the evening. I'm the kind of person that can happily rewatch shows multiple times without getting bored, so I don't need much to keep happy.

I've been a happy Jellyfin user for a few years now, ever since their formation after the drama with Emby. Jellyfin's clients aren't perfect, and are missing some of the polish and design that comes with the money Plex users pay, but they're more than enough for me, and I love that it all stays on my network. The clients keep getting better and better with each release, and the platform has been rock-solid since day 1. If you're a developer and are confident with mobile app dev or C#, go give them some love.

It's been enjoyable (in a sad way) watching the issues with Plex unfold over the last few years (especially in the last month). Plex was a great service which made self-hosted media playback effortless, without needing complex networking. But in the last few years, their attitude has shifted in a direction most people don't seem to like - and nor should they.

#YouTube

Alongside Jellyfin, I also watch a fair bit of YouTube, usually for 2 main reasons: entertainment and education. Most of my YouTube viewing has been done through RSS, to avoid "the algorithm", and so YouTube can get out of the way and just show me the content. TT-RSS is a great reader, but it's just not the same as an actual subscription manager, especially on mobile. The vast, vast majority of my viewing comes from channels I'm already subscribed to anyway, so I've not missed the recommendations too much.

When it comes to private, self-hosted YouTube frontends, there are 2 main options: Invidious and Piped. I've tried both out with varying success, but both are a long way from perfect. Invidious is fiddly to deploy (no database migrations per se, memory leak issues, and some weird database connection requirements), whereas Piped is so new it's missing critical features like watch history.

Invidious is probably where I'll end up, as it has the feature I need today, and I'm fairly comfortable managing a weird-looking application, but I'm keeping a close eye on both projects. For now, most of my watching has happened on my TV or phone from bed, and having RSS as a subscription backup is working out just fine.

#Todo

I tend to find myself distracted by new things, and sidetracked into odd tangents, meaning I forget to do a lot of important things. To do lists are the perfect tool for that, to remind me of the important things I need to do, when I need to do them. However, that only really works if you actually check them and update them.

In the past, I've used Todoist. And by "used", I mean "have an account but never check or add to", which isn't very helpful. As a hosted platform, Todoist is pretty great, even the free tier, and it's probably where I'd recommend most people go, but I for one prefer my tools self-hosted.

This year, I discovered Vikunja, a self-hosted todo manager with a great interface, tonnes of features, and CalDAV integration. Vikunja has helped bring my todo lists in-house and on my own servers. Vikunja doesn't have a mobile app (yet), so I've been using tasks.org and DAVx⁵. Tasks themselves have comments, descriptions, and can even be displayed in a Kanban format if that's more your thing. The CalDAV integration doesn't work well with Thunderbird at the moment, but I'm sure that'll come with time.

The tools are there, I just need to find a way to stop relying on my brain and let a computer remember things for me (they're generally much better at it, anyway).

#Auth

Running a lot of self-hosted apps, I have lots of accounts on lots of services. To attempt to consolidate, and increase security, many applications support using an external authentication system, through protocols like OIDC or OAuth2.

Whilst services like Gitea and Nextcloud both support acting as an external auth system, there are tools out there which specialize in it, like Authentik and Authelia. Authentik is much heavier, but far more feature-rich than Authelia. I went with Authentik, as it supports user management through the browser (which helps with the SO adoption factor) and more integration protocols than Authelia, and I have plenty of spare resources on my server.

<warning>

Both Authelia and Authentik support acting as a "forward auth proxy", where they intercept all requests and require authentication before any traffic hits the application itself. For some apps, this is fine, but anything with an API or mobile app will almost certainly break.

</warning>
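
For context, the forward-auth pattern itself is just the reverse proxy asking the auth server about every request before passing it on. A rough nginx-flavoured sketch of the mechanism - the /auth/validate path, hostnames and ports are stand-ins, not the real endpoints (Authelia and Authentik each document their own):

```bash
cat > /etc/nginx/snippets/forward-auth.conf <<'EOF'
# Ask the auth provider about every request before proxying it on
location / {
    auth_request /auth/validate;
    # On 401, send the browser to the login page instead
    error_page 401 = @login;
    proxy_pass http://example-app:8000;
}

location = /auth/validate {
    internal;
    proxy_pass http://auth-server:9000/auth/validate;   # stand-in path
    proxy_set_header X-Original-URL $scheme://$host$request_uri;
}

# API clients and mobile apps can't follow this redirect - hence the warning above
location @login {
    return 302 https://auth.example.com/?rd=$scheme://$host$request_uri;
}
EOF
```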

When I got started with Authentik, back in late 2022 even, I found it a nightmare to configure the flows I wanted, as the interface isn't especially approachable, even if it is powerful. The spark finally came when I found a video series which went through the basic configuration steps, and everything has clicked from there.

I now have Authentik running and connected to a couple of services, but I've not migrated to it completely. Some services integrate fine, like Nextcloud. Others aren't quite perfect, like Gitea, which doesn't support disabling the built-in auth. And then there are others which have compromises, like TT-RSS, where it completely breaks any other API integrations (like the mobile app), as they all assume a plain username/password (or API key).

Managing multiple accounts isn't difficult when you have a password manager, but Authentik may unlock workflows I've not considered. I'm particularly interested to try out the aforementioned "forward auth proxy" mode, to expose a few high-security tools or smaller dashboards to the internet.

#Future

2023 has been a wild ride for me for many reasons. At the time of writing (New Year's Eve), I've been on holiday for almost a week and a half, and I've loved having the time and energy to get back into the groove of things and mess around with my servers. I touched briefly on this in my 100 posts reflection, but that's been a struggle for me recently, so I've loved having my mojo back. My drafts folder has 6 in-progress articles, all of which have been rabbit holes of time I've dived head-first into, and all of which will be of interest to those reading this.

I've leaked a few things already in this post, but to consolidate what I've been working on, without giving too much away:

  • Exposing Docker's DNS outside the docker network
  • Replacing Traefik with nginx
  • Is the Pi dead for self-hosting in 2023?
  • Do you really need a HBA?
  • What's the source of that ticking whirring noise?

I've teased a few of these in the Self Hosted discord already, but to find out more, you'll have to wait and see!
