My last post outlined my home server hardware; this post covers my current software setup and the rationale behind my choices. I aim to keep the software setup as straightforward as possible, to reduce the maintenance overhead.

Operating System

I chose Ubuntu Server, as the LTS version has been very stable since I initially installed it in January 2022. Version 22.04.5 LTS is currently installed, which is still supported, but I intend to upgrade to 24.04 LTS soon. I use unattended-upgrades to keep the OS and packages up to date.
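Setting that up is minimal; a quick sketch of the standard way unattended-upgrades is enabled on Ubuntu (by default it applies security updates automatically):

```sh
# Install and enable automatic updates
sudo apt install unattended-upgrades
sudo dpkg-reconfigure --priority=low unattended-upgrades

# Verify the apt timers that drive it are active
systemctl status apt-daily.timer apt-daily-upgrade.timer
```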

Online, I frequently see Proxmox recommended for home servers. Proxmox is great, but I believe it can introduce unnecessary complexity for many home setups, mine included. Maintaining multiple guest operating systems significantly increases operational overhead: passing through hardware devices, managing storage, and keeping each guest OS configured and up to date. My home server’s primary roles are file server and Docker host, which Ubuntu Server is well suited to without needing a virtualisation layer.

File System

I originally started with FreeNAS (now TrueNAS) as my NAS software, running on Proxmox. However, Ubuntu Server supports ZFS as a native kernel module; you lose the web UI, but for me the simplicity of running everything on one operating system was worth it. Things are slightly different now with TrueNAS SCALE (Linux-based instead of BSD-based), but being forced to manage everything through the CLI has been a great learning experience.
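On Ubuntu, getting ZFS is a one-package job; a quick sketch of the install and the sort of day-to-day CLI management I mean (`tank` is a placeholder pool name):

```sh
# ZFS ships as a native kernel module on Ubuntu
sudo apt install zfsutils-linux

# Day-to-day pool and dataset management
zpool status                                # health of all pools
zfs list -o name,used,avail,mountpoint      # dataset overview
zfs get compression,encryption tank         # inspect properties
```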

I also wanted Root-on-ZFS on my boot drive, to take advantage of the many benefits, particularly snapshots and native encryption. I followed this guide for Ubuntu Root-on-ZFS, as it’s not built into the Ubuntu Server installer.

For managing snapshots, I use a tool called Sanoid, which maintains hourly, daily and monthly snapshots, with different retention policies for different datasets.
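As an illustration, Sanoid’s policies live in /etc/sanoid/sanoid.conf; the dataset names and retention counts below are placeholders, not my exact values:

```ini
[tank/documents]
        use_template = important

[tank/media]
        use_template = casual

[template_important]
        hourly = 48
        daily = 30
        monthly = 12
        autosnap = yes
        autoprune = yes

[template_casual]
        hourly = 0
        daily = 7
        monthly = 1
        autosnap = yes
        autoprune = yes
```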

My two 4TB data drives are set up as a basic ZFS mirror. I try not to accumulate too much data, so this setup has been sufficient for my needs. I use ZFS native encryption on this dataset, as well as on one dataset on my boot drive. This means that if the server is ever stolen, or I sell the drives, my personal data is not accessible. A systemd service auto-mounts these encrypted datasets on boot. Automatic unlocking has its disadvantages, but it is convenient. To decrypt the drives, the server must have an internet connection and access to the cloud storage account where the key file is hosted.
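A minimal sketch of what such a unit might look like, assuming the datasets’ keylocation points at a file under /run and that rclone fetches the key; the remote name, paths and binary locations are placeholders for my actual setup:

```ini
# /etc/systemd/system/zfs-load-key.service
[Unit]
Description=Fetch ZFS key from cloud storage and unlock datasets
Wants=network-online.target
After=network-online.target zfs-import.target

[Service]
Type=oneshot
# Fetch the raw key file (remote name and path are placeholders)
ExecStart=/usr/bin/rclone copyto remote:keys/tank.key /run/tank.key
ExecStart=/sbin/zfs load-key -a
ExecStart=/sbin/zfs mount -a
# Remove the on-disk copy; the key stays loaded in the kernel
ExecStartPost=/usr/bin/shred -u /run/tank.key

[Install]
WantedBy=multi-user.target
```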

Docker with GitOps

Containerisation makes managing the applications on my home server straightforward. I use Docker, with a single Docker Compose file that defines all my services with pinned container versions. This Compose file is stored on GitHub, where Renovate is configured to update container images following my defined policies: patch releases are auto-merged, while major and minor releases get a PR. This gives me time to check whether a release introduces breaking changes before applying it, which I think is a better approach than a tool like Watchtower, which always updates to the latest version even when there are breaking changes. On the server itself, a simple cronjob pulls the Compose file and applies any updates.
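Roughly, the Renovate policy and the server-side job look like the sketches below; the repo path and exact rules are placeholders rather than my literal config. First, the relevant packageRules in renovate.json:

```json
{
  "packageRules": [
    {
      "matchDatasources": ["docker"],
      "matchUpdateTypes": ["patch"],
      "automerge": true
    }
  ]
}
```

And the cronjob on the server:

```sh
#!/bin/sh
# Pull the latest Compose file and apply it (repo path is a placeholder)
cd /opt/homeserver || exit 1
git pull --ff-only
docker compose pull --quiet
docker compose up -d --remove-orphans
```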

My most useful applications are:

Outside Access

The majority of my services don’t need to be accessed from outside my home network, and when they do, I use a VPN. However, there are a few services where it is beneficial to reach them without the VPN (e.g. Home Assistant). For this I am lucky enough to have a publicly routable (albeit dynamically allocated) IPv4 address from my ISP, so I am able to port forward to the Nginx reverse proxy. It is configured to allow external access to only a limited number of applications, only from certain geographic locations, using mutual TLS where possible, with fail2ban banning IP addresses that trigger any rules.
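As a rough sketch, one of those hardened server blocks looks something like this; the domain, certificate paths and upstream port are placeholders, and the geo restriction (nginx’s GeoIP module) is omitted for brevity:

```nginx
server {
    listen 443 ssl;
    server_name ha.example.com;

    ssl_certificate     /etc/nginx/certs/ha.crt;
    ssl_certificate_key /etc/nginx/certs/ha.key;

    # Mutual TLS: only clients presenting a cert signed by my private CA
    ssl_client_certificate /etc/nginx/certs/client-ca.crt;
    ssl_verify_client on;

    location / {
        proxy_pass http://127.0.0.1:8123;   # Home Assistant
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;

        # WebSocket support, which Home Assistant requires
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```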

Backups

ZFS makes backups super easy, as snapshots can be used for incremental backups. I use Syncoid via a cronjob to back up my data to an external drive, which I rotate with an identical drive so that there is always one copy offline (protection against ransomware and hardware faults).
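The cronjob boils down to a single Syncoid invocation; the pool and dataset names here are placeholders:

```sh
# Replicate all snapshots under tank/data to the external backup pool at 3am
0 3 * * * /usr/sbin/syncoid --recursive tank/data backup/data
```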

I also use Rclone to back up crucial data off-site.
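Something along these lines, where the remote name and paths are placeholders (rclone’s crypt backend can also encrypt the data before it leaves the server):

```sh
# Mirror the crucial datasets to cloud storage
rclone sync /tank/documents remote:backups/documents --transfers 4
```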

Monitoring

In terms of monitoring, I use Netdata, which comes preconfigured with a variety of collectors. Netdata has built-in alarms that alert me via email when anything it monitors becomes unhealthy.

To check that backup jobs are running, I use Healthchecks.io. When a backup script starts, it sends a request to Healthchecks.io, and sends another once it finishes. If Healthchecks.io doesn’t receive these requests on schedule, it emails me to let me know the job has failed.
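In practice that’s just two curl calls wrapped around the job; the ping UUID below is a placeholder:

```sh
#!/bin/sh
# Signal the start of the job
curl -fsS -m 10 --retry 3 https://hc-ping.com/<uuid>/start

# ... run the actual backup here ...

# Signal success; Healthchecks.io alerts me if this never arrives
curl -fsS -m 10 --retry 3 https://hc-ping.com/<uuid>
```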

To check that services are online, I use a combination of uptime-kuma and hyperping.

I also use the system_sensors Python script which reports key metrics to a Home Assistant dashboard.

Future

I frequently tweak this setup when I have time. A few things I have planned for the future:

  • Upgrade to Ubuntu 24.04
  • More services! There are so many great open-source, self-hosted services that I’d like to try out.
  • OIDC authentication with Pocket ID. This would mean fewer logins to manage across services.
  • Infrastructure as Code. My current setup has been configured mostly manually and with some scripts. I would like to look at replicating this setup with a tool such as Ansible.

Thanks for reading :)
