How I Configured My Beelink S12 Home Server

This post is a point-in-time snapshot of the Beelink S12 home server I use for my own applications.

Architecture Overview

At a high level, the machine has four responsibilities:

  • hosts shared infrastructure
  • routes inbound app traffic
  • persists database state
  • runs nightly database backups

Traefik handles inbound HTTP and HTTPS on one Docker network, PostgreSQL stays internal on a separate Docker network, and the app containers are attached to both.
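
As a sketch of that wiring, a Compose file matching this topology might look like the following. The network names, service names, and the homelab_pg_data volume come from elsewhere in this post; the image tags and app image name are illustrative, not the exact file on the host.

services:
  traefik:
    image: traefik:v3.0            # illustrative tag
    ports:
      - "80:80"
      - "443:443"
    networks:
      - traefik-net                # public-facing network only

  postgres:
    image: postgres:16             # illustrative tag
    volumes:
      - homelab_pg_data:/var/lib/postgresql/data
    networks:
      - db-net                     # internal-only; no published ports

  app1:
    image: registry.example.com/app1:latest   # hypothetical image name
    networks:
      - traefik-net                # reachable by Traefik
      - db-net                     # can reach PostgreSQL

networks:
  traefik-net:
  db-net:

volumes:
  homelab_pg_data: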

Database backups run nightly via a pair of systemd .timer and .service units installed on the host. Each application's database gets its own compressed dump written to a local backup path and optionally synced to a secondary location. The timer fires at 2 AM every day.
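
A minimal sketch of such a pair, assuming the dumps are taken with pg_dump through the postgres container; the unit names, container name, database list, and backup path are placeholders, not the exact units on the host:

# /etc/systemd/system/pg-backup.service (names and paths are placeholders)
[Unit]
Description=Nightly compressed PostgreSQL dumps

[Service]
Type=oneshot
# One compressed dump per application database.
# systemd escaping: $$ passes a literal $ to the shell, %% a literal %.
ExecStart=/bin/sh -c 'for db in app1 app2; do docker exec postgres pg_dump -U adm "$$db" | gzip > /var/backups/pg/"$$db"-$$(date +%%F).sql.gz; done'

# /etc/systemd/system/pg-backup.timer
[Unit]
Description=Run pg-backup daily at 2 AM

[Timer]
OnCalendar=*-*-* 02:00:00
Persistent=true

[Install]
WantedBy=timers.target

Enabling it is systemctl enable --now pg-backup.timer; Persistent=true means a 2 AM run missed while the machine was off fires at the next boot.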

Installed Tools and Services

| Tool / Service | Type | Description |
| --- | --- | --- |
| Docker Engine | Container runtime | Runs and manages all application and infrastructure containers |
| UFW | Firewall | Host-level network access control; default-deny posture |
| Traefik | Reverse proxy | Routes inbound HTTPS traffic to containers via Docker labels |
| PostgreSQL | Database | Shared relational database; internal-only on db-net |
| GlitchTip | Error monitoring | Planned but not yet deployed; DSN placeholder in app config |
| Backup systemd units | Scheduled job | .timer + .service pair; runs daily PostgreSQL dumps at 2 AM |

One detail that stands out here is the split between what is installed on the host and what is intentionally containerized. The host stays fairly lean, while shared services live inside Docker and are isolated by network boundaries.

GlitchTip is listed because the DSN configuration is already in place in the app container environment — activating it is purely a deployment step when the service is ready.
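
Since GlitchTip speaks the Sentry protocol, the placeholder is presumably a Sentry-style DSN variable; as a hypothetical example of what sits in the app environment:

# App container environment (variable name assumes the Sentry-compatible SDK convention)
SENTRY_DSN=        # empty until GlitchTip is deployed; filling it in activates reporting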

Network Topology

| Network | Type | Members | Purpose |
| --- | --- | --- | --- |
| traefik-net | Docker bridge | traefik, app containers | Inbound HTTP/HTTPS routing |
| db-net | Docker bridge | postgres, app containers | Database access |

The current documented rules are simple:

  • Traefik is attached only to traefik-net.
  • PostgreSQL is attached only to db-net.
  • App containers are dual-homed on both networks.

Traefik is the single entry point for all inbound HTTPS traffic, routing each request to the right application container via Docker labels. PostgreSQL is isolated from the public network and serves as the shared database for every application on the machine.
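
As a sketch of the label-based routing, here is what the labels section of one app's Compose service can look like; the hostname, router name, and port are placeholders:

  app1:
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.app1.rule=Host(`app1.example.com`)"
      - "traefik.http.routers.app1.entrypoints=websecure"
      - "traefik.http.routers.app1.tls=true"
      - "traefik.http.services.app1.loadbalancer.server.port=8080"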

Data and Persistence

| Item | Purpose |
| --- | --- |
| PostgreSQL data (homelab_pg_data volume) | Persistent database storage |
| Daily backup artifacts | Compressed dumps retained locally; optional copy to secondary target |
| App and service secrets | Runtime credentials, env vars, connection strings; host-managed |
| Reverse proxy config | Traefik entrypoints, TLS, provider config; host-managed |

Logging

Docker logging on the host is configured as:

{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "50m",
    "max-file": "5"
  }
}

Five rotated files of 50 MB each puts the retained log ceiling at 250 MB per container.
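
These options live in /etc/docker/daemon.json and apply only to containers created after the daemon restarts; existing containers keep the log settings they were created with:

sudo systemctl restart docker                 # pick up daemon.json changes
docker info --format '{{.LoggingDriver}}'     # should print: json-file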

Database Layout

| Role | Notes |
| --- | --- |
| Admin (adm) | Full admin access across all databases |
| app1 | Scoped credentials; owns the app1 database only |
| app2 | Scoped credentials; owns the app2 database only |

The isolation model is straightforward; a provisioning sketch follows the list:

  • One shared PostgreSQL container serves multiple apps.
  • Each app gets its own database and user.
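
Provisioning a new app in that model comes down to a few statements run as the admin role. The role name, database name, and password below are placeholders for a hypothetical third app, not the exact commands used for app1 or app2:

-- Run as adm; names and password are placeholders
CREATE ROLE app3 WITH LOGIN PASSWORD 'change-me';
CREATE DATABASE app3 OWNER app3;
-- Keep other apps' roles out of this database
REVOKE CONNECT ON DATABASE app3 FROM PUBLIC;
GRANT CONNECT ON DATABASE app3 TO app3;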

Firewall Exposure

The host runs UFW with a default-deny posture. The only inbound rules are SSH for administration and the web ports that Traefik listens on. Everything else is blocked, and PostgreSQL has no public-facing rule at all; it is reachable only within the Docker bridge network.
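
The rule set reduces to a handful of ufw commands; the list below is a reconstruction from that description, not a dump of the live config:

sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow OpenSSH        # administration
sudo ufw allow 80/tcp         # Traefik HTTP entrypoint
sudo ufw allow 443/tcp        # Traefik HTTPS entrypoint
sudo ufw enable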

This is the part I like most about the setup: the exposure model stays at exactly two entry points, SSH and the reverse proxy, with the database unreachable from outside.

Future Improvements

Two things are at the top of the list: deployment automation and remote access.

On the deployment side, releasing an update currently means building the image, pushing it, SSHing into the machine, and running a few commands manually. The obvious next step is a CI/CD pipeline per application that handles the full cycle automatically.
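
For context, the manual cycle is roughly the following; the registry, SSH host alias, and paths are placeholders:

docker build -t registry.example.com/app1:latest .
docker push registry.example.com/app1:latest
ssh homelab 'cd /srv/app1 && docker compose pull app1 && docker compose up -d app1'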

On the access side, I recently added Tailscale to the server. It gives me a private tunnel so I can reach the machine and its apps from anywhere without opening extra firewall rules or managing dynamic DNS. It has been solid enough that it might deserve its own post.
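
For reference, bringing Tailscale up on a server is two commands; the first is Tailscale's documented install one-liner:

curl -fsSL https://tailscale.com/install.sh | sh   # official install script
sudo tailscale up                                  # authenticate and join the tailnet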

The setup is stable enough that both of these are quality-of-life improvements rather than urgent fixes. A good baseline to build from.