My Networking Setup

I run machines in a few different places — a home lab, a cloud VPS, a laptop that travels with me. Keeping them all on one coherent private network, with stable names and encrypted transport, is the thing I did not want to think about every day. This page collects the pieces I landed on.

Service Connectivity

For the private network I use headscale — an open-source, self-hosted implementation of the Tailscale control server. Clients are the unmodified Tailscale clients; only the coordination server is mine.

The control plane (dotted lines in the diagram) hands out only keys and policy; all service traffic (thick lines) is direct WireGuard between peers. DERP is a relay used only when a direct path cannot be established.

Why headscale

  • Sovereignty. Account, keys, and ACLs live on a box I control. No SaaS account to expire, no per-device limits, and no upstream outage that can break re-keying across my mesh.
  • Zero-config client side. Every node — macOS, Linux, iOS, router — runs the same Tailscale client I would use anyway. No custom builds.
  • WireGuard underneath. Fast, modern, encrypted by default. I do not run a classic VPN gateway.

Typical deployment

The server is a small VPS with a public IP. A single Go binary (headscale serve) listens on localhost:8080 and a reverse proxy (Caddy, in my case) terminates TLS for https://headscale.example.com. Configuration lives in /etc/headscale/config.yaml — the defaults are close enough that I only touched a handful of fields:

  • server_url — the public URL.
  • listen_addr — loopback, since Caddy fronts it.
  • ip_prefixes — the CGNAT range for the tailnet (the default 100.64.0.0/10 is fine).
  • dns.magic_dns: true — so every node gets a stable DNS name inside the tailnet.
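Those fields in config.yaml look roughly like this (hostname from above; key names have shifted between headscale releases, so treat this as a sketch and check the example config shipped with your version):

```yaml
# /etc/headscale/config.yaml -- only the fields changed from the defaults.
server_url: https://headscale.example.com
listen_addr: 127.0.0.1:8080   # loopback only; the reverse proxy fronts it
ip_prefixes:
  - 100.64.0.0/10             # the default CGNAT range for the tailnet
dns:
  magic_dns: true             # stable names for every node
```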
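The reverse-proxy side is similarly small. A sketch of a Caddyfile for this setup — Caddy obtains and renews the certificate on its own, so TLS needs no explicit configuration:

```
https://headscale.example.com {
    reverse_proxy localhost:8080
}
```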

State is a single SQLite file; backing it up is cp.
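A plain cp is fine when the server is stopped; for a consistent copy of a live database, sqlite3's online .backup is the safer tool. A self-contained sketch (requires the sqlite3 CLI; paths are illustrative, not headscale's actual defaults):

```shell
#!/bin/sh
# Illustrative paths -- substitute your real headscale database location.
DB=/tmp/demo-headscale.sqlite
BAK=/tmp/demo-headscale.backup.sqlite

# Stand-in for the real database so the sketch is runnable on its own.
sqlite3 "$DB" 'CREATE TABLE IF NOT EXISTS nodes (id INTEGER PRIMARY KEY, name TEXT);'

# .backup takes a consistent snapshot even while another process has the file open.
sqlite3 "$DB" ".backup '$BAK'"

ls -l "$BAK"
```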

Onboarding a node

  1. Create a user on the control server: headscale users create ilya.
  2. Issue a short-lived pre-auth key: headscale preauthkeys create -u ilya --expiration 1h.
  3. On the new node: tailscale up --login-server https://headscale.example.com --authkey <key>.

That is the whole provisioning flow. The node joins the mesh, picks up its tailnet address, and is reachable by MagicDNS name from every other node.

Reaching services

With the mesh in place, service connectivity becomes boring in the best way:

  • Stable names. A Postgres instance on a home-lab host is just homelab-db:5432 from anywhere on the tailnet.
  • ACLs. Headscale supports Tailscale-style policy files. I scope which tags can reach which ports — the laptop can reach tag:dev, CI runners can reach tag:infra, nothing else.
  • Subnet routers. A single node advertises 10.0.0.0/24 from the home lab so I can reach devices that do not run Tailscale (a NAS, a switch’s management interface).
  • No inbound ports. Services never need to be reachable on the public internet. WireGuard punches out; the control server only coordinates.
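The tag scoping above looks roughly like this in a Tailscale-style policy file. Tags, users, and the port wildcard are illustrative; the exact syntax is Tailscale's policy format, of which headscale implements a subset:

```jsonc
{
  "tagOwners": {
    "tag:dev":   ["ilya"],
    "tag:infra": ["ilya"],
    "tag:ci":    ["ilya"]
  },
  "acls": [
    // The laptop (logged in as ilya) may reach dev services.
    {"action": "accept", "src": ["ilya"], "dst": ["tag:dev:*"]},
    // CI runners may reach infra hosts; everything else is denied by default.
    {"action": "accept", "src": ["tag:ci"], "dst": ["tag:infra:*"]}
  ]
}
```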

DERP

Two nodes that cannot reach each other directly (both behind strict NATs) fall back to a DERP relay. I use Tailscale’s public DERP pool for this — headscale lets me mix in my own later if I ever need guaranteed-path control, but I have not needed to.
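That choice is one stanza in the headscale config, and the escape hatch is just more entries in the same section — a sketch, with key names as I understand the current config format:

```yaml
derp:
  urls:
    # Tailscale's published map of its public DERP pool.
    - https://controlplane.tailscale.com/derpmap/default
  # To mix in a self-hosted relay later, point at a local DERP map file:
  # paths:
  #   - /etc/headscale/derp.yaml
```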