Headscale, Tailscale and the Boogata boogata (VPN connection via “magic”)

September 4, 2025

This is one I’ve been meaning to solve for about two years, ever since I first stood up a server for myself. I had an SSH tunnel, but this is, in theory, a little more flexible and usable.

A number of individuals have advertised Tailscale and its virtues. Tailscale, as I understand it, is a client (built on top of WireGuard) that enables the creation of a mesh network. Which is a fancy way of saying every machine on the network can talk directly to any other machine on the network through an encrypted connection.

The really neat thing about Tailscale is that it runs with a control server that manages keys (distributing them, or making them available, to all members of the network) and helps with network traversal. This is one of Tailscale’s great advantages: it appears to be able to traverse the gnarliest network setups (this is the boogata boogata bit, which I’m sorry I don’t understand very well).

I have had “free” services choose to monetize themselves (and if I had the money I WOULD pay… but I don’t). So while the Tailscale folks are being really awesome and making this totally usable for homelab ppl like me, I’m having trouble trusting someone else with the keys to my stuff, and since something exists that will help, I’m running my own thing. Headscale is an open source implementation of the Tailscale control server. It’s not as fancy as Tailscale, and there is NO UI unless you want to run an additional container with a web UI (some of which are better than others), but it does appear to work.

I have stood up a headscale container and a tailscale container, the latter of which has subsequently been configured to be a gateway into my network. I’m running a tailscale client on a laptop that connects to the same “tailnet” as the tailscale container, and with the tailscale container designated as an “exit node” AND the routes offered by that exit node approved, I seem to have a working setup.

The headscale docker compose file looks like this:

services:
  headscale:
    image: headscale/headscale
    container_name: headscale
    restart: unless-stopped
    environment:
      - TZ=America/Vancouver
    volumes:
      - ./config:/etc/headscale
      - ./data:/var/lib/headscale
    entrypoint: headscale serve
    ports:
      - 8080:8080
      - 9090:9090
    networks:
      - pub
    labels:
      traefik.enable: "true"
      traefik.docker.network: "pub"
      # Configure service and router
      traefik.http.services.headscale.loadbalancer.server.port: 8080
      traefik.http.services.headscale.loadbalancer.server.scheme: http
      traefik.http.routers.headscale.rule: Host(`hs.mydomain.com`)
      traefik.http.routers.headscale.entrypoints: websecure
      traefik.http.routers.headscale.tls.certresolver: myresolver
      traefik.http.routers.headscale.service: headscale

      # UDP ports for DERP, STUN, etc
      traefik.udp.services.headscale-udp-41641.loadbalancer.server.port: 41641
      traefik.udp.services.headscale-udp-3478.loadbalancer.server.port: 3478

networks:
  pub:
    external: true

Since it’s all encrypted, the system needs a TLS certificate. I’m already using Traefik for other things, and since I already have it hooked up and working with Let’s Encrypt, I figured I’d use that instead of setting up another ACME client in headscale. Hence the Traefik labels as part of the setup here, and the ACME stuff in the following config file remaining commented out.
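For reference, those labels assume a Traefik instance that already defines a websecure entrypoint and a certificate resolver named myresolver. A minimal sketch of the matching static config, in case yours differs (the email and challenge type here are placeholders):

entryPoints:
  websecure:
    address: ":443"

certificatesResolvers:
  myresolver:
    acme:
      email: me@mydomain.com          # placeholder
      storage: /letsencrypt/acme.json
      tlsChallenge: {}                # or httpChallenge / dnsChallenge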

Headscale needs a config file to work (./config/config.yml on the host, which the compose file mounts into /etc/headscale in the container); mine looks like this:

# The url clients will connect to.
server_url: https://hs.mydomain.com

# Address to listen to / bind to on the server
listen_addr: 0.0.0.0:8080

# Address to listen to /metrics, you may want to keep this endpoint private to your internal network
metrics_listen_addr: 0.0.0.0:9090

# Address to listen for gRPC.
grpc_listen_addr: 0.0.0.0:50443

# Allow the gRPC admin interface to run in INSECURE mode.
grpc_allow_insecure: false

noise:
  # The Noise private key is used to encrypt the traffic
  private_key_path: /var/lib/headscale/noise_private.key

# List of IP prefixes to allocate tailaddresses from.
# Each prefix consists of either an IPv4 or IPv6 address, and the associated prefix length, delimited by a slash.
prefixes:
  v6: fd7a:115c:a1e0::/48
  v4: 100.64.0.0/10
  # Strategy used for allocation of IPs to nodes, available options:
  # - sequential (default): assigns the next free IP from the previous given IP.
  # - random: assigns the next free IP from a pseudo-random IP generator (crypto/rand).
  allocation: sequential
# DERP is a relay system that Tailscale uses when a direct connection cannot be established.
derp:
  server:
    enabled: false
    # Region ID to use for the embedded DERP server.
    region_id: 999
    region_code: "headscale"
    region_name: "Headscale Embedded DERP"
    # Listens over UDP at the configured address for STUN connections - to help with NAT traversal.
    stun_listen_addr: "0.0.0.0:3478"
    # Private key used to encrypt the traffic between headscale DERP and Tailscale clients.
    private_key_path: /var/lib/headscale/derp_server_private.key
    # This flag can be used, so the DERP map entry for the embedded DERP server is not written automatically;
    # it enables the creation of your very own DERP map entry using a locally available file with the parameter derp.paths.
    automatically_add_embedded_derp_region: true
    # For better connection stability (especially when using an Exit-Node and DNS is not working), it is
    # possible to optionally add the public IPv4 and IPv6 address to the DERP map:
    ipv4: 172.103.244.77
#    ipv6: 2001:db8::1
  # List of externally available DERP maps encoded in JSON
  urls:
    - https://controlplane.tailscale.com/derpmap/default
  # Locally available DERP map files encoded in YAML
  paths: []
  # If enabled, a worker will be set up to periodically refresh the given sources and update the derpmap
  auto_update_enabled: true
  # How often should we check for DERP updates?
  update_frequency: 24h
# Disables the automatic check for headscale updates on startup
disable_check_updates: false
# Time before an inactive ephemeral node is deleted?
ephemeral_node_inactivity_timeout: 30m
database:
  type: sqlite
  # Enable debug mode. This setting requires the log.level to be set to "debug" or "trace".
  debug: false
  gorm:
    prepare_stmt: true
    parameterized_queries: true
    skip_err_record_not_found: true
    slow_threshold: 1000
  sqlite:
    path: /var/lib/headscale/db.sqlite
    write_ahead_log: true

  ### TLS configuration Let's encrypt / ACME
#acme_url: https://acme-v02.api.letsencrypt.org/directory
#acme_email: ""
#tls_letsencrypt_hostname: ""
# Path to store certificates and metadata needed by letsencrypt
#tls_letsencrypt_cache_dir: /var/lib/headscale/cache

# Type of ACME challenge to use, currently supported types: HTTP-01 or TLS-ALPN-01
# tls_letsencrypt_challenge_type: HTTP-01
# When HTTP-01 challenge is chosen, letsencrypt must set up a verification endpoint, and it will be listening on:
# tls_letsencrypt_listen: ":http" # :http = port 80
tls_cert_path: ""
tls_key_path: ""
log:
  format: text # text or json
  level: info

# ACL policy, mode options are database or file, file requires path to be useful
policy:
  mode: file
  path: ""

dns:
  magic_dns: true
  # Defines the base domain to create the hostnames for MagicDNS.
  # must be an FQDN or a .local domain; this becomes the suffix, i.e. hostname.user.basedomain
  base_domain: head.local
  nameservers:
    global:
      - 1.1.1.1
      - 1.0.0.1
      - 2606:4700:4700::1111
      - 2606:4700:4700::1001
    split: {}
  search_domains: []
  # Extra DNS records
  extra_records: []

unix_socket: /var/run/headscale/headscale.sock
unix_socket_permission: "0770"
logtail:
  # Enable logtail for this headscale's clients.
  enabled: false
# Enabling this option makes devices prefer a random port for WireGuard traffic over the default static port 41641.
randomize_client_port: false

This is a copy and paste of the example offered by Headscale on GitHub, with specific changes pointing to my stuff. The domain at the top is the main one. DERP is a special kind of network traversal help provided by the Tailscale folks: a relay system used when a direct connection can’t be established, for really difficult networks (which it appears my own is not, which is not surprising).
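If you’re curious, you can peek at the default DERP map referenced in the config above; it’s just JSON describing Tailscale’s public relay regions:

curl -s https://controlplane.tailscale.com/derpmap/default | head -n 25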

I’ve also gone into my own router and opened UDP ports 3478 and 41641. Since headscale is the only thing listening on these ports I think I’m OK; in the end I guess I’ll find out if I’ve created a gaping hole in my network. According to a bunch of the stuff I’ve read this shouldn’t be necessary, but it’s currently true on my network and the thing is working. I’ll have to experiment and turn those off now that I have at least one working setup.

You can check to see if headscale is successfully installed and running by throwing a command at it.

docker exec -it headscale headscale nodes list

This should return headers and an empty list (since nothing has connected to the server yet to get keys).
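Expect just the header row, something along these lines (the column names shift around between headscale releases):

ID | Hostname | Name | MachineKey | NodeKey | User | IP addresses | Ephemeral | Last seen | Expiration | Connected | Expired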

Now the fun part… installing tailscale clients, registering them as nodes on the network and connecting it all together.

In each case the client installer should ultimately send you to a URL (which is public) that provides instructions and a command to be run on the headscale server to register that node.

The important thing about that URL is that it contains the key for that node. The URL will look something like

https://hs.mydomain.com/register/tuDROspacluthooLEkeew6ST

Which will provide you with a webpage that looks like

headscale

Machine registration

Run the command below in the headscale server to add this machine to your network:

headscale nodes register --user USERNAME --key tuDROspacluthooLEkeew6ST

Now a couple more things have to happen.

1. You have to register a username in headscale before running the above command on the docker host:

docker exec -it headscale headscale users create bob

The username can be any string you like. I chose more meaningful names like ‘android’ so I know which node belongs to what.

2. Now run the register command on the docker host:

docker exec -it headscale headscale nodes register --user bob --key tuDROspacluthooLEkeew6ST
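Incidentally, you can list the users headscale knows about at any time with

docker exec -it headscale headscale users list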

Now if you run

docker exec -it headscale headscale nodes list

you’ll see a node listed for bob.

Hurrah… you have one node, which oddly isn’t very helpful. You’ll need at least one more node on the network before something cool can happen.

What I’ve done is set up a separate tailscale container which has been configured as an exit node. If I understand correctly, this lets systems connecting TO it then connect to things on the same network as it, using it as a bridge/gateway. But in order for this to work a few more things need to be in place.

First, a container with tailscale, set up using a docker compose file which looks like the following:

services:
  tailscale:
    container_name: ts
    image: tailscale/tailscale:latest
    hostname: ts
    volumes:
      - ./tailscale/data:/var/lib/tailscale
      - /dev/net/tun:/dev/net/tun
    network_mode: "host"
    cap_add:
      - NET_ADMIN
      - NET_RAW
    environment:
      - TS_STATE_DIR=/var/lib/tailscale
      - TS_EXTRA_ARGS=--login-server=https://hs.mydomain.com --advertise-exit-node --advertise-routes=192.168.0.0/16
      - TS_NO_LOGS_NO_SUPPORT=true
      #- TS_AUTHKEY=MIWRUV0sw9dreashoslyXOK0 # generate this key inside your headscale server container
    restart: unless-stopped

On the first run of this container I don’t use the -d option. I run it and watch for the URL I talked about above to show up in the output as the thing starts up. Then I stop it (Ctrl-C) and register the new node.
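Roughly, that first-run dance looks like this (assuming you’re in the directory with the compose file):

docker compose up
# watch the startup output for a line resembling:
#   To authenticate, visit: https://hs.mydomain.com/register/<key>
# Ctrl-C once you've registered the node on the headscale host, then:
docker compose up -d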

Now we have two nodes. (When you go to run it, ensure the new secret is in this file and that the TS_AUTHKEY line is uncommented.)
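That secret is a pre-auth key, which you can mint on the headscale side instead of doing the URL dance. A sketch (on some headscale versions --user wants the numeric user ID rather than the name):

docker exec -it headscale headscale preauthkeys create --user bob --expiration 1h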

There are a couple of things in this file that are important to pay attention to.

In TS_EXTRA_ARGS there is an “offer” to be an exit node (--advertise-exit-node) and a specification of which IPs would be available if it were used as an exit node (--advertise-routes=192.168.0.0/16).
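One related note from the Tailscale docs: for a Linux machine to actually forward traffic as an exit node or subnet router, IP forwarding has to be enabled, and since this container runs with network_mode: host that means on the docker host itself. A sketch for a typical Linux box:

echo 'net.ipv4.ip_forward = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
echo 'net.ipv6.conf.all.forwarding = 1' | sudo tee -a /etc/sysctl.d/99-tailscale.conf
sudo sysctl -p /etc/sysctl.d/99-tailscale.conf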

If you request a nodes list

docker exec -it headscale headscale nodes list

you’ll have two nodes listed (hurrah again).

If you request a list of routes

docker exec -it headscale headscale nodes list-routes

You’ll see that the routes are available but not yet approved.

ID | Hostname | Approved  | Available                       | Serving (Primary)              
1  | ts       |           | 0.0.0.0/0, 192.168.0.0/16, ::/0 | 

To get these approved you need to run

docker exec -it headscale headscale nodes approve-routes --identifier 1 --routes 0.0.0.0/0,192.168.0.0/16

and it should return “Node updated”

Now when you do

docker exec -it headscale headscale nodes list-routes

you’ll get back

ID | Hostname | Approved                        | Available                       | Serving (Primary)              
1  | ts       | 0.0.0.0/0, 192.168.0.0/16, ::/0 | 0.0.0.0/0, 192.168.0.0/16, ::/0 | 192.168.0.0/16, 0.0.0.0/0, ::/0

Now, one last thing on the tailscale client on the laptop. You need to set an exit node on the client side as well (that is to say the exit node itself needs to declare it’s an exit node AND others using it as an exit node ALSO have to declare that’s the exit node they’re going to use). Fortunately this is relatively easy to do.

sudo tailscale set --exit-node ts

I have left installing the tailscale client up to you. In my case it’s a Linux laptop where the client is already installed and running, and this is the right command for it.
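(If you do need to install it, Tailscale’s official install script covers most Linux distros:)

curl -fsSL https://tailscale.com/install.sh | sh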

The command to get it talking to my own personal headscale server is

sudo tailscale up --login-server https://hs.mydomain.com 
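Once it’s connected you can sanity-check from the laptop: status lists the peers in the tailnet (and which exit node is in use), and ip shows the address this machine was given from the 100.64.0.0/10 pool.

tailscale status
tailscale ip -4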

After all of this, I have a working VPN connection that allows me full access inside my own network without worrying about what traffic is allowed on whatever network I happen to be on. I’ve had problems in the past with the SSH service being disallowed, as I was using a non-standard port that is frequently blocked on other networks. This should get around all that. I’ll learn soon enough if this is true.

In all of the stuff I’ve seen out there, nothing is interested in setting this up quite this way. I’d guess mostly because I should be able to achieve the same thing with WireGuard itself, but I never did get that quite right.

Some use Tailscale as an alternate proxy. In some ways it is a fine proxy, but then there’s a sidecar tailscale container attached to every container you want to expose to the interwebs. So far I’ve been happy with my setup with Traefik.

Resources I used to figure this out.

Jim’s Garage on YouTube and on GitHub was a great start, though it didn’t quite go where I have gone; note there is an earlier attempt with this stuff as well, which I also used for further clarification.

The Tailscale folks’ “Contain your excitement: A deep dive into using Tailscale with Docker” article, and then dredging the documentation for how exit nodes should work, finally got me to the above configuration.
