Homelab
Tailscale Kubernetes Subnet Router
Secure Remote Access to Your Kubernetes Homelab with Tailscale Subnet Routing
Overview
This guide sets up Tailscale[^1] for secure remote access to the homelab. It is the last step requiring physical access - once Tailscale is running, all remaining setup can be done remotely.
> **Tip:** Having trouble? See v0.7.0 for reference.
Before You Begin
Prerequisites
- Flux CD Kubernetes GitOps completed (Flux running with SOPS decryption enabled)
- Tailscale account (free tier)
- Physical access to run initial kubectl commands
What We're Setting Up
Components we're deploying:
- Subnet Router - Advertises cluster IPs to your tailnet so existing kubeconfig/talosctl configs work from anywhere
- Operator - Manages Tailscale resources via CRDs (for service exposure later, including Tailscale Funnel[^2] for public access)
Why This Approach
- `kubectl` and `talosctl` work exactly like you're on the LAN
- Debug services directly via MetalLB IPs (ping, nc, curl)
- Access services via MetalLB IPs from anywhere on your tailnet
How Dependency Ordering Works
CRD (Custom Resource Definition): Kubernetes has built-in resources (Pod, Service, Deployment). CRDs let operators add custom ones - here, the Tailscale operator adds the Connector resource type. The CRD must be registered before you can create instances of it.
```
sync (parent)
└── tailscale/ (HelmRelease installs Connector CRD)
    └── tailscale-connector (waits via healthChecks, then applies Connector)
```

The HelmRelease installs the operator and registers the CRD. We use a Flux Kustomization wrapper (connector.flux.yaml) with healthChecks to wait for the HelmRelease before applying the Connector resource.
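The gating behavior can be sketched in a few lines. This is not Flux's actual code - `poll_status` is a hypothetical stand-in for a Kubernetes status lookup - but it shows the apply order the healthChecks enforce: keep retrying until the HelmRelease reports Ready, and only then apply the Connector.

```python
# Sketch of the healthChecks gating in connector.flux.yaml (illustrative only):
# poll the HelmRelease's Ready condition; apply the Connector only once it passes.
from typing import Callable

def reconcile_connector(poll_status: Callable[[], str]) -> list[str]:
    """Wait on the health check, then apply the Connector."""
    events = []
    while poll_status() != "Ready":       # healthCheck on the HelmRelease
        events.append("retry")            # real Flux waits retryInterval (1m)
    events.append("apply Connector/homelab-subnet")  # CRD is registered by now
    return events

# Simulate an operator install that becomes Ready on the third poll.
statuses = iter(["Pending", "Installing", "Ready"])
print(reconcile_connector(lambda: next(statuses)))
# → ['retry', 'retry', 'apply Connector/homelab-subnet']
```

The key point: the Connector manifest is never applied while its CRD might still be missing, so Flux never hits "no matches for kind Connector" errors.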
Initialize Workspace
Set Environment
```sh
cd ~/homelab
export KUBECONFIG=$(pwd)/talos/clusterconfig/kubeconfig
mkdir -p k8s/core/tailscale/connector
```

> **Note:** The KUBECONFIG export only applies to your current terminal session. If you open a new terminal, re-run the cd and export commands.
Configure Tailscale ACLs
Before installing the operator, configure your tailnet's access control policy[^3].
ACL Policy
Go to Tailscale ACLs and replace the policy with the following (consider GitOps management[^4] for version control):
```json
{
  "tagOwners": {
    "tag:k8s-operator": ["autogroup:admin"],
    "tag:k8s": ["tag:k8s-operator"]
  },
  "autoApprovers": {
    "routes": {
      "192.168.1.30/32": ["tag:k8s"],
      "192.168.1.31/32": ["tag:k8s"],
      "192.168.1.40/29": ["tag:k8s"],
      "192.168.1.48/28": ["tag:k8s"],
      "192.168.1.64/28": ["tag:k8s"]
    }
  },
  "grants": [
    {"src": ["*"], "dst": ["*"], "ip": ["*"]}
  ],
  "ssh": [
    {
      "action": "check",
      "src": ["autogroup:member"],
      "dst": ["autogroup:self"],
      "users": ["autogroup:nonroot", "root"]
    }
  ]
}
```

What this does:

- `tagOwners` - defines tags for the operator (`tag:k8s-operator`) and the devices it creates (`tag:k8s`)
- `autoApprovers` - automatically approves subnet routes for cluster IPs (no manual approval needed)
- `grants` - allows all connections (Tailscale default)
- `ssh` - enables Tailscale SSH for your devices
- Routes: node IPs (.30, .31) + MetalLB pool (.40-.79)
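A common mistake when editing ACLs is auto-approving routes with a tag that was never declared in `tagOwners`. Since the policy subset above is plain JSON, a quick standalone check can catch that before pasting it into the admin console (this is a local sanity check, not a Tailscale tool):

```python
# Verify every tag used in autoApprovers.routes is declared in tagOwners,
# otherwise Tailscale will reject the policy's route auto-approval.
import json

policy = json.loads("""
{
  "tagOwners": {
    "tag:k8s-operator": ["autogroup:admin"],
    "tag:k8s": ["tag:k8s-operator"]
  },
  "autoApprovers": {
    "routes": {
      "192.168.1.30/32": ["tag:k8s"],
      "192.168.1.31/32": ["tag:k8s"],
      "192.168.1.40/29": ["tag:k8s"],
      "192.168.1.48/28": ["tag:k8s"],
      "192.168.1.64/28": ["tag:k8s"]
    }
  }
}
""")

declared = set(policy["tagOwners"])
for route, tags in policy["autoApprovers"]["routes"].items():
    undeclared = [t for t in tags if t not in declared]
    assert not undeclared, f"{route} approved by undeclared tags: {undeclared}"
print("all auto-approved routes use declared tags")
```

Note that full Tailscale policy files allow comments (HuJSON), so `json.loads` only works on a comment-free policy like this one.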
Generate OAuth Client
The operator needs OAuth credentials[^5] with specific scopes[^6]. Credentials are managed via Tailscale's trust credentials system[^7].
Create Credential
- Go to Tailscale Trust Credentials
- Click the Credential button
- Select OAuth
- Description: `homelab-k8s-operator`
- Click Continue
- Configure scopes (select Write for each, which auto-selects Read):
  - General > Services: Write, tag `tag:k8s-operator`
  - Device > Core: Write, tag `tag:k8s-operator`
  - Keys > Auth Keys: Write, tag `tag:k8s-operator`
- Click Generate credential - this reveals the Client ID and Client Secret
> **Important:** The client secret cannot be retrieved after closing this page. Copy both values immediately into the secret file below, then encrypt.
OAuth Secret
k8s/core/tailscale/secret.sops.yaml:
```yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: tailscale-oauth
  namespace: tailscale
type: Opaque
stringData:
  clientId: "<your-client-id>"
  clientSecret: "<your-client-secret>"
```

Encrypt and Commit

```sh
sops -e -i k8s/core/tailscale/secret.sops.yaml
git add k8s/core/tailscale/secret.sops.yaml
git commit -m "chore(tailscale): add encrypted oauth credentials"
```

OAuth clients don't expire (unlike API keys) and are the recommended approach for the Kubernetes operator.
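The HelmRelease defined later in this guide injects these two Secret keys into the chart's Helm values using Flux's `valuesFrom` with `targetPath`. Conceptually, each `targetPath` grafts one Secret key into the values tree at a dotted path. A minimal sketch of that merge (the `set_nested` helper and placeholder values are illustrative, not Flux code):

```python
# Sketch of Flux valuesFrom/targetPath: graft Secret keys into Helm values
# at dotted paths before the chart is rendered. Illustrative only.
def set_nested(values: dict, dotted_path: str, value: str) -> None:
    """Walk/create nested dicts along a dotted path and set the leaf value."""
    keys = dotted_path.split(".")
    node = values
    for key in keys[:-1]:
        node = node.setdefault(key, {})
    node[keys[-1]] = value

# Base chart values from the HelmRelease spec.
values = {"operatorConfig": {"logging": "info"}}

# Hypothetical decrypted Secret data (placeholders, not real credentials).
secret = {"clientId": "<your-client-id>", "clientSecret": "<your-client-secret>"}

set_nested(values, "oauth.clientId", secret["clientId"])
set_nested(values, "oauth.clientSecret", secret["clientSecret"])
print(values["oauth"])  # chart now sees oauth.clientId / oauth.clientSecret
```

This is why the credentials never appear in Git as plaintext: the encrypted Secret is decrypted in-cluster by Flux, then merged into the chart values at install time.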
Create Tailscale Manifests
Namespace
k8s/core/tailscale/namespace.yaml:
```yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: tailscale
  labels:
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/audit: privileged
    pod-security.kubernetes.io/warn: privileged
```

HelmRepository
k8s/core/tailscale/helmrepository.yaml:
```yaml
---
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: tailscale
  namespace: flux-system
spec:
  interval: 1h
  url: https://pkgs.tailscale.com/helmcharts
```

HelmRelease
k8s/core/tailscale/helmrelease.yaml:
```yaml
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: tailscale-operator
  namespace: tailscale
spec:
  interval: 1h
  chart:
    spec:
      chart: tailscale-operator
      version: "1.x"
      sourceRef:
        kind: HelmRepository
        name: tailscale
        namespace: flux-system
  install:
    remediation:
      retries: 3
  upgrade:
    remediation:
      retries: 3
  values:
    operatorConfig:
      logging: "info"
  valuesFrom:
    - kind: Secret
      name: tailscale-oauth
      valuesKey: clientId
      targetPath: oauth.clientId
    - kind: Secret
      name: tailscale-oauth
      valuesKey: clientSecret
      targetPath: oauth.clientSecret
```

Kustomization
k8s/core/tailscale/kustomization.yaml:
```yaml
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - namespace.yaml
  - secret.sops.yaml
  - helmrepository.yaml
  - helmrelease.yaml
  - connector.flux.yaml
```

> **Note:** The connector/ directory is intentionally NOT listed here. It's managed by the Flux Kustomization CRD in connector.flux.yaml.
Flux Kustomization Wrapper
The Connector CRD[^8] is installed by the Tailscale operator, so we need to wait for the HelmRelease to be ready before applying it (see How Dependency Ordering Works above). See the Connector API reference[^9] for all available fields.
k8s/core/tailscale/connector.flux.yaml:
```yaml
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: tailscale-connector
  namespace: flux-system
spec:
  interval: 10m
  retryInterval: 1m
  path: ./k8s/core/tailscale/connector
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
  dependsOn:
    - name: sync
  healthChecks:
    - apiVersion: helm.toolkit.fluxcd.io/v2
      kind: HelmRelease
      name: tailscale-operator
      namespace: tailscale
```

This Flux Kustomization:

- Waits for the `sync` Kustomization to complete
- Checks that the `tailscale-operator` HelmRelease is healthy
- Only then applies resources from `./k8s/core/tailscale/connector`
Connector
k8s/core/tailscale/connector/connector.yaml:
```yaml
---
apiVersion: tailscale.com/v1alpha1
kind: Connector
metadata:
  name: homelab-subnet
spec:
  hostname: homelab-subnet
  subnetRouter:
    advertiseRoutes:
      - "192.168.1.30/32" # Control plane node
      - "192.168.1.31/32" # Worker node
      - "192.168.1.40/29" # MetalLB pool .40-.47
      - "192.168.1.48/28" # MetalLB pool .48-.63
      - "192.168.1.64/28" # MetalLB pool .64-.79
```

> **Note:** Connector is cluster-scoped (no namespace). Tags default to tag:k8s so we don't need to specify them. No kustomization.yaml needed - Flux auto-generates one when pointing directly at a directory.

Routes explained:

- Node IPs enable `talosctl` and `kubectl` (API server at .30:6443)
- MetalLB range enables direct service debugging (curl, ping, nc)
Update Core Kustomization
Add tailscale to the core kustomization so Flux picks it up.
Core Kustomization
k8s/core/kustomization.yaml:
```yaml
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - tailscale
```

Deploy Tailscale
Commit Changes
```sh
git add k8s/core/tailscale/*.yaml k8s/core/tailscale/connector/ k8s/core/kustomization.yaml
git commit -m "feat(tailscale): add operator with subnet router"
git push
```

Reconcile Flux

```sh
flux reconcile source git flux-system
flux reconcile kustomization sync
```

Verify Installation
Flux Kustomizations
```sh
flux get kustomizations
```

Wait for all to show `Ready: True`:

- `sync` - applies immediately
- `tailscale-connector` - waits for the operator, then applies (retries every 1m)
Connector Status
```sh
kubectl get connector homelab-subnet
```

Expected:

```
NAME             SUBNETROUTES                                                                      STATUS
homelab-subnet   192.168.1.30/32,192.168.1.31/32,192.168.1.40/29,192.168.1.48/28,192.168.1.64/28   ConnectorCreated
```

Tailscale Admin
Go to Tailscale Machines - click homelab-subnet and check the Subnets tab. Routes should appear under "Approved" (not "Awaiting Approval").
Configure Your Client
Install Tailscale
If you don't have Tailscale installed:
- Mac (standalone): Tailscale Standalone Package (avoids App Store dependency)
- Other platforms: Tailscale Downloads
Mac standalone users: Enable CLI via menu bar: Tailscale → Settings → CLI integration → Show me how → Add "tailscale" Command to PATH → Add now
Enable Routes
```sh
sudo tailscale up --accept-routes
```

> **Important:** The `--accept-routes` flag is required. Without it, you won't be able to reach cluster IPs through the subnet router.
Test Remote Access
Note Home IP
```sh
curl -s ifconfig.me && echo
# Example: 203.0.113.x (your home ISP)
```

Switch to Mobile Hotspot
Disconnect from home WiFi, connect to phone hotspot, and verify the IP changed:
```sh
curl -s ifconfig.me && echo
# Example: 2001:db8:... (mobile carrier IPv6) or different IPv4
```

Test Cluster Access
```sh
kubectl get nodes
# Should show talos-node-1 and talos-node-2 Ready

export TALOSCONFIG=$(pwd)/talos/clusterconfig/talosconfig
talosctl --nodes 192.168.1.30 version
# Should show Client and Server versions
```

If these work from the hotspot, you have full remote access!
Next Steps
With remote access configured, we can now deploy core infrastructure services.
See: MetalLB, Longhorn, and Ingress-NGINX
Resources
Footnotes
[^1]: Tailscale, "Tailscale on Kubernetes," tailscale.com. Accessed: Dec. 16, 2025. [Online]. Available: https://tailscale.com/kb/1185/kubernetes
[^2]: Tailscale, "Tailscale Funnel," tailscale.com. Accessed: Dec. 16, 2025. [Online]. Available: https://tailscale.com/kb/1223/funnel
[^3]: Tailscale, "Access control lists (ACLs)," tailscale.com. Accessed: Dec. 16, 2025. [Online]. Available: https://tailscale.com/kb/1018/acls
[^4]: Tailscale, "GitOps for Tailscale with GitHub Actions," tailscale.com. Accessed: Dec. 16, 2025. [Online]. Available: https://tailscale.com/kb/1204/gitops-acls-github
[^5]: Tailscale, "OAuth clients," tailscale.com. Accessed: Dec. 16, 2025. [Online]. Available: https://tailscale.com/kb/1215/oauth-clients
[^6]: Tailscale, "Kubernetes Operator," tailscale.com. Accessed: Dec. 16, 2025. [Online]. Available: https://tailscale.com/kb/1236/kubernetes-operator
[^7]: Tailscale, "Trust credentials," tailscale.com. Accessed: Dec. 16, 2025. [Online]. Available: https://tailscale.com/kb/1623/trust-credentials
[^8]: Tailscale, "Deploy exit nodes and subnet routers on Kubernetes," tailscale.com. Accessed: Dec. 16, 2025. [Online]. Available: https://tailscale.com/kb/1441/kubernetes-operator-connector
[^9]: Tailscale, "Connector API Reference," github.com. Accessed: Dec. 16, 2025. [Online]. Available: https://github.com/tailscale/tailscale/blob/main/k8s-operator/api.md