Homelab
MetalLB, Longhorn, and Ingress-NGINX
Deploying Core Kubernetes Infrastructure: MetalLB Load Balancer, Longhorn Storage, and NGINX Ingress
Overview
Deploying core Kubernetes infrastructure via GitOps - the foundation for all applications. MetalLB provides load balancing, NGINX handles ingress routing, and Longhorn adds distributed storage.
| Tip: | Having trouble? See v0.8.0 for reference. |
Before You Begin
Prerequisites
- Tailscale Kubernetes Subnet Router completed (remote access working)
- Tailscale VPN connected (can work remotely now!)
What We're Setting Up
- MetalLB - Load balancer for bare-metal clusters, assigns IPs from a pool to LoadBalancer services
- NGINX Ingress - Routes HTTP/HTTPS traffic to services based on hostnames and paths
- Longhorn - Distributed block storage with replication, snapshots, and backup to NFS
Why This Approach
- MetalLB gives services real IPs (not just ClusterIP) accessible from your network
- Ingress provides hostname-based routing without exposing every service
- Longhorn enables pod mobility - volumes follow workloads across nodes
Directory Structure
Here's what we're building. Understanding the structure helps follow along:
k8s/core/
├── kustomization.yaml # Lists all core services
├── tailscale/ # From article 07
├── metallb/
│ ├── kustomization.yaml # Lists files (NOT config/ directory)
│ ├── namespace.yaml
│ ├── helmrepository.yaml
│ ├── helmrelease.yaml
│ ├── config.flux.yaml # Flux Kustomization → manages config/
│ └── config/
│ └── config.yaml # IPAddressPool + L2Advertisement (CRDs)
├── ingress-nginx.flux.yaml # Flux Kustomization → manages ingress-nginx/
├── ingress-nginx/
│ ├── kustomization.yaml
│ ├── namespace.yaml
│ ├── helmrepository.yaml
│ └── helmrelease.yaml
└── longhorn/
├── kustomization.yaml
├── namespace.yaml
├── helmrepository.yaml
    └── helmrelease.yaml
Why the different patterns?
- MetalLB: config.flux.yaml sits inside the directory because it applies instances of CRDs that the HelmRelease installs. The dependency is internal to MetalLB.
- Ingress-NGINX: ingress-nginx.flux.yaml sits in core/ because the entire service depends on another service (MetalLB). The dependency is external.
- Longhorn: No Flux Kustomization wrapper needed - it has no CRD dependencies and doesn't depend on other services.
How Flux Dependency Chain Helps
The Tailscale article introduced CRDs and the *.flux.yaml wrapper pattern for waiting on HelmReleases. This article adds a second pattern: cross-service dependencies.
All services deploy from a single git push. Flux resolves dependencies automatically:
sync (parent)
├── tailscale/ (HelmRelease installs CRDs)
│ └── tailscale-connector (waits via healthChecks)
│
├── metallb/ (HelmRelease installs CRDs)
│ └── metallb-config (waits via healthChecks)
│ └── ingress-nginx (waits via dependsOn) ← NEW: cross-service
│
└── longhorn/ (independent, no wrapper needed)
When you need a *.flux.yaml wrapper:
- CRD dependency (MetalLB, Tailscale) - The HelmRelease installs CRDs; the wrapper waits via healthChecks before applying CRD instances.
- Cross-service dependency (Ingress-NGINX) - The service needs another service operational first; the wrapper waits via dependsOn for another Kustomization.
When you don't need one:
- Independent services (Longhorn) - No CRDs to wait for, no dependency on other services.
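The two wrapper signals can be summarized in one hedged skeleton. Every name below is a placeholder for illustration, not a file from this repo - a real wrapper typically uses one signal or the other, depending on which dependency it has:

```yaml
# Hypothetical wrapper skeleton - <service> is a placeholder
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: <service>-config
  namespace: flux-system
spec:
  interval: 10m
  path: ./k8s/core/<service>/config
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
  # Cross-service dependency: wait for another Flux Kustomization to be Ready
  dependsOn:
    - name: other-service
  # CRD dependency: wait for the HelmRelease that installs the CRDs
  healthChecks:
    - apiVersion: helm.toolkit.fluxcd.io/v2
      kind: HelmRelease
      name: <service>
      namespace: <service>-system
```

dependsOn orders Flux Kustomizations against each other; healthChecks gates on the readiness of arbitrary resources (here, a HelmRelease). The concrete manifests below use each where appropriate.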
Prepare Directory
Init Workspace
cd ~/homelab
export KUBECONFIG=$(pwd)/talos/clusterconfig/kubeconfig
mkdir -p k8s/core/metallb/config k8s/core/ingress-nginx k8s/core/longhorn
| Note: | The KUBECONFIG export only applies to your current terminal session. If you open a new terminal, re-run the cd and export commands. |
Create Dev Branch
Create a dev branch to iterate without triggering Flux reconciliation on every commit:
git checkout -b dev
Configure MetalLB
MetalLB1 provides LoadBalancer services in bare-metal environments by assigning IPs from a pool.
| Talos Note: | All three services (MetalLB, Ingress-NGINX, Longhorn) require privileged containers. Talos enforces Pod Security Admission (PSA) at baseline level, so each namespace needs privileged labels. |
Namespace
k8s/core/metallb/namespace.yaml:
---
apiVersion: v1
kind: Namespace
metadata:
  name: metallb-system
  labels:
    # Required for Talos - allows privileged pods
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/audit: privileged
    pod-security.kubernetes.io/warn: privileged
HelmRepository
k8s/core/metallb/helmrepository.yaml:
---
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: metallb
  namespace: flux-system
spec:
  interval: 1h
  url: https://metallb.github.io/metallb
HelmRelease
k8s/core/metallb/helmrelease.yaml:
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: metallb
  namespace: metallb-system
spec:
  interval: 1h
  chart:
    spec:
      chart: metallb
      version: "0.15.x" # Current stable series
      sourceRef:
        kind: HelmRepository
        name: metallb
        namespace: flux-system
  install:
    remediation:
      retries: 3
  upgrade:
    remediation:
      retries: 3
IP Pool Configuration
The IPAddressPool and L2Advertisement resources are instances of CRDs installed by the MetalLB HelmRelease. We apply them from a separate Flux Kustomization that waits (via healthChecks) for the MetalLB operator to be ready - the same pattern as the Tailscale Connector.
k8s/core/metallb/config.flux.yaml (Flux Kustomization CRD):
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: metallb-config
  namespace: flux-system
spec:
  interval: 10m
  retryInterval: 1m
  path: ./k8s/core/metallb/config
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
  dependsOn:
    - name: sync
  healthChecks:
    - apiVersion: helm.toolkit.fluxcd.io/v2
      kind: HelmRelease
      name: metallb
      namespace: metallb-system
k8s/core/metallb/config/config.yaml:
---
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.40-192.168.1.79 # From [UniFi Flat Network Setup](/blog/homelab-v2-02-unifi-flat-network-setup)
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
spec:
  ipAddressPools:
    - default-pool
Kustomization
k8s/core/metallb/kustomization.yaml:
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - namespace.yaml
  - helmrepository.yaml
  - helmrelease.yaml
  - config.flux.yaml
git add k8s/core/metallb/
git commit -m "feat(metallb): add load balancer with IP pool config"
Configure NGINX Ingress
NGINX Ingress2 routes HTTP/HTTPS traffic to services based on hostnames and paths. The LoadBalancer service needs MetalLB to assign an IP, so we use a Flux Kustomization wrapper with dependsOn.
k8s/core/ingress-nginx.flux.yaml:
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: ingress-nginx
  namespace: flux-system
spec:
  interval: 10m
  retryInterval: 1m
  path: ./k8s/core/ingress-nginx
  prune: true
  sourceRef:
    kind: GitRepository
    name: flux-system
  dependsOn:
    - name: metallb-config
Namespace
k8s/core/ingress-nginx/namespace.yaml:
---
apiVersion: v1
kind: Namespace
metadata:
  name: ingress-nginx
  labels:
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/audit: privileged
    pod-security.kubernetes.io/warn: privileged
HelmRepository
k8s/core/ingress-nginx/helmrepository.yaml:
---
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: ingress-nginx
  namespace: flux-system
spec:
  interval: 1h
  url: https://kubernetes.github.io/ingress-nginx
HelmRelease
k8s/core/ingress-nginx/helmrelease.yaml:
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  interval: 1h
  chart:
    spec:
      chart: ingress-nginx
      version: "4.x" # Current stable series
      sourceRef:
        kind: HelmRepository
        name: ingress-nginx
        namespace: flux-system
  install:
    remediation:
      retries: 3
  upgrade:
    remediation:
      retries: 3
  values:
    controller:
      service:
        type: LoadBalancer # MetalLB will assign an IP from the pool
      watchIngressWithoutClass: true
      metrics:
        enabled: true
Kustomization
k8s/core/ingress-nginx/kustomization.yaml:
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - namespace.yaml
  - helmrepository.yaml
  - helmrelease.yaml
git add k8s/core/ingress-nginx.flux.yaml k8s/core/ingress-nginx/
git commit -m "feat(ingress-nginx): add ingress controller"
Configure Longhorn
Longhorn3 provides distributed block storage with replication across nodes. No Flux Kustomization wrapper needed - it has no CRD dependencies. For Talos-specific considerations, see the installation guide4.
Namespace
k8s/core/longhorn/namespace.yaml:
---
apiVersion: v1
kind: Namespace
metadata:
  name: longhorn-system
  labels:
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/audit: privileged
    pod-security.kubernetes.io/warn: privileged
HelmRepository
k8s/core/longhorn/helmrepository.yaml:
---
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: longhorn
  namespace: flux-system
spec:
  interval: 1h
  url: https://charts.longhorn.io
HelmRelease
k8s/core/longhorn/helmrelease.yaml:
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: longhorn
  namespace: longhorn-system
spec:
  interval: 1h
  chart:
    spec:
      chart: longhorn
      version: "1.7.x" # Current stable series
      sourceRef:
        kind: HelmRepository
        name: longhorn
        namespace: flux-system
  install:
    remediation:
      retries: 3
  upgrade:
    remediation:
      retries: 3
  values:
    defaultSettings:
      # Start with 2 replicas (we have 2 nodes, will add a 3rd)
      defaultReplicaCount: 2
    persistence:
      # Retain volumes when the PVC is deleted (safer, manual cleanup)
      reclaimPolicy: Retain
      defaultClassReplicaCount: 2
      # Make Longhorn the default StorageClass
      defaultClass: true
Kustomization
k8s/core/longhorn/kustomization.yaml:
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - namespace.yaml
  - helmrepository.yaml
  - helmrelease.yaml
git add k8s/core/longhorn/
git commit -m "feat(longhorn): add distributed storage"
Update Core Kustomization
k8s/core/kustomization.yaml:
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - tailscale # From article 07
  - metallb # Directory (includes config.flux.yaml for CRD dependency)
  - ingress-nginx.flux.yaml # Flux Kustomization CRD (dependsOn metallb-config)
  - longhorn # Directory (no dependencies)
git add k8s/core/kustomization.yaml
git commit -m "feat(core): register new services in kustomization"
Deploy Everything
Merge to main and push to trigger Flux:
git checkout main
git merge --ff-only dev
git push
git branch -d dev
Watch Flux Reconcile
# Force immediate reconciliation
flux reconcile kustomization sync
# Watch all Flux Kustomizations
flux get kustomizations -w
Wait for all Kustomizations to show Ready: True:
- sync - main sync, applies immediately
- tailscale-connector - waits for Tailscale operator
- metallb-config - waits for MetalLB operator
- ingress-nginx - waits for metallb-config
# Watch HelmReleases deploy
flux get helmreleases -A -w
Wait until all HelmReleases show Ready: True. This may take 2-3 minutes as Helm charts are fetched and installed.
Verify All Services
MetalLB
kubectl get pods -n metallb-system
kubectl get ipaddresspool -n metallb-system
All pods should be Running. The IPAddressPool should show your range.
NGINX Ingress
kubectl get pods -n ingress-nginx
kubectl get svc -n ingress-nginx
Note the EXTERNAL-IP assigned by MetalLB to the ingress controller.
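By default the controller gets whichever address is free in the pool. If you want a stable, predictable IP instead, MetalLB supports requesting a specific address via a service annotation. A sketch of the chart values - the address here is an assumption picked from the pool defined earlier, not a value from this repo:

```yaml
# Hypothetical values excerpt for the ingress-nginx HelmRelease
values:
  controller:
    service:
      type: LoadBalancer
      annotations:
        # Ask MetalLB for a fixed IP from the pool (example address)
        metallb.universe.tf/loadBalancerIPs: 192.168.1.40
```

A pinned IP is convenient for DNS records and /etc/hosts entries that point at the ingress controller.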
Longhorn
kubectl get pods -n longhorn-system
kubectl get storageclass
Should see many Longhorn pods (manager, driver, CSI components) and the longhorn StorageClass as default.
Access Longhorn UI (temporary port-forward):
kubectl port-forward svc/longhorn-frontend 8080:80 -n longhorn-system
Open http://localhost:8080 to see the Longhorn dashboard.
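If you tire of port-forwarding, the UI can also be exposed through the ingress controller. A hedged sketch - the longhorn.local hostname is an assumption, and you would add an /etc/hosts entry for it just like the hello.local test later in this article:

```yaml
# Hypothetical Ingress for the Longhorn UI (hostname is a placeholder)
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: longhorn-ui
  namespace: longhorn-system
spec:
  rules:
    - host: longhorn.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: longhorn-frontend
                port:
                  number: 80
```

Note the Longhorn UI has no authentication of its own, so only expose it on a trusted network.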
Test the Services
Test LoadBalancer (MetalLB)
# Create test deployment
kubectl create deployment nginx-test --image=nginx
kubectl expose deployment nginx-test --type=LoadBalancer --port=80
# Watch for EXTERNAL-IP (Ctrl+C when assigned)
kubectl get svc nginx-test -w
Should get an IP from the pool (192.168.1.40-79). Test it:
curl http://<EXTERNAL-IP>
Test Ingress
# Deploy test app
kubectl create deployment hello --image=gcr.io/google-samples/hello-app:1.0
kubectl expose deployment hello --port=8080
# Create ingress
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-ingress
spec:
  rules:
    - host: hello.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: hello
                port:
                  number: 8080
EOF
# Get ingress controller IP
kubectl get svc -n ingress-nginx ingress-nginx-controller \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
Test from your machine:
# Add to /etc/hosts (use ingress controller's EXTERNAL-IP)
echo "<INGRESS-IP> hello.local" | sudo tee -a /etc/hosts
curl http://hello.local
Test Storage (Longhorn)
# Create a test PVC
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  # storageClassName: longhorn # Optional - Longhorn is default
EOF
# Watch for Bound status
kubectl get pvc test-pvc -w
Should show Bound within a minute. Check the volume in the Longhorn UI.
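A Bound PVC only proves provisioning worked. To confirm the volume actually accepts writes, you can mount it in a throwaway pod - a hedged sketch, with the pod name and busybox image being assumptions:

```yaml
# Hypothetical test pod - writes to the Longhorn volume to confirm I/O works
apiVersion: v1
kind: Pod
metadata:
  name: pvc-writer
spec:
  containers:
    - name: writer
      image: busybox
      command: ["sh", "-c", "echo hello > /data/test && sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: test-pvc
```

Once the pod is Running, kubectl exec pvc-writer -- cat /data/test should return the written text. Delete the pod (kubectl delete pod pvc-writer) before running the cleanup below, since a PVC in use by a pod won't be removed.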
Clean Up Tests
kubectl delete pvc test-pvc
kubectl delete ingress hello-ingress
kubectl delete svc hello nginx-test
kubectl delete deployment hello nginx-test
# Verify cleanup
kubectl get all
Only service/kubernetes should remain.
Next Steps
With core infrastructure in place, we can configure the GPU for hardware transcoding.
For MetalLB troubleshooting on all-control-plane clusters:
See: MetalLB Talos L2 Fix
Resources
Footnotes
MetalLB, "MetalLB Documentation," metallb.universe.tf. Accessed: Dec. 16, 2025. [Online]. Available: https://metallb.universe.tf/
Kubernetes, "NGINX Ingress Controller," kubernetes.github.io. Accessed: Dec. 16, 2025. [Online]. Available: https://kubernetes.github.io/ingress-nginx/
Longhorn, "Longhorn Documentation," longhorn.io. Accessed: Dec. 16, 2025. [Online]. Available: https://longhorn.io/docs/
Josh Noll, "Installing Longhorn on Talos With Helm," joshrnoll.com. Accessed: Dec. 16, 2025. [Online]. Available: https://joshrnoll.com/installing-longhorn-on-talos-with-helm/