Homelab
Plex Intel GPU Transcoding
Deploying Plex Media Server on Kubernetes with Intel Arc GPU Hardware Transcoding
Overview
This article deploys Plex Media Server with Intel Arc GPU hardware transcoding, one of the primary goals of the entire homelab.
| Tip: | Having trouble? See v0.10.0 for what your setup should look like after completing this article. |
Before You Begin
Prerequisites
- Intel Arc Kubernetes DRA completed
- Plex account at plex.tv
Why This Approach
GPU access: The official Plex Helm chart doesn't support DRA (Dynamic Resource Allocation). Instead, we use hostPath volume mounts[^1] to expose `/dev/dri` directly to the container.
Storage strategy:
- Config data: Longhorn distributed storage (replicated, backed up)
- Media files: Longhorn for initial testing, NFS from NAS for bulk media later
If migrating from a previous setup, you may need to restore media from backups.
What's Not In Scope
DRA GPU Support: The current hostPath approach works but has limitations. Dynamic Resource Allocation (DRA)[^2] provides a better solution:
| Capability | hostPath (current) | DRA (future) |
|---|---|---|
| Resource accounting | No | Yes |
| Scheduling coordination | No | Yes |
| GPU constraints | No | Yes |
| Helm chart support | Yes | Requires forked chart |
Why Not DRA Now? The official Plex Helm chart doesn't support DRA because it lacks:
- `extraPodSpec` for pod-level `resourceClaims`
- `extraContainerSpec` for container-level `resources.claims`
DRA requires both:
```yaml
spec:
  template:
    spec:
      resourceClaims:              # Pod level
        - name: gpu
          resourceClaimTemplateName: intel-gpu
      containers:
        - resources:
            claims:                # Container level - chart doesn't expose this
              - name: gpu
```

Future: Forked Chart Approach. To enable DRA, fork the Plex chart and add:
- `values.yaml` - Add `dra.enabled` and `dra.resourceClaimTemplateName` values
- `statefulset.yaml` - Add conditional `resourceClaims` blocks
- GitHub Pages - Host the forked chart as a Helm repo
Then reference the forked chart in HelmRelease with DRA enabled. This provides proper Kubernetes-native GPU allocation via ResourceClaims.
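As a sketch of where that forked chart would land, the cluster side would pair a ResourceClaimTemplate with the new chart values. Everything below is illustrative: the API version, the `gpu.intel.com` device class name (check what the Intel DRA driver actually registers), and the `dra.*` value keys don't exist in the current chart.

```yaml
# Hypothetical: the ResourceClaimTemplate that dra.resourceClaimTemplateName would reference
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaimTemplate
metadata:
  name: intel-gpu
  namespace: plex
spec:
  spec:
    devices:
      requests:
        - name: gpu
          deviceClassName: gpu.intel.com   # assumed DeviceClass from the Intel DRA driver
---
# Hypothetical: values the forked chart would accept in the HelmRelease
values:
  dra:
    enabled: true
    resourceClaimTemplateName: intel-gpu
```

With this in place, the scheduler (not a hostPath mount) would decide which node's GPU satisfies the claim.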
Bulk Media via NFS: For larger media libraries, NFS from a dedicated NAS is recommended:
- Talos has no SSH - can't rsync directly to nodes
- Longhorn PVCs aren't accessible as directories outside pods
- NFS mountable from any device (Mac, cluster pods, personal devices)
- Separates compute (cluster) from storage (NAS)
This article uses Longhorn for initial testing; migrating media to NFS is a future enhancement.
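When that migration happens, the media PVC could be swapped for a statically provisioned NFS-backed volume. A minimal sketch, assuming a NAS at a placeholder address exporting a placeholder path (substitute your own):

```yaml
# Sketch: static NFS PersistentVolume + claim for bulk media
apiVersion: v1
kind: PersistentVolume
metadata:
  name: plex-media-nfs
spec:
  capacity:
    storage: 4Ti
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  nfs:
    server: 192.168.1.40          # placeholder NAS IP
    path: /volume1/media          # placeholder export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: plex-media-nfs
  namespace: plex
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""            # bind to the static PV above, not a provisioner
  volumeName: plex-media-nfs
  resources:
    requests:
      storage: 4Ti
```

ReadWriteMany is the point: unlike the Longhorn RWO volume below, an NFS-backed claim can be mounted by multiple pods at once (Plex plus any future media tooling).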
Verify GPU
Before deploying Plex, confirm GPUs are available on nodes.
DRI (Direct Rendering Infrastructure)
```shell
talosctl -n 192.168.1.30 ls /dev/dri
```

Expected: card0 and renderD128 devices.
Create Plex Manifests
Namespace
k8s/apps/plex/namespace.yaml:
```yaml
---
apiVersion: v1
kind: Namespace
metadata:
  name: plex
  labels:
    pod-security.kubernetes.io/enforce: privileged
    pod-security.kubernetes.io/audit: privileged
    pod-security.kubernetes.io/warn: privileged
```

| Note: | Plex needs privileged access for the GPU hostPath mount. |
PersistentVolumeClaims
Longhorn storage for Plex config and media.
k8s/apps/plex/pvc.yaml:
```yaml
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: plex-config
  namespace: plex
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 50Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: plex-media
  namespace: plex
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 500Gi
```

| Note: | Adjust the plex-media size based on your storage needs. The 2TB NVMe provides ample room. |
Claim Token Secret
The Plex claim token links your server to your Plex account.
k8s/apps/plex/secret.sops.yaml (before encryption):
```yaml
---
apiVersion: v1
kind: Secret
metadata:
  name: plex-claim
  namespace: plex
type: Opaque
stringData:
  claim: "claim-XXXXXXXX" # Replace with your token from https://www.plex.tv/claim/
```

| Important: | Claim tokens expire in 4 minutes. Generate one right before encrypting. |
Encrypt Secret
```shell
sops --encrypt --in-place k8s/apps/plex/secret.sops.yaml
```

After initial Plex setup completes, you can delete this secret (the claim is only used once).
HelmRelease
Using the official Plex chart[^3] with GPU access via hostPath. The claim token is injected from the SOPS secret via valuesFrom.
k8s/apps/plex/helmrelease.yaml:
```yaml
---
apiVersion: source.toolkit.fluxcd.io/v1
kind: HelmRepository
metadata:
  name: plex
  namespace: plex
spec:
  interval: 24h
  url: https://raw.githubusercontent.com/plexinc/pms-docker/gh-pages
---
apiVersion: helm.toolkit.fluxcd.io/v2
kind: HelmRelease
metadata:
  name: plex
  namespace: plex
spec:
  interval: 30m
  chart:
    spec:
      chart: plex-media-server
      version: "0.x"
      sourceRef:
        kind: HelmRepository
        name: plex
  valuesFrom:
    - kind: Secret
      name: plex-claim
      valuesKey: claim
      targetPath: extraEnv.PLEX_CLAIM
  values:
    extraEnv:
      TZ: "America/Denver"
    # Service - MetalLB assigns IP
    service:
      type: LoadBalancer
      port: 32400
    # Config storage
    pms:
      configExistingClaim: plex-config
    # Media and GPU volumes
    extraVolumes:
      - name: media
        persistentVolumeClaim:
          claimName: plex-media
      - name: dev-dri
        hostPath:
          path: /dev/dri
          type: Directory
    extraVolumeMounts:
      - name: media
        mountPath: /data/media
      - name: dev-dri
        mountPath: /dev/dri
```

| Claim token: | Injected from the SOPS-encrypted secret via valuesFrom. Flux decrypts the secret automatically. |
| GPU access: | The /dev/dri hostPath mount exposes Intel GPU devices to the container for hardware transcoding. |
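One common snag with the hostPath approach: inside the container, /dev/dri/renderD128 is owned by the host's render group, and the Plex process must belong to that group to open it. If transcodes silently fall back to software, adding the render group's GID via supplementalGroups is the usual fix. This is a sketch only: whether this chart passes a pod-level securityContext through is an assumption to verify against its values.yaml, and the GID is host-specific (993 below is a placeholder - check the group on /dev/dri/renderD128 on the Talos node).

```yaml
values:
  # Hypothetical key - confirm the chart exposes a pod securityContext before relying on this
  securityContext:
    supplementalGroups:
      - 993   # placeholder: the render group GID on the Talos host
```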
Kustomization
k8s/apps/plex/kustomization.yaml:
```yaml
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - namespace.yaml
  - pvc.yaml
  - secret.sops.yaml
  - helmrelease.yaml
```

Apps Kustomization
Add plex to k8s/apps/kustomization.yaml:
```yaml
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - plex
```

Deploy Plex
Commit Changes
```shell
cd ~/homelab

# Get claim token NOW (4 minute window)
# Visit: https://www.plex.tv/claim/
# Update k8s/apps/plex/secret.sops.yaml with the token

git add k8s/apps/
git commit -m "feat(plex): add Plex with GPU transcoding"
git push
```

Reconcile Flux
```shell
flux reconcile source git flux-system
flux reconcile kustomization sync
```

Verify Deployment
Plex Resources
```shell
# Check HelmRelease status
flux get helmreleases -n plex

# Check pods
kubectl get pods -n plex

# Check service (note the EXTERNAL-IP)
kubectl get svc -n plex

# View logs
kubectl logs -n plex -l app.kubernetes.io/name=plex-media-server -f
```

Create Media Directories
Create the library directories before accessing Plex:
```shell
kubectl exec -n plex -it plex-plex-media-server-0 -- mkdir -p /data/media/movies /data/media/tv /data/media/music
```

These persist in the Longhorn PVC, so this only needs to be done once.
Access Plex
Get the LoadBalancer IP:
```shell
kubectl get svc -n plex plex-plex-media-server -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
```

Access at: http://<EXTERNAL-IP>:32400/web
Configure Plex
Initial Setup
- Sign in with your Plex account
- Name your server: e.g., "Homelab"
- Add libraries:
  - Movies: /data/media/movies
  - TV Shows: /data/media/tv
  - Music: /data/media/music
- Verify hardware transcoding (enabled by default):
  - Settings → Transcoder → "Use hardware acceleration when available" should be checked
Verify Hardware Transcoding
Hardware transcoding[^4] offloads video encoding/decoding to the GPU.
GPU Access
```shell
kubectl exec -n plex -it plex-plex-media-server-0 -- ls -la /dev/dri
```

Expected:

```
card0
renderD128
```

Transcoding
- Play a video in Plex
- Change quality to force transcode (e.g., 720p if source is 4K)
- Check dashboard - should show "(hw)" indicator for hardware transcoding
From CLI:
```shell
kubectl logs -n plex -l app.kubernetes.io/name=plex-media-server | grep -i transcode
```

Add Media
Copy via kubectl
For initial testing, copy media directly via kubectl:
```shell
# Copy a file
kubectl cp /path/to/movie.mkv plex/plex-plex-media-server-0:/data/media/movies/
```

This works for small files but is slow for bulk media. See the NFS approach in What's Not In Scope.
Next Steps
With Plex running, deploy more services.
See: Factorio Kubernetes Server
Resources
Footnotes
[^1]: Reddit, "Plex on Kubernetes with Intel iGPU passthrough," reddit.com. Accessed: Dec. 16, 2025. [Online]. Available: https://www.reddit.com/r/selfhosted/comments/121vb07/plex_on_kubernetes_with_intel_igpu_passthrough/
[^2]: Kubernetes, "Dynamic Resource Allocation," kubernetes.io. Accessed: Dec. 16, 2025. [Online]. Available: https://kubernetes.io/docs/concepts/scheduling-eviction/dynamic-resource-allocation/
[^3]: Plex, "Plex Media Server Docker," github.com. Accessed: Dec. 16, 2025. [Online]. Available: https://github.com/plexinc/pms-docker
[^4]: Plex, "Using Hardware-Accelerated Streaming," support.plex.tv. Accessed: Dec. 16, 2025. [Online]. Available: https://support.plex.tv/articles/115002178853-using-hardware-accelerated-streaming/