Homelab PKI: Caddy CA + HashiCorp Vault + cert-manager on Kubernetes

This post documents how to set up a proper PKI for a homelab environment by importing an existing Caddy CA into HashiCorp Vault, and wiring it up with cert-manager on Kubernetes so that services automatically get trusted TLS certificates.

Overview

The trust chain we're building: Caddy Root CA → Vault PKI → cert-manager → Kubernetes Secret (tls.crt / tls.key)

Assumptions:

- Caddy is running as a reverse proxy and already has an internal CA
- Caddy is running outside the Kubernetes cluster on a dedicated host
- Vault is running outside the Kubernetes cluster (e.g. in Docker)
- cert-manager is installed in the Kubernetes cluster
- Internal domain is fritz.box, my public domain is dannihome.de

Step 1: Locate Caddy's CA Certificate and Key

Caddy stores its internal CA here by default: ...
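Once Vault serves the imported CA, the cert-manager side of the chain can be sketched as a ClusterIssuer of type `vault`. This is a minimal, hedged example, not the post's exact config: the Vault address, PKI role name (`pki/sign/fritz-box`), and token Secret name are assumptions you would adapt to your setup.

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: vault-issuer
spec:
  vault:
    # Vault runs outside the cluster; hostname is an assumption
    server: https://vault.fritz.box:8200
    # Sign path of a hypothetical PKI role named 'fritz-box'
    path: pki/sign/fritz-box
    # Base64-encoded CA so cert-manager trusts Vault's own TLS cert
    caBundle: "..."
    auth:
      tokenSecretRef:
        name: vault-token
        key: token
```

Certificate resources (or ingress-shim annotations) referencing this issuer would then produce Kubernetes Secrets containing tls.crt / tls.key, completing the chain above.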

March 1, 2026 · 4 min · Jens

Resolving Local Hostnames in k3s Using PiHole and CoreDNS

If you’re running a k3s cluster in your home network like me, you might run into a common problem: your pods can’t resolve local hostnames like keycloak.fritz.box or nas.fritz.box. That’s because CoreDNS inside the cluster doesn’t know about your local DNS server. In my case, I run a PiHole instance on a Raspberry Pi that handles DNS for my home network, including custom hostnames under the .fritz.box domain. Here’s how I made those hostnames available to all pods in my k3s cluster. ...
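The usual mechanism for this in k3s is the coredns-custom ConfigMap, which k3s merges into the CoreDNS config. A minimal sketch, assuming the PiHole listens on 192.168.178.10 (the IP is a placeholder, not from the post):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  # Keys ending in .server are loaded as extra CoreDNS server blocks
  fritzbox.server: |
    fritz.box:53 {
        # Forward all *.fritz.box queries to the PiHole (assumed IP)
        forward . 192.168.178.10
    }
```

After applying the ConfigMap, restarting the CoreDNS deployment makes pods resolve names like keycloak.fritz.box via the PiHole.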

February 16, 2026 · 3 min · Jens

Manage secrets with Vault and K8S

The goal is to replicate values from a Vault secret to K8S. There is a K8S operator called external-secrets (ESO) for this purpose. It can be deployed quite conveniently with Helm:

```shell
helm repo add external-secrets https://charts.external-secrets.io
helm install external-secrets external-secrets/external-secrets -n external-secrets --create-namespace
```

Next, we should first create a test secret in Vault:

```shell
vault kv put secret/foo my-value=mytopsecretvalue
```

On to the manifests! The SecretStore: Connecting Kubernetes to Vault

```yaml
apiVersion: external-secrets.io/v1
kind: SecretStore
metadata:
  name: vault-backend
spec:
  provider:
    vault:
      server: "https://vault.myhomenet.lan"
      path: "secret"
      version: "v2"
      caBundle: "..." # Base64-encoded CA certificate, see explanation below
      auth:
        tokenSecretRef:
          name: "vault-token"
          key: "token"
```

What's happening here? ...
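To actually sync the test secret, the SecretStore is paired with an ExternalSecret resource. A minimal sketch, assuming the SecretStore above and the `secret/foo` entry (the target Secret name `foo-k8s-secret` is a placeholder):

```yaml
apiVersion: external-secrets.io/v1
kind: ExternalSecret
metadata:
  name: foo-sync
spec:
  # How often ESO re-reads the value from Vault
  refreshInterval: 1h
  secretStoreRef:
    name: vault-backend
    kind: SecretStore
  target:
    # Name of the Kubernetes Secret ESO will create (assumed)
    name: foo-k8s-secret
  data:
    - secretKey: my-value
      remoteRef:
        key: foo          # path below the KV mount 'secret'
        property: my-value
```

Once reconciled, `kubectl get secret foo-k8s-secret` should show the replicated value.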

September 15, 2025 · 2 min · Jens

Running a 4 node Kubernetes Cluster on Raspberry Pi

What I wanted to achieve:

- Using 4 of my Raspis for Kubernetes: Part 1
- Learn how to deploy a Spring Boot application: Part 2
- Expose this application to the internet: Part 3

Part 1: Setting up the cluster

For my setup I use 1 Raspi 4 and 3 Raspi 3B. The Raspi 4 serves as master node, the others as client nodes. First I flashed HypriotOS on all 4 Raspis. Before commencing, don't forget to: ...

March 4, 2021 · 1 min · Jens

Removing A Persistent Volume Claim

I fiddled with persistent volume claims (PVCs) on OpenShift. Creating a PVC was no problem, but when I tried to delete it afterwards, it got stuck in the "Terminating" state. Here's what I did to remove it:

```shell
# Log in to OpenShift; the token can be obtained in the web console via 'Copy Login Command'
$ oc login --token=41cxWS0NnARW2zxRCK5p2GQb31VNf7zEz-wuYMdhw1k --server=https://openshift.cluster.host:6443

# Create a pvc
$ oc set volume dc/testpvc --add --type pvc --claim-size=100Mi
info: Generated volume name: volume-s9njq
deploymentconfig.apps.openshift.io/testpvc volume updated

# Check the status
$ oc get pvc -w
NAME        STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-gpvft   Bound    pvc-86cc776c-4190-4b76-bc27-5a8846c71fd8   1Gi        RWO            gp2            15s

# Try to delete it...
$ oc delete pvc/pvc-gpvft
persistentvolumeclaim "pvc-gpvft" deleted

# Check status...it's stuck in 'Terminating'
$ oc get pvc -w
NAME        STATUS        VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-gpvft   Terminating   pvc-86cc776c-4190-4b76-bc27-5a8846c71fd8   1Gi        RWO            gp2            8m29s

# Inspect the manifest...the finalizer is the interesting part
$ oc get pvc pvc-gpvft -o yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  (...)
  finalizers:
  - kubernetes.io/pvc-protection
  (...)
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
  storageClassName: gp2
  volumeMode: Filesystem
  volumeName: pvc-86cc776c-4190-4b76-bc27-5a8846c71fd8
status:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  phase: Bound

# Patch the finalizers away
$ oc patch pvc pvc-gpvft -p '{"metadata":{"finalizers": []}}' --type=merge
persistentvolumeclaim/pvc-gpvft patched

# Check again...aaaand it's gone
$ oc get pvc
No resources found in test-space namespace.
```

September 2, 2020 · 2 min · Jens