If you’re running a k3s cluster in your home network like I do, you might run into a common problem: your pods can’t resolve local hostnames like keycloak.fritz.box or nas.fritz.box. That’s because CoreDNS inside the cluster doesn’t know about your local DNS server.

In my case, I run a PiHole instance on a Raspberry Pi that handles DNS for my home network, including custom hostnames under the .fritz.box domain. Here’s how I made those hostnames available to all pods in my k3s cluster.

The Problem

By default, k3s ships with CoreDNS as the cluster DNS. It resolves:

  • Kubernetes-internal names (my-service.default.svc.cluster.local)
  • Public domains via the upstream DNS in /etc/resolv.conf

But it has no idea about your local .fritz.box (or any other private) domain. Any pod trying to reach a local hostname will get an NXDOMAIN error.

In my case, this caused a PKIX certificate error because my Spring Boot application couldn’t reach the Keycloak OIDC issuer at keycloak.fritz.box.

The Solution

k3s makes this easy. Its CoreDNS configuration automatically imports custom server blocks from a ConfigMap called coredns-custom in the kube-system namespace. No need to touch the main CoreDNS ConfigMap.

Create the ConfigMap:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  pihole.server: |
    fritz.box:53 {
      errors
      cache 30
      forward . 192.168.178.100
    }
EOF

Replace 192.168.178.100 with the IP address of your PiHole instance.
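If you prefer keeping manifests as files instead of piping a heredoc, the same ConfigMap can be rendered with the PiHole address pulled from a shell variable — a small sketch, where PIHOLE_IP and the coredns-custom.yaml filename are my own placeholders:

```shell
# Placeholder: set this to your local DNS server's address.
PIHOLE_IP="${PIHOLE_IP:-192.168.178.100}"

# Render the ConfigMap manifest with the address substituted in.
# Note: the heredoc delimiter is unquoted so ${PIHOLE_IP} expands.
cat > coredns-custom.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  pihole.server: |
    fritz.box:53 {
      errors
      cache 30
      forward . ${PIHOLE_IP}
    }
EOF

# Apply it (requires cluster access):
# kubectl apply -f coredns-custom.yaml
```

Keeping the manifest in a file makes a later change of the PiHole IP a one-variable edit.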

Then restart CoreDNS to pick up the new config, and wait for the rollout to finish:

kubectl -n kube-system rollout restart deployment coredns
kubectl -n kube-system rollout status deployment coredns

How It Works

The k3s CoreDNS Corefile ends with:

import /etc/coredns/custom/*.server

This line tells CoreDNS to load any .server files found under /etc/coredns/custom as additional server blocks — and k3s mounts the coredns-custom ConfigMap at exactly that path. Our pihole.server entry creates a dedicated DNS zone for fritz.box that forwards all queries to PiHole.

The main .:53 block continues to handle everything else (Kubernetes-internal names, public domains).
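Put together, the effective configuration looks roughly like this (abbreviated — the real k3s Corefile carries more plugins in the main block):

```
.:53 {
    # handles cluster-internal names and public domains
    kubernetes cluster.local in-addr.arpa ip6.arpa
    forward . /etc/resolv.conf
    cache 30
}

fritz.box:53 {
    # imported from coredns-custom; all fritz.box queries go to PiHole
    errors
    cache 30
    forward . 192.168.178.100
}
```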

Verifying

You can verify it works by running a quick DNS lookup from inside the cluster:

kubectl run dnstest --rm -it --image=busybox --restart=Never -- nslookup keycloak.fritz.box

You should see PiHole’s response with the correct IP address.

Check the CoreDNS logs to confirm the zone was loaded:

kubectl -n kube-system logs -l k8s-app=kube-dns | head -20

Look for fritz.box.:53 in the output — that confirms the server block is active.

Notes

  • This approach works for any local domain, not just fritz.box. Add multiple .server entries in the ConfigMap if needed.
  • The cache 30 directive caches responses for 30 seconds, reducing load on PiHole.
  • If your PiHole IP changes, update the ConfigMap and restart CoreDNS.
  • This is specific to k3s. For vanilla Kubernetes, you would edit the main coredns ConfigMap directly.
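For example, covering a second private domain only takes another key in the same ConfigMap — a sketch, where home.arpa and the 192.168.178.101 resolver are made-up stand-ins for your own setup:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  pihole.server: |
    fritz.box:53 {
      errors
      cache 30
      forward . 192.168.178.100
    }
  lan.server: |
    home.arpa:53 {
      errors
      cache 30
      forward . 192.168.178.101
    }
```

Each key ending in .server becomes its own server block, so the two zones stay independent.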