Questions
Services can't communicate across namespaces. Debug this networking issue.
The Scenario
You’re the Platform Engineer at a SaaS company with a microservices architecture. Your cluster has multiple namespaces for isolation:
- frontend namespace: Web application
- api namespace: Backend API services
- data namespace: Data processing services
- shared namespace: Shared services (auth, logging, monitoring)
Everything was working fine until this morning. Now developers are reporting:
Error: Failed to connect to auth-service
ECONNREFUSED: Connection refused at auth-service:8080
What’s happening:
- The frontend-app (in the frontend namespace) cannot connect to auth-service (in the shared namespace)
- The API service (in the api namespace) also cannot connect to auth-service
- Services within the same namespace can communicate fine
- No recent code deployments
The auth-service is running and healthy:
$ kubectl get pods -n shared
NAME READY STATUS RESTARTS AGE
auth-service-7d9f8c-xyz 1/1 Running 0 2h
$ kubectl get svc -n shared
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
auth-service ClusterIP 10.100.20.50 <none> 8080/TCP 30d
The Challenge
Debug this cross-namespace networking issue. Walk through your debugging process:
- What’s the first command you run?
- How do you verify DNS resolution is working?
- What are the potential causes?
- How do you fix it?
A junior engineer might restart all pods in all namespaces, redeploy services hoping that fixes the issue, check firewall rules at the OS level, or recreate the namespaces, losing all data. None of this works: restarting pods doesn't fix DNS configuration, redeploying doesn't address the root cause, Kubernetes networking isn't controlled by OS firewalls, and recreating namespaces causes massive disruption.
A senior engineer debugs the network systematically, starting with the service DNS format. The issue is likely how the service is being addressed, since Kubernetes DNS has specific formats: short names like auth-service only resolve within the same namespace, while the FQDN auth-service.shared.svc.cluster.local works cross-namespace. Test DNS resolution with nslookup to confirm that short names fail but FQDNs work. Then check NetworkPolicies, which can block cross-namespace traffic even when DNS is correct.
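As a single first command, a minimal sketch of that DNS check run against the failing pod's Deployment (the Deployment name is taken from the pod name above; the image is assumed to ship nslookup, as busybox- or alpine-based images do):

# Short name: expected to fail from the frontend namespace
kubectl exec -n frontend deploy/frontend-app -- nslookup auth-service

# FQDN: expected to succeed
kubectl exec -n frontend deploy/frontend-app -- nslookup auth-service.shared.svc.cluster.local

If the first lookup fails and the second succeeds, the problem is the service name the application uses, not the network itself.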
Step 1: Verify Service DNS Format
# WRONG - only works within same namespace
curl http://auth-service:8080
# CORRECT - fully qualified domain name (FQDN)
curl http://auth-service.shared.svc.cluster.local:8080

Kubernetes DNS Format:
<service-name>.<namespace>.svc.cluster.local

Test from frontend pod:
kubectl exec -it frontend-app-xyz -n frontend -- sh
# This will FAIL (different namespace)
$ curl http://auth-service:8080
curl: (6) Could not resolve host: auth-service
# This will WORK (FQDN)
$ curl http://auth-service.shared.svc.cluster.local:8080
{"status":"ok"}Step 2: Test DNS Resolution
# Exec into a pod in the frontend namespace
kubectl exec -it frontend-app-xyz -n frontend -- sh
# Test DNS resolution
$ nslookup auth-service
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
nslookup: can't resolve 'auth-service'
# Test FQDN resolution
$ nslookup auth-service.shared.svc.cluster.local
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: auth-service.shared.svc.cluster.local
Address 1: 10.100.20.50 auth-service.shared.svc.cluster.local

This confirms DNS works, but the application is using the wrong service name.
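The reason short names only resolve inside the pod's own namespace is the DNS search path that the kubelet writes into every pod's /etc/resolv.conf. A quick way to see it from the same shell (output is illustrative; the nameserver IP and cluster domain can differ per cluster):

$ cat /etc/resolv.conf
nameserver 10.96.0.10
search frontend.svc.cluster.local svc.cluster.local cluster.local
options ndots:5

From a pod in frontend, the unqualified name auth-service expands to auth-service.frontend.svc.cluster.local, which does not exist, while auth-service.shared matches the svc.cluster.local search entry and resolves correctly.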
Step 3: Check Network Policies
Even with correct DNS, traffic might be blocked by NetworkPolicies:
# Check if NetworkPolicies exist
kubectl get networkpolicies -n shared
kubectl get networkpolicies -n frontend
# Describe the policy
kubectl describe networkpolicy -n shared

Root Causes and Solutions
Root Cause #1: Incorrect Service DNS Name (Most Common)
Problem: Application uses short name auth-service instead of FQDN.
// ❌ WRONG - Only works in same namespace
const authUrl = 'http://auth-service:8080';
// ✅ CORRECT - Works cross-namespace
const authUrl = 'http://auth-service.shared.svc.cluster.local:8080';
// ✅ ALSO CORRECT - Shorter form (the pod's DNS search path supplies svc.cluster.local)
const authUrl = 'http://auth-service.shared:8080';
Fix: Update application configuration
# ConfigMap for frontend app
apiVersion: v1
kind: ConfigMap
metadata:
  name: frontend-config
  namespace: frontend
data:
  AUTH_SERVICE_URL: "http://auth-service.shared.svc.cluster.local:8080"
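The ConfigMap only takes effect once the frontend Deployment actually consumes it. A minimal sketch of wiring it in as environment variables (the Deployment name matches the pods seen earlier; the image and container name are placeholders):

# Frontend deployment consuming the ConfigMap
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-app
  namespace: frontend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: frontend-app
  template:
    metadata:
      labels:
        app: frontend-app
    spec:
      containers:
        - name: frontend
          image: company/frontend-app:v1.0
          envFrom:
            # Injects AUTH_SERVICE_URL from the ConfigMap above
            - configMapRef:
                name: frontend-config

Because environment variables are read at container start, changing the ConfigMap requires a rollout (kubectl rollout restart deployment/frontend-app -n frontend) before the pods pick up the new URL.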
Root Cause #2: NetworkPolicy Blocking Traffic
Problem: NetworkPolicy blocks cross-namespace traffic.
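A policy like the following, added recently to the shared namespace, would produce exactly these symptoms: it admits ingress only from pods in the same namespace, so in-namespace traffic keeps working while cross-namespace calls are dropped. This is a hypothetical example of the kind of policy to look for, not the actual one in the cluster:

# Example of a policy that breaks cross-namespace traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: same-namespace-only
  namespace: shared
spec:
  podSelector: {}
  policyTypes:
    - Ingress
  ingress:
    - from:
        # A bare podSelector matches pods in this namespace only
        - podSelector: {}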
Solution: Allow ingress from specific namespaces
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-frontend-and-api
  namespace: shared
spec:
  podSelector:
    matchLabels:
      app: auth-service
  policyTypes:
    - Ingress
  ingress:
    # Allow from frontend namespace
    - from:
        - namespaceSelector:
            matchLabels:
              name: frontend
      ports:
        - protocol: TCP
          port: 8080
    # Allow from api namespace
    - from:
        - namespaceSelector:
            matchLabels:
              name: api
      ports:
        - protocol: TCP
          port: 8080
    # Allow from within same namespace
    - from:
        - podSelector: {}
Important: Namespaces must have labels for namespaceSelector to work:
# Add labels to namespaces
kubectl label namespace frontend name=frontend
kubectl label namespace api name=api
kubectl label namespace shared name=shared
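On recent Kubernetes versions (the kubernetes.io/metadata.name label is set on every namespace automatically, GA since v1.22), you can select on that built-in label instead of maintaining your own. Assuming a cluster new enough to have it, the selector would look like:

- namespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: frontend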
Complete Working Example
Correct setup for cross-namespace communication:
# 1. Shared namespace with labels
apiVersion: v1
kind: Namespace
metadata:
  name: shared
  labels:
    name: shared
---
# 2. Auth service in shared namespace
apiVersion: v1
kind: Service
metadata:
  name: auth-service
  namespace: shared
spec:
  selector:
    app: auth-service
  ports:
    - name: http
      port: 8080
      targetPort: 8080
---
# 3. Auth service deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-service
  namespace: shared
spec:
  replicas: 3
  selector:
    matchLabels:
      app: auth-service
  template:
    metadata:
      labels:
        app: auth-service
    spec:
      containers:
        - name: auth
          image: company/auth-service:v1.0
          ports:
            - containerPort: 8080
---
# 4. NetworkPolicy allowing cross-namespace traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-app-namespaces
  namespace: shared
spec:
  podSelector:
    matchLabels:
      app: auth-service
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              name: frontend
        - namespaceSelector:
            matchLabels:
              name: api
---
# 5. Frontend app using FQDN
apiVersion: v1
kind: ConfigMap
metadata:
  name: frontend-config
  namespace: frontend
data:
  # Use FQDN for cross-namespace communication
  AUTH_URL: "http://auth-service.shared.svc.cluster.local:8080"
Testing Cross-Namespace Communication
# Test DNS resolution from frontend namespace
kubectl run test-pod --rm -it --image=busybox -n frontend -- sh
# Test short name (will fail)
$ nslookup auth-service
** server can't find auth-service: NXDOMAIN
# Test FQDN (will succeed)
$ nslookup auth-service.shared.svc.cluster.local
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: auth-service.shared.svc.cluster.local
Address 1: 10.100.20.50
# Test actual HTTP connection
$ wget -O- http://auth-service.shared.svc.cluster.local:8080/health
{"status":"healthy"}
Practice Question
You have a service database in namespace data that needs to be accessed by a pod in namespace backend. Which service URL will work correctly?
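Applying the FQDN rule from above as a self-check: the short name database only resolves from pods inside the data namespace, so from backend the working form is the fully qualified name (the port is whatever the database Service exposes):

http://database.data.svc.cluster.local:<port>

The shorter database.data form also works, since the pod's DNS search path appends svc.cluster.local.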