When you deploy applications on Kubernetes, your pods often need to communicate with other services. To do this securely, Kubernetes provides a built-in identity and authorization system through Service Accounts and RBAC (Role-Based Access Control). This post breaks down what these are, why they matter, and how to use them to make authenticated calls between workloads — including across namespaces.
By default, a pod has no identity beyond its IP address. For one workload to authenticate to another, or to the Kubernetes API, there must be a trusted identity attached to the pod. This is where Service Accounts come in.
A ServiceAccount (SA) is an identity assigned to a pod. Every namespace automatically gets a default SA; if you create a pod or Deployment without specifying a service account, it runs as default.
Each ServiceAccount comes with a set of credentials that Kubernetes mounts inside its pods at /var/run/secrets/kubernetes.io/serviceaccount/:

ca.crt → the cluster's public CA certificate
namespace → the namespace of the pod
token → a JWT used for authentication

The token is a JWT (JSON Web Token) signed by the Kubernetes API server. It is your pod's identity and is automatically rotated and managed by Kubernetes.
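These files can be read like any other mounted files. Here is a minimal sketch of loading them; the temporary directory below just stands in for the real mount path so the example also runs outside a cluster:

```python
import pathlib
import tempfile

SA_DIR = "/var/run/secrets/kubernetes.io/serviceaccount"

def load_sa_credentials(base: str = SA_DIR) -> dict:
    """Read the ServiceAccount credentials Kubernetes mounts into a pod."""
    path = pathlib.Path(base)
    return {
        "token": (path / "token").read_text().strip(),
        "namespace": (path / "namespace").read_text().strip(),
        "ca_cert": (path / "ca.crt").read_text(),
    }

# Outside a cluster, fake the mount with a temp directory for demonstration
with tempfile.TemporaryDirectory() as d:
    for name, value in [("token", "eyJhbGciOi..."),
                        ("namespace", "client-ns"),
                        ("ca.crt", "<PEM data>")]:
        (pathlib.Path(d) / name).write_text(value)
    print(load_sa_credentials(d)["namespace"])  # client-ns
```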
Role-Based Access Control defines what a ServiceAccount can do. Without RBAC as the cluster's authorization mode, every authenticated identity would have unrestricted API access; with it, an SA starts with almost no permissions until a Role is bound to it.
RBAC is built from three main components:

Role / ClusterRole → a set of permissions: which verbs (get, list, create, …) are allowed on which resources
RoleBinding / ClusterRoleBinding → attaches a Role to one or more subjects
Subjects → the users, groups, or ServiceAccounts that receive those permissions
Using ServiceAccounts with RBAC provides:

a distinct, verifiable identity for every workload
least-privilege access to the Kubernetes API
a clear picture of which workload is allowed to do what
This is the core of Kubernetes in-cluster security and the foundation for secure workload-to-workload communication.
Example scenario: a curl pod in client-ns makes an authenticated call to a receiver service in server-ns. Start by creating the two namespaces:
kubectl create namespace client-ns
kubectl create namespace server-ns
This will be the service we're calling. It uses a simple Python container that prints each incoming request along with its headers.
File: receiver-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: receiver-app
  namespace: server-ns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: receiver-app
  template:
    metadata:
      labels:
        app: receiver-app
    spec:
      containers:
      - name: receiver
        image: python:3.11-slim
        # Inline server that prints each request's headers to the pod logs
        # (plain "python3 -m http.server" would log only the request line)
        command:
        - python3
        - -c
        - |
          from http.server import BaseHTTPRequestHandler, HTTPServer
          class Handler(BaseHTTPRequestHandler):
              def do_GET(self):
                  print(self.headers, flush=True)
                  self.send_response(200)
                  self.end_headers()
          HTTPServer(("", 8080), Handler).serve_forever()
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: receiver-svc
  namespace: server-ns
spec:
  selector:
    app: receiver-app
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
Apply it:
kubectl apply -f receiver-deploy.yaml
This creates a Service that is reachable in-cluster at receiver-svc.server-ns.svc.cluster.local.
This app will simulate another microservice making a request inside the cluster.
File: curl-deploy.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: curl-sa
  namespace: client-ns
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: curl-role
  namespace: client-ns
rules:
- apiGroups: [""]
  resources: ["pods", "services"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: curl-rolebinding
  namespace: client-ns
subjects:
- kind: ServiceAccount
  name: curl-sa
  namespace: client-ns
roleRef:
  kind: Role
  name: curl-role
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: curl-app
  namespace: client-ns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: curl-app
  template:
    metadata:
      labels:
        app: curl-app
    spec:
      serviceAccountName: curl-sa
      containers:
      - name: curl
        image: curlimages/curl:8.8.0
        command: ["sleep", "3600"]
Apply it:
kubectl apply -f curl-deploy.yaml
Even though this example just uses curl, it’s best practice for every app in Kubernetes to use its own ServiceAccount instead of the default one.
In production, this prevents accidental privilege escalations or unauthorized API calls between namespaces.
Here:

curl-sa gives the client pod its own identity
curl-role grants read-only access to pods and services in client-ns
curl-rolebinding ties the role to the ServiceAccount

This mirrors real-world microservice security principles: least privilege and explicit access.
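Note that curl-role only grants access inside client-ns. If curl-sa also needed to read objects in server-ns, you would create a Role and RoleBinding there too; a RoleBinding's subject can reference a ServiceAccount from another namespace. A hypothetical sketch (the names here are illustrative, not part of the demo):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: curl-role-server        # hypothetical name
  namespace: server-ns
rules:
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: curl-rolebinding-server # hypothetical name
  namespace: server-ns
subjects:
- kind: ServiceAccount
  name: curl-sa
  namespace: client-ns          # the subject lives in another namespace
roleRef:
  kind: Role
  name: curl-role-server
  apiGroup: rbac.authorization.k8s.io
```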
First, follow the receiver's logs in real time:
kubectl logs -n server-ns -f deploy/receiver-app
Now let’s exec into the curl-app pod and call the receiver:
kubectl exec -it -n client-ns deploy/curl-app -- sh
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl -v -H "Authorization: Bearer $TOKEN" http://receiver-svc.server-ns.svc
Note: Using the full FQDN (receiver-svc.server-ns.svc.cluster.local) can sometimes fail in minimal container images (like BusyBox or curlimages/curl) due to resolver quirks, so the shorter receiver-svc.server-ns.svc name is often more reliable for in-cluster communication.
You’ll see something like this in the logs:
10.1.0.15 - - [08/Nov/2025 13:21:40] "GET / HTTP/1.1" 200 -
Host: receiver-svc.server-ns.svc
User-Agent: curl/8.8.0
Accept: */*
Authorization: Bearer eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9...
Paste the JWT into jwt.io and you'll see claims like:
{
  "aud": [
    "https://kubernetes.default.svc.cluster.local",
    "k3s"
  ],
  "exp": 1794149282,
  "iss": "https://kubernetes.default.svc.cluster.local",
  "kubernetes.io": {
    "namespace": "client-ns",
    "serviceaccount": {
      "name": "curl-sa",
      "uid": "4a407f99-d81c-4c65-b5d9-50214ec1ba38"
    }
  },
  "sub": "system:serviceaccount:client-ns:curl-sa"
}
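You can also decode the payload yourself without jwt.io; it is just base64url-encoded JSON. A sketch using only the standard library (the token below is a synthetic, unsigned stand-in for a real ServiceAccount token):

```python
import base64
import json

def decode_jwt_payload(token: str) -> dict:
    """Decode the middle (claims) segment of a JWT without verifying it."""
    payload = token.split(".")[1]
    # base64url tokens omit padding; add it back before decoding
    payload += "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(payload))

# Build a fake header.payload.signature token carrying the claims shown above;
# a real token would end in an RS256 signature from the API server.
claims = {"sub": "system:serviceaccount:client-ns:curl-sa",
          "kubernetes.io": {"namespace": "client-ns"}}
header = base64.urlsafe_b64encode(b'{"alg":"RS256"}').decode().rstrip("=")
body = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode().rstrip("=")
fake_token = f"{header}.{body}.signature"

print(decode_jwt_payload(fake_token)["sub"])
# system:serviceaccount:client-ns:curl-sa
```

Never trust claims decoded this way for authorization: decoding is not verification, which is where the JWKS endpoint below comes in.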
The sub claim, system:serviceaccount:client-ns:curl-sa, tells a receiving service exactly which workload is calling. That is how trust between services is enforced in Kubernetes.
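On the receiving side, once a token's signature has been verified, authorization can be as simple as checking the sub claim against an allowlist. A hypothetical sketch (ALLOWED_CALLERS is an assumption for illustration, not part of the demo):

```python
# Hypothetical check a receiver might run after verifying the token's
# signature against the cluster's JWKS.
ALLOWED_CALLERS = {"system:serviceaccount:client-ns:curl-sa"}

def is_authorized(claims: dict) -> bool:
    """Allow only tokens whose subject is on the allowlist."""
    return claims.get("sub") in ALLOWED_CALLERS

print(is_authorized({"sub": "system:serviceaccount:client-ns:curl-sa"}))  # True
print(is_authorized({"sub": "system:serviceaccount:other-ns:other-sa"}))  # False
```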
Kubernetes exposes its signing keys at the OpenID Connect discovery endpoint if your cluster has OIDC enabled. You can find the jwks_uri by executing the following:
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt -H "Authorization: Bearer $TOKEN" https://kubernetes.default.svc/.well-known/openid-configuration
This returns JSON containing the jwks_uri:
{
  "issuer": "https://kubernetes.default.svc.cluster.local",
  "jwks_uri": "https://172.21.0.3:6443/openid/v1/jwks"
}
The jwks_uri can be used by apps to fetch public keys to validate JWTs locally.
Note: Kubernetes exposes the API server as a ClusterIP service called kubernetes in the default namespace, so its DNS name is kubernetes.default.svc. All pods in the cluster can reach the API server through either the ClusterIP or the DNS name. Using https://kubernetes.default.svc/openid/v1/jwks is preferred in-cluster because it is DNS-based and avoids hardcoding the API server's IP.