Learn how to deploy a complete application stack on Kubernetes, from Pods, Deployments, and Services to ConfigMaps and Secrets. Discover how to expose apps externally via Ingress, manage traffic with port forwarding and external IPs, and handle configuration securely, all in a practical, hands-on way.
Kubernetes (K8s) is a container orchestrator: it runs and manages containers for you. Its main components are:
| Component | What it does |
|---|---|
| Control Plane | The brain (API server, scheduler, controllers) |
| Worker Nodes | Where Pods (containers) actually run |
| etcd | Cluster state store (configs, secrets, cluster metadata) |
| Kubelet & container runtime | Local node agent + runtime (containerd, Docker) |
kubectl is your remote control for Kubernetes. Use it to inspect, modify, and debug the cluster.
kubectl get pods                     # List Pods in the current namespace
kubectl get deployments              # List Deployments
kubectl get services                 # List Services
kubectl get namespaces               # List namespaces in the cluster
kubectl describe pod my-pod          # Detailed state and events for a Pod
kubectl logs my-pod                  # Container logs from a Pod
kubectl exec -it my-pod -- /bin/sh   # Open an interactive shell inside a Pod
Pod names are generated, so list them first with kubectl get pods -n <namespace>, then use that exact name in kubectl exec or kubectl logs.
Context controls which cluster and user your kubectl talks to. Always check your context before
applying manifests in production clusters.
kubectl config current-context
kubectl config get-contexts
Namespaces are logical project-level divisions inside a cluster (like folders).
kubectl get namespaces
Typical default list:
NAME STATUS AGE
default Active 14m
kube-node-lease Active 14m
kube-public Active 14m
kube-system Active 14m
| Namespace | Purpose |
|---|---|
| default | Where resources go if no namespace specified |
| kube-system | Kubernetes system services (DNS, metrics) |
| kube-public | Generally empty; readable by all |
| kube-node-lease | Used for node heartbeats |
# Create a namespace for this guide's examples
kubectl create namespace curity
For learning and local development you can run Kubernetes in several ways, for example k3d or MicroK8s. Note that MicroK8s ships with its own kubectl. To be sure you're talking to the MicroK8s cluster, prepend microk8s (for example: microk8s kubectl get namespaces). This avoids accidentally using a different kubeconfig/context on your machine.
This guide uses k3d. Create a cluster:
k3d cluster create curity-local
kubectl config get-contexts
Example output:
CURRENT NAME CLUSTER AUTHINFO
* k3d-curity-local k3d-curity-local admin@k3d-curity-local
k3d-user-management-local k3d-user-management-local admin@k3d-user-management-local
kubectl config use-context k3d-user-management-local
kubectl get nodes -o wide
This shows node details including OS, kernel, and container runtime (e.g. containerd://1.x). Entries mentioning k3s and containerd typically indicate a lightweight k3s/k3d cluster running on WSL2 or Docker.
Important: In YAML you can put multiple resource objects in one file separated by three dashes ---. Example: a Deployment and a Service in a single manifest. You can also create separate files (e.g., deployment.yaml and service.yaml).
A Deployment provides a higher-level management layer for Pods. It ensures your application is resilient, scalable, and easy to update. Key benefits include:
Set replicas and Kubernetes will spin up that many Pods and balance traffic across them via the Service. In short: a Deployment gives you self-healing, scaling, rolling updates, and rollbacks, making it the standard way to run apps in Kubernetes.
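For example, rolling updates and rollbacks are one-liners (a sketch using the user-management Deployment defined below; the :v2 tag is hypothetical):

# Rolling update: change the image; Pods are replaced gradually
kubectl set image deployment/user-management-app user-management-app=user-management:v2 -n user-management-app

# Watch the rollout, and roll back if something breaks
kubectl rollout status deployment/user-management-app -n user-management-app
kubectl rollout undo deployment/user-management-app -n user-management-app

The full Deployment manifest: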
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-management-app              # Unique name of the Deployment
  namespace: user-management-app         # Namespace to logically isolate this app from others
spec:
  replicas: 2                            # Number of Pod replicas to maintain
  selector:
    matchLabels:
      app: user-management-app           # The Deployment will manage only Pods that have this label
  template:
    metadata:
      labels:
        app: user-management-app         # This label is added to every Pod created by the Deployment.
                                         # It MUST match the selector above, otherwise the Deployment
                                         # won't recognize its own Pods.
    spec:
      containers:
        - name: user-management-app      # Container name inside each Pod
          image: user-management:latest  # Docker image used by the container
          imagePullPolicy: IfNotPresent  # Pull image only if not present locally
          ports:
            - containerPort: 8082        # Port that the container listens on inside the Pod
              name: http                 # Named port; Services can refer to it by name instead of number
          env:
            - name: EXTERNAL_BACKEND_URL
              value: "http://external-backend-svc:8088"  # Pod will call this Service instead of a raw Windows IP
A Kubernetes Service gives your Pod(s) a stable DNS name and a ClusterIP. Without a Service, Pods get ephemeral IPs, so other services can't reliably reach them. Example in-cluster URL:
http://user-management-svc.user-management-app.svc.cluster.local:8082
apiVersion: v1
kind: Service
metadata:
  name: user-management-svc        # Unique name of the Service
  namespace: user-management-app   # Namespace to logically isolate this app from others
spec:
  selector:
    app: user-management-app       # Matches Pods with label app=user-management-app
  ports:
    - protocol: TCP
      port: 8082                   # The port exposed inside the cluster (cluster-wide virtual IP)
      targetPort: http             # Forwards to the Pod's containerPort named "http" (8082 above)
  type: ClusterIP                  # Default type; exposes the Service on an internal cluster IP
You can apply these manifests using kubectl apply. Since the resources specify a namespace,
you can either rely on that or explicitly specify it during apply:
# Create the namespace first if it doesn't exist
kubectl create namespace user-management-app

# Apply using the namespace in the manifest
kubectl apply -f user-management-deployment.yaml
kubectl apply -f user-management-service.yaml
# Or explicitly specify the namespace
kubectl apply -f user-management-deployment.yaml -n user-management-app
kubectl apply -f user-management-service.yaml -n user-management-app
# Delete Deployment
kubectl delete deployment user-management-app -n user-management-app
# Delete Service
kubectl delete svc user-management-svc -n user-management-app
To restart all Pods in a Deployment (for example, to pick up a new image or configuration):
kubectl rollout restart deployment user-management-app -n user-management-app
# Check rollout status
kubectl rollout status deployment user-management-app -n user-management-app
kubectl scale deployments/user-management-app --replicas=4 -n user-management-app
This command scales the Deployment to 4 replicas (Pods). The associated Service will automatically load-balance requests across all replicas.
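You can confirm the new replica count by listing Pods with the label defined in the Deployment above:

kubectl get pods -n user-management-app -l app=user-management-app
# Expect 4 Pods in Running state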
Create a Service that exposes the Deployment named user-management-app:
kubectl expose deployment/user-management-app --type="NodePort" --port 8082 --name=user-management-svc-nodeport -n user-management-app
This command creates a Service of type NodePort to expose the user-management-app
Deployment. The name of this service is user-management-svc-nodeport.
Kubernetes assigns a port on each Node (in the range 30000–32767), and forwards traffic to port 8082
in the Pods.
kubectl get svc user-management-svc-nodeport -n user-management-app
# Example output:
# NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
# user-management-svc-nodeport   NodePort   10.43.56.235   <none>        8082:31776/TCP   6m
# The "31776" is the NodePort assigned by Kubernetes.
To access your application from outside the cluster, first inspect the nodes with kubectl get nodes -o wide:
kubectl get nodes -o wide
# Example output:
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k3d-user-management-local-server-0   Ready    control-plane,master   16d   v1.31.5+k3s1   172.19.0.4    <none>        K3s v1.31.5+k3s1   6.6.87.2-microsoft-standard-WSL2   containerd://1.7.23-k3s2
The node's INTERNAL-IP (172.19.0.4) is inside the Docker network and usually not reachable from Windows. Use the WSL2 IP instead (from hostname -I), e.g., 172.26.41.222:
hostname -I
# Example output:
172.26.41.222 172.119.0.1
Then open the app via the WSL2 IP and the NodePort:
http://172.26.41.222:31776
⚡ Alternative approach using kubectl port-forward:
You can directly port-forward to the ClusterIP Service instead of using NodePort.
A ClusterIP service is accessible only inside the cluster. Your laptop/localhost is outside. Use
port-forwarding to create a temporary tunnel for local testing.
kubectl port-forward svc/user-management-svc 8082:8082 -n user-management-app
Now test at http://localhost:8082
This creates a temporary tunnel from your local host directly to the ClusterIP Service inside the cluster, avoiding NodePort networking issues entirely.
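The port pair is local:remote, so you can also bind a different local port if 8082 is already taken on your machine (a sketch; 9090 is an arbitrary choice):

kubectl port-forward svc/user-management-svc 9090:8082 -n user-management-app
# Now test at http://localhost:9090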
Ingress allows you to expose multiple services under the same host or domain using HTTP(S) routes. This is useful when you want to route traffic to multiple applications inside the same cluster without exposing each Service individually.
In this example, we have installed Curity in the curity namespace. The runtime and admin pods are
running, and we want to make these services accessible from outside the cluster (e.g., from other clusters or your
host machine).
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: curity-ingress
  namespace: curity
  annotations:
    kubernetes.io/ingress.class: traefik   # Tells Kubernetes which ingress controller should handle this Ingress
spec:
  ingressClassName: traefik                # Also specifies the ingress controller
  rules:
    - host: curity.local                   # The hostname to access services
      http:
        paths:
          - path: /runtime                 # Route traffic with /runtime prefix
            pathType: Prefix
            backend:
              service:
                name: idsvr-tutorial-runtime-svc   # Service to route traffic to
                port:
                  number: 8443
          - path: /admin                   # Route traffic with /admin prefix
            pathType: Prefix
            backend:
              service:
                name: idsvr-tutorial-admin-svc     # Service to route traffic to
                port:
                  number: 6749
To apply the Ingress, ensure you use the correct namespace:
kubectl apply -f curity-ingress.yaml -n curity
The node's internal IP (e.g., 172.19.0.4) is usually internal to Docker/k3d and not directly reachable from Windows. The WSL2 IP (from hostname -I, e.g., 172.26.41.222) is reachable from your Windows host. Adding this IP with the hostname curity.local to the hosts file lets your browser resolve the domain correctly. Open C:\Windows\System32\drivers\etc\hosts (admin rights) and add:

172.26.41.222 curity.local

Add the same entry to /etc/hosts if testing from inside WSL. You can then browse to:

- https://curity.local/runtime
- https://curity.local/admin

From a Pod inside the same cluster, you don't need to go through the Ingress hostname unless you want to test the exact external route. The most common and efficient way to access services inside the cluster is via the Service DNS.
What is Service DNS?
Kubernetes automatically gives every Service a DNS name in the form
<service-name>.<namespace>.svc.cluster.local.
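To see this resolution in action, you can run a quick lookup from a throwaway Pod (a sketch using the curity Services listed below):

kubectl run dns-test --rm -it --image=busybox --restart=Never -n curity \
  -- nslookup idsvr-tutorial-runtime-svc.curity.svc.cluster.local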
To check the services in the curity namespace:
kubectl get svc -n curity
# Example output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
idsvr-tutorial-admin-svc ClusterIP 10.43.209.141 <none> 6789/TCP,6790/TCP,4465/TCP,4466/TCP,6749/TCP 2d22h
idsvr-tutorial-runtime-svc ClusterIP 10.43.66.149 <none> 8443/TCP,4465/TCP,4466/TCP 2d22h
You can reach these services using:

- http://idsvr-tutorial-admin-svc.curity.svc.cluster.local:6789
- http://idsvr-tutorial-runtime-svc.curity.svc.cluster.local:8443

You also need to know which Ingress controller is installed (Traefik, Nginx, etc.) so your ingressClassName matches it.
kubectl get ingressclass
# Example output:
NAME CONTROLLER PARAMETERS AGE
traefik   traefik.io/ingress-controller   <none>       16d
You can also describe the IngressClass to see which controller handles a given class:
kubectl describe ingressclass traefik
To recap: the Ingress is created in the application's namespace (curity in this case), it references Services in that same curity namespace, and it routes the /runtime and /admin paths to the correct Services. Clients can now access:

- https://curity.local/runtime
- https://curity.local/admin

In the previous section, you learned how to expose services using the Traefik Ingress Controller. Here, let's explore the same concept using the NGINX Ingress Controller and a new demo application.
k3d cluster create demo-test \
--api-port 6551 \
-p "80:80@loadbalancer" \
-p "443:443@loadbalancer" \
--k3s-arg "--disable=traefik@server:0"
Notes:

- --api-port 6551 exposes the Kubernetes API server on port 6551 of your local machine, so kubectl and other Kubernetes clients on your host can communicate with the cluster.
- -p "80:80@loadbalancer" and -p "443:443@loadbalancer" map ports 80 and 443 on your host to the cluster's load balancer, so Ingress traffic works from localhost.
- --k3s-arg "--disable=traefik@server:0" disables the default Traefik Ingress Controller that comes preinstalled with k3s. This ensures there's no conflict when you later install the NGINX Ingress Controller manually.
Check that the cluster is running:
kubectl get nodes
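You can also confirm Traefik is really gone before installing NGINX (this should print nothing):

kubectl get pods -n kube-system | grep -i traefik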
# Install Helm (Debian/Ubuntu; see https://helm.sh/docs/intro/install/ for other options)
sudo apt install -y helm
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install nginx-ingress ingress-nginx/ingress-nginx \
--namespace demo-nginx-ingress \
--create-namespace \
--set controller.publishService.enabled=true
Explanation:

- helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
  - ingress-nginx is the Helm repository name you assign locally.
  - https://kubernetes.github.io/ingress-nginx is the official Helm chart repository URL for the NGINX Ingress Controller maintained by the Kubernetes community.
  - This command tells Helm where to find and download the NGINX Ingress Controller chart.
- helm install nginx-ingress ingress-nginx/ingress-nginx
  - nginx-ingress is the release name, a name you choose to identify this installation of the chart within your cluster.
  - ingress-nginx/ingress-nginx is the chart path: the first ingress-nginx refers to the repository you added earlier, and the second is the actual chart name inside that repository.
  - In short: "install the ingress-nginx chart from the ingress-nginx repo, and call this release nginx-ingress."
- --namespace demo-nginx-ingress creates and deploys into a dedicated namespace for the ingress controller.
- --set controller.publishService.enabled=true ensures NGINX advertises its external IP through a Service, so external clients can route traffic correctly.
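Before deploying the demo app, confirm the controller Pod is running and the ingress class is registered:

kubectl get pods -n demo-nginx-ingress
kubectl get ingressclass   # should list "nginx"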
Save the following as demo-app-deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app
  namespace: demo-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: demo-app
          image: hashicorp/http-echo
          args:
            - "-text=Hello from NGINX Ingress!"
          ports:
            - containerPort: 5678
---
apiVersion: v1
kind: Service
metadata:
  name: demo-app
  namespace: demo-app
spec:
  selector:
    app: demo-app
  ports:
    - port: 80
      targetPort: 5678
Create the namespace and apply the manifest:
kubectl create namespace demo-app
kubectl apply -f demo-app-deployment.yaml
Save the following as demo-ingress.yaml:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: demo-ingress
  namespace: demo-app
spec:
  ingressClassName: nginx
  rules:
    - host: demo.local
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: demo-app
                port:
                  number: 80
Apply the ingress:
kubectl apply -f demo-ingress.yaml
To make demo.local resolvable, add a hosts entry. On Linux or inside WSL:
sudo nano /etc/hosts
# Add the following line:
127.0.0.1 demo.local
# On Windows, edit (as administrator): C:\Windows\System32\drivers\etc\hosts
# Add the following line:
127.0.0.1 demo.local
curl http://demo.local
# Output:
Hello from NGINX Ingress!
Visit http://demo.local after adding the hosts entry.
The hosts file is a local DNS override file that maps domain names to IP addresses before your system queries any DNS server.
What happens when you add it:

1. You type http://demo.local in your browser.
2. The hosts file resolves demo.local → 127.0.0.1.
3. The request goes to 127.0.0.1 (your local machine), where the k3d load balancer is listening on port 80.

Traffic flow: Browser → 127.0.0.1:80 → k3d LoadBalancer → NGINX Ingress → demo-app
Sometimes Pods inside your cluster need to communicate with a service running outside the cluster
(e.g., a legacy backend, a database on your host machine, or another service not containerized).
Kubernetes doesn't automatically route localhost from inside Pods to your Windows host, especially when running under WSL2 or k3d. Instead, you can use a Service + Endpoints pair to bridge traffic.
In this scenario, our curity-runtime Pod listens on port 8439.
It needs to call a backend service (e.g., internal-scim) running on the Windows host at port 8088.
We expose that Windows service inside Kubernetes with a Service + Endpoints pair,
so Pods can call it like any other internal Service.
# external-backend.yaml
apiVersion: v1
kind: Service
metadata:
  name: external-backend-svc
  namespace: curity
spec:
  type: ClusterIP        # Default: makes the Service reachable only inside the cluster
  ports:
    - name: backend-api
      port: 8088         # Port the external-backend-svc Service exposes inside the cluster (Pods connect to this port)
      targetPort: 8088   # Forwards to this port on the backing endpoints. Must match the Endpoints port below
    - name: backend-db
      port: 27017
      targetPort: 27017
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-backend-svc   # Must match the Service name exactly
  namespace: curity
subsets:
  - addresses:
      - ip: 172.26.41.222      # External service IP (your Windows host IP from `hostname -I`)
    ports:
      - name: backend-api      # Must match Service.spec.ports[].name above
        port: 8088             # Port on the external host that provides the API
      - name: backend-db
        port: 27017
Namespace matters: the Service and Endpoints must be in the same namespace (e.g., curity).
kubectl apply -f external-backend.yaml -n curity
If you omit the namespace, the resources land in the default namespace and your Pods in curity may not find the service.
Verify configuration:
kubectl get svc external-backend-svc -n curity -o wide
kubectl get endpoints external-backend-svc -n curity -o yaml
Testing the setup:
# Exec into the user-management Pod
kubectl exec -it deploy/user-management-app -n curity -- sh
# Inside the Pod, test the external backend call:
curl http://external-backend-svc:8088
If the setup is correct, this curl will hit the Windows service running at
172.26.41.222:8088, but from the Pod’s perspective, it looks like a normal Kubernetes Service.
The Service defines logical ports inside the cluster (8088, 27017). Pods in your cluster can call:

- http://external-backend-svc:8088 (short name, same namespace)
- http://external-backend-svc.curity.svc.cluster.local:8088 (fully qualified)

The Endpoints object maps those logical ports to an external IP address (in this case, 172.26.41.222, your WSL2 host IP). This tells Kubernetes where to actually send traffic.

Traffic flow: Pod (8082 → outbound request) → Service (ClusterIP 8088) → Endpoints → External host IP (172.26.41.222:8088).
Keep in mind:

- The node's internal IP (e.g., 172.19.0.4) is usually internal to Docker/k3d and not directly reachable from Windows.
- The WSL2 host IP (from hostname -I, e.g., 172.26.41.222) is reachable from Windows and should be used in the Endpoints object.
- Don't point the Endpoints at 127.0.0.1 or localhost, because localhost inside the container refers to the container itself.
- If Pods only call public endpoints (e.g., https://api.github.com), they don't need this; cluster networking and NAT already handle external routing. This trick is only for routing to services bound to your host machine or private network IPs.
- Example: a database listening on the host at localhost:27017 can be exposed as external-backend-svc, and the Pod just connects to mongodb://external-backend-svc:27017.
⚡ Tip: On Windows + WSL2, get the reachable host IP with:
hostname -I # e.g., 172.26.41.222
Add that IP in your Endpoints object. If it changes (like after a reboot), you’ll need to update the Endpoints.
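A small helper can refresh the Endpoints whenever the IP changes (a sketch, assuming the Service/Endpoints names and ports defined above, and that the first address from hostname -I is the right one):

# Hypothetical refresh script for the external-backend-svc Endpoints
HOST_IP=$(hostname -I | awk '{print $1}')
kubectl patch endpoints external-backend-svc -n curity --type merge -p \
  "{\"subsets\":[{\"addresses\":[{\"ip\":\"$HOST_IP\"}],\"ports\":[{\"name\":\"backend-api\",\"port\":8088},{\"name\":\"backend-db\",\"port\":27017}]}]}"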
For a more permanent solution, you can run a reverse proxy inside the cluster or use host.docker.internal. However, host.docker.internal is supported only in Docker Desktop (Windows/Mac); it is not available in Docker on Linux or in k3d under WSL2.
Normally, a Kubernetes Service selects Pods using spec.selector. The
targetPort maps to the container port inside those Pods.
In the case of an external Service (using a Service + Endpoints pair), there are no
Pods.
Instead, you manually create an Endpoints object with an IP address and ports.
The Service simply forwards requests to whatever you define in the Endpoints object.
Here, targetPort must match the port numbers you defined in the Endpoints.
So the traffic flow is:

- With Pods: Service.port → Service.targetPort → Pod.containerPort
- With external Endpoints: Service.port → Service.targetPort → Endpoints.port

ConfigMaps keep configuration separate from images. Two common usage patterns are: inject as environment variables, or mount as files.
# app-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  namespace: user-management-app
data:
  APP_MODE: "production"
  APP_VERSION: "1.0.0"
  LOG_LEVEL: "debug"
# deployment-configmap-env.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: configmap-env-deployment
  namespace: user-management-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: configmap-env
  template:
    metadata:
      labels:
        app: configmap-env
    spec:
      containers:
        - name: demo-container
          image: busybox
          command: ["sh","-c","env; sleep 3600"]
          envFrom:
            - configMapRef:
                name: app-config
Apply & verify:
kubectl apply -f app-configmap.yaml
kubectl apply -f deployment-configmap-env.yaml
kubectl get pods -n user-management-app
kubectl exec -it <pod-name> -n user-management-app -- env | grep APP_MODE
Note: environment variables from a ConfigMap are injected only at container start. If you change the ConfigMap, restart the Deployment to pick up new values (kubectl rollout restart deployment/<name> -n <ns>).
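As an alternative to hand-writing the manifest, kubectl can generate an equivalent ConfigMap (a sketch matching app-config above; drop --dry-run to create it directly):

kubectl create configmap app-config \
  --from-literal=APP_MODE=production \
  --from-literal=APP_VERSION=1.0.0 \
  --from-literal=LOG_LEVEL=debug \
  -n user-management-app --dry-run=client -o yaml

The file-mount pattern looks like this: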
# deployment-configmap-file.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: configmap-file-deployment
  namespace: user-management-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: configmap-file
  template:
    metadata:
      labels:
        app: configmap-file
    spec:
      containers:
        - name: demo-container
          image: busybox
          command: ["sh","-c","cat /config/APP_MODE; sleep 3600"]
          volumeMounts:
            - name: config-volume
              mountPath: /config
      volumes:
        - name: config-volume
          configMap:
            name: app-config
Apply & verify:
kubectl apply -f app-configmap.yaml
kubectl apply -f deployment-configmap-file.yaml
kubectl exec -it <pod-name> -n user-management-app -- cat /config/APP_MODE
Secrets store sensitive strings (passwords, keys). They are base64-encoded when stored in YAML but are not encrypted in etcd unless you enable encryption at rest.
echo -n 'user' | base64 # output: dXNlcg==
echo -n 'password' | base64 # output: cGFzc3dvcmQ=
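Alternatively, kubectl can generate the Secret and handle the base64 encoding for you (equivalent to the app-secret manifest below; drop --dry-run to create it directly):

kubectl create secret generic app-secret \
  --from-literal=DB_USER=user \
  --from-literal=DB_PASSWORD=password \
  -n user-management-app --dry-run=client -o yaml

The equivalent declarative manifest: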
# app-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
  namespace: user-management-app
type: Opaque
data:
  DB_USER: dXNlcg==
  DB_PASSWORD: cGFzc3dvcmQ=
# deployment-secret-env.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: secret-env-deployment
  namespace: user-management-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: secret-env
  template:
    metadata:
      labels:
        app: secret-env
    spec:
      containers:
        - name: demo-container
          image: busybox
          command: ["sh","-c","env; sleep 3600"]
          envFrom:
            - secretRef:
                name: app-secret
Apply & verify:
kubectl apply -f app-secret.yaml
kubectl apply -f deployment-secret-env.yaml
kubectl exec -it <pod-name> -n user-management-app -- printenv | grep DB_
# deployment-secret-file.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: secret-file-deployment
  namespace: user-management-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: secret-file
  template:
    metadata:
      labels:
        app: secret-file
    spec:
      containers:
        - name: demo-container
          image: busybox
          command: ["sh","-c","cat /secrets/DB_USER; cat /secrets/DB_PASSWORD; sleep 3600"]
          volumeMounts:
            - name: secret-volume
              mountPath: /secrets
      volumes:
        - name: secret-volume
          secret:
            secretName: app-secret
Apply & verify:
kubectl apply -f app-secret.yaml
kubectl apply -f deployment-secret-file.yaml
kubectl exec -it <pod-name> -n user-management-app -- ls /secrets
kubectl exec -it <pod-name> -n user-management-app -- cat /secrets/DB_USER
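You can also read a Secret value straight from the API without entering a Pod (decoding happens client-side):

kubectl get secret app-secret -n user-management-app \
  -o jsonpath='{.data.DB_PASSWORD}' | base64 -d

Finally, a quick reference of the commands used throughout this guide: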
# Create namespaces
kubectl create namespace curity
kubectl create namespace user-management-app
# Apply configmap & secret (example)
kubectl apply -f app-configmap.yaml
kubectl apply -f app-secret.yaml
# Deploy example workloads
kubectl apply -f deployment-configmap-env.yaml
kubectl apply -f deployment-configmap-file.yaml
kubectl apply -f deployment-secret-env.yaml
kubectl apply -f deployment-secret-file.yaml
# User management app
kubectl apply -f user-management-deployment.yaml -n user-management-app
# Check pods
kubectl get pods -n curity
kubectl get pods -n user-management-app
# Verify ConfigMap via env
kubectl exec -it <pod-name> -n user-management-app -- env | grep APP_MODE
# Verify ConfigMap via file
kubectl exec -it <pod-name> -n user-management-app -- cat /config/APP_MODE
# Verify Secret via env
kubectl exec -it <pod-name> -n user-management-app -- printenv | grep DB_PASSWORD
# Verify Secret via file
kubectl exec -it <pod-name> -n user-management-app -- cat /secrets/DB_PASSWORD
# Verify Service -> external Endpoints
kubectl get svc external-backend-svc -n curity -o yaml
kubectl get endpoints external-backend-svc -n curity -o yaml
Always check kubectl config current-context before applying manifests.