In this post, we will capture network traffic of workloads running in Kubernetes.
Most Wireshark tutorials have you write a traffic dump to a file, copy it to your local machine, and analyze it there. But that is slow and lacks quick feedback - I want to stream the capture directly to my laptop!
I have seen a few incidents where we inspected individual packets. This trick can greatly speed up your response in such scenarios!
I assume that you'll be running this on a Mac. Linux works too; just update the path to your Wireshark installation.
I assume you already have a Pod running. If you don't have one yet, you can use the example config at the end of this document to create a Pod with kubectl.
Attach a debug container to the pod we want to analyse, and
install tcpdump as follows:
$ kubectl -n playground get pods
# NAME       READY   STATUS    RESTARTS   AGE
# test-pod   1/1     Running   0          13s

$ kubectl -n playground debug -ti test-pod \
    --image=debian:stable-slim \
    --container=my-debug-container
# root@test-pod:/#

# In the open shell, run:
apt update; apt install -y tcpdump

Leave this terminal open; Kubernetes may clean up the debug container when it is no longer in use.
Open a new terminal, where we run tcpdump in the Pod and stream its output to a locally running Wireshark process:
kubectl -n playground exec test-pod -c my-debug-container \
    -- tcpdump -i eth0 -w - \
    | /Applications/Wireshark.app/Contents/MacOS/Wireshark -k -i -

This should open a Wireshark window on your laptop with a live stream of the pod's network packets.
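On Linux, the same pipeline works; the Wireshark binary is usually on your PATH there, so the command would look roughly like this (a sketch, assuming a standard package-manager install):

kubectl -n playground exec test-pod -c my-debug-container \
    -- tcpdump -i eth0 -w - | wireshark -k -i -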
If the pod is not doing anything, the capture may be empty.
Open a third terminal, and generate some test traffic. For example:
kubectl -n playground get pods -owide
# NAME READY STATUS RESTARTS AGE IP [...]
# test-pod 1/1 Running 0 5m54s 10.130.2.147 [...]
curl -v "http://10.130.2.147"

You should now see your HTTP request pop up in the Wireshark UI.
Running apt is slow. To avoid waiting on it, we can publish a container image that already has tcpdump installed. A fairly minimalist version would be:
docker build --platform=linux/amd64 -t gcr.io/my-project/wolfi-tcpdump - <<EOF
FROM chainguard/wolfi-base:latest
RUN apk add --no-cache tcpdump
CMD tcpdump -i eth0 -w -
EOF
docker push gcr.io/my-project/wolfi-tcpdump:latest

And then you can attach this instead:
kubectl -n playground debug -i -q test-pod \
    --image=gcr.io/my-project/wolfi-tcpdump:latest \
    | /Applications/Wireshark.app/Contents/MacOS/Wireshark -k -i -

Note that we are now passing the -q flag. This flag ensures kubectl only prints output from the remote session. Without it, it would print some "helpful information", which we don't want to pipe into Wireshark.
The traffic we generated here went to the nginx container, but tcpdump ran in a different container of the same pod! How can this work?
Since our containerized workloads within one pod share the Linux network namespace, they can see each other's traffic. This is the same reason why two containers in one pod cannot listen on the same port: they share one network namespace.
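You can see this sharing in action with a minimal two-container pod (a sketch; the pod and container names are made up for illustration): the sidecar reaches nginx over localhost, precisely because both containers share the pod's network namespace.

apiVersion: v1
kind: Pod
metadata:
  name: shared-netns-demo
spec:
  containers:
    - name: web
      image: nginx:latest # listens on port 80
    - name: sidecar
      image: curlimages/curl:latest
      # Reaches the *other* container via localhost, since both
      # containers share the pod's network namespace.
      command: ["sh", "-c", "while true; do curl -s http://localhost:80 > /dev/null; sleep 5; done"]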
Notice that our traffic dump only captures traffic from a single pod on our node. To capture the node's traffic, you could spin up a pod that enables spec.hostNetwork. For details on this field, you can run kubectl explain pod.spec.hostNetwork.
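A sketch of such a pod, reusing the tcpdump image from above (hostNetwork pods typically need elevated privileges to capture raw packets, so this assumes your cluster allows that):

apiVersion: v1
kind: Pod
metadata:
  name: node-capture
spec:
  hostNetwork: true # share the node's network namespace
  containers:
    - name: capture
      image: gcr.io/my-project/wolfi-tcpdump:latest
      securityContext:
        privileged: true # tcpdump needs raw socket access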
The traffic capture is streamed over the network to our laptop, so why don't we see it reflected in the capture itself? If you SSH into a virtual machine and stream a tcpdump somewhere else, the tcpdump captures its own traffic - but not in this demo...

Well, the traffic from your laptop does not go directly to the Pod. Clients connect to the control plane, the control plane forwards the request to the kubelet, and the kubelet interacts with the containers. So the Pod itself does not use its own network to stream the capture.
If you installed SSH on a container and SSH'ed into a running Pod (please don't), you would see your SSH traffic reflected in the traffic capture.
This also means that streaming traffic captures this way adds load to your control plane, so use this trick with some restraint on the throughput of the capture.
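One way to keep that load down is to let tcpdump filter and truncate packets inside the pod before they are streamed out (a sketch; the port filter and snap length are arbitrary examples):

kubectl -n playground exec test-pod -c my-debug-container \
    -- tcpdump -i eth0 -s 96 -w - 'port 80' \
    | /Applications/Wireshark.app/Contents/MacOS/Wireshark -k -i -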
Here is a test pod config. Create the pod with
kubectl create -f pod.yaml.
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
  namespace: playground # or remove to use default
spec:
  containers:
    - name: test-container
      image: nginx:latest
      resources:
        requests:
          cpu: 1
          memory: 1G
        limits:
          cpu: 1
          memory: 1G