Kubernetes Network Policies appear to be a relatively simple solution for controlling traffic within and into a cluster. But on closer inspection we found that they sometimes behave differently than expected. Here’s what we’ve learned.
Kubernetes Network Policies
Kubernetes Network Policies are the firewall rules of a Kubernetes cluster. They isolate all selected pods from all connections except those that are whitelisted through any policies’ ingress and egress rules.
```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: access-nginx
spec:
  podSelector:
    matchLabels:
      run: nginx
  ingress:
    - from:
        - podSelector:
            matchLabels:
              access: "true"
```
The above example shows a Network Policy which selects all pods labelled run=nginx in the namespace it is created in (here: default, as no namespace is specified). It contains a single ingress rule allowing traffic from all pods in the same namespace that are labelled access=true. Applying this policy to a Kubernetes cluster blocks all traffic to run=nginx pods in the default namespace that is not covered by the ingress rule. There are more options when declaring Network Policies, such as selecting namespaces, CIDRs, ports or protocols, but the basic behaviour is always the same.
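For illustration, a policy using some of those options might look like the following sketch; the env label, the CIDR and the port are invented for this example and are not part of the tutorial:

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: access-nginx-extended  # hypothetical name, for illustration only
spec:
  podSelector:
    matchLabels:
      run: nginx
  ingress:
    - from:
        # allow pods from all namespaces labelled env=prod (invented label)
        - namespaceSelector:
            matchLabels:
              env: prod
        # additionally allow an IP range, e.g. an external monitoring network
        - ipBlock:
            cidr: 10.0.0.0/16
      # restrict the whitelisted traffic to TCP port 80
      ports:
        - protocol: TCP
          port: 80
```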
Except that it isn’t. This is how Network Policies are specified in the Kubernetes docs, but the documentation also mentions that the default networking plugin does not support Network Policies. Instead, various network providers that officially support Network Policies are suggested.
Testing Suggested Network Providers
So off we go, testing our Network Policy with some of the suggested plugins. Let’s start by following the tutorial on declaring Network Policies on a fresh Kubernetes cluster with Calico installed:
```sh
$ kubectl run nginx --image=nginx --replicas=2
deployment.apps/nginx created
$ kubectl expose deployment nginx --port=80
service/nginx exposed
$ kubectl run busybox --rm -ti --image=busybox /bin/sh
If you don't see a command prompt, try pressing enter.
/ # wget --spider --timeout=1 nginx
Connecting to nginx (10.96.54.9:80)
/ #
```
So far everything works as expected. We started and exposed an nginx deployment with 2 replicas and a busybox for testing. Using wget we established that traffic can reach the nginx pods through their service. Now we apply the Network Policy from the tutorial and test the connection again:
```sh
$ kubectl create -f nginx-policy.yaml
networkpolicy.networking.k8s.io/access-nginx created
$ kubectl run busybox --rm -ti --image=busybox /bin/sh
If you don't see a command prompt, try pressing enter.
/ # wget --spider --timeout=1 nginx
Connecting to nginx (10.96.54.9:80)
wget: download timed out
/ #
```
That worked out well! The download timed out, as our policy only allows connections from pods labelled access="true". Let’s start another busybox with that label and ensure that its traffic can reach the nginx pods through their service:
```sh
$ kubectl run busybox --rm -ti --labels="access=true" --image=busybox /bin/sh
If you don't see a command prompt, try pressing enter.
/ # wget --spider --timeout=1 nginx
Connecting to nginx (10.96.17.194:80)
/ #
```
This time the busybox’s download did not time out, so our Network Policy is in effect as expected. Let’s try the same with a new cluster using Weave Net:
```sh
$ kubectl run nginx --image=nginx --replicas=2
deployment.apps/nginx created
$ kubectl expose deployment nginx --port=80
service/nginx exposed
$ kubectl run busybox --rm -ti --image=busybox /bin/sh
If you don't see a command prompt, try pressing enter.
/ # wget --spider --timeout=1 nginx
Connecting to nginx (10.96.34.126:80)
/ #
$ kubectl create -f nginx-policy.yaml
networkpolicy.networking.k8s.io/access-nginx created
$ kubectl run busybox --rm -ti --image=busybox /bin/sh
If you don't see a command prompt, try pressing enter.
/ # wget --spider --timeout=1 nginx
Connecting to nginx (10.96.34.126:80)
/ #
```
Huh, looks like Weave does not want to play as nice. We expected the last wget above not to go through, as the busybox pod was missing the correct label. Maybe we missed something when installing it? Looking at the documentation on installing the Weave Net addon for Network Policy, it states:
"The Weave Net addon for Kubernetes comes with a Network Policy Controller that automatically monitors Kubernetes for any NetworkPolicy annotations on all namespaces and configures iptables rules to allow or block traffic as directed by the policies."
This means that no extra installation should be required, but let’s verify it just to be sure:
```sh
$ kubectl get pods --all-namespaces -o=jsonpath="{..image}" -l name=weave-net
docker.io/weaveworks/weave-kube:2.5.0 docker.io/weaveworks/weave-npc:2.5.0 [...]
```
The list of images for pods labelled name=weave-net contains weave-npc, the image for the Network Policy Controller, so it is definitely included.
Another thing that stands out in the above quotation is the mention of "annotations on namespaces". These are not mentioned in the current documentation for Kubernetes Network Policies. Going through another tutorial, this time on Weave’s website, it seems that this namespace annotation feature does not work (anymore). In that tutorial the policies at least took effect, so maybe our problem has to do with the policies themselves.
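The annotation in question is most likely the pre-v1 isolation mechanism, in which a namespace was switched to default-deny roughly like this (long deprecated and removed from current Kubernetes versions; shown only for context):

```yaml
kind: Namespace
apiVersion: v1
metadata:
  name: default
  annotations:
    # legacy beta annotation, superseded by the NetworkPolicy v1 API
    net.beta.kubernetes.io/network-policy: |
      {"ingress": {"isolation": "DefaultDeny"}}
```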
Time for a Systematic Approach
To figure out whether the policies are at fault, we want to test different kinds of policies. For this task we wrote a tool which analyses the Network Policies in a cluster and runs tests against them. These tests are executed by a DaemonSet whose pods hook themselves into the network namespaces of pods affected by Network Policies and then use nmap to test the connection to the policies’ selected pods. This approach ensures that our test traffic is identical to production traffic in the cluster, without interfering with it.
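Conceptually, each test pod performs something along these lines; this is a simplified sketch rather than the tool’s actual code, and the PID, target IP and port are placeholders:

```sh
# PID of a process inside the source pod, e.g. looked up via the container runtime
SRC_PID=12345
# enter the source pod's network namespace (requires hostPID and privileges)
# and probe the target pod's IP and port with nmap
nsenter --net=/proc/$SRC_PID/ns/net nmap -Pn -p 80 10.32.0.12
```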
To evaluate the functionality of these plugins we used the example Network Policies from the excellent Kubernetes Network Policy Recipes. This Git repository contains examples of Network Policies, including descriptions of their effects. We extracted the manifests and the setup and teardown processes from each recipe and used them as a test set. We skipped the recipes for external traffic, as our test tool does not cover traffic from outside the cluster. The table below shows all tests generated for each recipe and the results for each plugin over seven test runs. A checkmark (✓) indicates that a case always succeeded, a cross (❌) that it always failed, and a tilde (~) that it sometimes failed.
Test Case | Calico | Weave |
---|---|---|
01-deny-all-traffic-to-an-application | ||
Pods in namespace default with labels app=web cannot reach pods in namespace default with labels app=web on any port | ✓ | ❌ |
02-limit-traffic-to-an-application | ||
Pods in namespace default with labels app=bookstore can reach pods in namespace default with labels app=bookstore,role=api on any port | ~ | ~ |
Pods in namespace default with labels thesis-mbischoff-inverted-app=bookstore cannot reach pods in namespace default with labels app=bookstore,role=api on any port | ✓ | ~ |
Pods in namespace thesis-mbischoff-inverted-default with labels app=bookstore cannot reach pods in namespace default with labels app=bookstore,role=api on any port | ✓ | ~ |
Pods in namespace thesis-mbischoff-inverted-default with labels thesis-mbischoff-inverted-app=bookstore cannot reach pods in namespace default with labels app=bookstore,role=api on any port | ✓ | ~ |
02a-allow-all-traffic-to-an-application | ||
Pods in namespace * with labels * can reach pods in namespace default with labels app=web on any port | ✓ | ✓ |
03-deny-all-non-whitelisted-traffic-in-the-namespace | ||
Pods in namespace default with labels * cannot reach pods in namespace default with labels * on any port | ✓ | ~ |
04-deny-traffic-from-other-namespaces | ||
Pods in namespace secondary with labels * can reach pods in namespace secondary with labels * on any port | ✓ | ✓ |
Pods in namespace thesis-mbischoff-inverted-secondary with labels * cannot reach pods in namespace secondary with labels * on any port | ✓ | ❌ |
05-allow-traffic-from-all-namespaces | ||
Pods in namespace * with labels * can reach pods in namespace secondary with labels app=web on any port | ~ | ~ |
06-allow-traffic-from-a-namespace | ||
Pods in namespace purpose=production with labels * can reach pods in namespace default with labels app=web on any port | ✓ | ✓ |
Pods in namespace thesis-mbischoff-inverted-purpose=production with labels * cannot reach pods in namespace default with labels app=web on any port | ✓ | ❌ |
07-allow-traffic-from-some-pods-in-another-namespace | ||
Pods in namespace team=operations with labels thesis-mbischoff-inverted-type=monitoring cannot reach pods in namespace default with labels app=web on any port | ✓ | ❌ |
Pods in namespace team=operations with labels type=monitoring can reach pods in namespace default with labels app=web on any port | ✓ | ✓ |
Pods in namespace thesis-mbischoff-inverted-team=operations with labels thesis-mbischoff-inverted-type=monitoring cannot reach pods in namespace default with labels app=web on any port | ✓ | ❌ |
Pods in namespace thesis-mbischoff-inverted-team=operations with labels type=monitoring cannot reach pods in namespace default with labels app=web on any port | ✓ | ❌ |
09-allow-traffic-only-to-a-port | ||
Pods in namespace default with labels role=monitoring can reach pods in namespace default with labels app=apiserver on port 5000 | ✓ | ✓ |
Pods in namespace default with labels thesis-mbischoff-inverted-role=monitoring cannot reach pods in namespace default with labels app=apiserver on port 5000 | ✓ | ❌ |
Pods in namespace thesis-mbischoff-inverted-default with labels role=monitoring cannot reach pods in namespace default with labels app=apiserver on port 5000 | ✓ | ❌ |
Pods in namespace thesis-mbischoff-inverted-default with labels thesis-mbischoff-inverted-role=monitoring cannot reach pods in namespace default with labels app=apiserver on port 5000 | ✓ | ❌ |
10-allowing-traffic-with-multiple-selectors | ||
Pods in namespace default with labels app=bookstore,role=api can reach pods in namespace default with labels app=bookstore,role=db on any port | ❌ | ❌ |
Pods in namespace default with labels app=bookstore,role=search can reach pods in namespace default with labels app=bookstore,role=db on any port | ❌ | ❌ |
Pods in namespace default with labels app=inventory,role=web can reach pods in namespace default with labels app=bookstore,role=db on any port | ❌ | ❌ |
Pods in namespace default with labels thesis-mbischoff-inverted-app=bookstore,thesis-mbischoff-inverted-role=api cannot reach pods in namespace default with labels app=bookstore,role=db on any port | ✓ | ✓ |
Pods in namespace default with labels thesis-mbischoff-inverted-app=bookstore,thesis-mbischoff-inverted-role=search cannot reach pods in namespace default with labels app=bookstore,role=db on any port | ✓ | ✓ |
Pods in namespace default with labels thesis-mbischoff-inverted-app=inventory,thesis-mbischoff-inverted-role=web cannot reach pods in namespace default with labels app=bookstore,role=db on any port | ✓ | ✓ |
Pods in namespace thesis-mbischoff-inverted-default with labels app=bookstore,role=api cannot reach pods in namespace default with labels app=bookstore,role=db on any port | ✓ | ✓ |
Pods in namespace thesis-mbischoff-inverted-default with labels app=bookstore,role=search cannot reach pods in namespace default with labels app=bookstore,role=db on any port | ✓ | ✓ |
Pods in namespace thesis-mbischoff-inverted-default with labels app=inventory,role=web cannot reach pods in namespace default with labels app=bookstore,role=db on any port | ✓ | ✓ |
Pods in namespace thesis-mbischoff-inverted-default with labels thesis-mbischoff-inverted-app=bookstore,thesis-mbischoff-inverted-role=api cannot reach pods in namespace default with labels app=bookstore,role=db on any port | ✓ | ✓ |
Pods in namespace thesis-mbischoff-inverted-default with labels thesis-mbischoff-inverted-app=bookstore,thesis-mbischoff-inverted-role=search cannot reach pods in namespace default with labels app=bookstore,role=db on any port | ✓ | ✓ |
Pods in namespace thesis-mbischoff-inverted-default with labels thesis-mbischoff-inverted-app=inventory,thesis-mbischoff-inverted-role=web cannot reach pods in namespace default with labels app=bookstore,role=db on any port | ✓ | ✓ |
11-deny-egress-traffic-from-an-application | ||
Pods in namespace default with labels app=foo cannot reach pods in namespace default with labels app=foo on any port | ✓ | ✓ |
12-deny-all-non-whitelisted-traffic-from-the-namespace | ||
Pods in namespace default with labels * cannot reach pods in namespace default with labels * on any port | ❌ | ❌ |
Calico’s Performance
As you can see, Calico directed traffic successfully in most cases, with only the tests for recipes 10 and 12 failing consistently. Both failures also appear for Weave Net and are caused by implementation errors in our test tool. Since recipe 12 blocks all non-whitelisted egress traffic, the test application cannot reach the DNS server and therefore fails at name resolution. This in turn means that all network plugins successfully blocked DNS traffic as intended. In recipe 10, our test container did not respond to the nmap scan type we used. This can be fixed by running multiple kinds of scans for each test run in the future.
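One conceivable fix, sketched here with placeholder targets: probe each connection with more than one scan technique so that a single unanswered scan type does not skew the result:

```sh
# TCP connect scan: completes the handshake, needs no special privileges
nmap -Pn -sT -p 80 10.32.0.12
# SYN (half-open) scan as a second opinion, requires root/CAP_NET_RAW
nmap -Pn -sS -p 80 10.32.0.12
```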
Furthermore, there are two test cases whose runs sometimes succeeded and sometimes failed. This most likely has nothing to do with the Network Policies themselves but can rather be attributed to a race condition: the network plugin may not have picked up a new policy yet when our test traffic is generated.
Issues with Weave Net
The picture for Weave Net is very suspicious. Besides the two failures caused by our implementation, most cases for blocking traffic seem to fail consistently, and there are additional cases in which Weave Net only sometimes failed. This prompted us to investigate our configuration, where we found an issue with the combination of our cluster setup and Weave. In another setup we re-ran the tests and found that only one recipe’s test cases behaved unexpectedly:
Test Case | Weave |
---|---|
07-allow-traffic-from-some-pods-in-another-namespace | |
Pods in namespace team=operations with labels thesis-mbischoff-inverted-type=monitoring cannot reach pods in namespace default with labels app=web on any port | ✓ |
Pods in namespace team=operations with labels type=monitoring can reach pods in namespace default with labels app=web on any port | ❌ |
Pods in namespace thesis-mbischoff-inverted-team=operations with labels thesis-mbischoff-inverted-type=monitoring cannot reach pods in namespace default with labels app=web on any port | ✓ |
Pods in namespace thesis-mbischoff-inverted-team=operations with labels type=monitoring cannot reach pods in namespace default with labels app=web on any port | ✓ |
The selector in that recipe’s Network Policy addresses ingress pods by both namespace and pod labels, yet traffic from a matching pod is blocked by Weave Net. This selector combination was only recently introduced to Kubernetes Network Policies and is currently still being implemented in Weave Net. Besides this small issue, both networking solutions supported the policies we tested well.
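The crucial detail is that namespaceSelector and podSelector appear in a single from element, so both must match (a logical AND); listed as two separate elements, they would be ORed instead. A sketch of the AND form, modelled on recipe 07 (the label values come from the recipe, the rest is illustrative):

```yaml
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: web-allow-ns-and-pod  # illustrative name
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: web
  ingress:
    - from:
        # a single element combining both selectors: traffic is allowed only
        # from pods labelled type=monitoring in namespaces labelled team=operations
        - namespaceSelector:
            matchLabels:
              team: operations
          podSelector:
            matchLabels:
              type: monitoring
```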
Conclusion
The enforcement of Kubernetes Network Policies through plugins is not free of problems. While the installation of both Weave Net and Calico is usually straightforward, there may be issues that cause them to not constrain traffic as described by the policies. Given how the Network Policy API is specified by Kubernetes and implemented in these plugins, gaps between specification and implementation aren’t out of the question. If you rely on Network Policies for securing your cluster network, you might want to be sceptical about their functionality and test them yourself.
Software Versions Used
The following versions of software were used for testing:
- Kubernetes: v1.11.3
- Calico: v3.2
- Weave Net:
  - weave-kube:2.5.0
  - weave-npc:2.5.0
If the manual way of using wget seems too costly to you, stay tuned for our follow-up blog post introducing and publishing our tool for automatic Kubernetes Network Policy checking.
Read on
Looking for a job as Cloud Platform Engineer, Systems Engineer or something similar? Have a look at our current offerings! You can also find out more about the technologies we use on our website.
This article is based on the findings of my Master’s Thesis "Design and Implementation of a Framework for Validating Kubernetes Policies through Automatic Test Generation". It is available for download on inovex.de.