Recently, I wrote about Falco, which provides live detection of potentially malicious behavior in a Kubernetes cluster. Live supervision is valuable when running a cluster, but it is also useful to check and/or enforce rules on your cluster to strengthen its security: this is made easy by Kyverno, the tool I'm going to introduce in this article. Let's go!
Install it
As usual, we first have to install Kyverno into our cluster. There are 2 ways to install it: a classic YAML deployment file or a Helm chart. As usual, I prefer Helm as it makes later upgrades easier (if the Kyverno Helm repository isn't configured yet, add it first with helm repo add kyverno https://kyverno.github.io/kyverno/). ℹ️ No need to create a dedicated namespace beforehand, it's already included in the deployment provided by the editor, just add the --create-namespace option.
$ helm install kyverno kyverno/kyverno -n kyverno --create-namespace
[...]
Chart version: 3.0.2
Kyverno version: v1.10.1
Thank you for installing kyverno! Your release is named kyverno.
The following components have been installed in your cluster:
- CRDs
- Admission controller
- Reports controller
- Cleanup controller
- Background controller
We are now ready to set up some rules to administer our cluster.
Rules
But first, let's quickly describe the different kinds of rules available:
- validation: checks newly created elements and also existing ones. There are 2 levels:
  - Enforce: the most restrictive one; if the defined rule fails, the resource cannot be created
  - Audit: this one is passive; resources can still be created, but they appear in the policy reports as failing
- mutation: modifies (or mutates) a resource, for instance by automatically adding some labels to it
- generation: generates elements when another one is created, for instance a default ConfigMap in a namespace when creating it
- image verification: verifies the images used by pods; it can check signatures, attestors, digests...
- cleanup: this last one runs cleanup tasks in the cluster to remove components that are no longer needed
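Image verification isn't demonstrated later in this article, so here is a minimal sketch of such a rule, assuming images signed with a cosign key (the policy name, registry pattern and the truncated public key are illustrative placeholders):

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: check-image-signature   # illustrative name
spec:
  validationFailureAction: Enforce
  rules:
    - name: verify-signature
      match:
        any:
          - resources:
              kinds:
                - Pod
      verifyImages:
        # only images matching these references are checked
        - imageReferences:
            - "registry.gitlab.com/*"
          attestors:
            - entries:
                - keys:
                    # cosign public key used to verify signatures (placeholder)
                    publicKeys: |-
                      -----BEGIN PUBLIC KEY-----
                      ...
                      -----END PUBLIC KEY-----
```

With such a policy in Enforce mode, a pod whose image signature cannot be verified against the key is rejected at admission time.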
Of course, all rule types are configurable and can be applied to most Kubernetes components. We can also easily exclude some namespaces from rulesets.
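As an illustration of namespace exclusion, a rule can carry an exclude block alongside its match; a minimal fragment (the rule name is illustrative and the validate content is elided):

```yaml
rules:
  - name: require-team-label   # illustrative rule name
    match:
      any:
        - resources:
            kinds:
              - Pod
    # the rule won't be evaluated for pods in these namespaces
    exclude:
      any:
        - resources:
            namespaces:
              - kube-system
```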
Let's have a deeper look at some of them and some application samples.
Validation rules
As quickly explained above, validation rules check resources against defined constraints, both when they are created and on existing ones. To illustrate this, let's first create a demo namespace and a basic nginx pod in it.
kubectl create namespace demo-kyverno
kubectl config set-context --current --namespace=demo-kyverno
kubectl run nginx --image=nginx
Now, let's create 2 rules: one in Audit mode (non-blocking) and another one in Enforce mode (blocking)
# Policy to check if pods have the label `team`
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: audit-labels
spec:
  validationFailureAction: Audit
  rules:
    - name: audit-team
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "label 'team' is required"
        pattern:
          metadata:
            labels:
              team: "?*"
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-image-registry
  annotations:
    policies.kyverno.io/title: Restrict Image Registries
    policies.kyverno.io/description: >-
      Only images from specific gitlab.com registry are allowed.
spec:
  validationFailureAction: Enforce
  background: true
  rules:
    - name: validate-registries
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Unauthorized registry."
        pattern:
          spec:
            =(ephemeralContainers):
              - image: "registry.gitlab.com/*"
            =(initContainers):
              - image: "registry.gitlab.com/*"
            containers:
              - image: "registry.gitlab.com/*"
Now, we can create policies in the cluster
kubectl apply -f demo-audit-labels.yml
kubectl apply -f demo-restrict-image-registry.yml
We can check that the policies are correctly installed. As explained, policies also apply to already-existing resources. As our demo nginx pod doesn't comply with either policy, we should see these violations detected in the policy reports
$ kubectl get policyreports
NAME                           PASS   FAIL   WARN   ERROR   SKIP   AGE
cpol-audit-labels              0      1      0      0       0      14m
cpol-restrict-image-registry   0      1      0      0       0      14m
Let's try to create another resource that violates the Enforce rule
$ kubectl run another-nginx --image=nginx
Error from server: admission webhook "validate.kyverno.svc-fail" denied the request:
resource Pod/demo-kyverno/another-nginx was blocked due to the following policies
restrict-image-registry:
validate-registries: 'validation error: Unauthorized registry. rule validate-registries
failed at path /spec/containers/0/image/'
# Check that nothing has been created
$ kubectl get pods -n demo-kyverno
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          56m
# Check reports
$ kubectl get policyreports
NAME                           PASS   FAIL   WARN   ERROR   SKIP   AGE
cpol-audit-labels              0      1      0      0       0      14m
cpol-restrict-image-registry   0      1      0      0       0      14m
ℹ️ As nothing has been created, reports are the same.
Now, let's create another resource that satisfies the Enforce rule but not the Audit one
apiVersion: apps/v1
kind: Deployment
metadata:
  name: go-sample
spec:
  selector:
    matchLabels:
      app: go-sample
  template:
    metadata:
      labels:
        app: go-sample
    spec:
      containers:
        - image: registry.gitlab.com/yodamad-trash/go-sample/go-sample
          name: go-sample
          ports:
            - containerPort: 80
      imagePullSecrets:
        - name: gitlabcred
$ kubectl apply -f demo-deployment.yml -n demo-kyverno
deployment.apps/go-sample created
# Check that reports are updated
$ kubectl get policyreports -n demo-kyverno
NAME                           PASS   FAIL   WARN   ERROR   SKIP   AGE
cpol-audit-labels              0      4      0      0       0      81m
cpol-restrict-image-registry   3      1      0      0       0      81m
First, we can see that cpol-restrict-image-registry now has some PASS results, as I deployed an image from the authorized registry, and cpol-audit-labels has some more FAIL results, as I didn't add the required label team.
But why do I have 3 PASS rather than 1? 🤔
Because Kyverno is clever: as creating a Pod directly is not a good practice in Kubernetes, we more often use a Deployment, as I did here, or another kind of controller that will create the Pod. Therefore, Kyverno implements auto-gen rules, as explained in the documentation, to apply the policy at all levels. We can see that by analyzing the events of the Deployment
$ kubectl describe deployment go-sample -n demo-kyverno
[...]
Name: go-sample
Namespace: demo-kyverno
[...]
Events:
Type     Reason           Age                 From           Message
----     ------           ----                ----           -------
Warning  PolicyViolation  36m (x2 over 96m)   kyverno-scan   policy audit-labels/autogen-audit-team fail: validation error: label 'team' is required. rule autogen-audit-team failed at path /spec/template/metadata/labels/team/
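For reference, the rule that Kyverno auto-generated from the audit-labels policy roughly looks like the following (a simplified sketch, not the exact generated content): the original pattern is shifted under spec.template so that it applies to Pod controllers, which matches the failure path in the event above.

```yaml
# simplified sketch of an auto-generated rule
- name: autogen-audit-team
  match:
    any:
      - resources:
          kinds:
            - DaemonSet
            - Deployment
            - Job
            - StatefulSet
  validate:
    message: "label 'team' is required"
    pattern:
      spec:
        template:           # the original Pod pattern, nested under the pod template
          metadata:
            labels:
              team: "?*"
```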
In this example, I used a Deployment, so it generates a result for each of the Deployment, ReplicaSet & Pod. We can see that in the policyreport description:
$ kubectl describe policyreports cpol-restrict-image-registry -n demo-kyverno
[...]
Results:
  Message: validation rule 'autogen-validate-registries' passed.
  Policy:  restrict-image-registry
  Resources:
    API Version: apps/v1
    Kind:        ReplicaSet
[...]
  Message: validation rule 'autogen-validate-registries' passed.
  Policy:  restrict-image-registry
  Resources:
    API Version: apps/v1
    Kind:        Deployment
[...]
  Message: validation error: Unauthorized registry. rule validate-registries failed at path /spec/containers/0/image/
  Policy:  restrict-image-registry
  Resources:
    API Version: v1
    Kind:        Pod
    Name:        nginx
[...]
  Message: validation rule 'validate-registries' passed.
  Policy:  restrict-image-registry
  Resources:
    API Version: v1
    Kind:        Pod
    Name:        go-sample-ccfc648f9-p7v4r
[...]
Summary:
  Error: 0
  Fail:  1
  Pass:  3
  Skip:  0
  Warn:  0
We can prevent Kyverno from generating these additional rules by adding an annotation to the policy: pod-policies.kyverno.io/autogen-controllers: none
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-image-registry-pod-only
  annotations:
    policies.kyverno.io/title: Restrict Image Registries
    policies.kyverno.io/description: >-
      Only images from specific gitlab.com registry are allowed.
    pod-policies.kyverno.io/autogen-controllers: none
spec:
  validationFailureAction: Enforce
  background: true
  rules:
    - name: validate-registries
      match:
        any:
          - resources:
              kinds:
                - Pod
      validate:
        message: "Unauthorized registry."
        pattern:
          spec:
            =(ephemeralContainers):
              - image: "registry.gitlab.com/*"
            =(initContainers):
              - image: "registry.gitlab.com/*"
            containers:
              - image: "registry.gitlab.com/*"
Now applying it, we can see that we have only one PASS
$ kubectl apply -f demo-restrict-image-registry-pod-only.yml
$ kubectl get policyreports -n demo-kyverno
NAME                                    PASS   FAIL   WARN   ERROR   SKIP   AGE
cpol-audit-labels                       0      4      0      0       0      13h
cpol-restrict-image-registry            3      1      0      0       0      13h
cpol-restrict-image-registry-pod-only   1      1      0      0       0      3s
Mutation rules
Another type is the mutation rule, which allows enriching a component. For instance, we can add a default annotation to all our components. This can be useful when coupled with another tool like kube-downscaler to scale down pods automatically during the night (as explained in another article 😉).
Let's create the new policy
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-label-downscaler
  annotations:
    pod-policies.kyverno.io/autogen-controllers: none
spec:
  rules:
    - name: add-label-downscaler
      match:
        any:
          - resources:
              kinds:
                - Deployment
              namespaces:
                - demo-kyverno
      mutate:
        patchStrategicMerge:
          spec:
            template:
              metadata:
                annotations:
                  downscaler/downtime: "Mon-Fri 17:30-18:00 CET"
Now, deploy it and create a new deployment
# Deploy policy and deployment
$ kubectl apply -f demo-add-label-policy.yml
clusterpolicy.kyverno.io/add-label-downscaler configured
$ kubectl apply -f demo-deployment.yml -n demo-kyverno
deployment.apps/go-sample created
# Check deployment
$ kubectl describe deploy go-sample -n demo-kyverno
Name:        go-sample
Namespace:   demo-kyverno
[...]
Annotations: deployment.kubernetes.io/revision: 1
             policies.kyverno.io/last-applied-patches: add-label-downscaler.add-label-downscaler.kyverno.io: added /spec/template/metadata/annotations
[...]
Pod Template:
  Labels:      app=go-sample
  Annotations: downscaler/downtime: Mon-Fri 17:30-18:00 CET
  Containers:
   go-sample:
    Image: registry.gitlab.com/yodamad-trash/go-sample/go-sample
[...]
We can see that the pod template of the deployment has been automatically annotated with downscaler/downtime: Mon-Fri 17:30-18:00 CET.
Now we can check that this annotation is taken into account by the downscaler by watching its logs
$ kubectl logs -f kube-downscaler-68c68d4f56-7242z -n downscaler
[...]
2023-07-17 17:30:56,483 INFO: Scaling down Deployment demo-kyverno/go-sample from 1 to 0 replicas (uptime: always, downtime: Mon-Fri 17:30-18:00 CET)
Generation rules
The last type I'll cover in this article allows generating components automatically. This can be useful when you need some common components in all (or at least almost all) namespaces, for instance. Here, I'll illustrate that with a secret used to pull images from a private registry.
# Create a new namespace
$ kubectl create ns demo-generation-rule-kyverno
namespace/demo-generation-rule-kyverno created
# Try to deploy the same deployment as before
$ kubectl apply -f demo-deployment.yml -n demo-generation-rule-kyverno
deployment.apps/go-sample created
# The pod fails to start as it's a private registry and there's no secret to access it
$ kubectl get po
NAME                        READY   STATUS             RESTARTS   AGE
go-sample-ccfc648f9-fzm9t   0/1     ImagePullBackOff   0          24s
# Remove failing deployment & namespace
$ kubectl delete deploy go-sample
$ kubectl delete ns demo-generation-rule-kyverno
Now, let's create the rule that will automatically generate a secret in the newly created namespace
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: gitlab-secret
spec:
  rules:
    - name: gitlab-secret
      match:
        any:
          - resources:
              kinds:
                - Namespace
      generate:
        synchronize: true
        apiVersion: v1
        kind: Secret
        name: gitlabcred
        # generate the resource in the new namespace
        namespace: "{{request.object.metadata.name}}"
        data:
          kind: Secret
          type: kubernetes.io/dockerconfigjson
          data:
            .dockerconfigjson: eyJhd...
Now, we can deploy this policy and try to recreate the previous namespace and deployment
# Deploy new policy
$ kubectl apply -f demo-generate-secret-policy.yml
clusterpolicy.kyverno.io/gitlab-secret configured
# Recreate namespace
$ kubectl create ns demo-generation-rule-kyverno
namespace/demo-generation-rule-kyverno created
# Check if secret automatically created
$ kubectl get secrets -n demo-generation-rule-kyverno
NAME         TYPE                             DATA   AGE
gitlabcred   kubernetes.io/dockerconfigjson   1      11m
# Deploy
$ kubectl apply -f demo-deployment.yml
deployment.apps/go-sample created
# Check everything is ok
$ kubectl get po
NAME                        READY   STATUS    RESTARTS   AGE
go-sample-ccfc648f9-c88gs   1/1     Running   0          10m
Use cases
To summarize, here are some of the use cases that can be covered by rules:
- Enforce controls on components, either when creating them or on existing ones, in a blocking or non-blocking way
- Enrich components when deploying them, to standardize them or enable some global features
- Generate components automatically to secure your cluster, configure it easily or monitor it
Conclusion
Kyverno is a very complete tool that covers many use cases: it helps you keep control of your Kubernetes cluster, easily normalize deployments, and automatically set up some components.
It is a great help for setting up a cluster and establishing good practices.
The documentation is quite complete and understandable, which makes onboarding and first steps easy. What could perhaps be added is a full specification for writing your own rules in a more advanced way, but the many samples in the documentation already help a lot.
🙏 As usual, thanks to OVHcloud for supporting this article by providing me with environments on their platform to test and illustrate.
At the time of this article, Kyverno was in version 1.10
Sources are available on gitlab.com