As I'm learning Kubernetes, I create plenty of throwaway resources that I need to keep around to continue my journey. This is not good for our planet, as my workloads run for nothing when I'm not working on them. Fortunately, a few weeks ago at work, we discovered a nice little tool to optimize Kubernetes workload usage: kube-green. kube-green automatically shuts down your resources and, if needed, automatically restarts them, based on a cron-like configuration.
In this article, I'll quickly show how I've applied this tool to my demo resources to reduce my computing time and avoid useless consumption. As the kube-green documentation is quite good, I'll just go through the main steps to set it up in your cluster. Be careful: it needs cert-manager to be installed to work. I've explained this in a previous article if needed.
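Before going further, you can quickly verify that cert-manager is already up. This is a minimal check, assuming cert-manager was installed in its default cert-manager namespace (as with the official manifests):

```shell
# List the cert-manager pods; they should all be Running
# (assumes the default "cert-manager" namespace)
kubectl get pods -n cert-manager
```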
First, install the kube-green elements into your cluster. In my case, I chose a classic kubectl command, as I don't (yet) know kustomize or how operators work (the other ways to install kube-green):
kubectl apply -f https://github.com/kube-green/kube-green/releases/latest/download/kube-green.yaml
This creates, in a dedicated namespace, the resources kube-green needs to work securely:
namespace/kube-green configured
customresourcedefinition.apiextensions.k8s.io/sleepinfos.kube-green.com configured
serviceaccount/kube-green-controller-manager configured
role.rbac.authorization.k8s.io/kube-green-leader-election-role configured
clusterrole.rbac.authorization.k8s.io/kube-green-manager-role configured
clusterrole.rbac.authorization.k8s.io/kube-green-metrics-reader configured
clusterrole.rbac.authorization.k8s.io/kube-green-proxy-role configured
rolebinding.rbac.authorization.k8s.io/kube-green-leader-election-rolebinding configured
clusterrolebinding.rbac.authorization.k8s.io/kube-green-manager-rolebinding configured
clusterrolebinding.rbac.authorization.k8s.io/kube-green-proxy-rolebinding configured
configmap/kube-green-manager-config configured
service/kube-green-controller-manager-metrics-service configured
service/kube-green-webhook-service configured
deployment.apps/kube-green-controller-manager configured
certificate.cert-manager.io/kube-green-serving-cert configured
issuer.cert-manager.io/kube-green-selfsigned-issuer configured
validatingwebhookconfiguration.admissionregistration.k8s.io/kube-green-validating-webhook-configuration configured
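Before creating any SleepInfo, you can confirm the operator is actually up. The deployment name below comes straight from the install output above:

```shell
# The controller-manager deployment should report 1/1 ready
kubectl -n kube-green get deployment kube-green-controller-manager
```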
The important resource is SleepInfo. It's this resource that we'll have to configure to enable kube-green in our namespaces.
$ kubectl explain SleepInfo
KIND:     SleepInfo
VERSION:  kube-green.com/v1alpha1

DESCRIPTION:
     SleepInfo is the Schema for the sleepinfos API

FIELDS:
   apiVersion   <string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

   kind         <string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

   metadata     <Object>
     Standard object's metadata. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

   spec         <Object>
     SleepInfoSpec defines the desired state of SleepInfo

   status       <Object>
     SleepInfoStatus defines the observed state of SleepInfo
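To drill down into the fields you can actually set in the spec (the same ones used in the manifests below, such as weekdays, sleepAt, wakeUpAt, and timeZone), kubectl explain accepts a field path:

```shell
# Show the documented fields of the SleepInfo spec
kubectl explain sleepinfo.spec
```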
It's possible to configure as many SleepInfo resources as needed, but there are some limitations:
- SleepInfo only covers (for now) Deployment & CronJob resources. Other types of resources won't be impacted
- SleepInfo has a namespace scope, so it requires creating one per namespace to which you want to apply the stop/restart behavior
- A SleepInfo configuration cannot handle several periods in a day in the same instance. You need to configure one per period
In my use case, I have 2 namespaces that I want to configure, and 2 periods for each, as I know I won't work on my demo cluster during my working hours. So I want to shut down my resources from midnight to noon and from 2 pm to 7 pm, which means creating 2 SleepInfo resources per namespace. One for the morning:
apiVersion: kube-green.com/v1alpha1
kind: SleepInfo
metadata:
  name: sleep-funwith-morning
  namespace: fun-with
spec:
  weekdays: "*"
  sleepAt: "00:30"
  wakeUpAt: "11:30"
  timeZone: "Europe/Paris"
and one for the afternoon:
apiVersion: kube-green.com/v1alpha1
kind: SleepInfo
metadata:
  name: sleep-funwith-afternoon
  namespace: fun-with
spec:
  weekdays: "*"
  sleepAt: "14:30"
  wakeUpAt: "19:00"
  timeZone: "Europe/Paris"
Now, I can apply them to my cluster. As I specified the namespace in the metadata, I can apply them without targeting a namespace on the command line. If you want to reuse the same description for several namespaces, you just have to specify the target with the -n parameter instead.
$ kubectl apply -f sleep-info-funwith-morning.yml
$ kubectl apply -f sleep-info-funwith-afternoon.yml
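For the -n approach, a single manifest with no metadata.namespace field can be applied to each target namespace in turn. The filename here is hypothetical, and dns-management is just an example taken from my cluster:

```shell
# Apply a namespace-less SleepInfo manifest to a specific namespace
kubectl apply -f sleep-info-morning.yml -n dns-management
```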
Let's check that everything is OK by listing the SleepInfo components in the cluster:
$ kubectl get SleepInfo -A
NAMESPACE NAME AGE
dns-management sleep-dns-afternoon 2s
dns-management sleep-dns-morning 6s
fun-with sleep-funwith-afternoon 15s
fun-with sleep-funwith-morning 20s
And finally, at 00:30, I check that my deployments are down:
$ kubectl get deployments -A
NAMESPACE NAME READY UP-TO-DATE AVAILABLE AGE
dns-management external-dns 0/0 0 0 14d
fun-with hashnode 0/0 0 0 3d6h
In conclusion, thanks to kube-green, it's really easy to optimize your Kubernetes workloads. This can be useful, for instance, for development or test environments that don't need to run outside working hours.
The official documentation is quite complete. All sources for this demo are available here.