There can be any number of reasons to run a once-off task on every node in a Kubernetes cluster, such as reading some info off each node or running a quick test. But doing so is not straightforward: none of the existing resources are quite capable of it. A Job is designed to run to completion, but there's no way to ensure one runs on every node. A DaemonSet is designed to run on every node, but there's no way to ensure it runs to completion only once: if the task exits after running once, the DaemonSet's restart policy (which must be Always) will cause it to run again.
There is an open issue, CronJob daemonset, for this feature, but until it lands I needed a solution. There are a number of ways to go about this. I wanted to keep the solution in Kubernetes-land, so I chose to start with the DaemonSet resource and do some scriptery around it such that it only runs once.
At the time of writing, this was done against Kubernetes v1.6.0 (via Minikube). The approach is:
- Create the DaemonSet
- Wait for the DaemonSet Pods to run
- Wait for the script running in the DaemonSet Pods to complete
- Delete the DaemonSet
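The four steps above can be sketched as a small wrapper script. This is a sketch under assumptions, not the gist's exact contents: it assumes `kubectl` is configured for the cluster and that `daemonset.yaml` defines a DaemonSet named `cat-etc-hosts`. Step 3 depends on how the task signals completion, so it is elided here.

```shell
#!/bin/sh
# Sketch of the four steps as shell functions. Assumes kubectl is
# configured and daemonset.yaml defines a DaemonSet named cat-etc-hosts.

# Step 2: check whether a pod is ready on every targeted node by
# comparing the DaemonSet's desired and ready counts.
rollout_complete() {
  ds="$1"
  desired=$(kubectl get daemonset "$ds" \
    -o jsonpath='{.status.desiredNumberScheduled}')
  ready=$(kubectl get daemonset "$ds" \
    -o jsonpath='{.status.numberReady}')
  [ -n "$desired" ] && [ "$desired" -gt 0 ] && [ "$desired" -eq "$ready" ]
}

run_once() {
  ds="$1"
  kubectl create -f daemonset.yaml                  # step 1
  until rollout_complete "$ds"; do                  # step 2
    echo "waiting for $ds pods to run..."
    sleep 2
  done
  # Step 3 (waiting for the task itself to finish) depends on how the
  # task reports completion, so it is omitted from this sketch.
  kubectl delete daemonset "$ds"                    # step 4
}

# Invoked as: run_once cat-etc-hosts
```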
Create a Kubernetes cluster with Minikube, clone the gist with example code, and run it.
```
$ minikube start
Starting local Kubernetes v1.6.0 cluster...
Starting VM...
SSH-ing files into VM...
Setting up certs...
Starting cluster components...
Connecting to cluster...
Setting up kubeconfig...
Kubectl is now configured to use the cluster.
$ git clone https://gist.github.com/56d1b2a01daebd9691c62cdcadb1574b.git run-once-daemonset
$ cd run-once-daemonset
$ chmod u+x run-once-daemonset.sh cat-etc-hosts.sh
$ ./run-once-daemonset.sh
daemonset "cat-etc-hosts" created
waiting for cat-etc-hosts pods to run...
waiting for cat-etc-hosts daemonset to complete
daemonset "cat-etc-hosts" deleted
```
The Run Once DaemonSet
This is a vanilla DaemonSet that mounts the node’s /etc/hosts file for the script to read from.
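Such a `daemonset.yaml` might look roughly like this. It's a sketch, not the gist's exact manifest: the `apiVersion` matches the v1.6.0 cluster used here (newer clusters would use `apps/v1`, which also requires a `selector`), and the image name is a placeholder.

```yaml
apiVersion: extensions/v1beta1   # apps/v1 on current clusters
kind: DaemonSet
metadata:
  name: cat-etc-hosts
spec:
  template:
    metadata:
      labels:
        app: cat-etc-hosts
    spec:
      containers:
      - name: cat-etc-hosts
        image: my-docker-id/cat-etc-hosts   # placeholder image name
        volumeMounts:
        - name: etc-hosts
          mountPath: /etc-hosts
          readOnly: true
      volumes:
      - name: etc-hosts
        hostPath:
          path: /etc/hosts   # the node's hosts file, mounted read-only
```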
This is a simple task implemented as a script that exits when it receives the TERM signal (which Kubernetes sends to the pod when the DaemonSet is deleted).
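The task script could be shaped like the sketch below. This is not necessarily the gist's exact contents: `/etc-hosts` is wherever the manifest mounts the node's hosts file, and the entry point is shown as a comment so the functions stand on their own.

```shell
#!/bin/sh
# Sketch of cat-etc-hosts.sh: run the one-off task, then block until
# Kubernetes sends TERM (i.e. when the DaemonSet is deleted).
# HOSTS_FILE defaults to where the manifest mounts the node's /etc/hosts.
HOSTS_FILE="${HOSTS_FILE:-/etc-hosts}"

run_task() {
  # The actual once-per-node work: print the node's hosts file.
  cat "$HOSTS_FILE"
}

wait_for_term() {
  trap 'exit 0' TERM
  # Sleep in short intervals so the TERM trap fires promptly.
  while :; do sleep 1; done
}

# Entry point (shown as a comment here):
# run_task && wait_for_term
```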
You could also build your own Docker image and substitute its image name in the daemonset.yaml file.
```
$ docker login --username my-docker-id
$ docker build -t my-docker-id/cat-etc-hosts .
$ docker push my-docker-id/cat-etc-hosts
```
The script ties it all together and waits for the task to complete before deleting the DaemonSet.
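One way the wrapper could detect that the task has completed (an assumption on my part, not necessarily how the gist implements it) is to poll each pod's logs for a sentinel line the task prints when its work is done:

```shell
# Returns success once every pod matching the app label has printed the
# given sentinel line to its logs. Label and sentinel are illustrative.
all_pods_done() {
  app_label="$1"; sentinel="$2"
  pods=$(kubectl get pods -l app="$app_label" \
    -o jsonpath='{.items[*].metadata.name}')
  [ -n "$pods" ] || return 1
  for pod in $pods; do
    kubectl logs "$pod" | grep -q "$sentinel" || return 1
  done
}

# Usage inside the wrapper:
#   until all_pods_done cat-etc-hosts "task complete"; do
#     echo "waiting for cat-etc-hosts daemonset to complete"
#     sleep 2
#   done
```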
There are a bunch of ways to shave this yak.
- Wait for CronJob daemonset to become a reality.
- Use config management like Ansible to run through your node inventory.
- Use Fabfiles and Kubernetes: Automating SSH with Kubernetes Nodes.
- Iterate over your nodes and template Job manifests to target every node.
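For the last option, the templated Job manifest might look something like this sketch, where `$NODE_NAME` would be substituted while iterating over the output of `kubectl get nodes` (names are illustrative):

```yaml
# Illustrative Job manifest, templated once per node.
apiVersion: batch/v1
kind: Job
metadata:
  name: cat-etc-hosts-$NODE_NAME
spec:
  template:
    spec:
      nodeName: $NODE_NAME   # pin this Job's pod to one specific node
      restartPolicy: Never
      containers:
      - name: cat-etc-hosts
        image: my-docker-id/cat-etc-hosts   # placeholder image name
```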
To me, a DaemonSet felt like the most natural fit. Keeping the solution in Kubernetes-land also keeps everything part of the Kubernetes audit trail.