Running and deploying the controller

To test out the controller, we can run it locally against the cluster. Before we do so, though, we’ll need to install our CRDs, as per the quick start. This will regenerate the CRD manifests with controller-tools if needed, and then apply them to the cluster:

make install
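
If you want to confirm the CRD actually landed in the cluster, you can look it up by name, which is the plural resource name joined with the API group used in this tutorial:

kubectl get crd cronjobs.batch.tutorial.kubebuilder.io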

Now that we’ve installed our CRDs, we can run the controller against our cluster. The controller will run with whatever credentials your kubeconfig uses to connect to the cluster, so we don’t need to worry about RBAC just yet.
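
Because the controller runs with your local kubeconfig, it’s worth double-checking which cluster you’re pointed at before starting it:

kubectl config current-context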

In a separate terminal, run:

make run

You should see logs from the controller about starting up, but it won’t do anything just yet.

At this point, we need a CronJob to test with. Let’s write a sample that runs every minute to config/samples/batch_v1_cronjob.yaml, and use that:

apiVersion: batch.tutorial.kubebuilder.io/v1
kind: CronJob
metadata:
  name: cronjob-sample
spec:
  schedule: "*/1 * * * *"
  startingDeadlineSeconds: 60
  concurrencyPolicy: Allow # explicitly specified, but Allow is also the default.
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            args:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure

Then create the CronJob on the cluster:

kubectl create -f config/samples/batch_v1_cronjob.yaml

At this point, you should see a flurry of activity. If you watch the changes, you should see your CronJob running and updating its status:

kubectl get cronjob.batch.tutorial.kubebuilder.io -o yaml
kubectl get job
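
If you’d rather stream updates than re-run those commands, kubectl’s watch flag works here too. For example, to see a new Job appear each time the schedule fires:

kubectl get jobs -w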

Now that we know it’s working, we can run it in the cluster. Stop the make run invocation, and run the following, passing the same IMG to both commands so the deployment manifest references the image you just pushed:

make docker-build docker-push IMG=<some-registry>/controller
make deploy IMG=<some-registry>/controller
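
The generated Kustomize config deploys the manager into its own namespace. The names below assume the scaffolded defaults, where <project> stands for your project name, so substitute accordingly:

# <project>-system and <project>-controller-manager are the scaffolded defaults
kubectl get pods -n <project>-system
kubectl logs -n <project>-system deployment/<project>-controller-manager -c manager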

If we list cronjobs again like we did before, we should see the controller functioning again!