I'm running Kubernetes 1.8.1-gke.1 on GCP. I created a CronJob with kubectl
apply -f cronjob.yaml. Some required environment variables were missing, so the
container failed to start up. I tried a few more times with different YAML
configurations, to no avail.
I deleted the CronJob with kubectl delete cronjob <name>, but containers keep
being created and retried. I also tried creating the busybox CronJob from the
documentation under the same name as my original CronJob, hoping a successful
run would clear things up, but that left the previous pods alone and just
scheduled more.
I currently have over 200 containers in Pending, Error, or CrashLoopBackOff
state related to this. Deleting the pods just recreates them, and I'm now
getting an "insufficient pods (2)" error.
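My working assumption is that the leftover pods are owned by Job objects the
CronJob created, which would explain why deleting the pods directly just
recreates them. Would deleting those Jobs be the right cleanup? Something like
this (the job name shown is hypothetical):

    # list the Jobs left behind in the namespace
    kubectl get jobs --namespace payment
    # delete each leftover Job; its pods should be removed along with it
    kubectl delete job payment-foo-1508300400 --namespace payment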
The cronjob.yaml is below (how the ${...} variables are filled in is sketched
after the YAML). Is there any way to permanently stop these from retrying?
Note that even after running kubectl apply -f ..., it reports that cronjob
<name> was configured, yet kubectl get cronjobs run immediately afterwards
lists nothing.
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: payment-${CRON_NAME}
  namespace: payment
  labels:
    app: payment
spec:
  schedule: "${CRON_SCHEDULE}"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: payment-${CRON_NAME}
            image: "${GCP_IMAGE}:${BUILD_NUMBER}"
            args:
            - ./${CRON_NAME}
            volumeMounts:
            - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
              name: ${DEFAULT_TOKEN_NAME}
              readOnly: true
          restartPolicy: OnFailure
          volumes:
          - name: ${DEFAULT_TOKEN_NAME}
            secret:
              defaultMode: 420
              secretName: ${DEFAULT_TOKEN_NAME}
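For context, the ${...} placeholders are substituted before the manifest is
applied; roughly the following, though envsubst and the values shown are just a
sketch of the idea, not the exact pipeline:

    # example values only; the real ones come from the CI environment
    export CRON_NAME=foo CRON_SCHEDULE="*/5 * * * *"
    export GCP_IMAGE=gcr.io/my-project/payment BUILD_NUMBER=123
    export DEFAULT_TOKEN_NAME=default-token-abcde
    # render the template and apply it
    envsubst < cronjob.yaml | kubectl apply -f -
    # check in the payment namespace, where the CronJob should end up
    kubectl get cronjobs --namespace payment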