Hello,

We're running Spark 2.3.1 on Kubernetes v1.11.0, and our driver pods are
getting stuck in the initializing state, like so:

NAME                                             READY     STATUS     RESTARTS   AGE
my-pod-fd79926b819d3b34b05250e23347d0e7-driver   0/1       Init:0/1   0          18h


And from *kubectl describe pod*:

Warning  FailedMount  9m (x128 over 4h)  kubelet, 10.47.96.167  Unable to
mount volumes for pod
"my-pod-fd79926b819d3b34b05250e23347d0e7-driver_spark(1f3aba7b-c10f-11e8-bcec-1292fec79aba)":
timeout expired waiting for volumes to attach or mount for pod
"spark"/"my-pod-fd79926b819d3b34b05250e23347d0e7-driver". list of unmounted
volumes=[spark-init-properties]. list of unattached
volumes=[spark-init-properties download-jars-volume download-files-volume
spark-token-tfpvp]
Warning  FailedMount  4m (x153 over 4h)  kubelet, 10.47.96.167
MountVolume.SetUp failed for volume "spark-init-properties" : configmaps
"my-pod-fd79926b819d3b34b05250e23347d0e7-init-config" not found

From what I can see in *kubectl get configmap*, the init config map for the
driver pod isn't there.
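
To confirm it's missing, the configmap can be queried directly by name in
the "spark" namespace (the namespace comes from the events above):

    kubectl get configmap my-pod-fd79926b819d3b34b05250e23347d0e7-init-config -n spark

which returns "Error from server (NotFound)".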

Am I correct in assuming that, since the configmap isn't being created, the
driver pod will never start (hence it's stuck in init)?

Where does the init config map come from?

Why would it not be created?


Any suggestions would be appreciated.

Thanks,
Purna
