The spark-init ConfigMap holds the properties for the init-container that downloads remote dependencies. The Kubernetes submission client run by spark-submit is responsible for creating that ConfigMap and mounting it as a volume in the driver pod. Can you provide the command you used to run the job?
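For context, in Spark 2.3.x the init-container (and its `*-init-config` ConfigMap) is only added when the submission includes remote dependencies, e.g. `http://` or `hdfs://` URIs in `--jars`/`--files`. A minimal sketch of such a submission is below; the master URL, image, class, and jar locations are all placeholders, not values taken from this thread:

```shell
# Sketch of a cluster-mode spark-submit on Kubernetes (Spark 2.3.x).
# The remote --jars URI is what triggers the init-container and the
# driver's <pod-name>-init-config ConfigMap. All names are hypothetical.
spark-submit \
  --master k8s://https://<api-server-host>:6443 \
  --deploy-mode cluster \
  --name my-pod \
  --class com.example.MyApp \
  --conf spark.kubernetes.namespace=spark \
  --conf spark.kubernetes.container.image=<repo>/spark:2.3.1 \
  --jars https://example.com/deps/extra-lib.jar \
  https://example.com/jars/my-app.jar
```

Comparing the real command against this shape (in particular, which URIs are remote) should show whether the init-container path is being exercised at all.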
On Wed, Sep 26, 2018 at 2:36 PM purna pradeep <purna2prad...@gmail.com> wrote:

> Hello,
>
> We're running Spark 2.3.1 on Kubernetes v1.11.0, and our driver pods from
> k8s are getting stuck in the initializing state like so:
>
> NAME                                            READY  STATUS    RESTARTS  AGE
> my-pod-fd79926b819d3b34b05250e23347d0e7-driver  0/1    Init:0/1  0         18h
>
> And from *kubectl describe pod*:
>
> *Warning  FailedMount  9m (x128 over 4h)*  kubelet, 10.47.96.167  Unable
> to mount volumes for pod
> "my-pod-fd79926b819d3b34b05250e23347d0e7-driver_spark(1f3aba7b-c10f-11e8-bcec-1292fec79aba)":
> timeout expired waiting for volumes to attach or mount for pod
> "spark"/"my-pod-fd79926b819d3b34b05250e23347d0e7-driver". list of unmounted
> volumes=[spark-init-properties]. list of unattached
> volumes=[spark-init-properties download-jars-volume download-files-volume
> spark-token-tfpvp]
> *Warning  FailedMount  4m (x153 over 4h)*  kubelet, 10.47.96.167
> MountVolume.SetUp failed for volume "spark-init-properties" : configmaps
> "my-pod-fd79926b819d3b34b05250e23347d0e7-init-config" not found
>
> From what I can see in *kubectl get configmap*, the init ConfigMap for
> the driver pod isn't there.
>
> Am I correct in assuming that since the ConfigMap isn't being created, the
> driver pod will never start (hence it is stuck in init)?
>
> Where does the init ConfigMap come from?
>
> Why would it not be created?
>
> Please suggest.
>
> Thanks,
> Purna