Hi all,

If we schedule a spark job on k8s, how are volume mappings handled?

In client mode I would expect that the driver's volumes have to be mapped
manually in the pod template, while executor volumes are attached dynamically
based on the submit parameters. Right...?
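
For the executor side I mean the submit-time volume properties, which (as far
as I can tell from the Spark on Kubernetes docs) follow this pattern:

  spark.kubernetes.executor.volumes.[VolumeType].[VolumeName].mount.path
  spark.kubernetes.executor.volumes.[VolumeType].[VolumeName].mount.readOnly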

In cluster mode I would expect that volumes for both the driver and the
executors are taken from the submit command and attached to the pods
accordingly. Right...?
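
For example, something like this is what I have in mind (just a minimal
sketch; the volume name "data", the claim "my-pvc", and the mount path are
placeholders):

  spark-submit \
    --master k8s://https://<k8s-apiserver>:6443 \
    --deploy-mode cluster \
    --conf spark.kubernetes.driver.volumes.persistentVolumeClaim.data.mount.path=/data \
    --conf spark.kubernetes.driver.volumes.persistentVolumeClaim.data.options.claimName=my-pvc \
    --conf spark.kubernetes.executor.volumes.persistentVolumeClaim.data.mount.path=/data \
    --conf spark.kubernetes.executor.volumes.persistentVolumeClaim.data.options.claimName=my-pvc \
    ...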

Any hints appreciated,

Best,
Meikel
