Hi Wilson,
The behavior of a Deployment doesn't fit the way Spark executor pods
are run and managed. For example, executor pods are created and deleted
dynamically at the driver's request, and they normally run to
completion. A Deployment, by contrast, assumes uniformity and statelessness
of the replicas it manages.
Hello,
I'm using Spark on YARN in cluster mode.
Is there a way to avoid copying the directory /etc/hadoop/conf to the
machine where I run spark-submit?
Regards,
Yann.
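For background on why that copy is usually needed: in YARN mode, spark-submit discovers the ResourceManager and HDFS through the Hadoop client configuration, which it locates via the HADOOP_CONF_DIR (or YARN_CONF_DIR) environment variable. A minimal sketch, assuming the conf directory has been made available at that path on the submitting machine:

```shell
# spark-submit reads yarn-site.xml, core-site.xml, etc. from this directory
# to find the cluster; without it, --master yarn has nothing to connect to.
export HADOOP_CONF_DIR=/etc/hadoop/conf
spark-submit --master yarn --deploy-mode cluster ...
```

So some copy of (or network access to) the cluster's client configs on the submitting machine is generally required; a shared mount or config-distribution tooling can stand in for a manual copy.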
Hi,
I've been playing around with Spark Kubernetes deployments over the past week
and I'm curious to know why Spark deploys as a driver pod that creates more
worker pods.
I've read that it's normal to use Kubernetes Deployments to create a
distributed service, so I am wondering why Spark doesn't just use a
Deployment for its executors.