Re: Spark Kubernetes Architecture: Deployments vs Pods that create Pods

2019-01-29 Thread Yinan Li
Hi Wilson, The behavior of a Deployment doesn't fit the way Spark executor pods are run and managed. For example, executor pods are created and deleted dynamically per requests from the driver, and they normally run to completion. A Deployment assumes uniformity and statelessness of the pods it manages, which doesn't match this model.
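The driver-creates-executors model described above shows up in an ordinary cluster-mode submission. A minimal sketch (the API server address, container image, and jar path are placeholders, not values from this thread):

```shell
# Cluster-mode submission to Kubernetes: spark-submit creates a driver
# pod, and the driver itself then requests executor pods from the
# Kubernetes API and tears them down when the job finishes.
spark-submit \
  --master k8s://https://<api-server-host>:6443 \
  --deploy-mode cluster \
  --name spark-pi \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.executor.instances=2 \
  --conf spark.kubernetes.container.image=<your-spark-image> \
  local:///path/to/spark-examples.jar
```

Because the driver owns the executor lifecycle (including scaling them up and down with dynamic allocation), a controller that keeps N identical replicas alive, which is what a Deployment does, would fight against it.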

How to avoid copying hadoop conf to submit on yarn

2019-01-29 Thread Yann Moisan
Hello, I'm using Spark on YARN in cluster mode. Is there a way to avoid copying the directory /etc/hadoop/conf to the machine where I run spark-submit? Regards, Yann.
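For context on the question: spark-submit locates the YARN cluster through the Hadoop client configuration, which it reads from the directory named by HADOOP_CONF_DIR (or YARN_CONF_DIR). A sketch of two common workarounds; the paths and hostnames below are placeholders, not values from this thread:

```shell
# Option 1: point HADOOP_CONF_DIR at the config wherever it already
# lives (e.g. a shared mount) -- it does not have to be a local copy
# of /etc/hadoop/conf on the submitting machine.
export HADOOP_CONF_DIR=/mnt/shared/hadoop-conf

# Option 2: for simple setups, pass individual Hadoop/YARN settings on
# the command line via spark.hadoop.* properties, which Spark injects
# into the Hadoop Configuration it builds, avoiding a full conf copy.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --conf spark.hadoop.yarn.resourcemanager.address=<rm-host>:8032 \
  --conf spark.hadoop.fs.defaultFS=hdfs://<namenode-host>:8020 \
  --class org.example.MyApp \
  myapp.jar
```

Option 2 only covers settings you enumerate explicitly, so a cluster with non-trivial configuration (HA resource managers, security settings) generally still needs a full conf directory reachable from the client.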

Spark Kubernetes Architecture: Deployments vs Pods that create Pods

2019-01-29 Thread WILSON Frank
Hi, I've been playing around with Spark Kubernetes deployments over the past week, and I'm curious to know why Spark deploys as a driver pod that creates more worker pods. I've read that it's normal to use Kubernetes Deployments to create a distributed service, so I am wondering why Spark doesn't just use one.