I'm running pyspark on YARN in client deploy mode with dynamic allocation
enabled:

 pyspark --master yarn --deploy-mode client --executor-memory 6g \
   --executor-cores 4 --driver-memory 4g
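
For reference, here is a rough sketch of the same invocation with the dynamic
allocation flags spelled out (the min/max executor counts below are just
placeholders, not my actual settings):

 # dynamic allocation on YARN also needs the external shuffle service enabled
 pyspark --master yarn --deploy-mode client \
   --executor-memory 6g --executor-cores 4 --driver-memory 4g \
   --conf spark.dynamicAllocation.enabled=true \
   --conf spark.shuffle.service.enabled=true \
   --conf spark.dynamicAllocation.minExecutors=2 \
   --conf spark.dynamicAllocation.maxExecutors=20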

The node where I'm running pyspark has 4GB of memory, but I keep running out
of memory on that node. Since the job itself runs on YARN, it isn't clear to
me why memory consumption on the client node is so high. Can someone please
let me know whether this is expected?


Thanks!


