Setting the yarn.executor.memoryOverhead property doesn't seem to make much of a difference.
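In case it helps anyone reproduce this, here is a minimal sketch of the kind of setup I mean (Spark 1.x on YARN; the app name, memory sizes, and sample data are placeholders, not my actual job, and the full key name is spark.yarn.executor.memoryOverhead):

    # Minimal sketch -- values are illustrative, not my real configuration.
    from pyspark import SparkConf, SparkContext

    conf = (SparkConf()
            .setAppName("broadcast-test")  # hypothetical app name
            # the YARN master is assumed to come from spark-submit
            .set("spark.executor.memory", "4g")
            # extra off-heap headroom for the YARN container, in MB:
            .set("spark.yarn.executor.memoryOverhead", "1024"))
    sc = SparkContext(conf=conf)

    # A stand-in for the mid-sized object being broadcast:
    lookup = {i: str(i) for i in range(1000000)}
    bc = sc.broadcast(lookup)
    print(sc.parallelize([1, 2, 3]).map(lambda x: bc.value[x]).collect())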
Has anyone else come across this before?
- Amey
BTW, my spark.python.worker.reuse setting is "true".
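And one way to double-check what the live context actually picked up (sketch only; the app name is a placeholder):

    # Sketch: read the effective setting back from a running SparkContext.
    from pyspark import SparkContext

    sc = SparkContext(appName="conf-check")  # hypothetical app name
    # SparkConf.get() takes a default for keys that were never set:
    print(sc.getConf().get("spark.python.worker.reuse", "not set"))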