Is SPARK_EXECUTOR_INSTANCES the total number of executors across the cluster or
the number of executors per slave node?

Is spark.executor.instances an actual config option?  I found that in a commit, 
but it's not in the docs.
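
To make that concrete, I have been treating the following as two ways of asking
for the same thing; the value 4 and the jar name are just placeholders:

    # In conf/spark-env.sh
    export SPARK_EXECUTOR_INSTANCES=4

    # Or on the command line, if spark.executor.instances is really honored
    ./bin/spark-submit --master yarn-client \
      --conf spark.executor.instances=4 <my-app.jar>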

What is the difference between spark.yarn.executor.memoryOverhead and 
spark.executor.memory?  Same question for the 'driver' variants, but I assume 
it's the same answer.
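
To be concrete, I mean the settings as they would show up in a submit command
like this (the sizes and jar name are placeholders, and I am guessing the
overhead value is a plain number of MB):

    ./bin/spark-submit --master yarn-client \
      --conf spark.executor.memory=4g \
      --conf spark.yarn.executor.memoryOverhead=512 \
      <my-app.jar>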

Is there a spark.driver.memory option that's undocumented, or do you have to 
use the environment variable SPARK_DRIVER_MEMORY?
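
In other words, I am trying to work out which of these two routes is the
supported one (2g and the jar name are placeholders):

    # Environment variable route
    export SPARK_DRIVER_MEMORY=2g

    # Config property route, if spark.driver.memory is actually read
    ./bin/spark-submit --conf spark.driver.memory=2g <my-app.jar>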

What config option or environment variable do I need to set to get the 
interactive pyspark shell to pick up the YARN classpath?  The ones that work 
for spark-shell and spark-submit don't seem to work for pyspark.
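
For reference, this is roughly how I launch the interactive shell; the Hadoop
config path is a placeholder for my setup:

    # Placeholder path to my Hadoop/YARN configuration
    export HADOOP_CONF_DIR=/etc/hadoop/conf
    MASTER=yarn-client ./bin/pyspark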

Thanks in advance.

Greg
