Hi Maria,

SPARK_MEM is actually deprecated because it was too general; the reason
it worked is that SPARK_MEM applies to everything (drivers, executors,
masters, workers, history servers...). In favor of more specific
configs, we broke this down into SPARK_DRIVER_MEMORY,
SPARK_EXECUTOR_MEMORY, and other environment variables and configs.
Note that while "spark.executor.memory" is an equivalent config,
"spark.driver.memory" is only used for YARN.
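
For example, in standalone mode these can go in conf/spark-env.sh; the
2g/4g values below are just illustrative, so pick sizes that fit your
machines:

  # conf/spark-env.sh (sketch; values are placeholders)
  export SPARK_DRIVER_MEMORY=2g     # heap for the driver JVM
  export SPARK_EXECUTOR_MEMORY=4g   # heap for each executor JVM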

If you are using Spark 1.0+, the recommended way of specifying driver
memory is through the "--driver-memory" command line argument of
spark-submit. The equivalent also holds for executor memory (i.e.
"--executor-memory"). That way you don't have to wrangle the many
overlapping configs and environment variables across the deploy modes.
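
For instance (the class name, master URL, jar path, and memory sizes
below are placeholders; substitute your own):

  ./bin/spark-submit \
    --class com.example.MyApp \
    --master spark://master:7077 \
    --driver-memory 2g \
    --executor-memory 4g \
    /path/to/my-app.jar

This sets the driver's memory before its JVM launches, which is why the
flag works where setting driver memory from inside your application
would not: by the time your code constructs a SparkConf, the driver JVM
has already started with its heap size fixed.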

-Andrew


2014-07-23 4:18 GMT-07:00 mrm <ma...@skimlinks.com>:

> Hi,
>
> I figured out my problem so I wanted to share my findings. I was
> basically trying to broadcast an array with 4 million elements, and a
> size of approximately 150 MB. Every time I tried to broadcast, I got
> an OutOfMemory error. I fixed my problem by increasing the driver
> memory using:
> export SPARK_MEM="2g"
>
> Using SPARK_DAEMON_MEM or spark.executor.memory did not help in this
> case! I don't have a good understanding of all these settings and I
> have the feeling many people are in the same situation.
>
