Re: Setting SPARK_MEM higher than available memory in driver

2014-03-27 Thread Tsai Li Ming
Thanks. Got it working.

On 28 Mar, 2014, at 2:02 pm, Aaron Davidson wrote:

> Assuming you're using a new enough version of Spark, you should use 
> spark.executor.memory to set the memory for your executors, without changing 
> the driver memory. See the docs for your version of Spark.
> 
> 
> On Thu, Mar 27, 2014 at 10:48 PM, Tsai Li Ming wrote:
> Hi,
> 
> My worker nodes have more memory than the host I'm submitting my driver 
> program from, but it seems that SPARK_MEM is also setting the -Xmx of the spark 
> shell?
> 
> $ SPARK_MEM=100g MASTER=spark://XXX:7077 bin/spark-shell
> 
> Java HotSpot(TM) 64-Bit Server VM warning: INFO: 
> os::commit_memory(0x7f736e13, 205634994176, 0) failed; error='Cannot 
> allocate memory' (errno=12)
> #
> # There is insufficient memory for the Java Runtime Environment to continue.
> # Native memory allocation (malloc) failed to allocate 205634994176 bytes for 
> committing reserved memory.
> 
> I want to allocate at least 100GB of memory per executor. The allocated 
> memory on the executor seems to depend on the -Xmx heap size of the driver?
> 
> Thanks!



Re: Setting SPARK_MEM higher than available memory in driver

2014-03-27 Thread Aaron Davidson
Assuming you're using a new enough version of Spark, you should use
spark.executor.memory to set the memory for your executors, without
changing the driver memory. See the docs for your version of Spark.
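
For the standalone cluster in this thread, one way to do that when launching
spark-shell (a sketch only, assuming a Spark 0.9.x-era setup; spark://XXX:7077
is the placeholder master URL from the original message) is to pass
spark.executor.memory as a Java system property via SPARK_JAVA_OPTS instead of
setting SPARK_MEM:

$ SPARK_JAVA_OPTS="-Dspark.executor.memory=100g" MASTER=spark://XXX:7077 bin/spark-shell

The shell's own JVM keeps its default heap, while each executor on the workers
is asked for 100g.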


On Thu, Mar 27, 2014 at 10:48 PM, Tsai Li Ming wrote:

> Hi,
>
> My worker nodes have more memory than the host I'm submitting my
> driver program from, but it seems that SPARK_MEM is also setting the -Xmx of the
> spark shell?
>
> $ SPARK_MEM=100g MASTER=spark://XXX:7077 bin/spark-shell
>
> Java HotSpot(TM) 64-Bit Server VM warning: INFO:
> os::commit_memory(0x7f736e13, 205634994176, 0) failed;
> error='Cannot allocate memory' (errno=12)
> #
> # There is insufficient memory for the Java Runtime Environment to
> continue.
> # Native memory allocation (malloc) failed to allocate 205634994176 bytes
> for committing reserved memory.
>
> I want to allocate at least 100GB of memory per executor. The allocated
> memory on the executor seems to depend on the -Xmx heap size of the driver?
>
> Thanks!


Setting SPARK_MEM higher than available memory in driver

2014-03-27 Thread Tsai Li Ming
Hi,

My worker nodes have more memory than the host I'm submitting my driver 
program from, but it seems that SPARK_MEM is also setting the -Xmx of the spark shell?

$ SPARK_MEM=100g MASTER=spark://XXX:7077 bin/spark-shell

Java HotSpot(TM) 64-Bit Server VM warning: INFO: 
os::commit_memory(0x7f736e13, 205634994176, 0) failed; error='Cannot 
allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 205634994176 bytes for 
committing reserved memory.

I want to allocate at least 100GB of memory per executor. The allocated memory 
on the executor seems to depend on the -Xmx heap size of the driver?

Thanks!
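
For a standalone driver program (rather than spark-shell), the same setting can
go on the SparkConf directly. A minimal sketch, assuming a Spark 0.9-style API;
the master URL is the thread's placeholder and the app name is invented for the
example:

import org.apache.spark.{SparkConf, SparkContext}

// Ask for 100g per executor; the driver JVM keeps whatever -Xmx it was
// launched with, so SPARK_MEM is not needed on the submitting host.
val conf = new SparkConf()
  .setMaster("spark://XXX:7077")         // placeholder master from the thread
  .setAppName("ExecutorMemoryExample")   // hypothetical app name
  .set("spark.executor.memory", "100g")
val sc = new SparkContext(conf)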