Thank you, Andrew,

I am using Spark 0.9.1 and tried your approach like this:

bin/spark-shell --driver-java-options
"-Dspark.executor.memory=$MEMORY_PER_EXECUTOR"

I get

bad option: '--driver-java-options'

There must be something different in my setup. Any ideas?
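
In case it helps, the next thing I may try (just a guess on my part, assuming
0.9.1 still honours the SPARK_JAVA_OPTS environment variable) is passing the
property through the environment before launching the shell:

# untested guess: relies on spark-shell picking up SPARK_JAVA_OPTS in 0.9.1
export SPARK_JAVA_OPTS="-Dspark.executor.memory=$MEMORY_PER_EXECUTOR"
bin/spark-shell --master $MASTER

but I have not confirmed that this actually reaches the executors.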

Thank you again,
Oleg

On 5 June 2014 22:28, Andrew Ash <and...@andrewash.com> wrote:

> Hi Oleg,
>
> I set the size of my executors on a standalone cluster when using the
> shell like this:
>
> ./bin/spark-shell --master $MASTER --total-executor-cores
> $CORES_ACROSS_CLUSTER --driver-java-options
> "-Dspark.executor.memory=$MEMORY_PER_EXECUTOR"
>
> It doesn't seem particularly clean, but it works.
>
> Andrew
>
>
> On Thu, Jun 5, 2014 at 2:15 PM, Oleg Proudnikov <oleg.proudni...@gmail.com
> > wrote:
>
>> Hi All,
>>
>> Please help me set the executor JVM memory size. I am using the Spark shell,
>> and it appears that the executors are started with a predefined JVM heap of
>> 512m as soon as the shell starts. How can I change this setting? I tried
>> setting SPARK_EXECUTOR_MEMORY before launching the Spark shell:
>>
>> export SPARK_EXECUTOR_MEMORY=1g
>>
>> I also tried several other approaches:
>>
>> 1) setting SPARK_WORKER_MEMORY in conf/spark-env.sh on the worker
>> 2) passing it as the -m argument and running bin/start-slave.sh 1 -m 1g on
>> the worker
>>
>> Thank you,
>> Oleg
>>
>>
>


-- 
Kind regards,

Oleg
