Oh, my apologies, that was for 1.0.

For Spark 0.9 I did it like this:

MASTER=spark://mymaster:7077 SPARK_MEM=8g ./bin/spark-shell -c $CORES_ACROSS_CLUSTER

The downside of this, though, is that SPARK_MEM also sets the driver's JVM heap
to 8g rather than just the executors'.  I think that is why SPARK_MEM was
deprecated.  See https://github.com/apache/spark/pull/99
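
For reference, one way to do this after the deprecation (just a rough sketch,
assuming Spark 1.x; the master URL and the 8g figure are carried over from the
example above, and the app name is made up) is to set spark.executor.memory on
a SparkConf, which leaves the driver heap alone:

import org.apache.spark.{SparkConf, SparkContext}

// spark.executor.memory sizes the executor JVMs only; the driver heap is
// configured separately (e.g. spark.driver.memory or the launch scripts).
val conf = new SparkConf()
  .setMaster("spark://mymaster:7077")     // standalone master from the example above
  .setAppName("executor-memory-example")  // hypothetical app name
  .set("spark.executor.memory", "8g")
val sc = new SparkContext(conf)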


On Thu, Jun 5, 2014 at 2:37 PM, Oleg Proudnikov <oleg.proudni...@gmail.com>
wrote:

> Thank you, Andrew,
>
> I am using Spark 0.9.1 and tried your approach like this:
>
> bin/spark-shell --driver-java-options
> "-Dspark.executor.memory=$MEMORY_PER_EXECUTOR"
>
> I get
>
> bad option: '--driver-java-options'
>
> There must be something different in my setup. Any ideas?
>
> Thank you again,
> Oleg
>
>
>
>
>
> On 5 June 2014 22:28, Andrew Ash <and...@andrewash.com> wrote:
>
>> Hi Oleg,
>>
>> I set the size of my executors on a standalone cluster when using the
>> shell like this:
>>
>> ./bin/spark-shell --master $MASTER --total-executor-cores
>> $CORES_ACROSS_CLUSTER --driver-java-options
>> "-Dspark.executor.memory=$MEMORY_PER_EXECUTOR"
>>
>> It doesn't seem particularly clean, but it works.
>>
>> Andrew
>>
>>
>> On Thu, Jun 5, 2014 at 2:15 PM, Oleg Proudnikov <
>> oleg.proudni...@gmail.com> wrote:
>>
>>> Hi All,
>>>
>>> Please help me set the executor JVM memory size. I am using Spark shell and
>>> it appears that the executors are started with a predefined JVM heap of
>>> 512m as soon as Spark shell starts. How can I change this setting? I tried
>>> setting SPARK_EXECUTOR_MEMORY before launching Spark shell:
>>>
>>> export SPARK_EXECUTOR_MEMORY=1g
>>>
>>> I also tried several other approaches:
>>>
>>> 1) setting SPARK_WORKER_MEMORY in conf/spark-env.sh on the worker
>>> 2) passing it as the -m argument and running bin/start-slave.sh 1 -m 1g on
>>> the worker
>>>
>>> Thank you,
>>> Oleg
>>>
>>>
>>
>
>
> --
> Kind regards,
>
> Oleg
>
>
