The scripts for Spark 1.0 actually specify this property in
/root/spark/conf/spark-defaults.conf.

I didn't know that it would override the --executor-memory flag, though;
that's pretty odd.
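
For reference, a minimal sketch of what that entry typically looks like in the
generated file (the actual value depends on the instance type the launch script
detects; 512m below is only illustrative):

    # /root/spark/conf/spark-defaults.conf (illustrative sketch, not the exact generated file)
    spark.executor.memory   512m

If that entry is what's shadowing --executor-memory, editing or removing it on the
master and restarting the application should let the flag take effect; alternatively
the property can be set explicitly in the application, e.g.
new SparkConf().set("spark.executor.memory", "2g"), before the SparkContext is created.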


On Thu, Jun 12, 2014 at 6:02 PM, Aliaksei Litouka <aliaksei.lito...@gmail.com> wrote:

> Yes, I am launching a cluster with the spark_ec2 script. I checked
> /root/spark/conf/spark-env.sh on the master node and on slaves and it looks
> like this:
>
> #!/usr/bin/env bash
>> export SPARK_LOCAL_DIRS="/mnt/spark"
>> # Standalone cluster options
>> export SPARK_MASTER_OPTS=""
>> export SPARK_WORKER_INSTANCES=1
>> export SPARK_WORKER_CORES=1
>> export HADOOP_HOME="/root/ephemeral-hdfs"
>> export SPARK_MASTER_IP=ec2-54-89-95-238.compute-1.amazonaws.com
>> export MASTER=`cat /root/spark-ec2/cluster-url`
>> export SPARK_SUBMIT_LIBRARY_PATH="$SPARK_SUBMIT_LIBRARY_PATH:/root/ephemeral-hdfs/lib/native/"
>> export SPARK_SUBMIT_CLASSPATH="$SPARK_CLASSPATH:$SPARK_SUBMIT_CLASSPATH:/root/ephemeral-hdfs/conf"
>> # Bind Spark's web UIs to this machine's public EC2 hostname:
>> export SPARK_PUBLIC_DNS=`wget -q -O - http://169.254.169.254/latest/meta-data/public-hostname`
>> # Set a high ulimit for large shuffles
>> ulimit -n 1000000
>
>
> None of these variables seem to be related to memory size. Let me know if
> I am missing something.
>
>
> On Thu, Jun 12, 2014 at 7:17 PM, Matei Zaharia <matei.zaha...@gmail.com>
> wrote:
>
>> Are you launching this using our EC2 scripts? Or have you set up a
>> cluster by hand?
>>
>> Matei
>>
>> On Jun 12, 2014, at 2:32 PM, Aliaksei Litouka <aliaksei.lito...@gmail.com>
>> wrote:
>>
>> spark-env.sh doesn't seem to contain any settings related to memory size
>> :( I will continue searching for a solution and will post it here if I find one :)
>> Thank you anyway.
>>
>>
>> On Wed, Jun 11, 2014 at 12:19 AM, Matei Zaharia <matei.zaha...@gmail.com>
>> wrote:
>>
>>> It might be that conf/spark-env.sh on EC2 is configured to set it to
>>> 512 MB and is overriding the application's settings. Take a look in there
>>> and delete that line if possible.
>>>
>>> Matei
>>>
>>> On Jun 10, 2014, at 2:38 PM, Aliaksei Litouka <aliaksei.lito...@gmail.com> wrote:
>>>
>>> I am testing my application in an EC2 cluster of m3.medium machines. By
>>> default, only 512 MB of memory on each machine is used. I want to increase
>>> this amount, and I'm trying to do it by passing the --executor-memory 2G
>>> option to the spark-submit script, but it doesn't seem to work - each
>>> machine still uses only 512 MB instead of 2 GB. What am I doing wrong?
>>> How do I increase the amount of memory?
>>>
>>>
>>
>>
>
