Try running spark-shell with an explicit number of executors.

E.g. for a 10-node cluster of r3.2xlarge instances (61 GB RAM, 8 cores
each) you could use the following, i.e. two executors per node, each with
24g of memory and 4 cores, leaving headroom for the driver, the OS, and
per-executor memory overhead:

spark-shell \
    --num-executors 20 \
    --driver-memory 2g \
    --executor-memory 24g \
    --executor-cores 4


You might also want to set spark.yarn.executor.memoryOverhead to 2662 (the
value is in MB).
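
For example, it could be passed as an extra --conf flag on the same
invocation (this sketch just repeats the options above; the property
applies when running Spark on YARN, as on EMR):

spark-shell \
    --num-executors 20 \
    --driver-memory 2g \
    --executor-memory 24g \
    --executor-cores 4 \
    --conf spark.yarn.executor.memoryOverhead=2662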

On Tue, Nov 24, 2015 at 2:07 AM, Dinesh Ranganathan <dineshranganat...@gmail.com> wrote:

> Thanks Christopher, I will try that.
>
> Dan
>
> On 20 November 2015 at 21:41, Bozeman, Christopher <bozem...@amazon.com> wrote:
>
>> Dan,
>>
>>
>>
>> Even though you may be adding more nodes to the cluster, the Spark
>> application has to request additional executors in order to use the
>> added resources. Alternatively, the Spark application can use Dynamic
>> Resource Allocation (
>> http://spark.apache.org/docs/latest/job-scheduling.html#dynamic-resource-allocation),
>> which acquires and releases executors based on application need and
>> resource availability. For example, in EMR release 4.x (
>> http://docs.aws.amazon.com/ElasticMapReduce/latest/ReleaseGuide/emr-spark-configure.html#spark-dynamic-allocation)
>> you can request Spark Dynamic Resource Allocation as the default
>> configuration at cluster creation.
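>>
>> As a minimal sketch (these are the standard Spark properties involved,
>> not EMR-specific defaults), dynamic allocation can also be enabled per
>> application, assuming the YARN shuffle service is running on the nodes;
>> your-app.jar below is a placeholder:
>>
>> spark-submit \
>>     --conf spark.dynamicAllocation.enabled=true \
>>     --conf spark.shuffle.service.enabled=true \
>>     --conf spark.dynamicAllocation.minExecutors=2 \
>>     --conf spark.dynamicAllocation.maxExecutors=20 \
>>     your-app.jar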
>>
>>
>>
>> Best regards,
>>
>> Christopher
>>
>>
>>
>>
>>
>> *From:* Dinesh Ranganathan [mailto:dineshranganat...@gmail.com]
>> *Sent:* Monday, November 16, 2015 4:57 AM
>> *To:* Sabarish Sasidharan
>> *Cc:* user
>> *Subject:* Re: Spark Expand Cluster
>>
>>
>>
>> Hi Sab,
>>
>>
>>
>> I did not specify the number of executors when I submitted the Spark
>> application. I was under the impression that Spark looks at the cluster
>> and automatically figures out the number of executors it can use based
>> on the cluster size; is this what you call dynamic allocation? I am a
>> Spark newbie, so apologies if I am missing the obvious. While the
>> application was running I added more core nodes by resizing my EMR
>> cluster, and I can see the new nodes in the resource manager, but my
>> running application did not pick up the machines I just added. Let me
>> know if I am missing a step here.
>>
>>
>>
>> Thanks,
>>
>> Dan
>>
>>
>>
>> On 16 November 2015 at 12:38, Sabarish Sasidharan <sabarish.sasidha...@manthan.com> wrote:
>>
>> Spark will use the number of executors you specify in spark-submit. Are
>> you saying that Spark is not able to use more executors after you change
>> that number in spark-submit? Are you using dynamic allocation?
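>>
>> For example (a sketch; the jar name is a placeholder), the executor
>> count is set explicitly like this:
>>
>> spark-submit \
>>     --num-executors 40 \
>>     --executor-memory 24g \
>>     --executor-cores 4 \
>>     your-app.jar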
>>
>>
>>
>> Regards
>>
>> Sab
>>
>>
>>
>> On Mon, Nov 16, 2015 at 5:54 PM, dineshranganathan <dineshranganat...@gmail.com> wrote:
>>
>> I have my Spark application deployed on AWS EMR in YARN cluster mode.
>> When I increase the capacity of my cluster by adding more core instances
>> on AWS, I don't see Spark picking up the new instances dynamically. Is
>> there anything I can do to tell Spark to pick up the newly added boxes?
>>
>> Dan
>>
>>
>>
>>
>>
>>
>>
>>
>> --
>>
>>
>>
>> Architect - Big Data
>>
>> Ph: +91 99805 99458
>>
>>
>>
>> Manthan Systems | *Company of the year - Analytics (2014 Frost and
>> Sullivan India ICT)*
>>
>> +++
>>
>>
>>
>>
>>
>> --
>>
>> Dinesh Ranganathan
>>
>
>
>
> --
> Dinesh Ranganathan
>
