How many nodes do you have: 2040 nodes, 171.42 TB in total.
How much space is allocated to each node for YARN: 14 GB max for each
container; anything beyond that causes a failure.
How big are the executors you're requesting: I am requesting 9973 executors,
14 GB each with 1 core.
And what else is running on the cluster: there are 1000s of other YARN
applications running on the cluster. I am submitting to the above queue.
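
One thing I am not sure about: with --executor-memory 14g, Spark on YARN also
requests a memory overhead on top of the executor heap, so the actual container
request may already be above the 14 GB cap. A rough check, assuming the cap is
yarn.scheduler.maximum-allocation-mb=14336 and the default
spark.yarn.executor.memoryOverhead is max(384 MB, ~7% of executor memory); I
have not verified either assumption on this cluster:

EXECUTOR_MEMORY_MB=14336                          # --executor-memory 14g
OVERHEAD_MB=$(( EXECUTOR_MEMORY_MB * 7 / 100 ))   # assumed default overhead, ~1003 MB
echo $(( EXECUTOR_MEMORY_MB + OVERHEAD_MB ))      # ~15339 MB per container, over a 14336 MB cap

If those assumptions hold, something like --executor-memory 12g (an
illustrative value, not something I have tested) would keep each container
request under the cap.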

On Fri, Jun 26, 2015 at 3:49 PM, Sandy Ryza <sandy.r...@cloudera.com> wrote:

> The scheduler configurations are helpful as well, but not useful without
> the information outlined above.
>
> -Sandy
>
> On Fri, Jun 26, 2015 at 10:34 AM, ÐΞ€ρ@Ҝ (๏̯͡๏) <deepuj...@gmail.com>
> wrote:
>
>> These are my YARN queue configurations
>>
>> Queue State: RUNNING
>> Used Capacity: 206.7%
>> Absolute Used Capacity: 3.1%
>> Absolute Capacity: 1.5%
>> Absolute Max Capacity: 10.0%
>> Used Resources: <memory:5578496, vCores:390>
>> Num Schedulable Applications: 7
>> Num Non-Schedulable Applications: 0
>> Num Containers: 390
>> Max Applications: 45
>> Max Applications Per User: 27
>> Max Schedulable Applications: 1278
>> Max Schedulable Applications Per User: 116
>> Configured Capacity: 1.5%
>> Configured Max Capacity: 10.0%
>> Configured Minimum User Limit Percent: 30%
>> Configured User Limit Factor: 2.0
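>>
>> A rough back-of-the-envelope reading of the numbers above (this assumes the
>> percentages are measured against roughly 171.42 TB of total YARN memory,
>> i.e. about 179746898 MB, and that each container is the ~14 GB size we
>> request; treat it as an estimate only):
>>
>> echo $(( 5578496 / 390 ))            # ~14303 MB per currently running container
>> echo $(( 179746898 / 10 / 14336 ))   # ~1253 containers if the entire 10% absolute max were free
>>
>> So even at the queue's 10.0% absolute max capacity, only on the order of
>> 1200-1300 executors of this size could fit, well short of 9973.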
>> Executors:
>> ./bin/spark-submit -v --master yarn-cluster \
>>   --driver-class-path /apache/hadoop/share/hadoop/common/hadoop-common-2.4.1-EBAY-2.jar:/apache/hadoop/lib/hadoop-lzo-0.6.0.jar:/apache/hadoop-2.4.1-2.1.3.0-2-EBAY/share/hadoop/yarn/lib/guava-11.0.2.jar:/apache/hadoop-2.4.1-2.1.3.0-2-EBAY/share/hadoop/hdfs/hadoop-hdfs-2.4.1-EBAY-2.jar \
>>   --jars /apache/hadoop-2.4.1-2.1.3.0-2-EBAY/share/hadoop/hdfs/hadoop-hdfs-2.4.1-EBAY-2.jar,/home/dvasthimal/spark1.3/1.3.1.lib/spark_reporting_dep_only-1.0-SNAPSHOT-jar-with-dependencies.jar \
>>   --num-executors 9973 \
>>   --driver-memory 14g \
>>   --driver-java-options "-XX:MaxPermSize=512M -Xmx4096M -Xms4096M -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps" \
>>   --executor-memory 14g \
>>   --executor-cores 1 \
>>   --queue hdmi-others \
>>   --class com.ebay.ep.poc.spark.reporting.SparkApp \
>>   /home/dvasthimal/spark1.3/1.3.1.lib/spark_reporting-1.0-SNAPSHOT.jar \
>>   startDate=2015-06-20 endDate=2015-06-21 \
>>   input=/apps/hdmi-prod/b_um/epdatasets/exptsession subcommand=viewItem \
>>   output=/user/dvasthimal/epdatasets/viewItem buffersize=128 maxbuffersize=1068 maxResultSize=200G
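>>
>> As a further rough check relating this submit to the queue settings above
>> (this assumes the CapacityScheduler limits a single user to user-limit-factor
>> times the queue's configured capacity, i.e. 2.0 * 1.5% = 3% of the cluster,
>> and again uses ~171.42 TB total and ~14 GB containers; it is an estimate,
>> not a verified reading of this cluster):
>>
>> echo $(( 179746898 * 3 / 100 / 14336 ))   # ~376 containers for a single user at 3% of the cluster
>>
>> That is in the same ballpark as the 390 containers on the queue page and the
>> 200-300 executors the Spark UI shows.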
>>
>>
>>
>>
>> On Thu, Jun 25, 2015 at 4:52 PM, Sandy Ryza <sandy.r...@cloudera.com>
>> wrote:
>>
>>> How many nodes do you have, how much space is allocated to each node for
>>> YARN, how big are the executors you're requesting, and what else is running
>>> on the cluster?
>>>
>>> On Thu, Jun 25, 2015 at 3:57 PM, ÐΞ€ρ@Ҝ (๏̯͡๏) <deepuj...@gmail.com>
>>> wrote:
>>>
>>>> I run my Spark app on Spark 1.3.1 over YARN.
>>>>
>>>> When I request --num-executors 9973, the executor count I see in the
>>>> Environment tab of the Spark UI is only between 200 and 300.
>>>>
>>>> What is incorrect here?
>>>>
>>>> --
>>>> Deepak
>>>>
>>>>
>>>
>>
>>
>> --
>> Deepak
>>
>>
>


-- 
Deepak
