Re: Container preempted by scheduler - Spark job error

2016-06-02 Thread Ted Yu
Not much information in the attachment.

There was a TimeoutException w.r.t. BlockManagerMaster.removeRdd().

Any chance of more logs?

Thanks
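If it is the removeRdd() timeout that keeps firing, one thing to try is raising the network timeouts. A hedged sketch for Spark 1.3.x (the exact property names and defaults should be verified against the 1.3.1 configuration docs before use):

```shell
# Assumed 1.3-era timeout knobs, values in seconds -- verify against
# your Spark version's configuration docs before relying on them
--conf "spark.core.connection.ack.wait.timeout=600" \
--conf "spark.akka.timeout=600" \
```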

On Thu, Jun 2, 2016 at 2:07 AM, Vishnu Nair  wrote:

> Hi Ted
>
> We use Hadoop 2.6 & Spark 1.3.1. I also attached the error file to this
> mail; please have a look at it.
>
> Thanks
>
> On Thu, Jun 2, 2016 at 11:51 AM, Ted Yu  wrote:
>
>> Can you show the error in a bit more detail?
>>
>> Which release of Hadoop / Spark are you using?
>>
>> Is the CapacityScheduler being used?
>>
>> Thanks
>>
>> On Thu, Jun 2, 2016 at 1:32 AM, Prabeesh K.  wrote:
>>
>>> Hi, I am using the command below to run a Spark job, and I get an
>>> error like "Container preempted by scheduler".
>>>
>>> I am not sure whether it is related to incorrect memory settings:
>>>
>>> nohup ~/spark1.3/bin/spark-submit \
>>>   --num-executors 50 \
>>>   --master yarn \
>>>   --deploy-mode cluster \
>>>   --queue adhoc \
>>>   --driver-memory 18G \
>>>   --executor-memory 12G \
>>>   --class main.ru..bigdata.externalchurn.Main \
>>>   --conf "spark.task.maxFailures=100" \
>>>   --conf "spark.yarn.max.executor.failures=1" \
>>>   --conf "spark.executor.cores=1" \
>>>   --conf "spark.akka.frameSize=50" \
>>>   --conf "spark.storage.memoryFraction=0.5" \
>>>   --conf "spark.driver.maxResultSize=10G" \
>>>   ~/external-flow/externalChurn-1.0-SNAPSHOT-shaded.jar \
>>>   prepareTraining=true \
>>>   prepareTrainingMNP=true \
>>>   prepareMap=false \
>>>   bouldozerMode=true \
>>>   &> ~/external-flow/run.log &
>>> echo "STARTED"
>>> tail -f ~/external-flow/run.log
>>>
>>> Thanks,
>>>
>>>
>>>
>>>
>>>
>>
>
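A note on the memory question above: "Container preempted by scheduler" usually means the adhoc queue went over its guaranteed capacity and YARN reclaimed containers for another queue. Each executor container is also larger than --executor-memory, because YARN adds a memory overhead on top of the heap (roughly max(384 MB, ~10% of executor memory); the exact factor differs across Spark versions, so treat the numbers below as a back-of-envelope sketch, not the 1.3.1 formula):

```python
# Back-of-envelope YARN container sizing for the submit command above.
# The 10% overhead factor is an assumption; Spark 1.x used a similar
# max(384 MB, factor * executor memory) rule -- check your version's docs.

def container_mb(heap_mb, overhead_factor=0.10, overhead_floor_mb=384):
    """Approximate YARN container size (MB) for one executor."""
    overhead = max(overhead_floor_mb, int(heap_mb * overhead_factor))
    return heap_mb + overhead

executor_heap_mb = 12 * 1024          # --executor-memory 12G
num_executors = 50                    # --num-executors 50

per_executor = container_mb(executor_heap_mb)
total_gb = per_executor * num_executors / 1024

print(per_executor)   # MB requested per executor container
print(round(total_gb))  # rough total demand on the adhoc queue, in GB
```

If that total exceeds what the adhoc queue is guaranteed, preemption of this size of job is expected whenever other queues get busy.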


Re: Container preempted by scheduler - Spark job error

2016-06-02 Thread Jacek Laskowski
Hi,

A few things for closer examination:

* Is the yarn master URL accepted in 1.3? I thought it came only in later
releases. Since you're seeing the issue, it seems it does work.

* I've never seen confs specified using a single string. Can you check in
the web UI that they're applied?

* What about the stray ".." in the --class name?
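On the single-string point: each setting has to reach spark-submit as its own --conf key=value argument; if several settings land inside one quoted string, spark-submit sees one broken property. A small sketch of building such an argument list programmatically (property names taken from the submit command above; the master/deploy-mode values are just placeholders):

```python
# Build a spark-submit argument list with one --conf per property.
# Bundling several properties into a single quoted string would make
# spark-submit treat them as one (invalid) key=value pair.

confs = {
    "spark.task.maxFailures": "100",
    "spark.executor.cores": "1",
    "spark.akka.frameSize": "50",
    "spark.storage.memoryFraction": "0.5",
    "spark.driver.maxResultSize": "10G",
}

args = ["spark-submit", "--master", "yarn", "--deploy-mode", "cluster"]
for key, value in confs.items():
    args += ["--conf", f"{key}={value}"]  # one --conf per property

print(args)
```

Each property then shows up individually under "Spark Properties" in the web UI's Environment tab, which is an easy way to confirm they were applied.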

Can you publish the application logs from the driver and executors?

Jacek
On 2 Jun 2016 10:33 a.m., "Prabeesh K."  wrote:

> Hi, I am using the command below to run a Spark job, and I get an error
> like "Container preempted by scheduler".
>
> I am not sure whether it is related to incorrect memory settings:
>
> nohup ~/spark1.3/bin/spark-submit \
>   --num-executors 50 \
>   --master yarn \
>   --deploy-mode cluster \
>   --queue adhoc \
>   --driver-memory 18G \
>   --executor-memory 12G \
>   --class main.ru..bigdata.externalchurn.Main \
>   --conf "spark.task.maxFailures=100" \
>   --conf "spark.yarn.max.executor.failures=1" \
>   --conf "spark.executor.cores=1" \
>   --conf "spark.akka.frameSize=50" \
>   --conf "spark.storage.memoryFraction=0.5" \
>   --conf "spark.driver.maxResultSize=10G" \
>   ~/external-flow/externalChurn-1.0-SNAPSHOT-shaded.jar \
>   prepareTraining=true \
>   prepareTrainingMNP=true \
>   prepareMap=false \
>   bouldozerMode=true \
>   &> ~/external-flow/run.log &
> echo "STARTED"
> tail -f ~/external-flow/run.log
>
> Thanks,
>
>
>
>
>

