Hi Ted,

We use Hadoop 2.6 and Spark 1.3.1. I have also attached the error file to
this mail; please have a look at it.
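
Regarding the CapacityScheduler question: a rough way to check which YARN
scheduler is configured and whether preemption is enabled (just a sketch,
assuming $HADOOP_CONF_DIR points at the cluster's active configuration):

# Which scheduler is the ResourceManager configured with?
# (CapacityScheduler is the default in Hadoop 2.x when this property is unset.)
grep -A 1 'yarn.resourcemanager.scheduler.class' "$HADOOP_CONF_DIR/yarn-site.xml"

# Is the preemption monitor enabled? Preemption only runs when this is true.
grep -A 1 'yarn.resourcemanager.scheduler.monitor.enable' "$HADOOP_CONF_DIR/yarn-site.xml"

# Capacity / max-capacity settings for the queue the job is submitted to
# ("adhoc" here; the exact queue path in the file is an assumption).
grep -A 1 'adhoc' "$HADOOP_CONF_DIR/capacity-scheduler.xml"

Preemption only kicks in when the scheduler monitor is enabled, so that
second property is probably the most relevant one.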

Thanks

On Thu, Jun 2, 2016 at 11:51 AM, Ted Yu <yuzhih...@gmail.com> wrote:

> Can you show the error in a bit more detail?
>
> Which release of Hadoop / Spark are you using?
>
> Is CapacityScheduler being used?
>
> Thanks
>
> On Thu, Jun 2, 2016 at 1:32 AM, Prabeesh K. <prabsma...@gmail.com> wrote:
>
>> Hi, I am using the command below to run a Spark job, and I get an error
>> like "Container preempted by scheduler".
>>
>> I am not sure if it is related to incorrect memory settings:
>>
>> nohup ~/spark1.3/bin/spark-submit \
>>   --num-executors 50 \
>>   --master yarn \
>>   --deploy-mode cluster \
>>   --queue adhoc \
>>   --driver-memory 18G \
>>   --executor-memory 12G \
>>   --class main.ru.<custom>.bigdata.externalchurn.Main \
>>   --conf "spark.task.maxFailures=100" \
>>   --conf "spark.yarn.max.executor.failures=10000" \
>>   --conf "spark.executor.cores=1" \
>>   --conf "spark.akka.frameSize=50" \
>>   --conf "spark.storage.memoryFraction=0.5" \
>>   --conf "spark.driver.maxResultSize=10G" \
>>   ~/external-flow/externalChurn-1.0-SNAPSHOT-shaded.jar \
>>   prepareTraining=true \
>>   prepareTrainingMNP=true \
>>   prepareMap=false \
>>   bouldozerMode=true \
>>   &> ~/external-flow/run.log &
>> echo "STARTED"
>> tail -f ~/external-flow/run.log
>>
>> Thanks,
>>
>
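
On the request above for more detail on the error: as far as I understand,
"Container preempted by scheduler" means YARN reclaimed the containers to
rebalance queue capacity, rather than an executor running out of memory. If
the attached file is not detailed enough, the full aggregated container logs
can be pulled after the run, roughly like this (the application id is a
placeholder, and log aggregation must be enabled on the cluster):

# Find the application id of the failed run
yarn application -list -appStates FINISHED,FAILED,KILLED

# Dump the aggregated container logs; the scheduler's preemption messages
# should appear here (example output path, adjust as needed)
yarn logs -applicationId <applicationId> > ~/external-flow/yarn-app.log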

Attachment: spark-error
Description: Binary data

