Hi,

I'm trying to run a program on a cluster using YARN.

YARN and Hadoop are both installed on the cluster.

The problem I'm running into is the following:

> Container exited with a non-zero exit code 13
> Failing this attempt. Failing the application.
>      ApplicationMaster host: N/A
>      ApplicationMaster RPC port: -1
>      queue: default
>      start time: 1528297574594
>      final status: FAILED
>      tracking URL:
> http://MasterNode:8088/cluster/app/application_1528296308262_0004
>      user: bblite
> Exception in thread "main" org.apache.spark.SparkException: Application
> application_1528296308262_0004 finished with failed status
>

I searched online, and most of the Stack Overflow threads say the error comes
from a conflict: the users had set *.master('local[*]')* in the code when
building the Spark session while also passing *--master yarn* to spark-submit,
and the two settings clash.
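
For reference, the conflicting pattern those answers describe would look
something like this (just an illustration of what they mean, not my code):

# Hypothetical example of the conflict described on Stack Overflow:
# hard-coding a local master here clashes with --master yarn on spark-submit.
from pyspark.sql import SparkSession

spark = SparkSession\
        .builder\
        .master('local[*]')\
        .appName("Conflicting_Example")\
        .getOrCreate()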

But in my case, I haven't set any master in the code at all. I'm just trying
to run it on YARN by passing *--master yarn* to spark-submit. Below is how the
Spark session is created:

from pyspark.sql import SparkSession

# No master is set here; it should come from spark-submit
spark = SparkSession\
        .builder\
        .appName("Temp_Prog")\
        .getOrCreate()
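
If it's relevant, I assume the effective master could be checked right after
the session is created, with something like this (just a sanity-check idea,
not part of my current code):

# Hypothetical sanity check: print the master the SparkContext resolved to
# (in cluster mode this would show up in the driver's YARN logs).
print(spark.sparkContext.master)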

Below is the spark-submit command:

spark-submit --master yarn --deploy-mode cluster --num-executors 3 \
    --executor-cores 6 --executor-memory 4G \
    /appdata/codebase/backend/feature_extraction/try_yarn.py

I've tried without --deploy-mode too, but it made no difference.

Thanks,
Aakash.
