Hi all,

I'm running some Spark jobs in Java on top of YARN by submitting one
application jar that starts multiple jobs.
My question is: if I set some resource configurations, either when
submitting the app or in spark-defaults.conf, do these configs apply to
each job or to the entire application?
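In spark-defaults.conf, the settings I have in mind look like this
(property names taken from the Spark configuration docs, assuming I
have mapped them right):

    spark.executor.instances   3
    spark.executor.memory      5g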

For example, if I launch it with:

spark-submit --class org.some.className \
    --master yarn-client \
    --num-executors 3 \
    --executor-memory 5g \
    someJar.jar

would the 3 executors x 5g of memory be allocated to each job, or would
all jobs share those resources?
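
To make the question concrete, here is a stripped-down sketch of what
the application does (the class name and the job bodies are made up for
illustration; by "job" I mean each action submitted through the single
SparkContext):

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaSparkContext;

public class MultiJobApp {
    public static void main(String[] args) {
        // The resource flags from spark-submit (or the entries in
        // spark-defaults.conf) end up in this SparkConf.
        SparkConf conf = new SparkConf().setAppName("multi-job-app");

        // A single SparkContext for the whole application; all the
        // jobs below are submitted through it.
        JavaSparkContext sc = new JavaSparkContext(conf);

        // Each action triggers a separate Spark job.
        long first = sc.parallelize(Arrays.asList(1, 2, 3)).count();
        long second = sc.parallelize(Arrays.asList(4, 5, 6)).count();

        sc.stop();
    }
}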

Thank you!
Nisrina
