Hi Shridhar
Can you check in the Spark web UI whether the number of cores shown per worker
matches what you configured? It appears under the "Cores" column.
HTH
Dr Mich Talebzadeh
LinkedIn:
https://www.linkedin.com/profile/view?id=AAEWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
Ted, Mich,
Thanks for your replies. I ended up using sparkConf.set() and accepting the
number of cores as a parameter. But I am still not sure why spark-submit's
--executor-cores or --driver-cores options did not work; setting the cores
inside the main method feels a bit cumbersome.
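For reference, a minimal sketch of that workaround (the argument handling,
object name and app name below are assumptions, not the actual code):

import org.apache.spark.{SparkConf, SparkContext}

object ImportStat {
  def main(args: Array[String]): Unit = {
    // take the core cap from the first program argument (assumed convention)
    val maxCores = args.headOption.getOrElse("1")

    // the cap has to be on the SparkConf before the SparkContext is created;
    // the master URL is supplied by spark-submit in this scenario
    val conf = new SparkConf()
      .setAppName("ImportStat")
      .set("spark.cores.max", maxCores)

    val sc = new SparkContext(conf)
    // ... job logic ...
    sc.stop()
  }
}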
Thanks again,
Shridhar
On Wed, Mar
Hi Ted
Can one specify the cores as follows, for example 12 cores?
val conf = new SparkConf().
  setAppName("ImportStat").
  setMaster("local[12]").
  set("spark.driver.allowMultipleContexts", "true").
  set("spark.hadoop.validateOutputSpecs", "false")
val sc = new SparkContext(conf)
-c CORES, --cores CORES Total CPU cores to allow Spark applications to use
on the machine (default: all available); only on worker
> sc.getConf().set()
I think you should use this pattern (shown in
https://spark.apache.org/docs/latest/spark-standalone.html):
val conf = new SparkConf()
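Spelled out a bit more (the master URL below is a placeholder, not taken from
this thread), the key point is that the conf must be complete before the
SparkContext is constructed:

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf()
  .setMaster("spark://master:7077")  // placeholder standalone master URL
  .setAppName("ImportStat")
  .set("spark.cores.max", "1")       // cap the total cores the application may take
val sc = new SparkContext(conf)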
Hi all,
While submitting a Spark job I am specifying the options --executor-cores 1
and --driver-cores 1. However, when the job was submitted, it used all
available cores. So I tried to limit the cores within my main function with
sc.getConf().set("spark.cores.max", "1"); however, it still used all available cores.
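For what it's worth: if this is the standalone cluster manager, an application
takes all available cores by default unless spark.cores.max (or
--total-executor-cores on spark-submit) caps it, and SparkContext.getConf
returns a copy of the configuration, so mutating it after the context exists
has no effect. A sketch of the pattern that does not behave as hoped (app name
assumed; master URL supplied by spark-submit):

import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setAppName("ImportStat"))

// getConf hands back a clone, so this setting never reaches the scheduler
sc.getConf.set("spark.cores.max", "1")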