I have a job, running on YARN, that uses multithreading inside a
mapPartitions transformation, roughly as sketched below.
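
To make the setup concrete, this is approximately what the partition-level
code looks like (a simplified sketch: process(), the pool size of 20, and
the partition count are placeholders, and sc is an existing SparkContext):

    import java.util.concurrent.Executors
    import scala.concurrent.{Await, ExecutionContext, Future}
    import scala.concurrent.duration.Duration

    // Stand-in for the real per-record work
    def process(x: Int): Int = { Thread.sleep(10); x * 2 }

    val rdd = sc.parallelize(1 to 1000, 4)  // deliberately few partitions
    val result = rdd.mapPartitions { iter =>
      // Fan records out to a local thread pool so one task can use many cores
      val pool = Executors.newFixedThreadPool(20)
      implicit val ec: ExecutionContext = ExecutionContext.fromExecutorService(pool)
      val futures = iter.map(x => Future(process(x))).toList  // toList forces submission
      val out = futures.map(f => Await.result(f, Duration.Inf))
      pool.shutdown()
      out.iterator
    }.collect()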

Ideally I would like to have a small number of partitions but a high
number of YARN vcores allocated to each task (which I can take advantage
of because of the multithreading).

Is this possible?

I tried running with: --executor-cores 1 --conf
spark.yarn.executor.cores=20
But it seems spark.yarn.executor.cores gets ignored.
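
The full launch command is along these lines (the class name and jar are
placeholders):

    spark-submit \
      --master yarn \
      --executor-cores 1 \
      --conf spark.yarn.executor.cores=20 \
      --class com.example.MyJob \
      myjob.jar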
