Hi,
In my experiment I pin a very important process to a fixed CPU, so Spark task execution slows down whenever the executors or the worker use that CPU. I am wondering whether it is possible to keep the Spark executors off a particular CPU.
I tried 'taskset -p [cpumask]' on the running worker, but the executors still ended up using that CPU.
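What I ran was roughly the following (a sketch; the 0xffe mask and <worker-pid> are illustrative, assuming a 12-CPU machine where CPU 0 is reserved):

    # Retroactively restrict an already-running process to CPUs 1-11
    taskset -p 0xffe <worker-pid>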
Hi Xiaoye,
could it be that the executors were spawned before the affinity was set on the worker? Would it help to start the Spark worker with taskset from the beginning, i.e. "taskset [mask] start-slave.sh"?
Workers in Spark (standalone mode) simply spawn executor processes with the standard Java process API, and on Linux a child process inherits the CPU affinity of its parent.
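For example, something like this on each worker machine should work (a sketch; the 0xffe mask is an assumption for a 12-CPU box with CPU 0 reserved, <master-host> is a placeholder, and 7077 is Spark's default master port):

    # Start the worker restricted to CPUs 1-11; every executor this worker
    # forks inherits the same affinity mask.
    taskset 0xffe "${SPARK_HOME}/sbin/start-slave.sh" spark://<master-host>:7077

    # Verify the affinity of the worker (and later of its executors):
    taskset -p <worker-pid>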
Hi Jakob,
Yes, you are right. I should use taskset when I start the *.sh scripts.
In more detail, I changed the last line of ./sbin/start-slaves.sh on the master to this:

    "${SPARK_HOME}/sbin/slaves.sh" cd "${SPARK_HOME}" \; "taskset" "0xffe" "${SPARK_HOME}/sbin/start-slave.sh" "spark://$SPARK_MASTER