Hi Xiaoye,
Could it be that the executors were spawned before the affinity was
set on the worker? Would it help to start the Spark worker with
taskset from the beginning, i.e. "taskset [mask] start-slave.sh"?
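For example (untested, and assuming the process you pinned lives on
CPU 0 of an 8-core box), something like:

  # run the worker on CPUs 1-7 only; mask 0xfe = binary 11111110
  # (bit 0 clear, so CPU 0 stays free for your pinned process)
  taskset 0xfe $SPARK_HOME/sbin/start-slave.sh spark://<master-host>:7077

where the master URL is just a placeholder for whatever you normally
pass to start-slave.sh.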
Workers in Spark (standalone mode) simply create processes with the
standard Java process API. Unless there is something funky going on in
the JRE, I don't see how Spark could affect CPU affinity.
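On Linux a child process inherits its parent's affinity mask across
fork/exec, which you can check from a shell:

  # the child shell reports the mask it inherited from taskset
  taskset 0x0e bash -c 'taskset -p $$'
  # prints something like: pid 12345's current affinity mask: e

So as long as the mask is in place before the worker forks the
executor, the executor should pick it up.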

regards,
--Jakob

On Tue, Sep 13, 2016 at 7:56 PM, Xiaoye Sun <sunxiaoy...@gmail.com> wrote:
> Hi,
>
> In my experiment, I pin one very important process to a fixed CPU, so the
> performance of Spark task execution suffers whenever the executors or the
> worker use that CPU. I am wondering if it is possible to keep the Spark
> executors from using a particular CPU.
>
> I tried the 'taskset -p [cpumask] [pid]' command to set the affinity of the
> worker process. However, the executor processes created by the worker
> process don't inherit that CPU affinity.
>
> Thanks!
>
> Best,
> Xiaoye
