Hello

If you bind a thread to a newset that contains 4 PUs (4 bits set), the
operating system scheduler is free to run that thread on any of these
PUs. It may run it on one PU, then migrate it to another PU, then
migrate it back, etc. If these PUs do not share all caches, you will
see a performance drop because the data you put in the cache while
running on PU1 has to be reloaded into the cache of another PU when the
OS scheduler migrates the thread. If the PUs share all caches, the
performance drop is much smaller, but it still exists because migrating
a task between PUs takes a bit of time.
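
For instance, something like this (a minimal untested sketch, error
checking omitted; using the first package's cpuset as the example
newset is an arbitrary choice) binds the calling thread to several PUs
at once and lets the OS scheduler move it among them:

#include <hwloc.h>

int main(void)
{
  hwloc_topology_t topology;
  hwloc_topology_init(&topology);
  hwloc_topology_load(topology);

  /* example "newset": all PUs of the first package */
  hwloc_obj_t pkg = hwloc_get_obj_by_type(topology, HWLOC_OBJ_PACKAGE, 0);
  hwloc_bitmap_t newset = hwloc_bitmap_dup(pkg->cpuset);

  /* bind the calling thread to newset; the OS may run it on, and
     migrate it between, any PU contained in newset */
  hwloc_set_cpubind(topology, newset, HWLOC_CPUBIND_THREAD);

  hwloc_bitmap_free(newset);
  hwloc_topology_destroy(topology);
  return 0;
}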

If you call hwloc_bitmap_singlify(newset) before binding, you basically
say "I am allowed to run on any of these 4 PUs, but I am actually going
to run on one specific PU". Singlify takes your set of PUs in the
bitmap and keeps only a single one. Your original binding is respected
(you still run inside the original set), but you do not use all of its
PUs.
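
Concretely, something like this (again an untested sketch with error
checking omitted; here the current binding of the calling thread is
used as newset):

#include <hwloc.h>

int main(void)
{
  hwloc_topology_t topology;
  hwloc_topology_init(&topology);
  hwloc_topology_load(topology);

  /* take the current binding of the calling thread as "newset" */
  hwloc_bitmap_t newset = hwloc_bitmap_alloc();
  hwloc_get_cpubind(topology, newset, HWLOC_CPUBIND_THREAD);

  /* keep a single PU inside newset, then bind to it: the thread still
     runs inside the original set, but cannot migrate anymore */
  hwloc_bitmap_singlify(newset);
  hwloc_set_cpubind(topology, newset, HWLOC_CPUBIND_THREAD);

  hwloc_bitmap_free(newset);
  hwloc_topology_destroy(topology);
  return 0;
}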

HOWEVER, if you bind multiple threads to the same identical newset, you
don't want to singlify, because all of them would run on the SAME PU.
You can either bind without singlify() so that the OS scheduler spreads
your threads across the different PUs of newset, or manually split
newset into multiple subsets, one per thread (hwloc_distrib can do
that).
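
For example, a rough sketch with hwloc_distrib (untested, error
checking omitted, NTHREADS is an arbitrary choice; it distributes over
the whole machine, you could restrict the topology to newset first with
hwloc_topology_restrict if needed):

#include <hwloc.h>
#include <limits.h>

#define NTHREADS 4

int main(void)
{
  hwloc_topology_t topology;
  hwloc_cpuset_t sets[NTHREADS];
  hwloc_obj_t root;
  unsigned i;

  hwloc_topology_init(&topology);
  hwloc_topology_load(topology);

  /* split the machine's PUs into NTHREADS balanced subsets;
     hwloc_distrib allocates the cpusets stored in sets[] */
  root = hwloc_get_root_obj(topology);
  hwloc_distrib(topology, &root, 1, sets, NTHREADS, INT_MAX, 0);

  /* thread i would then bind itself with
     hwloc_set_cpubind(topology, sets[i], HWLOC_CPUBIND_THREAD); */

  for (i = 0; i < NTHREADS; i++)
    hwloc_bitmap_free(sets[i]);
  hwloc_topology_destroy(topology);
  return 0;
}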

I'll try to improve the doc.

Brice



Le 29/08/2018 à 06:26, Junchao Zhang a écrit :
> Hi,   
>   On cpu binding, hwloc manual says "It is often useful to call
> hwloc_bitmap_singlify() first so that a single CPU remains in the set.
> This way, the process will not even migrate between different CPUs
> inside the given set" . I don't understand it. If I do not do
> hwloc_bitmap_singlify, what will happen? Suppose a process's old cpu
> binding is oldset, and I want to bind it to newset. What should I do
> to use hwloc_bitmap_singlify?
>   Thank you.
> --Junchao Zhang
