If you don't care about the overhead, have Python use the output of the
shell command "hwloc-calc -N pu all".
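
For example, a minimal Python sketch (assuming hwloc-calc is installed and
in PATH; hwloc by default only reports the resources the process is allowed
to use, so the count should reflect cgroup/affinity restrictions):

    import subprocess

    def available_pus():
        # Ask hwloc for the number of PUs (hardware threads) visible to
        # this process; the topology is cgroup/cpuset-filtered by default.
        out = subprocess.run(["hwloc-calc", "-N", "pu", "all"],
                             capture_output=True, text=True, check=True)
        return int(out.stdout.strip())

    print(available_pus())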

Brice


On 31/08/2020 at 18:38, Brock Palen wrote:
> Thanks,
>
> yeah I was looking for an API that would take into consideration most
> cases, like I find with hwloc-bind --get, where I can find the number
> of cores the process has access to, whether it's cgroups, other sorts
> of affinity settings, etc.
>
> Brock Palen
> IG: brockpalen1984
> www.umich.edu/~brockp
> Director Advanced Research Computing - TS
> bro...@umich.edu
> (734)936-1985
>
>
> On Mon, Aug 31, 2020 at 12:37 PM Guy Streeter <guy.stree...@gmail.com> wrote:
>
>     I forgot that the cpuset value is still available in cgroups v2. You
>     would want the cpuset.cpus.effective value.
>     More information is available here:
>     https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html
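>
>     For example, a rough Python sketch (assuming a cgroup v2 unified
>     hierarchy mounted at /sys/fs/cgroup, with the cpuset controller
>     enabled for the process's cgroup):
>
>         def effective_cpu_count():
>             # Locate this process's cgroup; the v2 entry starts with "0::".
>             with open("/proc/self/cgroup") as f:
>                 rel = next(line.split("::", 1)[1].strip()
>                            for line in f if line.startswith("0::"))
>             # cpuset.cpus.effective lists the CPUs the cgroup may actually
>             # use, e.g. "0-3,8".
>             with open("/sys/fs/cgroup" + rel + "/cpuset.cpus.effective") as f:
>                 spec = f.read().strip()
>             count = 0
>             for part in spec.split(","):
>                 if "-" in part:
>                     lo, hi = part.split("-")
>                     count += int(hi) - int(lo) + 1
>                 else:
>                     count += 1
>             return count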
>
>     On Mon, Aug 31, 2020 at 11:19 AM Guy Streeter
>     <guy.stree...@gmail.com> wrote:
>     >
>     > As I said, cgroups doesn't limit the group to a number of cores; it
>     > limits processing time, either as an absolute amount or as a share
>     > of what is available.
>     > A Docker process can be restricted to a set of cores, but that is
>     > done with CPU affinity, not cgroups.
>     >
>     > You could try to figure out an equivalency. For instance, if you are
>     > using cpu.shares to limit the cgroups, then figure the ratio of a
>     > cgroup's share to the shares of all the cgroups at that level, and
>     > apply that ratio to the number of available cores to get an estimated
>     > number of threads you should start.
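>     >
>     > Something like this rough sketch of that heuristic (hypothetical
>     > paths, assuming a cgroup v1 cpu controller; the cgroup_dir argument
>     > is illustrative):
>     >
>     >     import os
>     >
>     >     def estimated_threads(cgroup_dir, available_cores):
>     >         # e.g. cgroup_dir = "/sys/fs/cgroup/cpu/mygroup" (hypothetical)
>     >         def shares(d):
>     >             with open(os.path.join(d, "cpu.shares")) as f:
>     >                 return int(f.read())
>     >         parent = os.path.dirname(cgroup_dir)
>     >         # All cgroups at the same level, including this one.
>     >         level = [os.path.join(parent, d) for d in os.listdir(parent)
>     >                  if os.path.isdir(os.path.join(parent, d))]
>     >         ratio = shares(cgroup_dir) / sum(shares(d) for d in level)
>     >         return max(1, round(ratio * available_cores))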
>     >
>     > On Mon, Aug 31, 2020 at 10:40 AM Brock Palen <bro...@umich.edu> wrote:
>     > >
>     > > Sorry if I wasn't clear: I'm trying to find out what is available
>     > > to my process before it starts up threads. If the user is jailed
>     > > in a cgroup (Docker, Slurm, other) and the program tries to start
>     > > 36 threads when it only has access to 4 cores, it's probably not a
>     > > huge deal, but it's not desirable.
>     > >
>     > > I do allow the user to specify the number of threads, but I would
>     > > like to automate it for least astonishment.
>     > >
>     > > Brock Palen
>     > > IG: brockpalen1984
>     > > www.umich.edu/~brockp
>     > > Director Advanced Research Computing - TS
>     > > bro...@umich.edu
>     > > (734)936-1985
>     > >
>     > >
>     > > On Mon, Aug 31, 2020 at 11:34 AM Guy Streeter
>     > > <guy.stree...@gmail.com> wrote:
>     > >>
>     > >> My very basic understanding of cgroups is that it can be used to
>     > >> limit CPU processing time for a group, and to ensure fair
>     > >> distribution of processing time within the group, but I don't know
>     > >> of a way to use cgroups to limit the number of CPUs available to a
>     > >> cgroup.
>     > >>
>     > >> On Mon, Aug 31, 2020 at 8:56 AM Brock Palen <bro...@umich.edu> wrote:
>     > >> >
>     > >> > Hello,
>     > >> >
>     > >> > I have a small utility; it is currently using
>     > >> > multiprocess.cpu_count(), which ignores cgroups etc.
>     > >> >
>     > >> > I see https://gitlab.com/guystreeter/python-hwloc,
>     > >> > but it appears stale.
>     > >> >
>     > >> > How would you detect the number of threads that are safe to
>     > >> > start in a cgroup from Python 3?
>     > >> >
>     > >> > Thanks!
>     > >> >
>     > >> > Brock Palen
>     > >> > IG: brockpalen1984
>     > >> > www.umich.edu/~brockp
>     > >> > Director Advanced Research Computing - TS
>     > >> > bro...@umich.edu
>     > >> > (734)936-1985
>
>
_______________________________________________
hwloc-users mailing list
hwloc-users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/hwloc-users
