Julian Taylor added the comment: Certainly, for anything that needs fine-grained control over affinity, psutil is the best choice, but I am not arguing for implementing full process control in Python. I only want Python to report the number of cores a process can actually work on, so it can make the best use of the available resources.
If you search Python files on GitHub for cpu_count, you find about 18,000 uses; randomly sampling a few, every single one used it to determine the number of CPUs so as to start that many worker jobs for best performance. Every one of these will oversubscribe a host that restricts the CPUs a process may use. This is an issue especially for the increasingly popular use of containers instead of full virtual machines.

As a documentation update, I would like a note saying that this number is the number of (online) CPUs in the system and may not be the number of CPUs the process can actually use. Maybe with a link to len(psutil.Process().cpu_affinity()) as a reference on how to obtain that number.

There would be no dependence on coreutils; I only mentioned it because you can look up there which OS API to use to get the number (e.g. sched_getaffinity). It is trivial API use and should not be a licensing issue; one could also look at the code in psutil, which most likely looks very similar.
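To illustrate, here is a minimal sketch of what such a helper could look like, assuming Python 3.4+ (os.sched_getaffinity is Linux-only; os.cpu_count is the portable fallback). The name usable_cpu_count is hypothetical, not an existing stdlib API:

    import os

    def usable_cpu_count():
        # Hypothetical helper, not an existing stdlib API.
        # On Linux, sched_getaffinity(0) returns the set of CPUs the
        # calling process is allowed to run on, which reflects
        # taskset/cpuset restrictions such as those used by containers.
        try:
            return len(os.sched_getaffinity(0))
        except AttributeError:
            # Not available on this platform; fall back to the total
            # number of online CPUs (os.cpu_count() may return None).
            return os.cpu_count() or 1

In a container whose cpuset limits it to, say, two CPUs, this should return 2, whereas os.cpu_count() would report the host's full core count and lead to oversubscription.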