Thank you, Brice!

I'm testing it on a Lenovo P1 laptop with an i7-12800H CPU (6 P-cores +
8 E-cores). The --cpukind option solves the problem for me :-)

hwloc-calc --cpukind 1 all
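
In case it helps others, below is a minimal C sketch of the same idea with
the hwloc API. My assumptions: hwloc >= 2.4 (for hwloc/cpukinds.h) and that
hwloc reports kinds ordered by increasing efficiency, so the last kind
should be the P-cores. It lists the CPU kinds and binds the current process
to the last one:

/* List CPU kinds and bind the process to the last (highest-efficiency,
 * presumably P-core) kind. Minimal sketch, most error checks omitted. */
#include <stdio.h>
#include <stdlib.h>
#include <hwloc.h>
#include <hwloc/cpukinds.h>

int main(void)
{
    hwloc_topology_t topo;
    hwloc_topology_init(&topo);
    hwloc_topology_load(topo);

    int nr = hwloc_cpukinds_get_nr(topo, 0);
    if (nr <= 0) {
        printf("no CPU kind information available\n");
        hwloc_topology_destroy(topo);
        return 1;
    }

    hwloc_bitmap_t set = hwloc_bitmap_alloc();
    for (int i = 0; i < nr; i++) {
        int efficiency;
        hwloc_cpukinds_get_info(topo, i, set, &efficiency, NULL, NULL, 0);
        char *str;
        hwloc_bitmap_asprintf(&str, set);
        printf("kind #%d efficiency %d cpuset %s\n", i, efficiency, str);
        free(str);
    }

    /* bind to the last kind, which should be the P-cores on this machine */
    hwloc_cpukinds_get_info(topo, nr - 1, set, NULL, NULL, NULL, 0);
    hwloc_set_cpubind(topo, set, HWLOC_CPUBIND_PROCESS);

    hwloc_bitmap_free(set);
    hwloc_topology_destroy(topo);
    return 0;
}

(Compiles with gcc plus the flags from "pkg-config --cflags --libs hwloc".)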

BTW, Intel's patches to improve Linux scheduling on hybrid architectures
made significant progress this year and were recently merged in kernel 6.6:
https://www.phoronix.com/news/Intel-Hybrid-Cluster-Sched-v3
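
Regarding the idea below of keeping the OS on the E-cores: here is a
possible sketch, under the same assumptions as above, plus cgroup v2
mounted at /sys/fs/cgroup with the cpuset controller enabled, root
privileges, and a hypothetical pre-created cgroup named "oscores". It feeds
the E-core cpuset from hwloc into that cgroup, so OS tasks moved there stay
on the E-cores:

/* Hypothetical sketch: write the E-core cpuset into a cgroup-v2
 * cpuset.cpus file so that tasks placed in that cgroup stay on E-cores.
 * Assumes kind #0 is the lowest-efficiency kind, i.e. the E-cores. */
#include <stdio.h>
#include <stdlib.h>
#include <hwloc.h>
#include <hwloc/cpukinds.h>

int main(void)
{
    hwloc_topology_t topo;
    hwloc_topology_init(&topo);
    hwloc_topology_load(topo);

    int nr = hwloc_cpukinds_get_nr(topo, 0);
    if (nr < 2) {
        fprintf(stderr, "need at least two CPU kinds (got %d)\n", nr);
        hwloc_topology_destroy(topo);
        return 1;
    }

    hwloc_bitmap_t ecores = hwloc_bitmap_alloc();
    hwloc_cpukinds_get_info(topo, 0, ecores, NULL, NULL, NULL, 0);

    /* "0-3,8-11"-style list, which is what cpuset.cpus expects */
    char *list;
    hwloc_bitmap_list_asprintf(&list, ecores);

    FILE *f = fopen("/sys/fs/cgroup/oscores/cpuset.cpus", "w"); /* hypothetical cgroup */
    if (!f) { perror("fopen"); return 1; }
    fprintf(f, "%s\n", list);
    fclose(f);
    printf("restricted cgroup 'oscores' to E-cores %s\n", list);

    free(list);
    hwloc_bitmap_free(ecores);
    hwloc_topology_destroy(topo);
    return 0;
}

The batch system's cgroup could be restricted to the remaining (P-core)
CPUs in the same way.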

Jirka

On Fri, Nov 24, 2023 at 9:19 AM Brice Goglin <brice.gog...@inria.fr> wrote:

>
> On 24/11/2023 at 08:51, John Hearns wrote:
> > Good question. Maybe not an answer referring to hwloc specifically.
> > When managing a large NUMA machine, an SGI UV, I ran the OS processes
> > in a boot cpuset restricted to (AFAIR) the first 8 CPUs.
> > On Intel architectures with E and P cores, could we think of running
> > the OS on E cores only and having the batch system schedule compute
> > tasks on P cores?
> >
>
> That's certainly possible. Linux has things like isolcpus to forcibly
> isolate some cores away from OS tasks; that should work for these
> platforms too (by the way, this also applies to ARM big.LITTLE
> platforms running Linux, including Apple M1, etc.).
>
> However, keep in mind that splitting P+E CPUs is not like splitting NUMA
> platforms: isolating NUMA node #0 on the SGI left tons of cores available
> for HPC tasks on the other NUMA nodes. Current P+E CPUs from Intel usually
> have more E than P, and several models are even 2P+8E; that would be a
> lot of E-cores for the OS and very few P-cores for real apps. Your idea
> would apply better if we had 2E+8P instead, but that's not the trend.
>
> Things might be more interesting with Meteor Lake, which (according to
>
> https://www.hardwaretimes.com/intel-14th-gen-meteor-lake-cpu-cores-almost-identical-to-13th-gen-its-a-tic/)
>
> has P+E as usual but also 2 "low-power E" cores on the side. There, you
> could put the OS on those two low-power E-cores.
>
> By the way, the Linux scheduler is supposed to be enhanced to
> automatically figure out which tasks to put on P and E cores, but these
> changes have been discussed for a long time and it's hard to know what
> already works well.
>
> Brice
>
>



-- 
-Jirka
_______________________________________________
hwloc-users mailing list
hwloc-users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/hwloc-users
