Hello,

Please run "lstopo -.synthetic" to compress the output a lot. I will be
> able to reuse it from here and understand your binding mask.
>
Package:2 [NUMANode(memory=270369247232)] L3Cache:8(size=33554432)
L2Cache:8(size=524288) L1dCache:1(size=32768) L1iCache:1(size=32768) Core:1
PU:2(indexes=2*128:1*2)
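
(In case it is useful, a minimal sketch, not part of the original mail, of how
such a synthetic string can be fed back into a topology programmatically with
hwloc_topology_set_synthetic(); error handling omitted:)

    #include <hwloc.h>

    int main(void)
    {
        hwloc_topology_t topo;
        hwloc_topology_init(&topo);
        /* the description is the "lstopo -.synthetic" output quoted above */
        hwloc_topology_set_synthetic(topo,
            "Package:2 [NUMANode(memory=270369247232)] L3Cache:8(size=33554432) "
            "L2Cache:8(size=524288) L1dCache:1(size=32768) L1iCache:1(size=32768) "
            "Core:1 PU:2(indexes=2*128:1*2)");
        hwloc_topology_load(topo);
        /* ... inspect or reuse the topology here ... */
        hwloc_topology_destroy(topo);
        return 0;
    }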

Mike


On Tue, 1 March 2022 at 19:05, Brice Goglin <brice.gog...@inria.fr> wrote:

>
> On 01/03/2022 at 17:34, Mike wrote:
>
> Hello,
>
>> Usually you would rather allocate and bind at the same time so that the
>> memory doesn't need to be migrated when bound. However, if you do not touch
>> the memory after allocation, pages are not actually physically allocated,
>> hence there's nothing to migrate. Might work, but keep this in mind.
>>
>
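(A minimal sketch, not from the original thread, of the allocate-and-bind
variant described just above, assuming a loaded topology "topo", a size "len"
and a target nodeset "set" already exist; error handling omitted:)

    /* allocate len bytes directly on the NUMA node(s) in "set",
       instead of allocating first and binding afterwards */
    void *buf = hwloc_alloc_membind(topo, len, set,
                                    HWLOC_MEMBIND_BIND,
                                    HWLOC_MEMBIND_BYNODESET);
    /* ... use buf ... */
    hwloc_free(topo, buf, len);
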
> I need all the data in one allocation, so that is why I opted to allocate
> and then bind via the area function. The way I understand it is that by
> using the memory binding policy HWLOC_MEMBIND_BIND with
> hwloc_set_area_membind() the pages will actually get allocated on the
> specified cores. If that is not the case I suppose the best solution would
> be to just touch the allocated data with my threads.
>
>
> set_area_membind() doesn't allocate pages, but it tells the operating
> system "whenever you allocate them, do it on that NUMA node". Anyway, what
> you're doing makes sense.
>
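(Similarly, a hedged sketch of the area-binding path being discussed, assuming
"topo", an existing allocation "buf" of "len" bytes and a target nodeset "set";
the memset only forces the pages to actually be allocated on the bound node:)

    /* tell the OS where future page allocations for this range should go */
    hwloc_set_area_membind(topo, buf, len, set,
                           HWLOC_MEMBIND_BIND,
                           HWLOC_MEMBIND_BYNODESET);
    /* first touch: physically allocate the pages on the bound node */
    memset(buf, 0, len);
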
>
>
>> Can you print memory binding like below instead of printing only the first
>> PU in the set returned by get_area_membind?
>>
>     char *s;
>     hwloc_bitmap_asprintf(&s, set);
>     /* s is now a C string of the bitmap, use it in your std::cout line */
>
> I tried that and now get_area_membind returns that all memory is bound to
> 0xffffffff,0xffffffff,,,0xffffffff,0xffffffff
>
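(For completeness, a sketch of the whole check, again assuming "topo", "buf"
and "len" as above; only an illustration, not the exact code from the thread:)

    /* query the binding of the whole range and print the bitmap */
    hwloc_bitmap_t got = hwloc_bitmap_alloc();
    hwloc_membind_policy_t policy;
    char *s;
    hwloc_get_area_membind(topo, buf, len, got, &policy,
                           HWLOC_MEMBIND_BYNODESET);
    hwloc_bitmap_asprintf(&s, got);
    printf("membind: %s (policy %d)\n", s, (int)policy);  /* needs <stdio.h> */
    free(s);                                              /* needs <stdlib.h> */
    hwloc_bitmap_free(got);
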
>
> Please run "lstopo -.synthetic" to compress the output a lot. I will be
> able to reuse it from here and understand your binding mask.
> Brice
>
>
_______________________________________________
hwloc-users mailing list
hwloc-users@lists.open-mpi.org
https://lists.open-mpi.org/mailman/listinfo/hwloc-users
