Hi list,

I seek an answer to the following question:

Is there an easy way to limit the memory allocations of a process bound 
to the processors of a specific NUMA node to only that specific NUMA node?

I would prefer the process to fail (or be killed) rather than have it 
allocate memory from other NUMA nodes.

In my experiments, I launch one HPX process per NUMA node (8 per 
cluster node). This is supposed to be an easy way to keep each 
process's data near the process, on the same NUMA node. It works fine, 
but currently nothing prevents a process from allocating additional 
memory outside the NUMA node it is running on. I suspect this happens 
when the process needs more memory than its NUMA node can provide.

I use SLURM to schedule the processes.
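For reference, what I have in mind is something along these lines (only 
a sketch; the task counts, core counts, and binary name are 
placeholders, not my actual setup):

```shell
#!/bin/bash
#SBATCH --nodes=1
#SBATCH --ntasks=8          # one HPX process per NUMA node (placeholder)
#SBATCH --cpus-per-task=4   # placeholder core count per NUMA node

# Strictly bind each task's CPUs *and* memory to one NUMA node.
# With numactl --membind, allocations that cannot be satisfied from
# that node fail (e.g. the OOM killer steps in) instead of silently
# spilling onto other nodes -- which is the behaviour I am after.
srun bash -c 'numactl --cpunodebind=$SLURM_LOCALID \
                      --membind=$SLURM_LOCALID ./my_hpx_app'
```

I am aware srun also has --cpu-bind and --mem-bind options, but I am 
not sure whether --mem-bind=local enforces a strict policy the way 
numactl --membind does, or merely a preferred one.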

In my code I do not use special memory allocators or anything like that. 
I just distribute processes over NUMA nodes, which is convenient for me 
at this moment.

Thanks for any hints!

Kor

_______________________________________________
hpx-users mailing list
hpx-users@stellar-group.org
https://mail.cct.lsu.edu/mailman/listinfo/hpx-users
