I have an OSC caching question.  I am running a dd process that writes an 8 GB file.  The file is on Lustre, striped 8x1M.  This is run on a system with 2 NUMA nodes (CPU sockets).  All of the data is apparently cached on one NUMA node (node1 in the plot below) until node1 runs out of free memory.  Then dd appears to stall (no more writes complete) until Lustre flushes the cached data from node1.  After that, dd continues writing, but the data is now cached on the second NUMA node, node0.  Why does Lustre go to the trouble of flushing node1 and then not use node1's memory, when there was always plenty of free memory on node0?

I'll forgo a full explanation of the plot; hopefully it is clear enough.  If anyone has questions about what the plot is depicting, please ask.
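For anyone wanting to gather the same kind of per-node numbers the plot shows, a minimal sketch is below.  It parses the format Linux exposes in /sys/devices/system/node/nodeN/meminfo ("Node N Key: value kB" lines); the sample text and function name are my own illustration, not taken from the plot's actual tooling.

```python
import re

def parse_node_meminfo(text):
    """Parse per-NUMA-node meminfo lines of the form 'Node N Key: value kB'
    (the format of /sys/devices/system/node/nodeN/meminfo on Linux).
    Returns {node_id: {key: kilobytes}}."""
    stats = {}
    for line in text.splitlines():
        m = re.match(r"Node\s+(\d+)\s+(\w+):\s+(\d+)\s*kB", line)
        if m:
            node, key, kb = int(m.group(1)), m.group(2), int(m.group(3))
            stats.setdefault(node, {})[key] = kb
    return stats

# Hypothetical sample resembling /sys/devices/system/node/node1/meminfo
sample = """\
Node 1 MemTotal:       16384000 kB
Node 1 MemFree:          123456 kB
Node 1 FilePages:      15000000 kB
"""
print(parse_node_meminfo(sample))
```

Sampling MemFree and FilePages per node at intervals while dd runs is enough to reproduce a plot like the one linked: FilePages on one node climbing while MemFree falls, then the roles swapping after the flush.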

https://www.dropbox.com/scl/fi/pijgnnlb8iilkptbeekaz/dd.png?rlkey=3abonv5tx8w5w5m08bn24qb7x&dl=0

Thanks for any insight shared,

John

_______________________________________________
lustre-discuss mailing list
lustre-discuss@lists.lustre.org
http://lists.lustre.org/listinfo.cgi/lustre-discuss-lustre.org