Hi,

FWIW, this is a very important distinction for CUDA applications.  CUDA 
registers a combined virtual address space spanning all host, device, and 
page-locked memory.  The OS may therefore report a virtual memory footprint of 
tens if not hundreds of GB, while the application is only using (say) 500MB of 
resident memory.  I’ve had problems scheduling these kinds of jobs under 
LoadLeveler, and can think of OGS setups (principally on a non-GPU cluster I 
put together a few years back) where a user would be unable to schedule them 
because of the memory resource accounting.
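The mismatch is easy to see from /proc on Linux, where the kernel reports both 
VmSize (total virtual address space) and VmRSS (resident memory).  A minimal 
sketch, using the current shell's PID as a stand-in for the CUDA process 
(substitute the actual job's PID on the cluster):

```shell
#!/bin/sh
# Compare virtual vs resident memory for a process via /proc/<pid>/status.
# Here $$ (this shell) stands in for the CUDA job's PID.
pid=$$
# VmSize: total virtual address space reserved by the process.
# VmRSS:  memory actually resident in RAM.
# For a CUDA process, VmSize can run to tens of GB while VmRSS stays small.
grep -E '^(VmSize|VmRSS):' /proc/${pid}/status
```

A scheduler that accounts against the VmSize figure (e.g. a consumable h_vmem 
limit in Grid Engine) will charge the job for the whole reserved address 
space, which is why these jobs can be rejected despite modest real usage.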

Cheers,

Chris


--
Dr Chris Jewell
Lecturer in Biostatistics
Institute of Fundamental Sciences
Massey University
Private Bag 11222
Palmerston North 4442
New Zealand
Tel: +64 (0) 6 350 5701 Extn: 3586


_______________________________________________
users mailing list
[email protected]
https://gridengine.org/mailman/listinfo/users