Okay, what would make a good test of this, then? And is my configuration correct?
From: Ondrej Valousek <ondrej.valou...@diasemi.com>
Sent: Friday, August 07, 2020 12:24 PM
To: users@gridengine.org; Trimboli, David <trimb...@cshl.edu>
Subject: Re: m_mem_free and cgroups

Short answer: Use a different tool than stress.

Long answer: The Linux kernel is too clever for tests like stress, because allocating memory is one thing (the kernel takes it only as "alright, I'll see what I can do, here is the pointer"), but actually _using_ that memory is something completely different.

HTH,
Ondrej

________________________________
From: users-boun...@gridengine.org on behalf of Trimboli, David <trimb...@cshl.edu>
Sent: Friday, August 7, 2020 5:05:19 PM
To: users@gridengine.org
Subject: [gridengine users] m_mem_free and cgroups

I'm running Univa Grid Engine 8.6.4. I've been talking to their support, but between a bit of a language barrier and my not fully understanding some aspects of Grid Engine resources, I'm not getting anywhere.

I'm trying to limit memory usage by requiring users to request m_mem_free for their jobs. I've got cgroups running on the nodes, I've set the cgroups_params cgroup_path to /sys/fs/cgroups, and I've set m_mem_free_soft to true. I've made m_mem_free requestable and consumable.
So if I do, for instance, "qsub -l m_mem_free=2G", what I'm hoping is that the node it runs on reserves 2G of resident RAM for the job, and if the job tries to take more than that, any excess is pushed off to swap.

I've been trying this with a script containing only one line: "stress --vm 12 --vm-bytes 1024M -t 60". I'm running "qsub -l m_mem_free=100M" to guarantee that the job takes more than the reserved RAM. However, when I watch the job run on the node, it doesn't appear to respect the limits at all. Top shows that each process takes 1032.1m of virtual memory (exactly what the script asks for), of which any amount up to nearly 1G is resident memory. But I wanted resident memory never to go over 100m.

If I try running the same script with m_mem_free=30G, or without any m_mem_free at all, I see exactly the same behavior. Clearly, I haven't accomplished anything. What else do I need to do? Have I missed something in the configuration? Am I not understanding the operation of m_mem_free_soft? Is there some other way I should be doing this?
_______________________________________________
users mailing list
users@gridengine.org
https://gridengine.org/mailman/listinfo/users