Carlos Alexandro Becker <caarl...@gmail.com> added the comment:

The problem is that, instead of raising a MemoryError, Python tries to 
allocate more memory than the cgroup allows, causing Linux (the OOM killer) to 
kill the process.

A workaround is to set RLIMIT_AS to the value in 
/sys/fs/cgroup/memory/memory.limit_in_bytes, which is more or less what Java 
does when that flag is enabled (there are more details; cgroups v2 uses a 
different path, I believe).

With RLIMIT_AS set, we get a MemoryError as expected instead of a SIGKILL.
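For illustration, here is a minimal sketch of that workaround using the 
stdlib `resource` module. The cgroup v2 path and the "max" sentinel are my 
assumptions based on the kernel docs, not something from the original report:

```python
import resource

# Assumed cgroup paths: v1 as mentioned above, v2 is an assumption.
CGROUP_V1 = "/sys/fs/cgroup/memory/memory.limit_in_bytes"
CGROUP_V2 = "/sys/fs/cgroup/memory.max"

def cgroup_memory_limit():
    """Return the cgroup memory limit in bytes, or None if unlimited/unknown."""
    for path in (CGROUP_V1, CGROUP_V2):
        try:
            with open(path) as f:
                raw = f.read().strip()
        except OSError:
            continue
        if raw != "max":  # cgroup v2 writes "max" when there is no limit
            return int(raw)
    return None

def apply_cgroup_rlimit():
    """Cap the address space so allocations past the cgroup limit raise
    MemoryError instead of the process being OOM-killed."""
    limit = cgroup_memory_limit()
    if limit is not None:
        resource.setrlimit(resource.RLIMIT_AS, (limit, limit))

if __name__ == "__main__":
    apply_cgroup_rlimit()
```

Running this early in a container entrypoint (or via sitecustomize) would 
approximate the behavior proposed below.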

My proposal is to either make this the default or gate it behind some sort of 
flag/environment variable, so users don't have to do it everywhere...

PS: On Java, that flag also causes its OS API to report the cgroup limits when 
asked how much memory is available, instead of reporting the host's memory 
(the default behavior).

PS: I'm not an avid Python user, just an ops guy, so I mostly write YAML these 
days... please let me know if anything I said doesn't make sense.

Thanks!

----------

_______________________________________
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue42411>
_______________________________________