[issue42411] respect cgroups limits when trying to allocate memory

2020-12-03 Thread Carlos Alexandro Becker


Carlos Alexandro Becker added the comment:

Any updates?

--
Python tracker <https://bugs.python.org/issue42411>



[issue42411] respect cgroups limits when trying to allocate memory

2020-11-20 Thread Carlos Alexandro Becker


Carlos Alexandro Becker added the comment:

Just did more tests here:

**on my machine**:

```
$ docker run --name test -m 1GB fedora:33 python3 -c 'import resource; m = int(open("/sys/fs/cgroup/memory/memory.limit_in_bytes").read()); resource.setrlimit(resource.RLIMIT_AS, (m, m)); print(resource.getrlimit(resource.RLIMIT_AS)); x = bytearray(4 * 1024 * 1024 * 1000)'; docker inspect test | grep OOMKilled; docker rm test
Traceback (most recent call last):
  File "<string>", line 1, in <module>
MemoryError
(1073741824, 1073741824)
"OOMKilled": false,
test

$ docker run --name test -m 1GB fedora:33 python3 -c 'x = bytearray(4 * 1024 * 1024 * 1000)'; docker inspect test | grep OOMKilled; docker rm test
"OOMKilled": true,
test
```

**on a k8s cluster**:

```
$ kubectl run -i -t debug --rm --image=fedora:33 --restart=Never --limits='memory=1Gi'
If you don't see a command prompt, try pressing enter.
[root@debug /]# python3
Python 3.9.0 (default, Oct  6 2020, 00:00:00)
[GCC 10.2.1 20200826 (Red Hat 10.2.1-3)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> x = bytearray(4 * 1024 * 1024 * 1000)
Killed
[root@debug /]# python3
Python 3.9.0 (default, Oct  6 2020, 00:00:00)
[GCC 10.2.1 20200826 (Red Hat 10.2.1-3)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import resource
>>> m = int(open("/sys/fs/cgroup/memory/memory.limit_in_bytes").read())
>>> resource.setrlimit(resource.RLIMIT_AS, (m, m))
>>> print(resource.getrlimit(resource.RLIMIT_AS))
(1073741824, 1073741824)
>>> x = bytearray(4 * 1024 * 1024 * 1000)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
MemoryError
>>>
```

--
Python tracker <https://bugs.python.org/issue42411>



[issue42411] respect cgroups limits when trying to allocate memory

2020-11-20 Thread Carlos Alexandro Becker

Carlos Alexandro Becker added the comment:

FWIW, here, both cases get OOM-killed (exit status 137 is 128 + SIGKILL):

```
❯ docker ps -a
CONTAINER ID   IMAGE              COMMAND                  CREATED          STATUS                        PORTS   NAMES
30fc350a8dbd   python:rc-alpine   "python -c 'x = byte…"   24 seconds ago   Exited (137) 11 seconds ago           great_murdock
5ba46a022910   fedora:33          "python3 -c 'x = byt…"   57 seconds ago   Exited (137) 43 seconds ago           boring_edison
```

--
Python tracker <https://bugs.python.org/issue42411>



[issue42411] respect cgroups limits when trying to allocate memory

2020-11-20 Thread Carlos Alexandro Becker


Carlos Alexandro Becker added the comment:

Maybe you're trying to allocate more memory than the host has available? I found that it raises MemoryError in those cases too (kind of easy to reproduce on Docker for Mac)...

--
Python tracker <https://bugs.python.org/issue42411>



[issue42411] respect cgroups limits when trying to allocate memory

2020-11-20 Thread Carlos Alexandro Becker


Carlos Alexandro Becker added the comment:

The problem is that, instead of raising a MemoryError, Python tries to "go out of bounds" and allocate more memory than the cgroup allows, so the kernel's OOM killer terminates the process.

A workaround is to set RLIMIT_AS to the contents of /sys/fs/cgroup/memory/memory.limit_in_bytes, which is more or less what Java does when container support is enabled (there are more details: cgroups v2 exposes the limit at a different path, I think /sys/fs/cgroup/memory.max).

With RLIMIT_AS set, we get the expected MemoryError instead of a SIGKILL.
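
A minimal sketch of that workaround, assuming the cgroup v1 path from this thread plus a fallback to the usual cgroup v2 location (both paths can vary with the container's cgroup layout):

```
import resource

def cap_rlimit_as_to_cgroup():
    """Best effort: lower RLIMIT_AS to the cgroup memory limit, if one is set."""
    for path in (
        "/sys/fs/cgroup/memory/memory.limit_in_bytes",  # cgroup v1
        "/sys/fs/cgroup/memory.max",                    # cgroup v2 (assumed mount point)
    ):
        try:
            raw = open(path).read().strip()
        except FileNotFoundError:
            continue
        if raw == "max":  # cgroup v2 reports "max" when no limit is set
            return
        limit = int(raw)
        resource.setrlimit(resource.RLIMIT_AS, (limit, limit))
        return

cap_rlimit_as_to_cgroup()
```

Note that RLIMIT_AS caps virtual address space rather than resident memory, so it only approximates the cgroup's accounting.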

My proposal is to either make this the default or gate it behind some sort of flag/environment variable, so users don't need to do it everywhere...

PS: In Java, that flag also makes the runtime report the cgroup limits when asked how much memory is available, instead of reporting the host's memory (the default behavior).
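
Nothing in the stdlib does this today; a hedged sketch of what such a container-aware query could look like in Python (the cgroup paths are the same assumptions as above):

```
import os

def effective_memory_limit() -> int:
    """Return the smaller of host physical RAM and the cgroup limit, in bytes."""
    host = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")
    for path in (
        "/sys/fs/cgroup/memory/memory.limit_in_bytes",  # cgroup v1
        "/sys/fs/cgroup/memory.max",                    # cgroup v2 (assumed mount point)
    ):
        try:
            raw = open(path).read().strip()
        except FileNotFoundError:
            continue
        if raw != "max":
            # An unlimited v1 cgroup reports a huge sentinel value, so min() is safe.
            return min(host, int(raw))
    return host
```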

PS: I'm not an avid Python user, just an ops guy, so I mostly write YAML these days... please let me know if anything I said doesn't make sense.

Thanks!

--
Python tracker <https://bugs.python.org/issue42411>



[issue42411] respect cgroups limits when trying to allocate memory

2020-11-19 Thread Carlos Alexandro Becker


New submission from Carlos Alexandro Becker:

A common use case is running Python inside containers, for instance for training models and the like.

The Python process sees the host's memory and CPUs and ignores its cgroup limits, which often leads to OOM kills, for instance:

```
docker run -m 1G --cpus 1 python:rc-alpine python -c 'x = bytearray(80 * 1024 * 1024 * 1000)'
```


Linux will kill the process once its memory usage reaches the 1 GB limit.

Ideally, we should have an option to make Python try to allocate only the RAM it's limited to, maybe something similar to Java's -XX:+UseContainerSupport.
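
No such option exists in CPython today; purely as a sketch, the opt-in could be emulated with a sitecustomize.py gated on an environment variable (PY_RESPECT_CGROUP_LIMITS here is made up, not a real setting):

```
# sitecustomize.py -- imported automatically at interpreter startup when on sys.path.
# PY_RESPECT_CGROUP_LIMITS is a hypothetical opt-in variable, not a real one.
import os
import resource

if os.environ.get("PY_RESPECT_CGROUP_LIMITS") == "1":
    try:
        limit = int(open("/sys/fs/cgroup/memory/memory.limit_in_bytes").read())
        resource.setrlimit(resource.RLIMIT_AS, (limit, limit))
    except (OSError, ValueError):
        pass  # no cgroup v1 limit visible; keep the defaults
```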

--
components: IO
messages: 381442
nosy: caarlos0
priority: normal
severity: normal
status: open
title: respect cgroups limits when trying to allocate memory
versions: Python 3.10, Python 3.6, Python 3.7, Python 3.8, Python 3.9

Python tracker <https://bugs.python.org/issue42411>