Let's say I have a host with 32 GB RAM.

To make sure the host is not affected by any weird memory consumption patterns, I've set the following in the container:

  limits.memory: 29GB
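In case it matters, the limit was set per container with something like the following ("c1" is just an example container name):

  lxc config set c1 limits.memory 29GB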

This works quite well. Previously, several processes with high memory usage, forking rapidly (a forkbomb during testing, but also e.g. a supervisor in normal usage), running in the container could make the host very slow or even unreachable; with the above setting, everything on the host stays smooth no matter what the container does.

However, that's just with one container.

With two (or more) containers each having "limits.memory: 29GB" set, it's easy for each of them to consume e.g. 20 GB, leading to host unavailability.

Is it possible to set "limits.memory: 29GB" globally, or for a group of containers?
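As far as I can tell, putting the limit into a profile still applies it to each container individually, not as a combined cap, e.g.:

  lxc profile set default limits.memory 29GB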

For example, if I add "MemoryMax=29G" to the [Service] section of /etc/systemd/system/snap.lxd.daemon.service (or a drop-in for it) - would that achieve the desired effect?
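What I have in mind is roughly the following drop-in (assuming the container payloads actually end up under the daemon's cgroup - which is exactly the part I'm not sure about):

  # /etc/systemd/system/snap.lxd.daemon.service.d/override.conf
  [Service]
  MemoryMax=29G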



Tomasz Chmielewski
https://lxadm.com