Keep in mind, overcommit considers *committed* address space, not used. So a process which creates a large anonymous mapping but doesn't touch any of that memory (so the address space is committed, but no actual pages of RAM are assigned to it) will count against the overcommit limit even though it consumes no RAM. Therefore overcommit_ratio=100 is actually fairly conservative, as it's common for processes to have some mappings which aren't backed by RAM (but could be).
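As a quick illustration of that accounting (a sketch assuming a Linux /proc; note Committed_AS is system-wide, so other activity on the machine adds noise to the measurement):

```python
import mmap

def meminfo_kb(field):
    """Return a field from /proc/meminfo, in kB."""
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith(field + ":"):
                return int(line.split()[1])
    raise KeyError(field)

before = meminfo_kb("Committed_AS")
# Map 256 MiB of anonymous memory but never touch it: the address
# space is committed, yet no physical pages are assigned to it, so
# RSS stays flat while Committed_AS jumps.
m = mmap.mmap(-1, 256 * 1024 * 1024)
after = meminfo_kb("Committed_AS")
print(f"Committed_AS grew by roughly {(after - before) // 1024} MiB")
m.close()
```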
Deciding exactly what overcommit ratio is appropriate in general is likely to be difficult, since every system runs a different mix of processes, which may use mmaps in different ways. One big advantage of overcommit_memory=0 is that it allows the system to protect critical processes from OOM situations by adjusting those processes' oom_score_adj values (systemd allows this value to be specified in unit configuration files). With overcommit_memory=2, once the limit is reached, memory allocations and mmaps start to fail for all processes equally, which might result in critical processes terminating. So I think overcommit_memory=0 is better for a general-use system. In situations where some other arrangement would work better, the sysadmin can adjust the sysctl values.

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1666683

Title:
  Default VM overcommit sysctls in Ubuntu lead to unnecessary
  oom-killer invocation

Status in linux package in Ubuntu:
  Triaged

Bug description:
  On my system, running a couple of LXD containers and VMs (16 GB RAM,
  16 GB swap) seems to cause the kernel oom-killer to be frequently
  triggered. In order to try to resolve this, first I tried limiting
  the memory my containers were allowed to use, such as by using:

    lxc config set <container> limits.memory 1024GB

  ... and restarting the containers for good measure. However, this
  didn't resolve the problem. After looking deeper into what might
  trigger the oom-killer even though I seemed to have plenty of memory
  free, I found out that the kernel VM overcommit behaviour can be
  tuned with the `vm.overcommit_memory` sysctl. The default value of
  `vm.overcommit_memory`, 0, uses a heuristic to determine whether or
  not requested memory is available.
  According to `man 5 proc`, if the value is set to zero:

    """
    calls of mmap(2) with MAP_NORESERVE are not checked, and the
    default check is very weak, leading to the risk of getting a
    process "OOM-killed".
    """

  This seems to describe exactly my problem. However, upon setting
  this value to 2, many of my open programs immediately aborted with
  out-of-memory errors. This is because the default value of
  `vm.overcommit_ratio` only allows the usage of 50% of the system's
  total (memory + swap). I then found the following answer on Server
  Fault: http://serverfault.com/a/510857/15268

  The answers to this question seem to make a good case that the
  overcommit_ratio should be set to 100. In summary, I think the
  following sysctl values should be the new defaults:

    vm.overcommit_memory = 2
    vm.overcommit_ratio = 100

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1666683/+subscriptions
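To make the default ratio's effect concrete, here is a small sketch (my own illustration, not kernel code) of the CommitLimit formula used under overcommit_memory=2 — swap plus ratio% of RAM (the kernel additionally subtracts hugepage reservations, ignored here) — for a machine like the reporter's:

```python
GIB = 1 << 30

def commit_limit(ram, swap, ratio):
    """CommitLimit under vm.overcommit_memory=2: swap + ratio% of RAM.
    (Simplified: the real kernel also excludes hugetlb pages.)"""
    return swap + ram * ratio // 100

ram = swap = 16 * GIB  # the bug reporter's 16 GB RAM + 16 GB swap

# Default ratio of 50 caps committed memory at 24 GiB of 32 GiB total,
# which is why programs started aborting the moment mode 2 was enabled:
print(commit_limit(ram, swap, 50) / GIB)   # 24.0
# ratio=100 makes the limit the full RAM + swap:
print(commit_limit(ram, swap, 100) / GIB)  # 32.0
```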