Hello,
I've restricted an LXC container to 300 MB of RAM. When I run the
following command 100 times, dd crashes about 3% of the time:
dd if=/dev/zero of=out bs=1000000 count=1000
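(For reference, the limit is applied through the memory cgroup; in the
container config that's typically something along the lines of
lxc.cgroup.memory.limit_in_bytes = 300M
and the repeated runs are just a plain shell loop, roughly
for i in $(seq 1 100); do dd if=/dev/zero of=out bs=1000000 count=1000; done
nothing more elaborate than that.)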
The exit code is -9, which suggests the process was killed with SIGKILL
and is consistent with the container running out of memory. The cgroup
counters show about 90,000 memory allocation failures. I'm guessing this
happens because /proc/meminfo inside the container reports the host's
total memory rather than the container's limit.
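(The allocation-failure number comes from the memory cgroup's failcnt
counter; I'm reading it with something like
cat /sys/fs/cgroup/memory/lxc/<container>/memory.failcnt
where the exact cgroup path depends on how the host mounts the
controllers.)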
When dd does succeed, I notice that the cgroup's memory.usage_in_bytes
reports 300 MB in use. According to "free -m", those 300 MB appear to be
sitting in buffer/cache. There are still tens of thousands of memory
allocation failures.
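(Concretely, those numbers come from commands along the lines of
cat /sys/fs/cgroup/memory/lxc/<container>/memory.usage_in_bytes
free -m
with the exact cgroup path depending on the host's cgroup layout.)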
My goal is to make sure that dd never crashes. In other tests, writing
arbitrarily large files also seems to eventually trigger an error that I
assume comes from running out of memory.
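(Those other tests are just larger writes, something like
dd if=/dev/zero of=bigfile bs=1M count=4000
i.e. well past the 300 MB limit.)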
Can anyone shed light on this situation?
Best,
Ken
_______________________________________________
Lxc-users mailing list
Lxc-users@lists.sourceforge.net
https://lists.sourceforge.net/lists/listinfo/lxc-users