If you have two default tmpfs instances on your box (hi buildroot!) and
they're world-writeable, and a normal user goes "cat /dev/zero >
/run/fillit; cat /dev/zero > /tmp/fillit", then the system goes "sh:
can't fork" when you try to rm those files, because each instance
defaults to 50% of total system memory no matter how many instances
there are. The writes only stop when memory allocation fails in the
kernel.
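
(The workaround for now is not taking the default: give each instance
an explicit size= so the limits add up to something sane. The mount
points and percentages here are just examples:

  mount -t tmpfs -o size=25% tmpfs /run
  mount -t tmpfs -o size=25% tmpfs /tmp

That way the two instances together can only ever pin half of memory,
instead of each one independently claiming half.)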

I'm not quite sure how to fix that. I want to say "change the default
to be 50% of what's _left_" (and if you size= manually you get to keep
the pieces), but how do you define "what's left"? Other things are
already using some memory, and that 50% _was_ the amount it's safe for
tmpfs to use. (Once upon a time the logic was that 50% of memory could
go to disk cache. I think that's gotten a bit more complicated since
then? No idea what the current rules are.)
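
(The nearest thing I know of to an existing kernel-blessed "what's
left" number is the MemAvailable estimate in /proc/meminfo, on kernels
new enough to export it, so "50% of what's left" at mount time would be
something like:

  awk '/MemAvailable/ {print int($2/2) " kB"}' /proc/meminfo

But that number moves around constantly, so two mounts done at
different times would get different limits, which is its own kind of
surprising.)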

This is related to the guy who wanted initramfs to be ramfs instead of
tmpfs: he had a cpio.gz that extracted to more than 50% of system
memory, so it extracted fine with ramfs but hit the limit and failed
with tmpfs. That sounds like a tmpfs rootfs (i.e. initmpfs) should
start with a limit of 90% and then scale it _down_ to 50% after
extracting the cpio. (I wonder what happens if you -o remount the
size= limit to smaller than what the filesystem currently holds?
Hmmm...)
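
Easy enough to poke at from userspace, anyway. Something like this,
with arbitrary sizes, fills a tmpfs past a proposed new limit and then
tries to shrink it:

  mount -t tmpfs -o size=64m tmpfs /mnt
  dd if=/dev/zero of=/mnt/big bs=1M count=48
  mount -o remount,size=16m /mnt

If the remount path refuses to shrink below what the filesystem
currently holds, then the 90%-then-50% trick needs its own code path
instead of just a remount after the cpio extract.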

Rob

P.S. Yes, I need to pipe rootflags= through so you can specify the size
on the kernel command line. I've got 3 or 4 initramfs/tmpfs plates
spinning and haven't had a chance to work on them all through this
merge window. I'll try to get some patches ready by next weekend; if
they miss the release, they miss the release...
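
(The idea being that once rootflags= is plumbed through, you could say
something like this on the kernel command line, with size= parsing the
same way it does for a normal tmpfs mount; the 75% is just an example:

  rootflags=size=75%

...and initmpfs would size itself accordingly.)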
