Voelker, Bernhard wrote:
> Jim Meyering wrote:
>
>> Voelker, Bernhard wrote:
>>
>> > Jim Meyering wrote:
>> >
>> > +++ b/tests/rm/4-million-entry-dir
>> > ...
>> > +# Put 4M files in a directory.
>> > +mkdir d && cd d || framework_failure_
>> > +seq 4000000|xargs touch || framework_failure_
>> > +
>> > +cd ..
>> > +
>> > +# Restricted to 50MB, rm from coreutils-8.12 would fail with a
>> > +# diagnostic like "rm: fts_read failed: Cannot allocate memory".
>> > +ulimit -v 50000
>> > +rm -rf d || fail=1
>> > +
>> > +Exit $fail
>> >
>> > Wouldn't this leave behind lots of used inodes in case of a failure?
>>
>> No, at least I hope not.
>> The test is run via a framework (tests/init.sh) that creates a temporary
>> directory in which those commands are run, and it (init.sh) also arranges
>> to remove that temporary directory upon exit, interrupt, etc.
>
> ok, good to know.
>
>> > Additionally, looking at 2 of my (not so big) SLES servers:
>> > most partitions only have <500000 inodes (/ /opt /usr /tmp /var),
>> > so maybe it's worth checking beforehand whether the filesystem
>> > meets the test requirements.  What do you think?
>
>> If the setup phase fails (the seq...|xargs touch), then the test fails
>> with a diagnostic already.  And considering that the test is already
>> marked as "very expensive", and hence not run by default (you have to
>> set RUN_VERY_EXPENSIVE_TESTS=yes in order to run it), I think we're ok,
>> since I'd prefer a failure to a skip in that case.
>
> I agree.
>
>> People who take the trouble to run the very expensive tests (I'd be
>> surprised if there are more than a handful) probably want to know when/if
>> their test environment is causing test failures.
>
> I like such tests ;-)
>
> BTW: Wouldn't this test deserve a proper make target, e.g.
> "make check-expensive"?

Yes, good idea.  That would make it easier to run just those tests.
However, hard-coding the list of expensive and very-expensive tests
would require the same sort of machinery as we have for root_tests
(see check-root) in tests/Makefile.am: a hand-maintained list of the
expensive and very-expensive tests, rules to run them, and a rule to
cross-check that the list is complete, as cfg.mk's sc-root_tests rule
does for root_tests.
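For illustration only, here is a rough sketch of what that machinery
might look like in tests/Makefile.am, modeled on the check-root/
root_tests pattern described above.  The expensive_tests list and the
check-expensive target are hypothetical names, not existing coreutils
code, and the recipe assumes the harness picks up the
RUN_EXPENSIVE_TESTS/RUN_VERY_EXPENSIVE_TESTS settings from the make
command line, as a plain "make check" does:

  # Hypothetical hand-maintained list, analogous to root_tests;
  # a cfg.mk syntax-check rule would verify that it stays in sync
  # with the expensive_/very_expensive_ calls in the scripts.
  expensive_tests = \
    rm/many-dir-entries-vs-OOM

  .PHONY: check-expensive
  check-expensive:
	$(MAKE) check TESTS='$(expensive_tests)' \
	  RUN_EXPENSIVE_TESTS=yes RUN_VERY_EXPENSIVE_TESTS=yes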
Back to your inode limitation point: it's a good one.  I realized that
we certainly don't need 4M files to demonstrate the problem.  Since
100,000 is the threshold, I lowered the number used in the test to
just 200,000.  With that, the test runs much more quickly, of course:

From 1871225451473d1bae3d41607e62ac02821aa042 Mon Sep 17 00:00:00 2001
From: Jim Meyering <meyer...@redhat.com>
Date: Wed, 24 Aug 2011 10:36:25 +0200
Subject: [PATCH 1/2] tests: adjust the new, very expensive rm test to be
 less expensive

* tests/rm/4-million-entry-dir: Create only 200,000 files, rather
than 4 million.  The latter was overkill, and was too likely to fail
due to inode exhaustion.  Not everyone is using btrfs yet.
Now that this file doesn't take so long, label it as merely
"expensive", rather than "very expensive".
Thanks to Bernhard Voelker for pointing out the risk of inode
exhaustion.
---
 tests/rm/4-million-entry-dir | 19 +++++++++++--------
 1 files changed, 11 insertions(+), 8 deletions(-)

diff --git a/tests/rm/4-million-entry-dir b/tests/rm/4-million-entry-dir
index 23130a6..44855cf 100755
--- a/tests/rm/4-million-entry-dir
+++ b/tests/rm/4-million-entry-dir
@@ -1,5 +1,6 @@
 #!/bin/sh
-# in coreutils-8.12, this would have required ~1GB of memory
+# In coreutils-8.12, rm, du, chmod, etc. would use too much memory
+# when processing a directory with many entries (as in > 100,000).

 # Copyright (C) 2011 Free Software Foundation, Inc.

@@ -17,19 +18,21 @@
 # along with this program.  If not, see <http://www.gnu.org/licenses/>.

 . "${srcdir=.}/init.sh"; path_prepend_ ../src
-print_ver_ rm
+print_ver_ rm du

-very_expensive_
+expensive_

-# Put 4M files in a directory.
+# With many files in a single directory...
 mkdir d && cd d || framework_failure_
-seq 4000000|xargs touch || framework_failure_
+seq 200000|xargs touch || framework_failure_

 cd ..

-# Restricted to 50MB, rm from coreutils-8.12 would fail with a
-# diagnostic like "rm: fts_read failed: Cannot allocate memory".
-ulimit -v 50000
+# Restricted to 40MB, each of these coreutils-8.12 commands would fail
+# with a diagnostic like "rm: fts_read failed: Cannot allocate memory".
+ulimit -v 40000
+du -sh d || fail=1
+chmod -R 700 d || fail=1
 rm -rf d || fail=1

 Exit $fail
--
1.7.6.677.gb5fca
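As an aside, for anyone adapting such a test who would rather skip than
fail when inodes are scarce (the pre-check Bernhard floated above,
which we deliberately do not adopt here), a guard along these lines
could go near the top of the script.  This is only a sketch: it uses
init.sh's skip_ helper, assumes GNU df's 'df -i' column layout, and
the 300000 margin is arbitrary:

  # Skip unless the current filesystem has enough free inodes.
  # With GNU df, 'df -i .' reports free inodes in the IFree column,
  # the 4th field of the single data line.
  free_inodes=$(df -i . | awk 'NR==2 {print $4}')
  test "$free_inodes" -gt 300000 \
    || skip_ 'too few free inodes on the current filesystem'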
From be380bfa8360e0232da7dbee198206dc57023745 Mon Sep 17 00:00:00 2001
From: Jim Meyering <meyer...@redhat.com>
Date: Wed, 24 Aug 2011 10:40:51 +0200
Subject: [PATCH 2/2] maint: rename a test

Lesson: do not include details like "4 million" in a file name.
* tests/rm/many-dir-entries-vs-OOM: Renamed from ...
* tests/rm/4-million-entry-dir: ...this.
* tests/Makefile.am (TESTS): Reflect renaming.
---
 tests/Makefile.am                                  | 2 +-
 ...4-million-entry-dir => many-dir-entries-vs-OOM} | 0
 2 files changed, 1 insertions(+), 1 deletions(-)
 rename tests/rm/{4-million-entry-dir => many-dir-entries-vs-OOM} (100%)

diff --git a/tests/Makefile.am b/tests/Makefile.am
index f0200e1..c37cca6 100644
--- a/tests/Makefile.am
+++ b/tests/Makefile.am
@@ -135,7 +135,7 @@ TESTS = \
   rm/unread3 \
   rm/unreadable \
   rm/v-slash \
-  rm/4-million-entry-dir \
+  rm/many-dir-entries-vs-OOM \
   chgrp/default-no-deref \
   chgrp/deref \
   chgrp/no-x \
diff --git a/tests/rm/4-million-entry-dir b/tests/rm/many-dir-entries-vs-OOM
similarity index 100%
rename from tests/rm/4-million-entry-dir
rename to tests/rm/many-dir-entries-vs-OOM
--
1.7.6.677.gb5fca
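With both patches applied, the test is merely "expensive", so something
like the following should exercise it directly.  This is a sketch using
the usual automake TESTS= override; the exact invocation may differ
between coreutils versions:

  cd tests
  make check TESTS=rm/many-dir-entries-vs-OOM \
    RUN_EXPENSIVE_TESTS=yes VERBOSE=yes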