I have a directory on an ext2 filesystem that grows to 150k files and more
over time. Due to the sheer number of files, simple operations like find and
ls naturally take longer and longer, which is expected.

What's not expected is that even after clearing the directory out,
performance remains degraded. Only deleting and recreating the directory
fixes the problem completely, until it fills up again. It was described to me
as something to do with a "high water mark" kept on the directory.
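
By "deleting and recreating" I mean something along these lines (the paths
here are just illustrative):

mv temp temp.old        # set the bloated directory aside
mkdir temp              # new directory entry starts small again
# move back anything that still needs to be there, then:
rm -rf temp.old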

For instance, if I create a directory, say...

drwxr-xr-x  2 root    root     4096 2009-11-03 20:27 temp

and fill it with a hundred thousand files...

for i in `seq 1 1 100000`; do touch temp/file$i; done

drwxr-xr-x  2 root    root    1896448 2009-11-03 20:30 temp

The size of the dir reflects the increase, which is expected, but now if I
do...

rm -f temp/*

drwxr-xr-x  2 root    root    1896448 2009-11-03 20:31 temp

The directory is empty, but its size stays at 1896448.

This seems to have an impact on the overall performance of find and ls
commands in that directory, and it gets worse as time goes on.
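
For what it's worth, a rough way to see the difference is to time the same
commands against the emptied-but-bloated directory and a freshly created one:

time ls temp > /dev/null
time find temp > /dev/null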

Can someone please shed some light on this?

This is on a RHEL4 update 4 machine.

Thanks

Corey