URL:
<http://savannah.gnu.org/bugs/?31961>
Summary: Out-of-control memory usage when run against a
large directory
Project: findutils
Submitted by: None
Submitted on: Thu 23 Dec 2010 07:52:10 PM UTC
Category: find
Severity: 3 - Normal
Item Group: None
Status: None
Privacy: Public
Assigned to: None
Originator Name: Alex
Originator Email: [email protected]
Open/Closed: Open
Discussion Lock: Any
Release: 4.4.0
Fixed Release: None
_______________________________________________________
Details:
So I had this directory:
drwxrwxrwx 2 dan users 980M 2010-12-22 18:38 sessions2
The directory's contents are small - just millions of tiny little files.
I ran
find sessions2 -type f -delete
but had to stop it because of escalating memory usage. At one point it
was using 65% of the system's memory.
Using simply
find sessions2 -print
printed the ".", then stopped printing, and the memory usage climbed and
climbed and climbed.
I suspect that find is reading the directory's entire index into memory
before doing anything.
I was able to empty the directory with this PHP script, with negligible
memory usage:
<?php
// Delete each entry as it is read, without buffering the whole listing.
$dir = 'sessions2';
if ($dh = opendir($dir)) {
    while (($file = readdir($dh)) !== false) {
        if ($file === '.' || $file === '..') {
            continue;
        }
        unlink($dir . '/' . $file);
    }
    closedir($dh);
}
?>
Could there perhaps be an option that allows using find on these huge
directories without reading the entire listing into memory first? Perhaps
by disabling sorting and whatnot.
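For illustration, here is a minimal sketch in C of the streaming behavior being requested - this is not findutils code, and the function name delete_entries_streaming is made up. Each entry is unlinked as soon as readdir() returns it, so memory use stays flat no matter how many files the directory holds:

```c
/* Hypothetical sketch of streaming deletion: unlink each directory
 * entry as readdir() yields it, instead of collecting the whole
 * directory index in memory first. */
#define _POSIX_C_SOURCE 200809L
#include <dirent.h>
#include <string.h>
#include <unistd.h>

int delete_entries_streaming(const char *path)
{
    DIR *d = opendir(path);
    if (d == NULL)
        return -1;

    struct dirent *ent;
    while ((ent = readdir(d)) != NULL) {
        /* skip the "." and ".." pseudo-entries */
        if (strcmp(ent->d_name, ".") == 0 || strcmp(ent->d_name, "..") == 0)
            continue;
        /* unlinkat() removes the entry relative to the open directory,
         * so no full path string has to be built per file */
        unlinkat(dirfd(d), ent->d_name, 0);
    }
    return closedir(d);
}
```

This is essentially what the PHP script above does, one file at a time in constant memory.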
Thanks!
_______________________________________________________
Reply to this item at:
<http://savannah.gnu.org/bugs/?31961>
_______________________________________________
Message sent via/by Savannah
http://savannah.gnu.org/