2011/8/19 Pádraig Brady p...@draigbrady.com:
file descriptors are usually more constrained, so I'd
lean towards using more memory.
Of course for fts, it's possible to take an adaptive strategy also.
If at any time fts fails to open a directory with ENFILE, it can
check the parent directories;
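A minimal sketch of that adaptive idea, in Python rather than the C of fts, and under the assumption that the fallback is simply "release a cached descriptor and retry" (the real fts logic is more involved; the names `cached` and `open_adaptive` are illustrative, and EMFILE is used alongside ENFILE because per-process exhaustion is easier to provoke):

```python
import errno
import os
import resource

# Artificially lower the fd limit so exhaustion is easy to reach.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
limit = 64 if hard == resource.RLIM_INFINITY else min(64, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (limit, hard))

# Fill the descriptor table with "cached" fds, standing in for the
# open parent-directory descriptors that fts keeps.
cached = []
try:
    while True:
        cached.append(os.open("/dev/null", os.O_RDONLY))
except OSError as e:
    assert e.errno in (errno.EMFILE, errno.ENFILE)

def open_adaptive(path):
    """Open path; on fd exhaustion, release one cached fd and retry."""
    try:
        return os.open(path, os.O_RDONLY)
    except OSError as e:
        if e.errno not in (errno.EMFILE, errno.ENFILE) or not cached:
            raise
        # fts would instead revert a parent-directory fd to a
        # saved pathname; closing a cached fd stands in for that.
        os.close(cached.pop())
        return os.open(path, os.O_RDONLY)

fd = open_adaptive("/dev/null")
print("opened after releasing a cached fd:", fd >= 0)
```

The trade-off Pádraig mentions is visible here: every cached descriptor saves a re-traversal later, but each one consumes a slot in a table that is usually far smaller than available memory.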
On 08/18/2011 02:53 PM, Jim Meyering wrote:
A few weeks ago, I noticed that removing a directory with very many
entries made rm -rf use an inordinate amount of memory. I've fixed the
underlying fts module to batch its readdir calls in the common case,
thus imposing a reasonable ceiling on memory usage. The ceiling of
~30MB is reached
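The batching idea can be sketched as follows; this is a hedged illustration in Python, not the actual fts.c change, and `process_in_batches`, `action`, and the demo threshold are invented for the example (the real default threshold is on the order of 100,000 entries):

```python
import os
import tempfile

THRESHOLD = 1000  # illustrative; fts's default is far larger

def process_in_batches(path, action, threshold=THRESHOLD):
    """Apply action() to the entries of path, at most `threshold`
    entries in memory at a time, instead of slurping the whole
    directory listing at once.  action must remove the entries it
    is given, or the loop would never terminate."""
    while True:
        batch = []
        with os.scandir(path) as it:
            for entry in it:
                batch.append(entry.name)
                if len(batch) >= threshold:
                    break
        if not batch:
            return
        for name in batch:
            action(os.path.join(path, name))

# Demo: delete 2500 files while never holding more than 1000 names.
d = tempfile.mkdtemp()
for i in range(2500):
    open(os.path.join(d, "f%d" % i), "w").close()

process_in_batches(d, os.unlink)
print("remaining entries:", len(os.listdir(d)))
```

The point of the ceiling is that memory use is now proportional to the threshold, not to the total number of directory entries, which is why rm -rf on a huge directory stays bounded.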
On 08/18/2011 09:14 PM, Jim Meyering wrote:
Pádraig Brady wrote:
With the default threshold of 100,000, the maximum is under 30MB
and slightly faster than the first run:
...