Re: fts: do not exhaust memory when processing million-entry directory

2011-08-21 Thread James Youngman
2011/8/19 Pádraig Brady p...@draigbrady.com: file descriptors are usually more constrained, so I'd lean towards using more memory. Of course, for fts it's possible to take an adaptive strategy also. If at any time fts fails to open a directory with ENFILE, it can check the parent directories;

fts: do not exhaust memory when processing million-entry directory

2011-08-18 Thread Jim Meyering
A few weeks ago, I noticed that removing a directory with very many entries made rm -rf use an inordinate amount of memory. I've fixed the underlying fts module to batch its readdir calls in the common case, thus imposing a reasonable ceiling on memory usage. The ceiling of ~30MB is reached

Re: fts: do not exhaust memory when processing million-entry directory

2011-08-18 Thread Pádraig Brady
On 08/18/2011 02:53 PM, Jim Meyering wrote: A few weeks ago, I noticed that removing a directory with very many entries made rm -rf use an inordinate amount of memory. I've fixed the underlying fts module to batch its readdir calls in the common case, thus imposing a reasonable ceiling on

Re: fts: do not exhaust memory when processing million-entry directory

2011-08-18 Thread Jim Meyering
Pádraig Brady wrote: With the default threshold of 100,000, the maximum is under 30MB and slightly faster than the first run: [memory-usage graph not reproduced in archive]

Re: fts: do not exhaust memory when processing million-entry directory

2011-08-18 Thread Pádraig Brady
On 08/18/2011 09:14 PM, Jim Meyering wrote: Pádraig Brady wrote: With the default threshold of 100,000, the maximum is under 30MB and slightly faster than the first run: [memory-usage graph not reproduced in archive]