On 12/17/2012 10:28 AM, Gilles Lenfant wrote:
Hi,

I have googled but did not find an efficient solution to my problem.
My customer provides a directory with a huuuuge list of files (flat,
potentially 100000+) and I cannot reasonably use
os.listdir(this_path) without creating a big memory footprint.

Is it really big enough to be a real problem? See below.

So I'm looking for an iterator that yields the file names of a
directory and does not build a giant list of what's in it.

i.e :

for filename in enumerate_files(some_directory): # My cooking...

See http://bugs.python.org/issue11406
As I said there, I personally think (and still do) that listdir should have been changed in 3.0 to return an iterator rather than a list. Developers whose opinions count for more than mine disagree, on the basis that no application has the millions of directory entries needed to make space a real issue. They also claim that time is a wash either way.
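
For concreteness, here is a minimal sketch of the enumerate_files() generator the OP describes. It leans on os.scandir(), which is not in 3.3 (it arrived later, in 3.5 via PEP 471); on 3.3 the practical choices are os.listdir() or ctypes around opendir/readdir. Treat it as a sketch, not the one true spelling:

import os

def enumerate_files(path='.'):
    "Yield the names of regular files in *path*, one at a time."
    for entry in os.scandir(path):   # lazy iterator over DirEntry objects
        if entry.is_file():          # skip subdirectories and the like
            yield entry.name

# usage, mirroring the loop in the question:
# for filename in enumerate_files(some_directory):
#     ...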

As for space: 100000 entries x 100 bytes/entry (a generous guess at the average) = 10,000,000 bytes, which is no big deal with gigabytes of memory. So the logic goes. Here is a smaller example from my machine with 3.3:

import os
from sys import getsizeof

def seqsize(seq):
    "Get size of flat sequence and contents"
    return sum((getsizeof(item) for item in seq), getsizeof(seq))

d = os.listdir()
print(seqsize([1, 2, 3]), len(d), seqsize(d))
# which prints:
172 45 3128
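
Scaling the same seqsize() measurement up to the 100000-entry case the OP describes is easy enough. A sketch with made-up names (the exact figure depends on the real name lengths and the build):

fake_listing = ['file_%06d.dat' % i for i in range(100000)]
print(len(fake_listing), seqsize(fake_listing))
# on the order of 7 million bytes on a 64-bit 3.3 build, in line with
# the 10,000,000-byte back-of-the-envelope figure above.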

In the 45-entry measurement above, the size per entry is relatively small because the two-level directory prefix for each path is only about 15 bytes. By using 3.3 rather than 3.0-3.2, the all-ASCII unicode paths take only 1 byte per char rather than 2 or 4.
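
The PEP 393 effect is easy to check directly; a small sketch (the exact byte counts vary by build):

from sys import getsizeof
print(getsizeof('a' * 15))             # all-ASCII: fixed header + 1 byte/char
print(getsizeof('a' * 14 + '\u20ac'))  # one non-Latin-1 char: 2 bytes/char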

If, after reading the responses on the issue, you still disagree, post one yourself with real numbers.

--
Terry Jan Reedy

