On Wed, Apr 11, 2018 at 10:03 AM, Steven D'Aprano <st...@pearwood.info> wrote:
> On Wed, Apr 11, 2018 at 03:38:08AM +1000, Chris Angelico wrote:
>> A deployed Python distribution generally has .pyc files for all of the
>> standard library. I don't think people want to lose the ability to
>> call help(), and unless I'm misunderstanding, that requires
>> docstrings. So this will mean twice as many files and twice as many
>> file-open calls to import from the standard library. What will be the
>> impact on startup time?
>
> I shouldn't think that the number of files on disk is very important,
> now that they're hidden away in the __pycache__ directory where they can
> be ignored by humans. Even venerable old FAT32 has a limit of 65,534
> files in a single folder, and 268,435,437 on the entire volume. So
> unless the std lib expands to 16000+ modules, the number of files in the
> __pycache__ directory ought to be well below that limit.
>
> I think even MicroPython ought to be okay with that. (But it would be
> nice to find out for sure: does it support file systems with *really*
> tiny limits?)
File system limits aren't usually an issue; as you say, even FAT32 can
store a metric ton of files in a single directory. I'm more interested
in how long it takes to open a file, and whether doubling that time
will have a measurable impact on Python startup time. Part of that cost
can be reduced by using openat(), on platforms that support it (a rough
sketch below), but even with a directory handle, there's still a
definite non-zero cost to opening and reading an additional file.
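For the curious, here's a minimal sketch of what that looks like from
Python, assuming a POSIX platform where os.open() accepts dir_fd (which
is implemented with openat() under the covers); the directory path and
the .pyc file name are just placeholders:

import os

# Open the directory once; subsequent openat()-style calls resolve
# names relative to this handle instead of re-walking the whole path.
dir_fd = os.open("/usr/lib/python3.6/__pycache__", os.O_RDONLY)
try:
    # Equivalent to openat(dir_fd, "foo.cpython-36.pyc", O_RDONLY):
    # only the final component is looked up, so the per-file cost is
    # a single directory lookup rather than a full path traversal.
    fd = os.open("foo.cpython-36.pyc", os.O_RDONLY, dir_fd=dir_fd)
    try:
        data = os.read(fd, 1 << 20)  # still a real open + read, though
    finally:
        os.close(fd)
finally:
    os.close(dir_fd)

That saves the path-traversal part of the lookup, but the open, read,
and close syscalls for the second file don't go away.

ChrisA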