Jeremy Chadwick wrote:

I don't want to change the topic of discussion, but I *highly* recommend
you ***stop*** whatever it is you're doing that is creating such a
directory structure.  Software which has to iterate through that
directory using opendir() and readdir() will get slower and slower as
time goes on.

With the implementation of UFS_DIRHASH the practical limit on the
size of directories is now a great deal larger.  In particular, the
slowdown caused by linear searches through directory contents has
been eliminated.  See ffs(7).  10,000 files or sub-directories,
whilst not a particularly elegant setup, is actually not unworkable
nowadays.
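
Purely as an illustration, here is a minimal C sketch of the sort of
linear opendir()/readdir() scan mentioned above; the directory name
"bigdir" is just a placeholder.  UFS_DIRHASH keeps an in-memory hash
of the directory contents inside the kernel, so name lookups no
longer pay this linear cost on every access.

#include <dirent.h>
#include <stdio.h>

/*
 * Count the entries in a directory by walking it linearly with
 * opendir()/readdir().  This is the O(n) traversal that makes huge
 * directories painful for userland tools; UFS_DIRHASH removes the
 * analogous cost for name lookups inside the kernel.
 */
int
main(void)
{
    DIR *dirp = opendir("bigdir");  /* placeholder directory name */
    struct dirent *dp;
    long n = 0;

    if (dirp == NULL) {
        perror("opendir");
        return (1);
    }
    while ((dp = readdir(dirp)) != NULL)
        n++;
    closedir(dirp);
    printf("%ld entries\n", n);
    return (0);
}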

As for the maximum number of subdirectories it is possible to create
on UFS2 -- it is limited by the link count field in the inode, which
is a signed 16-bit quantity.
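
To see the bookkeeping behind that limit, a quick stat() of a
directory shows its link count: 2 for its own '.' entry and its name
in the parent, plus one for each subdirectory's '..'.  A rough sketch
that takes the directory path as its only argument:

#include <sys/stat.h>
#include <stdint.h>
#include <stdio.h>

/*
 * Print the link count of a directory.  On UFS2 this is
 * 2 + (number of subdirectories) and cannot exceed 32767, because
 * the on-disk field is a signed 16-bit integer.
 */
int
main(int argc, char *argv[])
{
    struct stat sb;

    if (argc != 2) {
        fprintf(stderr, "usage: nlink directory\n");
        return (1);
    }
    if (stat(argv[1], &sb) == -1) {
        perror("stat");
        return (1);
    }
    printf("%s: %ju links\n", argv[1], (uintmax_t)sb.st_nlink);
    return (0);
}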

% jot 100000 1 | xargs mkdir -v
[...]
32725
32726
32727
32728
32729
32730
32731
[...]
mkdir: 32766: Too many links
mkdir: 32767: Too many links
mkdir: 32768: Too many links
mkdir: 32769: Too many links
mkdir: 32770: Too many links
mkdir: 32771: Too many links
[...]

Which is 32767 (the most a signed 16-bit link count can hold) minus 2
for the directory's own '.' link and its entry in its parent: 32765
subdirectories in all.  Trying to create too many subdirectories just
results in mkdir failing: the filesystem itself is not damaged.
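
The same limit can be reproduced from C: once the parent directory's
link count is at its maximum, mkdir(2) fails with EMLINK, which is
exactly the "Too many links" message above.  A minimal sketch, run in
an empty scratch directory (the numeric names are arbitrary):

#include <sys/stat.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>

/*
 * Create numbered subdirectories in the current directory until
 * mkdir(2) fails.  On UFS2 the failure is EMLINK ("Too many links")
 * once the parent's link count reaches 32767, i.e. after 32765
 * subdirectories have been created.
 */
int
main(void)
{
    char name[32];
    long i;

    for (i = 1; ; i++) {
        snprintf(name, sizeof(name), "%ld", i);
        if (mkdir(name, 0755) == -1) {
            printf("stopped at %ld: %s\n", i, strerror(errno));
            break;
        }
    }
    return (0);
}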

        Cheers,

        Matthew

--
Dr Matthew J Seaman MA, D.Phil.                    7 Priory Courtyard
                                                   Flat 3
PGP: http://www.infracaninophile.co.uk/pgpkey      Ramsgate
                                                   Kent, CT11 9PW
