On Sun, 28 Feb 1999, Kaz Kylheku <[EMAIL PROTECTED]> wrote:
> > I know that the "ls" command would take lots of time to show the results.
> > But besides that, is there any reason to avoid doing this?
>
> I believe that ext2 is a decent filesystem; however, it handles directory
> searching using a linear algorithm. Each search for a name through that
> 15000 element directory will be executed as a naive linear search.
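
(for illustration only: the linear scan described above is conceptually
what the following userspace sketch does with readdir(3); the in-kernel
code is different, this just shows the O(n)-per-lookup cost. "bigdir"
and the file name are made-up examples:)

/* linear_lookup.c -- naive O(n) name lookup, analogous in spirit to
 * ext2's per-directory linear scan. */
#include <dirent.h>
#include <stdio.h>
#include <string.h>

/* Returns 1 if 'name' exists in 'dirpath', 0 otherwise. Every probe
 * walks the entries from the start, so k lookups cost O(k * n). */
static int linear_lookup(const char *dirpath, const char *name)
{
    DIR *d = opendir(dirpath);
    struct dirent *ent;
    int found = 0;

    if (!d)
        return 0;
    while ((ent = readdir(d)) != NULL) {
        if (strcmp(ent->d_name, name) == 0) {
            found = 1;
            break;
        }
    }
    closedir(d);
    return found;
}

int main(void)
{
    printf("%d\n", linear_lookup("bigdir", "file-12345"));
    return 0;
}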
fortunately, the way ext2fs searches directories on disk is a secondary
issue: Linux 2.2 uses a dynamic name cache (the dcache) to look up
inodes. Thus if your system has enough memory to cache 15000 dentries,
it will perform very fast.
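
(the dcache idea, very roughly: hash the name, index a bucket array,
walk a short chain -- so a cached lookup is O(1) no matter how big the
directory is. this is a toy sketch, not the 2.2 kernel's actual code;
all names and sizes here are simplified stand-ins:)

/* toy_dcache.c -- simplified illustration of a hash-based name cache. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define HASH_SIZE 1024

struct dentry {
    char name[256];
    unsigned long inode;     /* cached inode number */
    struct dentry *next;     /* hash chain */
};

static struct dentry *hash_table[HASH_SIZE];

static unsigned int hash_name(const char *name)
{
    unsigned int h = 0;
    while (*name)
        h = h * 31 + (unsigned char)*name++;
    return h % HASH_SIZE;
}

static void dcache_add(const char *name, unsigned long inode)
{
    struct dentry *d = malloc(sizeof(*d));
    unsigned int h = hash_name(name);

    if (!d)
        return;
    strncpy(d->name, name, sizeof(d->name) - 1);
    d->name[sizeof(d->name) - 1] = '\0';
    d->inode = inode;
    d->next = hash_table[h];
    hash_table[h] = d;
}

/* expected O(1), independent of how many files the directory holds;
 * on a miss the kernel falls back to the on-disk (linear) lookup. */
static struct dentry *dcache_lookup(const char *name)
{
    struct dentry *d;

    for (d = hash_table[hash_name(name)]; d; d = d->next)
        if (strcmp(d->name, name) == 0)
            return d;
    return NULL;
}

int main(void)
{
    dcache_add("file-12345", 42);
    printf("%s\n", dcache_lookup("file-12345") ? "hit" : "miss");
    return 0;
}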
i've created a test directory with 28591 files:
[mingo@moon mingo]$ time ls bigdir | wc -l
0.33 user 0.10 system 0:00.42 elapsed 100%CPU
28591
[mingo@moon mingo]$
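
(for reproducing the test, a minimal sketch that populates such a
directory -- the directory name matches the transcript above, the
file-name pattern is arbitrary:)

/* mkbig.c -- create a directory with ~28000 small files. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    char path[64];
    int i, fd;

    mkdir("bigdir", 0755);
    for (i = 0; i < 28591; i++) {
        snprintf(path, sizeof(path), "bigdir/file-%05d", i);
        fd = open(path, O_CREAT | O_WRONLY, 0644);
        if (fd >= 0)
            close(fd);
    }
    return 0;
}

running it under time also gives a feel for the creation-latency side
mentioned below.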
so it takes some 0.1 seconds of system time to ls 28000 files. I wouldn't
be worried. OTOH, if the application relies heavily on things like file
creation/deletion latencies, those will not be as good, since they still
have to modify the on-disk directory and thus see ext2's linear behaviour.
But if it's mainly lookups, with light creation/deletion activity, then i
wouldn't worry much ...
-- mingo