Indexing is the key; look at how Google accesses millions of files in
a split second. The same approach could be applied to a PC file system.
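As a quick illustration (my own sketch, nothing from this thread; the
names and counts are made up), here is the difference between a linear
scan and a hashed index over a million file names. Indexed directories
in filesystems get their speed from the same sub-linear-lookup idea,
typically via B-trees rather than hashes:

import time

NUM_NAMES = 1_000_000
names = [f"file{i:07d}.dat" for i in range(NUM_NAMES)]
target = names[-1]               # worst case for the linear scan

# Linear scan: O(n), what a naive directory implementation does.
start = time.perf_counter()
found = target in names          # walks the list entry by entry
linear = time.perf_counter() - start

# Hash index: O(1) on average, built once, probed per lookup.
index = set(names)               # one-time indexing cost
start = time.perf_counter()
found = target in index          # single hash probe
indexed = time.perf_counter() - start

print(f"linear scan: {linear:.6f}s, indexed lookup: {indexed:.6f}s")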
Paul
Joe Landman wrote:
Methinks I lost lots of folks with my points ...
The major thesis is that on well-designed hardware/software/filesystems,
50,000 files is not a problem for access (though from a management point
of view it is a nightmare). On poorly designed or poorly implemented
file systems, access is a nightmare as well.
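If anyone wants to test that claim on their own setup, here is a rough
sketch (mine, not Joe's; the file count and sample size are arbitrary)
that fills one directory with small files and times individual opens.
On a filesystem with indexed directories the per-file time should stay
roughly flat as FILE_COUNT grows:

import os
import random
import tempfile
import time

FILE_COUNT = 50_000  # matches the number under discussion

with tempfile.TemporaryDirectory() as d:
    # Populate a single directory with FILE_COUNT small files.
    for i in range(FILE_COUNT):
        with open(os.path.join(d, f"f{i:06d}"), "w") as fh:
            fh.write("x")

    # Time open+read on 100 randomly chosen files.
    samples = random.sample(range(FILE_COUNT), 100)
    start = time.perf_counter()
    for i in samples:
        with open(os.path.join(d, f"f{i:06d}")) as fh:
            fh.read()
    elapsed = time.perf_counter() - start
    print(f"avg open+read: {elapsed / 100 * 1e6:.1f} microseconds")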
Way back in the glory days of SGI, I seem to remember xfs being tested
with millions of files per directory (though don't hold me to that old
memory). Call it hearsay for the moment.
A well-designed and well-implemented file system shouldn't bog you down
as you scale up in size, even if scaling that way is unwise. It's sort
of like your car: if you go beyond 70 MPH somewhere in the US that
allows such speeds, your transmission shouldn't just drop out because
you hit 71 MPH.
Graceful degradation is a good thing.
Joe