> On a more general note, I have a directory on my linux server which has
> over 90,000 files. When I do a dir | wc -l I receive a number of >46,000
> (which I take to be >90,000 files, since dir gives me 2 columns of file
> names).
> However, I can't ls. I have waited for up to 15 minutes. Can ls not
> handle the vast quantity of files in the directory? Is there a way around
> this 'ceiling'?
From memory, the difference is that 'ls' sorts its output by default and 'dir' does not. 'ls' has a "-U" option that disables the sort and may be much faster. Also, if you use any of the options that request additional details on the files, the command and the kernel have to access the directory _many_ times.

In any case, I'd avoid putting more than about 1000 files in a single directory - irrespective of kernel efficiency - because it is so easy to make a stupid mistake in there. I normally put the real files in an adjacent multilevel hierarchy for future users and for applications where we can be bothered to migrate the code, then have a script that creates a large number of soft links to form a single-level, flattened directory for legacy read-only access to the files (rough sketch below).

My $0.02
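P.S. For what it's worth, a quick way to see the difference - this assumes GNU ls, and the timings are only illustrative:

    # count entries without sorting (often much faster on huge directories)
    ls -U | wc -l

    # compare sorted vs. unsorted listing times
    time ls > /dev/null
    time ls -U > /dev/null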
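And roughly the kind of flattening script I mean - a minimal sketch only, assuming the real files live in a multilevel tree under /data/real and the flat legacy view is /data/flat (both paths are made up for the example), and that file names are unique and contain no whitespace:

    #!/bin/sh
    # Rebuild a single-level directory of soft links pointing at the
    # real files, which live in a multilevel hierarchy.
    REAL=/data/real      # multilevel tree holding the real files
    FLAT=/data/flat      # flattened view for legacy read-only access

    mkdir -p "$FLAT"
    find "$REAL" -type f | while read f; do
        ln -sf "$f" "$FLAT/$(basename "$f")"
    done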