"Tony A. Lambley" <[EMAIL PROTECTED]> writes: > I find it interesting that the `ls' works for an obscene number of > files, yet `ls *' doesn't. bash fails nicely as above, but /bin/sh > either returns "Segmentation fault" or kills my connection!
This is actually a bug in your system: it cannot handle argument lists
longer than ARG_MAX (or some such limit).  `ls *' is expanded by the
shell into an insanely long argument list (bigger than ARG_MAX), which
causes exec() to fail and in turn makes bash report an error.  Plain
`ls' works because the shell passes it no arguments at all, and ls
reads the directory by itself.  On GNU this limit doesn't exist, and
the argument length is only limited by available memory.  I have no
idea about your /bin/sh (is it a link to bash?  Is it the Bourne
shell?), so I can't comment on why it segfaults instead of failing
gracefully.

> Any ideas (other than "don't get involved with legacy systems that
> store lots of files in a single dir")?

There is no such "policy" for GNU projects.  See above.  To quote
(standards)Semantics:

    Avoid arbitrary limits on the length or number of _any_ data
    structure, including file names, lines, files, and symbols, by
    allocating all data structures dynamically.  In most Unix
    utilities, "long lines are silently truncated".  This is not
    acceptable in a GNU utility.

Cheers,

-- 
Alfred M. Szmidt
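P.S.  Until you can move to a system without the limit, a rough
work-around is to avoid handing the expanded `*' to exec() at all.
Only a sketch, untested on your system, and the -print0/-0 options
assume GNU findutils is installed:

    # Show the limit the kernel imposes on exec() argument lists.
    getconf ARG_MAX

    # List every entry in the current directory without building one
    # huge argument list: find streams the names, and xargs splits
    # them into exec() calls that each fit under ARG_MAX.
    find . ! -name . -prune -print0 | xargs -0 ls -ld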