Hi,

This isn't a real bug, but I thought I'd better mention it anyway, as
I've hit it a few times now. I guess it stems from the days when storage
was rare and memory rarer.

Basically, I've occasionally had problems with the argument-list limit
that gets hit when the shell expands the "*" operator over a large
directory. E.g.


[tonyl@saturn 2002-replaced]$ rm -f *
bash: /bin/rm: Argument list too long

[tonyl@saturn 2002-replaced]$ ls | wc -l
 231937 (yeah yeah I know!)

[tonyl@saturn 2002-replaced]$ ls * | wc -l
bash: /bin/ls: Argument list too long


In this case I can simply delete the dir and recreate it. The situation
has only come to light since moving a process off an NT box onto a
GNU/Linux/Samba combo.
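
A less drastic cleanup that seems to sidestep the limit entirely (just a
sketch, untested at this scale, and assuming GNU find and xargs for the
-print0/-0 options) is to let find enumerate the entries and have xargs
exec rm in batches small enough to fit in one argument list:

$ # enumerate the files and delete them in limit-sized batches;
$ # -print0/-0 keep filenames with spaces or newlines intact
$ find . -maxdepth 1 -type f -print0 | xargs -0 rm -f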

I find it interesting that `ls' works for an obscene number of
files, yet `ls *' doesn't. bash fails nicely as above, but /bin/sh
either returns "Segmentation fault" or kills my connection!
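
(My understanding, for what it's worth: plain `ls' gets no filename
arguments and reads the directory entries itself, whereas `ls *' makes
the shell build the whole 231937-name argument list and hand it to
exec(), which the kernel rejects with E2BIG once it exceeds ARG_MAX.
You can check the cap with getconf, assuming it's installed:)

$ getconf ARG_MAX   # exec()'s arg+env byte limit; 131072 on 2.4 kernels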

Any ideas (other than "don't get involved with legacy systems that store
lots of files in a single dir")?

Regards,
Tony

