I was going to suggest this analysis, but George has gotten ahead of me, so let me collate the two.

My understanding is that how different programs read directories differs, how different operating systems implement those operations differs, and how different file systems under an operating system implement them differs. But overall, reading large directories can take a long time, and often the operations involved are not interruptible. (At root, that is because this is an uncommon scenario in Unix, and people don't work to optimize it.) So I was going to recommend this calibration: go to the directory in question and execute "time ls -1 >/dev/null". That gives you the maximum amount of time it takes to read that directory, and that should be an upper bound on how long it will take bash to finish whatever completion operation it is doing.

George R Goffe <grgo...@yahoo.com> writes:

> I found out how to make konsole do a visual alarm and then tried my
> failure scenario.
>
> I cd'd to a "big" directory and then entered "ls -al abc<tab key>",
> waited a few seconds, then did a ctrl-c. As usual, a freeze happened. I
> waited a while and then saw the visual alarm followed by the ctrl-c.

OK, so it looks like there is some long, uninterruptible operation happening as part of filename completion. How does the length of that freeze compare to that of "ls -1", as described above?

> Somewhere between the request for filename completion and the
> recognition of ctrl-c appears to be where the "bug" is located.
>
> I could try running strace during all of this. Would it help?

My guess is that it could reveal some particularly slow operating system call that is involved in implementing filename completion, but it would not indicate a way to improve it.

Dale
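The calibration above can be sketched as a short shell session. This is only an illustration: /usr/bin is a stand-in for the "big" directory, since the actual path was not given in the thread.

```shell
# Calibration sketch. BIGDIR is a placeholder; substitute the directory
# that freezes completion. /usr/bin is used here as a stand-in.
BIGDIR=/usr/bin
cd "$BIGDIR"

# Time a full read of the directory. The "real" figure is roughly the
# upper bound on how long bash's filename completion should take here,
# since completion cannot finish before the directory has been read.
time ls -1 >/dev/null

# Counting the entries gives a sense of how big "big" actually is:
ls -1 | wc -l
```

Comparing that "real" time against the observed completion freeze is the point of the exercise: if they are similar, the freeze is dominated by the raw directory read rather than anything bash adds on top.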
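On the strace question: one low-risk way to see where the time goes, without attaching to the running shell, is to trace a plain directory read directly. This is a sketch under assumptions: strace may not be installed (or may be blocked by ptrace restrictions), and /usr/bin again stands in for the large directory. The `-c` flag summarizes time spent per system call; on very large directories, the directory-reading calls (getdents64 on Linux) typically dominate.

```shell
# Sketch: summarize the system calls made while reading a directory.
# Guarded so it degrades gracefully where strace is unavailable.
if command -v strace >/dev/null 2>&1; then
    # -c prints a per-syscall time/count summary (to stderr) on exit.
    strace -c ls -1 /usr/bin >/dev/null ||
        echo "strace could not run in this environment"
else
    echo "strace not installed; skipping trace"
fi
```

Attaching to the interactive shell itself (strace -p with the shell's PID, from a second terminal) would show the completion operation directly, but as noted above, either way this identifies the slow call rather than suggesting a fix.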
My understanding is that the details of how different programs read directories differs, and how different operating systems implement those operations differs, and how different file systems under an operating system implement them differs. But overall, reading large directories can take a long time, and often the operations involved are not interruptable. (At root, that is because this is an uncommon scenario in Unix and people don't work to optimize it.) So I was going to recommend this calibration: Go to the directory in question and execute "time ls -1 >/dev/null". That gives you the maximum amount of time it will take to read that directory, and that should be an upper bound on how long it will take bash to finish whatever completion operation it is doing. George R Goffe <grgo...@yahoo.com> writes: > I found how how to make konsole do a visual alarm and then tried my > failure scenario. > > I cd'd to a "big" directory and then entered "ls -al abc<tab key>", > waited a few seconds then did a ctrl-c. As usual, a freeze happened. I > waited a while and then saw the visual alarm followed by the ctr-c. OK, so it looks like there is some long, uniterruptable operation happening as part of filename completion. How does the length of that freeze compare to "ls -1", as described above? > Somewhere between the request for filename completion and the > recognition of ctrl-c appears to be where the "bug" is located. > > I could try running strace during all of this. Would it help? My guess is that could reveal some particularly slow operating system call that is involved in implementing filename completion, but it would not indicate a way to improve it. Dale