On 02/04/2023 20:28, Achim Gratz via Cygwin-apps wrote:
> Jon Turney via Cygwin-apps writes:
>> I think there is already a perfectly good, filesystem-safe,
>> computationally cheap unique identifier for each filename, which is
>> its ordinal number in the list of filenames we are examining.
> I've implemented a counter now. However, I don't see the hashing of a
> filename as onerous when Git does that much more often and on much
> larger data.
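For what it's worth, the counter approach can be sketched in a few lines of bash (all names here are illustrative, not cygport's actual variables):

```shell
#!/bin/bash
# Illustrative sketch of the counter idea: assign each filename its
# ordinal number as a filesystem-safe unique id, instead of hashing it.
# 'file_id', 'n', and 'ID' are made-up names, not taken from cygport.
declare -A file_id
n=0

id_for() {
    local f=$1
    if [[ -z ${file_id[$f]} ]]; then
        n=$((n+1))
        file_id[$f]=$n
    fi
    # Return via a variable rather than echo, so callers in the same
    # shell keep the counter state (command substitution would fork a
    # subshell and lose the updates to file_id and n).
    ID=${file_id[$f]}
}
```

Repeated lookups of the same filename return the same id, so the mapping is stable across a single run.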
>> 'wait -f' seems to be new in bash 5.0. I assume this fails horribly
>> on earlier bash versions. I'm OK with requiring that, but maybe we
>> should check the bash version?
> It should indeed be possible to drop the -f as long as job control is
> not enabled, if I understand the manual correctly after re-reading it
> several times. I've done that and it looks like things still work.
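If we ever wanted to keep -f where it exists instead of dropping it everywhere, a guarded fallback would also work (a sketch, assuming bash; 'wait_for_job' is a made-up helper name):

```shell
#!/bin/bash
# Sketch: use 'wait -f' only on bash >= 5.0, where the option exists;
# with job control disabled, plain 'wait' already waits for the process
# to terminate. 'wait_for_job' is hypothetical, not part of cygport.
wait_for_job() {
    local pid=$1
    if (( BASH_VERSINFO[0] >= 5 )); then
        wait -f "$pid"
    else
        wait "$pid"
    fi
}
```

Either branch propagates the background job's exit status, so callers can check $? as usual.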
On the plus side, the testsuite passes! :)
I found an interesting wrinkle with this today (when adding some tests
that check that packages have the expected list of files).
If the executable is hardlinked under multiple names, then parallel
instances of __prestrip_one are going to extract debug info to multiple
.dbg files, then fight over which one gets to set the debuglink to point
at one of those .dbg files.
This occurs with the 'bvi' package, where bview and bvedit are
hardlinked to bvi.
Now, maybe this was already doing the wrong thing in the old,
non-parallelized code as well (or maybe it worked OK, because if a
debuglink was already present, we didn't try to strip the file again the
subsequent times we processed it?)

So, I'm not sure how this should be written correctly. Make sure we only
process each inode once?
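One way to do that, sketched here with made-up names (the real file-list handling in cygport differs), is to record each inode in an associative array and drop any later name that is a hardlink to an inode we already picked:

```shell
#!/bin/bash
# Hypothetical sketch: filter a file list so each inode appears once,
# so hardlinked names (e.g. bvi/bview/bvedit) are only handed to one
# worker. 'dedupe_by_inode' is not an actual cygport function, and
# 'stat -c %i' assumes GNU coreutils, as on Cygwin.
dedupe_by_inode() {
    local -A seen_inode
    local f ino
    for f in "$@"; do
        ino=$(stat -c '%i' "$f") || continue
        # Skip any later hardlink to an inode we already emitted.
        [[ -n ${seen_inode[$ino]} ]] && continue
        seen_inode[$ino]=$f
        printf '%s\n' "$f"
    done
}
```

Feeding the surviving names to the parallel workers would then mean only one instance extracts the .dbg file and sets the debuglink for each inode, instead of several fighting over it.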