> On Jun 14, 2019, at 4:34 PM, Matthew Flatt <mfl...@cs.utah.edu> wrote:
>
> Sometimes, staying out of the error-triggering space is unavoidable,
> and the possible errors are well enough defined by all filesystems, so
> retrying is a workable solution. If you really have to be in that
> space, then retry an arbitrary number of times, perhaps with an
> exponential backoff (i.e., double the delay between retries).
OK, thanks for the encouragement. After more experimentation I got the lock-server idea to work (meaning, my parallel renders now complete without errors, which was not true before). For the benefit of future generations, here are the two complications that tripped me up (the first I understand; the second I don't, but maybe someday):

1) At first I had each rendering place issue its lock requests file by file. This didn't work because each place needed to lock a *set* of files, and the individual lock / unlock requests could end up interleaved at the lock server, with strange results (some files locked, some unlocked). So I bundled each set of files into a "lock transaction" that I passed in a single place message; then the whole set would be locked or unlocked as a group.

2) I set up each rendering place to repeatedly ask for a lock until the request was approved. But even after approving a lock request from a certain rendering place, the lock server would keep receiving repeat lock requests from that place. To fix this, I needed a third response from the lock server, which was "dude, you already have a lock."
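Since the original Racket place code isn't shown here, a rough Python analogue (threads and queues standing in for places and place channels; all names are made up for illustration) may help show both fixes: lock requests arrive as all-or-nothing transactions over a set of files, and the server gives a third "already-held" answer so a place that re-asks after being granted doesn't loop forever:

```python
import queue
import threading
import time


class LockServer(threading.Thread):
    """Toy lock server: one thread owns the lock table, clients talk
    to it only through a request queue (like a place channel)."""

    def __init__(self):
        super().__init__(daemon=True)
        self.requests = queue.Queue()
        self.held = {}  # file path -> client id

    def run(self):
        while True:
            op, client, files, reply = self.requests.get()
            if op == "lock":
                # Fix 1: the whole set of files is one transaction --
                # either every file gets locked for this client, or none.
                if files and all(self.held.get(f) == client for f in files):
                    # Fix 2: this client already holds these locks; without
                    # this third response it would retry indefinitely.
                    reply.put("already-held")
                elif any(f in self.held for f in files):
                    reply.put("retry")  # some file is held by another client
                else:
                    for f in files:
                        self.held[f] = client
                    reply.put("granted")
            elif op == "unlock":
                for f in files:
                    if self.held.get(f) == client:
                        del self.held[f]
                reply.put("unlocked")


def with_locks(server, client, files, thunk, delay=0.01):
    """Client side: keep asking until the lock set is ours, with the
    exponential backoff suggested upthread, then run thunk and unlock."""
    reply = queue.Queue()
    while True:
        server.requests.put(("lock", client, files, reply))
        if reply.get() in ("granted", "already-held"):
            break
        time.sleep(delay)
        delay *= 2  # double the delay between retries
    try:
        return thunk()
    finally:
        server.requests.put(("unlock", client, files, reply))
        reply.get()
```

This is only a sketch of the protocol, not the actual implementation: for instance, it doesn't handle a client that holds part of a set another client wants, which transactional locking is meant to make impossible in the first place.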