Quoth Steve Simon <st...@quintile.net>:
> was this hard to reproduce?

I've seen it sporadically over the last year, and yesterday, the first time
I deliberately went looking for it, I managed to trigger it with minimal
effort.

...of course, when I later went to reproduce it a third time, I couldn't
trigger it even on the fs running *without* the patch. So: roughly a 50%
reproduction rate so far when I'm actively trying to hit it.

I'm fairly sure the root cause is a race condition between some of the
periodic threads - it only triggers when we try to flush a clean block,
which isn't a common occurrence - but I wouldn't have put this much effort
into fixing it if it weren't something I ran into semi-regularly.

# touch the mode of every file under /sys - lots of small metadata writes
for(f in `{walk /sys})
        chmod +w $f

I think this, combined with the periodic flush routines and tight timing,
consistently reproduces it. There's probably a more general way to do so,
but without diving even deeper and seeing how we end up trying to flush
clean blocks, it's hard to say for sure.
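
To make the interleaving I suspect concrete, here's a toy model of that
shape of bug - plain libthread code with entirely made-up names, not
fossil's actual cache code. The flusher decides a block needs writing,
drops the lock, and only gets around to the write later; by then another
proc may have written the block back, so the flusher ends up operating on
a block that is already clean:

#include <u.h>
#include <libc.h>
#include <thread.h>

typedef struct Block Block;
struct Block {
	QLock	lk;
	int	dirty;
};

Block b;

/* periodic flush proc: buggy check-then-act */
void
flusher(void *arg)
{
	int d;

	USED(arg);
	for(;;){
		qlock(&b.lk);
		d = b.dirty;		/* observe the state... */
		qunlock(&b.lk);

		if(d){
			sleep(1);	/* ...window where another proc can run... */
			qlock(&b.lk);
			if(!b.dirty)
				print("flushing a clean block!\n");	/* the bad case */
			b.dirty = 0;	/* "write it back" */
			qunlock(&b.lk);
		}
		sleep(1);
	}
}

/* another proc that also writes blocks back; new writes then dirty them again */
void
writer(void *arg)
{
	USED(arg);
	for(;;){
		qlock(&b.lk);
		b.dirty = 0;		/* written back, clean now */
		qunlock(&b.lk);
		sleep(1);
		qlock(&b.lk);
		b.dirty = 1;		/* dirtied again */
		qunlock(&b.lk);
		sleep(1);
	}
}

void
threadmain(int argc, char *argv[])
{
	USED(argc);
	USED(argv);
	b.dirty = 1;
	proccreate(flusher, nil, 32*1024);
	proccreate(writer, nil, 32*1024);
	sleep(2000);
	threadexitsall(nil);
}

The usual fix for that shape of bug is either to re-check the block's state
after reacquiring the lock or to keep the decision and the write under the
same lock; whether that's exactly what's going on in fossil's cache is part
of what I'd need to dig into.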

fossil is *usually* pretty stable for me these days. My thinkpad often
has an uptime of weeks, and usually resets because I'm hacking on the
system and need to reboot to test it, not because of fossil.

I *have* seen bugs depressingly often, though. Once every month or so,
pretty consistently.

For example, building any version of Go newer than 1.7 on my thinkpad
crashes fossil roughly half the time; the rest of the time it alternates
between sporadic failures due to bugs in the Go compiler and actually
working.

Similarly, using kvik's clone tool to move large volumes of data has been a
100%-reliable way to crash fossil for me in the past: there was an
invocation that would kill the system literally every time, though I don't
remember exactly which dataset it was or what level of parallelism was
required.

