On Mon, 2 Nov 2015 09:48:41 -1000
Chris Worley <chris.wor...@primarydata.com> wrote:
> On Sat, Oct 31, 2015 at 11:54 AM, Jeff Layton <jlay...@poochiereds.net> wrote:
> > On Sat, 31 Oct 2015 17:31:07 -0400
> > Tejun Heo <t...@kernel.org> wrote:
> ...
> >>
> >> > I have asked Chris and Michael to see if they can bisect it down, but
> >> > it may be a bit before they can get that done. Any insight you might
> >> > have in the meantime would be helpful.
> >>
> >> Yeah, I'd love to find out how reproducible the issue is. If the
> >> problem is rarely reproducible, it might make sense to try
> >> instrumentation before trying bisection as it *could* be a latent bug
> >> which has been there all along and bisecting to the commit introducing
> >> the code wouldn't help us too much.
> >>
> >
> > It seems fairly reproducible, at least on v4.3-rc7 kernels:
> >
> > This came about when I asked them to perf test some nfsd patches that I
> > have queued up. I patched a Fedora 4.3-rc7 kernel and wanted to see
> > what the perf delta was (with NFSv3, fwiw):
> >
> > Patched kernels here:
> > http://koji.fedoraproject.org/koji/taskinfo?taskID=11598089
> >
> > Unpatched kernels here:
> > http://koji.fedoraproject.org/koji/buildinfo?buildID=694377
> >
> > Michael was using the SPEC SFS VDI workload to test, and was able to
> > get the same panic on both kernels. So it does seem to be reproducible.
> > It might even be possible to tune the VM to make the shrinker fire more
> > often, which may help tickle this more.
> >
> > In any case, I've asked them to try something v4.2-ish and see if it's
> > reproducible there, and then try v4.1 if it is. I figure anything
> > earlier is probably not worth testing if it still fails on v4.1. If it
> > turns out not to be reproducible on those earlier kernels then we can
> > bisect from there to track it down.
>
> The trick seems to be the NFS thread count: I initially thought this
> was SFS/VDI specific, but when I ratcheted up the thread count to what
> Michael was using, 256 threads, it oopses on fio (throughput) benchmarks
> too.
>
> In bisecting kernels, it appeared between 4.2.3-200 and 4.2.5-200 (all
> the 4.2.4 kernels were bad).
>
> Jeff has a lead on this...
>
> Chris

Thanks Chris. This commit went in between those kernels:

commit a7c571f2e3ff9243ba41c242486f53dbca37d864
Author: Shaohua Li <s...@fb.com>
Date:   Wed Sep 30 09:05:30 2015 -0700

    workqueue: make sure delayed work run in local cpu

    commit 874bbfe600a660cba9c776b3957b1ce393151b76 upstream.

...and I suspect that it's the culprit. That commit causes this code to
always use add_timer_on, which seems to have different semantics from
add_timer (rough sketch of the change below).

I'm going to build a v4.2.5 kernel with that patch reverted to confirm
it, but it seems likely...

-- 
Jeff Layton <jlay...@poochiereds.net>
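
P.S. For reference, here is roughly what that commit does to
__queue_delayed_work() in kernel/workqueue.c (paraphrased from memory,
so treat it as a sketch and check the actual commit for the exact diff).
Before the patch:

        /* WORK_CPU_UNBOUND: fall back to add_timer(), which lets the
         * timer code decide where the timer (and hence the delayed
         * work) ends up running. */
        if (unlikely(cpu != WORK_CPU_UNBOUND))
                add_timer_on(timer, cpu);
        else
                add_timer(timer);

After the patch:

        /* Resolve WORK_CPU_UNBOUND to the local CPU and always use
         * add_timer_on(), so the timer is pinned to the CPU that
         * queued the delayed work. */
        if (unlikely(cpu == WORK_CPU_UNBOUND))
                cpu = raw_smp_processor_id();
        add_timer_on(timer, cpu);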