On Fri, Mar 15, 2019 at 01:26:04PM +1100, Dave Chinner wrote:
> On Thu, Mar 14, 2019 at 12:34:51AM -0700, Dan Williams wrote:
> > On Mon, Mar 11, 2019 at 9:38 PM Dave Chinner <da...@fromorbit.com> wrote:
> > >
> > > On Mon, Mar 11, 2019 at 08:35:05PM -0700, Dan Williams wrote:
> > > > On Mon, Mar 11, 2019 at 8:10 AM Matthew Wilcox <wi...@infradead.org> wrote:
> > > > >
> > > > > On Thu, Mar 07, 2019 at 10:16:17PM -0800, Dan Williams wrote:
> > > > > > Hi Willy,
> > > > > >
> > > > > > We're seeing a case where RocksDB hangs and becomes defunct when
> > > > > > trying to kill the process. v4.19 succeeds and v4.20 fails. Robert was
> > > > > > able to bisect this to commit b15cd800682f "dax: Convert page fault
> > > > > > handlers to XArray".
> > > > > >
> > > > > > I see some direct usage of xa_index and wonder if there are some more
> > > > > > pmd fixups to do?
> > > > > >
> > > > > > Other thoughts?
> > > > >
> > > > > I don't see why killing a process would have much to do with PMD
> > > > > misalignment.  The symptoms (hanging on a signal) smell much more like
> > > > > leaving a locked entry in the tree.  Is this easy to reproduce?  Can you
> > > > > get /proc/$pid/stack for a hung task?
> > > >
> > > > It's fairly easy to reproduce, I'll see if I can package up all the
> > > > dependencies into something that fails in a VM.
> > > >
> > > > It's limited to xfs, no failure on ext4 to date.
> > > >
> > > > The hung process appears to be:
> > > >
> > > >      kworker/53:1-xfs-sync/pmem0
> > >
> > > That's completely internal to XFS. Every 30s the work is triggered
> > > and it either does a log flush (if the fs is active) or it syncs the
> > > superblock to clean the log and idle the filesystem. It has nothing
> > > to do with user processes, and I don't see why killing a process has
> > > any effect on what it does...
> > >
> > > > ...and then the rest of the database processes grind to a halt from there.
> > > >
> > > > Robert was kind enough to capture /proc/$pid/stack, but nothing interesting:
> > > >
> > > > [<0>] worker_thread+0xb2/0x380
> > > > [<0>] kthread+0x112/0x130
> > > > [<0>] ret_from_fork+0x1f/0x40
> > > > [<0>] 0xffffffffffffffff
> > >
> > > Much more useful would be:
> > >
> > > # echo w > /proc/sysrq-trigger
> > >
> > > And post the entire output of dmesg.
> > 
> > Here it is:
> > 
> > https://gist.github.com/djbw/ca7117023305f325aca6f8ef30e11556
> 
> Which tells us nothing. :(

Nothing from the filesystem side, perhaps, but I find it quite interesting.

We have a number of threads blocked in down_read() on mmap_sem.  That
means a task is either holding the mmap_sem for write or is blocked
trying to take it for write.  I think it's the latter: pid 4650 is
blocked in munmap().  Meanwhile pid 4673 is blocked in
get_unlocked_entry() and will be holding the mmap_sem for read while it
waits there.
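
To spell out the shape I think the dmesg shows (call sites paraphrased
from how I read the traces, not quoted from the kernel source):

	/* pid 4673: page fault path, mmap_sem held for read */
	down_read(&mm->mmap_sem);
		/* dax_iomap_{pte,pmd}_fault() -> get_unlocked_entry()
		 * sleeps waiting for a locked XArray entry
		 */

	/* pid 4650: munmap(), queued behind that reader */
	down_write(&mm->mmap_sem);

	/* everyone else: new down_read() callers queue behind the
	 * waiting writer, so the whole address space stalls
	 */
	down_read(&mm->mmap_sem);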

Since this is provoked by a fatal signal, it must have something to do
with a killable or interruptible sleep.  There's only one of those in the
DAX code: the fatal_signal_pending() check in dax_iomap_actor().  Does
RocksDB do I/O with write() or through a writable mmap()?  I'd like to
know before I chase too far down this branch of the fault tree.
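
For reference, that check sits at the top of the copy loop in
dax_iomap_actor(); roughly, paraphrasing fs/dax.c from memory rather
than quoting it:

	while (pos < end) {
		if (fatal_signal_pending(current)) {
			ret = -EINTR;	/* bail out when the task is being killed */
			break;
		}
		/* otherwise dax_direct_access() the range and copy
		 * to/from the user's iov_iter, then advance pos
		 */
	}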

My current suspicion is that we have a PMD fault that is not being woken
by a PTE modification.  The evidence seems to fit, but I don't quite see
it yet.

(I meant to ask Dan about this while we were in San Juan, but with all
the other excitement, it slipped my mind).