Re: 9pfs hangs since 4.7

2017-01-04 Thread Al Viro
> Here's logs that should be complete this time:
> 
> https://gist.githubusercontent.com/dezgeg/08629d4c8ca79da794bc087e5951e518/raw/a1a82b9bc24e5282c82beb43a9dc91974ffcf75a/9p.qemu.log
> https://gist.githubusercontent.com/dezgeg/1d5f1cc0647e336c59989fc62780eb2e/raw/d7623755a895b0441c608ddba366d20bbf47ab0b/9p.dmesg.log

Fun.  All requests prior to
[  360.110282]   dd-18991 18497262us : 9p_client_req: client 18446612134390128640 request P9_TWALK tag  25
line in dmesg had been seen by the servers; all requests starting with that
one had not.  Replies to earlier requests kept arriving just fine.

From the server side, everything looks nice and sane - it has processed
all requests it had seen, and aside from a slight difference in the arrival
order the server-side and client-side logs match, except for the last 26
requests the client claims to have sent and the server has never seen.

All traffic for another client (there had been less of it) has ceased long
before that point, so we can't really tell if it's just this client that
got buggered.  Interesting...

Looking at the tracepoints, those requests got through p9_client_prepare_req();
we have no idea whether they got through p9_virtio_request().  OTOH, AFAICS
nothing had been sleeping in there...

FWIW, it might be interesting to try
WARN_ON(!virtqueue_kick(chan->vq));
in p9_virtio_request() (instead of a blind virtqueue_kick(chan->vq)) and see
if it triggers.  Incidentally, it looks like p9_virtio_request() ought to
return an error if that happens...
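
A minimal sketch of that change, assuming the tail of p9_virtio_request() in
net/9p/trans_virtio.c still does the kick under chan->lock (the exact
surrounding code may differ):

	/* ... request sgs already queued on chan->vq ... */
	if (WARN_ON(!virtqueue_kick(chan->vq))) {
		/*
		 * The host was never notified, so no reply will come;
		 * returning an error here beats leaving the caller
		 * asleep in p9_client_rpc() forever.
		 */
		spin_unlock_irqrestore(&chan->lock, flags);
		return -EIO;
	}
	spin_unlock_irqrestore(&chan->lock, flags);
	return 0;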


Re: 9pfs hangs since 4.7

2017-01-04 Thread Tuomas Tynkkynen
On Wed, 4 Jan 2017 01:47:53 +
Al Viro  wrote:

> On Wed, Jan 04, 2017 at 01:34:50AM +0200, Tuomas Tynkkynen wrote:
> 
> > I got these logs from the server & client with 9p tracepoints enabled:
> > 
> > https://gist.githubusercontent.com/dezgeg/02447100b3182167403099fe7de4d941/raw/3772e408ddf586fb662ac9148fc10943529a6b99/dmesg%2520with%25209p%2520trace
> > https://gist.githubusercontent.com/dezgeg/e1e0c7f354042e1d9bdf7e9135934a65/raw/3a0e3b4f7a5229fd0be032c6839b578d47a21ce4/qemu.log
> >   
> 
> Lovely...
> 
> 27349:out 8 0001        TSTATFS, tag 1
> 27350:out 12 0001       TLOPEN, tag 1
> 27351:complete 9 0001   RSTATFS, tag 1
> 27352:complete 13 0001  RLOPEN, tag 1
> 
> 27677:out 8 0001        TSTATFS, tag 1
> 27678:out 12 0001       TLOPEN, tag 1
> 27679:complete 9 0001   RSTATFS, tag 1
> 27680:complete 13 0001  RLOPEN, tag 1
> 
> 29667:out 8 0001        TSTATFS, tag 1
> 29668:out 110 0001      TWALK, tag 1
> 29669:complete 9 0001   RSTATFS, tag 1
> 29670:complete 7 0001   RLERROR, tag 1
> 
> 42317:out 110 0001      TWALK, tag 1
> 42318:out 8 0001        TSTATFS, tag 1
> 42319:complete 9 0001   RSTATFS, tag 1
> 42320:complete 7 0001   RERROR, tag 1
> 
> Those are outright protocol violations: tag can be reused only after either
> a reply bearing the same tag has arrived *or* TFLUSH for that tag had been
> issued and a successful (type == RFLUSH) reply bearing the tag of that TFLUSH
> has arrived.  Your log doesn't contain any TFLUSH (108) at all, so it should've
> been plain and simple "no reuse until server sends a reply with matching tag".
> 

Argh, I had completely forgotten there is another 9p mount involved, so the log
is mixed between the two mounts. Apologies, there is now a new trace attached
with the V9fsState pointer dumped in the mix as well.

> Otherwise the dump looks sane.  In the end all issued requests had been
> served, so it's not as if the client had been waiting for a free tag or for
> a response to be produced by the server.
> 
> Interesting...  dmesg doesn't seem to contain the beginning of the 9P trace -
> only 89858 out of 108818 in the dump, even though it claims to have lost
> only 4445 events.  [...]

Here's logs that should be complete this time:

https://gist.githubusercontent.com/dezgeg/08629d4c8ca79da794bc087e5951e518/raw/a1a82b9bc24e5282c82beb43a9dc91974ffcf75a/9p.qemu.log
https://gist.githubusercontent.com/dezgeg/1d5f1cc0647e336c59989fc62780eb2e/raw/d7623755a895b0441c608ddba366d20bbf47ab0b/9p.dmesg.log

> 
> Interesting...  Which version of qemu/what arguments are you using there?

This is QEMU 2.7.0, with these on the server side:
-virtfs local,path=/nix/store,security_model=none,mount_tag=store
-virtfs local,path=$TMPDIR/xchg,security_model=none,mount_tag=xchg

and on the client side:

store on /nix/store type 9p (rw,relatime,dirsync,trans=virtio,version=9p2000.L,cache=loose)
xchg on /tmp/xchg type 9p (rw,relatime,dirsync,trans=virtio,version=9p2000.L,cache=loose)

'store' is the one receiving the hammering, 'xchg' I think is mostly sitting 
idle.


Re: 9pfs hangs since 4.7

2017-01-03 Thread Al Viro
On Wed, Jan 04, 2017 at 01:34:50AM +0200, Tuomas Tynkkynen wrote:

> I got these logs from the server & client with 9p tracepoints enabled:
> 
> https://gist.githubusercontent.com/dezgeg/02447100b3182167403099fe7de4d941/raw/3772e408ddf586fb662ac9148fc10943529a6b99/dmesg%2520with%25209p%2520trace
> https://gist.githubusercontent.com/dezgeg/e1e0c7f354042e1d9bdf7e9135934a65/raw/3a0e3b4f7a5229fd0be032c6839b578d47a21ce4/qemu.log
> 

Lovely...

27349:out 8 0001        TSTATFS, tag 1
27350:out 12 0001       TLOPEN, tag 1
27351:complete 9 0001   RSTATFS, tag 1
27352:complete 13 0001  RLOPEN, tag 1

27677:out 8 0001        TSTATFS, tag 1
27678:out 12 0001       TLOPEN, tag 1
27679:complete 9 0001   RSTATFS, tag 1
27680:complete 13 0001  RLOPEN, tag 1

29667:out 8 0001        TSTATFS, tag 1
29668:out 110 0001      TWALK, tag 1
29669:complete 9 0001   RSTATFS, tag 1
29670:complete 7 0001   RLERROR, tag 1

42317:out 110 0001      TWALK, tag 1
42318:out 8 0001        TSTATFS, tag 1
42319:complete 9 0001   RSTATFS, tag 1
42320:complete 7 0001   RERROR, tag 1

Those are outright protocol violations: tag can be reused only after either
a reply bearing the same tag has arrived *or* TFLUSH for that tag had been
issued and a successful (type == RFLUSH) reply bearing the tag of that TFLUSH
has arrived.  Your log doesn't contain any TFLUSH (108) at all, so it should've
been plain and simple "no reuse until server sends a reply with matching tag".
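
Incidentally, the rule is mechanical enough to check with a throwaway
program.  A hypothetical sketch (not from the kernel or QEMU) that reads the
qemu-side dump produced by the fprintf() patch further down the thread and
flags reuse of a still-outstanding tag:

/* tagcheck.c - hypothetical helper, not part of the kernel or QEMU.
 * Reads "out <type> <tag>" / "complete <type> <tag>" lines (tags in hex)
 * and reports any tag that is reused before a reply has gone out for it.
 */
#include <stdio.h>

int main(void)
{
	static char busy[0x10000];	/* one flag per possible 16-bit tag */
	char buf[256];
	unsigned type, tag;
	long line = 0;

	while (fgets(buf, sizeof(buf), stdin)) {
		line++;
		if (sscanf(buf, "out %u %x", &type, &tag) == 2) {
			tag &= 0xffff;
			if (busy[tag])
				printf("line %ld: tag %04x reused with no reply in between\n",
				       line, tag);
			busy[tag] = 1;
		} else if (sscanf(buf, "complete %u %x", &type, &tag) == 2) {
			busy[tag & 0xffff] = 0;
		}
		(void)type;	/* type is parsed but not needed for the check */
		/* TFLUSH handling is omitted; the logs contain none */
	}
	return 0;
}

Feeding the per-mount dump through something like that should print nothing
for a well-behaved client.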

Otherwise the dump looks sane.  In the end all issued requests had been
served, so it's not as if the client had been waiting for a free tag or for
a response to be produced by the server.

Interesting...  dmesg doesn't seem to contain the beginning of the 9P trace -
only 89858 out of 108818 in the dump, even though it claims to have lost
only 4445 events.  The last exchanges in the trace are
> P9_TLOPEN tag  90 err 0
> P9_TLOPEN tag  25 err 0
> P9_TLOPEN tag  15 err 0
> P9_TLOPEN tag  104 err 0
> P9_TLOPEN tag  102 err 0
> P9_TLOPEN tag  91 err 0
> P9_TLOPEN tag  12 err 0
< P9_TREADLINK tag  12

which should've been
complete 13 005a
complete 13 0019
complete 13 000f
complete 13 0068
complete 13 0066
complete 13 005b
out 22 000c

and the only plausible match for that is

108784:complete 13 005a
108798:complete 13 000f
108799:complete 13 0019
108800:complete 13 0068
108801:complete 13 0066
108802:complete 13 005b

but there's a bunch of replies in between that hadn't been logged *and* such
a TREADLINK request isn't seen after
108341:out 22 000c

... and sure enough, there's a whole lot of processes stuck in readlink.
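
For reference when cross-reading the two logs: the kernel tracepoints print
message names and decimal tags, while the qemu-side dump prints numeric types
and hex tags.  The 9P2000.L type numbers involved (from the protocol
definitions, not from either log) are:

/* names abbreviated; the kernel uses P9_TSTATFS etc. in include/net/9p/9p.h */
enum {
	RLERROR   = 7,
	TSTATFS   = 8,   RSTATFS   = 9,
	TLOPEN    = 12,  RLOPEN    = 13,
	TREADLINK = 22,  RREADLINK = 23,
	TFLUSH    = 108, RFLUSH    = 109,
	TWALK     = 110, RWALK     = 111,
};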

Interesting...  Which version of qemu/what arguments are you using there?


Re: 9pfs hangs since 4.7

2017-01-03 Thread Tuomas Tynkkynen
On Mon, 2 Jan 2017 16:23:09 +
Al Viro  wrote:


> 
> What I'd like to see is a log of 9p traffic in those; to hell with the
> payloads, just the type and tag from each message [...]

Thanks for the suggestions. With the following patch to QEMU:

diff --git a/hw/9pfs/9p.c b/hw/9pfs/9p.c
index aea7e9d..8a6b426 100644
--- a/hw/9pfs/9p.c
+++ b/hw/9pfs/9p.c
@@ -662,6 +662,7 @@ static void coroutine_fn pdu_complete(V9fsPDU *pdu, ssize_t len)
 
 /* fill out the header */
 pdu_marshal(pdu, 0, "dbw", (int32_t)len, id, pdu->tag);
+fprintf(stderr, "complete %d %04x\n", id, pdu->tag);
 
 /* keep these in sync */
 pdu->size = len;
@@ -2347,6 +2348,7 @@ static void v9fs_flush(void *opaque)
 return;
 }
 trace_v9fs_flush(pdu->tag, pdu->id, tag);
+fprintf(stderr, "flush %04x %04x\n", tag, pdu->tag);
 
 QLIST_FOREACH(cancel_pdu, &s->active_list, next) {
 if (cancel_pdu->tag == tag) {
diff --git a/hw/9pfs/virtio-9p-device.c b/hw/9pfs/virtio-9p-device.c
index 1782e4a..6a5ac04 100644
--- a/hw/9pfs/virtio-9p-device.c
+++ b/hw/9pfs/virtio-9p-device.c
@@ -76,6 +76,7 @@ static void handle_9p_output(VirtIODevice *vdev, VirtQueue *vq)
 
 pdu->id = out.id;
 pdu->tag = le16_to_cpu(out.tag_le);
+fprintf(stderr, "out %d %04x\n", pdu->id, pdu->tag);
 
 qemu_co_queue_init(&pdu->complete);
 pdu_submit(pdu);

I got these logs from the server & client with 9p tracepoints enabled:

https://gist.githubusercontent.com/dezgeg/02447100b3182167403099fe7de4d941/raw/3772e408ddf586fb662ac9148fc10943529a6b99/dmesg%2520with%25209p%2520trace
https://gist.githubusercontent.com/dezgeg/e1e0c7f354042e1d9bdf7e9135934a65/raw/3a0e3b4f7a5229fd0be032c6839b578d47a21ce4/qemu.log



Re: 9pfs hangs since 4.7

2017-01-02 Thread Al Viro
On Mon, Jan 02, 2017 at 10:20:35AM +0200, Tuomas Tynkkynen wrote:
> Hi fsdevel,
> 
> I tracked this problem down a bit and it seems that it started happening
> after the VFS merge in 4.7-rc1: 7f427d3a6029331304f91ef4d7cf646f054216d2:
> 
> Merge branch 'for-linus' of 
> git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs
> 
> Pull parallel filesystem directory handling update from Al Viro.
> 
> Al, do you have any ideas about this problem with 9pfs I've been observing
> (quoted below in full)? Many thanks in advance!

d_alloc_parallel() is basically waiting for another lookup on the same
name in the same directory to finish.  In earlier kernels it would've
been waiting on ->i_mutex of the parent, now - on the waitqueue pointed to
by ->d_wait of the in-lookup dentry.

p9_client_rpc ones are more interesting; the question is what are they
waiting for.  Might be waiting for reply from server, might be - for
a tag to get freed.  We had at least one bug with tag lifetimes, BTW
(see commit a84b69cb6e0a for details) and that kind of crap can end up
with responses from server getting lost on client.

What I'd like to see is a log of 9p traffic in those; to hell with the
payloads, just the type and tag from each message (octets at offsets
4, 5 and 6) plus otag (octets at offsets 7 and 8) when the type is equal to
108 (message[4] == TFLUSH).  That's 3 bytes for most of the messages and
once in a while 2 more.  Should be easier to do dumping on the qemu side
of things; in principle, there are tracing mechanisms in there (grep for
trace_v9fs_ in hw/9pfs/virtio-9p.c), but when I had to do that kind of
work, I simply did manual dumping (with write(2), nothing fancier
than that) in handle_9p_output() (out.id and out.tag_le for type and tag
of incoming messages), v9fs_flush() (tag for otag of TFLUSH messages) and
complete_pdu() (id and pdu->tag for type and tag of replies).
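
For reference, those offsets are just the standard 9p wire header - size[4]
type[1] tag[2], all little-endian, with a TFLUSH body consisting of oldtag[2].
A sketch of pulling them out of a raw message buffer (helper names made up
for illustration):

/* illustrative only - these helpers are not from the kernel or QEMU */
static inline unsigned p9_msg_type(const unsigned char *msg)
{
	return msg[4];			/* type: octet at offset 4 */
}

static inline unsigned p9_msg_tag(const unsigned char *msg)
{
	return msg[5] | (msg[6] << 8);	/* tag: octets 5-6, little-endian */
}

static inline unsigned p9_tflush_oldtag(const unsigned char *msg)
{
	return msg[7] | (msg[8] << 8);	/* only valid when p9_msg_type() == 108 */
}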


Re: 9pfs hangs since 4.7

2017-01-02 Thread Tuomas Tynkkynen
Hi fsdevel,

I tracked this problem down a bit and it seems that it started happening
after the VFS merge in 4.7-rc1: 7f427d3a6029331304f91ef4d7cf646f054216d2:

Merge branch 'for-linus' of 
git://git.kernel.org/pub/scm/linux/kernel/git/viro/vfs

Pull parallel filesystem directory handling update from Al Viro.

Al, do you have any ideas about this problem with 9pfs I've been observing
(quoted below in full)? Many thanks in advance!

On Thu, 24 Nov 2016 21:50:23 +0200
Tuomas Tynkkynen  wrote:

> Hi fsdevel,
> 
> I have been observing hangs when running xfstests generic/224. Curiously
> enough, the test is *not* causing problems on the FS under test (I've
> tried both ext4 and f2fs) but instead it's causing the 9pfs that I'm
> using as the root filesystem to crap out.
> 
> How it shows up is that the test doesn't finish in time (usually
> takes ~50 sec) but the hung task detector triggers for some task in
> d_alloc_parallel():
> 
> [  660.701646] INFO: task 224:7800 blocked for more than 300 seconds.
> [  660.702756]   Not tainted 4.9.0-rc5 #1-NixOS
> [  660.703232] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables 
> this message.
> [  660.703927] 224 D0  7800549 0x
> [  660.704501]  8a82ec022800  8a82fc03c800 
> 8a82ff217dc0
> [  660.705302]  8a82d0f88c00 a94a41a27b88 aeb4ad1d 
> a94a41a27b78
> [  660.706125]  ae800fc6 8a82fbd90f08 8a82d0f88c00 
> 8a82fbfd5418
> [  660.706924] Call Trace:
> [  660.707185]  [] ? __schedule+0x18d/0x640
> [  660.707751]  [] ? __d_alloc+0x126/0x1e0
> [  660.708304]  [] schedule+0x36/0x80
> [  660.708841]  [] d_alloc_parallel+0x3a7/0x480
> [  660.709454]  [] ? wake_up_q+0x70/0x70
> [  660.710007]  [] lookup_slow+0x73/0x140
> [  660.710572]  [] walk_component+0x1ca/0x2f0
> [  660.711167]  [] ? path_init+0x1d9/0x330
> [  660.711747]  [] ? mntput+0x24/0x40
> [  660.716962]  [] path_lookupat+0x5d/0x110
> [  660.717581]  [] filename_lookup+0x9e/0x150
> [  660.718194]  [] ? kmem_cache_alloc+0x156/0x1b0
> [  660.719037]  [] ? getname_flags+0x56/0x1f0
> [  660.719801]  [] ? getname_flags+0x72/0x1f0
> [  660.720492]  [] user_path_at_empty+0x36/0x40
> [  660.721206]  [] vfs_fstatat+0x53/0xa0
> [  660.721980]  [] SYSC_newstat+0x1f/0x40
> [  660.722732]  [] SyS_newstat+0xe/0x10
> [  660.723702]  [] entry_SYSCALL_64_fastpath+0x1a/0xa9
> 
> SysRq-T is full of things stuck inside p9_client_rpc like:
> 
> [  271.703598] bashS0   100 96 0x
> [  271.703968]  8a82ff824800  8a82faee4800 
> 8a82ff217dc0
> [  271.704486]  8a82fb946c00 a94a404ebae8 aeb4ad1d 
> 8a82fb9fc058
> [  271.705024]  a94a404ebb10 ae8f21f9 8a82fb946c00 
> 8a82fbbba000
> [  271.705542] Call Trace:
> [  271.705715]  [] ? __schedule+0x18d/0x640
> [  271.706079]  [] ? idr_get_empty_slot+0x199/0x3b0
> [  271.706489]  [] schedule+0x36/0x80
> [  271.706825]  [] p9_client_rpc+0x12a/0x460 [9pnet]
> [  271.707239]  [] ? idr_alloc+0x87/0x100
> [  271.707596]  [] ? wake_atomic_t_function+0x60/0x60
> [  271.708043]  [] p9_client_walk+0x77/0x200 [9pnet]
> [  271.708459]  [] v9fs_vfs_lookup.part.16+0x59/0x120 [9p]
> [  271.708912]  [] v9fs_vfs_lookup+0x1f/0x30 [9p]
> [  271.709308]  [] lookup_slow+0x96/0x140
> [  271.709664]  [] walk_component+0x1ca/0x2f0
> [  271.710036]  [] ? path_init+0x1d9/0x330
> [  271.710390]  [] path_lookupat+0x5d/0x110
> [  271.710763]  [] filename_lookup+0x9e/0x150
> [  271.711136]  [] ? mem_cgroup_commit_charge+0x7e/0x4a0
> [  271.711581]  [] ? kmem_cache_alloc+0x156/0x1b0
> [  271.711977]  [] ? getname_flags+0x56/0x1f0
> [  271.712349]  [] ? getname_flags+0x72/0x1f0
> [  271.712726]  [] user_path_at_empty+0x36/0x40
> [  271.713110]  [] vfs_fstatat+0x53/0xa0
> [  271.713454]  [] SYSC_newstat+0x1f/0x40
> [  271.713810]  [] SyS_newstat+0xe/0x10
> [  271.714150]  [] entry_SYSCALL_64_fastpath+0x1a/0xa9
> 
> [  271.729022] sleep   S0   218216 0x0002
> [  271.729391]  8a82fb990800  8a82fc0d8000 
> 8a82ff317dc0
> [  271.729915]  8a82fbbec800 a94a404f3cf8 aeb4ad1d 
> 8a82fb9fc058
> [  271.730426]  ec95c1ee08c0 0001 8a82fbbec800 
> 8a82fbbba000
> [  271.730950] Call Trace:
> [  271.731115]  [] ? __schedule+0x18d/0x640
> [  271.731479]  [] schedule+0x36/0x80
> [  271.731814]  [] p9_client_rpc+0x12a/0x460 [9pnet]
> [  271.732226]  [] ? wake_atomic_t_function+0x60/0x60
> [  271.732649]  [] p9_client_clunk+0x38/0xb0 [9pnet]
> [  271.733061]  [] v9fs_dir_release+0x1a/0x30 [9p]
> [  271.733494]  [] __fput+0xdf/0x1f0
> [  271.733844]  [] fput+0xe/0x10
> [  271.734176]  [] task_work_run+0x7e/0xa0
> [  271.734532]  [] do_exit+0x2b9/0xad0
> [  271.734888]  [] ? __do_page_fault+0x287/0x4b0
> [  271.735276]  [] do_group_exit+0x43/0xb0
> [  271.735639]  [] SyS_exit_group+0x14/0x20
> [  271.736002]  [] entry_SYSCALL_64_fastpath+0x

Re: 9pfs hangs since 4.7

2016-12-02 Thread Tuomas Tynkkynen
On Tue, 29 Nov 2016 10:39:39 -0600
Eric Van Hensbergen  wrote:

> Any idea of what xfstests is doing at this point in time?  I'd be a
> bit worried about some sort of loop in the namespace since it seems to
> be in path traversal.  Could also be some sort of resource leak or
> fragmentation, I'll admit that many of the regression tests we do are
> fairly short in duration.  Another approach would be to look at doing
> this with a different server (over a network link instead of virtio)
> to isolate it as a client versus server side problem (although from
> the looks of things this does seem to be a client issue).

The xfstests part where it hangs is either of these loops:

FILES=1000
for i in `seq 0 1 $FILES`; do
        (
                sleep 5
                xfs_io -f -c "truncate 10485760" $SCRATCH_MNT/testfile.$i
                dd if=/dev/zero of=$SCRATCH_MNT/testfile.$i bs=4k conv=notrunc
        ) > /dev/null 2>&1 &
done
wait

for i in `seq 0 1 $FILES`; do
        dd of=/dev/null if=$SCRATCH_MNT/testfile.$i bs=512k iflag=direct > /dev/null 2>&1 &
done
wait

So all that's happening on the 9p side is a bunch of reads+opens
of the binaries (sleep, xfs_io, dd) and their .so dependencies
(which apparently involves some readlinks as well).

I also tried building QEMU with tracing support enabled and
according to its own 9p event log the server did end up replying
to each client request (i.e. each v9fs_foo with a given tag was
matched up with a v9fs_foo_return or a v9fs_rerror)...
so yes, looking more like a client problem.


Re: 9pfs hangs since 4.7

2016-11-29 Thread Eric Van Hensbergen
Any idea of what xfstests is doing at this point in time?  I'd be a
bit worried about some sort of loop in the namespace since it seems to
be in path traversal.  Could also be some sort of resource leak or
fragmentation, I'll admit that many of the regression tests we do are
fairly short in duration.  Another approach would be to look at doing
this with a different server (over a network link instead of virtio)
to isolate it as a client versus server side problem (although from
the looks of things this does seem to be a client issue).

On Thu, Nov 24, 2016 at 1:50 PM, Tuomas Tynkkynen  wrote:
> Hi fsdevel,
>
> I have been observing hangs when running xfstests generic/224. Curiously
> enough, the test is *not* causing problems on the FS under test (I've
> tried both ext4 and f2fs) but instead it's causing the 9pfs that I'm
> using as the root filesystem to crap out.
>
> How it shows up is that the test doesn't finish in time (usually
> takes ~50 sec) but the hung task detector triggers for some task in
> d_alloc_parallel():
>
> [  660.701646] INFO: task 224:7800 blocked for more than 300 seconds.
> [  660.702756]   Not tainted 4.9.0-rc5 #1-NixOS
> [  660.703232] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables 
> this message.
> [  660.703927] 224 D0  7800549 0x
> [  660.704501]  8a82ec022800  8a82fc03c800 
> 8a82ff217dc0
> [  660.705302]  8a82d0f88c00 a94a41a27b88 aeb4ad1d 
> a94a41a27b78
> [  660.706125]  ae800fc6 8a82fbd90f08 8a82d0f88c00 
> 8a82fbfd5418
> [  660.706924] Call Trace:
> [  660.707185]  [] ? __schedule+0x18d/0x640
> [  660.707751]  [] ? __d_alloc+0x126/0x1e0
> [  660.708304]  [] schedule+0x36/0x80
> [  660.708841]  [] d_alloc_parallel+0x3a7/0x480
> [  660.709454]  [] ? wake_up_q+0x70/0x70
> [  660.710007]  [] lookup_slow+0x73/0x140
> [  660.710572]  [] walk_component+0x1ca/0x2f0
> [  660.711167]  [] ? path_init+0x1d9/0x330
> [  660.711747]  [] ? mntput+0x24/0x40
> [  660.716962]  [] path_lookupat+0x5d/0x110
> [  660.717581]  [] filename_lookup+0x9e/0x150
> [  660.718194]  [] ? kmem_cache_alloc+0x156/0x1b0
> [  660.719037]  [] ? getname_flags+0x56/0x1f0
> [  660.719801]  [] ? getname_flags+0x72/0x1f0
> [  660.720492]  [] user_path_at_empty+0x36/0x40
> [  660.721206]  [] vfs_fstatat+0x53/0xa0
> [  660.721980]  [] SYSC_newstat+0x1f/0x40
> [  660.722732]  [] SyS_newstat+0xe/0x10
> [  660.723702]  [] entry_SYSCALL_64_fastpath+0x1a/0xa9
>
> SysRq-T is full of things stuck inside p9_client_rpc like:
>
> [  271.703598] bashS0   100 96 0x
> [  271.703968]  8a82ff824800  8a82faee4800 
> 8a82ff217dc0
> [  271.704486]  8a82fb946c00 a94a404ebae8 aeb4ad1d 
> 8a82fb9fc058
> [  271.705024]  a94a404ebb10 ae8f21f9 8a82fb946c00 
> 8a82fbbba000
> [  271.705542] Call Trace:
> [  271.705715]  [] ? __schedule+0x18d/0x640
> [  271.706079]  [] ? idr_get_empty_slot+0x199/0x3b0
> [  271.706489]  [] schedule+0x36/0x80
> [  271.706825]  [] p9_client_rpc+0x12a/0x460 [9pnet]
> [  271.707239]  [] ? idr_alloc+0x87/0x100
> [  271.707596]  [] ? wake_atomic_t_function+0x60/0x60
> [  271.708043]  [] p9_client_walk+0x77/0x200 [9pnet]
> [  271.708459]  [] v9fs_vfs_lookup.part.16+0x59/0x120 [9p]
> [  271.708912]  [] v9fs_vfs_lookup+0x1f/0x30 [9p]
> [  271.709308]  [] lookup_slow+0x96/0x140
> [  271.709664]  [] walk_component+0x1ca/0x2f0
> [  271.710036]  [] ? path_init+0x1d9/0x330
> [  271.710390]  [] path_lookupat+0x5d/0x110
> [  271.710763]  [] filename_lookup+0x9e/0x150
> [  271.711136]  [] ? mem_cgroup_commit_charge+0x7e/0x4a0
> [  271.711581]  [] ? kmem_cache_alloc+0x156/0x1b0
> [  271.711977]  [] ? getname_flags+0x56/0x1f0
> [  271.712349]  [] ? getname_flags+0x72/0x1f0
> [  271.712726]  [] user_path_at_empty+0x36/0x40
> [  271.713110]  [] vfs_fstatat+0x53/0xa0
> [  271.713454]  [] SYSC_newstat+0x1f/0x40
> [  271.713810]  [] SyS_newstat+0xe/0x10
> [  271.714150]  [] entry_SYSCALL_64_fastpath+0x1a/0xa9
>
> [  271.729022] sleep   S0   218216 0x0002
> [  271.729391]  8a82fb990800  8a82fc0d8000 
> 8a82ff317dc0
> [  271.729915]  8a82fbbec800 a94a404f3cf8 aeb4ad1d 
> 8a82fb9fc058
> [  271.730426]  ec95c1ee08c0 0001 8a82fbbec800 
> 8a82fbbba000
> [  271.730950] Call Trace:
> [  271.731115]  [] ? __schedule+0x18d/0x640
> [  271.731479]  [] schedule+0x36/0x80
> [  271.731814]  [] p9_client_rpc+0x12a/0x460 [9pnet]
> [  271.732226]  [] ? wake_atomic_t_function+0x60/0x60
> [  271.732649]  [] p9_client_clunk+0x38/0xb0 [9pnet]
> [  271.733061]  [] v9fs_dir_release+0x1a/0x30 [9p]
> [  271.733494]  [] __fput+0xdf/0x1f0
> [  271.733844]  [] fput+0xe/0x10
> [  271.734176]  [] task_work_run+0x7e/0xa0
> [  271.734532]  [] do_exit+0x2b9/0xad0
> [  271.734888]  [] ? __do_page_fault+0x287/0x4b0
> [  271.735276]  [] do_group_exit+0x43/0xb0
> [  271.7