Hmm, seems rather unlikely that these two IOs are related.  Thread 1
is trying to read a dnode in order to extract the znode data from its
bonus buffer.  Thread 2 is completing a dmu_sync() write (so this is
the result of a ZIL operation).  While it's possible that the dmu_sync()
write may involve reading some of the blocks in the dnode from Thread 1,
this should not result in Thread 1 waiting for anything.
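
For reference, the scenario Pawel describes below reduces to a classic
completion-thread inversion.  Here is a minimal pthread sketch of it
(made-up names, not actual ZFS code): one thread takes a per-object
z_hold_mtx-style lock and then sleeps waiting for an I/O completion,
while the only completion thread has to take that same lock for a
callback queued ahead of it, so the wakeup is never delivered.

/*
 * Toy model of the reported hang: "thread 1" = zfs_zget() holding
 * z_hold_mtx and sleeping in zio_wait(); "thread 2" = the single zio
 * completion thread whose first callback (zfs_zinactive() reached via
 * dmu_sync_done()/zfs_get_done()) needs z_hold_mtx.  Running this
 * program simply hangs.
 */
#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t z_hold_mtx = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t io_mtx = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t io_cv = PTHREAD_COND_INITIALIZER;
static int io_done = 0;

static void *
completion_thread(void *arg)
{
        (void)arg;

        /* First queued callback needs the object lock held by "thread 1". */
        pthread_mutex_lock(&z_hold_mtx);        /* blocks forever */
        pthread_mutex_unlock(&z_hold_mtx);

        /* The completion "thread 1" is waiting for is never reached. */
        pthread_mutex_lock(&io_mtx);
        io_done = 1;
        pthread_cond_signal(&io_cv);
        pthread_mutex_unlock(&io_mtx);
        return (NULL);
}

int
main(void)
{
        pthread_t tid;

        pthread_mutex_lock(&z_hold_mtx);        /* "zfs_zget()" */
        pthread_create(&tid, NULL, completion_thread, NULL);
        sleep(1);                       /* let the completion thread block */

        pthread_mutex_lock(&io_mtx);            /* "zio_wait()" */
        while (!io_done)
                pthread_cond_wait(&io_cv, &io_mtx);     /* never woken */
        pthread_mutex_unlock(&io_mtx);

        pthread_mutex_unlock(&z_hold_mtx);
        pthread_join(tid, NULL);
        return (0);
}

Whether the two zios really are the same request is exactly the open
question; the sketch only shows that one blocked completion thread is
enough to wedge everything queued behind it.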

I think vdev_mirror_io_done() often shows up in the stack because the
ditto-ing code leverages that code path.
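
Roughly (a much-simplified, hypothetical sketch of the idea, not the
actual vdev_mirror.c code): a zio that is addressed by block pointer
rather than by a specific leaf vdev treats each DVA of the block, i.e.
each ditto copy, as one "mirror child", so the I/O funnels through the
mirror start/done routines regardless of the pool layout, and only the
per-child I/O then descends into the raidz (or whatever) vdev that
holds that copy.

#include <stdio.h>

#define MAX_DVAS 3                      /* a blkptr carries up to 3 DVAs */

struct dva { int vdev_id; unsigned long long offset; };
struct blkptr { int ndvas; struct dva dva[MAX_DVAS]; };

/* Stand-in for handing one copy down to its top-level (e.g. raidz) vdev. */
static void
child_io(const struct dva *d)
{
        printf("child I/O: top-level vdev %d, offset 0x%llx\n",
            d->vdev_id, d->offset);
}

/*
 * Stand-in for the "no specific vdev" case: the block's DVAs become the
 * children of an ad-hoc mirror map, which is why vdev_mirror_io_done()
 * shows up in the completion stack even on a raidz-only pool.
 */
static void
mirror_over_dvas(const struct blkptr *bp)
{
        int c;

        for (c = 0; c < bp->ndvas; c++)
                child_io(&bp->dva[c]);
}

int
main(void)
{
        /* A ditto'd (two-copy) block on a single raidz top-level vdev. */
        struct blkptr bp = { 2, { { 0, 0x1000 }, { 0, 0x9000 } } };

        mirror_over_dvas(&bp);
        return (0);
}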

-Mark

Pawel Jakub Dawidek wrote:
> On Wed, Nov 07, 2007 at 07:01:52AM -0700, Mark Maybee wrote:
>> Pawel,
>>
>> I'm not quite sure I understand why thread #1 below is stalled.  Is
>> there only a single thread available for IO completion?
> 
> There are a few, but I believe thread #2 is trying to complete the very
> I/O request on which thread #1 is waiting.
> Thread #2 can't complete this I/O request, because the lock it needs to
> acquire is held by thread #1.
> 
>>> One thread holds the zfsvfs->z_hold_mtx[i] lock and waits for I/O:
>>>
>>> Tracing pid 1188 tid 100114 td 0xc41b1660
> [...]
>>> _cv_wait(de465f94,de465f7c,c43fc4cd,34f,0,...) at _cv_wait+0x1fc
>>> zio_wait(de465d68,c43ea3c5,240,4000,0,...) at zio_wait+0x99
>>> dbuf_read(d3a39594,de465d68,2,c43ee6df,d2e09900,...) at dbuf_read+0x2e7
>>> dnode_hold_impl(c4bbf000,a075,0,1,c43ec318,...) at dnode_hold_impl+0x15a
>>> dnode_hold(c4bbf000,a075,0,c43ec318,f899a758,...) at dnode_hold+0x35
>>> dmu_bonus_hold(c480e540,a075,0,0,f899a7a0,...) at dmu_bonus_hold+0x31
>>> zfs_zget(c3f4f000,a075,0,f899a89c,1,...) at zfs_zget+0x7a
>>> zfs_dirent_lock(f899a8a0,c4d18c64,f899a928,f899a89c,6,...) at zfs_dirent_lock+0x619
>>> zfs_dirlook(c4d18c64,f899a928,f899ac20,0,0,...) at zfs_dirlook+0x272
>>> zfs_lookup(cbef4414,f899a928,f899ac20,f899ac34,2,...) at zfs_lookup+0x2df
> [...]
> 
>>> Another thread tries to finish the I/O, but can't, because it is trying to
>>> acquire the zfsvfs->z_hold_mtx[i] lock:
>>>
>>> Tracing command spa_zio_intr_2 pid 1117 tid 100020 td 0xc3b38440
> [...]
>>> _sx_xlock(c3f4f6a4,0,c43fb80c,31e,c7af015c,...) at _sx_xlock+0xb8
>>> zfs_zinactive(d0d00988,0,c4401326,e27,d0d00988,...) at zfs_zinactive+0xa2
>>> zfs_inactive(c7af015c,c3aa9600,0,c7af015c,f8aa2888,...) at zfs_inactive+0x307
> [...]
>>> zfs_get_done(c5832d8c,c71317d0,292,28d,d2387b70,...) at zfs_get_done+0xad
>>> dmu_sync_done(d11e56b4,ccc323c0,c6940930,0,0,...) at dmu_sync_done+0x1f0
>>> arc_write_done(d11e56b4,d4863b2c,e38a2208,4d3,d11d68f0,...) at arc_write_done+0x44a
>>> zio_done(d11e56b4,c44033e0,f8aa2a38,c434906a,d13c9e00,...) at zio_done+0xb2
>>> zio_next_stage(d11e56b4,d11e56f8,80,0,f8aa2a68,...) at zio_next_stage+0x236
>>> zio_assess(d11e56b4,c3b384d8,c069938c,c43fc4cd,d11e58c8,...) at zio_assess+0x843
>>> zio_next_stage(d11e56b4,c43fc4cd,36d,36a,f8aa2b44,...) at zio_next_stage+0x236
>>> zio_wait_for_children(d11e56b4,12,d11e58bc,f8aa2b8c,c436cc76,...) at zio_wait_for_children+0x99
>>> zio_wait_children_done(d11e56b4,c0a575a0,c3b384d8,c0697e04,c0679402,...) at zio_wait_children_done+0x25
>>> zio_next_stage(d11e56b4,c1074808,0,c0679402,8d6,...) at zio_next_stage+0x236
>>> zio_vdev_io_assess(d11e56b4,c44033e0,40,cef4def8,c3,...) at zio_vdev_io_assess+0x2a7
>>> zio_next_stage(d11e56b4,c3b38440,f8aa2c3c,246,c0a116b4,...) at zio_next_stage+0x236
>>> vdev_mirror_io_done(d11e56b4,f8aa2cf8,c42c94de,d11e56b4,0,...) at vdev_mirror_io_done+0x113
>>> zio_vdev_io_done(d11e56b4,0,c43e6f2d,33d,c442ec14,...) at zio_vdev_io_done+0x21
> [...]
> 
> BTW. Why do I always see vdev_mirror_io_done() even though I don't use a mirror?
> Here I had a RAIDZ vdev.
> 
