The problem is not with how the replication is done; the locking happens
during basic ZFS operations.

We noticed the following on server2 (which is quite busy serving maildirs):

zfs create tank/newfs
rsync 4GB from someotherserver to /tank/newfs
zfs destroy tank/newfs

Destroying newfs took more than 30 minutes, and during this time the production 
filesystem was inaccessible via NFS.

We got a hint off-list that we should try the following experiment:

zfs create tank/tmp  
dd of=/tank/tmp/data [...]
zpool scrub tank 
zfs destroy tank/tmp

At this point the zfs destroy command hangs. Issuing

zpool scrub -s tank

causes the destroy to finish immediately.
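For anyone who wants to try this, the experiment above can be collected into
one script. This is only a sketch of the steps described here: it assumes a
pool named tank, root privileges, and that tank/tmp does not already exist.
Do not run it against a production pool.

```shell
#!/bin/sh
# Repro sketch (assumptions: pool "tank" exists, run as root,
# tank/tmp is free to create and destroy).

zfs create tank/tmp

# Write a few GB so the later destroy has real work to do.
dd if=/dev/urandom of=/tank/tmp/data bs=1M count=4096

# Start a scrub, then attempt the destroy while the scrub runs.
zpool scrub tank
zfs destroy tank/tmp &    # observed to hang while the scrub is active

# Cancelling the scrub lets the destroy complete immediately.
zpool scrub -s tank
wait
```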

The other part of the hint is that this is an issue with the I/O scheduler
and that we should upgrade.
We will provide the details as soon as we have sorted this out.
-- 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
