On 11/22/12 10:15, Ian Collins wrote:
I look after a remote server that has two iSCSI pools.  The volumes for
each pool are sparse volumes, and a while back the target's storage
became full, causing weird and wonderful corruption issues until they
managed to free some space.

Since then, one pool has been reasonably OK, but the other has terrible
performance receiving snapshots.  Despite both iSCSI devices using the
same IP connection, iostat shows reasonable service times on one but
very high service times (up to 9 seconds) and 100% busy on the other.
This kills performance for snapshots containing many random file
removals and additions.

I'm currently zero filling the bad pool so the space can be recovered
on the target storage, to see if that improves matters.
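
For the record, that's just the usual fill-and-delete trick; the mount
point, file name and block size below are illustrative, and it only
helps if the target detects or compresses zeroed blocks:

    dd if=/dev/zero of=/bad-pool/zerofill bs=1M
    rm /bad-pool/zerofill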

Has anyone else seen similar behaviour with previously degraded iSCSI
pools?

As a data point, both pools are currently being zero filled with dd. A 30-second iostat sample shows one device getting more than double the write throughput of the other:

  r/s    w/s   Mr/s   Mw/s  wait  actv  wsvc_t  asvc_t  %w  %b  device
  0.2   64.0    0.0   50.1   0.0   5.6     0.7    87.9   4  64  c0t600144F096C94AC700004ECD96F20001d0
  5.6   44.9    0.0   18.2   0.0   5.8     0.3   115.7   2  76  c0t600144F096C94AC700004FF354B00002d0
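
(For reference, output in that format comes from the extended iostat
statistics; a command along these lines, with an illustrative interval
and count, would produce it, using -x for extended statistics, -n for
descriptive device names and -M for throughput in MB/s:

    iostat -xnM 30 2
)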

--
Ian.
