Richard Elling wrote:
On Jun 14, 2010, at 2:12 PM, Roy Sigurd Karlsbakk wrote:
Hi all

It seems zfs scrub is taking a big bite out of I/O when running. During a
scrub, sync I/O, such as NFS and iSCSI, is mostly unusable. Attaching an SLOG
and some L2ARC helps, but the problem remains that the scrub is given full
priority.

Scrub always runs at the lowest priority. However, priority scheduling only
works before the I/Os enter the disk queue. If you are running Solaris 10 or
older releases with HDD JBODs, then the default zfs_vdev_max_pending is 35.
This means that your slow disk will have 35 I/Os queued to it before priority
scheduling makes any difference. Since it is a slow disk, that could mean
250 to 1500 ms before the high-priority I/O reaches the disk.
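For example, you can inspect (and, with -w, change) the live value with mdb;
a quick sketch, assuming a Solaris/OpenSolaris kernel with mdb available:

  # print the current per-vdev queue depth (decimal)
  echo zfs_vdev_max_pending/D | mdb -k

  # lower it to 10 at runtime (0t prefix = decimal)
  echo zfs_vdev_max_pending/W0t10 | mdb -kw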

Is this problem known to the developers? Will it be addressed?

In later OpenSolaris releases, the zfs_vdev_max_pending defaults to 10
which helps.  You can tune it lower as described in the Evil Tuning Guide.
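To make a lower value stick across reboots, the usual approach per the Evil
Tuning Guide is an /etc/system entry; the value 4 below is only illustrative:

  * /etc/system: cap the per-vdev I/O queue depth (takes effect at next boot)
  set zfs:zfs_vdev_max_pending = 4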

Also, as Robert pointed out, CR 6494473 offers a more resource-management-
friendly way to limit scrub traffic (b143). Everyone can buy George a beer
for implementing this change :-)
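If I read the b143 code right, the new throttle is driven by tunables in
dsl_scan.c such as zfs_scrub_delay, zfs_resilver_delay and zfs_scan_idle;
a sketch of inspecting them (names assumed from that changeset, not verified
on your build):

  # print the scrub throttle knobs (decimal)
  echo zfs_scrub_delay/D | mdb -k
  echo zfs_scan_idle/D | mdb -k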


I'll gladly accept any beer donations, and others on the ZFS team are happy
to help consume them. :-)

I look forward to hearing people's experience with the new changes.

- George

Of course, this could mean that on a busy system a scrub that formerly took
a week might now take a month. And the fix does not directly address the
HDD queue-depth tuning issue. TANSTAAFL.
 -- richard


