Hello

I am having a slight issue (and judging by Google results, similar
issues have been seen by other FreeBSD and Solaris/OpenSolaris users)
with writes choking read IO. The issue I am having is described
pretty well here:
http://opensolaris.org/jive/thread.jspa?threadID=106453 It seems that
under heavy write load, ZFS likes to aggregate a really huge amount of
data before actually writing it to disk, resulting in sudden 10+
second stalls where it frantically tries to commit everything,
completely choking read IO in the process and sometimes even the
network (with a large enough write to a mirror pool using dd, I can
cause my SSH sessions to drop dead without actually running out of
RAM; as soon as the data is committed, I can reconnect).
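
(For reference, the kind of write I use to trigger this is nothing
more exotic than a plain dd to a file on the pool, something along
the lines of:

    dd if=/dev/zero of=/tank/bigfile bs=1m count=8192

where /tank is just a placeholder for the mirror pool's mountpoint
and the size is arbitrary, as long as the write is large enough.)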

Beyond the issue of system interactivity (or rather, the
near-disappearance thereof) during these enormous flushes, this kind
of pattern also seems really inefficient from the CPU utilization
point of view. Instead of a relatively stable and consistent flow of
reads and writes, allowing the CPU to be kept as busy as possible,
the CPU basically sits idle for 10+ seconds while the system is
committing the data (or for however long the flush takes), and the
process of committing unwritten data to the pool seemingly completely
trounces the priority of any read operations.
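
(The bursty pattern is easy to see by watching the pool during one of
these writes with something like "zpool iostat tank 1", with tank
again being a placeholder for the actual pool name, or with gstat.)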

Has anyone done any extensive testing of the effects of tuning
vfs.zfs.vdev.max_pending on this issue? Is there some universally
recommended value beyond the default 35? Anything else I should be
looking at?
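
(To be clear, by tuning it I simply mean lowering the sysctl, either
at runtime if it is writable or as a loader tunable, for example:

    sysctl vfs.zfs.vdev.max_pending=10

or in /boot/loader.conf:

    vfs.zfs.vdev.max_pending="10"

with 10 being just an arbitrary example value, not a recommendation.)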


- Sincerely,
Dan Naumov