On 06/01/2010 07:57 AM, Bob Friesenhahn wrote:
> On Mon, 31 May 2010, Sandon Van Ness wrote:
>> With sequential writes I don't see how parity writing would be any
>> different from when I just created a 20-disk zpool which is doing the
>> same writes every 5 seconds; the only difference is that it isn't
>> maxing out CPU usage when doing the writes, and I don't see the
>> transfer stall during the writes like I did on raidz2.
>
> I am not understanding the above paragraph, but hopefully you agree
> that raidz2 issues many more writes (based on vdev stripe width) to
> the underlying disks than a simple non-redundant load-shared pool
> does.  Depending on your system, this might not be an issue, but it is
> possible that there is an I/O threshold beyond which something
> (probably hardware) causes a performance issue.
>
> Bob

Interestingly enough, when I went to copy the data back I got even worse
read speeds than I did write speeds! It looks like I need some sort of
read-ahead, since unlike the writes this doesn't appear to be CPU bound:
using mbuffer/tar gives me full gigabit speeds. You can see it in my
graph here:

http://uverse.houkouonchi.jp/stats/netusage/1.1.1.3_2.html

The weekly graph shows when I was sending data to the ZFS server, and the
daily graph shows it coming back. I stopped the transfer and shut down the
computer for a while, which is the low-speed flat line, and then started
it up again, this time using mbuffer, and speeds are great. I don't see
why I am having trouble getting full speeds when doing reads, unless ZFS
needs to read ahead more than it is.
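In case it matters, one thing worth ruling out is that file-level prefetch got turned off at some point. This is just a sketch for a Solaris-derived kernel; the tunable name `zfs_prefetch_disable` is the standard one, but whether touching it helps here is an open question:

```shell
# Check whether ZFS file-level prefetch is disabled (0 = enabled):
echo "zfs_prefetch_disable/D" | mdb -k

# To make sure prefetch stays enabled across reboots, this line
# would go in /etc/system:
#   set zfs:zfs_prefetch_disable = 0
```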

I decided to go ahead and use tar + mbuffer for the first pass and then
run rsync afterward for the final sync, just to make sure nothing was
missed.
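For reference, the first-pass pipeline looks roughly like this; the hostname `zfsbox`, the port, the paths, and the 1G buffer size are all placeholders rather than my exact invocation:

```shell
# Receiver side (on the ZFS box): listen on a TCP port, buffer in RAM,
# and unpack. mbuffer's -m sets the in-memory buffer size.
#   mbuffer -I 9090 -m 1G | tar -x -C /tank/data

# Sender side: stream the tree through tar into mbuffer toward the box.
#   tar -c -C /data . | mbuffer -m 1G -O zfsbox:9090

# The same tar pipe, demonstrated locally without mbuffer or a network:
mkdir -p /tmp/tarpipe/src /tmp/tarpipe/dst
printf 'payload\n' > /tmp/tarpipe/src/file.txt
tar -c -C /tmp/tarpipe/src . | tar -x -C /tmp/tarpipe/dst
cat /tmp/tarpipe/dst/file.txt   # prints "payload"
```

The final cleanup pass is then a plain rsync over ssh, something like `rsync -avH /data/ zfsbox:/tank/data/` (again, paths are placeholders).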
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
