[EMAIL PROTECTED] wrote:
>   
>> In my experimentation (using my own buffer program), it's the receive 
>> side buffering you need. The size of the buffer needs to be large enough 
>> to hold 5 seconds' worth of data. How much data/second you get will 
>> depend on which part of your system is the limiting factor. In my case, 
>> with 7200 RPM drives not striped and a 1Gbit network, the limiting 
>> factor is the drives, which can easily deliver 50MBytes/sec, so a buffer 
>> size of 250MBytes works well. With striped disks or 10,000 or 15,000 RPM 
>> disks, the 1Gbit network might become the limiting factor (at around 
>> 100MByte/sec).
>>     
>
> The modern "Green Caviars" from Western Digital run at 5400rpm; yet they
> deliver 95MB/s from the outer tracks.
>
> For ufs ("ufsdump | ufsrestore") I have found that I prefer the buffer on the
> receive side, but it should be much bigger: ufsrestore starts by creating
> all the directories, and that is SLOW.
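(As an aside, that receive-side buffer could be sketched with mbuffer in
place of a home-grown buffer program; the host, filesystem and 1G size
below are only placeholders, not anyone's actual setup:

    # dump the local filesystem, with the buffer sitting in front of
    # ufsrestore on the target host
    ufsdump 0f - /export/home | \
        ssh backuphost 'cd /restore && mbuffer -m 1G | ufsrestore rf -'

The idea is that the big buffer sits directly in front of ufsrestore, so
the dump can keep streaming while ufsrestore is busy creating directories.)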

My 7200 RPM drives are spec'ed at 76 MBytes/second, and on a resilver I get
exactly that. (It's not clear from the spec whether this is a peak or an
average from anywhere on the surface; the outer edge of a disk typically
delivers about 2.5 times the throughput of the inner edge.)

zfs send doesn't quite seem to match resilver speeds (at least for me), but
IIRC my 50 MBytes/second was averaged across the whole send. Going up to a
350 MByte buffer, I did manage to fill it occasionally during a send/recv,
but it made no significant difference to the total send/recv time.
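
For anyone who wants to try this without writing their own buffer program,
mbuffer can do the same job. A rough sketch of what I mean (the pool,
snapshot, host and port names are placeholders, and the sizes are whatever
suits your setup):

    # on the receiving host: listen on a TCP port with a large buffer
    mbuffer -I 9090 -m 350M | zfs receive tank/backup

    # on the sending host: stream the snapshot into that buffer
    zfs send tank/data@snap | mbuffer -O recvhost:9090

With the buffer on the receive side, zfs send can keep running at disk
speed even when zfs receive stalls briefly on the other end.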

-- 
Andrew