On Mar 26, 2010, at 2:34 AM, Bruno Sousa wrote:
> Hi,
> 
> The jumbo frames in my case give me a boost of around 2 MB/s, so it's not
> that much.

That is about right.  IIRC, the theoretical max is about a 4% improvement for an
MTU of 8KB.
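
To put a rough number on it, here is a back-of-the-envelope sketch in Python.
The overhead figures assume plain Ethernet framing (preamble+SFD, header, FCS,
interframe gap) and TCP/IPv4 headers without options; VLAN tags or TCP
timestamps would shift the numbers slightly:

# Rough wire efficiency of a bulk TCP transfer at different MTUs.
ETH_OVERHEAD = 8 + 14 + 4 + 12   # preamble+SFD, Ethernet header, FCS, interframe gap
TCPIP_HEADERS = 20 + 20          # IPv4 + TCP headers, no options

def efficiency(mtu):
    payload = mtu - TCPIP_HEADERS    # TCP payload bytes carried per frame
    wire = mtu + ETH_OVERHEAD        # bytes actually occupying the wire per frame
    return float(payload) / wire

for mtu in (1500, 8192, 9000):
    print("MTU %5d: %.2f%% payload efficiency" % (mtu, 100 * efficiency(mtu)))

MTU 1500 comes out near 95%, an 8KB or 9KB MTU near 99%, so roughly that 4%
ceiling. At the ~60 MB/s Bruno is seeing, 4% is only a couple of MB/s, which
matches his measurement.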

> Now I will play with link aggregation and see how it goes, and of course I'm
> expecting that incremental replication will be slower...but since the amount
> of data will be much less, it will probably still deliver good performance.

Probably won't help at all because of the brain-dead way link aggregation has to
work.  See "Ordering of frames" at
http://en.wikipedia.org/wiki/Link_Aggregation_Control_Protocol#Link_Aggregation_Control_Protocol
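
The short version of that ordering requirement: the aggregation must keep all
frames of a given conversation on a single physical link, so a lone
zfs send/receive TCP stream can never go faster than one member of the LAG.
A toy sketch of the usual L3/L4 hash idea, in Python (the interface names and
hash function here are made up for illustration, not what any particular switch
or dladm policy actually does):

# Toy model of LACP-style load spreading: each flow hashes to exactly one
# member link, so a single TCP stream is confined to one link's bandwidth.
import zlib

links = ["e1000g0", "e1000g1"]    # hypothetical 2x1G aggregation members

def pick_link(src_ip, dst_ip, src_port, dst_port):
    key = "%s-%s-%d-%d" % (src_ip, dst_ip, src_port, dst_port)
    return links[zlib.crc32(key.encode()) % len(links)]

# One zfs send | zfs receive session is one flow, hence one link:
print(pick_link("10.0.0.1", "10.0.0.2", 51234, 22))
# Several concurrent sends (different source ports) at least have a chance to spread:
for port in range(51234, 51240):
    print(port, pick_link("10.0.0.1", "10.0.0.2", port, 22))

So aggregation only helps if you can run several replication streams in
parallel; a single stream stays on one 1G link.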

If you see the workload on the wire go through regular patterns of fast/slow
response, then there are some additional tricks that can be applied to increase
the overall throughput and smooth the jaggies. But that is fodder for another
post...
You can measure this with iostat using samples < 15 seconds, or with tcpstat.
tcpstat is a handy DTrace script, often located at /opt/DTT/Bin/tcpstat.d
 -- richard

> And what a relief to know that I'm not alone when I say that storage
> management is part science, part art, and part "voodoo magic" ;)
> 
> Cheers,
> Bruno
> 
> On 25-3-2010 23:22, Ian Collins wrote:
>> On 03/26/10 10:00 AM, Bruno Sousa wrote:
>> 
>> [Boy top-posting sure mucks up threads!]
>> 
>>> Hi,
>>> 
>>> Indeed, 3 disks per vdev (raidz2) seems like a bad idea...but it's the system
>>> I have now.
>>> Regarding the performance...let's assume that a bonnie++ benchmark could go
>>> to 200 MB/s in. The possibility of getting the same values (or near) in a
>>> zfs send / zfs receive is just a matter of putting, let's say, a 10GbE card
>>> between both systems?
>> 
>> Maybe, or a 2x1G LAG would be more cost-effective (and easier to check!).
>> The only way to know for sure is to measure.  I managed to get slightly
>> better transfers by enabling jumbo frames.
>> 
>>> I have the impression that benchmarks are always synthetic; therefore,
>>> live/production environments behave quite differently.
>> 
>> Very true, especially in the black arts of storage management!
>> 
>>> Again, it might be just me, but with a 1Gb link, being able to replicate 2
>>> servers at an average speed above 60 MB/s does seem quite good. However,
>>> like I said, I would like to know other results from other guys...
>>> 
>> As I said, the results are typical for a 1G link.  Don't forget you are
>> measuring full copies; incremental replications may well be significantly
>> slower.
>> 
>> -- 
>> Ian.

ZFS storage and performance consulting at http://www.RichardElling.com
ZFS training on deduplication, NexentaStor, and NAS performance
Las Vegas, April 29-30, 2010 http://nexenta-vegas.eventbrite.com 





_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
