Hi,

Indeed, 3 disks per vdev (raidz2) does seem like a bad idea... but it's the
system I have now.
Regarding performance... let's assume a bonnie++ benchmark could reach
200 MB/s on writes. Would getting the same (or similar) figures from a
zfs send / zfs receive then just be a matter of putting, say, a 10 GbE
card between the two systems?
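
For reference, the replication pipeline is essentially the sketch below; the
dataset/snapshot names, hostname, TCP port and buffer sizes are only
placeholders, not the exact values we use:

  # on storageB (receiving side): listen on a TCP port and feed the stream to zfs receive
  mbuffer -s 128k -m 1G -I 9090 | zfs receive -F tank/replica

  # on storageA (sending side): pipe the snapshot into mbuffer towards storageB
  zfs send tank/data@snap-today | mbuffer -s 128k -m 1G -O storageB:9090
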
I have the impression that benchmarks are always synthetic, so
live/production environments behave quite differently.
Again, it might be just me, but being able to replicate two servers over a
1 Gb link at an average speed above 60 MB/s does seem quite good.
However, like I said, I would like to hear results from other people...
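
In case anyone wants to compare like-for-like, the local baseline I have in
mind is roughly the following (directory and username are placeholders; the
file size is about twice the 28 GB of RAM so the ARC doesn't mask the disks):

  # sequential block read/write baseline on the pool, skipping per-char tests
  bonnie++ -d /tank/bench -s 56g -u bruno -f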

Thanks for your time.
Bruno

On 25-3-2010 21:52, Ian Collins wrote:
> On 03/26/10 08:47 AM, Bruno Sousa wrote:
>> Hi all,
>>
>> The more reading and experimenting I do with ZFS, the more I like
>> this stack of technologies.
>> Since we all like to see real figures from real environments, I might
>> as well share some of my numbers.
>> The replication was done with zfs send / zfs receive piped through
>> mbuffer (http://www.maier-komor.de/mbuffer.html), during business
>> hours, so it's a live environment and *not* a controlled test
>> environment.
>>
>> storageA
>>
>> OpenSolaris snv_133
>> 2 quad-core AMD CPUs
>> 28 GB RAM
>>
>> Seagate Barracuda SATA drives, 1.5 TB, 7,200 rpm (ST31500341AS) -
>> *non-enterprise-class disks*
>> 1 RAIDZ2 pool with 6 vdevs of 3 disks each, connected to an LSI
>> non-RAID controller
>>
> As others have already said, raidz2 with 3 drives is Not A Good Idea!
>
>> storageB
>>
>> OpenSolaris snv_134
>> 2 Intel Xeon 2.0 GHz CPUs
>> 8 GB RAM
>>
>>
>> Seagate Barracuda SATA drives, 1 TB, 7,200 rpm (ST31000640SS) -
>> *enterprise-class disks*
>> 1 RAIDZ2 pool with 4 vdevs of 5 disks each, connected to an Adaptec
>> RAID controller (52445, 512 MB cache) with read and write cache
>> enabled. The Adaptec HBA presents 20 volumes, where one volume = one
>> drive... something similar to a JBOD.
>>
>> Both systems are connected to a gigabit switch (a 3Com) without VLANs,
>> and jumbo frames are disabled.
>>
>> And now the results :
>>
>> Dataset: around 26.5 GB in files bigger than 256 KB and smaller than 1 MB
>>
>> summary: 26.6 GByte in  6 min 20.6 sec - average of *71.7 MB/s*
>>
>> Dataset: around 160 GB of data with small files (less than 20 KB) and
>> large files (bigger than 10 MB)
>>
>> summary:  164 GByte in 34 min 41.9 sec - average of *80.6 MB/s*
>>
>
> Those numbers look right for a 1 Gig link. Try a tool such as
> bonnie++ to see what the block read and write numbers are for your
> pools, and if they are significantly better than these, try an
> aggregated link between the systems.
> -- 
> Ian.
>   
>
