Are there any plans to improve the efficiency of ZFS's replication 
mechanisms? Perhaps in Fishworks, the new NAS release?

Considering the solution we are offering to our customer (5 remote 
sites replicating to one central data center) with ZFS (the cheapest 
solution), should I expect 3 times the network load of a solution 
based on SNDR/AVS, and 3 times the storage space as well? Is that 
correct?
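Rather than assuming a fixed 3x factor, the actual replication payload can be measured directly: snapshot the dataset before and after a small change, then size the incremental send stream. A rough sketch (the pool and dataset names here are hypothetical):

```shell
# Snapshot the dataset, make the small change, snapshot again,
# then count the bytes an incremental replication would send.
zfs snapshot tank/data@before
# ... modify ~10 bytes of a file in tank/data, then:
zfs snapshot tank/data@after
zfs send -i tank/data@before tank/data@after | wc -c
```

The byte count printed by wc is what zfs send/receive replication would actually push over the wire for that change, which can then be compared against what TrueCopy or SNDR/AVS transfers for the same workload.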

Is there any documentation on that?
Thanks

Richard Elling wrote:
> Enrico Rampazzo wrote:
>>>> Hello
>>>> I'm offering a solution based on our disks in which replication and 
>>>> storage management would be done using only ZFS...
>>>> The test changes a few bytes in one file (10 bytes) and checks how 
>>>> many bytes the source sends to the target.
>>>> The customer tried replication between 2 volumes. They compared the 
>>>> ZFS replica with the TrueCopy replica and came to the following 
>>>> conclusions:
>>>>
>>>>   1. ZFS uses a larger block than HDS TrueCopy
>>>>       
>
> ZFS uses dynamic block sizes.  Depending on the configuration and
> workload, just a few disk blocks will change, or a bunch of redundant
> metadata might change.  In either case, changing the ZFS recordsize
> will make little, if any, change.
>
>>>>   2. TrueCopy sends 32 KB, while ZFS sends 100 KB or more when
>>>>      changing only 10 bytes of a file
>>>>
>>>> Can we configure ZFS to improve replication efficiency?
>>>>       
>
> By default, ZFS writes two copies of metadata. I would not recommend
> reducing this because it will increase your exposure to faults.  What may
> be happening here is that a 10-byte write causes a metadata change
> resulting in a minimum of three 512-byte physical blocks being changed.
> The metadata copies are spatially diverse, so you may see these three
> blocks starting at non-contiguous boundaries.  If TrueCopy sends only
> 32-kByte blocks (speculation), then the remote transfer will be 96 kBytes
> for 3 local, physical block writes.
>
> OTOH, ZFS will coalesce writes.  So you may be able to update a
> number of files yet still only replicate 96 kBytes through TrueCopy.
> YMMV.
>
> Since the customer is performing replication, I'll assume they are very
> interested in data protection, so keeping the redundant metadata is a
> good idea. The customer should also be aware that replication at the
> application level is *always* more efficient than replicating somewhere
> down the software stack where you lose data context.
> -- richard
>
>>>> The solution should consider 5 remote sites replicating to one 
>>>> central data center. Given the ZFS block overhead, the customer 
>>>> is thinking of buying a solution based on traditional storage 
>>>> arrays, like HDS entry-level arrays (our 2530/2540). If so, with 
>>>> ZFS the network traffic and storage space become big problems for 
>>>> the customer's infrastructure.
>>>>
>>>> Is there any documentation explaining the internal ZFS replication 
>>>> mechanism to address the customer's doubts? Thanks
>>>>       
>> Do we need AVS in our solution to solve the problem?
>>  
>>>> Thanks
>>>>       
>>
>
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
