On Thu, Feb 13, 2014 at 09:22:07PM +0100, Goffredo Baroncelli wrote:
> Hi Jim,
> On 02/13/2014 05:13 PM, Jim Salter wrote:
> > Let's say you have five disks, and you arbitrarily want to define a
> > stripe length of four data blocks plus one parity block per "stripe".
> 
> How is it different from a raid5 setup (which is supported by btrfs)?

   With what's above, yes, that's the current RAID-5 code.

> > Right now, what you're looking at effectively amounts to a RAID3
> > array, like FreeBSD used to use.  But, what if we add two more disks?
> > Or three more disks? Or ten more?  Is there any reason we can't keep
> > our stripe length of four blocks + one parity block, and just
> > distribute them relatively ad-hoc in the same way btrfs-raid1
> > distributes redundant data blocks across an ad-hoc array of disks?
> > 
> > This could be a pretty powerful setup IMO - if you implemented
> > something like this, you'd be able to arbitrarily define your storage
> > efficiency (percentage of parity blocks / data blocks) and your
> > fault-tolerance level (how many drives you can afford to lose before
> > failure) WITHOUT tying it directly to your underlying disks
> 
> Maybe it is a good idea, but what would be the advantage of using
> fewer drives than the available ones for a RAID?

   Performance, plus the ability to handle different-sized drives.
Hmm... maybe I should do an "optimise" option for the space planner...
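
   To make the numbers concrete, here's a rough Python sketch (purely
my own illustration, not anything that exists in the btrfs code) of how
a fixed geometry of D data + P parity blocks decouples storage
efficiency and fault tolerance from the number of devices, with a
hypothetical round-robin placement standing in for a real allocator:

    # Hypothetical sketch: the stripe geometry is chosen independently
    # of the device count; each stripe just lands on some subset.
    def stripe_stats(data_blocks, parity_blocks):
        width = data_blocks + parity_blocks
        efficiency = data_blocks / float(width)  # usable fraction
        tolerance = parity_blocks                # devices you can lose
        return width, efficiency, tolerance

    def place_stripes(num_devices, width, num_stripes):
        # Rotate each stripe onto 'width' of the devices; a real
        # allocator would pick devices by free space instead.
        placements = []
        for s in range(num_stripes):
            start = (s * width) % num_devices
            placements.append([(start + i) % num_devices
                               for i in range(width)])
        return placements

    # 4 data + 1 parity on 7 devices: still 80% efficient, still
    # survives one device loss, but the stripes rotate over all 7.
    print(stripe_stats(4, 1))       # (5, 0.8, 1)
    print(place_stripes(7, 5, 3))   # [[0,1,2,3,4], [5,6,0,1,2], [3,4,5,6,0]]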

> Regarding the fault tolerance level, a few weeks ago there was a
> posting about a kernel library which would provide a generic
> RAID framework capable of several degrees of fault tolerance
> (raid 5, 6, 7...) [see "[RFC v4 2/3] fs: btrfs: Extends btrfs/raid56
> to support up to six parities", 2014/1/25]. This would definitely be
> a big leap forward.
> 
> BTW, the raid5/raid6 support in BTRFS is only for testing purposes.
> However, Chris Mason said a few weeks ago that he will work on these
> issues.
> 
> [...]
> > necessarily needing to rebalance as you add more disks to the array.
> > This would be a heck of a lot more flexible than ZFS' approach of
> > adding more immutable vdevs.
> 
> There is no need to re-balance if you add more drives. The next
> chunk allocation will span all the available drives anyway. It is only
> required when you want to spread the data already written across all
> the drives.

   The balance opens up more usable space, unless the new device is
smaller than (some nasty function of) the remaining free space on the
other drives.
It's not necessarily about spanning the data, although that's an
effect, too.
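
   As a back-of-the-envelope illustration (my own sketch, not the real
allocator), the allocatable space for a given stripe width can be
estimated by greedily placing chunks on whichever devices currently
have the most free space; the device sizes and width below are made-up
numbers, just to show why a balance helps after adding one big drive:

    def usable_space(free, width, chunk=1):
        # Estimate raw allocatable space for stripes 'width' devices
        # wide, given per-device free space.  Greedy: each chunk group
        # goes on the 'width' devices with the most room left.
        free = sorted(free, reverse=True)
        usable = 0
        while free[width - 1] >= chunk:  # need room on 'width' devices at once
            for i in range(width):
                free[i] -= chunk
            usable += chunk * width
            free.sort(reverse=True)
        return usable

    # Three drives with 100 GB free each plus a new, empty 1000 GB
    # drive, stripe width 2: only 600 GB is allocatable before a
    # balance, because the new drive can only pair with the 300 GB
    # free elsewhere.  After a balance spreads the old data out
    # (roughly 325 GB free per drive), all the free space is usable.
    print(usable_space([100, 100, 100, 1000], width=2))  # 600
    print(usable_space([325, 325, 325, 325], width=2))   # 1300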

   Hugo.

-- 
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 65E74AC0 from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
      --- It used to take a lot of talent and a certain type of ---      
        upbringing to be perfectly polite and have filthy manners        
            at the same time. Now all it needs is a computer.            
