On Sat, May 17, 2014 at 01:50:52PM +0100, Martin wrote:
> On 16/05/14 04:07, Russell Coker wrote:
> > https://blogs.oracle.com/bill/entry/ditto_blocks_the_amazing_tape
> > 
> Probably most of you already know about this, but for those of you who 
> don't, the above describes ZFS "ditto blocks", a good feature we need on 
> BTRFS.  The briefest summary is that on top of the RAID redundancy there...
> [... are additional copies of metadata ...]
> 
> 
> Is that idea not already implemented in effect in btrfs with the way
> that the superblocks are replicated multiple times, ever more times, for
> ever more huge storage devices?

   Superblocks are the smallest part of the metadata. There's a whole
load of metadata outside the superblocks that isn't replicated in this
way.
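   For illustration, here's a sketch (not btrfs source code, just the
documented on-disk layout) of why bigger devices end up with more
superblock copies -- the mirrors live at fixed offsets of 64 KiB,
64 MiB, and 256 GiB, and a copy only exists if its offset fits on the
device:

```python
# Sketch of btrfs superblock mirroring: copies sit at fixed on-disk
# offsets, so the number of copies grows with device size.
SUPERBLOCK_OFFSETS = [64 * 1024, 64 * 1024**2, 256 * 1024**3]

def superblock_copies(device_size_bytes: int) -> int:
    """Count how many superblock mirrors fit on a device of this size."""
    # A copy exists only if its offset plus a 4 KiB block fits on disk.
    return sum(1 for off in SUPERBLOCK_OFFSETS
               if off + 4096 <= device_size_bytes)

print(superblock_copies(1 * 1024**3))   # 1 GiB device -> 2 copies
print(superblock_copies(1 * 1024**4))   # 1 TiB device -> 3 copies
```

But that only protects the superblock itself; the tree roots, chunk
tree, extent tree and so on get no such extra copies beyond the normal
RAID profile.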

> The one exception is for SSDs whereby there is the excuse that you
> cannot know whether your data is usefully replicated across different
> erase blocks on a single device, and SSDs are not 'that big' anyhow.
> 
> 
> So... Your idea of replicating metadata multiple times in proportion to
> assumed 'importance' or 'extent of impact if lost' is an interesting
> approach. However, is that appropriate and useful considering the real
> world failure mechanisms that are to be guarded against?
> 
> Do you see or measure any real advantage?

   This. How many copies do you actually need? Are there concrete
statistics to show the marginal utility of each additional copy?
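   The naive back-of-envelope model (an assumption, not real failure
data): if each copy of a block is lost independently with probability
p, then losing all n copies has probability p**n, so each extra copy
buys exponentially less:

```python
# Hypothetical model: independent per-copy loss probability p.
# Real-world failures (whole-disk death, firmware bugs) are correlated,
# which is exactly why independence is the assumption to question.
def loss_probability(p: float, copies: int) -> float:
    """Probability that all `copies` independent replicas are lost."""
    return p ** copies

p = 1e-4  # hypothetical per-copy loss rate, for illustration only
for n in range(1, 5):
    print(n, loss_probability(p, n))
```

The catch is the independence assumption: if the dominant failure mode
takes out the whole device (or correlated erase blocks on an SSD, as
Martin notes), extra on-device copies help far less than p**n suggests.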

   Hugo.

-- 
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 65E74AC0 from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
     --- IMPROVE YOUR ORGANISMS!!  -- Subject line of spam email ---     
