On Thu, Apr 18, 2013 at 02:45:24PM +0100, Martin wrote:
> Dear Devs,
> 
> I have a number of eSATA disk packs, each holding 4 physical disks,
> which I wish to use aggregated for 16 TB and up to 64 TB of backups...
> 
> Can btrfs...?
> 
> 1:
> 
> Mirror data such that there is a copy of data on each *disk pack* ?
> 
> Note that eSATA just presents the disks as individual physical disks,
> 4 per disk pack. Can physical disks be grouped together to force the
> RAID data to be mirrored across all the nominated groups?

   Interesting you should ask this: I realised quite recently that
this could probably be done fairly easily with a modification to the
chunk allocator.
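
   Roughly, the constraint would be that the allocator never puts both
copies of a RAID-1 chunk on devices in the same nominated group. As a
toy illustration only (userspace C, not actual btrfs code -- the group
tag is a hypothetical attribute that doesn't exist in the FS today):

/* Toy sketch: pick the two stripes of a mirrored chunk from devices
 * in different user-nominated groups (e.g. one group per disk pack),
 * preferring the pair with the most free space. */
#include <stdio.h>

struct dev_info {
    const char *name;
    int group;                   /* hypothetical: one group per eSATA pack */
    unsigned long long free_bytes;
};

/* Return 0 and set *a, *b to the best cross-group pair, or -1 if none. */
static int pick_mirror_pair(const struct dev_info *devs, int ndevs,
                            int *a, int *b)
{
    int i, j;

    *a = *b = -1;
    for (i = 0; i < ndevs; i++) {
        for (j = i + 1; j < ndevs; j++) {
            if (devs[i].group == devs[j].group)
                continue;        /* same pack: one pack failure kills both copies */
            if (*a < 0 ||
                devs[i].free_bytes + devs[j].free_bytes >
                devs[*a].free_bytes + devs[*b].free_bytes) {
                *a = i;
                *b = j;
            }
        }
    }
    return *a >= 0 ? 0 : -1;
}

int main(void)
{
    const struct dev_info devs[] = {
        { "sdb", 0, 900 }, { "sdc", 0, 850 },    /* pack 0 */
        { "sdd", 1, 870 }, { "sde", 1, 910 },    /* pack 1 */
    };
    int a, b;

    if (pick_mirror_pair(devs, 4, &a, &b) == 0)
        printf("mirror across %s (pack %d) and %s (pack %d)\n",
               devs[a].name, devs[a].group, devs[b].name, devs[b].group);
    return 0;
}

The real allocator would apply the same rule each time it carves out a
new chunk, so the grouping would hold for all data as it's written.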

> 2:
> 
> Similarly for a mix of different storage technologies such as
> manufacturer or type (SSD/HDD), can the disks be grouped to ensure a
> copy of the data is replicated across all the groups?
> 
> For example, I deliberately buy HDDs from different
> batches/manufacturers to try to avoid common mode or similarly timed
> failures. Can btrfs be guided to safely spread the RAID data across the
> *different* hardware types/batches?

   From the kernel point of view, this is the same question as the
previous one.

> 3:
> 
> Also, for disks of different speeds, can btrfs tune itself to balance
> reads and writes accordingly?

   Not that I'm aware of.

> 4:
> 
> Further thought: For SSDs, is the "minimise head movement" 'staircase'
> code bypassed so as to speed up allocation for the "don't care"
> addressing (near-zero seek time) of SSDs?

   I think this is more to do with the behaviour of the block layer
than with the FS. There are alternative elevators that can be used,
but I don't know whether they need configuring at all for this case.
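
   (For reference, elevators can at least be switched per-device
through sysfs. A minimal sketch, assuming a sysfs-based kernel -- the
device name is only an example, and whether noop actually helps here
is exactly the part I don't know:)

/* Read the available elevators for one device, then select noop,
 * which does no seek-ordering at all. Needs root to write. */
#include <stdio.h>

int main(void)
{
    const char *path = "/sys/block/sda/queue/scheduler";
    char line[256];
    FILE *f;

    /* The active elevator is shown in brackets, e.g. "noop deadline [cfq]". */
    f = fopen(path, "r");
    if (!f) { perror(path); return 1; }
    if (fgets(line, sizeof(line), f))
        printf("available: %s", line);
    fclose(f);

    /* Writing one of the listed names switches the elevator. */
    f = fopen(path, "w");
    if (!f) { perror(path); return 1; }
    if (fputs("noop", f) == EOF) { perror(path); fclose(f); return 1; }
    fclose(f);
    return 0;
}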

> And then again: Is 64TBytes of btrfs a good idea in the first place?!
> 
> (There's more than one physical set of backups but I'd rather not suffer
> weeks to recover from one hiccup in the filesystem... Should I partition
> btrfs down to smaller gulps, or does the structure of btrfs in effect
> already do that?)

   You have backups, which is good. Keep up with the latest kernels
from kernel.org. The odds of you hitting something major are small,
but non-zero. One thing that's probably fairly likely with your setup
is accidental disconnection of a disk or block of disks. Having
duplicate data is really quite handy in that instance -- if you lose
one device and reinsert it, you can recover easily with a scrub (btrfs
scrub start on the mountpoint; I've done that). If you lose multiple
devices in a block, then the FS will
probably go read-only and stop any further damage from being done
until it can be unmounted and the hardware reassembled (I've done this
too(+), with half of my 10 TB(*) array).

   So with light home use on a largeish array, I've had a number of
cockups recently that were recoverable, albeit with some swearing.

   On the other hand, it's entirely possible that something else will
go wrong and things will blow up. My guess is that unless you have
really dodgy hardware that keeps screwing stuff up, you _probably_
won't have to restore from backup. But it still may happen. It's
really hard to put figures on it, because (a) we don't have figures on
how many people actually use the FS, and (b) we know of very few
users working at the >10 TB level.

   Hugo.

(+) eSATA connectors look regrettably similar to HDMI connectors in
the half-light under the desk.
(*) 5 TB after RAID-1.

-- 
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 65E74AC0 from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
    --- But somewhere along the line, it seems / That pimp became ---    
                       cool,  and punk mainstream.                       
