Re: [PATCH] btrfs-progs: output raid[56] options in mkfs.btrfs

2013-03-12 Thread David Sterba
On Sun, Mar 10, 2013 at 09:30:13PM +0100, Matias Bjørling wrote:
> This patch adds the raid[56] options to the output of mkfs.btrfs help.
Thanks, there was a patch for that in my branch already. Please don't forget to add your Signed-off-by line.
david

[PATCH] btrfs-progs: output raid[56] options in mkfs.btrfs

2013-03-10 Thread Matias Bjørling
This patch adds the raid[56] options to the output of mkfs.btrfs help.
---
 mkfs.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mkfs.c b/mkfs.c
index 5ece186..f9f26a5 100644
--- a/mkfs.c
+++ b/mkfs.c
@@ -326,7 +326,7 @@ static void print_usage(void)
	fprintf(stderr
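With a patched mkfs.btrfs, the new profiles plug into the existing -d (data) and -m (metadata) options. A hypothetical invocation, with illustrative device names:

  mkfs.btrfs -d raid6 -m raid6 /dev/sdb /dev/sdc /dev/sdd /dev/sde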

Re: RAID-[56]?

2013-01-28 Thread Gareth Pye
> More testing usually means more bugs found etc…
Yes, but releasing code before it's somewhat polished just generates a mountain of bug reports. Back in 2010, when I set up a server at work, I was eagerly awaiting the RAID5 implementation that was just a couple of months away. Don't worry it

RAID-[56]?

2013-01-27 Thread Roy Sigurd Karlsbakk
Hi all I've heard raid-[56] is on its way somehow, and may be added to the next (or the one after (or perhaps a bit later)) kernel. While this is good, I want to ask if I can check out this source tree for testing (typically in a VM). More testing usually means more bugs found etc

RAID[56] status?

2010-05-23 Thread Roy Sigurd Karlsbakk
Hi all
It's about a year now since I saw the first posts about RAID[56] in Btrfs. Has this gotten any further?
Vennlige hilsener / Best regards
roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/

Re: RAID[56] status?

2010-05-23 Thread Mike Fedyk
On Sun, May 23, 2010 at 1:55 PM, Roy Sigurd Karlsbakk r...@karlsbakk.net wrote:
> Hi all
> It's about a year now since I saw the first posts about RAID[56] in Btrfs. Has this gotten any further?
There are patches in development. Nothing ready to test yet.

Updating RAID[56] support

2010-04-30 Thread David Woodhouse
, em->len - offset,
-			map->stripe_len - stripe_offset);
+	/* For writes to RAID[56], allow a full stripe, not just a single
+	 * disk's worth */
+	if (map->type & (BTRFS_BLOCK_GROUP_RAID5 | BTRFS_BLOCK_GROUP_RAID6
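The effect of the change, distilled into a standalone sketch (names and types are illustrative, not the kernel's):

#include <stdint.h>

/* Sketch only: upper bound on the length of one mapped write.  For
 * RAID5/6, allow a full stripe (all data members), not just one
 * disk's worth.  nr_data is members minus 1 (RAID5) or 2 (RAID6). */
static uint64_t max_write_len(uint64_t stripe_len, uint64_t stripe_offset,
			      int nr_data, int is_raid56)
{
	uint64_t len = stripe_len - stripe_offset;

	if (is_raid56)
		len += (uint64_t)(nr_data - 1) * stripe_len;
	return len;
}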

Re: Updating RAID[56] support

2010-04-30 Thread Josef Bacik
On Thu, Apr 29, 2010 at 07:06:06PM +0100, David Woodhouse wrote:
> I've been looking again at the RAID5/RAID6 support, and updated the tree at git://git.infradead.org/users/dwmw2/btrfs-raid56.git#merged
> At the moment, we limit writes to a single disk's worth at a time, which means we _always_

Re: Updating RAID[56] support

2010-04-30 Thread Roy Sigurd Karlsbakk
- David Woodhouse dw...@infradead.org wrote:
> I've been looking again at the RAID5/RAID6 support, and updated the tree at git://git.infradead.org/users/dwmw2/btrfs-raid56.git#merged
> At the moment, we limit writes to a single disk's worth at a time, which means we _always_ do the

Re: Updating RAID[56] support

2010-04-30 Thread David Woodhouse
On Fri, 2010-04-30 at 14:39 -0400, Josef Bacik wrote: It seems to work, and recovery is successful when I mount the file system with -oro,degraded. But in read-write mode it'll oops (even without the below patch) because it's trying to _write_ to the degraded RAID6. Last time I was testing

Re: RAID[56] status

2009-11-10 Thread Dan Williams
On Thu, Aug 6, 2009 at 3:17 AM, David Woodhouse dw...@infradead.org wrote:
> If we've abandoned the idea of putting the number of redundant blocks into the top bits of the type bitmask (and I hope we have), then we're fairly much there. Current code is at: git://,

Re: RAID[56] status

2009-11-10 Thread Chris Mason
On Tue, Nov 10, 2009 at 12:51:06PM -0700, Dan Williams wrote:
> On Thu, Aug 6, 2009 at 3:17 AM, David Woodhouse dw...@infradead.org wrote:
> > If we've abandoned the idea of putting the number of redundant blocks into the top bits of the type bitmask (and I hope we have), then we're fairly much

Re: RAID[56] status

2009-11-10 Thread tsuraan
3/ The md-raid6 recovery code assumes that there are always at least two good blocks to perform recovery. That makes the current minimum number of raid6 members 4, not 3. (Small nit: the btrfs code calls members 'stripes'; in md, a stripe of data is a collection of blocks from all members.)
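For readers unfamiliar with the arithmetic, here is a standalone sketch of the P/Q scheme md uses, over GF(2^8) with reduction polynomial 0x11d and generator 2 (illustrative only; the two-failure recovery paths mentioned above additionally assume at least two data members, hence the four-member minimum):

#include <stdio.h>

/* multiply by 2 in GF(2^8), polynomial 0x11d, as md's raid6 does */
static unsigned char gf_mul2(unsigned char v)
{
	return (unsigned char)((v << 1) ^ ((v & 0x80) ? 0x1d : 0));
}

static void make_pq(const unsigned char *d, int n,
		    unsigned char *p, unsigned char *q)
{
	int i;

	*p = *q = d[n - 1];
	for (i = n - 2; i >= 0; i--) {
		*p ^= d[i];			/* P: plain XOR parity */
		*q = gf_mul2(*q) ^ d[i];	/* Q: Horner form of sum 2^i * d[i] */
	}
}

int main(void)
{
	unsigned char d[4] = { 0x11, 0x22, 0x33, 0x44 };  /* one byte per data member */
	unsigned char p, q;

	make_pq(d, 4, &p, &q);
	printf("P=%02x Q=%02x\n", p, q);

	/* single data failure: rebuild d[2] from P and the survivors */
	printf("rebuilt d[2]=%02x (was %02x)\n", p ^ d[0] ^ d[1] ^ d[3], d[2]);
	return 0;
}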

Re: RAID[56] status

2009-11-10 Thread Gregory Maxwell
On Tue, Nov 10, 2009 at 4:06 PM, tsuraan tsur...@gmail.com wrote:
> 3/ The md-raid6 recovery code assumes that there are always at least two good blocks to perform recovery. That makes the current minimum number of raid6 members 4, not 3. (Small nit: the btrfs code calls members 'stripes'; in md

Re: RAID[56] with arbitrary numbers of parity stripes.

2009-08-22 Thread tsuraan
> We discussed using the top bits of the chunk type field to store a number of redundant disks -- so instead of RAID5, RAID6, etc., we end up with a single 'RAID56' flag, and the amount of redundancy is stored elsewhere.
Is there any sort of timeline for RAID5/6 support in btrfs? I
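A sketch of what such an encoding could look like (purely hypothetical; the flag value and bit positions below are invented for illustration, and later messages in this archive note the idea was abandoned):

#include <stdint.h>

/* Hypothetical layout, not the actual btrfs on-disk format: a single
 * RAID56 flag in the chunk type bitmask, with the number of redundant
 * (parity) stripes packed into the top 4 bits. */
#define EX_BLOCK_GROUP_RAID56	(1ULL << 7)
#define EX_REDUNDANCY_SHIFT	60
#define EX_REDUNDANCY_MASK	(0xFULL << EX_REDUNDANCY_SHIFT)

static uint64_t ex_set_redundancy(uint64_t type, unsigned int n)
{
	return (type & ~EX_REDUNDANCY_MASK) | EX_BLOCK_GROUP_RAID56 |
	       ((uint64_t)n << EX_REDUNDANCY_SHIFT);
}

static unsigned int ex_get_redundancy(uint64_t type)
{
	return (unsigned int)((type & EX_REDUNDANCY_MASK) >> EX_REDUNDANCY_SHIFT);
}

Under such a scheme RAID5 would simply be redundancy 1 and RAID6 redundancy 2, with higher parity counts falling out for free.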

Re: RAID[56] with arbitrary numbers of parity stripes.

2009-08-22 Thread Roy Sigurd Karlsbakk
> of timeline for RAID5/6 support in btrfs? I currently have 8 drives in a zfs-fuse RAIDZ2 (RAID6) configuration, and I'd love to see how btrfs compares to that, once it's ready.
I think someone started doing RAID[56] (see the threads "A start at RAID[56] support" and perhaps "Factor out RAID6 algorithms"

Re: RAID[56] with arbitrary numbers of parity stripes.

2009-08-22 Thread tsuraan
> By the way - how does FUSE ZFS work? Is it stable? Good performance? We're using ZFS natively on Solaris 10 now, perhaps moving the storage to opensolaris soon.
It's pretty stable; I wouldn't put anything on it that isn't backed up, but I guess that holds for any other filesystem. The speed

Re: RAID[56] status

2009-08-07 Thread Roy Sigurd Karlsbakk
Hi
This is great. How does the current code handle corruption on a drive, or two drives with RAID-6, in a stripe? Is the checksumming done per drive or for the whole stripe?
roy
On 6 Aug 2009, at 12:17, David Woodhouse wrote:
> If we've abandoned the idea of putting the number of

RAID[56] recovery...

2009-07-14 Thread David Woodhouse
On Mon, 2009-07-13 at 11:05 +0100, David Woodhouse wrote:
> This hack serves two purposes:
>  - It does actually write parity (and RAID6 syndrome) blocks so that I can implement and test the recovery.

diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index 1f509ab..a23510b 100644
---

Re: A start at RAID[56] support.

2009-07-14 Thread David Woodhouse
+1034,24 @@ again:
 			stripes_required = map->sub_stripes;
 		}
 	}
+	if (map->type & (BTRFS_BLOCK_GROUP_RAID5 | BTRFS_BLOCK_GROUP_RAID6) &&
+	    multi_ret && ((rw & WRITE) || mirror_num > 1) && raid_map_ret) {
+		/* RAID[56] write

Re: A start at RAID[56] support.

2009-07-13 Thread David Woodhouse
full stripe information for RAID[56] ... in the cases where it's necessary -- which is for a write, or for a parity recovery attempt. We'll let btrfs_map_bio() do the rest.

Signed-off-by: David Woodhouse david.woodho...@intel.com

diff --git a/fs/btrfs/volumes.c b/fs/btrfs
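The decision being described, distilled into a standalone predicate (a sketch under assumptions; the flag values and names are illustrative, not the kernel's):

#include <stdint.h>

#define EX_RAID5	(1ULL << 3)	/* illustrative flag values */
#define EX_RAID6	(1ULL << 4)
#define EX_WRITE	1

/* Full-stripe mapping is needed only for RAID5/6, and only when
 * writing or attempting recovery (asked for a mirror beyond the
 * first); plain reads map a single member as before. */
static int need_full_stripe(uint64_t type, int rw, int mirror_num)
{
	if (!(type & (EX_RAID5 | EX_RAID6)))
		return 0;
	return (rw & EX_WRITE) || mirror_num > 1;
}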

First attempt at writing RAID[56] parity stripes

2009-07-13 Thread David Woodhouse
This is a fairly crap hack. Even if the file system _does_ want to write a full stripe-set at a time, the merge_bio_hook logic will prevent it from doing so, and ensure that we always have to read the other stripes to recreate the parity -- with all the concurrency issues that involves. The
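A worked example of the cost under assumed geometry (numbers illustrative): take RAID6 with 4 data members and 2 parity members, stripe_len 64 KiB.

  full-stripe write: write 4 data + 2 parity           = 6 writes, 0 reads
  sub-stripe write:  read the 3 untouched data stripes,
                     write 1 data + 2 parity           = 3 reads + 3 writes

That read-modify-write path is also where the concurrency issues come in: another writer can change a stripe member between the read and the parity write unless the whole stripe is locked.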

A start at RAID[56] support.

2009-07-11 Thread David Woodhouse
-num_stripes + i;
-	}
-	bytenr = chunk_start + stripe_nr * map->stripe_len;
+	} /* else if RAID[56], multiply by nr_data_stripes().
+	   * Alternatively, just use rmap_len below instead of
+	   * map->stripe_len
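The arithmetic in question, with illustrative numbers (stripe_len = 64 KiB, a 4-data-member RAID6, chunk_start = 0, stripe_nr = 3):

  single-disk mapping:  bytenr = chunk_start + stripe_nr * stripe_len                     = 3 * 64 KiB  = 192 KiB
  RAID[56] mapping:     bytenr = chunk_start + stripe_nr * stripe_len * nr_data_stripes() = 3 * 256 KiB = 768 KiB

since one logical full stripe covers nr_data_stripes() * stripe_len = 4 * 64 KiB = 256 KiB.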