Hi all
I've heard RAID[56] is on its way and may be added to the next kernel release
(or the one after, or perhaps a bit later). While this is good, I want to ask
if I can check out this source tree for testing (typically in a VM). More
testing usually means more bugs found etc.
Hi all
It's about a year now since I saw the first posts about RAID[56] in Btrfs. Has
this gotten any further?
Vennlige hilsener / Best regards
roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented
On Mon, 2009-07-13 at 11:05 +0100, David Woodhouse wrote:
>
> This hack serves two purposes:
> - It does actually write parity (and RAID6 syndrome) blocks so that I
>   can implement and test the recovery.
diff --git a/fs/btrfs/volumes.c b/fs/btrfs/volumes.c
index 1f509ab..a23510b 100644
--- a/fs/btrfs/volumes.c
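
For anyone curious about the parity and syndrome maths referred to above,
here is a minimal standalone sketch (my own illustration, not the btrfs or
md code; the block count, block size and buffer names are invented): the P
block is a plain XOR of the data blocks, and the Q (RAID6 syndrome) block
is a Reed-Solomon sum over GF(2^8) with the usual 0x11d polynomial and
generator 2, the same conventions md's raid6 code uses.

/* Illustrative only: compute RAID6 P and Q for one toy stripe. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NDATA 4        /* data blocks per stripe (hypothetical) */
#define BLKSZ 16       /* tiny block size, just for the demo */

/* multiply two GF(2^8) elements, reducing by x^8+x^4+x^3+x^2+1 (0x11d) */
static uint8_t gf_mul(uint8_t a, uint8_t b)
{
	uint8_t p = 0;
	while (b) {
		if (b & 1)
			p ^= a;
		b >>= 1;
		a = (a << 1) ^ ((a & 0x80) ? 0x1d : 0);
	}
	return p;
}

int main(void)
{
	uint8_t data[NDATA][BLKSZ], p[BLKSZ], q[BLKSZ];
	int i, j, k;

	/* fill the data blocks with something recognisable */
	for (i = 0; i < NDATA; i++)
		for (j = 0; j < BLKSZ; j++)
			data[i][j] = (uint8_t)(i * 16 + j);

	memset(p, 0, BLKSZ);
	memset(q, 0, BLKSZ);

	/* P = D0 ^ D1 ^ ...   Q = D0 ^ g*D1 ^ g^2*D2 ^ ...  (g = 2) */
	for (i = 0; i < NDATA; i++) {
		uint8_t coeff = 1;
		for (k = 0; k < i; k++)
			coeff = gf_mul(coeff, 2);
		for (j = 0; j < BLKSZ; j++) {
			p[j] ^= data[i][j];
			q[j] ^= gf_mul(coeff, data[i][j]);
		}
	}

	printf("P[0]=%02x Q[0]=%02x\n", p[0], q[0]);
	return 0;
}
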
If we've abandoned the idea of putting the number of redundant blocks
into the top bits of the type bitmask (and I hope we have), then we're
fairly much there. Current code is at:
git://, http://git.infradead.org/users/dwmw2/btrfs-raid56.git
git://, http://git.infradead.org/users/dwmw2/btrfs
> More testing usually means more bugs found etc…
Yes, but releasing code before it's somewhat polished just generates a
mountain of bug reports.
Back in 2010, when I set up a server at work, I was eagerly awaiting the
RAID5 implementation that was just a couple of months away.
Don't worry, it doe
BTRFS_BLOCK_GROUP_RAID10 |
BTRFS_BLOCK_GROUP_DUP)) {
/* we limit the length of each bio to what fits in a stripe */
- *length = min_t(u64, em->len - offset,
- map->stripe_len - stripe_offset);
+
On Sun, May 23, 2010 at 1:55 PM, Roy Sigurd Karlsbakk wrote:
> Hi all
>
> It's about a year now since I saw the first posts about RAID[56] in Btrfs.
> Has this gotten any further?
>
There are patches in development. Nothing ready to test yet.
Hi
This is great. How does the current code handle corruption on a drive,
or two drives with RAID-6 in a stripe? Is the checksumming done per
drive or for the whole stripe?
roy
On 6 Aug 2009, at 12:17, David Woodhouse wrote:
If we've abandoned the idea of putting the number of redunda
On Fri, 2009-08-07 at 11:43 +0200, Roy Sigurd Karlsbakk wrote:
> This is great. How does the current code handle corruption on a drive,
> or two drives with RAID-6 in a stripe? Is the checksumming done per
> drive or for the whole stripe?
http://git.infradead.org/users/dwmw2/btrfs-raid56.git
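
For reference, btrfs checksums data per block (crc32c by default, stored
separately in a checksum tree), not per drive or per whole stripe, so
corruption is detected block by block and, once parity is available, only
the failing blocks would need rebuilding. A tiny standalone sketch of that
check (my own illustration, not the kernel code; the block size, seeding
details and the "rebuild" step here are only placeholders):

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* plain bitwise crc32c (Castagnoli polynomial, reflected form) */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
	uint32_t crc = 0xffffffff;
	while (len--) {
		crc ^= *buf++;
		for (int k = 0; k < 8; k++)
			crc = (crc >> 1) ^ (0x82f63b78 & -(crc & 1));
	}
	return ~crc;
}

int main(void)
{
	uint8_t block[4096];
	memset(block, 0xab, sizeof(block));

	uint32_t stored = crc32c(block, sizeof(block)); /* csum at write time */

	block[100] ^= 0x01;                             /* simulate corruption */

	if (crc32c(block, sizeof(block)) != stored)
		printf("csum mismatch: rebuild this block from the other "
		       "stripes plus parity, then re-verify\n");
	return 0;
}
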
--
On Thu, Aug 6, 2009 at 3:17 AM, David Woodhouse wrote:
> If we've abandoned the idea of putting the number of redundant blocks
> into the top bits of the type bitmask (and I hope we have), then we're
> fairly much there. Current code is at:
>
> git://, http://git.infradead.org/users/dwmw2/btrfs-
On Tue, Nov 10, 2009 at 12:51:06PM -0700, Dan Williams wrote:
> 4/ A small issue, there appears to be no way to specify different
> raid10/5/6 data layouts, maybe I missed it. See the --layout option
> to mdadm. It appears the only layout option is the raid level.
Is this really important? In
On Tue, Nov 10, 2009 at 12:51:06PM -0700, Dan Williams wrote:
> On Thu, Aug 6, 2009 at 3:17 AM, David Woodhouse wrote:
> > If we've abandoned the idea of putting the number of redundant blocks
> > into the top bits of the type bitmask (and I hope we have), then we're
> > fairly much there. Current
> 3/ The md-raid6 recovery code assumes that there is always at least
> two good blocks to perform recovery. That makes the current minimum
> number of raid6 members 4, not 3. (small nit the btrfs code calls
> members 'stripes', in md a stripe of data is a collection of blocks
> from all members)
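
To make the point about good blocks concrete, here is a standalone sketch
(illustration only, not the md or btrfs code; sizes and names are made up)
of the recovery case where one data block and P are both lost: the surviving
data blocks plus Q are enough to get the missing data back, after which P is
a plain XOR again. The GF(2^8) conventions (generator 2, polynomial 0x11d)
match what md's raid6 uses.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NDATA 4
#define BLKSZ 8

static uint8_t gf_mul(uint8_t a, uint8_t b)
{
	uint8_t p = 0;
	while (b) {
		if (b & 1)
			p ^= a;
		b >>= 1;
		a = (a << 1) ^ ((a & 0x80) ? 0x1d : 0);
	}
	return p;
}

static uint8_t gf_pow2(int e)	/* 2^e in GF(2^8), e >= 0 */
{
	uint8_t r = 1;
	while (e--)
		r = gf_mul(r, 2);
	return r;
}

int main(void)
{
	uint8_t data[NDATA][BLKSZ], q[BLKSZ], recovered[BLKSZ];
	int x = 2;		/* index of the "lost" data block */
	int i, j;

	for (i = 0; i < NDATA; i++)
		for (j = 0; j < BLKSZ; j++)
			data[i][j] = (uint8_t)(17 * i + j + 1);

	/* Q over the original, intact stripe */
	memset(q, 0, BLKSZ);
	for (i = 0; i < NDATA; i++)
		for (j = 0; j < BLKSZ; j++)
			q[j] ^= gf_mul(gf_pow2(i), data[i][j]);

	/* D_x and P are considered lost; data[x] is kept only to verify.
	 * Compute Q' over the survivors, then D_x = g^-x * (Q ^ Q').
	 * g^-x == g^(255-x) because the multiplicative group has order 255. */
	for (j = 0; j < BLKSZ; j++) {
		uint8_t qprime = 0;
		for (i = 0; i < NDATA; i++)
			if (i != x)
				qprime ^= gf_mul(gf_pow2(i), data[i][j]);
		recovered[j] = gf_mul(gf_pow2(255 - x), q[j] ^ qprime);
	}

	printf("recovery %s\n",
	       memcmp(recovered, data[x], BLKSZ) ? "FAILED" : "ok");
	return 0;
}
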
On Tue, Nov 10, 2009 at 4:06 PM, tsuraan wrote:
>> 3/ The md-raid6 recovery code assumes that there is always at least
>> two good blocks to perform recovery. That makes the current minimum
>> number of raid6 members 4, not 3. (small nit the btrfs code calls
>> members 'stripes', in md a stripe
On Thu, Apr 29, 2010 at 07:06:06PM +0100, David Woodhouse wrote:
> I've been looking again at the RAID5/RAID6 support, and updated the tree
> at git://git.infradead.org/users/dwmw2/btrfs-raid56.git#merged
>
> At the moment, we limit writes to a single disk's worth at a time, which
> means we _alwa
- "David Woodhouse" skrev:
> I've been looking again at the RAID5/RAID6 support, and updated the tree
> at git://git.infradead.org/users/dwmw2/btrfs-raid56.git#merged
>
> At the moment, we limit writes to a single disk's worth at a time, which
> means we _always_ do the read-calculate-par
On Fri, 2010-04-30 at 14:39 -0400, Josef Bacik wrote:
> > It seems to work, and recovery is successful when I mount the file
> > system with -oro,degraded. But in read-write mode it'll oops (even
> > without the below patch) because it's trying to _write_ to the degraded
> > RAID6. Last time I was
I am very interested in support for this functionality. I have been
using mdadm for all of my redundant file systems, but it requires I
have homogeneous partitions/drives for a volume. I would very much
like to use a heterogeneous drive structure for a large redundant
volume and have been waiting i
else if (map->type & (BTRFS_BLOCK_GROUP_RAID5 |
+ BTRFS_BLOCK_GROUP_RAID6)) {
+ do_div(length, nr_data_stripes(map));
+ rmap_len = map->stripe_len * nr_data_stripes(map);
+ }
buf = kzalloc(sizeof(u64) * map->num_stripes, GFP_NOFS);
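
As a worked example of the arithmetic in that hunk (my own numbers and
struct, not from the patch): the data stripes are the total stripes minus
the parity stripes, a logical extent is spread over only the data stripes,
and one full stripe covers stripe_len times nr_data_stripes of logical
address space.

#include <stdio.h>
#include <stdint.h>

struct fake_map {
	int num_stripes;	/* devices in the chunk */
	int nparity;		/* 1 for RAID5, 2 for RAID6 */
	uint64_t stripe_len;	/* bytes per device per stripe, e.g. 64K */
};

static int nr_data_stripes(const struct fake_map *m)
{
	return m->num_stripes - m->nparity;
}

int main(void)
{
	struct fake_map m = { .num_stripes = 6, .nparity = 2,
			      .stripe_len = 64 * 1024 };
	uint64_t logical_len = 1024 * 1024;	/* a 1MiB logical extent */

	/* the logical extent is spread across the data stripes only */
	uint64_t per_device = logical_len / nr_data_stripes(&m);
	/* one full stripe covers this much of the logical address space */
	uint64_t full_stripe = m.stripe_len * (uint64_t)nr_data_stripes(&m);

	printf("data stripes: %d\n", nr_data_stripes(&m));
	printf("per-device length for 1MiB: %llu\n",
	       (unsigned long long)per_device);
	printf("logical bytes per full stripe: %llu\n",
	       (unsigned long long)full_stripe);
	return 0;
}
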
Hello list,
for an internal demonstration system due in Q1/2011, I am very much interested
in setting up a btrfs fs with RAID5; RAID1 just won't cut it.
As can be read in the btrfs wiki [1], the 'magic' number for RAID5 support in
btrfs is 2.6.37, which is just around the corner.
So my question i
map->sub_stripes;
} else if (map->type & BTRFS_BLOCK_GROUP_RAID0) {
stripe_nr = stripe_nr * map->num_stripes + i;
- }
- bytenr = ce->start + stripe_nr * map->stripe_len;
+ } /* else if
Btrfs: Let btrfs_map_block() return full stripe information for RAID[56]
... in the cases where it's necessary -- which is for a write, or for a
parity recovery attempt. We'll let btrfs_map_bio() do the rest.
Signed-off-by: David Woodhouse
diff --git a/
u64 *raid_map = NULL;
int stripes_allocated = 8;
int stripes_required = 1;
int stripe_index;
@@ -1026,10 +1034,24 @@ again:
stripes_required = map->sub_stripes;
}
}
+ if (map->type & (B
This is a fairly crap hack. Even if the file system _does_ want to write
a full stripe-set at a time, the merge_bio_hook logic will prevent it
from doing so, and ensure that we always have to read the other stripes
to recreate the parity -- with all the concurrency issues that involves.
The raid_h
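
For anyone following along, the cost being described is the classic
read-modify-write parity update. A standalone sketch (toy code, not the
btrfs code; sizes and names are invented): updating a single block means
reading back the old data and old parity first, because
P_new = P_old ^ D_old ^ D_new, whereas a full-stripe write can compute
parity purely from the data already in hand.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

#define NDATA 4
#define BLKSZ 8

/* full-stripe write: parity is just the XOR of the data being written */
static void parity_full_stripe(uint8_t d[NDATA][BLKSZ], uint8_t p[BLKSZ])
{
	memset(p, 0, BLKSZ);
	for (int i = 0; i < NDATA; i++)
		for (int j = 0; j < BLKSZ; j++)
			p[j] ^= d[i][j];
}

/* read-modify-write: changing one block needs the old data and old parity,
 * then P_new = P_old ^ D_old ^ D_new */
static void parity_rmw(uint8_t p[BLKSZ],
		       const uint8_t d_old[BLKSZ], const uint8_t d_new[BLKSZ])
{
	for (int j = 0; j < BLKSZ; j++)
		p[j] ^= d_old[j] ^ d_new[j];
}

int main(void)
{
	uint8_t data[NDATA][BLKSZ], p[BLKSZ], check[BLKSZ], newblk[BLKSZ];

	for (int i = 0; i < NDATA; i++)
		for (int j = 0; j < BLKSZ; j++)
			data[i][j] = (uint8_t)(i + j);
	parity_full_stripe(data, p);

	/* overwrite block 1 via the RMW path */
	for (int j = 0; j < BLKSZ; j++)
		newblk[j] = (uint8_t)(0xa0 + j);
	parity_rmw(p, data[1], newblk);
	memcpy(data[1], newblk, BLKSZ);

	/* recomputing from scratch must give the same parity */
	parity_full_stripe(data, check);
	printf("parity %s\n",
	       memcmp(p, check, BLKSZ) ? "MISMATCH" : "consistent");
	return 0;
}
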
CK_GROUP_RAID10)
ret = map->sub_stripes;
- else if (map->type & BTRFS_BLOCK_GROUP_RAID5)
- ret = 2;
- else if (map->type & BTRFS_BLOCK_GROUP_RAID6)
- ret = 3;
+ else if (map->type & BTRFS_BLOCK_GROUP_RA
> We discussed using the top bits of the chunk type field to store a
> number of redundant disks -- so instead of RAID5, RAID6, etc., we end up
> with a single 'RAID56' flag, and the amount of redundancy is stored
> elsewhere.
Is there any sort of timeline for RAID5/6 support in btrfs? I
currently have 8 drives in a zfs-fuse RAIDZ2 (RAID6) configuration,
and I'd love to see how btrfs compares to that, once it's ready.
I think someone started doing RAID[56] (see threads "A start at
RAID[56] support"
> By the way - how does FUSE ZFS work? Is it stable? Good performance?
> We're using ZFS natively on Solaris 10 now, perhaps moving the storage
> to opensolaris soon.
It's pretty stable; I wouldn't put anything on it that isn't backed
up, but I guess that holds for any other filesystem. The speed
This patch adds the raid[56] options to the output of mkfs.btrfs help.
---
mkfs.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mkfs.c b/mkfs.c
index 5ece186..f9f26a5 100644
--- a/mkfs.c
+++ b/mkfs.c
@@ -326,7 +326,7 @@ static void print_usage(void)
fprintf(stderr
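
As a usage note, with a mkfs.btrfs built from a raid56-capable branch the new
profiles are selected the same way as the existing ones, e.g.
mkfs.btrfs -m raid6 -d raid6 /dev/sdb /dev/sdc /dev/sdd /dev/sde
(the device names here are only an example).
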
On Sun, Mar 10, 2013 at 09:30:13PM +0100, Matias Bjørling wrote:
> This patch adds the raid[56] options to the output of mkfs.btrfs help.
Thanks, there was a patch for that in my branch already. Please don't
forget to add your signed-off-by line.
david