On 21 February 2012 at 07:54, Hugo Mills wrote:
Some time ago, I proposed the following scheme:
nCmSpP
where n is the number of copies (suffixed by C), m is the number of
stripes for that data (suffixed by S), and p is the number of parity
blocks (suffixed by P). Values of zero are
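The scheme reads mechanically enough that a tiny formatter can illustrate it (a sketch only -- `replication_label` is a hypothetical helper, and omitting zero-valued fields is an assumption, since the quoted sentence is cut off at "Values of zero are"):

```python
def replication_label(copies=1, stripes=0, parity=0):
    """Render a storage layout in the proposed nCmSpP notation:
    n copies (C), m stripes (S), p parity blocks (P)."""
    parts = []
    if copies:  # assumption: zero-valued fields are simply omitted
        parts.append(f"{copies}C")
    if stripes:
        parts.append(f"{stripes}S")
    if parity:
        parts.append(f"{parity}P")
    return "".join(parts)

print(replication_label(copies=2))                       # 2C     (RAID-1-like)
print(replication_label(copies=1, stripes=3, parity=1))  # 1C3S1P (RAID-5-like)
```

The appeal of the notation is exactly this compositionality: any mix of duplication, striping, and parity gets an unambiguous name instead of a RAID-level pigeonhole.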
Hugo Mills posted on Tue, 21 Feb 2012 01:21:48 + as excerpted:
On Mon, Feb 20, 2012 at 08:13:43PM -0500, Tom Cameron wrote:
On Mon, Feb 20, 2012 at 8:07 PM, Hugo Mills h...@carfax.org.uk wrote:
However, you can remove any one drive, and your data is fine, which is what btrfs's
I had a 4 drive RAID10 btrfs setup that I added a fifth drive to with
the btrfs device add command. Once the device was added, I used the
balance command to distribute the data through the drives. This
resulted in an infinite run of the btrfs tool with data moving back
and forth across the drives
I've noticed similar behavior even when RAID0'ing an odd number of devices, which should be even more trivial in practice.
You would expect something like:
sda A1 B1
sdb A2 B2
sdc A3 B3
or at least, if BTRFS can only handle block pairs,
sda A1 B2
sdb A2 C1
sdc B1 C2
But the end result was
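The expected layout above is plain round-robin placement; a short sketch (illustrative only, not btrfs's actual chunk allocator) reproduces the table:

```python
def stripe_layout(devices, stripe_names):
    """Place element i of each stripe on device i, round-robin, so that
    stripe A becomes A1, A2, A3 across sda, sdb, sdc, and so on."""
    layout = {dev: [] for dev in devices}
    for name in stripe_names:
        for i, dev in enumerate(devices):
            layout[dev].append(f"{name}{i + 1}")
    return layout

# Reproduces the expected table: sda A1 B1 / sdb A2 B2 / sdc A3 B3
print(stripe_layout(["sda", "sdb", "sdc"], ["A", "B"]))
```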
Sorry, I meant 'removing 2 drives' in the raid1 with 3 drives example
On Tue, Feb 21, 2012 at 11:45:51AM +1100, Wes wrote:
I've noticed similar behavior even when RAID0'ing an odd number of devices, which should be even more trivial in practice.
You would expect something like:
sda A1 B1
sdb A2 B2
sdc A3 B3
This is what it should do -- it'll use as many
I figured you meant that.
Using RAID1 on N drives would normally mean every drive holds a copy of the object. The upshot of this is that you can lose N-1 drives and still access data. In systems like ZFS or BTRFS you would also expect a read speed of N×, since you could theoretically read from all
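That expectation describes a classic mirror-everything RAID1, which is not what btrfs's raid1 profile does (btrfs keeps exactly two copies regardless of drive count, hence Hugo's "remove any one drive" guarantee elsewhere in the thread). The classic behavior can be written down as a toy model:

```python
def classic_raid1(n_drives):
    """Properties of a traditional N-way mirror: every drive holds a
    full copy, so N-1 drives may fail and reads can theoretically be
    spread across all N spindles."""
    if n_drives < 1:
        raise ValueError("need at least one drive")
    return {
        "copies": n_drives,
        "max_drive_losses": n_drives - 1,
        "theoretical_read_speedup": n_drives,
    }

print(classic_raid1(3))
# {'copies': 3, 'max_drive_losses': 2, 'theoretical_read_speedup': 3}
```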
On Mon, Feb 20, 2012 at 07:35:18PM -0500, Tom Cameron wrote:
On Mon, Feb 20, 2012 at 8:07 PM, Hugo Mills h...@carfax.org.uk wrote:
However, you can remove any one drive, and your data is fine, which
is what btrfs's RAID-1 guarantee is. I understand that there will be
additional features coming along Real Soon Now (possibly at the same
time that
On Tue, Feb 21, 2012 at 09:16:40AM +0800, Liu Bo wrote:
On 02/21/2012 08:45 AM, Wes wrote:
meaning removing any 1 drive would result in lost data.
Removing any disk will not lose data, because btrfs ensures all the data on the removed disk is safely placed in the right locations. And if there is not
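Liu Bo's point -- device removal relocates the outgoing disk's data before the disk goes away -- can be modeled with a toy allocator (purely illustrative: `remove_device` and the chunk-count capacities are hypothetical, and the failure case assumes the truncated sentence continues along the lines of "not enough free space, the removal fails"):

```python
def remove_device(layout, capacity, victim):
    """layout: dev -> list of chunk ids; capacity: dev -> max chunks.
    Move each of the victim's chunks to the remaining device with the
    most headroom; fail if no remaining device has room."""
    relocating = layout.pop(victim)
    for chunk in relocating:
        # pick the remaining device with the most free capacity
        target = max(layout, key=lambda d: capacity[d] - len(layout[d]))
        if len(layout[target]) >= capacity[target]:
            raise RuntimeError(f"not enough free space to remove {victim}")
        layout[target].append(chunk)
    return layout

layout = {"sda": [1, 2], "sdb": [3], "sdc": [4, 5]}
capacity = {"sda": 4, "sdb": 4, "sdc": 4}
print(remove_device(layout, capacity, "sdc"))  # sdc's chunks land on sda/sdb
```

The real mechanism relocates whole block groups through the same machinery as balance, but the invariant is the same: nothing is dropped from the victim until a valid copy exists elsewhere.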
@hugo
iirc that was on ~3.0.8 but it might have been 3.0.0. I'll revisit
the raid0 setup on a newer kernel series and test though before making
any more claims. :)
On Tue, Feb 21, 2012 at 12:27:56PM +1100, Wes wrote:
@hugo
iirc that was on ~3.0.8 but it might have been 3.0.0. I'll revisit
the raid0 setup on a newer kernel series and test though before making
any more claims. :)
There's a repeating pattern of three log messages that comes out in
Gareth,
I would completely agree. I only use the RAID vernacular here because, well, it's the unfortunate de facto standard way to talk about data protection.
I'd go a step beyond saying dupe or dupe + stripe, because future modifications could conceivably see the addition of multiple duplicated