On Wed, 27 Feb 2013 02:50:16 +
"Edward Ned Harvey (blu)" wrote:
> Or a vdev can be raidzN, where raidz1 has the redundancy to survive a
> single device failure, and behind the scenes, is implemented similar
> to raid-1e.
RAID-Z is not at all like RAID-1E. RAID-Z uses the same basic data and
> From: discuss-bounces+blu=nedharvey@blu.org [mailto:discuss-
> bounces+blu=nedharvey@blu.org] On Behalf Of Derek Atkins
>
> Thank you for the detailed description. Could you give (or point me to)
> a brief description of how ZFS's RAID differs from these configurations?
I think it's pr
On Tue, 26 Feb 2013 11:02:35 -0500
Derek Atkins wrote:
> Thank you for the detailed description. Could you give (or point me
> to) a brief description of how ZFS's RAID differs from these
> configurations?
The basic difference is that ZFS mirrors device blocks while Btrfs
replicates file extent
Dan Ritter writes:
> On Tue, Feb 26, 2013 at 11:00:58AM -0500, Derek Atkins wrote:
>> Dan Ritter writes:
>>
>> > +++
>> > How much space do I get with unequal devices in RAID-1 mode?
>>
>> I presume this is also true of RAID-10 mode?
>
> I haven't done this myself, and I'm not sure.
Fair eno
On Tue, Feb 26, 2013 at 11:00:58AM -0500, Derek Atkins wrote:
> Dan Ritter writes:
>
> > +++
> > How much space do I get with unequal devices in RAID-1 mode?
>
> I presume this is also true of RAID-10 mode?
I haven't done this myself, and I'm not sure.
> If you add new disks to an existing ar
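The incremental-growth case being asked about might be sketched like this; the mount point and device name are placeholders, and this assumes the volume is already mounted:

```shell
# Add a new disk to a mounted Btrfs volume.
btrfs device add /dev/sde /mnt

# Rebalance so existing data and metadata spread onto the new disk;
# until a balance runs, only new allocations use it.
btrfs balance start /mnt
```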
Rich,
Rich Pieri writes:
> On Mon, 25 Feb 2013 10:53:28 -0500
> Derek Atkins wrote:
>
>> How is it still raid1?
>
> What Btrfs calls "RAID" isn't actually RAID. It isn't redundant disks.
> What Btrfs calls "RAID" is actually striped or mirrored data and
> metadata.
>
> Say that you have four d
Dan Ritter writes:
> On Tue, Feb 26, 2013 at 12:44:32AM +, Edward Ned Harvey (blu) wrote:
>> performance should be approx N-1 disks times a single disk
>> Incrementally expandable by adding individual disks? I know raidz is not.
>
> Yes, and also live-convertible to different raid schemes (a
On Mon, 25 Feb 2013 10:53:28 -0500
Derek Atkins wrote:
> How is it still raid1?
What Btrfs calls "RAID" isn't actually RAID. It isn't redundant disks.
What Btrfs calls "RAID" is actually striped or mirrored data and
metadata.
Say that you have four devices in a Btrfs volume. There are three
di
On Tue, Feb 26, 2013 at 12:44:32AM +, Edward Ned Harvey (blu) wrote:
> performance should be approx N-1 disks times a single disk
> Incrementally expandable by adding individual disks? I know raidz is not.
Yes, and also live-convertible to different raid schemes (albeit
slowly).
From the fa
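The live conversion mentioned above is done with a balance filter; a sketch, with /mnt/pool as a placeholder mount point:

```shell
# Convert an existing Btrfs volume's data (-d) and metadata (-m)
# profiles to raid10 in place, while the filesystem stays mounted.
btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt/pool

# Check progress while the (slow) conversion runs.
btrfs balance status /mnt/pool
```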
> From: Dan Ritter [mailto:d...@randomstring.org]
>
> > In btrfs and zfs, it goes like this:
> > mirror dev0 dev1 dev2
> > or
> > raid1 dev0 dev1 dev2
> > This makes a 3-way mirror. Total usable capacity of a single disk, triple
> redundant.
>
> This is incorrect for btrfs. Assuming 3 identical
On Mon, Feb 25, 2013 at 07:11:13PM +, Edward Ned Harvey (blu) wrote:
> All of this distinction between raid0, mirroring, raid10, in context of
> btrfs, is irrelevant, because it's true for straight-up traditional RAID,
> which is not what's happening in btrfs or zfs.
You are incorrect about
All of this distinction between raid0, mirroring, raid10, in context of btrfs,
is irrelevant, because it's true for straight-up traditional RAID, which is not
what's happening in btrfs or zfs.
In btrfs and zfs, it goes like this:
mirror dev0 dev1 dev2
or
raid1 dev0 dev1 dev2
This makes a 3-way m
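A sketch of the two commands being described, with placeholder device names. Note that, as corrected elsewhere in the thread, the Btrfs form does not behave identically to the ZFS one:

```shell
# ZFS: a pool whose single vdev is a 3-way mirror.
# Total usable capacity of one disk, triple redundant.
zpool create tank mirror /dev/sda /dev/sdb /dev/sdc

# Rough Btrfs analogue. Btrfs "-d raid1" keeps two copies of each
# extent across the devices, not one copy per device.
mkfs.btrfs -d raid1 -m raid1 /dev/sda /dev/sdb /dev/sdc
```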
On Mon, Feb 25, 2013 at 10:53:28AM -0500, Derek Atkins wrote:
> Dan Ritter writes:
>
> >> mkfs.btrfs -d raid1 /dev/sda /dev/sdb /dev/sdc /dev/sdd
> >>
> >> It this going to be more like raid10?
> >
> > No, that's still RAID1: two copies of every file, no striping.
> > If you want striping+mirr
Dan Ritter writes:
>> > "-d raid1" means mirrored data. Metadata is mirrored by default even
>> > on single drive volumes.
>> >
>> > If /dev/sdb faults then you should lose no data since every extent is
>> > replicated on both /dev/sda and /dev/sdb. If a bit error arises on
>> > either sda or sdb
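The bit-error repair described above is driven by checksums; a scrub walks the volume and rewrites bad copies from the good mirror. A sketch, with /mnt as a placeholder:

```shell
# Verify all data and metadata checksums and repair from the
# mirrored copy where an error is found.
btrfs scrub start /mnt

# Report errors found/corrected so far.
btrfs scrub status /mnt
```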
On Sat, 23 Feb 2013 03:08:36 +
"Edward Ned Harvey (blu)" wrote:
> able to run jobs on it, slice and dice everything the way I wanted
> to. But the system was crashy. (About once a week.) Between me and
I remember you mentioning this. I recall similar issues with ReiserFS
and slightly flak
> From: discuss-bounces+blu=nedharvey@blu.org [mailto:discuss-
> bounces+blu=nedharvey@blu.org] On Behalf Of Jerry Feldman
>
> Essentially, btrfs has been mentioned in a number of other contexts, but
> since it is now available on several distros, let's just start a thread
> on btrfs.
The
On Fri, 22 Feb 2013 12:10:01 -0500
Jerry Feldman wrote:
> Ok, question answered.
> So if I currently had a RAID1 (/dev/mdn == /dev/sdxn + /dev/sdyn),
> then I would achieve roughly the same benefits with btrfs -d raid1.
Roughly. It gets a little complicated with more than 2 devices and
with non-ide
On Fri, Feb 22, 2013 at 12:08:32PM -0500, Derek Atkins wrote:
> Rich Pieri writes:
>
> > On Fri, 22 Feb 2013 11:29:42 -0500
> > Jerry Feldman wrote:
> >
> >> So, assume I have 2 physical volumes, /dev/sda and /dev/sdb.
> >> mkfs.btrfs -d raid1 /dev/sda /dev/sdb
> >> What happens if I get a failu
On 02/22/2013 11:45 AM, Rich Pieri wrote:
> On Fri, 22 Feb 2013 11:29:42 -0500
> Jerry Feldman wrote:
>
>> So, assume I have 2 physical volumes, /dev/sda and /dev/sdb.
>> mkfs.btrfs -d raid1 /dev/sda /dev/sdb
>> What happens if I get a failure on /dev/sdb.
>> Assume no snapshots.
> "-d raid1" mean
Rich Pieri writes:
> On Fri, 22 Feb 2013 11:29:42 -0500
> Jerry Feldman wrote:
>
>> So, assume I have 2 physical volumes, /dev/sda and /dev/sdb.
>> mkfs.btrfs -d raid1 /dev/sda /dev/sdb
>> What happens if I get a failure on /dev/sdb.
>> Assume no snapshots.
>
> "-d raid1" means mirrored data. Me
On Fri, 22 Feb 2013 11:29:42 -0500
Jerry Feldman wrote:
> So, assume I have 2 physical volumes, /dev/sda and /dev/sdb.
> mkfs.btrfs -d raid1 /dev/sda /dev/sdb
> What happens if I get a failure on /dev/sdb.
> Assume no snapshots.
"-d raid1" means mirrored data. Metadata is mirrored by default eve
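One way the failure scenario above is typically handled; this assumes /dev/sdb has died, /dev/sdc is its replacement, and the devid shown is a placeholder to look up first:

```shell
# Mount the surviving device read-write despite the missing mirror.
mount -o degraded /dev/sda /mnt

# Find the devid of the missing device.
btrfs filesystem show /mnt

# Rebuild onto the new disk (devid 2 here is an assumption).
btrfs replace start 2 /dev/sdc /mnt
```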
Rich Pieri writes:
> On Fri, 22 Feb 2013 10:04:24 -0500
> Jerry Feldman wrote:
>
>> Most of the examples I have seen are to install btrfs on raw drives.
>
> Btrfs is, like ZFS, both file system and volume manager. There is
> typically no benefit to not allowing Btrfs to manage entire devices
> u
On Fri, Feb 22, 2013 at 11:29:42AM -0500, Jerry Feldman wrote:
> On 02/22/2013 11:01 AM, Rich Pieri wrote:
> > On Fri, 22 Feb 2013 10:04:24 -0500
> > Jerry Feldman wrote:
> >
> >> Most of the examples I have seen are to install btrfs on raw drives.
> > Btrfs is, like ZFS, both file system and volu
On 02/22/2013 11:01 AM, Rich Pieri wrote:
> On Fri, 22 Feb 2013 10:04:24 -0500
> Jerry Feldman wrote:
>
>> Most of the examples I have seen are to install btrfs on raw drives.
> Btrfs is, like ZFS, both file system and volume manager. There is
> typically no benefit to not allowing Btrfs to manage
On Fri, 22 Feb 2013 10:04:24 -0500
Jerry Feldman wrote:
> Most of the examples I have seen are to install btrfs on raw drives.
Btrfs is, like ZFS, both file system and volume manager. There is
typically no benefit to not allowing Btrfs to manage entire devices
unless you need to have part of the
Essentially, btrfs has been mentioned in a number of other contexts, but
since it is now available on several distros, let's just start a thread
on btrfs.
Here is a link to the main Wiki:
https://btrfs.wiki.kernel.org/index.php/Main_Page
First we have an installfest coming up next week on March 2n
How do you make backups of your Btrfs subvolumes? I don't mean how do
you make snapshots. I mean how do you make backups of them.
--
Rich P.
___
Discuss mailing list
Discuss@blu.org
http://lists.blu.org/mailman/listinfo/discuss
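One common answer to the backup question above is send/receive against a read-only snapshot; a sketch with placeholder paths, assuming /backup is itself a mounted Btrfs filesystem:

```shell
# 1. Take a read-only snapshot of the subvolume (send requires -r).
btrfs subvolume snapshot -r /mnt/data /mnt/data-snap

# 2. Stream the snapshot to the backup filesystem as a real copy,
#    independent of the original pool.
btrfs send /mnt/data-snap | btrfs receive /backup
```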
Edward Ned Harvey writes:
> Anybody using btrfs in production? I know it says all over it, "not ready
> for production" and so forth. But it's like dangling a big piece of candy
> in front of a child with a sticker that says "Do not eat." ;-)
>
>
>
> I've had a somewhat bad experience, I'd
Actually, my first assumption would be something in netatalk rather than
btrfs.
Apple does some baroque things with Time Machine volumes that require
"options:tm" in the AppleVolumes.default for each AFP volume used by
Time Machine. If this option isn't set then things will stop working
corr
Ned,
Sounds like you took a very reasonable and scientific approach to
narrow it down to a btrfs issue, most likely a memory leak. Good job!
I don't have direct experience with btrfs but I have been playing with
ZFS. While they have no code in common, they do have some philosophy
and design in c
Wait it's Precise that is in beta right now...sorry for the
confusion...need more coffee.
--
David
On Wed, Apr 4, 2012 at 10:00 AM, David Miller wrote:
> On Wed, Apr 4, 2012 at 9:17 AM, Edward Ned Harvey wrote:
>
>> Anybody using btrfs in production? I know it says all over it, "not ready
>>
On Wed, Apr 4, 2012 at 9:17 AM, Edward Ned Harvey wrote:
> Anybody using btrfs in production? I know it says all over it, "not ready
> for production" and so forth. But it's like dangling a big piece of candy
> in front of a child with a sticker that says "Do not eat." ;-)
>
>
>
> I've had a
Anybody using btrfs in production? I know it says all over it, "not ready
for production" and so forth. But it's like dangling a big piece of candy
in front of a child with a sticker that says "Do not eat." ;-)
I've had a somewhat bad experience, I'd like to share, and see if others
experie