Hi Chris,
the current for-linus head as of today (d98456fc) gets stuck
in a deadlock when executing xfstest 083. This is the
corresponding output, preceded by a related lockdep warning:
Feb 21 08:30:52 oglaroon kernel: [56906.451059]
==
Feb 21
On Mon, Feb 20, 2012 at 08:59:05PM -0500, Tom Cameron wrote:
> Gareth,
>
> I would completely agree. I only use the RAID vernacular here because,
> well, it's the unfortunate de facto standard way to talk about data
> protection.
>
> I'd go a step beyond saying dupe or dupe + stripe, because future
On Mon, Feb 20, 2012 at 9:29 PM, Olivier Bonvalet wrote:
> On 20/02/2012 15:00, Fajar A. Nugraha wrote:
>>
>> On Mon, Feb 20, 2012 at 8:50 PM, Hubert Kario wrote:
>>>
>>> On Monday 20 of February 2012 14:41:33 Olivier Bonvalet wrote:
Lots of small files (like compressed email from Maildi
I'd probably want to use DupeX to refer to what was classically RAID1
(Duplicate across all disks) and Dupe is an alias for Dup2 but one can
also choose Dupe3 through Dupe99
And I keep forgetting to post to the list in plain text, so many of
you may not have noticed my original email that only exi
Fajar,
Thanks for the instructions. I've included some extra detail here for future
generations.
I followed the instructions here (https://wiki.archlinux.org/index.php/Btrfs)
to obtain the source code for btrfs-zero-log, specifically:
git clone
git://git.kernel.org/pub/scm/linux/kerne
Gareth,
I would completely agree. I only use the RAID vernacular here because,
well, it's the unfortunate de facto standard way to talk about data
protection.
I'd go a step beyond saying dupe or dupe + stripe, because future
modifications could conceivably see the addition of multiple
duplicated s
I have a system running 3.2.6 that creates and deletes a number of
snapshots on a daily basis. It seems to have run into a problem while
attempting to create a snapshot:
[431642.714979] ------------[ cut here ]------------
[431642.714997] kernel BUG at /home/apw/COD/linux/fs/btrfs/locking.c:214!
[
On Tue, Feb 21, 2012 at 12:27:56PM +1100, Wes wrote:
> @hugo
>
> iirc that was on ~3.0.8 but it might have been 3.0.0. I'll revisit
> the raid0 setup on a newer kernel series and test though before making
> any more claims. :)
There's a repeating pattern of three log messages that comes out i
@hugo
iirc that was on ~3.0.8 but it might have been 3.0.0. I'll revisit
the raid0 setup on a newer kernel series and test though before making
any more claims. :)
--
To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in
the body of a message to majord...@vger.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
On Tue, Feb 21, 2012 at 09:16:40AM +0800, Liu Bo wrote:
> On 02/21/2012 08:45 AM, Wes wrote:
> > meaning removing any 1 drive would result in lost data.
>
> Removing any disk will not lose data, because btrfs ensures all the
> data on the removed disk is safely placed in the right places. And if there
>
On Mon, Feb 20, 2012 at 08:13:43PM -0500, Tom Cameron wrote:
> On Mon, Feb 20, 2012 at 8:07 PM, Hugo Mills wrote:
> >
> > However, you can remove any one drive, and your data is fine, which
> > is what btrfs's RAID-1 guarantee is. I understand that there will be
> > additional features coming al
On 02/21/2012 08:45 AM, Wes wrote:
> I've noticed similar behavior when even RAID0'ing an odd number of
> devices which should be even more trivial in practice.
> You would expect something like:
> sda A1 B1
> sdb A2 B2
> sdc A3 B3
>
> or at least, if BTRFS can only handle block pairs,
>
> sda A
On Mon, Feb 20, 2012 at 8:07 PM, Hugo Mills wrote:
>
> However, you can remove any one drive, and your data is fine, which
> is what btrfs's RAID-1 guarantee is. I understand that there will be
> additional features coming along Real Soon Now (possibly at the same
> time that RAID-5 and -6 are i
On Mon, Feb 20, 2012 at 07:35:18PM -0500, Tom Cameron wrote:
> I had a 4 drive RAID10 btrfs setup that I added a fifth drive to with
> the "btrfs device add" command. Once the device was added, I used the
> balance command to distribute the data through the drives. This
> resulted in an infinite ru
I figured you meant that.
Using RAID1 on N drives normally would mean all drives have a copy of
the object. The upshot of this is that you can lose N-1 drives and
still access data. In systems like ZFS or BTRFS you would also expect
a read speed of N*, since you could theoretically read from all d
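The distinction Hugo and Tom are circling can be put in a few lines of Python. This is a toy model only (the function names are mine, not btrfs code): classic N-way RAID-1 keeps a copy of every block on every drive, while a fixed two-copy scheme like btrfs's "raid1" profile keeps each block on exactly two drives.

```python
# Toy model contrasting classic N-way RAID-1 (every drive holds every
# block) with a fixed two-copy scheme (each block lives on exactly two
# drives).  Hypothetical helper names; not btrfs code.

def survivable_failures(n_drives, copies_per_block):
    """Drives you can lose while every block still has a surviving copy.

    With c copies of each block, losing c drives can wipe out all
    copies of some block, so only c - 1 failures are guaranteed safe.
    """
    return min(copies_per_block, n_drives) - 1

def max_read_speedup(n_drives, copies_per_block):
    """Upper bound on parallel read speedup for a single block."""
    return min(copies_per_block, n_drives)

# Classic RAID-1 across 4 drives: lose any 3, read from all 4.
print(survivable_failures(4, 4), max_read_speedup(4, 4))   # 3 4
# Two-copy scheme on 4 drives: only 1 failure is guaranteed safe.
print(survivable_failures(4, 2), max_read_speedup(4, 2))   # 1 2
```

The `min()` clamp just reflects that you can never have more useful copies than drives.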
On Tue, Feb 21, 2012 at 11:45:51AM +1100, Wes wrote:
> I've noticed similar behavior when even RAID0'ing an odd number of
> devices which should be even more trivial in practice.
> You would expect something like:
> sda A1 B1
> sdb A2 B2
> sdc A3 B3
This is what it should do -- it'll use as man
Sorry, I meant 'removing 2 drives' in the raid1 with 3 drives example
On Tue, Feb 21, 2012 at 11:45 AM, Wes wrote:
> I've noticed similar behavior when even RAID0'ing an odd number of
> devices which should be even more trivial in practice.
> You would expect something like:
> sda A1 B1
> sdb A
I've noticed similar behavior even when RAID0'ing an odd number of
devices, which should be even more trivial in practice.
You would expect something like:
sda A1 B1
sdb A2 B2
sdc A3 B3
or at least, if BTRFS can only handle block pairs,
sda A1 B2
sdb A2 C1
sdc B1 C2
But the end result was that
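The round-robin placement sketched above (sda A1 B1 / sdb A2 B2 / sdc A3 B3) can be modelled in a few lines of Python. This is purely illustrative of the expected layout, not of btrfs's real chunk allocator:

```python
# Round-robin striping of numbered stripe units across n devices,
# modelling the layout sketched above.  Illustrative only; btrfs's
# real allocator works on chunks, not per-object stripes.

def stripe_layout(objects, n_devices):
    """Map each object's stripes onto devices in round-robin order.

    objects is a list of (name, stripe_count) pairs; the result maps
    device index -> list of stripe labels like "A1", "A2", ...
    """
    layout = {d: [] for d in range(n_devices)}
    for name, n_stripes in objects:
        for i in range(n_stripes):
            layout[i % n_devices].append(f"{name}{i + 1}")
    return layout

# Two objects, each split into 3 stripes, across 3 devices:
print(stripe_layout([("A", 3), ("B", 3)], 3))
# {0: ['A1', 'B1'], 1: ['A2', 'B2'], 2: ['A3', 'B3']}
```

With an odd number of devices the modulo simply wraps, so a 4-stripe object on 3 devices puts A4 back on the first device.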
I had a 4 drive RAID10 btrfs setup that I added a fifth drive to with
the "btrfs device add" command. Once the device was added, I used the
balance command to distribute the data through the drives. This
resulted in an infinite run of the btrfs tool with data moving back
and forth across the drives
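Conceptually, a balance pass is a greedy loop that moves chunks from the fullest device to the emptiest until usage evens out. A toy model (hypothetical, nothing like the real `btrfs balance` implementation) shows why such a loop should terminate rather than shuffle data back and forth forever:

```python
# Toy greedy rebalance: move one chunk at a time from the fullest
# device to the emptiest until they differ by at most one chunk.
# Hypothetical model only -- the real "btrfs balance" rewrites chunks
# through the allocator and is far more involved.

def rebalance(chunks_per_device):
    devs = list(chunks_per_device)
    moves = 0
    while max(devs) - min(devs) > 1:
        src = devs.index(max(devs))   # fullest device
        dst = devs.index(min(devs))   # emptiest device
        devs[src] -= 1
        devs[dst] += 1
        moves += 1
    return devs, moves

# Four devices with 10 chunks each, plus a freshly added empty fifth:
print(rebalance([10, 10, 10, 10, 0]))   # ([8, 8, 8, 8, 8], 8)
```

Each move strictly shrinks the spread between fullest and emptiest, which is what rules out oscillation in the model; an apparently infinite real-world run points at a bug or at new writes racing the balance, not at the concept.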
Sorry for the subject; this is a single patch, not part of a series.
Hubert Kario
Signed-off-by: Hubert Kario
---
fs/btrfs/ioctl.c |    5 +++-
1 files changed, 4 insertions(+), 1 deletions(-)
diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
index 3dede5c..d536816 100644
--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -2120,7 +2120,10 @@ static long btrfs_ioctl_dev_info(str
Signed-off-by: Hubert Kario
---
fs/btrfs/ioctl.c |1 +
1 files changed, 1 insertions(+), 0 deletions(-)
diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c
index dae5dfe..3dede5c 100644
--- a/fs/btrfs/ioctl.c
+++ b/fs/btrfs/ioctl.c
@@ -2121,6 +2121,7 @@ static long btrfs_ioctl_dev_info(struct b
Signed-off-by: Hubert Kario
---
scrub.c |2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/scrub.c b/scrub.c
index 9dca5f6..630a1bf 100644
--- a/scrub.c
+++ b/scrub.c
@@ -1010,7 +1010,7 @@ static int scrub_fs_info(int fd, char *path,
if (!fi_args->num_devices)
On Tue, Feb 21, 2012 at 12:51:57AM +0600, Roman Mamedov wrote:
> Hello,
>
> I have noticed I have the following in my dmesg:
>
> [319769.043163] nbd: registered device at major 43
> [319769.104176] nbd11: unknown partition table
> [319769.130273] device fsid c2598ff2-1e3e-4edf-ab19-1f7ab41b0160
Hello,
I have noticed I have the following in my dmesg:
[319769.043163] nbd: registered device at major 43
[319769.104176] nbd11: unknown partition table
[319769.130273] device fsid c2598ff2-1e3e-4edf-ab19-1f7ab41b0160 devid 1
transid 125743 /dev/nbd11
[319769.130522] btrfs: force lzo compressi
Hi,
I'm no NetApp expert, but as far as I know they have their own piece of
software which integrates their snapshots with oracle.
Creating consistent backups of a database isn't a problem that a
filesystem should solve at all. A simple approach for your problem would
be "Put DB in hot backu
On 02/20/2012 07:49 PM, Andrew Henry wrote:
Will there be support for consistent backups à la NetApp when using
snapshots on btrfs filesystems that contain Oracle databases?
I'm not that familiar with NetApp, and don't know whether there needs
to be "extra support" for the Oracle bits for their s
On 20/02/2012 15:00, Fajar A. Nugraha wrote:
On Mon, Feb 20, 2012 at 8:50 PM, Hubert Kario wrote:
On Monday 20 of February 2012 14:41:33 Olivier Bonvalet wrote:
Lots of small files (like compressed email from Maildir), and lots of
hardlinks, and probably low free space (near 15% I suppose).
So
Will there be support for consistent backups à la NetApp when using
snapshots on btrfs filesystems that contain Oracle databases?
I'm not that familiar with NetApp, and don't know whether there needs
to be "extra support" for the Oracle bits for their snapshotting to
work, so please forgive me if th
Chris: What will btrfs-convert do when it encounters a directory with more
hardlinks than the btrfs limit?
On Monday 20 of February 2012 21:00:34 Fajar A. Nugraha wrote:
> On Mon, Feb 20, 2012 at 8:50 PM, Hubert Kario wrote:
> > On Monday 20 of February 2012 14:41:33 Olivier Bonvalet wrote:
> >>
On Mon, Feb 20, 2012 at 8:50 PM, Hubert Kario wrote:
> On Monday 20 of February 2012 14:41:33 Olivier Bonvalet wrote:
>> Lots of small files (like compressed email from Maildir), and lots of
>> hardlinks, and probably low free space (near 15% I suppose).
>>
>>
>> So I think I have my answer :)
>>
>
On Monday 20 of February 2012 14:41:33 Olivier Bonvalet wrote:
> On 20/02/2012 14:20, Hubert Kario wrote:
> > On Monday 20 of February 2012 13:51:29 Olivier Bonvalet wrote:
> >> Hi,
> >>
> >> I'm trying to convert two ext4 FS to btrfs, but I'm surprised by the
> >> time needed to do that conversio
On 20/02/2012 14:20, Hubert Kario wrote:
On Monday 20 of February 2012 13:51:29 Olivier Bonvalet wrote:
Hi,
I'm trying to convert two ext4 FS to btrfs, but I'm surprised by the
time needed to do that conversion.
The first FS is on a 500GiB block device, and btrfs-convert is running
since more
(sorry for the duplicate, previous one has broken signature)
On Monday 20 of February 2012 13:51:29 Olivier Bonvalet wrote:
> Hi,
>
> I'm trying to convert two ext4 FS to btrfs, but I'm surprised by the
> time needed to do that conversion.
>
> The first FS is on a 500GiB block device, and btrfs-c
On Monday 20 of February 2012 13:51:29 Olivier Bonvalet wrote:
> Hi,
>
> I'm trying to convert two ext4 FS to btrfs, but I'm surprised by the
> time needed to do that conversion.
>
> The first FS is on a 500GiB block device, and btrfs-convert is running
> since more than 48h :
> root 1978 25.6
Hi,
I'm trying to convert two ext4 FS to btrfs, but I'm surprised by the
time needed to do that conversion.
The first FS is on a 500GiB block device, and btrfs-convert is running
since more than 48h :
root      1978 25.6 47.7 748308 732556 ?     D    Feb18 944:44
btrfs-convert /dev/vg-back