Here we expect 0 as return value, fix it.
Signed-off-by: Wang Shilong wangsl.f...@cn.fujitsu.com
Cc: Josef Bacik jba...@fb.com
---
tests/btrfs/022 | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
mode change 100644 => 100755 tests/btrfs/022
diff --git a/tests/btrfs/022 b/tests/btrfs/022
On 2014/01/06 17:08, Wang Shilong wrote:
Here we expect 0 as return value, fix it.
Signed-off-by: Wang Shilong wangsl.f...@cn.fujitsu.com
Cc: Josef Bacik jba...@fb.com
---
tests/btrfs/022 | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
mode change 100644 => 100755
Itoh San,
On 01/06/2014 04:23 PM, Tsutomu Itoh wrote:
On 2014/01/06 17:08, Wang Shilong wrote:
Here we expect 0 as return value, fix it.
Signed-off-by: Wang Shilong wangsl.f...@cn.fujitsu.com
Cc: Josef Bacik jba...@fb.com
---
tests/btrfs/022 | 2 +-
1 file changed, 1 insertion(+), 1
On Sun, Jan 05, 2014 at 06:26:11PM +, Hugo Mills wrote:
On Sun, Jan 05, 2014 at 05:55:27PM +, Hugo Mills wrote:
The structure for BTRFS_SET_RECEIVED_IOCTL packs differently on 32-bit
and 64-bit systems. This means that it is impossible to use btrfs
receive on a system with a 64-bit
From: Miao Xie mi...@cn.fujitsu.com
From: Miao Xie mi...@cn.fujitsu.com
_require_scratch_dev_pool() checks the number of devices in
SCRATCH_DEV_POOL, but that's not enough, since some btrfs RAID10 tests
need 4 devices; when 3 or fewer devices are provided, the check is
useless and the related test
Qu Wenruo wrote:
From: Miao Xie mi...@cn.fujitsu.com
Sorry for the double from line.
I'll resend the patch.
Qu
From: Miao Xie mi...@cn.fujitsu.com
_require_scratch_dev_pool() checks the number of devices in
SCRATCH_DEV_POOL, but that's not enough, since some btrfs RAID10 tests
need 4 devices, but
To test the no-exceed case, we should clear the previous data and then retry.
However, when we are near the quota limit, we may fail to truncate/remove
the previous data, so we restart everything here.
Signed-off-by: Wang Shilong wangsl.f...@cn.fujitsu.com
---
changelog v1-v2:
on the right way to fix failed
From: Miao Xie mi...@cn.fujitsu.com
_require_scratch_dev_pool() checks the number of devices in
SCRATCH_DEV_POOL, but that's not enough, since some btrfs RAID10 tests
need 4 devices; when 3 or fewer devices are provided, the check is
useless and the related test case will fail (btrfs/003 btrfs/011
Steps to reproduce:
# mkfs.btrfs -f /dev/sda8
# mount /dev/sda8 /mnt
# btrfs sub snapshot -r /mnt /mnt/snap1
# btrfs sub snapshot -r /mnt /mnt/snap2
# btrfs send /mnt/snap2 -p /mnt/snap1
As @send_root will also be added into clone_sources, we should
take care not to decrease its count twice.
We should guarantee that the parent and clone roots cannot be destroyed
during send. For this we have two ideas:
1. Hold @subvol_sem; this might be a nightmare, because it will
block all subvolume deletions for a long time.
2. Miao pointed out we can reuse @send_in_progress, which means we will
We may return early in btrfs_drop_snapshot(); we shouldn't
call btrfs_std_err() in that case. Fix it.
Signed-off-by: Wang Shilong wangsl.f...@cn.fujitsu.com
---
fs/btrfs/extent-tree.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/fs/btrfs/extent-tree.c
We will finish orphan cleanups during snapshot creation, so we don't
have to commit the transaction here.
Signed-off-by: Wang Shilong wangsl.f...@cn.fujitsu.com
Reviewed-by: Miao Xie mi...@cn.fujitsu.com
---
fs/btrfs/send.c | 29 -
1 file changed, 29 deletions(-)
diff --git
Hi,
This is a port to the Linux kernel of a RAID engine that I'm currently using
in a hobby project called SnapRAID. This engine supports up to six parity
levels while maintaining compatibility with the existing Linux
RAID6 one.
The mathematical method used was already discussed
This patch changes btrfs/raid56.c to use the new raid interface and
extends its support to an arbitrary number of parities.
In more detail, the two faila/failb failure indexes are now replaced
with a fail[] vector that keeps track of up to six failures, and now
the new raid_par() and raid_rec()
On Sun, 5 Jan 2014 01:25:19 PM Chris Murphy wrote:
Does the Ubuntu 12.04 LTS installer let you create sysroot on a Btrfs raid1
volume?
I doubt it, given the alpha for 14.04 doesn't seem to have the concept yet.
:-)
https://bugs.launchpad.net/ubuntu/+source/grub-installer/+bug/1266200
All
On 06/01/2014 10:31, Andrea Mazzoleni wrote:
Hi,
This is a port to the Linux kernel of a RAID engine that I'm currently using
in a hobby project called SnapRAID. This engine supports up to six parity
levels while maintaining compatibility with the existing Linux
RAID6 one.
On Mon, 2014-01-06 at 10:31 +0100, Andrea Mazzoleni wrote:
This patch changes btrfs/raid56.c to use the new raid interface and
extends its support to an arbitrary number of parities.
In more detail, the two faila/failb failure indexes are now replaced
with a fail[] vector that keeps track
On Mon, Dec 30, 2013 at 06:18:53PM +0100, Tom Gundersen wrote:
* fsck is skipped for filesystems where the relevant helper does not
exist, so fs_passno=1 has the same effect for xfs and btrfs
filesystems (either way, nothing happens).
That still leaves non-systemd systems and calling fsck
On 06/01/2014 14:11, Alex Elsayed wrote:
joystick wrote:
Just by looking at the Subjects, it seems patch number 0/1 is missing.
It might not have gotten through to the lists, or be a numbering mistake.
No, the numbering style is ${index}/${total}, where index = 0 is a cover
letter. So there
On Wed, Jan 01, 2014 at 03:10:25PM +0100, Pascal VITOUX wrote:
---
cmds-filesystem.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/cmds-filesystem.c b/cmds-filesystem.c
index 1c1926b..979dbd9 100644
--- a/cmds-filesystem.c
+++ b/cmds-filesystem.c
@@ -646,7 +646,7 @@
joystick wrote:
On 06/01/2014 14:11, Alex Elsayed wrote:
joystick wrote:
Just by looking at the Subjects, it seems patch number 0/1 is missing.
It might not have gotten through to the lists, or be a numbering mistake.
No, the numbering style is ${index}/${total}, where index = 0 is a cover
On Sun, 2014-01-05 at 19:00 +, Piotr Pawłow wrote:
Hello,
distribution, used space on each device should be 160,
216, and 405 respectively.
The last number should be 376, I copied the wrong one. Anyway, I deleted
as much data as possible, which probably won't help in the end, but at
On Mon, Jan 06, 2014 at 05:25:06PM +0800, Wang Shilong wrote:
Steps to reproduce:
# mkfs.btrfs -f /dev/sda8
# mount /dev/sda8 /mnt
# btrfs sub snapshot -r /mnt /mnt/snap1
# btrfs sub snapshot -r /mnt /mnt/snap2
# btrfs send /mnt/snap2 -p /mnt/snap1
As @send_root will also be added into
On Mon, Jan 06, 2014 at 05:25:39PM +0800, Wang Shilong wrote:
We may return early in btrfs_drop_snapshot(); we shouldn't
call btrfs_std_err() in that case. Fix it.
Somebody reported this 2 days ago on IRC. I think it should go to stable
as well, so it would be good to squeeze it to the next rc
test case:
make a disk disappear, then replace (RAID1) the disappeared disk,
and then make the disappeared disk reappear.
mkfs.btrfs -f -m raid1 -d raid1 /dev/sdc /dev/sdd
mount /dev/sdc /btrfs
dd if=/dev/zero of=/btrfs/tf1 count=1
btrfs fi sync /btrfs
---
devmgt[1] will help to attach or
On 01/06/2014 04:31 AM, Andrea Mazzoleni wrote:
Hi,
This is a port to the Linux kernel of a RAID engine that I'm currently using
in a hobby project called SnapRAID. This engine supports up to six parity
levels while maintaining compatibility with the existing Linux
RAID6 one.
On Mon, Jan 06, 2014 at 05:25:37PM +0800, Wang Shilong wrote:
We should guarantee that the parent and clone roots cannot be destroyed
during send. For this we have two ideas:
1. Hold @subvol_sem; this might be a nightmare, because it will
block all subvolume deletions for a long time.
On Mon, Jan 06, 2014 at 12:02:03PM -0500, Phil Turmel wrote:
On 01/06/2014 04:31 AM, Andrea Mazzoleni wrote:
FWIW, your patch 1/2 doesn't seem to have gone through on linux-raid,
although I saw it on lkml. Probably a different file size limit, as
that's a very large patch.
For the reference
On Mon, Jan 06, 2014 at 12:02:51AM +0100, Gerhard Heift wrote:
I am currently playing with snapshots and manual deduplication of
files. During these tests I noticed the change of ctime and mtime in
the snapshot after the deduplication with FILE_EXTENT_SAME. Does this
happen on purpose?
On Jan 6, 2014, at 3:20 AM, Chris Samuel ch...@csamuel.org wrote:
On Sun, 5 Jan 2014 01:25:19 PM Chris Murphy wrote:
Does the Ubuntu 12.04 LTS installer let you create sysroot on a Btrfs raid1
volume?
I doubt it, given the alpha for 14.04 doesn't seem to have the concept yet.
:-)
Hello,
I'm not sure what the solution is, but the issue seems to be that
btrfs is laying out the RAID1 like this:
[snip]
Yeah, it kinda ended up like this. I think the problem stems from the
fact that restoring redundancy works by relocating block groups, which
rewrites all chunks instead of
FWIW, Ubuntu (and I presume Debian) will work just fine with a single /
on btrfs, single or multi disk.
I currently have two machines booting to a btrfs-raid10 / with no
separate /boot, one booting to a btrfs single disk / with no /boot, and
one booting to a btrfs-raid10 / with an
Hi list -
I tried a kernel upgrade with moderately disastrous (non-btrfs-related)
results this morning; after the kernel upgrade Xorg was completely
borked beyond my ability to get it working properly again through any
normal means. I do have hourly snapshots being taken by cron, though, so
I was trying to reproduce something with fsx and I noticed that no matter what
seed I set I was getting the same file. Come to find out we are overloading
random() with our own custom horribleness for some unknown reason. So nuke the
damn thing from orbit and rely on glibc's random(). With this
On 1/6/14, 1:58 PM, Josef Bacik wrote:
I was trying to reproduce something with fsx and I noticed that no matter what
seed I set I was getting the same file. Come to find out we are overloading
random() with our own custom horribleness for some unknown reason. So nuke
the
damn thing from
On 01/06/2014 04:32 PM, Eric Sandeen wrote:
On 1/6/14, 1:58 PM, Josef Bacik wrote:
I was trying to reproduce something with fsx and I noticed that no matter what
seed I set I was getting the same file. Come to find out we are overloading
random() with our own custom horribleness for some
On 1/6/14, 3:42 PM, Josef Bacik wrote:
On 01/06/2014 04:32 PM, Eric Sandeen wrote:
On 1/6/14, 1:58 PM, Josef Bacik wrote:
I was trying to reproduce something with fsx and I noticed that no matter
what
seed I set I was getting the same file. Come to find out we are overloading
random()
On Jan 6, 2014, at 12:25 PM, Jim Salter j...@jrs-s.net wrote:
FWIW, Ubuntu (and I presume Debian) will work just fine with a single / on
btrfs, single or multi disk.
I currently have two machines booting to a btrfs-raid10 / with no separate
/boot, one booting to a btrfs single disk /
Chris, the patch below seems to be incorrect - with it we get hangs, so
bi_remaining (probably) isn't getting decremented when it should be. You sent
Jens fixes for btrfs which I somehow lost when I rebased, do you remember how
this is supposed to work? Looking at the code I'm not quite sure
No, the installer is completely unaware. What I was getting at is that
rebalancing (and installing the bootloader) is dead easy, so it doesn't
bug me personally much. It'd be nice to eventually get something in the
installer to make it obvious to the oblivious that it can be done and
how, but
On Fri, Dec 20, 2013 at 03:46:30PM +, Chris Mason wrote:
On Fri, 2013-12-20 at 10:42 -0200, Fábio Pfeifer wrote:
Hello,
I put the WARN_ON(1); after the printk lines (incomplete page read
and incomplete page write) in extent_io.c.
here some call traces:
[ 19.509497]
On Mon, 6 Jan 2014 10:45:23 +0100 Andrea Mazzoleni amadva...@gmail.com
wrote:
Hi Neil,
Thanks for your feedback. In the meantime I went further in developing and
I've just sent version 2 of the patch, that contains a preliminary btrfs
modification to use the new interface.
Please use
OK, after a bit more staring I believe the correct fix is the following.
Fengguang, Please try this one?
Regards,
Muthu
In btrfs_end_bio(), we increment bi_remaining if is_orig_bio. If not,
we restore the orig_bio but fail to increment bi_remaining for
orig_bio, which triggers a
On 2014/01/06 17:48, Wang Shilong wrote:
Itoh San,
On 01/06/2014 04:23 PM, Tsutomu Itoh wrote:
On 2014/01/06 17:08, Wang Shilong wrote:
Here we expect 0 as return value, fix it.
Signed-off-by: Wang Shilong wangsl.f...@cn.fujitsu.com
Cc: Josef Bacik jba...@fb.com
---
tests/btrfs/022 |
On 01/07/2014 12:30 AM, David Sterba wrote:
On Mon, Jan 06, 2014 at 05:25:06PM +0800, Wang Shilong wrote:
Steps to reproduce:
# mkfs.btrfs -f /dev/sda8
# mount /dev/sda8 /mnt
# btrfs sub snapshot -r /mnt /mnt/snap1
# btrfs sub snapshot -r /mnt /mnt/snap2
# btrfs send /mnt/snap2 -p
On 01/07/2014 09:11 AM, Tsutomu Itoh wrote:
On 2014/01/06 17:48, Wang Shilong wrote:
Itoh San,
On 01/06/2014 04:23 PM, Tsutomu Itoh wrote:
On 2014/01/06 17:08, Wang Shilong wrote:
Here we expect 0 as return value, fix it.
Signed-off-by: Wang Shilong wangsl.f...@cn.fujitsu.com
Cc: Josef
On Mon, Jan 06, 2014 at 04:47:38PM -0800, Muthu Kumar wrote:
OK, after a bit more staring I believe the correct fix is the following.
This code still confuses me but I think you're correct, the fix certainly
matches the evidence we have.
Fengguang, Please try this one?
Regards,
Muthu
Hi David,
On 01/07/2014 12:30 AM, David Sterba wrote:
On Mon, Jan 06, 2014 at 05:25:06PM +0800, Wang Shilong wrote:
Steps to reproduce:
# mkfs.btrfs -f /dev/sda8
# mount /dev/sda8 /mnt
# btrfs sub snapshot -r /mnt /mnt/snap1
# btrfs sub snapshot -r /mnt /mnt/snap2
# btrfs send
On 01/07/2014 11:10 AM, Wang Shilong wrote:
Hi David,
On 01/07/2014 12:30 AM, David Sterba wrote:
On Mon, Jan 06, 2014 at 05:25:06PM +0800, Wang Shilong wrote:
Steps to reproduce:
# mkfs.btrfs -f /dev/sda8
# mount /dev/sda8 /mnt
# btrfs sub snapshot -r /mnt /mnt/snap1
# btrfs sub
On 2014/01/07 11:19, Wang Shilong wrote:
On 01/07/2014 09:11 AM, Tsutomu Itoh wrote:
On 2014/01/06 17:48, Wang Shilong wrote:
Itoh San,
On 01/06/2014 04:23 PM, Tsutomu Itoh wrote:
On 2014/01/06 17:08, Wang Shilong wrote:
Here we expect 0 as return value, fix it.
Signed-off-by: Wang
On 01/07/2014 11:24 AM, Tsutomu Itoh wrote:
On 2014/01/07 11:19, Wang Shilong wrote:
On 01/07/2014 09:11 AM, Tsutomu Itoh wrote:
On 2014/01/06 17:48, Wang Shilong wrote:
Itoh San,
On 01/06/2014 04:23 PM, Tsutomu Itoh wrote:
On 2014/01/06 17:08, Wang Shilong wrote:
Here we expect 0 as
On Fri, 3 Jan 2014 19:36:10 +0100, David Sterba wrote:
On Fri, Jan 03, 2014 at 05:27:51PM +0800, Miao Xie wrote:
On Thu, 2 Jan 2014 18:49:55 +0100, David Sterba wrote:
On Thu, Dec 26, 2013 at 01:07:05PM +0800, Miao Xie wrote:
+#define BTRFS_DELAYED_NODE_IN_LIST 0
+#define
On 07/01/14 06:25, Jim Salter wrote:
FWIW, Ubuntu (and I presume Debian) will work just fine with a single /
on btrfs, single or multi disk.
I currently have two machines booting to a btrfs-raid10 / with no
separate /boot, one booting to a btrfs single disk / with no /boot, and
one booting
On Mon, Jan 06, 2014 at 04:47:38PM -0800, Muthu Kumar wrote:
OK, after a bit more staring I believe the correct fix is the following.
Fengguang, Please try this one?
Yes, it runs fine now!
Tested-by: Fengguang Wu fengguang...@intel.com
Thanks,
Fengguang
In btrfs_end_bio(),