On 2015-08-14 12:36, Anand Jain wrote:
> This patch introduces a new option for the command
>
> btrfs device delete [..]
>
> In a user-reported issue on a 3-disk RAID1, one disk failed with its
> SB unreadable. Now, with this patch, the user has the choice to delete
> the device using its devid.
>
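For illustration, a rough sketch of how removal by devid could be used once the
patch is applied (the devid and mount point below are made-up examples, not
taken from the report):

  btrfs filesystem show /mnt       # note the devid of the failed/missing disk
  btrfs device delete 2 /mnt       # remove by devid instead of by device path,
                                   # since the dead device's SB can't be read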
On Mon, Aug 17, 2015 at 7:07 AM, Qu Wenruo wrote:
> The btrfs qgroup reserve code lacks a check for rewriting an already dirty
> page, causing every write, even a rewrite of an uncommitted dirty page, to
> reserve space.
>
> But only written data will free the reserved space, resulting in leaked
> reserved space.
>
> The
Filipe David Manana wrote on 2015/08/17 10:18 +0100:
On Mon, Aug 17, 2015 at 7:07 AM, Qu Wenruo wrote:
The btrfs qgroup reserve code lacks a check for rewriting an already dirty
page, causing every write, even a rewrite of an uncommitted dirty page, to
reserve space.
But only written data will free the reserved
Dan Carpenter reported a smatch warning
for start_log_trans():
fs/btrfs/tree-log.c:178 start_log_trans()
warn: we tested 'root->log_root' before and it was 'false'
fs/btrfs/tree-log.c
147 if (root->log_root) {
We test "root->log_root" here.
...
Reason:
Condition of:
fs/btrfs/tre
The following arguments are not used in tree-log.c:
insert_one_name(): path, type
wait_log_commit(): trans
wait_for_writer(): trans
This patch removes them.
Signed-off-by: Zhao Lei
---
fs/btrfs/tree-log.c | 25 +++--
1 file changed, 11 insertions(+), 14 deletions(-)
diff --gi
-*remove* <dev> [<dev>...] <path> ::
+*remove* <dev>|<devid> [<dev>|<devid>...] <path> ::
Remove device(s) from a filesystem identified by <path>.
*delete* <dev> [<dev>...] <path> ::
also here <devid> is missing (you added it below)
Thank you. Got this fixed.
On 2015-08-15 02:30, Duncan wrote:
Austin S Hemmelgarn posted on Fri, 14 Aug 2015 15:58:30 -0400 as
excerpted:
FWIW, running BTRFS on top of MDRAID actually works very well,
especially for BTRFS raid1 on top of MD-RAID0 (I get an almost 50%
performance increase for this usage over BTRFS raid10,
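For context, a minimal sketch of that kind of layering (device names are
assumptions, not taken from Austin's actual setup):

  mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb
  mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sdc /dev/sdd
  mkfs.btrfs -d raid1 -m raid1 /dev/md0 /dev/md1

btrfs then mirrors across the two MD stripes, so checksum errors can still be
repaired from the other copy while reads and writes get the RAID0 throughput.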
On 2015-08-15 17:46, Timothy Normand Miller wrote:
To those of you who have been helping out with my 4-drive RAID1
situation, is there anything further we should do to investigate this,
in case we can uncover any more bugs, or should I just wipe everything
out and restore from backup?
If you nee
On 2015-08-16 23:35, Tyler Bletsch wrote:
I just wanted to drop you guys a line to say that I am stunned by how
excellent btrfs is. I did some testing, and the things that it did were
amazing. I took a 4-disk RAID 5 and walked it all the way down to a
one-disk volume and back again, mixed in de
On Mon, Aug 17, 2015 at 10:01:16AM +0800, Qu Wenruo wrote:
> Hi Marc,
Hi Qu, thanks for your answer and looking at this.
> Did btrfs-debug-tree also have the crash?
>
> If not, would you please attach the output if it doesn't contain
> classified data.
Sure thing:
btrfs-debug-tree /dev/mapper/
The current code uses fprintf(stderr, "...") to output warning and
error information.
The error messages have differing styles, e.g.:
# grep fprintf *.c
fprintf(stderr, "Open ctree failed\n");
fprintf(stderr, "%s: open ctree failed\n", __func__);
fprintf(stderr, "ERROR: cannot open ctree\n");
...
An
I'm not sure if I'm doing this wrong. Here's what I'm seeing:
# btrfs-image -c9 -t4 -w /mnt/btrfs ~/btrfs_dump.z
Superblock bytenr is larger than device size
Open ctree failed
create failed (No such file or directory)
On Mon, Aug 17, 2015 at 7:43 AM, Austin S Hemmelgarn wrote:
> On 2015-08-15
Thanks. I will be trying raid5 in production, but "production" in this
case just means my home file server, with btrfs snapshot+sync for all
data and appropriate offsite non-btrfs backups for critical data. If it
hoses up, I'll post a bug report.
Going to try to avoid LVM, since half the appea
Hi Calvin,
Thanks a lot for the quick answer and sorry for my delayed reply.
We had some security issues on some machines. I will answer almost all
the replies below.
Yes, raid0 is a huge risk. This setup is just for performance demos and
other very specific occasions.
I understand the need of
The best xfs performance we got was with 32 disks and a 128KB mdadm chunk size.
Could that be the problem we are seeing? If each disk gets 4KB, a 64KB stripe
would be optimal for just 16 disks when using raid0 with btrfs?
2015-08-14 15:31 GMT-03:00 Chris Murphy :
> On Fri, Aug 14, 2015 at 9:16 AM, Eduardo Bach
On Mon, 2015-08-17 at 16:44 -0300, Eduardo Bach wrote:
> Based on previous testing with a smaller number of disks I'm suspecting
> that the 32 disks are not all being used. With 12 disks I got more
> speed with btrfs than with mdadm+xfs. With btrfs, 12 disks and large files
> we got the entire theoreti
Hi Qu,
Firstly thanks for the response. I have a few new questions and comments
below,
On Mon, Aug 17, 2015 at 09:33:54AM +0800, Qu Wenruo wrote:
> Thanks for pointing out the problem.
> But it's already known and we don't yet have a good idea of how to solve it.
>
> BTW, the old framework won't handl
Two years ago I installed btrfs across 8 hard drives on my desktop
system with the entire system ending up on btrfs RAID 1. I did all of
this with btrfs-progs-0.20. Since that time I have been dreading
updating my system because of fear that the old btrfs volumes would
become unstable in the
Austin S Hemmelgarn posted on Mon, 17 Aug 2015 07:38:13 -0400 as
excerpted:
> I've also found that BTRFS raid5/6 on top of MD RAID0 mitigates (to a
> certain extent that is) the performance penalty of doing raid5/6 if you
> aren't on ridiculously fast storage, probably not something that should
>
A couple of days ago while running 4.2.0-RC5 I had a suspected fault on one of
the disks of my 6-disk RAID-1 btrfs filesystem, so I removed the offending drive
and tried a "btrfs device remove...".
While trying it, the system repeatedly hung with a kernel BUG after varying
amounts of time ranging f
Mark Fasheh wrote on 2015/08/17 14:13 -0700:
Hi Qu,
Firstly thanks for the response. I have a few new questions and comments
below,
On Mon, Aug 17, 2015 at 09:33:54AM +0800, Qu Wenruo wrote:
Thanks for pointing out the problem.
But it's already known and we didn't have a good idea to solve i
The btrfs qgroup reserve code lacks a check for rewriting an already dirty
page, causing every write, even a rewrite of an uncommitted dirty page, to
reserve space.
But only written data will free the reserved space, causing a reserved
space leak.
The bug has existed almost since the beginning of the btrfs qgroup code, but
no
Hi Dave,
All comments accepted, thanks, except for this:
+_mount_dmerror()
+{
+ $MOUNT_PROG -t $FSTYP $MOUNT_OPTIONS $DMERROR_DEV $SCRATCH_MNT
+}
Should mirror _scratch_mount.
_mount -t $FSTYP `_scratch_mount_options` $DMERROR_DEV $SCRATCH_MNT
`_scratch_mount_options` also returns
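(For reference, and from memory rather than the exact tree: _scratch_mount_options
in common/rc echoes something like the line below, i.e. it already includes
$SCRATCH_DEV and $SCRATCH_MNT, so it cannot simply be reused with $DMERROR_DEV
appended.)

  echo $SCRATCH_OPTIONS $MOUNT_OPTIONS $SELINUX_MOUNT_OPTIONS $* \
       $SCRATCH_DEV $SCRATCH_MNT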
From: Anand Jain
This test case confirms that replace works when the
replace source device has failed (EIO). The EIO condition is
achieved using a DM device.
Signed-off-by: Anand Jain
Reviewed-by: Filipe Manana
---
v6->v7: accepts Dave's comments
(in line with the changes as in 1/3)
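For context, the operation being exercised looks roughly like this from the
command line (the devid, target device and mount point are assumptions):

  btrfs replace start -r 1 /dev/sdg /mnt   # -r: read the source device only if
                                           # no other good mirror exists
  btrfs replace status /mnt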
This is v7 of this patch set.
Mainly it accepts Dave's latest review comments, with thanks.
For the actual change list, please refer to the individual patches.
Anand Jain (3):
xfstests: btrfs: add functions to create dm-error device
xfstests: btrfs: test device replace, with EIO on the src dev
xfstests:
From: Anand Jain
This test case tests whether device delete works with
a failed (EIO) source device. EIO errors are achieved
using the DM device.
This test needs the following btrfs-progs and btrfs
kernel patches:
btrfs-progs: device delete to accept devid
Btrfs: device delete by devid
How
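As a rough outline only (not the actual test script; the second device, the
devid and the helper usage below are assumptions), the scenario being
exercised is something like:

  _dmerror_init                        # DMERROR_DEV -> /dev/mapper/error-test
  mkfs.btrfs -f -d raid1 -m raid1 $DMERROR_DEV /dev/sdY
  _mount_dmerror
  # ... write some data, then switch the dm table from "linear" to "error"
  # so that I/O to the source device returns EIO ...
  btrfs device delete 1 $SCRATCH_MNT   # drop the failed device by its devid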
From: Anand Jain
Controlled EIO from the device is achieved using a dm device.
The helper functions are in common/dmerror.
Broadly, the steps include calling _dmerror_init().
_dmerror_init() will use SCRATCH_DEV to create a dm linear device and assign
DMERROR_DEV to /dev/mapper/error-test.
When tes
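For readers unfamiliar with device-mapper targets, what such a helper does
under the hood looks roughly like the following sketch (not the actual
common/dmerror code; the table layout and names are assumptions):

  size=`blockdev --getsz $SCRATCH_DEV`
  dmsetup create error-test --table "0 $size linear $SCRATCH_DEV 0"
  DMERROR_DEV=/dev/mapper/error-test
  # later, to make every I/O to the device fail with EIO:
  dmsetup suspend error-test
  dmsetup load error-test --table "0 $size error"
  dmsetup resume error-test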