Commit 2cac13e41bf5b99ffc426bd28dfd2248df1dfa67, "fix trim 0 bytes after
a device delete", said:
A user reported a bug in btrfs's trim: we trim 0 bytes
after a device delete.
That commit didn't address the root of the problem, so it fixed the bug
only for a special case.
For
Bob Marley posted on Sat, 03 Jan 2015 12:34:41 +0100 as excerpted:
On 29/12/2014 19:56, sys.syphus wrote:
Specifically (P)arity, very specifically n+2. When will raid5/raid6
be at least as safe to run as raid1 currently is? I don't like the idea
of being 2 bad drives away from total
luvar posted on Fri, 02 Jan 2015 15:42:29 +0100 as excerpted:
root@blackdawn:/home/luvar# uname -a
[...] 3.13.0-30-generic [...]
root@blackdawn:/home/luvar# btrfs v
Btrfs v0.20-rc1-189-g704a08c
Am I doing something forbidden [...]
Those versions are your problem. Do you know how fast
Hi Yang,
This is how to reproduce the bug,
[root@algodev ~]# uname -r
3.18.0+
[root@algodev ~]# btrfs version
Btrfs v3.18-2-g6938452-dirty
[root@algodev ~]# btrfs quota enable LOOP/
[root@algodev ~]# btrfs qgroup show LOOP/
qgroupid  rfer   excl
0/5       16384  16384
On 03/01/2015 14:11, Duncan wrote:
Bob Marley posted on Sat, 03 Jan 2015 12:34:41 +0100 as excerpted:
On 29/12/2014 19:56, sys.syphus wrote:
Specifically (P)arity, very specifically n+2. When will raid5/raid6
be at least as safe to run as raid1 currently is? I don't like the idea
of being 2
But btrfs raid56 mode should be complete with kernel 3.19, and presumably
btrfs-progs 3.19, though I'd give it a kernel or two to mature to be sure.
N-way-mirroring (my particular hotly awaited feature) is next up, but
given the time raid56 took, I don't think anybody's predicting when it'll
be
Martin Steigerwald wrote:
I have a 3.19-rc2 with a patch and a working fstrim now:
[...]
I leave it to the patch author to come up with it on the mailing list :)
That would be me. I have just sent in the patch; please see
http://www.mail-archive.com/linux-btrfs@vger.kernel.org/msg40618.html
On 29/12/2014 19:56, sys.syphus wrote:
Specifically (P)arity, very specifically n+2. When will raid5/raid6
be at least as safe to run as raid1 currently is? I don't like the
idea of being 2 bad drives away from total catastrophe.
(And yes, I back up; it just wouldn't be fun to go down that
Which is really not bad, considering the chance that something gets corrupted.
It is already an exceedingly rare event, and detection without correction can be
more than enough. Things in the computing field have always worked
without even the detection feature.
Most likely even your
On Sat, 3 Jan 2015 13:11:57 +0000 (UTC)
Duncan 1i5t5.dun...@cox.net wrote:
What about using btrfs on top of MD raid?
The problem with that is data integrity. mdraid doesn't have it. btrfs
does.
Most importantly, however, you aren't any worse off with Btrfs on top of MD
than with Btrfs
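The "detection without correction" point above can be illustrated with a toy checksum check (a simplified sketch; btrfs actually stores crc32c checksums of data blocks in its metadata, and the variable names here are invented for illustration):

```python
import zlib

block = b"important file data"
stored_csum = zlib.crc32(block)  # checksum kept separately, as btrfs keeps it in metadata

# Simulate bit rot: flip one bit of the on-disk copy.
damaged = bytearray(block)
damaged[3] ^= 0x01
damaged = bytes(damaged)

# Detection: the checksum mismatch tells us this copy is bad...
detected = zlib.crc32(damaged) != stored_csum
print("corruption detected:", detected)

# ...but with a single copy there is nothing to correct from. With
# btrfs raid1, the filesystem would read the good mirror and rewrite
# this copy; on top of mdraid, md cannot know which mirror is good.
```

This is exactly the gap being discussed: mdraid can return either mirror without knowing one is damaged, while btrfs's own checksums at least detect the bad copy.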
Got this with 3.18.1 and qgroups enabled. Not sure how to reproduce.
[1262648.802286] ------------[ cut here ]------------
[1262648.802350] WARNING: CPU: 1 PID: 2436 at fs/btrfs/qgroup.c:1414
btrfs_delayed_qgroup_accounting+0x9f1/0xa0b [btrfs]()
[1262648.802441] Modules linked in:
Remove the function btrfs_reada_detach(), which is not used anywhere.
This was found in part by using a static code analysis program called
cppcheck.
Signed-off-by: Rickard Strandqvist rickard_strandqv...@spectrumdigital.se
---
fs/btrfs/ctree.h |1 -
fs/btrfs/reada.c |9 +
2
sys.syphus posted on Sat, 03 Jan 2015 12:55:27 -0600 as excerpted:
But btrfs raid56 mode should be complete with kernel 3.19, and
presumably btrfs-progs 3.19, though I'd give it a kernel or two to mature
to be sure. N-way-mirroring (my particular hotly awaited feature) is
next up, but given the
Roman Mamedov posted on Sun, 04 Jan 2015 02:58:35 +0500 as excerpted:
On Sat, 3 Jan 2015 13:11:57 +0000 (UTC)
Duncan 1i5t5.dun...@cox.net wrote:
What about using btrfs on top of MD raid?
The problem with that is data integrity. mdraid doesn't have it.
btrfs does.
Most importantly
On Sun, Jan 04, 2015 at 03:22:53AM +, Duncan wrote:
sys.syphus posted on Sat, 03 Jan 2015 12:55:27 -0600 as excerpted:
But btrfs raid56 mode should be complete with kernel 3.19, and
presumably btrfs-progs 3.19, though I'd give it a kernel or two to mature
to be sure. N-way-mirroring (my
Moving the Z_FINISH into the loop also means we don't have to force a
flush after every input page to guarantee that there won't be more than 4
KiB to write at the end. This patch lets zlib decide when to flush the
buffer, which offers a modest space saving (on my system, my 400MB
test
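The flushing trade-off described above can be sketched with Python's zlib bindings (an illustration of the deflate flush behavior only, not the kernel code; the chunk size and sample data are invented):

```python
import zlib

data = b"btrfs compresses data in pages " * 2048  # compressible sample input

# Variant 1: force a full flush after every 4 KiB "page". Each flush
# emits a complete deflate block, which costs compression ratio.
c1 = zlib.compressobj()
flushed = b"".join(
    c1.compress(data[i:i + 4096]) + c1.flush(zlib.Z_FULL_FLUSH)
    for i in range(0, len(data), 4096)
) + c1.flush(zlib.Z_FINISH)

# Variant 2: feed all pages and let zlib decide when to flush,
# finishing the stream only once at the end.
c2 = zlib.compressobj()
deferred = b"".join(
    c2.compress(data[i:i + 4096]) for i in range(0, len(data), 4096)
) + c2.flush(zlib.Z_FINISH)

# Both are valid streams; the deferred-flush stream is no larger.
assert zlib.decompress(flushed) == data
assert zlib.decompress(deferred) == data
print(len(flushed), len(deferred))
```

On repetitive input the per-page-flushed stream is noticeably larger, which matches the modest space saving the patch reports.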
The subvol delete output has changed with btrfs-progs
-Delete subvolume 'SCRATCH_MNT/snap'
+Delete subvolume (no-commit): 'SCRATCH_MNT/snap'
so fix test 001, which fails on the new output.
Signed-off-by: Anand Jain anand.j...@oracle.com
v2: Thanks, Filipe, for mentioning that we now have _run_btrfs_util_prog, and
The test should just ignore the output and check if the snapshot
creation command succeeds.
See how more recent tests do it - they are calling
_run_btrfs_util_prog (which calls run_check).
How nice that we have _run_btrfs_util_prog; it was needed for a long time. Thanks,
v2 is out.
Anand