On Tue, Nov 29, 2016 at 03:32:54PM +0800, Qu Wenruo wrote:
> Old btrfs qgroup test cases use fixed golden output numbers, which limits
> the coverage since they can't handle mount options like compress or
> inode_map, and cause false alerts.
>
> Introduce _btrfs_check_scratch_qgroup() function to ch
On Tuesday, November 29, 2016 03:55:53 PM Qu Wenruo wrote:
> At 11/29/2016 02:36 PM, Chandan Rajendra wrote:
> > When executing the btrfs/126 test on the kdave/for-next branch on a ppc64 guest, I
> > noticed the following call trace.
> >
> > [ 77.335887] ------------[ cut here ]------------
> > [ 77.33
At 11/29/2016 04:21 PM, Chandan Rajendra wrote:
On Tuesday, November 29, 2016 03:55:53 PM Qu Wenruo wrote:
At 11/29/2016 02:36 PM, Chandan Rajendra wrote:
When executing the btrfs/126 test on the kdave/for-next branch on a ppc64 guest, I
noticed the following call trace.
[ 77.335887] [
At 11/29/2016 04:16 PM, Eryu Guan wrote:
On Tue, Nov 29, 2016 at 03:32:54PM +0800, Qu Wenruo wrote:
Old btrfs qgroup test cases use fixed golden output numbers, which limits
the coverage since they can't handle mount options like compress or
inode_map, and cause false alerts.
Introduce _btrfs_c
On Tuesday, 29 November 2016 06:14:18 CET, Duncan wrote:
Very good question that I don't know the answer to as I've not seen it
discussed previously. (I'm not a dev, just a list regular and user of
btrfs myself, and my personal use-case involves neither snapshots nor
send/receive, so on those t
On 2016-11-29 00:14, Duncan wrote:
Graham Cobb posted on Mon, 28 Nov 2016 09:49:33 +0000 as excerpted:
On 28/11/16 02:56, Duncan wrote:
It should still be worth turning on autodefrag on an existing somewhat
fragmented filesystem. It just might take some time to defrag files
you do modify, and
On 2016-11-29 00:06, Duncan wrote:
Niccolò Belli posted on Mon, 28 Nov 2016 12:11:49 +0100 as excerpted:
On Monday, 28 November 2016 09:20:15 CET, Kai Krakow wrote:
You can, however, use chattr to make the subvolume root directory (the
one where it is mounted) nodatacow (chattr +C) _before_ pl
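(For anyone curious what chattr +C does underneath, here is a minimal C
sketch using the FS_IOC_GETFLAGS/FS_IOC_SETFLAGS ioctls; set_nocow() is an
illustrative name, and error handling is trimmed.)

#include <fcntl.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/fs.h>

/* Set the nodatacow attribute on a directory so that files created
 * inside it inherit it -- the same effect as `chattr +C <dir>`. */
static int set_nocow(const char *dir)
{
	int fd = open(dir, O_RDONLY | O_DIRECTORY);
	int flags = 0;

	if (fd < 0)
		return -1;
	if (ioctl(fd, FS_IOC_GETFLAGS, &flags) < 0) {
		close(fd);
		return -1;
	}
	flags |= FS_NOCOW_FL;	/* the attribute chattr spells +C */
	if (ioctl(fd, FS_IOC_SETFLAGS, &flags) < 0) {
		close(fd);
		return -1;
	}
	close(fd);
	return 0;
}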
On Tue, 2016-11-29 at 08:35 +0100, Adam Borowski wrote:
> I administer no real storage at this time, and have only 16 disks (plus a
> few disk-likes) to my name right now. Yet in a ~2-month span I've seen
> three cases of silent data corruption
I didn't mean to say we'd have no silent data c
Hi, as the wiki says at https://btrfs.wiki.kernel.org/index.php/Glossary:
A part of a block group. Chunks are either 1 GiB in size (for data) or
256 MiB (for metadata).
Btrfs tools show me that the allocated size is not 1GiB aligned. Have
things changed? Am I missing something?
# btrfs fi df /; btrfs fi usage /;
Da
On 2016-11-29 09:32, Timofey Titovets wrote:
Hi, as the wiki says at https://btrfs.wiki.kernel.org/index.php/Glossary:
A part of a block group. Chunks are either 1 GiB in size (for data) or
256 MiB (for metadata).
This is only about the normal case. Chunks are variable in size. In
most cases, data chu
btrfs_super_block->sys_chunk_array_size is stored as le32 data on
disk. However, insert_temp_chunk_item() writes sys_chunk_array_size in
host CPU order. This commit fixes this by using the super block access
helper functions to read and write the
btrfs_super_block->sys_chunk_array_size field.
Signed-off-by
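(To illustrate the bug class for anyone following along, here is a
self-contained C sketch with illustrative names, not the actual patch: an
on-disk multi-byte field must pass through an endian helper, because a raw
host-order store is wrong on big-endian machines such as some ppc64 boxes.)

#include <stdint.h>

/* Illustrative stand-ins, not the real btrfs definitions. */
typedef uint32_t example_le32;

struct example_super {
	example_le32 sys_chunk_array_size;	/* little-endian on disk */
};

static inline uint32_t example_cpu_to_le32(uint32_t v)
{
#if defined(__BYTE_ORDER__) && __BYTE_ORDER__ == __ORDER_BIG_ENDIAN__
	return __builtin_bswap32(v);	/* byte-swap on big-endian hosts */
#else
	return v;			/* no-op on little-endian hosts */
#endif
}

/* Buggy: stores host CPU order, so the value is wrong on big-endian. */
static void set_array_size_buggy(struct example_super *sb, uint32_t size)
{
	sb->sys_chunk_array_size = size;
}

/* Fixed: convert to little-endian first, as the access helpers do. */
static void set_array_size_fixed(struct example_super *sb, uint32_t size)
{
	sb->sys_chunk_array_size = example_cpu_to_le32(size);
}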
Hi,
just to chime in on this: this issue also affects me as a "downstream" user,
so it also breaks real-life use cases.
I use btrbk for backup, and after switching to btrfs-progs 4.8.4, I get the
same "short read from stream" problem when performing a regular incremental
backup.
Internally,
On Mon, Nov 28, 2016 at 04:27:06PM +0900, Tsutomu Itoh wrote:
> Many tests in xfstests, such as btrfs/007, btrfs/008 and btrfs/016, failed
> with the following patch.
>
> fefbab75 btrfs-progs: send-stream: check number of read bytes from stream
>
> This is because cmds-receive.c:do_receive() make
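(For context, the usual shape of such a check, as a generic sketch rather
than the actual cmds-receive.c code: loop until the requested count has
arrived, and treat EOF in the middle of a record as an error instead of
silently accepting a short read.)

#include <errno.h>
#include <unistd.h>

/* Generic illustration, not the btrfs-progs implementation.
 * Returns len on success, 0 on clean EOF before any byte was read,
 * and -1 on error or on a stream truncated mid-record. */
static ssize_t read_exact(int fd, void *buf, size_t len)
{
	size_t done = 0;

	while (done < len) {
		ssize_t n = read(fd, (char *)buf + done, len - done);

		if (n < 0) {
			if (errno == EINTR)
				continue;	/* retry interrupted reads */
			return -1;
		}
		if (n == 0)			/* EOF */
			return done ? -1 : 0;	/* mid-record EOF is an error */
		done += n;
	}
	return done;
}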
On Tuesday, November 29, 2016 04:41:41 PM Qu Wenruo wrote:
>
> At 11/29/2016 04:21 PM, Chandan Rajendra wrote:
> > On Tuesday, November 29, 2016 03:55:53 PM Qu Wenruo wrote:
> >> At 11/29/2016 02:36 PM, Chandan Rajendra wrote:
> >>> When executing the btrfs/126 test on the kdave/for-next branch on a ppc64
On 11/29/16 1:36 AM, Chandan Rajendra wrote:
> When executing the btrfs/126 test on the kdave/for-next branch on a ppc64 guest, I
> noticed the following call trace.
>
> [ 77.335887] ------------[ cut here ]------------
> [ 77.336115] WARNING: CPU: 0 PID: 8325 at
> /root/repos/linux/fs/btrfs/qgroup.c
On 11/29/16 10:56 AM, Jeff Mahoney wrote:
> On 11/29/16 1:36 AM, Chandan Rajendra wrote:
>> When executing the btrfs/126 test on the kdave/for-next branch on a ppc64 guest, I
>> noticed the following call trace.
>>
>> [ 77.335887] ------------[ cut here ]------------
>> [ 77.336115] WARNING: CPU: 0 PID
On Mon, Nov 28, 2016 at 09:40:07AM +0800, Qu Wenruo wrote:
> Goldwyn Rodrigues has exposed and fixed a bug which underflows btrfs
> qgroup reserved space and leads to a non-writable fs.
>
> This reminds us that we don't have enough underflow checks for qgroup
> reserved space.
>
> For underflow cas
From: Goldwyn Rodrigues
The values passed to BUG_ON/WARN_ON are negated (!) and printed, which
results in printing the value zero for each bug/warning. For example:
volumes.c:988: btrfs_alloc_chunk: Assertion `ret` failed, value 0
This is not useful. Instead, change it to print the value of the para
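(A self-contained sketch of the problem, using illustrative macro names
rather than the exact kerncompat.h definitions:)

#include <stdio.h>

static void report(const char *expr, long value)
{
	printf("Assertion `%s` failed, value %ld\n", expr, value);
}

/* Buggy: passes !(c), which is 0 exactly when the check fires. */
#define BUG_ON_BUGGY(c) do { if (c) report(#c, (long)!(c)); } while (0)

/* Fixed: print the value of the offending expression itself. */
#define BUG_ON_FIXED(c) do { if (c) report(#c, (long)(c)); } while (0)

int main(void)
{
	int ret = -28;		/* e.g. -ENOSPC from a failed allocation */

	BUG_ON_BUGGY(ret);	/* prints "value 0" -- useless */
	BUG_ON_FIXED(ret);	/* prints "value -28" -- useful */
	return 0;
}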
From: Goldwyn Rodrigues
Code reduction. Call warning_trace from assert_trace in order to
reduce the printf calls used. Also, the trace variable in warning_trace()
is not required because it is already handled by BTRFS_DISABLE_BACKTRACE.
Signed-off-by: Goldwyn Rodrigues
---
kerncompat.h | 37 ++
Hello All,
This was running on an older 4.6.5 machine. Does anyone know if this has
been fixed in newer kernels?
[340653.975882] ------------[ cut here ]------------
[340653.978481] kernel BUG at fs/btrfs/ctree.c:3179!
[340653.979757] invalid opcode: 0000 [#1]
[340653.989870] CPU: 0 PID: 12861
Hello,
I have 4 hard disks with 3TB capacity each. They are all used in a btrfs RAID 5.
It has come to my attention that there seem to be major flaws in btrfs'
raid 5 implementation. Because of that, I want to convert the raid 5 to a
raid 10, and I have several questions.
* Is that possible
On 2016-11-29 12:20, Florian Lindner wrote:
Hello,
I have 4 hard disks with 3TB capacity each. They are all used in a btrfs RAID 5.
It has come to my attention that there seem to be major flaws in btrfs'
raid 5 implementation. Because of that, I want to convert the raid 5 to a
raid 10
and
On 2016-11-29 01:48, Qu Wenruo wrote:
> For example, if sectorsize is 64K, and we make the stripe len 32K, and use a
> 3-disk RAID5, we can avoid such a write hole problem,
> without modification to the extent/chunk allocator.
>
> And I'd prefer to make stripe len a mkfs-time parameter, not possible to mod
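(Spelling out the arithmetic, as I read the proposal: a 3-disk RAID5 full
stripe is 2 data elements + 1 parity element, so with a 32K stripe len each
full stripe carries 2 * 32K = 64K of data, i.e. exactly one 64K sector.
Every sector-sized CoW write is then a full-stripe write, so parity is never
updated via read-modify-write, which is the operation that opens the write
hole.)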
On 2016-11-29 07:03, Qu Wenruo wrote:
[...]
>> Btrfs is subject to the write hole problem on disk, but any read or
>> scrub that needs to reconstruct from corrupt parity results in
>> a checksum error and EIO. So corruption is not passed up to user
>> space. Recent versions of md/mdadm supp
On Tue, Nov 29, 2016 at 09:28:07AM -0800, Eric Wheeler wrote:
> Hello All,
>
> This was running on an older 4.6.5 machine. Does anyone know if this has
> been fixed in newer kernels?
I'm guessing that this is the issue that Josef fixed here [1][2]. It's
not in any current kernel, but it should m
I would love to have the stripe element size (the per-disk portions of
logical "full" stripes) changeable online with balance anyway
(starting from 512 bytes/disk, not placing artificial, arbitrary
limitations on it at the low end).
A small stripe size (for example 4k/disk or even 512 bytes/disk if you
hap
Hi,
On 29/11/2016 at 18:20, Florian Lindner wrote:
> [...]
>
> * Any other advice? ;-)
Don't rely on RAID too much... The degraded mode is unstable even for
RAID10: you can corrupt data simply by writing to a degraded RAID10. I
could reliably reproduce this on a 6-device RAID10 BTRFS filesyste
On 2016-11-29 14:03, Lionel Bouton wrote:
Hi,
On 29/11/2016 at 18:20, Florian Lindner wrote:
[...]
* Any other advice? ;-)
Don't rely on RAID too much... The degraded mode is unstable even for
RAID10: you can corrupt data simply by writing to a degraded RAID10. I
could reliably reproduce t
On Tue, Nov 29, 2016 at 03:32:54PM +0800, Qu Wenruo wrote:
> Old btrfs qgroup test cases use fixed golden output numbers, which limits
> the coverage since they can't handle mount options like compress or
> inode_map, and cause false alerts.
>
> Introduce _btrfs_check_scratch_qgroup() function to ch
On 29.11.2016 18:54, Austin S. Hemmelgarn wrote:
> On 2016-11-29 12:20, Florian Lindner wrote:
>> Hello,
>>
>> I have 4 hard disks with 3TB capacity each. They are all used in a
>> btrfs RAID 5. It has come to my attention that there
>> seem to be major flaws in btrfs' raid 5 implementation. Becaus
On Tue, Nov 29, 2016 at 3:34 PM, Wilson Meier wrote:
> On 29.11.2016 18:54, Austin S. Hemmelgarn wrote:
>> On 2016-11-29 12:20, Florian Lindner wrote:
>>> Hello,
>>>
>>> I have 4 hard disks with 3TB capacity each. They are all used in a
>>> btrfs RAID 5. It has come to my attention that there
>>>
On Tue, Nov 29, 2016 at 01:49:09PM +0800, Qu Wenruo wrote:
> >>>My proposal requires only a modification to the extent allocator.
> >>>The behavior at the block group layer and scrub remains exactly the same.
> >>>We just need to adjust the allocator slightly to take the RAID5 CoW
> >>>constraints
On Tue, Nov 29, 2016 at 02:03:58PM +0800, Qu Wenruo wrote:
> At 11/29/2016 01:51 PM, Chris Murphy wrote:
> >On Mon, Nov 28, 2016 at 5:48 PM, Qu Wenruo wrote:
> >>
> >>
> >>At 11/19/2016 02:15 AM, Goffredo Baroncelli wrote:
> >>>
> >>>Hello,
> >>>
> >>>these are only my thoughts; no code here, but
On 29.11.2016 23:52, Chris Murphy wrote:
> On Tue, Nov 29, 2016 at 3:34 PM, Wilson Meier wrote:
>> On 29.11.2016 18:54, Austin S. Hemmelgarn wrote:
>>> On 2016-11-29 12:20, Florian Lindner wrote:
Hello,
I have 4 hard disks with 3TB capacity each. They are all used in a
btrfs R
On Tue, Nov 29, 2016 at 4:16 PM, Wilson Meier wrote:
>
>
> On 29.11.2016 23:52, Chris Murphy wrote:
>> On Tue, Nov 29, 2016 at 3:34 PM, Wilson Meier wrote:
>>> On 29.11.2016 18:54, Austin S. Hemmelgarn wrote:
On 2016-11-29 12:20, Florian Lindner wrote:
> Hello,
>
> I have 4 hardd
On 30.11.2016 00:49, Chris Murphy wrote:
> On Tue, Nov 29, 2016 at 4:16 PM, Wilson Meier wrote:
>>
>>
>> On 29.11.2016 23:52, Chris Murphy wrote:
>>> On Tue, Nov 29, 2016 at 3:34 PM, Wilson Meier
>>> wrote:
On 29.11.2016 18:54, Austin S. Hemmelgarn wrote:
> On 2016-11-29 12:20, Floria
At 11/30/2016 12:10 AM, David Sterba wrote:
On Mon, Nov 28, 2016 at 09:40:07AM +0800, Qu Wenruo wrote:
Goldwyn Rodrigues has exposed and fixed a bug which underflows btrfs
qgroup reserved space and leads to a non-writable fs.
This reminds us that we don't have enough underflow checks for qgroup
At 11/30/2016 05:01 AM, Dave Chinner wrote:
On Tue, Nov 29, 2016 at 03:32:54PM +0800, Qu Wenruo wrote:
Old btrfs qgroup test cases use fixed golden output numbers, which limits
the coverage since they can't handle mount options like compress or
inode_map, and cause false alerts.
Introduce _btrf
On Wed, Nov 30, 2016 at 08:56:03AM +0800, Qu Wenruo wrote:
>
>
> At 11/30/2016 05:01 AM, Dave Chinner wrote:
> >On Tue, Nov 29, 2016 at 03:32:54PM +0800, Qu Wenruo wrote:
> >>Old btrfs qgroup test cases use fixed golden output numbers, which limits
> >>the coverage since they can't handle mount op
Austin S. Hemmelgarn posted on Tue, 29 Nov 2016 09:58:50 -0500 as
excerpted:
> On 2016-11-29 09:32, Timofey Titovets wrote:
>> Hi, as the wiki says at https://btrfs.wiki.kernel.org/index.php/Glossary:
Bad link. Without the terminating colon it works, however.
https://btrfs.wiki.kernel.org/index.php/Glo
On Wed, 30 Nov 2016 00:16:48 +0100
Wilson Meier wrote:
> That said, btrfs shouldn't be used for anything other than raid1, as every
> other raid level has serious problems, or at least doesn't work as the
> expected raid level (in terms of failure recovery).
RAID1 shouldn't be used either:
*) Read perform
Hi David,
On 2016/11/30 0:34, David Sterba wrote:
> On Mon, Nov 28, 2016 at 04:27:06PM +0900, Tsutomu Itoh wrote:
>> Many tests in xfstests, such as btrfs/007, btrfs/008 and btrfs/016, failed
>> with the following patch.
>>
>> fefbab75 btrfs-progs: send-stream: check number of read bytes from stre
Goldwyn Rodrigues has exposed and fixed a bug which underflows btrfs
qgroup reserved space and leads to a non-writable fs.
This reminds us that we don't have enough underflow checks for qgroup
reserved space.
For the underflow case, we should not really underflow the numbers but warn
and keep qgroup s
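(To make the intended policy concrete, a minimal sketch, not the actual
kernel patch, with an illustrative function name: detect the would-be
underflow, warn, and clamp at zero instead of letting the unsigned counter
wrap around.)

#include <stdio.h>
#include <stdint.h>

static void qgroup_free_reserved(uint64_t *reserved, uint64_t num_bytes)
{
	if (*reserved < num_bytes) {
		/* Underflow detected: warn and clamp instead of wrapping. */
		fprintf(stderr,
			"warning: qgroup reserved underflow: have %llu, freeing %llu\n",
			(unsigned long long)*reserved,
			(unsigned long long)num_bytes);
		*reserved = 0;	/* keep the qgroup counter sane */
		return;
	}
	*reserved -= num_bytes;
}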
Newly introduced qgroup reserved space trace points are normally nested
inside several common qgroup operations.
However, some other trace points are not well placed to cooperate with
them, causing confusing output.
This patch re-arranges trace_btrfs_qgroup_release_data() and
trace_btrfs_qgroup_free_d
Introduce the following trace points:
qgroup_update_reserve
qgroup_meta_reserve
These trace points are handy for tracing qgroup reserved space related
problems.
Signed-off-by: Qu Wenruo
---
v2:
None
v3:
Separate from trace point timing modification patch.
v4:
Change type casting from "(s64)-nu