On Mon, Jan 23, 2017 at 5:05 PM, Omar Sandoval wrote:
> Thanks! Hmm, okay, so it's coming from btrfs_update_delayed_inode()...
> That's probably us failing btrfs_lookup_inode(), but just to make sure,
> could you apply the updated diff at the same link as before
>
Introduce a new macro, BTRFS_SB_OFFSET(), to calculate the backup
superblock offset. This is handy if one wants to initialize a static
array at declaration time.
Suggested-by: David Sterba
Signed-off-by: Qu Wenruo
---
disk-io.h | 10 --
1 file
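For readers unfamiliar with the backup superblock layout, here is a minimal sketch of what such a macro could look like. The macro name and the use case come from the patch description above; the body is a hypothetical reconstruction based on the well-known btrfs mirror offsets (primary at 64KiB, mirrors at 64MiB and 256GiB), not the actual patch contents.

```c
#include <stdint.h>

typedef uint64_t u64;

#define BTRFS_SUPER_INFO_OFFSET  (64 * 1024ULL)  /* primary super at 64KiB */
#define BTRFS_SUPER_MIRROR_SHIFT 12

/*
 * Hypothetical reconstruction: a constant expression, so it can be used
 * in a static array initializer.  Mirror 0 is the primary superblock;
 * mirrors 1 and 2 land at 64MiB and 256GiB respectively.
 */
#define BTRFS_SB_OFFSET(mirror) \
	((mirror) ? ((u64)16384 << (BTRFS_SUPER_MIRROR_SHIFT * (mirror))) \
		  : BTRFS_SUPER_INFO_OFFSET)

/* the use case named above: a static array initialized at declaration time */
static const u64 sb_offsets[3] = {
	BTRFS_SB_OFFSET(0), BTRFS_SB_OFFSET(1), BTRFS_SB_OFFSET(2),
};
```

Because the macro expands to a constant expression, it works where a function call like btrfs_sb_offset() cannot, i.e. in static initializers.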
Large numbers like (1024 * 1024 * 1024) cost the reader/reviewer a
moment to convert them to 1G.
Introduce kernel include/linux/sizes.h and replace any intermediate
number larger than 4096 (not including 4096) with SZ_*.
Signed-off-by: Qu Wenruo
---
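As an illustration of the conversion this patch describes (the SZ_* values below are copied from include/linux/sizes.h; the variable name is made up for the example):

```c
#include <stdint.h>

typedef uint64_t u64;

/* a few of the SZ_* constants defined in include/linux/sizes.h */
#define SZ_64K 0x00010000
#define SZ_1M  0x00100000
#define SZ_1G  0x40000000

/* before: the reader has to multiply in their head */
static const u64 cache_max_before = 1024 * 1024 * 1024;

/* after: the magnitude is readable at a glance */
static const u64 cache_max_after = SZ_1G;
```

Both definitions are the same value; only readability changes.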
At 01/24/2017 01:54 AM, David Sterba wrote:
On Mon, Dec 19, 2016 at 02:56:41PM +0800, Qu Wenruo wrote:
Since we have the whole facilities needed to rollback, switch to the new
rollback.
Sorry, the change from patch 4 to patch 5 seems too big for me to grasp,
reviewing is really hard and I'm
On Mon, Jan 23, 2017 at 04:48:54PM -0700, Chris Murphy wrote:
> On Mon, Jan 23, 2017 at 3:04 PM, Omar Sandoval wrote:
> > On Mon, Jan 23, 2017 at 02:55:21PM -0700, Chris Murphy wrote:
> >> On Mon, Jan 23, 2017 at 2:50 PM, Chris Murphy
> >> > I haven't found the commit for
On Mon, Jan 23, 2017 at 3:04 PM, Omar Sandoval wrote:
> On Mon, Jan 23, 2017 at 02:55:21PM -0700, Chris Murphy wrote:
>> On Mon, Jan 23, 2017 at 2:50 PM, Chris Murphy
>> > I haven't found the commit for that patch, so maybe it's something
>> > with the combination of that
On Mon, 2017-01-23 at 18:18 -0500, Chris Mason wrote:
> We've been focusing on the single-drive use cases internally. This
> year
> that's changing as we ramp up more users in different places.
> Performance/stability work and raid5/6 are the top of my list right
> now.
+1
Would be nice to
On Mon, Jan 23, 2017 at 06:53:21PM +0100, Christoph Anton Mitterer wrote:
Just wondered... is there any larger known RAID56 deployment? I mean
something with real-world production systems and ideally many different
IO scenarios, failures, pulling disks randomly and perhaps even so
many disks
On Mon, Jan 23, 2017 at 02:55:21PM -0700, Chris Murphy wrote:
> On Mon, Jan 23, 2017 at 2:50 PM, Chris Murphy
> > I haven't found the commit for that patch, so maybe it's something
> > with the combination of that patch and the previous commit.
>
> I think that's provably not the case based on
On Mon, Jan 23, 2017 at 2:50 PM, Chris Murphy
> I haven't found the commit for that patch, so maybe it's something
> with the combination of that patch and the previous commit.
I think that's provably not the case based on the bisect log, because
I hit the problem with a kernel that has only the
On Mon, Jan 23, 2017 at 2:31 PM, Omar Sandoval wrote:
> On Wed, Jan 18, 2017 at 02:27:13PM -0700, Chris Murphy wrote:
>> On Wed, Jan 11, 2017 at 4:13 PM, Chris Murphy
>> wrote:
>> > Looks like there's some sort of xattr and Btrfs interaction
On Wed, Jan 18, 2017 at 02:27:13PM -0700, Chris Murphy wrote:
> On Wed, Jan 11, 2017 at 4:13 PM, Chris Murphy wrote:
> > Looks like there's some sort of xattr and Btrfs interaction happening
> > here; but as it only happens with some subvolumes/snapshots not all
> > (but
OK, so all of these pass the original check, but have problems reported by
lowmem. Separate notes about each inline.
~500MiB each; these three are data volumes, the first two are raid1, the
third one is single.
https://drive.google.com/open?id=0B_2Asp8DGjJ9Z3UzWnFKT3A0clU
On 01/23/2017 09:27 PM, Hans van Kranenburg wrote:
> [... press send without rereading ...]
>
> Anyway, it seems to point to something that's going wrong with changes
> that are *not* on disk *yet*, and the crash is preventing ...
... whatever incorrect data this situation might result in from
On 01/23/2017 09:03 PM, Matt McKinnon wrote:
> Wondering what to do about this error which says 'reboot needed'. Has
> happened three times in the past week:
>
> Jan 23 14:16:17 my_machine kernel: [ 2568.595648] BTRFS error (device
> sda1): err add delayed dir index item(index: 23810) into the
Wondering what to do about this error which says 'reboot needed'. Has
happened three times in the past week:
Jan 23 14:16:17 my_machine kernel: [ 2568.595648] BTRFS error (device
sda1): err add delayed dir index item(index: 23810) into the deletion
tree of the delayed node(root id: 257,
On Mon, Dec 19, 2016 at 02:56:41PM +0800, Qu Wenruo wrote:
> Since we have the whole facilities needed to rollback, switch to the new
> rollback.
Sorry, the change from patch 4 to patch 5 seems too big for me to grasp,
reviewing is really hard and I'm not sure I could even do that. My
concern is
Just wondered... is there any larger known RAID56 deployment? I mean
something with real-world production systems and ideally many different
IO scenarios, failures, pulling disks randomly and perhaps even so
many disks that it's also likely to hit something like silent data
corruption (on the
On Mon, Dec 19, 2016 at 02:56:38PM +0800, Qu Wenruo wrote:
> +static u64 reserved_range_starts[3] = { 0, BTRFS_SB_MIRROR_OFFSET(1),
> + BTRFS_SB_MIRROR_OFFSET(2) };
> +static u64 reserved_range_lens[3] = { 1024 * 1024, 64 * 1024, 64 * 1024 };
Also anywhere in
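Applying the sizes.h conversion proposed earlier in this thread, the quoted initializers could read as follows. This is a sketch: the SZ_* values match include/linux/sizes.h, but the BTRFS_SB_MIRROR_OFFSET body is a hypothetical reconstruction from the standard mirror offsets (64MiB and 256GiB), not the patch's actual definition.

```c
#include <stdint.h>

typedef uint64_t u64;

#define SZ_64K 0x00010000
#define SZ_1M  0x00100000

/* hypothetical reconstruction: mirrors 1 and 2 sit at 64MiB and 256GiB */
#define BTRFS_SB_MIRROR_OFFSET(mirror) ((u64)16384 << (12 * (mirror)))

static const u64 reserved_range_starts[3] = { 0, BTRFS_SB_MIRROR_OFFSET(1),
					      BTRFS_SB_MIRROR_OFFSET(2) };
static const u64 reserved_range_lens[3] = { SZ_1M, SZ_64K, SZ_64K };
```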
On Mon, Dec 19, 2016 at 02:56:38PM +0800, Qu Wenruo wrote:
> Introduce basic set operations: is_subset() and is_intersection().
>
> This is quite useful to check if a range [start, start + len) is a subset
> or an intersection of another range.
> So we don't need to open-code it, which I
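The described semantics for half-open ranges [start, start + len) might be sketched like this (the signatures are assumed for illustration, not taken from the patch):

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t u64;

/* is [start, start + len) entirely inside [o_start, o_start + o_len)?
 * (overflow of start + len is ignored in this sketch) */
static bool is_subset(u64 start, u64 len, u64 o_start, u64 o_len)
{
	return start >= o_start && start + len <= o_start + o_len;
}

/* do two half-open ranges share at least one byte? */
static bool is_intersection(u64 s1, u64 l1, u64 s2, u64 l2)
{
	return s1 < s2 + l2 && s2 < s1 + l1;
}
```

Note that with half-open ranges, two ranges that merely touch end-to-start do not intersect.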
On Fri, Jan 20, 2017 at 01:03:33PM -0600, Goldwyn Rodrigues wrote:
> From: Goldwyn Rodrigues
>
> While performing a memcpy, we are copying from uninitialized dst
> as opposed to src->data. Though using eb->len is correct, I used
> src->len to make it more readable.
>
>
On Mon, 23 Jan 2017 14:15:55 +0100
Simon Waid wrote:
> I have a btrfs raid5 array that has become unmountable.
That's the third time you've sent this today. Will you keep resending every
few hours until you get a reply? That's not how mailing lists work.
--
With respect,
Dear all,
I have a btrfs raid5 array that has become unmountable. When trying to
mount, dmesg contains the following:
[ 5686.334384] BTRFS info (device sdb): disk space caching is enabled
[ 5688.377244] BTRFS info (device sdb): bdev /dev/sdb errs: wr 2517, rd
77, flush 0, corrupt 0, gen 0
[
On 01/18/2017 02:11 PM, Christoph Groth wrote:
> Goldwyn Rodrigues wrote:
>> Thanks Christoph for the backtrace. I am unable to reproduce it, but
>> looking at your backtrace, I found a bug. Would you be able to give it
>> a try and check if it fixes the problem?
>
> I applied your patch to
Hello again
By the way, the init-extent-tree is still running (now almost 7 days).
Is there any chance to find out how long it will take in the end?
Sebastian
Am 20.01.2017 um 02:08 schrieb Qu Wenruo:
At 01/19/2017 06:06 PM, Sebastian Gottschall wrote:
Hello
I have a question. after a
Btrfs lowmem check can report false csum errors like:
ERROR: root 5 EXTENT_DATA[257 0] datasum missing
ERROR: root 5 EXTENT_DATA[257 4096] prealloc shouldn't have datasum
This is because the lowmem check code always compares the found csum size
with the whole extent that the data extent item points to.
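The fix implied above can be illustrated with a hypothetical helper (not the actual patch code): only the written portion of a preallocated extent should be expected to carry checksums, since unwritten ranges have none.

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t u64;

/*
 * Hypothetical helper illustrating the idea: CRC32C csums are 4 bytes
 * per 4KiB sector, and preallocated-but-unwritten ranges carry no csum
 * at all, so for a prealloc extent only the written part should be
 * counted when checking found csum size.
 */
static u64 expected_csum_bytes(u64 extent_len, u64 written_len, bool prealloc)
{
	u64 covered = prealloc ? written_len : extent_len;

	return covered / 4096 * 4;
}
```

Comparing against the whole extent length, as lowmem did, yields both of the false alerts quoted above: "datasum missing" for written-then-preallocated tails, and "prealloc shouldn't have datasum" for written heads.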
From: Lu Fengqi
Current common.local doesn't handle lowmem mode well.
It passes "--mode=lowmem" along with "--repair", making it unable to
check lowmem mode.
It's caused by the following bugs:
1) Wrong variable in test/common.local
We should check TEST_ARGS_CHECK,
Add a minimal image which can reproduce the block group used space
false alert for lowmem mode fsck.
Reported-by: Christoph Anton Mitterer
Signed-off-by: Qu Wenruo
---
.../block_group_item_false_alert.raw.xz | Bin 0 -> 47792 bytes
Since btrfs_search_slot() can point to a slot beyond the leaf's last
item, in the following case btrfs lowmem mode check will
skip the block group and report a false alert:
leaf 29405184 items 37 free space 1273 generation 11 owner 2
...
item 36 key (77594624 EXTENT_ITEM
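The pitfall described above can be shown with a toy model (not the real btrfs tree code): a search for a key that sorts after everything in a leaf returns a slot equal to nritems, i.e. one past the end, and a caller that uses that slot without checking reads the wrong item or skips the entry entirely.

```c
#include <stdint.h>

typedef uint64_t u64;

/* toy model of a leaf holding nritems sorted keys */
struct toy_leaf {
	u64 keys[8];
	int nritems;
};

/*
 * Returns the slot where the key is or would be inserted.  Like the
 * real btrfs_search_slot(), the result may equal nritems, pointing
 * one past the last item -- the caller must handle that (e.g. move to
 * the next leaf) instead of dereferencing the slot blindly.
 */
static int toy_search_slot(const struct toy_leaf *leaf, u64 key)
{
	int slot = 0;

	while (slot < leaf->nritems && leaf->keys[slot] < key)
		slot++;
	return slot;
}

static const struct toy_leaf example_leaf = {
	.keys = { 10, 20, 30 },
	.nritems = 3,
};
```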
The test case fsck-tests/015-check-bad-memory-access can't be repaired by
btrfs check, and a fortunate bug makes original mode forget the
error code from the extent tree, letting original mode pass it.
So fuzz-tests is a more suitable place for it.
Signed-off-by: Qu Wenruo
---
fsck-tests/013-extent-tree-rebuild uses "--init-extent-tree", which
implies "--repair".
But the test script doesn't specify "--repair", so the lowmem mode test
can't detect it.
Add it so the lowmem mode test can be happy with it.
Signed-off-by: Qu Wenruo
---
This is a bug found in lowmem mode, which reports a false alert for a
partly written prealloc extent.
Reported-by: Chris Murphy
Signed-off-by: Qu Wenruo
---
tests/fsck-tests/020-extent-ref-cases/test.sh | 15 +++
1 file changed, 15
Patches can be fetch from github:
https://github.com/adam900710/btrfs-progs/tree/lowmem_fixes
Although there are nearly 10 patches, they are all small.
No need to be scared. :)
Thanks for reports from Chris Murphy and Christoph Anton Mitterer,
several new bugs are exposed for lowmem mode fsck.
If an extent item has no inline refs, btrfs lowmem mode check can give a
false alert without outputting any error message.
The problem is that lowmem mode always assumes an extent item has inline
refs; when it encounters such a case it flags the extent item as having
a wrong size, but doesn't output the
We output errors like "errors found in extent allocation tree or
chunk allocation", but we lack such output for other trees, leaving only
the final "found error is %d" to catch the last return value (and
sometimes it's cleared).
This patch adds extra error messages for the top-level error paths,
On Mon, Jan 23, 2017 at 7:57 AM, Brendan Hide wrote:
>
> raid0 stripes data in 64k chunks (I think this size is tunable) across all
> devices, which is generally far faster in terms of throughput in both
> writing and reading data.
I remember seeing some proposals for
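The 64k striping described above can be made concrete with a small mapping function. This is an idealized raid0 layout across equal-size devices, for illustration only, not btrfs's actual chunk mapping code.

```c
#include <stdint.h>

typedef uint64_t u64;

#define STRIPE_LEN (64 * 1024ULL)	/* the 64k stripe element cited above */

/*
 * Idealized raid0 mapping: logical bytes are dealt out to ndevs devices
 * in STRIPE_LEN-sized pieces, round-robin.  Sequential I/O therefore
 * touches all devices, which is where the throughput gain comes from.
 */
static void raid0_map(u64 logical, int ndevs, int *dev, u64 *dev_off)
{
	u64 stripe_nr = logical / STRIPE_LEN;

	*dev = (int)(stripe_nr % ndevs);
	*dev_off = (stripe_nr / ndevs) * STRIPE_LEN + logical % STRIPE_LEN;
}
```

For example, with two devices, logical bytes 0-64KiB land on device 0, the next 64KiB on device 1, and so on.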
Hey, all
Long-time lurker/commenter here. Production-ready RAID5/6 and N-way
mirroring are the two features I've been anticipating most, so I've
commented regularly when this sort of thing pops up. :)
I'm only addressing some of the RAID-types queries as Qu already has a
handle on the