If we were going to reserve something, it should be a high number, not
a low one. Having 0 reserved makes some sense, but reserving other
low numbers seems kind of odd when they aren't already reserved.
I did some experiments.
Currently, assigning a higher-level qgroup to a lower-level qgroup is
Austin S. Hemmelgarn posted on Tue, 28 Mar 2017 07:44:56 -0400 as
excerpted:
> On 2017-03-27 21:49, Qu Wenruo wrote:
>> The problem is, how should we treat subvolume.
>>
>> Btrfs subvolume sits in the middle of directory and (logical) volume
>> used in traditional stacked solution.
>>
>> While we
On Mon, Mar 27, 2017 at 07:07:15PM +0200, David Sterba wrote:
> On Fri, Mar 24, 2017 at 12:13:42PM -0700, Liu Bo wrote:
> > In the raid56 scenario, after trying parity recovery, we didn't set
> > mirror_num for btrfs_bio with failed mirror_num, hence
> > end_bio_extent_readpage() will report a random mi
On Mon, Mar 27, 2017 at 06:59:44PM +0200, David Sterba wrote:
> On Fri, Mar 24, 2017 at 12:13:35PM -0700, Liu Bo wrote:
> > Now that scrub can fix data errors with the help of parity for raid56
> > profile, repair during read is able to as well.
> >
> > Although the mirror num in raid56 scenario ha
There's no known taint on a filesystem that was RAID5/6 once but has
since been converted to something non-experimental.
Signed-off-by: Adam Borowski
---
fs/btrfs/disk-io.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/fs/btrfs/disk-io.c b/fs/btrfs/disk-io.c
index a4c3e6628ec1..c720a7bc1
Too many people come complaining about losing their data -- and indeed,
there's no warning outside a wiki and the mailing list's tribal knowledge.
Message severity was chosen for consistency with XFS -- "alert" makes dmesg
produce a nice red background, which should get the point across.
Signed-off-by: Adam Borowski
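A minimal illustration of the severity choice, assuming the btrfs_alert()
printk wrapper from ctree.h; the message text here is an assumption, not
the patch's actual wording:

	/* KERN_ALERT-level message; dmesg renders this class with a red
	 * background, matching what XFS does for experimental features */
	btrfs_alert(fs_info,
		    "RAID5/6 support is experimental, data loss is possible");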
This patchset can be fetched from my github repo:
https://github.com/adam900710/linux.git raid56_fixes
It's based on v4.11-rc2; the last two patches were modified according to
advice from Liu Bo.
The patchset fixes the following bugs:
1) False alert or wrong csum error number when scrubbing RAID5
When scrubbing a RAID5 which has recoverable data corruption (only one
data stripe is corrupted), scrub will sometimes report more csum errors
than expected, and sometimes even report unrecoverable errors.
The problem can be easily reproduced by the following steps:
1) Create a btrfs with RAID5
Unlike mirror-based profiles, RAID5/6 recovery needs to read out the
whole full stripe, and without proper protection this can easily cause
race conditions.
Introduce 2 new functions, lock_full_stripe() and unlock_full_stripe(),
for RAID5/6. These store an rb_tree of mutexes for full stripes, so s
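A minimal sketch of the idea; the structure layout and the helpers
insert_full_stripe_lock() and get_full_stripe_logical() are assumptions
about the shape of the patch, not the patch itself:

	struct full_stripe_lock {
		struct rb_node node;	/* keyed by full stripe start */
		u64 logical;
		u64 refs;
		struct mutex mutex;
	};

	static int lock_full_stripe(struct btrfs_fs_info *fs_info, u64 bytenr)
	{
		struct full_stripe_lock *fsl;

		/* round bytenr down to the logical start of its full stripe,
		 * then find-or-insert the entry under a tree-wide spinlock */
		fsl = insert_full_stripe_lock(fs_info,
					      get_full_stripe_logical(bytenr));
		if (IS_ERR(fsl))
			return PTR_ERR(fsl);

		mutex_lock(&fsl->mutex);
		return 0;
	}

unlock_full_stripe() would then unlock the mutex, drop a reference, and
free the rb_tree entry once refs reaches zero.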
In the following situation, scrub will calculate wrong parity and
overwrite the correct one:
RAID5 full stripe:
Before
|     Dev 1     |     Dev 2     |     Dev 3     |
| Data stripe 1 | Data stripe 2 | Parity Stripe |
------------------------------------------------- 0
| 0x (Bad)      | 0x
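The underlying math makes the failure mode easy to see: RAID5 parity is
the byte-wise XOR of all data stripes, so recomputing parity while a data
stripe still holds bad bytes clobbers the correct parity on disk. A
standalone illustration in plain C (not btrfs code):

	#include <stddef.h>
	#include <stdint.h>

	/* parity[i] = data[0][i] ^ data[1][i] ^ ... for every byte of
	 * the stripe; feeding a still-corrupted data stripe in here is
	 * exactly how good on-disk parity gets overwritten with garbage */
	static void raid5_compute_parity(uint8_t *parity,
					 const uint8_t *const *data,
					 size_t nr_data, size_t stripe_len)
	{
		for (size_t off = 0; off < stripe_len; off++) {
			uint8_t p = 0;
			for (size_t i = 0; i < nr_data; i++)
				p ^= data[i][off];
			parity[off] = p;
		}
	}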
scrub_setup_recheck_block() calls btrfs_map_sblock() and then accesses
the bbio without the protection of bio_counter.
This can lead to a use-after-free if racing with dev-replace cancel.
Fix it by increasing bio_counter before calling btrfs_map_sblock() and
decreasing it when the corresponding recov
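A minimal sketch of the pattern, assuming the v4.11-era signatures of
btrfs_bio_counter_inc_blocked(), btrfs_map_sblock() and btrfs_put_bbio()
(the exact argument lists are an assumption):

	/* hold a bio_counter reference across the mapping and every use
	 * of the returned bbio, so a concurrent dev-replace cancel
	 * cannot free the target device underneath us */
	btrfs_bio_counter_inc_blocked(fs_info);
	ret = btrfs_map_sblock(fs_info, BTRFS_MAP_GET_READ_MIRRORS,
			       logical, &mapped_length, &bbio, 0, 1);
	if (ret || !bbio || !bbio->raid_map) {
		btrfs_put_bbio(bbio);
		btrfs_bio_counter_dec(fs_info);
		return -EIO;
	}
	/* ... recheck/recovery work uses bbio here ... */
	btrfs_bio_counter_dec(fs_info);	/* once the recovery completes */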
When raid56 dev replace is cancelled by a running scrub, we will free the
target device without waiting for in-flight bios, causing the following
NULL pointer dereference or general protection fault.
BUG: unable to handle kernel NULL pointer dereference at 05e0
IP: generic_make_request_checks+0x4d/0x6
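The shape of the fix, sketched under the assumption that fs_info carries
the percpu bio_counter and replace_wait waitqueue it has in v4.11; the
helper name is hypothetical:

	static void wait_replace_bios_sketch(struct btrfs_fs_info *fs_info)
	{
		/* stop new bios from entering the dev-replace path, then
		 * wait until every in-flight bio has dropped its
		 * bio_counter reference; only after that is it safe to
		 * free the target device */
		set_bit(BTRFS_FS_STATE_DEV_REPLACING, &fs_info->fs_state);
		wait_event(fs_info->replace_wait,
			   percpu_counter_sum(&fs_info->bio_counter) == 0);
	}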
At 03/29/2017 01:52 AM, Jitendra wrote:
Hi Qu/All,
I am looking into in-memory in-band dedupe. I have cloned your git tree
from the following URLs.
Linux Tree: https://github.com/adam900710/linux.git wang_dedupe_latest
btrfs-progs: git@github.com:adam900710/btrfs-progs.git dedupe_20170316
On Tue, Mar 28, 2017 at 06:40:25PM -0400, J. Hart wrote:
> Don't be embarrassed.
> I'm a native speaker and still have trouble with most explanations. :-)
You should try writing them. ;)
Hugo ("darkling").
>
> On 03/28/2017 06:01 PM, Jakob Schürz wrote:
> >Thanks for that explanati
Don't be embarrassed.
I'm a native speaker and still have trouble with most explanations. :-)
On 03/28/2017 06:01 PM, Jakob Schürz wrote:
Thanks for that explanation.
I'm sure I didn't understand the -c option... and my English is
good enough for most things I need to know in
Thanks for that explanation.
I'm sure I didn't understand the -c option... and my English is
good enough for most things I need to know in Linux things... but
not for this. :-(
On 2017-03-26 at 22:07, Peter Grandi wrote:
> [ ... ]
>> BUT if I take a snapshot from the system, and want
On Tue, Mar 28, 2017 at 02:50:06PM +0200, David Sterba wrote:
> On Mon, Mar 06, 2017 at 12:23:30PM -0800, Liu Bo wrote:
> > Btrfs creates hole extents to cover any unwritten section right before
> > doing buffer writes after commit 3ac0d7b96a26 ("btrfs: Change the expanding
> > write sequence to fi
Hi Qu/All,
I am looking into in-memory in-band dedupe. I have cloned your git tree from
the following URLs.
Linux Tree: https://github.com/adam900710/linux.git wang_dedupe_latest
btrfs-progs: git@github.com:adam900710/btrfs-progs.git dedupe_20170316
Then ran the following test:
https://btr
On Tue, Mar 14, 2017 at 04:26:11PM +0800, Anand Jain wrote:
> The objective of this patch is to cleanup barrier_all_devices()
> so that the error checking is in a separate loop, independent of
> the loop which submits and waits on the device flush requests.
I think that getting completely rid of
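A hedged sketch of the two-loop split being discussed; treating
write_dev_flush()/wait_dev_flush() as separate submit and wait helpers is
an assumption about the shape of the cleanup, not the patch itself:

	static int barrier_all_devices_sketch(struct btrfs_fs_info *info)
	{
		struct btrfs_device *dev;
		int errors = 0;

		/* pass 1: submit a REQ_PREFLUSH to every device without
		 * waiting, so the flushes run in parallel */
		list_for_each_entry(dev, &info->fs_devices->devices, dev_list)
			write_dev_flush(dev);

		/* pass 2: wait for each completion and do all the error
		 * checking in one place */
		list_for_each_entry(dev, &info->fs_devices->devices, dev_list)
			if (wait_dev_flush(dev))
				errors++;

		return errors ? -EIO : 0;
	}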
On 2017-03-28 10:43, Peter Grandi wrote:
This is going to be long because I am writing something detailed
hoping pointlessly that someone in the future will find it by
searching the list archives while doing research before setting
up a new storage system, and they will be the kind of person
that
On Mon, Mar 13, 2017 at 03:42:12PM +0800, Anand Jain wrote:
> The only error with which write dev flush (send) can fail is
> ENOMEM; as it's not a device-specific error but a system-wide
> issue, we should stop further iterations and propagate the
> -ENOMEM error to the cal
I’ve glazed over on “Not only that …” … can you make a YouTube video of that? :
> On 28 Mar 2017, at 16:06, Peter Grandi wrote:
>
>> I glazed over at “This is going to be long” … :)
>>> [ ... ]
>
> Not only that, you also top-posted while quoting it pointlessly
> in its entirety, to the whole m
On 2017-03-28 09:53, Marat Khalili wrote:
There are a couple of reasons I'm advocating the specific behavior I
outlined:
Some of your points are valid, but some break current behaviour and
expectations or create technical difficulties.
1. It doesn't require any specific qgroup setup. By defi
> I glazed over at “This is going to be long” … :)
>> [ ... ]
Not only that, you also top-posted while quoting it pointlessly
in its entirety, to the whole mailing list. Well played :-).
On Mon, Mar 13, 2017 at 03:42:11PM +0800, Anand Jain wrote:
> The REQ_PREFLUSH bio used to flush the dev cache goes through the
> btrfs_end_empty_barrier() completion callback only, as of now, and
> that is where dev stat flush errors (BTRFS_DEV_STAT_FLUSH_ERRS) are
> accounted, so remove the accounting from btrfs_end_bio().
Can you plea
> [ ... ] slaps together a large storage system in the cheapest
> and quickest way knowing that while it is mostly empty it will
> seem very fast regardless and therefore to have awesome
> performance, and then the "clever" sysadm disappears surrounded
> by a halo of glory before the storage syste
> [ ... ] reminded of all the cases where someone left me to
> decatastrophize a storage system built on "optimistic"
> assumptions.
In particular when some "clever" sysadm with a "clever" (or
dumb) manager slaps together a large storage system in the
cheapest and quickest way knowing that while i
I glazed over at “This is going to be long” … :)
> On 28 Mar 2017, at 15:43, Peter Grandi wrote:
>
> This is going to be long because I am writing something detailed
> hoping pointlessly that someone in the future will find it by
> searching the list archives while doing research before setting
This is going to be long because I am writing something detailed
hoping pointlessly that someone in the future will find it by
searching the list archives while doing research before setting
up a new storage system, and they will be the kind of person
that tolerates reading messages longer than Twi
There are a couple of reasons I'm advocating the specific behavior I
outlined:
Some of your points are valid, but some break current behaviour and
expectations or create technical difficulties.
1. It doesn't require any specific qgroup setup. By definition, you
can be 100% certain that the d
All callers (there is only one) pass the same value.
Signed-off-by: David Sterba
---
fs/btrfs/ctree.c | 10 +-
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
index 7dc8844037e0..d034d47c5470 100644
--- a/fs/btrfs/ctree.c
+++ b/fs/btrfs/ctree.c
@@ -567,7
All callers (there is only one) pass the same value.
Signed-off-by: David Sterba
---
fs/btrfs/ctree.c | 10 +-
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/fs/btrfs/ctree.c b/fs/btrfs/ctree.c
index d034d47c5470..165e7ec12af7 100644
--- a/fs/btrfs/ctree.c
+++ b/fs/btrfs/ctree.c
@@ -663,7
There are several operations, usually started from ioctls, that cannot
run concurrently. The status is tracked in
mutually_exclusive_operation_running as an atomic_t. We can easily track
the status as one of the per-filesystem flag bits with the same
synchronization guarantees.
The conversion replaces
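A hedged sketch of what such a conversion looks like; BTRFS_FS_EXCL_OP as
the bit name is an assumption about naming, and the surrounding code is
illustrative only:

	/* before: a dedicated atomic_t, set with a full-barrier exchange */
	if (atomic_xchg(&fs_info->mutually_exclusive_operation_running, 1))
		return BTRFS_ERROR_DEV_EXCL_RUN_IN_PROGRESS;
	/* ... run the exclusive operation ... */
	atomic_set(&fs_info->mutually_exclusive_operation_running, 0);

	/* after: one bit in the per-filesystem flags word;
	 * test_and_set_bit() is also a full barrier, so the
	 * synchronization guarantees are unchanged */
	if (test_and_set_bit(BTRFS_FS_EXCL_OP, &fs_info->flags))
		return BTRFS_ERROR_DEV_EXCL_RUN_IN_PROGRESS;
	/* ... run the exclusive operation ... */
	clear_bit(BTRFS_FS_EXCL_OP, &fs_info->flags);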
On Mon, Mar 06, 2017 at 12:23:30PM -0800, Liu Bo wrote:
> Btrfs creates hole extents to cover any unwritten section right before
> doing buffer writes after commit 3ac0d7b96a26 ("btrfs: Change the expanding
> write sequence to fix snapshot related bug.").
>
> However, that takes the start position
All callers pass 0 for mirror_num and 1 for need_raid_map.
Signed-off-by: David Sterba
---
fs/btrfs/scrub.c | 6 +++---
fs/btrfs/volumes.c | 6 ++
fs/btrfs/volumes.h | 3 +--
3 files changed, 6 insertions(+), 9 deletions(-)
diff --git a/fs/btrfs/scrub.c b/fs/btrfs/scrub.c
index b0251eb123
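The shape of the cleanup, sketched with the v4.11-era prototype as an
assumption (the real declaration lives in fs/btrfs/volumes.h and may
differ):

	/* before: every caller passed mirror_num = 0, need_raid_map = 1 */
	int btrfs_map_sblock(struct btrfs_fs_info *fs_info,
			     enum btrfs_map_op op,
			     u64 logical, u64 *length,
			     struct btrfs_bio **bbio_ret,
			     int mirror_num, int need_raid_map);

	/* after: the constants are folded into the implementation and
	 * the two parameters are dropped */
	int btrfs_map_sblock(struct btrfs_fs_info *fs_info,
			     enum btrfs_map_op op,
			     u64 logical, u64 *length,
			     struct btrfs_bio **bbio_ret);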
On 2017-03-28 08:00, Marat Khalili wrote:
The default should be to inherit the qgroup of the parent subvolume.
This behaviour is only good for this particular use-case. In the general
case, qgroups of a subvolume and its snapshots should exist separately, and
both can be included in some higher level qgro
On Mon, Mar 27, 2017 at 01:13:46PM -0500, Goldwyn Rodrigues wrote:
>
>
> On 03/27/2017 12:36 PM, David Sterba wrote:
> > On Mon, Mar 27, 2017 at 12:29:57PM -0500, Goldwyn Rodrigues wrote:
> >> From: Goldwyn Rodrigues
> >>
> >> We are facing the same problem with EDQUOT which was experienced with
The default should be to inherit the qgroup of the parent subvolume.
This behaviour is only good for this particular use-case. In the general
case, qgroups of a subvolume and its snapshots should exist separately, and
both can be included in some higher level qgroup (after all, that's what
qgroup hierarc
On 2017-03-27 21:49, Qu Wenruo wrote:
At 03/27/2017 08:01 PM, Austin S. Hemmelgarn wrote:
On 2017-03-27 07:02, Moritz Sichert wrote:
Am 27.03.2017 um 05:46 schrieb Qu Wenruo:
At 03/27/2017 11:26 AM, Andrei Borzenkov wrote:
On 27.03.2017 03:39, Qu Wenruo wrote:
At 03/26/2017 06:03 AM, Mori
On Thu, Mar 16, 2017 at 10:04:32AM -0600, ednadol...@gmail.com wrote:
> From: Edmund Nadolski
>
> This series replaces several hard-coded values with descriptive
> symbols.
>
> ---
> v2:
> + rename SEQ_NONE to SEQ_LAST and move definition to ctree.h
> + clarify comment at __merge_refs()
>
> E
On 2017-03-27 15:32, Chris Murphy wrote:
How about if qgroups are enabled, then non-root user is prevented from
creating new subvolumes?
Or is there a way for a new nested subvolume to be included in its
parent's quota, rather than the new subvolume having a whole new quota
limit?
Tricky proble
On Mon, Mar 27, 2017 at 10:07:20PM +0100, sly...@gmail.com wrote:
> From: Sergei Trofimovich
>
> The easiest way to reproduce the error is to try to build
> btrfs-progs with
> $ make LDFLAGS=-Wl,--no-undefined
>
> btrfs-list.o: In function `lookup_ino_path':
> btrfs-list.c:(.text+0x7
On 3/27/17, David Sterba wrote:
> On Sat, Mar 25, 2017 at 09:48:28AM +0300, Denis Kirjanov wrote:
>> On 3/25/17, Jeff Mahoney wrote:
>> > On 3/24/17 5:02 AM, Denis Kirjanov wrote:
>> >> Hi guys,
>> >>
>> >> Looks like that current code does GFP_KERNEL allocation inside
>> >> __link_block_group.
>