Apologies for the dupe, Chris; I neglected to hit Reply-All. Comments below.
On Mon, Dec 3, 2018 at 9:56 PM Chris Murphy wrote:
>
> On Mon, Dec 3, 2018 at 8:32 PM Mike Javorski wrote:
> >
> > Need a bit of advice here ladies / gents. I am running into an issue
> > which Qu Wenruo seems to have
On 3.12.18, 19:25, David Sterba wrote:
> On Sat, Nov 17, 2018 at 09:29:27AM +0800, Anand Jain wrote:
>>> - ret = find_free_dev_extent(trans, device, min_free,
>>> - _offset, NULL);
>>> - if (!ret)
>>> +
On 2018-12-04 14:59, Chris Murphy wrote:
Running 4.19.6 right now, but was experiencing the issue also with 4.18 kernels.
# btrfs device stats /data
[/dev/sda1].write_io_errs    0
[/dev/sda1].read_io_errs     0
[/dev/sda1].flush_io_errs    0
[/dev/sda1].corruption_errs  0
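For anyone scripting over this, the stats output above can be parsed into a dict; a minimal sketch, assuming the `[device].counter value` format shown above (the function name is mine, not part of btrfs-progs):

```python
def parse_device_stats(output: str) -> dict:
    """Parse `btrfs device stats` output into {device: {counter: value}}."""
    stats = {}
    for line in output.splitlines():
        line = line.strip()
        if "]." not in line:
            continue
        # e.g. "[/dev/sda1].write_io_errs    0"
        head, value = line.rsplit(maxsplit=1)
        dev, counter = head.split("].", 1)
        stats.setdefault(dev.lstrip("["), {})[counter] = int(value)
    return stats

sample = """\
[/dev/sda1].write_io_errs    0
[/dev/sda1].read_io_errs     0
[/dev/sda1].flush_io_errs    0
[/dev/sda1].corruption_errs  0
"""
print(parse_device_stats(sample))
```

A cron job could diff successive snapshots of this dict to alert on any counter that starts climbing.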
On Mon, Dec 3, 2018 at 10:44 PM Tomasz Chmielewski wrote:
>
> I'm trying to use btrfs on an external USB drive, without much success.
>
> When the drive is connected for 2-3+ days, the filesystem gets remounted
> readonly, with BTRFS saying "IO failure":
>
> [77760.444607] BTRFS error (device
On Mon, Dec 3, 2018 at 8:32 PM Mike Javorski wrote:
>
> Need a bit of advice here ladies / gents. I am running into an issue
> which Qu Wenruo seems to have posted a patch for several weeks ago
> (see https://patchwork.kernel.org/patch/10694997/).
>
> Here is the relevant dmesg output which led
I'm trying to use btrfs on an external USB drive, without much success.
When the drive is connected for 2-3+ days, the filesystem gets remounted
readonly, with BTRFS saying "IO failure":
[77760.444607] BTRFS error (device sdb1): bad tree block start, want
378372096 have 0
[77760.550933]
Apologies for not scouring the mailing list completely (just
subscribed in fact) as it appears that Patrick Dijkgraaf also ran into
this issue. He went ahead with a volume rebuild, whereas I am hoping I
can recover having not run anything more than a "btrfs scan" and
"btrfs device ready" on this
Need a bit of advice here ladies / gents. I am running into an issue
which Qu Wenruo seems to have posted a patch for several weeks ago
(see https://patchwork.kernel.org/patch/10694997/).
Here is the relevant dmesg output which led me to Qu's patch.
[ 10.032475] BTRFS critical (device
Also useful information for autopsy, perhaps not for fixing, is to
know whether the SCT ERC value for every drive is less than the
kernel's SCSI driver block device command timeout value. It's super
important that the drive reports an explicit read failure before the
read command is considered
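The comparison described here can be scripted: `smartctl -l scterc` reports the drive's error-recovery limit in tenths of a second, while the kernel's block-layer timeout is in seconds under `/sys/block/<dev>/device/timeout`. A minimal sketch of the check, with illustrative values rather than numbers from any drive in this thread:

```python
def erc_ok(erc_deciseconds: int, scsi_timeout_seconds: int) -> bool:
    """True if the drive gives up (reports an explicit read error) before
    the kernel's SCSI command timer fires and resets the link instead."""
    return erc_deciseconds / 10.0 < scsi_timeout_seconds

# Typical good case: SCT ERC of 70 deciseconds (7 s) vs the default
# 30 s kernel timeout. A desktop drive with ERC disabled can retry for
# two minutes or more, which fails the check.
print(erc_ok(70, 30))    # → True
print(erc_ok(1200, 30))  # → False
```

When the check fails, the usual options are lowering ERC with `smartctl -l scterc,70,70` or raising the sysfs timeout.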
On Mon, Dec 3, 2018 at 1:04 PM Lionel Bouton wrote:
>
> On 03/12/2018 at 20:56, Lionel Bouton wrote:
> > [...]
> > Note : recently I tried upgrading from 4.9 to 4.14 kernels, various
> > tuning of the io queue (switching between classic io-schedulers and
> > blk-mq ones in the virtual machines)
On 2018/12/4 2:20 AM, Wilson, Ellis wrote:
> Hi all,
>
> Many months ago I promised to graph how long it took to mount a BTRFS
> filesystem as it grows. I finally had (made) time for this, and the
> attached is the result of my testing. The image is a fairly
> self-explanatory graph, and
Hi,
On 12/3/18 8:56 PM, Lionel Bouton wrote:
>
> On 03/12/2018 at 19:20, Wilson, Ellis wrote:
>>
>> Many months ago I promised to graph how long it took to mount a BTRFS
>> filesystem as it grows. I finally had (made) time for this, and the
>> attached is the result of my testing. The
On Sun, Dec 02, 2018 at 03:08:36PM +, damenly...@gmail.com wrote:
> From: Su Yue
>
> Move "\n" at end of the sentence to print.
>
> Fixes: 281eec7a9ddf ("btrfs-progs: check: repair inode nbytes in lowmem mode")
> Signed-off-by: Su Yue
Applied, thanks.
On 03/12/2018 at 20:56, Lionel Bouton wrote:
> [...]
> Note : recently I tried upgrading from 4.9 to 4.14 kernels, various
> tuning of the io queue (switching between classic io-schedulers and
> blk-mq ones in the virtual machines) and BTRFS mount options
> (space_cache=v2,ssd_spread) but there
Hi,
On 03/12/2018 at 19:20, Wilson, Ellis wrote:
> Hi all,
>
> Many months ago I promised to graph how long it took to mount a BTRFS
> filesystem as it grows. I finally had (made) time for this, and the
> attached is the result of my testing. The image is a fairly
> self-explanatory graph,
On Mon, Dec 03, 2018 at 12:39:57PM +0800, Qu Wenruo wrote:
> For the reloc tree, despite its short lifespan, it's still the backref,
> where the reloc tree root backref points back to itself, that makes it special.
>
> So it's more appropriate to put them into 020-extent-ref-cases.
>
> Signed-off-by: Qu
Hi all,
Many months ago I promised to graph how long it took to mount a BTRFS
filesystem as it grows. I finally had (made) time for this, and the
attached is the result of my testing. The image is a fairly
self-explanatory graph, and the raw data is also attached in
comma-delimited format
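For anyone wanting to reproduce this kind of mount-time measurement, a minimal timing harness sketch; the command below is a placeholder (a real run would mount and unmount the test filesystem between fill steps and append each sample to the CSV):

```python
import subprocess
import sys
import time

def time_command(cmd):
    """Run cmd and return (returncode, elapsed wall-clock seconds)."""
    start = time.monotonic()
    proc = subprocess.run(cmd)
    return proc.returncode, time.monotonic() - start

# Placeholder no-op command; a real measurement would use something
# like ["mount", "/dev/sdX", "/mnt/test"], with sync + unmount and a
# cache drop between samples so each mount starts cold.
rc, elapsed = time_command([sys.executable, "-c", "pass"])
print(rc, elapsed)
```

Dropping caches (`echo 3 > /proc/sys/vm/drop_caches`) between samples matters, since a warm mount reads far less metadata.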
On Sat, Nov 17, 2018 at 09:29:27AM +0800, Anand Jain wrote:
> > - ret = find_free_dev_extent(trans, device, min_free,
> > - _offset, NULL);
> > - if (!ret)
> > + if (!find_free_dev_extent(trans,
On Tue, Nov 20, 2018 at 01:50:54PM +0100, David Sterba wrote:
> The first cleanup part went to 4.19, the actual switch from the custom
> locking to rswem was postponed as I found performance degradation. This
> turned out to be related to VM cache settings, so I'm resending the
> series again.
>
Hi,
this is a pre-release of btrfs-progs, 4.19.1-rc1. There are build fixes, minor
update to libbtrfsutil and documentation updates.
The 4.19.1 release is scheduled for this Wednesday, +2 days (2018-12-05).
Changelog:
* build fixes
* big-endian builds fail due to bswap helpers clash
*
On Mon, Dec 3, 2018, at 4:31 AM, Stefan Malte Schumacher wrote:
> I have noticed an unusual amount of crc-errors in downloaded rars,
> beginning about a week ago. But let's start with the preliminaries. I
> am using Debian Stretch.
> Kernel: Linux mars 4.9.0-8-amd64 #1 SMP Debian 4.9.110-3+deb9u4
The cleaner thread usually takes care of delayed iputs, with the
exception of the btrfs_end_transaction_throttle path. The cleaner
thread only gets woken up every 30 seconds, so instead wake it up to do
its work so that we can free up that space as quickly as possible.
Reviewed-by: Filipe
Delayed iputs mean we can have final iputs of deleted inodes in the
queue, which could potentially generate a lot of pinned space that could
be free'd. So before we decide to commit the transaction for ENOSPC
reasons, run the delayed iputs so that any potential space is free'd up.
If there is
v1->v2:
- only wakeup if the cleaner isn't currently doing work.
- re-arranged some stuff for running delayed iputs during flushing.
- removed the open code wakeup in the waitqueue patch.
-- Original message --
Here are some delayed iput fixes. Delayed iputs can hold reservations for a
while
The throttle path doesn't take cleaner_delayed_iput_mutex, which means
we could think we're done flushing iputs in the data space reservation
path when we could have a throttler doing an iput. There's no real
reason to serialize the delayed iput flushing, so instead of taking the
We could generate a lot of delayed refs in evict but never have any left
over space from our block rsv to make up for that fact. So reserve some
extra space and give it to the transaction so it can be used to refill
the delayed refs rsv every loop through the truncate path.
Signed-off-by: Josef
may_commit_transaction will skip committing the transaction if we don't
have enough pinned space or if we're trying to find space for a SYSTEM
chunk. However if we have pending free block groups in this transaction
we still want to commit as we may be able to allocate a chunk to make
our
With my change to no longer take into account the global reserve for
metadata allocation chunks we have this side-effect for mixed block
group fs'es where we are no longer allocating enough chunks for the
data/metadata requirements. To deal with this add an ALLOC_CHUNK_FORCE
step to the flushing
v1->v2:
- addressed comments from reviewers.
- fixed a bug in patch 6 that was introduced because of changes to upstream.
-- Original message --
The delayed refs rsv patches exposed a bunch of issues in our enospc
infrastructure that needed to be addressed. These aren't really one coherent
For enospc_debug having the block rsvs is super helpful to see if we've
done something wrong.
Signed-off-by: Josef Bacik
Reviewed-by: Omar Sandoval
Reviewed-by: David Sterba
---
fs/btrfs/extent-tree.c | 15 +++
1 file changed, 15 insertions(+)
diff --git a/fs/btrfs/extent-tree.c
With severe fragmentation we can end up with our inode rsv size being
huge during writeout, which would cause us to need to make very large
metadata reservations. However we may not actually need that much once
writeout is complete. So instead try to make our reservation, and if we
couldn't make
The should_alloc_chunk code has math in it to decide if we're getting
short on space and if we should go ahead and pre-emptively allocate a
new chunk. Previously when we did not have the delayed_refs_rsv, we had
to assume that the global block rsv was essentially used space and could
be allocated
For FLUSH_LIMIT flushers (think evict, truncate) we can deadlock when
running delalloc because we may be holding a tree lock. We can also
deadlock with delayed refs rsv's that are running via the committing
mechanism. The only safe operations for FLUSH_LIMIT are to run the
delayed operations and
With the introduction of the per-inode block_rsv it became possible to
have really really large reservation requests made because of data
fragmentation. Since the ticket stuff assumed that we'd always have
relatively small reservation requests it just killed all tickets if we
were unable to
With the delayed_refs_rsv we can now know exactly how much pending
delayed refs space we need. This means we can drastically simplify
btrfs_check_space_for_delayed_refs by simply checking how much space we
have reserved for the global rsv (which acts as a spill over buffer) and
the delayed
We have a bunch of magic to make sure we're throttling delayed refs when
truncating a file. Now that we have a delayed refs rsv and a mechanism
for refilling that reserve simply use that instead of all of this magic.
Reviewed-by: Nikolay Borisov
Signed-off-by: Josef Bacik
---
fs/btrfs/inode.c
Over the years we have built up a lot of infrastructure to keep delayed
refs in check, mostly by running them at btrfs_end_transaction() time.
We have a lot of different maths we do to figure out how much, if we
should do it inline or async, etc. This existed because we had no
feedback mechanism
From: Josef Bacik
We were missing some quota cleanups in check_ref_cleanup, so break the
ref head accounting cleanup into a helper and call that from both
check_ref_cleanup and cleanup_ref_head. This will hopefully ensure that
we don't screw up accounting in the future for other things that we
v1->v2:
- addressed the comments from the various reviewers.
- split "introduce delayed_refs_rsv" into 5 patches. The patches are the same
together as they were, just split out more logically. They can't really be
bisected across in that you will likely have fun enospc failures, but they
Any space used in the delayed_refs_rsv will be freed up by a transaction
commit, so instead of just counting the pinned space we also need to
account for any space in the delayed_refs_rsv when deciding if it will
make a difference to commit the transaction to satisfy our space
reservation. If we
From: Josef Bacik
Traditionally we've had voodoo in btrfs to account for the space that
delayed refs may take up by having a global_block_rsv. This works most
of the time, except when it doesn't. We've had issues reported and seen
in production where sometimes the global reserve is exhausted
From: Josef Bacik
We use this number to figure out how many delayed refs to run, but
__btrfs_run_delayed_refs really only checks every time we need a new
delayed ref head, so we always run at least one ref head completely no
matter what the number of items on it. Fix the accounting to only be
From: Josef Bacik
The cleanup_extent_op function actually would run the extent_op if it
needed running, which made the name sort of a misnomer. Change it to
run_and_cleanup_extent_op, and move the actual cleanup work to
cleanup_extent_op so it can be used by check_ref_cleanup() in order to
From: Josef Bacik
We do this dance in cleanup_ref_head and check_ref_cleanup, unify it
into a helper and cleanup the calling functions.
Signed-off-by: Josef Bacik
Reviewed-by: Omar Sandoval
---
fs/btrfs/delayed-ref.c | 14 ++
fs/btrfs/delayed-ref.h | 3 ++-
A nice thing we gain with the delayed refs rsv is the ability to flush
the delayed refs on demand to deal with enospc pressure. Add states to
flush delayed refs on demand, and this will allow us to remove a lot of
ad-hoc work around checking to see if we should commit the transaction
to run our
On 3.12.18, 12:25, Nikolay Borisov wrote:
> When extent_readpages is called from the generic readahead code it first
> builds a batch of 16 pages (which might or might not be consecutive,
> depending on whether add_to_page_cache_lru failed) and submits them to
> __extent_readpages. The
On 2018/12/3 5:31 PM, Stefan Malte Schumacher wrote:
> Hello,
>
> I have noticed an unusual amount of crc-errors in downloaded rars,
> beginning about a week ago. But let's start with the preliminaries. I
> am using Debian Stretch.
> Kernel: Linux mars 4.9.0-8-amd64 #1 SMP Debian 4.9.110-3+deb9u4
On 28/11/2018 16:41, David Sterba wrote:
> On Wed, Nov 28, 2018 at 09:54:55AM +0100, Johannes Thumshirn wrote:
>> In map_private_extent_buffer() use offset_in_page() to initialize
>> 'start_offset' instead of open-coding it.
>
> Can you please fix all instances where it's opencoded? Grepping for
When extent_readpages is called from the generic readahead code it first
builds a batch of 16 pages (which might or might not be consecutive,
depending on whether add_to_page_cache_lru failed) and submits them to
__extent_readpages. The latter ensures that the range of pages (in the
batch of 16)
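The batching behaviour described here can be pictured with a toy sketch; this is pure illustration of the grouping, since the real readahead code hands batches of struct page to __extent_readpages, not Python lists:

```python
def batches(pages, batch_size=16):
    """Yield consecutive batches of at most batch_size items, mirroring
    how the readahead path hands groups of 16 pages at a time to the
    lower layer. The last batch may be short."""
    for i in range(0, len(pages), batch_size):
        yield pages[i:i + batch_size]

# 40 pages split into batches of 16:
print([len(b) for b in batches(list(range(40)))])  # → [16, 16, 8]
```

Note that in the kernel the 16 pages in a batch need not be contiguous in the file, which is exactly the case the patch discussion is about.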
I've been running into (what I believe) is the same issue ever since
upgrading to 4.19:
[28950.083040] BTRFS error (device dm-0): bad tree block start, want
1815648960512 have 0
[28950.083047] BTRFS: error (device dm-0) in __btrfs_free_extent:6804:
errno=-5 IO failure
[28950.083048] BTRFS info
Hello,
I have noticed an unusual amount of crc-errors in downloaded rars,
beginning about a week ago. But let's start with the preliminaries. I
am using Debian Stretch.
Kernel: Linux mars 4.9.0-8-amd64 #1 SMP Debian 4.9.110-3+deb9u4
(2018-08-21) x86_64 GNU/Linux
BTRFS-Tools btrfs-progs 4.7.3-1