On 30.03.2021 09:16 Wang Yugui wrote:
Hi,
On 30.03.21, 9:24, Wang Yugui wrote:
Hi, Nikolay Borisov
With a lot of dump_stack()/printk calls inserted around ENOMEM in the btrfs
code, we found the call stack for the ENOMEM.
see the file -btrfs-dump_stack-when-ENOMEM.patch
#cat /usr/hpc-bio/xfstests/
On 11.03.2021 18:58 Martin Raiber wrote:
On 01.02.2021 23:08 Martin Raiber wrote:
On 27.01.2021 22:03 Chris Murphy wrote:
On Wed, Jan 27, 2021 at 10:27 AM Martin Raiber wrote:
Hi,
seems 5.10.8 still has the ENOSPC issue when compression is used
(compress-force=zstd,space_cache=v2):
Jan 27
On 29.03.2021 19:25 Henning Schild wrote:
> On Mon, 29 Mar 2021 19:30:34 +0300,
> Andrei Borzenkov wrote:
>
>> On 29.03.2021 16:16, Claudius Heine wrote:
>>> Hi,
>>>
>>> I am currently investigating the possibility to use `btrfs-stream`
>>> files (generated by `btrfs send`) for deploying an image
On 11.03.2021 15:43 Filipe Manana wrote:
> On Wed, Mar 10, 2021 at 5:18 PM Martin Raiber wrote:
>> Hi,
>>
>> I have this in a btrfs directory. Linux kernel 5.10.16, no errors in dmesg,
>> no scrub errors:
>>
>> ls -lh
>> total 19G
>>
On 01.02.2021 23:08 Martin Raiber wrote:
> On 27.01.2021 22:03 Chris Murphy wrote:
>> On Wed, Jan 27, 2021 at 10:27 AM Martin Raiber wrote:
>>> Hi,
>>>
>>> seems 5.10.8 still has the ENOSPC issue when compression is used
>>> (compress-force=zstd,spa
missing the parent directory fsync).
So far no negative consequences... (except that programs might get confused).
echo 3 > /proc/sys/vm/drop_caches doesn't help.
Regards,
Martin Raiber
On 26.02.2021 18:00 David Sterba wrote:
> On Fri, Jan 08, 2021 at 12:02:48AM +0000, Martin Raiber wrote:
>> When reading from a btrfs file via io_uring, I get the following
>> call traces:
>>
>> [<0>] wait_on_page_bit+0x12b/0x270
>> [<0>
ver, I don't really care
if it has to iterate over all block group metadata after mount for a few
seconds, if that means it issues fewer write IOs for every write. The
calculus obviously changes for a hard disk, where reading this metadata
would take forever due to low IOPS.
Regards,
Martin Raiber
On 27.01.2021 22:03 Chris Murphy wrote:
> On Wed, Jan 27, 2021 at 10:27 AM Martin Raiber wrote:
>> Hi,
>>
>> seems 5.10.8 still has the ENOSPC issue when compression is used
>> (compress-force=zstd,space_cache=v2):
>>
>> Jan 27 11:02:14 kernel:
UKS-RC-a6414fd731ce4f878af44c3987bce533 1.00MiB
Regards,
Martin Raiber
On 12.01.2021 18:01 Pavel Begunkov wrote:
On 12/01/2021 15:36, David Sterba wrote:
On Fri, Jan 08, 2021 at 12:02:48AM +0000, Martin Raiber wrote:
When reading from a btrfs file via io_uring, I get the following
call traces:
Is there a way to reproduce this with common tools (fio) or is a specialized
one
t
8730f12b7962b21ea9ad2756abce1e205d22db84 ("btrfs: flag files as
supporting buffered async reads") with 5.9. This way, io_uring will
read the data via worker threads if it cannot be read without
synchronous IO.
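For the reproduction question above (whether common tools like fio can trigger this), a job file along these lines might exercise the same buffered io_uring read path; this is a sketch, not the actual reproducer, and the filename, sizes, and queue depth are assumptions:

```ini
; hypothetical fio job: buffered random reads through io_uring on a btrfs file
; (requires fio built with io_uring support; path and sizes are assumptions)
[btrfs-uring-read]
ioengine=io_uring
rw=randread
buffered=1
filename=/mnt/btrfs/testfile
size=1g
bs=4k
iodepth=32
```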
Signed-off-by: Martin Raiber
---
fs/btrfs/file.c | 15 +--
1 fil
On 07.07.2019 14:15 Qu Wenruo wrote:
>
> On 2019/7/6 4:28 AM, Martin Raiber wrote:
>> More research on this. Seems a generic error reporting mechanism for
>> this is in the works https://lkml.org/lkml/2018/6/1/640 .
> sync() system call is defined as void sync(void);
lication to think
data is on disk even though it isn't.
On 05.07.2019 16:22 Martin Raiber wrote:
> Hi,
>
> I realize this isn't a btrfs-specific problem, but syncfs() returns no
> error even on complete fs failure. The problem is (I think) that the
> return value of sb->
tem
changes to disk.
For btrfs there is a workaround of using BTRFS_IOC_SYNC (which I am
going to use now), but that is obviously less user-friendly than syncfs().
Regards,
Martin Raiber
I've fixed the same problem(s) by increasing the global metadata size as
well. Though I haven't encountered them since Josef Bacik's block rsv
rework in 5.0.
Another problem with increasing the global metadata size is that it is,
I think, the only way dirty metadata is throttled. If increased too
mu
On 23.05.2019 19:41 Austin S. Hemmelgarn wrote:
> On 2019-05-23 13:31, Martin Raiber wrote:
>> On 23.05.2019 19:13 Austin S. Hemmelgarn wrote:
>>> On 2019-05-23 12:24, Chris Murphy wrote:
>>>> On Thu, May 23, 2019 at 5:19 AM Austin S. Hemmelgarn
>>>> wro
On 23.05.2019 19:13 Austin S. Hemmelgarn wrote:
> On 2019-05-23 12:24, Chris Murphy wrote:
>> On Thu, May 23, 2019 at 5:19 AM Austin S. Hemmelgarn
>> wrote:
>>>
>>> On 2019-05-22 14:46, Cerem Cem ASLAN wrote:
Could you confirm or refute the following explanation:
https://unix.stackexch
On 26.03.2019 14:37 Qu Wenruo wrote:
> On 2019/3/26 6:24 PM, berodual_xyz wrote:
>> Mount messages below.
>>
>> Thanks for your input, Qu!
>>
>> ##
>> [42763.884134] BTRFS info (device sdd): disabling free space tree
>> [42763.884138] BTRFS info (device sdd): force clearing of disk cache
>> [42763.8
On 14.03.2019 23:20 Chris Murphy wrote:
> If you install btrfs-progs 4.20+ you'll see the documentation for
> supporting swapfiles on Btrfs, supported in kernel 5.0+. `man 5 btrfs`
>
> Anyone with access to the wiki should update the FAQ
> https://btrfs.wiki.kernel.org/index.php/FAQ#Does_btrfs_supp
thout patching.
Regards,
Martin Raiber
On 06.02.2019 01:22 Qu Wenruo wrote:
> On 2019/2/6 6:18 AM, Stephen R. van den Berg wrote:
>> Are these Sysreq+w dumps not usable?
>>
> Sorry for the late reply.
>
> The hang looks pretty strange, and doesn't really look like previous
> deadlock caused by tree block locking.
> But some strange behav
On 14.12.2018 09:07 ethanlien wrote:
> Martin Raiber wrote on 2018-12-12 23:22:
>> On 12.12.2018 15:47 Chris Mason wrote:
>>> On 28 May 2018, at 1:48, Ethan Lien wrote:
>>>
>>> It took me a while to trigger, but this actually deadlocks ;) More
>>> below.
e to make progress on page
> writeback.
>
I had lockups with this patch as well. If you put e.g. a loop device on
top of a btrfs file, loop sets PF_LESS_THROTTLE to avoid a feedback
loop causing delays. The task balancing dirty pages in
btrfs_finish_ordered_io doesn't have the flag and causes slow-downs. In
my case it managed to cause a feedback loop where it queues other
btrfs_finish_ordered_io and gets stuck completely.
Regards,
Martin Raiber
On 08.09.2018 at 18:24, Adam Borowski wrote:
> On Thu, Sep 06, 2018 at 06:08:33AM -0400, Austin S. Hemmelgarn wrote:
>> On 2018-09-06 03:23, Nathan Dehnel wrote:
>>> So I guess my question is, does btrfs support atomic writes across
>>> multiple files? Or is anyone interested in such a feature?
>>
7fe655700 R09: 0101
[Fri Aug 17 16:21:06 2018] R10: 56521bf7c0cc R11: 0246 R12: 7f67fd6d6440
[Fri Aug 17 16:21:06 2018] R13: 7f67fd6d5900 R14: 0064 R15: 0000
Regards,
Martin Raiber
On 02.08.2018 14:27 Austin S. Hemmelgarn wrote:
> On 2018-08-02 06:56, Qu Wenruo wrote:
>>
>> On 2018-08-02 18:45, Andrei Borzenkov wrote:
>>>
>>> Sent from iPhone
>>>
On Aug 2, 2018, at 10:02, Qu Wenruo wrote:
> On 2018-08-01 11:45, MegaBrutal wrote:
> Hi all,
On 10.07.2018 09:04 Pete wrote:
> I've just had the error in the subject which caused the file system to
> go read-only.
>
> Further part of error message:
> WARNING: CPU: 14 PID: 1351 at fs/btrfs/extent-tree.c:3076
> btrfs_run_delayed_refs+0x163/0x190
>
> 'Screenshot' here:
> https://drive.google.
On 08.01.2018 19:34 Austin S. Hemmelgarn wrote:
> On 2018-01-08 13:17, Graham Cobb wrote:
>> On 08/01/18 16:34, Austin S. Hemmelgarn wrote:
>>> Ideally, I think it should be as generic as reasonably possible,
>>> possibly something along the lines of:
>>>
>>> A: While not strictly necessary, runnin
them with btrfs_should_throttle_delayed_refs. Maybe by
creating a snapshot of a file and then modifying it (some action that
creates delayed refs, is not truncate (which is already throttled), and
does not commit a transaction (which is also throttled)).
Regards,
Martin Raiber
--
To unsubscribe from this list: sen
On 03.12.2017 16:39 Martin Raiber wrote:
> On 26.11.2017 at 17:02, Tomasz Chmielewski wrote:
>> On 2017-11-27 00:37, Martin Raiber wrote:
>>> On 26.11.2017 08:46 Tomasz Chmielewski wrote:
>>>> Got this one on a 4.14-rc7 filesystem with some 400 GB left:
>>>
On 26.11.2017 at 17:02, Tomasz Chmielewski wrote:
> On 2017-11-27 00:37, Martin Raiber wrote:
>> On 26.11.2017 08:46 Tomasz Chmielewski wrote:
>>> Got this one on a 4.14-rc7 filesystem with some 400 GB left:
>> I guess it is too late now, but I guess the "btrfs fi
t completely sure
it was btrfs's fault and as usual not all the conditions may be
relevant. It could instead be an upper-layer error (Hyper-V storage), a
memory issue, or an application error.
Regards,
Martin Raiber
On 02.11.2017 16:10 Hans van Kranenburg wrote:
> On 11/02/2017 04:02 PM, Martin Raiber wrote:
>> snapshot cleanup is a little slow in my case (50TB volume). Would it
>> help to have multiple btrfs-cleaner threads? The block layer underneath
>> would have higher throughput w
Hi,
snapshot cleanup is a little slow in my case (50TB volume). Would it
help to have multiple btrfs-cleaner threads? The block layer underneath
would have higher throughput with more simultaneous read/write requests.
Regards,
Martin Raiber
On 19.10.2017 10:16 Vladimir Panteleev wrote:
> On Tue, 17 Oct 2017 16:21:04 -0700, Duncan wrote:
>> * try the balance on 4.14-rc5+, where the known bug should be fixed
>
> Thanks! However, I'm getting the same error on
> 4.14.0-rc5-g9aa0d2dde6eb. The stack trace is different, though:
>
> Aside fro
ther file operations?
>
As far as I can see, it only uses the log tree in some cases where the
log tree was already used for the file or the parent directory. The
cases are documented here
https://github.com/torvalds/linux/blob/master/fs/btrfs/tree-log.c#L45 .
So rename isn't much heavier
tall the client in the VM. It excludes unnecessary
stuff, e.g. page files or the shadow storage area, from the image
backups, and also has a mode to store image backups as raw btrfs files.
Linux VMs I'd backup as files either from the hypervisor or from in VM.
If you want to backup
all dirty
>> data pages to disk, and then commit transaction.
>> While only calling btrfs_commit_transacation() doesn't trigger dirty
>> page writeback.
>>
>> So there is a difference.
this conversation made me realize why btrfs has sub-optimal meta-data
performa
On 08.02.2017 14:08 Austin S. Hemmelgarn wrote:
> On 2017-02-08 07:14, Martin Raiber wrote:
>> Hi,
>>
>> On 08.02.2017 03:11 Peter Zaitsev wrote:
>>> Out of curiosity, I see one problem here:
>>> If you're doing snapshots of the live database, each
s snapshots shouldn't be much behind
the properly snapshotted state, so I see the advantages more in
usability and in taking care of corner cases automatically.
Regards,
Martin Raiber
On 04.01.2017 00:43 Hans van Kranenburg wrote:
> On 01/04/2017 12:12 AM, Peter Becker wrote:
>> Good hint, this would be an option and I will try this.
>>
>> Regardless of this, curiosity has gotten the better of me and I will try
>> to figure out where the problem with the low transfer rate is.
>>
>> 2017-01
that, and will lower it if it still
becomes a problem.
Perhaps it would be good to somehow show that the "global reserve" belongs
to metadata, and to show in btrfs fi usage/df that metadata is full if
global reserve >= free metadata, so that future users are not as confused
by this situation as I was.
Regards,
Martin Raiber
On 22.11.2016 15:16 Martin Raiber wrote:
> ...
> Interestingly,
> after running "btrfs check --repair", "df" shows 0 free space (Used
> 516456408, Available 0), which is inconsistent with the other btrfs
> free-space information below.
>
> btrfs fi usa
Hi,
I have a file system which is currently broken because of ENOSPC issues.
It is a single-device file system with no compression and no quotas
enabled, but with some snapshots. It was created, and the initial
ENOSPC/free-space inconsistency occurred, with 4.4.20 and 4.4.30 (both
vanilla).
Currently I am on 4.9.0
On 20.07.2016 11:15 Libor Klepáč wrote:
> Hello,
> we use backuppc to back up our hosting machines.
>
> I have recently migrated it to btrfs, so we can use send-receive for offsite
> backups of our backups.
>
> I have several btrfs volumes, each hosting an nspawn container, which runs in
> /system subv
artin
From ebc5e8721264823a0df92b31e5fb1381f7f5e6f8 Mon Sep 17 00:00:00 2001
From: Martin Raiber
Date: Fri, 25 Sep 2015 13:24:13 +0200
Subject: [PATCH 1/1] btrfs: test for incremental send after file unlink and
then cloning
Creating a snapshot, then removing a file and cloning it back to its
origina
Hi,
this looks like http://www.spinics.net/lists/linux-btrfs/msg26774.html
which seems to be fixed, but it occurs on the latest stable kernel for
me (3.16.2):
[3.648344] BTRFS info (device xvda3): metadata ratio 4
[3.648350] BTRFS info (device xvda3): not using ssd allocation scheme
[
48 matches