Re: [patch V3 13/37] mips/mm/highmem: Switch to generic kmap atomic

2021-01-08 Thread Thomas Bogendoerfer
On Sat, Jan 09, 2021 at 12:58:05AM +0100, Thomas Bogendoerfer wrote: > On Fri, Jan 08, 2021 at 08:20:43PM +0000, Paul Cercueil wrote: > > Hi Thomas, > > > > 5.11 does not boot anymore on Ingenic SoCs, I bisected it to this commit. > > > > Any idea what could be happening? > > not yet, kernel crash log of a Malta QEMU is below. …

Re: [patch V3 13/37] mips/mm/highmem: Switch to generic kmap atomic

2021-01-08 Thread Thomas Bogendoerfer
On Fri, Jan 08, 2021 at 08:20:43PM +0000, Paul Cercueil wrote: > Hi Thomas, > > 5.11 does not boot anymore on Ingenic SoCs, I bisected it to this commit. > > Any idea what could be happening? not yet, kernel crash log of a Malta QEMU is below. Thomas. Kernel bug detected[#1]: CPU: 0 PID: 1 Comm: …

Re: [patch V3 13/37] mips/mm/highmem: Switch to generic kmap atomic

2021-01-08 Thread Paul Cercueil
Hi Thomas, 5.11 does not boot anymore on Ingenic SoCs, I bisected it to this commit. Any idea what could be happening? Cheers, -Paul

Re: Re: Raid1 of a slow hdd and a fast(er) SSD, howto to prioritize the SSD?

2021-01-08 Thread Andrea Gelmini
On Fri, Jan 8, 2021 at 09:36, wrote: > What happens when I poison one of the drives in the mdadm array using this > command? Will all data come out OK? > dd if=/dev/urandom of=/dev/sdb1 bs=1M count=100 Well, (it happens) the same thing as when your laptop is stolen or you read "…
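The question quoted above (what happens when one mdadm raid1 member is overwritten behind md's back) can be explored with md's scrub interface. A minimal sketch, assuming a hypothetical array /dev/md0 with member /dev/sdb1; these commands destroy data and are for illustration only:

```shell
# DESTRUCTIVE -- illustration only. Assumes an existing RAID1 array
# /dev/md0 whose member /dev/sdb1 we deliberately corrupt.
dd if=/dev/urandom of=/dev/sdb1 bs=1M count=100

# Trigger a scrub so md compares the two mirrors:
echo check > /sys/block/md0/md/sync_action

# Once the scrub finishes, a non-zero mismatch_cnt shows the copies
# differ. md keeps no checksums, so it cannot tell which copy is good;
# a "repair" pass simply copies one mirror over the other.
cat /sys/block/md0/md/mismatch_cnt
```

This is the contrast the btrfs side of the thread draws: btrfs raid1 checksums every block, so on a read it can detect the corrupt copy and fall back to the intact one.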

BTRFS and *CACHE setup [was Re: [RFC][PATCH V4] btrfs: preferred_metadata: preferred device for metadata]

2021-01-08 Thread Goffredo Baroncelli
On 1/8/21 6:30 PM, Goffredo Baroncelli wrote: On 1/8/21 2:05 AM, Zygo Blaxell wrote: On Thu, May 28, 2020 at 08:34:47PM +0200, Goffredo Baroncelli wrote: [...] I've been testing these patches for a while now. They enable an interesting use case that can't otherwise be done safely, sanely or cheaply with btrfs. …

Re: [RFC][PATCH V4] btrfs: preferred_metadata: preferred device for metadata

2021-01-08 Thread Goffredo Baroncelli
On 1/8/21 2:05 AM, Zygo Blaxell wrote: On Thu, May 28, 2020 at 08:34:47PM +0200, Goffredo Baroncelli wrote: [...] I've been testing these patches for a while now. They enable an interesting use case that can't otherwise be done safely, sanely or cheaply with btrfs. Thanks Zygo for this …

Re: [PATCH 00/13] Serious fixes for different error paths

2021-01-08 Thread David Sterba
On Wed, Dec 16, 2020 at 11:22:04AM -0500, Josef Bacik wrote: > Hello, > > A lot of these were in previous versions of the relocation error handling > patches. I added a few since the last go around. All of these do not rely on > the error handling patches, and some of them are quite important …

Re: Improve balance command

2021-01-08 Thread Hugo Mills
On Fri, Jan 08, 2021 at 02:30:52PM +0000, Claudius Ellsel wrote: > Hello, > > currently I am slowly adding drives to my filesystem (RAID1). This process is > incremental, since I am copying files off them to the btrfs filesystem and > then adding the free drive to it afterwards. Since RAID1 needs double the space, …
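The incremental add-and-rebalance workflow discussed in this thread can be sketched with standard btrfs-progs commands (the mount point and device name are hypothetical):

```shell
# Add the newly emptied drive to the mounted filesystem:
btrfs device add /dev/sdX /mnt

# Spread existing chunks over the enlarged pool; the usage filters
# restrict the balance to mostly-empty chunks, which is far cheaper
# than a full balance:
btrfs balance start -dusage=50 -musage=50 /mnt

# Inspect per-device allocation afterwards:
btrfs filesystem usage /mnt
```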

[PATCH] btrfs: no need to run delayed refs after commit_fs_roots

2021-01-08 Thread David Sterba
The inode number cache has been removed in this dev cycle, there's one more leftover. We don't need to run the delayed refs again after commit_fs_roots as stated in the comment, because btrfs_save_ino_cache is no more since 5297199a8bca ("btrfs: remove inode number cache feature"). Nothing else …

Re: [PATCH v5 2/8] btrfs: only let one thread pre-flush delayed refs in commit

2021-01-08 Thread David Sterba
On Fri, Dec 18, 2020 at 02:24:20PM -0500, Josef Bacik wrote: > I've been running a stress test that runs 20 workers in their own > subvolume, which are running an fsstress instance with 4 threads per > worker, which is 80 total fsstress threads. In addition to this I'm > running balance in the background …
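The stress setup described above (20 subvolume-local fsstress workers, 4 processes each, with a balance running concurrently) could be reconstructed roughly as follows; fsstress comes from xfstests, and all paths and counts here are illustrative guesses, not the author's exact script:

```shell
MNT=/mnt/btrfs-test

# 20 workers, each confined to its own subvolume, 4 fsstress
# processes per worker = 80 fsstress processes in total:
for i in $(seq 1 20); do
    btrfs subvolume create "$MNT/worker-$i"
    fsstress -d "$MNT/worker-$i" -p 4 -n 100000 &
done

# Balance running in the background at the same time:
btrfs balance start --full-balance "$MNT" &
wait
```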

Hello,

2021-01-08 Thread camille jackson
Greetings, my friend, I hope you are well; please reply to me. Thanks,

Re: [PATCH v3] btrfs: shrink delalloc pages instead of full inodes

2021-01-08 Thread David Sterba
On Thu, Jan 07, 2021 at 05:08:30PM -0500, Josef Bacik wrote: > Commit 38d715f494f2 ("btrfs: use btrfs_start_delalloc_roots in > shrink_delalloc") cleaned up how we do delalloc shrinking by utilizing > some infrastructure we have in place to flush inodes that we use for > device replace and snapshot …

Re: KASAN: null-ptr-deref Write in start_transaction

2021-01-08 Thread David Sterba
On Fri, Jan 08, 2021 at 02:22:00PM +0000, Filipe Manana wrote: > On Thu, Jan 7, 2021 at 1:13 PM syzbot > wrote: > > > > syzbot suspects this issue was fixed by commit: > > > > commit f30bed83426c5cb9fce6cabb3f7cc5a9d5428fcc > > Author: Filipe Manana > > Date: Fri Nov 13 11:24:17 2020 +0000 > > …

Improve balance command

2021-01-08 Thread Claudius Ellsel
Hello, currently I am slowly adding drives to my filesystem (RAID1). This process is incremental, since I am copying files off them to the btrfs filesystem and then adding the free drive to it afterwards. Since RAID1 needs double the space, I added an empty 12TB drive and also had a head start …
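The point that RAID1 "needs double the space" can be made precise: every btrfs raid1 chunk lives on two different devices, so usable capacity is bounded both by half the total and by how much space can be paired against the largest drive. A minimal sketch of that estimate (the closed form of the greedy pairing, not the allocator btrfs actually runs):

```python
def raid1_usable(devices):
    """Estimate usable bytes of a btrfs raid1 profile.

    Every chunk is mirrored on two different devices, so capacity is
    limited either by half the total space or by the space available
    to pair against the largest device.
    """
    total = sum(devices)
    largest = max(devices)
    return min(total // 2, total - largest)

# The situation from the thread: one empty 12 TB drive plus smaller
# drives emptied and added one at a time (sizes here are examples).
TB = 10**12
print(raid1_usable([12 * TB]))                  # single device: nothing usable in raid1
print(raid1_usable([12 * TB, 4 * TB]))          # limited by the smaller drive
print(raid1_usable([12 * TB, 4 * TB, 4 * TB]))  # limited by pairing against the 12 TB
```

This also illustrates why adding each freed drive immediately helps: with only the 12 TB drive present, raid1 can store nothing, and each added drive raises the pairable capacity.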

Re: KASAN: null-ptr-deref Write in start_transaction

2021-01-08 Thread Filipe Manana
On Thu, Jan 7, 2021 at 1:13 PM syzbot wrote: > > syzbot suspects this issue was fixed by commit: > > commit f30bed83426c5cb9fce6cabb3f7cc5a9d5428fcc > Author: Filipe Manana > Date: Fri Nov 13 11:24:17 2020 +0000 > > btrfs: remove unnecessary attempt to drop extent maps after adding inline …

Re: KASAN: null-ptr-deref Write in start_transaction

2021-01-08 Thread David Sterba
On Fri, Jan 08, 2021 at 10:17:25AM +0100, Dmitry Vyukov wrote: > On Thu, Jan 7, 2021 at 2:11 PM syzbot > wrote: > > > > syzbot suspects this issue was fixed by commit: > > > > commit f30bed83426c5cb9fce6cabb3f7cc5a9d5428fcc > > Author: Filipe Manana > > Date: Fri Nov 13 11:24:17 2020 +0000 > > …

Re: [PATCH] btrfs: fixup read_policy latency

2021-01-08 Thread David Sterba
On Wed, Jan 06, 2021 at 03:08:15PM +0800, Anand Jain wrote: > In the meantime, since I have sent the base patch as below [1], the > block layer commit 0d02129e76ed (block: merge struct block_device and > struct hd_struct) has changed the first argument in the function > part_stat_read_all() to struct block_device …

Re: Re: Raid1 of a slow hdd and a fast(er) SSD, howto to prioritize the SSD?

2021-01-08 Thread Zygo Blaxell
On Fri, Jan 08, 2021 at 09:36:13AM +0100, wrote: > > --- Original message --- > From: Andrea Gelmini > Date: 08.01.2021 09:16:26 > To: cedric.dew...@eclipso.eu > Subject: Re: Raid1 of a slow hdd and a fast(er) SSD, howto to prioritize the > SSD? > > On Tue, Jan 5, 2021 at 07…

Re: should btrfs reserve some space for root, so a normal user can't cause "no space left" problems?

2021-01-08 Thread Andrei Borzenkov
On 08.01.2021 10:56, cedric.dew...@eclipso.eu writes: > I have done a test where I filled up an entire btrfs raid 1 filesystem as a > normal user. Then I simulated a failing drive. It turned out I was unable to > replace the drive, as raid1 needs free space on both drives. See this mail for > details …
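A practical takeaway from this thread is to check unallocated (not merely free) space before attempting a device replace; a short sketch using btrfs-progs, with a hypothetical mount point:

```shell
# "Unallocated" per device is what a replace or a new raid1 chunk
# needs; "free" space inside already-allocated chunks does not help:
btrfs filesystem usage /mnt

# If everything is allocated but chunks are mostly empty, a filtered
# balance hands nearly-empty data chunks back to the unallocated pool:
btrfs balance start -dusage=10 /mnt
```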

Re: Re: Raid1 of a slow hdd and a fast(er) SSD, howto to prioritize the SSD?

2021-01-08 Thread
--- Original message --- From: Andrea Gelmini Date: 08.01.2021 09:16:26 To: cedric.dew...@eclipso.eu Subject: Re: Raid1 of a slow hdd and a fast(er) SSD, howto to prioritize the SSD? On Tue, Jan 5, 2021 at 07:44, wrote: > > Is there a way to tell btrfs to leave the slow hdd alone, and to prioritize the SSD? …

Re: Raid1 of a slow hdd and a fast(er) SSD, howto to prioritize the SSD?

2021-01-08 Thread Andrea Gelmini
On Tue, Jan 5, 2021 at 07:44, wrote: > > Is there a way to tell btrfs to leave the slow hdd alone, and to prioritize > the SSD? You can use mdadm to do this (I have been using this feature for years in setups where I have to fall back on USB disks for any reason). From the manpage: …
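The mdadm feature referred to here is the write-mostly flag (optionally combined with write-behind), which steers reads to the fast member. A sketch with hypothetical device names:

```shell
# Create a RAID1 where the HDD is marked write-mostly: reads are
# served from the SSD whenever possible. write-behind additionally
# lets writes to the slow member lag (it requires a write-intent
# bitmap, hence --bitmap=internal).
mdadm --create /dev/md0 --level=1 --raid-devices=2 \
      --bitmap=internal --write-behind=4096 \
      /dev/ssd1 --write-mostly /dev/hdd1

# On an existing array the flag can be toggled through sysfs:
echo writemostly > /sys/block/md0/md/dev-hdd1/state
```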