On Sat, Jan 09, 2021 at 12:58:05AM +0100, Thomas Bogendoerfer wrote:
> On Fri, Jan 08, 2021 at 08:20:43PM +, Paul Cercueil wrote:
> > Hi Thomas,
> >
> > 5.11 does not boot anymore on Ingenic SoCs, I bisected it to this commit.
> >
> > Any idea what could be happening?
>
> not yet, kernel cra
On Fri, Jan 08, 2021 at 08:20:43PM +, Paul Cercueil wrote:
> Hi Thomas,
>
> 5.11 does not boot anymore on Ingenic SoCs, I bisected it to this commit.
>
> Any idea what could be happening?
not yet, kernel crash log of a Malta QEMU is below.
Thomas.
Kernel bug detected[#1]:
CPU: 0 PID: 1 Com
Hi Thomas,
5.11 does not boot anymore on Ingenic SoCs, I bisected it to this
commit.
Any idea what could be happening?
Cheers,
-Paul
On Fri, Jan 8, 2021 at 09:36, wrote:
> What happens when I poison one of the drives in the mdadm array using this
> command? Will all data come out OK?
> dd if=/dev/urandom of=/dev/sdb1 bs=1M count=100
Well, the same thing happens when your laptop is stolen or you read
"
On 1/8/21 6:30 PM, Goffredo Baroncelli wrote:
On 1/8/21 2:05 AM, Zygo Blaxell wrote:
On Thu, May 28, 2020 at 08:34:47PM +0200, Goffredo Baroncelli wrote:
[...]
I've been testing these patches for a while now. They enable an
interesting use case that can't otherwise be done safely, sanely o
On 1/8/21 2:05 AM, Zygo Blaxell wrote:
On Thu, May 28, 2020 at 08:34:47PM +0200, Goffredo Baroncelli wrote:
[...]
I've been testing these patches for a while now. They enable an
interesting use case that can't otherwise be done safely, sanely or
cheaply with btrfs.
Thanks Zygo for this fe
On Wed, Dec 16, 2020 at 11:22:04AM -0500, Josef Bacik wrote:
> Hello,
>
> A lot of these were in previous versions of the relocation error handling
> patches. I added a few since the last go around. None of these rely on
> the error handling patches, and some of them are quite important to
On Fri, Jan 08, 2021 at 02:30:52PM +, Claudius Ellsel wrote:
> Hello,
>
> currently I am slowly adding drives to my filesystem (RAID1). This process is
> incremental, since I am copying files off each drive to the btrfs filesystem
> and then adding the freed drive to it afterwards. Since RAID1 needs
The inode number cache has been removed in this dev cycle, but there's one
more leftover: we don't need to run the delayed refs again after
commit_fs_roots as stated in the comment, because btrfs_save_ino_cache
is gone since 5297199a8bca ("btrfs: remove inode number cache
feature").
Nothing else be
On Fri, Dec 18, 2020 at 02:24:20PM -0500, Josef Bacik wrote:
> I've been running a stress test that runs 20 workers in their own
> subvolume, which are running an fsstress instance with 4 threads per
> worker, which is 80 total fsstress threads. In addition to this I'm
> running balance in the bac
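A rough sketch of a comparable workload, purely as an illustration (the paths,
op counts and fsstress flags below are my own assumptions, not taken from the
report):

  # 20 subvolumes, each running one fsstress instance with 4 worker processes (80 total)
  for i in $(seq 1 20); do
      btrfs subvolume create /mnt/test/worker-$i
      fsstress -d /mnt/test/worker-$i -p 4 -n 100000 &
  done
  btrfs balance start --full-balance /mnt/test &   # balance running alongside the stress
  wait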
Greetings, my friend, I hope you are well, please reply to me
thanks,
On Thu, Jan 07, 2021 at 05:08:30PM -0500, Josef Bacik wrote:
> Commit 38d715f494f2 ("btrfs: use btrfs_start_delalloc_roots in
> shrink_delalloc") cleaned up how we do delalloc shrinking by utilizing
> some infrastructure we have in place to flush inodes that we use for
> device replace and snapshot
On Fri, Jan 08, 2021 at 02:22:00PM +, Filipe Manana wrote:
> On Thu, Jan 7, 2021 at 1:13 PM syzbot
> wrote:
> >
> > syzbot suspects this issue was fixed by commit:
> >
> > commit f30bed83426c5cb9fce6cabb3f7cc5a9d5428fcc
> > Author: Filipe Manana
> > Date: Fri Nov 13 11:24:17 2020 +
> >
On Fri, Jan 08, 2021 at 02:22:00PM +, Filipe Manana wrote:
> On Thu, Jan 7, 2021 at 1:13 PM syzbot
> wrote:
> >
> > syzbot suspects this issue was fixed by commit:
> >
> > commit f30bed83426c5cb9fce6cabb3f7cc5a9d5428fcc
> > Author: Filipe Manana
> > Date: Fri Nov 13 11:24:17 2020 +
> >
Hello,
currently I am slowly adding drives to my filesystem (RAID1). This process is
incremental, since I am copying files off each drive to the btrfs filesystem and
then adding the freed drive to it afterwards. Since RAID1 needs double the space, I
added an empty 12TB drive and also had a head start
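The per-drive step being described boils down to two btrfs commands; a minimal
sketch, assuming the filesystem is mounted at /mnt and the freshly emptied
drive is /dev/sdX (both placeholders):

  btrfs device add /dev/sdX /mnt            # add the emptied drive to the RAID1 filesystem
  btrfs balance start --full-balance /mnt   # optionally rebalance so existing chunks spread onto it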
On Thu, Jan 7, 2021 at 1:13 PM syzbot
wrote:
>
> syzbot suspects this issue was fixed by commit:
>
> commit f30bed83426c5cb9fce6cabb3f7cc5a9d5428fcc
> Author: Filipe Manana
> Date: Fri Nov 13 11:24:17 2020 +
>
> btrfs: remove unnecessary attempt to drop extent maps after adding inline
On Fri, Jan 08, 2021 at 10:17:25AM +0100, Dmitry Vyukov wrote:
> On Thu, Jan 7, 2021 at 2:11 PM syzbot
> wrote:
> >
> > syzbot suspects this issue was fixed by commit:
> >
> > commit f30bed83426c5cb9fce6cabb3f7cc5a9d5428fcc
> > Author: Filipe Manana
> > Date: Fri Nov 13 11:24:17 2020 +
> >
On Wed, Jan 06, 2021 at 03:08:15PM +0800, Anand Jain wrote:
> In the meantime, since I sent the base patch below [1], the block
> layer commit 0d02129e76ed (block: merge struct block_device and
> struct hd_struct) has changed the first argument of the function
> part_stat_read_all() to stru
On Fri, Jan 08, 2021 at 09:36:13AM +0100, wrote:
>
> --- Original message ---
> From: Andrea Gelmini
> Date: 08.01.2021 09:16:26
> To: cedric.dew...@eclipso.eu
> Subject: Re: Raid1 of a slow hdd and a fast(er) SSD, how to prioritize the
> SSD?
>
> On Tue, Jan 5, 2021 at 07
On 08.01.2021 10:56, cedric.dew...@eclipso.eu wrote:
> I have done a test where I filled up an entire btrfs raid 1 filesystem as a
> normal user. Then I simulated a failing drive. It turned out I was unable to
> replace the drive, as raid1 needs free space on both drives. See this mail for
> details
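For reference, the replace operation attempted in such a test is normally
issued like this (device id, target device and mount point are hypothetical):

  btrfs replace start 2 /dev/sdc /mnt   # replace the member with devid 2 by /dev/sdc
  btrfs replace status /mnt             # monitor the progress of the running replace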
--- Original message ---
From: Andrea Gelmini
Date: 08.01.2021 09:16:26
To: cedric.dew...@eclipso.eu
Subject: Re: Raid1 of a slow hdd and a fast(er) SSD, how to prioritize the
SSD?
On Tue, Jan 5, 2021 at 07:44,
wrote:
>
> Is there a way to tell btrfs to leave the slow
On Tue, Jan 5, 2021 at 07:44, wrote:
>
> Is there a way to tell btrfs to leave the slow hdd alone, and to prioritize
> the SSD?
You can use mdadm to do this (I've been using this feature for years in
setups where I have to fall back on USB disks for whatever reason).
From the manpage:
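Presumably the option being quoted is --write-mostly, which marks one RAID1
member so that normal reads are served from the other, faster device; a minimal
sketch with hypothetical device names (/dev/sda1 = SSD, /dev/sdb1 = HDD):

  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 --write-mostly /dev/sdb1
  # or mark a member of an already-running array as write-mostly via sysfs:
  echo writemostly > /sys/block/md0/md/dev-sdb1/state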