On Wed, 2020-12-09 at 07:13 +0100, Mike Galbraith wrote:
> On Wed, 2020-12-09 at 00:26 +0100, Vitaly Wool wrote:
> > Hi Mike,
> >
> > On 2020-12-07 16:41, Mike Galbraith wrote:
> > > On Mon, 2020-12-07 at 16:21 +0100, Vitaly Wool wrote:
> > > > On Mon, Dec 7, 2020 at 1:34 PM Mike Galbraith wrote:
> > > > >
> > > > > Unfortunately, that made zero difference.
> > > >
> > > > Okay, I suggest that you submit the patch that changes read_lock() to
> > > > write_lock() in __release_z3fold_page() and I'll ack it then.
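For context on the suggestion above, here is a minimal sketch of that change, assuming the 5.10-era layout of __release_z3fold_page() in mm/z3fold.c; the surrounding lines are reconstructed from memory and are only illustrative, not a verbatim patch from the thread.

```
	/* If there are no foreign handles, free the handles array */
	write_lock(&zhdr->slots->lock);		/* was: read_lock() */
	for (i = 0; i <= BUDDY_MASK; i++) {
		if (zhdr->slots->slot[i]) {
			is_free = false;
			break;
		}
	}
	write_unlock(&zhdr->slots->lock);	/* was: read_unlock() */

	if (is_free)
		kmem_cache_free(pool->c_handle, zhdr->slots);
```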
On Mon, Dec 7, 2020 at 1:34 PM Mike Galbraith wrote:
> On Mon, 2020-12-07 at 12:52 +0100, Vitaly Wool wrote:
> >
> > Thanks. This trace beats me because I don't quite get how this could
> > have happened.
>
> I swear there's a mythical creature loose in there somewhere ;-)
> Everything looks just peachy up to the instant it goes boom, then you
> find in the …
On Mon, Dec 7, 2020 at 3:18 AM Mike Galbraith wrote:
> On Mon, 2020-12-07 at 02:05 +0100, Vitaly Wool wrote:
> >
> > Could you please try the following patch in your setup:
>
> crash> gdb list *z3fold_zpool_free+0x527
> 0xc0e14487 is in z3fold_zpool_free (mm/z3fold.c:341).
> 336             if (slots->slot[i]) {
> 337             …
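For readers without the source at hand: the faulting line sits in the loop that scans a page's handle array ("slots") when a handle is freed. Below is a minimal sketch of that pattern; the struct follows the 5.10-era z3fold layout, while slots_empty() is a hypothetical helper added only for illustration.

```
struct z3fold_buddy_slots {
	unsigned long slot[4];	/* one encoded handle per buddy */
	unsigned long pool;	/* pool back-pointer plus flag bits */
	rwlock_t lock;		/* protects slot[] */
};

/* hypothetical helper: the "if (slots->slot[i])" scan the crash points at */
static bool slots_empty(struct z3fold_buddy_slots *slots)
{
	int i;

	for (i = 0; i < 4; i++) {
		if (slots->slot[i])
			return false;
	}
	return true;
}
```

Once every slot is clear the slots object itself is freed, which is presumably why a stale handle that still points at it surfaces as a use-after-free in this path.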
On Thu, 2020-12-03 at 14:39 +0100, Sebastian Andrzej Siewior wrote:
> On 2020-12-03 09:18:21 [+0100], Mike Galbraith wrote:
> > On Thu, 2020-12-03 at 03:16 +0100, Mike Galbraith wrote:
> > > On Wed, 2020-12-02 at 23:08 +0100, Sebastian Andrzej Siewior wrote:
> > >
> > > Looks like...
> > >
> > > d8f117abb380 z3fold: fix use-after-free when freeing handles
> > >
> > > ...wasn't completely effective...
> >
> > The top two hunks seem to have rendered the thing …
On Wed, 2020-12-02 at 23:08 +0100, Sebastian Andrzej Siewior wrote:
> On 2020-12-02 03:30:27 [+0100], Mike Galbraith wrote:
> > > In an LTP install, ./runltp -f mm. Shortly after box starts swapping
> > > insanely, it explodes quite reliably here with either z3fold or
> > > zsmalloc.. but not with zbud.
> >
> > What I'm seeing is the below. rt_mutex_has_waiters() says yup we have
> > a waiter, rt_mutex_top_waiter() emits the missing cached leftmost, and …
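To make that observation easier to follow, this is roughly what the two rt_mutex helpers look like in a 5.10-era tree (simplified from memory, so details may differ). The symptom described above is that the waiter rbtree claims to be non-empty while its cached leftmost node is gone, i.e. the lock structure itself has been scribbled on.

```
static inline int rt_mutex_has_waiters(struct rt_mutex *lock)
{
	/* the waiter tree (an rb_root_cached) is non-empty */
	return !RB_EMPTY_ROOT(&lock->waiters.rb_root);
}

static inline struct rt_mutex_waiter *rt_mutex_top_waiter(struct rt_mutex *lock)
{
	/* highest-priority waiter == cached leftmost node of that tree */
	struct rb_node *leftmost = rb_first_cached(&lock->waiters);

	return leftmost ? rb_entry(leftmost, struct rt_mutex_waiter, tree_entry)
			: NULL;
}
```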
On Mon, 2020-11-30 at 17:32 +0100, Sebastian Andrzej Siewior wrote:
> On 2020-11-30 17:27:17 [+0100], Mike Galbraith wrote:
> > > This just passed. It however killed my git-gc task which wasn't done.
> > > Let me try tomorrow with your config.
> >
> > FYI, I tried 5.9-rt (after fixing 5.9.11), it exploded in the same way,
> > so (as expected) it's not some devel tree oopsie.
On Mon, 2020-11-30 at 17:03 +0100, Sebastian Andrzej Siewior wrote:
> On 2020-11-30 16:01:11 [+0100], Mike Galbraith wrote:
> > On Mon, 2020-11-30 at 15:52 +0100, Sebastian Andrzej Siewior wrote:
> > > How do you test this? I triggered a few oom-killer and I have here git
> > > gc running for a few hours now… Everything is fine.
> >
> > In an LTP install, ./runltp -f mm. Shortly after box starts swapping
> > insanely, it explodes quite reliably here with either z3fold or
> > zsmalloc.. but not with zbud.
On 2020-11-30 15:42:46 [+0100], Mike Galbraith wrote:
> This explodes in write_unlock() as mine did. Oleksandr's local_lock()
> variant explodes in the lock he added. (ew, corruption)
>
> I think I'll try a stable-rt tree. This master tree _should_ be fine
> given it seems to work just peachy
On Mon, 2020-11-30 at 14:20 +0100, Sebastian Andrzej Siewior wrote:
> On 2020-11-29 12:41:14 [+0100], Mike Galbraith wrote:
> > On Sun, 2020-11-29 at 12:29 +0100, Oleksandr Natalenko wrote:
> > >
> > > Ummm so do compressors explode under non-rt kernel in your tests as
> > > well, or it is just -rt that triggers this?

On 2020-11-30 14:53:22 [+0100], Oleksandr Natalenko wrote:
> > diff --git a/mm/zswap.c b/mm/zswap.c
> > index 78a20f7b00f2c..b24f761b9241c 100644
> > --- a/mm/zswap.c
> > +++ b/mm/zswap.c
> > @@ -394,7 +394,9 @@ struct zswap_comp {
> >  	u8 *dstmem;
> >  };
> >
> > -static …
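The quoted hunk is truncated, so for context here is a sketch of the direction such a change usually takes on RT: the per-CPU zswap compression buffer gets guarded by a local_lock instead of being reached through a bare preemption-disabling accessor. Struct and field names are assumed from 5.10-era mm/zswap.c, the two helpers are hypothetical, and the actual patch in the thread may differ.

```
#include <linux/local_lock.h>
#include <linux/percpu.h>
#include <linux/types.h>

struct zswap_comp {
	/* Used for per-CPU dstmem and tfm */
	local_lock_t lock;
	u8 *dstmem;
};

static DEFINE_PER_CPU(struct zswap_comp, zswap_comp) = {
	.lock = INIT_LOCAL_LOCK(lock),
};

/* hypothetical helpers showing the intended usage in the store path */
static u8 *zswap_get_dstmem(void)
{
	local_lock(&zswap_comp.lock);		/* safe on both RT and !RT */
	return this_cpu_ptr(&zswap_comp)->dstmem;
}

static void zswap_put_dstmem(void)
{
	local_unlock(&zswap_comp.lock);
}
```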
On 2020-11-29 12:41:14 [+0100], Mike Galbraith wrote:
> On Sun, 2020-11-29 at 12:29 +0100, Oleksandr Natalenko wrote:
> >
> > Ummm so do compressors explode under non-rt kernel in your tests as
> > well, or it is just -rt that triggers this?
>
> I only tested a non-rt kernel with z3fold, which worked just fine.
>
> 	-Mike
On Sun, 2020-11-29 at 07:41 +0100, Mike Galbraith wrote:
> On Sat, 2020-11-28 at 15:27 +0100, Oleksandr Natalenko wrote:
> >
> > > > Shouldn't the list manipulation be protected with
> > > > local_lock+this_cpu_ptr instead of get_cpu_ptr+spin_lock?
> >
> > Totally untested:
>
> Hrm, the thing doesn't seem to care deeply about preemption being
> disabled, so adding …
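For readers not steeped in RT locking, here is a minimal sketch of the two patterns that question contrasts. The struct and function names are invented for illustration and are not z3fold's; only the locking pattern is the point.

```
#include <linux/list.h>
#include <linux/local_lock.h>
#include <linux/percpu.h>
#include <linux/spinlock.h>

/*
 * Pattern 1: get_cpu_ptr() + spin_lock().  get_cpu_ptr() disables
 * preemption; on PREEMPT_RT a spinlock_t is a sleeping lock, so taking
 * it inside that non-preemptible region is what produces
 * "BUG: scheduling while atomic".
 */
struct pcpu_lists {
	spinlock_t lock;		/* assumed initialized at boot */
	struct list_head list;		/* assumed initialized at boot */
};
static DEFINE_PER_CPU(struct pcpu_lists, pcpu_lists);

static void add_entry_atomic(struct list_head *entry)
{
	struct pcpu_lists *l = get_cpu_ptr(&pcpu_lists);	/* preempt_disable() */

	spin_lock(&l->lock);		/* may sleep on RT -> bug */
	list_add(entry, &l->list);
	spin_unlock(&l->lock);
	put_cpu_ptr(&pcpu_lists);	/* preempt_enable() */
}

/*
 * Pattern 2: local_lock + this_cpu_ptr.  The local lock keeps the task
 * on this CPU's data without an open-coded preempt_disable(), and on RT
 * it is itself a per-CPU sleeping lock, so nothing sleeps while atomic.
 */
struct pcpu_lists_ll {
	local_lock_t lock;
	struct list_head list;		/* assumed initialized at boot */
};
static DEFINE_PER_CPU(struct pcpu_lists_ll, pcpu_lists_ll) = {
	.lock = INIT_LOCAL_LOCK(lock),
};

static void add_entry_rt_friendly(struct list_head *entry)
{
	local_lock(&pcpu_lists_ll.lock);
	list_add(entry, &this_cpu_ptr(&pcpu_lists_ll)->list);
	local_unlock(&pcpu_lists_ll.lock);
}
```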
On Sat, Nov 28, 2020 at 03:05:24PM +0100, Oleksandr Natalenko wrote:
> Hi.
>
> While running v5.10-rc5-rt11 I bumped into the following:
>
> ```
> BUG: scheduling while atomic: git/18695/0x0002
> Preemption disabled at:
> [] z3fold_zpool_malloc+0x463/0x6e0
> …
> Call Trace:
> …
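Tying the report back to the pattern sketched earlier: "Preemption disabled at: z3fold_zpool_malloc" fits the get_cpu_ptr() usage in the 5.10-era z3fold allocation path, which disables preemption and then takes spinlocks that are sleeping locks on PREEMPT_RT. A heavily condensed sketch follows; the struct is a stand-in, not the real struct z3fold_pool, and the function body only outlines the shape of the real code.

```
struct z3fold_pool_sketch {
	spinlock_t lock;			/* pool lock, sleeping on RT */
	struct list_head __percpu *unbuddied;	/* per-CPU partial-page lists */
};

static void z3fold_alloc_sketch(struct z3fold_pool_sketch *pool)
{
	struct list_head *unbuddied;

	unbuddied = get_cpu_ptr(pool->unbuddied);	/* preempt_disable() here */

	/* ... pick a partially used page off this CPU's unbuddied list ... */
	spin_lock(&pool->lock);		/* sleeping lock taken while atomic on RT */
	/* ... */
	spin_unlock(&pool->lock);

	put_cpu_ptr(pool->unbuddied);			/* preempt_enable() */
}
```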