On 11/5/18 11:55 AM, Juri Lelli wrote:
> On 02/11/18 11:00, Daniel Bristot de Oliveira wrote:
>> On 11/1/18 6:55 AM, Juri Lelli wrote:
I meant, I am not against the/a fix, i just think that... it is more
complicated
that it seems.
For example: Let's assume that we have a
On 30/10/18 12:12, Juri Lelli wrote:
> On 30/10/18 11:45, Peter Zijlstra wrote:
>
> [...]
>
> > Hurm.. right. We knew of this issue back when we did it.
> > I suppose now it hurts and we need to figure something out.
> >
> > By virtue of being a real-time class, we do indeed need to have
On 02/11/18 11:00, Daniel Bristot de Oliveira wrote:
> On 11/1/18 6:55 AM, Juri Lelli wrote:
> >> I meant, I am not against the/a fix, i just think that... it is more
> >> complicated
> >> that it seems.
> >>
> >> For example: Let's assume that we have a non-rt bad thread A in CPU 0
> >>
On 11/1/18 6:55 AM, Juri Lelli wrote:
>> I meant, I am not against the/a fix, i just think that... it is more
>> complicated
>> that it seems.
>>
>> For example: Let's assume that we have a non-rt bad thread A in CPU 0
>> generating
>> IPIs because of static key update, and a good dl thread B in
On 31/10/18 18:58, Daniel Bristot de Oliveira wrote:
> On 10/31/18 5:40 PM, Juri Lelli wrote:
> > On 31/10/18 17:18, Daniel Bristot de Oliveira wrote:
> >> On 10/30/18 12:08 PM, luca abeni wrote:
> >>> Hi Peter,
> >>>
> >>> On Tue, 30 Oct 2018 11:45:54 +0100
> >>> Peter Zijlstra wrote:
> >>>
On 10/31/18 5:40 PM, Juri Lelli wrote:
> On 31/10/18 17:18, Daniel Bristot de Oliveira wrote:
>> On 10/30/18 12:08 PM, luca abeni wrote:
>>> Hi Peter,
>>>
>>> On Tue, 30 Oct 2018 11:45:54 +0100
>>> Peter Zijlstra wrote:
>>> [...]
> 2. This is related to perf_event_open syscall reproducer
On Wed, Oct 31, 2018 at 05:40:10PM +0100, Juri Lelli wrote:
> I'm seeing something along the lines of what Peter suggested as a last
> resort measure we probably still need to put in place.
Yes, you certainly hope to never hit that, and should not in a proper
setup. But we need something to keep
On Wed, Oct 31, 2018 at 05:18:00PM +0100, Daniel Bristot de Oliveira wrote:
> Brazilian part of the Ph.D we are dealing with probabilistic worst case
> execution time, and to be able to use probabilistic methods, we need to remove
> the noise of the IRQs in the execution time [1]. So, IMHO, using
On 31/10/18 17:18, Daniel Bristot de Oliveira wrote:
> On 10/30/18 12:08 PM, luca abeni wrote:
> > Hi Peter,
> >
> > On Tue, 30 Oct 2018 11:45:54 +0100
> > Peter Zijlstra wrote:
> > [...]
> >>> 2. This is related to perf_event_open syscall reproducer does
> >>> before becoming DEADLINE and
On 10/30/18 12:08 PM, luca abeni wrote:
> Hi Peter,
>
> On Tue, 30 Oct 2018 11:45:54 +0100
> Peter Zijlstra wrote:
> [...]
>>> 2. This is related to perf_event_open syscall reproducer does
>>> before becoming DEADLINE and entering the busy loop. Enabling of
>>> perf swevents generates lot of
On 30/10/18 11:45, Peter Zijlstra wrote:
[...]
> Hurm.. right. We knew of this issue back when we did it.
> I suppose now it hurts and we need to figure something out.
>
> By virtue of being a real-time class, we do indeed need to have deadline
> on the wall-clock. But if we then don't account
Hi Peter,
On Tue, 30 Oct 2018 11:45:54 +0100
Peter Zijlstra wrote:
[...]
> > 2. This is related to perf_event_open syscall reproducer does
> > before becoming DEADLINE and entering the busy loop. Enabling of
> > perf swevents generates lot of hrtimers load that happens in the
> > reproducer
On Wed, Oct 24, 2018 at 02:03:35PM +0200, Juri Lelli wrote:
> Pain points:
>
> 1. Granularity of enforcement (at each tick) is huge compared with
> the task runtime. This makes starting the replenishment timer,
> when runtime is depleted, always to fail (because old deadline
> is way
On 27/10/18 12:16, Dmitry Vyukov wrote:
> On Wed, Oct 24, 2018 at 1:03 PM, Juri Lelli wrote:
> >
> > On 19/10/18 22:50, luca abeni wrote:
> > > On Fri, 19 Oct 2018 13:39:42 +0200
> > > Peter Zijlstra wrote:
> > >
> > > > On Thu, Oct 18, 2018 at 01:08:11PM +0200, luca abeni wrote:
> > > > > Ok, I
On Wed, Oct 24, 2018 at 1:03 PM, Juri Lelli wrote:
>
> On 19/10/18 22:50, luca abeni wrote:
> > On Fri, 19 Oct 2018 13:39:42 +0200
> > Peter Zijlstra wrote:
> >
> > > On Thu, Oct 18, 2018 at 01:08:11PM +0200, luca abeni wrote:
> > > > Ok, I see the issue now: the problem is that the "while
> > >
On 19/10/18 22:50, luca abeni wrote:
> On Fri, 19 Oct 2018 13:39:42 +0200
> Peter Zijlstra wrote:
>
> > On Thu, Oct 18, 2018 at 01:08:11PM +0200, luca abeni wrote:
> > > Ok, I see the issue now: the problem is that the "while
> > > (dl_se->runtime <= 0)" loop is executed at replenishment time,
On Fri, 19 Oct 2018 13:39:42 +0200
Peter Zijlstra wrote:
> On Thu, Oct 18, 2018 at 01:08:11PM +0200, luca abeni wrote:
> > Ok, I see the issue now: the problem is that the "while
> > (dl_se->runtime <= 0)" loop is executed at replenishment time, but
> > the deadline should be postponed at
On Thu, Oct 18, 2018 at 12:33:32PM +0200, luca abeni wrote:
> Hi Peter,
>
> On Thu, 18 Oct 2018 11:48:50 +0200
> Peter Zijlstra wrote:
> [...]
> > > So, I tend to think that we might want to play safe and put some
> > > higher minimum value for dl_runtime (it's currently at 1ULL <<
> > >
On Thu, Oct 18, 2018 at 01:08:11PM +0200, luca abeni wrote:
> Ok, I see the issue now: the problem is that the "while (dl_se->runtime
> <= 0)" loop is executed at replenishment time, but the deadline should
> be postponed at enforcement time.
>
> I mean: in update_curr_dl() we do:
>
Hi Juri,
On Thu, 18 Oct 2018 14:21:42 +0200
Juri Lelli wrote:
[...]
> > > > I missed the original emails, but maybe the issue is that the
> > > > task blocks before the tick, and when it wakes up again
> > > > something goes wrong with the deadline and runtime assignment?
> > > > (maybe because
On 18/10/18 13:08, luca abeni wrote:
> On Thu, 18 Oct 2018 12:47:13 +0200
> Juri Lelli wrote:
>
> > Hi,
> >
> > On 18/10/18 12:23, luca abeni wrote:
> > > Hi Juri,
> > >
> > > On Thu, 18 Oct 2018 10:28:38 +0200
> > > Juri Lelli wrote:
> > > [...]
> > > > struct sched_attr {
> > > >
On Thu, 18 Oct 2018 12:47:13 +0200
Juri Lelli wrote:
> Hi,
>
> On 18/10/18 12:23, luca abeni wrote:
> > Hi Juri,
> >
> > On Thu, 18 Oct 2018 10:28:38 +0200
> > Juri Lelli wrote:
> > [...]
> > > struct sched_attr {
> > > .size = 0,
> > > .policy = 6,
> > > .flags=
Hi,
On 18/10/18 12:23, luca abeni wrote:
> Hi Juri,
>
> On Thu, 18 Oct 2018 10:28:38 +0200
> Juri Lelli wrote:
> [...]
> > struct sched_attr {
> > .size = 0,
> > .policy = 6,
> > .flags = 0,
> > .nice = 0,
> > .priority = 0,
> > .runtime= 0x9917,
> >
Hi Juri,
On Thu, 18 Oct 2018 12:10:08 +0200
Juri Lelli wrote:
[...]
> > Yes, a HZ related limit sounds like something we'd want. But if
> > we're going to do a minimum sysctl, we should also consider adding
> > a maximum, if you set a massive period/deadline, you can, even with
> > a relatively
Hi Peter,
On Thu, 18 Oct 2018 11:48:50 +0200
Peter Zijlstra wrote:
[...]
> > So, I tend to think that we might want to play safe and put some
> > higher minimum value for dl_runtime (it's currently at 1ULL <<
> > DL_SCALE). Guess the problem is to pick a reasonable value, though.
> > Maybe link
Hi Juri,
On Thu, 18 Oct 2018 10:28:38 +0200
Juri Lelli wrote:
[...]
> struct sched_attr {
> .size = 0,
> .policy = 6,
> .flags= 0,
> .nice = 0,
> .priority = 0,
> .runtime = 0x9917,
> .deadline = 0x,
> .period = 0,
> }
>
> So, we seem to be
On 18/10/18 11:48, Peter Zijlstra wrote:
> On Thu, Oct 18, 2018 at 10:28:38AM +0200, Juri Lelli wrote:
>
> > Another side problem seems also to be that with such tiny parameters we
> > spend lot of time in the while (dl_se->runtime <= 0) loop of replenish_dl_
> > entity() (actually uselessly, as
On Thu, Oct 18, 2018 at 10:28:38AM +0200, Juri Lelli wrote:
> Another side problem seems also to be that with such tiny parameters we
> spend lot of time in the while (dl_se->runtime <= 0) loop of replenish_dl_
> entity() (actually uselessly, as deadline is most probably going to
> still be in
On 16/10/18 16:03, Peter Zijlstra wrote:
> On Tue, Oct 16, 2018 at 03:24:06PM +0200, Thomas Gleixner wrote:
> > It does reproduce here but with a kworker stall. Looking at the reproducer:
> >
> > *(uint32_t*)0x2000 = 0;
> > *(uint32_t*)0x2004 = 6;
> > *(uint64_t*)0x2008 = 0;
> >
On 16/10/18 16:45, Thomas Gleixner wrote:
> On Tue, 16 Oct 2018, Juri Lelli wrote:
> > On 16/10/18 16:03, Peter Zijlstra wrote:
> > > On Tue, Oct 16, 2018 at 03:24:06PM +0200, Thomas Gleixner wrote:
> > > > It does reproduce here but with a kworker stall. Looking at the
> > > > reproducer:
> > >
On Tue, 16 Oct 2018, Juri Lelli wrote:
> On 16/10/18 16:03, Peter Zijlstra wrote:
> > On Tue, Oct 16, 2018 at 03:24:06PM +0200, Thomas Gleixner wrote:
> > > It does reproduce here but with a kworker stall. Looking at the
> > > reproducer:
> > >
> > > *(uint32_t*)0x2000 = 0;
> > >
On Tue, Oct 16, 2018 at 03:24:06PM +0200, Thomas Gleixner wrote:
> It does reproduce here but with a kworker stall. Looking at the reproducer:
>
> *(uint32_t*)0x2000 = 0;
> *(uint32_t*)0x2004 = 6;
> *(uint64_t*)0x2008 = 0;
> *(uint32_t*)0x2010 = 0;
>
On Sat, 13 Oct 2018, syzbot wrote:
> syzbot found the following crash on:
>
> HEAD commit: 6b3944e42e2e afs: Fix cell proc list
> git tree: upstream
> console output: https://syzkaller.appspot.com/x/log.txt?x=1545a47940
> kernel config:
Hello,
syzbot found the following crash on:
HEAD commit: 6b3944e42e2e afs: Fix cell proc list
git tree: upstream
console output: https://syzkaller.appspot.com/x/log.txt?x=1545a47940
kernel config: https://syzkaller.appspot.com/x/.config?x=88e9a8a39dc0be2d
dashboard link: