[ 408.620290] BUG: scheduling while atomic: swapper/16/0/0x0100
[ 408.620325] Modules linked in: sctp bridge 8021q garp stp mrp llc
psmouse dlm configfs aoe ipmi_ssif amd64_edac_mod edac_mce_amd
amd_energy kvm_amd kvm irqbypass ghash_clmulni_intel aesni_intel lib
On Wed, 2020-12-09 at 07:13 +0100, Mike Galbraith wrote:
> On Wed, 2020-12-09 at 00:26 +0100, Vitaly Wool wrote:
> > Hi Mike,
> >
> > On 2020-12-07 16:41, Mike Galbraith wrote:
> > > On Mon, 2020-12-07 at 16:21 +0100, Vitaly Wool wrote:
> > >> On Mon, Dec 7, 2020 at 1:34 PM Mike Galbraith wrote:
>
On Wed, 2020-12-09 at 00:26 +0100, Vitaly Wool wrote:
> Hi Mike,
>
> On 2020-12-07 16:41, Mike Galbraith wrote:
> > On Mon, 2020-12-07 at 16:21 +0100, Vitaly Wool wrote:
> >> On Mon, Dec 7, 2020 at 1:34 PM Mike Galbraith wrote:
> >>>
> >>
> >>> Unfortunately, that made zero difference.
> >>
> >> O
Hi Mike,
On 2020-12-07 16:41, Mike Galbraith wrote:
On Mon, 2020-12-07 at 16:21 +0100, Vitaly Wool wrote:
On Mon, Dec 7, 2020 at 1:34 PM Mike Galbraith wrote:
Unfortunately, that made zero difference.
Okay, I suggest that you submit the patch that changes read_lock() to
write_lock() in
On Mon, 2020-12-07 at 16:21 +0100, Vitaly Wool wrote:
> On Mon, Dec 7, 2020 at 1:34 PM Mike Galbraith wrote:
> >
>
> > Unfortunately, that made zero difference.
>
> Okay, I suggest that you submit the patch that changes read_lock() to
> write_lock() in __release_z3fold_page() and I'll ack it then.
On 2020-12-07 16:21:20 [+0100], Vitaly Wool wrote:
> On Mon, Dec 7, 2020 at 1:34 PM Mike Galbraith wrote:
> >
> > Unfortunately, that made zero difference.
>
> Okay, I suggest that you submit the patch that changes read_lock() to
> write_lock() in __release_z3fold_page() and I'll ack it then.
> I
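The change being acked above is a one-line locking fix: take slots->lock exclusively in __release_z3fold_page() instead of shared. Below is a minimal sketch of that shape, reconstructed from the gdb listing quoted later in this thread (mm/z3fold.c:336-341); the surrounding code is approximated rather than copied from the tree, and only the read_lock() to write_lock() swap is what is actually being proposed here.

```c
/*
 * Sketch only: the loop shape follows the gdb listing in this thread
 * (mm/z3fold.c:336-341); the rest of the function body is elided and
 * the helper/field names are approximations.
 */
static void __release_z3fold_page(struct z3fold_header *zhdr, bool locked)
{
	struct z3fold_buddy_slots *slots = zhdr->slots;
	bool is_free = true;
	int i;

	/* ... stale-page bookkeeping and LRU removal elided ... */

	write_lock(&slots->lock);	/* was: read_lock(&slots->lock) */
	for (i = 0; i <= BUDDY_MASK; i++) {
		if (slots->slot[i]) {
			is_free = false;	/* another handle still lives here */
			break;
		}
	}
	write_unlock(&slots->lock);

	if (is_free)			/* no live handles: free the slots object */
		kmem_cache_free(zhdr_to_pool(zhdr)->c_handle, slots);
}
```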
On Mon, Dec 7, 2020 at 1:34 PM Mike Galbraith wrote:
>
> On Mon, 2020-12-07 at 12:52 +0100, Vitaly Wool wrote:
> >
> > Thanks. This trace beats me because I don't quite get how this could
> > have happened.
>
> I swear there's a mythical creature loose in there somewhere ;-)
> Everything looks jus
On Mon, 2020-12-07 at 12:52 +0100, Vitaly Wool wrote:
>
> Thanks. This trace beats me because I don't quite get how this could
> have happened.
I swear there's a mythical creature loose in there somewhere ;-)
Everything looks just peachy up to the instant it goes boom, then you
find in the wreckag
On Mon, Dec 7, 2020 at 3:18 AM Mike Galbraith wrote:
>
> On Mon, 2020-12-07 at 02:05 +0100, Vitaly Wool wrote:
> >
> > Could you please try the following patch in your setup:
>
> crash> gdb list *z3fold_zpool_free+0x527
> 0xc0e14487 is in z3fold_zpool_free (mm/z3fold.c:341).
> 336
On Mon, 2020-12-07 at 02:05 +0100, Vitaly Wool wrote:
>
> Could you please try the following patch in your setup:
crash> gdb list *z3fold_zpool_free+0x527
0xc0e14487 is in z3fold_zpool_free (mm/z3fold.c:341).
336 if (slots->slot[i]) {
337 is_
On Thu, 2020-12-03 at 14:39 +0100, Sebastian Andrzej Siewior wrote:
> On 2020-12-03 09:18:21 [+0100], Mike Galbraith wrote:
> > On Thu, 2020-12-03 at 03:16 +0100, Mike Galbraith wrote:
> > > On Wed, 2020-12-02 at 23:08 +0100, Sebastian Andrzej Siewior wrote:
> > > Looks like...
> > >
> > > d8f117ab
On Thu, Dec 3, 2020 at 2:39 PM Sebastian Andrzej Siewior
wrote:
>
> On 2020-12-03 09:18:21 [+0100], Mike Galbraith wrote:
> > On Thu, 2020-12-03 at 03:16 +0100, Mike Galbraith wrote:
> > > On Wed, 2020-12-02 at 23:08 +0100, Sebastian Andrzej Siewior wrote:
> > > Looks like...
> > >
> > > d8f117abb
On 2020-12-03 09:18:21 [+0100], Mike Galbraith wrote:
> On Thu, 2020-12-03 at 03:16 +0100, Mike Galbraith wrote:
> > On Wed, 2020-12-02 at 23:08 +0100, Sebastian Andrzej Siewior wrote:
> > Looks like...
> >
> > d8f117abb380 z3fold: fix use-after-free when freeing handles
> >
> > ...wasn't completel
On Thu, 2020-12-03 at 03:16 +0100, Mike Galbraith wrote:
> On Wed, 2020-12-02 at 23:08 +0100, Sebastian Andrzej Siewior wrote:
> Looks like...
>
> d8f117abb380 z3fold: fix use-after-free when freeing handles
>
> ...wasn't completely effective...
The top two hunks seem to have rendered the thing RT
On Wed, 2020-12-02 at 23:08 +0100, Sebastian Andrzej Siewior wrote:
> On 2020-12-02 03:30:27 [+0100], Mike Galbraith wrote:
>
> > What I'm seeing is the below. rt_mutex_has_waiters() says yup we have
> > a waiter, rt_mutex_top_waiter() emits the missing cached leftmost, and
> > rt_mutex_dequeue_pi
On 2020-12-02 03:30:27 [+0100], Mike Galbraith wrote:
> > > In an LTP install, ./runltp -f mm. Shortly after box starts swapping
> > > insanely, it explodes quite reliably here with either z3fold or
> > > zsmalloc.. but not with zbud.
>
> What I'm seeing is the below. rt_mutex_has_waiters() says
On Mon, 2020-11-30 at 17:03 +0100, Sebastian Andrzej Siewior wrote:
> On 2020-11-30 16:01:11 [+0100], Mike Galbraith wrote:
> > On Mon, 2020-11-30 at 15:52 +0100, Sebastian Andrzej Siewior wrote:
> > > How do you test this? I triggered a few oom-killer and I have here git
> > > gc running for a few
On Mon, 2020-11-30 at 17:32 +0100, Sebastian Andrzej Siewior wrote:
> On 2020-11-30 17:27:17 [+0100], Mike Galbraith wrote:
> > > This just passed. It however killed my git-gc task which wasn't done.
> > > Let me try tomorrow with your config.
> >
> > FYI, I tried 5.9-rt (after fixing 5.9.11), it e
On Mon, 2020-11-30 at 17:27 +0100, Mike Galbraith wrote:
> On Mon, 2020-11-30 at 17:03 +0100, Sebastian Andrzej Siewior wrote:
> > On 2020-11-30 16:01:11 [+0100], Mike Galbraith wrote:
> > > On Mon, 2020-11-30 at 15:52 +0100, Sebastian Andrzej Siewior wrote:
> > > > How do you test this? I triggere
On 2020-11-30 17:27:17 [+0100], Mike Galbraith wrote:
> > This just passed. It however killed my git-gc task which wasn't done.
> > Let me try tomorrow with your config.
>
> FYI, I tried 5.9-rt (after fixing 5.9.11), it exploded in the same way,
> so (as expected) it's not some devel tree oopsie.
On 2020-11-30 16:01:11 [+0100], Mike Galbraith wrote:
> On Mon, 2020-11-30 at 15:52 +0100, Sebastian Andrzej Siewior wrote:
> > How do you test this? I triggered a few oom-killer and I have here git
> > gc running for a few hours now… Everything is fine.
>
> In an LTP install, ./runltp -f mm. Sho
On Mon, 2020-11-30 at 16:01 +0100, Mike Galbraith wrote:
> On Mon, 2020-11-30 at 15:52 +0100, Sebastian Andrzej Siewior wrote:
> > How do you test this? I triggered a few oom-killer and I have here git
> > gc running for a few hours now… Everything is fine.
>
> In an LTP install, ./runltp -f mm. S
On Mon, 2020-11-30 at 15:52 +0100, Sebastian Andrzej Siewior wrote:
> How do you test this? I triggered a few oom-killer and I have here git
> gc running for a few hours now… Everything is fine.
In an LTP install, ./runltp -f mm. Shortly after box starts swapping
insanely, it explodes quite relia
On 2020-11-30 15:42:46 [+0100], Mike Galbraith wrote:
> This explodes in write_unlock() as mine did. Oleksandr's local_lock()
> variant explodes in the lock he added. (ew, corruption)
>
> I think I'll try a stable-rt tree. This master tree _should_ be fine
> given it seems to work just peachy
On Mon, 2020-11-30 at 14:20 +0100, Sebastian Andrzej Siewior wrote:
> On 2020-11-29 12:41:14 [+0100], Mike Galbraith wrote:
> > On Sun, 2020-11-29 at 12:29 +0100, Oleksandr Natalenko wrote:
> > >
> > > Ummm so do compressors explode under non-rt kernel in your tests as
> > > well, or it is just -rt
On 2020-11-30 14:53:22 [+0100], Oleksandr Natalenko wrote:
> > diff --git a/mm/zswap.c b/mm/zswap.c
> > index 78a20f7b00f2c..b24f761b9241c 100644
> > --- a/mm/zswap.c
> > +++ b/mm/zswap.c
> > @@ -394,7 +394,9 @@ struct zswap_comp {
> > u8 *dstmem;
> > };
> >
> > -static DEFINE_PER_CPU(struct
On Mon, Nov 30, 2020 at 02:20:14PM +0100, Sebastian Andrzej Siewior wrote:
> On 2020-11-29 12:41:14 [+0100], Mike Galbraith wrote:
> > On Sun, 2020-11-29 at 12:29 +0100, Oleksandr Natalenko wrote:
> > >
> > > Ummm so do compressors explode under non-rt kernel in your tests as
> > > well, or it is j
On 2020-11-29 12:41:14 [+0100], Mike Galbraith wrote:
> On Sun, 2020-11-29 at 12:29 +0100, Oleksandr Natalenko wrote:
> >
> > Ummm so do compressors explode under non-rt kernel in your tests as
> > well, or it is just -rt that triggers this?
>
> I only tested a non-rt kernel with z3fold, which wor
On Sun, 2020-11-29 at 12:29 +0100, Oleksandr Natalenko wrote:
>
> Ummm so do compressors explode under non-rt kernel in your tests as
> well, or it is just -rt that triggers this?
I only tested a non-rt kernel with z3fold, which worked just fine.
-Mike
On Sun, Nov 29, 2020 at 11:56:55AM +0100, Mike Galbraith wrote:
> On Sun, 2020-11-29 at 10:21 +0100, Mike Galbraith wrote:
> > On Sun, 2020-11-29 at 08:48 +0100, Mike Galbraith wrote:
> > > On Sun, 2020-11-29 at 07:41 +0100, Mike Galbraith wrote:
> > > > On Sat, 2020-11-28 at 15:27 +0100, Oleksandr
On Sun, 2020-11-29 at 10:21 +0100, Mike Galbraith wrote:
> On Sun, 2020-11-29 at 08:48 +0100, Mike Galbraith wrote:
> > On Sun, 2020-11-29 at 07:41 +0100, Mike Galbraith wrote:
> > > On Sat, 2020-11-28 at 15:27 +0100, Oleksandr Natalenko wrote:
> > > >
> > > > > > Shouldn't the list manipulation be
On Sun, 2020-11-29 at 08:48 +0100, Mike Galbraith wrote:
> On Sun, 2020-11-29 at 07:41 +0100, Mike Galbraith wrote:
> > On Sat, 2020-11-28 at 15:27 +0100, Oleksandr Natalenko wrote:
> > >
> > > > > Shouldn't the list manipulation be protected with
> > > > > local_lock+this_cpu_ptr instead of get_cp
On Sun, 2020-11-29 at 07:41 +0100, Mike Galbraith wrote:
> On Sat, 2020-11-28 at 15:27 +0100, Oleksandr Natalenko wrote:
> >
> > > > Shouldn't the list manipulation be protected with
> > > > local_lock+this_cpu_ptr instead of get_cpu_ptr+spin_lock?
> >
> > Totally untested:
>
> Hrm, the thing doesn
On Sat, 2020-11-28 at 15:27 +0100, Oleksandr Natalenko wrote:
>
> > > Shouldn't the list manipulation be protected with
> > > local_lock+this_cpu_ptr instead of get_cpu_ptr+spin_lock?
>
> Totally untested:
Hrm, the thing doesn't seem to care deeply about preemption being
disabled, so adding anothe
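The question quoted above contrasts two ways of protecting the per-CPU "unbuddied" lists. get_cpu_ptr() disables preemption, and on PREEMPT_RT a spinlock_t is a sleeping rt_mutex, so taking one inside that preempt-disabled region is exactly the "scheduling while atomic" splat being chased here. A minimal sketch of both shapes using the mainline local_lock API; the function names, the unbuddied_lock name, and the struct fields are assumptions for illustration, not the patch under discussion.

```c
#include <linux/local_lock.h>
#include <linux/spinlock.h>

/* Hypothetical per-CPU lock, named here only for illustration. */
static DEFINE_PER_CPU(local_lock_t, unbuddied_lock) =
		INIT_LOCAL_LOCK(unbuddied_lock);

/* RT-problematic shape: preemption stays off across a spinlock_t. */
static void add_to_unbuddied_preempt_off(struct z3fold_pool *pool,
					 struct z3fold_header *zhdr, int chunks)
{
	struct list_head *unbuddied = get_cpu_ptr(pool->unbuddied); /* preempt off */

	spin_lock(&pool->lock);		/* sleeps on PREEMPT_RT -> splat */
	list_add(&zhdr->buddy, &unbuddied[chunks]);
	spin_unlock(&pool->lock);
	put_cpu_ptr(pool->unbuddied);	/* preempt on again */
}

/* Suggested shape: a local_lock serializes per-CPU access without a
 * preempt-disabled section, so the inner lock is free to sleep on RT. */
static void add_to_unbuddied_local_lock(struct z3fold_pool *pool,
					struct z3fold_header *zhdr, int chunks)
{
	struct list_head *unbuddied;

	local_lock(&unbuddied_lock);
	unbuddied = this_cpu_ptr(pool->unbuddied);
	spin_lock(&pool->lock);
	list_add(&zhdr->buddy, &unbuddied[chunks]);
	spin_unlock(&pool->lock);
	local_unlock(&unbuddied_lock);
}
```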
On Sat, Nov 28, 2020 at 03:09:24PM +0100, Oleksandr Natalenko wrote:
> > While running v5.10-rc5-rt11 I bumped into the following:
> >
> > ```
> > BUG: scheduling while atomic: git/18695/0x0002
> > Preemption disabled at:
> > [] z3fold_zpool_mal
On Sat, Nov 28, 2020 at 03:05:24PM +0100, Oleksandr Natalenko wrote:
> Hi.
>
> While running v5.10-rc5-rt11 I bumped into the following:
>
> ```
> BUG: scheduling while atomic: git/18695/0x0002
> Preemption disabled at:
> [] z3fold_zpool_malloc+0x463/0x6e0
> …
Hi.
While running v5.10-rc5-rt11 I bumped into the following:
```
BUG: scheduling while atomic: git/18695/0x0002
Preemption disabled at:
[] z3fold_zpool_malloc+0x463/0x6e0
…
Call Trace:
dump_stack+0x6d/0x88
__schedule_bug.cold+0x88/0x96
__schedule+0x69e/0x8c0
preempt_schedule_lock+0x51
The issue reported here:
https://lkml.org/lkml/2020/7/28/1085
is still present as of 5.9-rc3; it was introduced in the 5.8 cycle.
When the problem occurs, iscsid crashes and iscsi volumes fail to come
up, which makes the machine quite sad if the volumes are critical to
its function.
Added CCs
> >> ...Expose device link details in sysfs") caused sleeping/scheduling while
> >> atomic warnings.
> >>
> >> BUG: sleeping function called from invalid context at
> >> kernel/locking/mutex.c:935
> >> in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 12, name:
> &
Hi
On 16.07.2020 07:30, Guenter Roeck wrote:
> On 7/15/20 10:08 PM, Saravana Kannan wrote:
>> Marek and Guenter reported that commit 287905e68dd2 ("driver core:
>> Expose device link details in sysfs") caused sleeping/scheduling while
>> atomic warnings.
>>
On 7/15/20 10:08 PM, Saravana Kannan wrote:
> Marek and Guenter reported that commit 287905e68dd2 ("driver core:
> Expose device link details in sysfs") caused sleeping/scheduling while
> atomic warnings.
>
> BUG: sleeping function called from invalid context at
>
Marek and Guenter reported that commit 287905e68dd2 ("driver core:
Expose device link details in sysfs") caused sleeping/scheduling while
atomic warnings.
BUG: sleeping function called from invalid context at kernel/locking/mutex.c:935
in_atomic(): 1, irqs_disabled(): 0, non_block:
From: Jeffrey Hugo
[ Upstream commit 3be60b564de49875e47974c37fabced893cd0931 ]
ufs_qcom_dump_dbg_regs() uses usleep_range, a sleeping function, but can be
called from atomic context in the following flow:
ufshcd_intr -> ufshcd_sl_intr -> ufshcd_check_errors ->
ufshcd_print_host_regs -> ufshcd_
> ... ufshcd_print_host_regs -> ufshcd_vops_dbg_register_dump ->
> ufs_qcom_dump_dbg_regs
>
> [...]
Applied to 5.8/scsi-queue, thanks!
[1/1] scsi: ufs-qcom: Fix scheduling while atomic issue
https://git.kernel.org/mkp/scsi/c/3be60b564de4
--
Martin K. Petersen Oracle Linux Engineering
On Tue, 2020-05-26 at 06:25 +, Avri Altman wrote:
>
> > ufs_qcom_dump_dbg_regs() uses usleep_range, a sleeping function,
> > but can
> > be called from atomic context in the following flow:
> >
> > ufshcd_intr -> ufshcd_sl_intr -> ufshcd_check_errors ->
> > ufshcd_print_host_regs -> ufshcd_v
> ufs_qcom_dump_dbg_regs() uses usleep_range, a sleeping function, but can
> be called from atomic context in the following flow:
>
> ufshcd_intr -> ufshcd_sl_intr -> ufshcd_check_errors ->
> ufshcd_print_host_regs -> ufshcd_vops_dbg_register_dump ->
> ufs_qcom_dump_dbg_regs
>
> This causes a b
ufs_qcom_dump_dbg_regs() uses usleep_range, a sleeping function, but can
be called from atomic context in the following flow:
ufshcd_intr -> ufshcd_sl_intr -> ufshcd_check_errors ->
ufshcd_print_host_regs -> ufshcd_vops_dbg_register_dump ->
ufs_qcom_dump_dbg_regs
This causes a boot crash on the L
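The fix referenced in these entries (commit 3be60b564de4) addresses a sleeping call reached from the host interrupt path. Below is a generic sketch of the pattern, swapping the sleeping delay for a busy-wait; the helper names, register banks and delay value are illustrative, not copied from the ufs-qcom driver.

```c
#include <linux/delay.h>

/* Hypothetical register-dump helper reached from IRQ (atomic) context. */
static void dump_dbg_regs(struct my_hba *hba)
{
	dump_reg_bank(hba, DBG_BANK_A);

	/*
	 * usleep_range() sleeps, so it must not be used here: this runs
	 * under the host IRQ handler via ufshcd_intr -> ... -> the vops
	 * register dump.
	 *
	 *	usleep_range(1000, 1100);	// BUG: sleeping while atomic
	 */
	udelay(1000);			/* busy-wait instead; atomic-safe */

	dump_reg_bank(hba, DBG_BANK_B);
}
```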
Back in January a "BUG: scheduling while atomic" error showed up during
boot on my Meson8b Odroid-C1 (which uses a PWM regulator as CPU supply).
The call trace comes down to:
__mutex_lock
clk_prepare_lock
clk_core_get_rate
meson_pwm_apply
..
dev_pm_opp_set_rate
..
Jerom
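The trace in this report shows clk_core_get_rate(), which takes the global clk prepare mutex, being reached while the pwm-meson driver holds its spinlock during a CPU-supply change. One way to avoid that, sketched below, is to do the sleeping clock query before taking the spinlock so that only register writes run under it; the structure and helper names are approximations, not the actual pwm-meson patch.

```c
#include <linux/clk.h>
#include <linux/spinlock.h>

/* Problematic shape: a mutex-taking clk call made under a spinlock. */
static int meson_pwm_apply_bad(struct meson_pwm *meson, struct pwm_device *pwm,
			       const struct pwm_state *state)
{
	unsigned long flags, rate;

	spin_lock_irqsave(&meson->lock, flags);
	rate = clk_get_rate(meson->clk);	/* takes clk prepare mutex -> BUG */
	meson_pwm_write_period(meson, pwm, rate, state);
	spin_unlock_irqrestore(&meson->lock, flags);
	return 0;
}

/* Remedy sketch: query the (possibly sleeping) clock rate first, and hold
 * the spinlock only around the atomic-safe register writes. */
static int meson_pwm_apply_fixed(struct meson_pwm *meson, struct pwm_device *pwm,
				 const struct pwm_state *state)
{
	unsigned long rate = clk_get_rate(meson->clk);	/* may sleep: fine here */
	unsigned long flags;

	spin_lock_irqsave(&meson->lock, flags);
	meson_pwm_write_period(meson, pwm, rate, state);	/* MMIO only */
	spin_unlock_irqrestore(&meson->lock, flags);
	return 0;
}
```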
On 30/03/2019 20:29, Martin Blumenstingl wrote:
> Hello Uwe,
>
> On Mon, Mar 25, 2019 at 9:07 PM Uwe Kleine-König
> wrote:
> [...]
- Does stopping the PWM (i.e. clearing MISC_{A,B}_EN in the MISC_AB
register) freeze the output, or is the currently running period
completed fi
On Sat, Mar 30, 2019 at 08:29:35PM +0100, Martin Blumenstingl wrote:
> Hello Uwe,
>
> On Mon, Mar 25, 2019 at 9:07 PM Uwe Kleine-König
> wrote:
> [...]
> > > > - Does stopping the PWM (i.e. clearing MISC_{A,B}_EN in the MISC_AB
> > > >register) freeze the output, or is the currently running
Hello Uwe,
On Mon, Mar 25, 2019 at 9:07 PM Uwe Kleine-König
wrote:
[...]
> > > - Does stopping the PWM (i.e. clearing MISC_{A,B}_EN in the MISC_AB
> > >register) freeze the output, or is the currently running period
> > >completed first? (The latter is the right behaviour.)
> > I don't k
Hi Jerome,
On Tue, Mar 26, 2019 at 9:37 AM Jerome Brunet wrote:
>
> On Mon, 2019-03-25 at 19:04 +0100, Martin Blumenstingl wrote:
> > > Thanks for fixing this Martin.
> > you're welcome!
> >
> > > As for the future enhancement, I'd like to know what you have in mind.
> > > As I have told you prev
Hello Uwe,
On Mon, Mar 25, 2019 at 9:07 PM Uwe Kleine-König
wrote:
>
> Hello Martin,
>
> On Mon, Mar 25, 2019 at 06:41:57PM +0100, Martin Blumenstingl wrote:
> > On Mon, Mar 25, 2019 at 9:41 AM Uwe Kleine-König
> > wrote:
> > > On Sun, Mar 24, 2019 at 11:02:16PM +0100, Martin Blumenstingl wrote:
> >>> Back in January a "BUG: scheduling while atomic" error showed up during
> >>> boot on my Meson8b Odroid-C1 (which uses a PWM regulator as CPU supply).
> >>> The call trace comes down to:
> >>> __mutex_lock
> >>> clk_prepare
On 25/03/2019 18:41, Martin Blumenstingl wrote:
> Hello Uwe,
>
> On Mon, Mar 25, 2019 at 9:41 AM Uwe Kleine-König
> wrote:
>>
>> Hello Martin,
>>
>> On Sun, Mar 24, 2019 at 11:02:16PM +0100, Martin Blumenstingl wrote:
>>> Back in January a "BU
On 26/03/2019 09:37, Jerome Brunet wrote:
> On Mon, 2019-03-25 at 19:04 +0100, Martin Blumenstingl wrote:
>>> Thanks for fixing this Martin.
>> you're welcome!
>>
>>> As for the future enhancement, I'd like to know what you have in mind.
>>> As I have told you previously, I think the clock bindings
On Mon, 2019-03-25 at 19:04 +0100, Martin Blumenstingl wrote:
> > Thanks for fixing this Martin.
> you're welcome!
>
> > As for the future enhancement, I'd like to know what you have in mind.
> > As I have told you previously, I think the clock bindings of this driver are
> > not great.
> >
> > T
Hello Martin,
On Mon, Mar 25, 2019 at 06:41:57PM +0100, Martin Blumenstingl wrote:
> On Mon, Mar 25, 2019 at 9:41 AM Uwe Kleine-König
> wrote:
> > On Sun, Mar 24, 2019 at 11:02:16PM +0100, Martin Blumenstingl wrote:
> > > Analyzing this issue helped me understand the pwm-meson driver better.
> >
Hi Jerome,
On Mon, Mar 25, 2019 at 10:35 AM Jerome Brunet wrote:
>
> On Sun, 2019-03-24 at 23:02 +0100, Martin Blumenstingl wrote:
> > Back in January a "BUG: scheduling while atomic" error showed up during
> > boot on my Meson8b Odroid-C1 (which uses a PWM regulator
Hello Uwe,
On Mon, Mar 25, 2019 at 9:41 AM Uwe Kleine-König
wrote:
>
> Hello Martin,
>
> On Sun, Mar 24, 2019 at 11:02:16PM +0100, Martin Blumenstingl wrote:
> > Back in January a "BUG: scheduling while atomic" error showed up during
> > boot on my Meson8b Odr
On Sun, 2019-03-24 at 23:02 +0100, Martin Blumenstingl wrote:
> Back in January a "BUG: scheduling while atomic" error showed up during
> boot on my Meson8b Odroid-C1 (which uses a PWM regulator as CPU supply).
> The call trace comes down to:
> __mutex_loc
Hello,
On Mon, Mar 25, 2019 at 09:41:53AM +0100, Uwe Kleine-König wrote:
> If you want to implement further cleanups, my questions and propositions
> are:
>
> - Is there a publicly available manual for this hardware? If yes, you
>can add a link to it in the header of the driver.
>
> - Why
Hello Martin,
On Sun, Mar 24, 2019 at 11:02:16PM +0100, Martin Blumenstingl wrote:
> Back in January a "BUG: scheduling while atomic" error showed up during
> boot on my Meson8b Odroid-C1 (which uses a PWM regulator as CPU supply).
> The call trace comes down t
4.14-stable review patch. If anyone has any objections, please let me know.
--
From: Jisheng Zhang
[ Upstream commit 9024143e700f89d74b8cdaf316a3499d74fc56fe ]
When programming the inbound/outbound ATUs, we call usleep_range() after
each checking PCIE_ATU_ENABLE bit. Unfortuna
4.18-stable review patch. If anyone has any objections, please let me know.
--
From: Jisheng Zhang
[ Upstream commit 9024143e700f89d74b8cdaf316a3499d74fc56fe ]
When programming the inbound/outbound ATUs, we call usleep_range() after
each checking PCIE_ATU_ENABLE bit. Unfortuna
From: Jisheng Zhang
[ Upstream commit 9024143e700f89d74b8cdaf316a3499d74fc56fe ]
When programming the inbound/outbound ATUs, we call usleep_range() after
each checking PCIE_ATU_ENABLE bit. Unfortunately, the ATU programming
can be executed in atomic context:
inbound ATU programming could be cal
On Thu, Sep 13, 2018 at 04:05:54PM +0100, Lorenzo Pieralisi wrote:
> On Wed, Aug 29, 2018 at 11:04:08AM +0800, Jisheng Zhang wrote:
> > When programming inbound/outbound atu, we call usleep_range() after
> > each checking PCIE_ATU_ENABLE bit. Unfortunately, the atu programming
> > can be called in
On Wed, Aug 29, 2018 at 11:04:08AM +0800, Jisheng Zhang wrote:
> When programming inbound/outbound atu, we call usleep_range() after
> each checking PCIE_ATU_ENABLE bit. Unfortunately, the atu programming
> can be called in atomic context:
>
> inbound atu programming could be called through
> pci_
On Thu, Sep 13, 2018 at 06:29:54PM +0800, Jisheng Zhang wrote:
> Hi Lorenzo,
>
> On Thu, 13 Sep 2018 10:15:34 +0100 Lorenzo Pieralisi wrote:
>
> > On Mon, Sep 10, 2018 at 04:57:22PM +0800, Jisheng Zhang wrote:
> > > Hi all,
> > >
> > > On Wed, 29 Aug 2018 11:04:08 +0800 Jisheng Zhang wrote:
> >
Hi Lorenzo,
On Thu, 13 Sep 2018 10:15:34 +0100 Lorenzo Pieralisi wrote:
> On Mon, Sep 10, 2018 at 04:57:22PM +0800, Jisheng Zhang wrote:
> > Hi all,
> >
> > On Wed, 29 Aug 2018 11:04:08 +0800 Jisheng Zhang wrote:
> >
> > > When programming inbound/outbound atu, we call usleep_range() after
>
On Mon, Sep 10, 2018 at 04:57:22PM +0800, Jisheng Zhang wrote:
> Hi all,
>
> On Wed, 29 Aug 2018 11:04:08 +0800 Jisheng Zhang wrote:
>
> > When programming inbound/outbound atu, we call usleep_range() after
> > each checking PCIE_ATU_ENABLE bit. Unfortunately, the atu programming
> > can be calle
Hi all,
On Wed, 29 Aug 2018 11:04:08 +0800 Jisheng Zhang wrote:
> When programming inbound/outbound atu, we call usleep_range() after
> each checking PCIE_ATU_ENABLE bit. Unfortunately, the atu programming
> can be called in atomic context:
>
> inbound atu programming could be called through
> p
When programming inbound/outbound atu, we call usleep_range() after
each checking PCIE_ATU_ENABLE bit. Unfortunately, the atu programming
can be called in atomic context:
inbound atu programming could be called through
pci_epc_write_header()
=>dw_pcie_ep_write_header()
=>dw_pcie_prog_inbound
On Tue, Aug 21, 2018 at 02:15:12PM +0800, Jisheng Zhang wrote:
> When programming inbound/outbound atu, we call usleep_range() after
> each checking PCIE_ATU_ENABLE bit. Unfortunately, the atu programming
> can be called in atomic context:
>
> inbound atu programming could be called through
> pci_
Hi Jisheng
On 21/08/2018 07:15, Jisheng Zhang wrote:
> When programming inbound/outbound atu, we call usleep_range() after
> each checking PCIE_ATU_ENABLE bit. Unfortunately, the atu programming
> can be called in atomic context:
>
> inbound atu programming could be called through
> pci_epc_write
When programming inbound/outbound atu, we call usleep_range() after
each checking PCIE_ATU_ENABLE bit. Unfortunately, the atu programming
can be called in atomic context:
inbound atu programming could be called through
pci_epc_write_header()
=>dw_pcie_ep_write_header()
=>dw_pcie_prog_inbound
/Jisheng-Zhang/PCI-dwc-fix-scheduling-while-atomic-issues/20180821-110033
base: https://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci.git next
config: arm-omap2plus_defconfig (attached as .config)
compiler: arm-linux-gnueabi-gcc (Debian 7.2.0-11) 7.2.0
reproduce:
wget
https
When programming inbound/outbound atu, we call usleep_range() after
each checking PCIE_ATU_ENABLE bit. Unfortunately, the atu programming
can be called in atomic context:
inbound atu programming could be called through
pci_epc_write_header()
=>dw_pcie_ep_write_header()
=>dw_pcie_prog_inbound
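The fix carried by these backports (commit 9024143e700f) keeps the ATU-enable poll loop but replaces the sleeping delay with a busy-wait, since the callers listed above can run in atomic context. A sketch of that loop shape follows; the helper name, retry count, delay constant and message are illustrative rather than the driver's actual values.

```c
#include <linux/delay.h>

#define ATU_ENABLE_RETRIES	5	/* illustrative, not the driver's values */
#define ATU_ENABLE_DELAY_MS	9

static void dw_pcie_wait_atu_enable(struct dw_pcie *pci, u32 ctrl_reg)
{
	u32 val;
	int retries;

	/* ... ATU base/target/control registers programmed before this ... */

	for (retries = 0; retries < ATU_ENABLE_RETRIES; retries++) {
		val = dw_pcie_readl_dbi(pci, ctrl_reg);
		if (val & PCIE_ATU_ENABLE)
			return;

		/*
		 * usleep_range() would sleep, but this path can be reached
		 * from atomic context (e.g. EPC header writes, raising an
		 * MSI), so busy-wait instead.
		 */
		mdelay(ATU_ENABLE_DELAY_MS);
	}

	dev_err(pci->dev, "ATU did not become enabled\n");
}
```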
From: "Steven Rostedt (VMware)"
The commit "memcontrol: Prevent scheduling while atomic in cgroup code"
fixed this issue:
refill_stock()
get_cpu_var()
drain_stock()
res_counter_uncharge()
res_co
From: Michael Chan
Date: Wed, 14 Mar 2018 10:22:51 -0700
> On Wed, Mar 14, 2018 at 9:36 AM, Jonathan Toppins wrote:
>> The problem was introduced in commit
>> 506b0a395f26 ("[netdrv] tg3: APE heartbeat changes"). The bug occurs
>> because tp->lock spinlock is held which is obtained in tg3_start
On 03/14/2018 01:22 PM, Michael Chan wrote:
> On Wed, Mar 14, 2018 at 9:36 AM, Jonathan Toppins wrote:
>> The problem was introduced in commit
>> 506b0a395f26 ("[netdrv] tg3: APE heartbeat changes"). The bug occurs
>> because tp->lock spinlock is held which is obtained in tg3_start
>> by way of tg
On Wed, Mar 14, 2018 at 9:36 AM, Jonathan Toppins wrote:
> The problem was introduced in commit
> 506b0a395f26 ("[netdrv] tg3: APE heartbeat changes"). The bug occurs
> because tp->lock spinlock is held which is obtained in tg3_start
> by way of tg3_full_lock(), line 11571. The documentation for u
The problem was introduced in commit
506b0a395f26 ("[netdrv] tg3: APE heartbeat changes"). The bug occurs
because tp->lock spinlock is held which is obtained in tg3_start
by way of tg3_full_lock(), line 11571. The documentation for usleep_range()
specifically states it cannot be used inside a spinl
4.9.65-rt57-rc2 stable review patch.
If anyone has any objections, please let me know.
--
From: "Steven Rostedt (VMware)"
The commit "memcontrol: Prevent scheduling while atomic in cgroup code"
fixed this issue:
refill_stock()
4.9.65-rt57-rc1 stable review patch.
If anyone has any objections, please let me know.
--
From: "Steven Rostedt (VMware)"
The commit "memcontrol: Prevent scheduling while atomic in cgroup code"
fixed this issue:
refill_stock()
On 2017-11-22 07:31:19 [-0500], Steven Rostedt wrote:
> From: "Steven Rostedt (VMware)"
>
> The commit "memcontrol: Prevent scheduling while atomic in cgroup code"
> fixed this issue:
>
>refill_stock()
>
From: "Steven Rostedt (VMware)"
The commit "memcontrol: Prevent scheduling while atomic in cgroup code"
fixed this issue:
refill_stock()
get_cpu_var()
drain_stock()
res_counter_uncharge()
res_co
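The call chain quoted in these entries is the RT problem the stable patches deal with, and it is the same preempt-off-plus-sleeping-lock pattern as the zswap/z3fold reports earlier in this listing: get_cpu_var() disables preemption, and res_counter_uncharge() then takes a spinlock_t, which is a sleeping lock on PREEMPT_RT. An illustrative sketch of why that chain trips the atomicity check; the function and field names mirror the quoted chain, but the bodies are simplified stand-ins, not the actual mm/memcontrol.c code.

```c
/* Illustrative stand-in mirroring the quoted call chain, not real memcg code. */
static void drain_stock(struct memcg_stock_pcp *stock)
{
	/*
	 * res_counter_uncharge() takes a spinlock_t.  On PREEMPT_RT that
	 * lock is an rt_mutex and may sleep, but the caller still has
	 * preemption disabled via get_cpu_var() -> "scheduling while atomic".
	 */
	res_counter_uncharge(&stock->cached->res, stock->nr_pages * PAGE_SIZE);
	stock->nr_pages = 0;
	stock->cached = NULL;
}

static void refill_stock(struct mem_cgroup *memcg, unsigned long nr_pages)
{
	struct memcg_stock_pcp *stock = &get_cpu_var(memcg_stock); /* preempt off */

	if (stock->cached != memcg)
		drain_stock(stock);		/* may end up sleeping on RT */
	stock->cached = memcg;
	stock->nr_pages += nr_pages;

	put_cpu_var(memcg_stock);		/* preempt on again */
}
```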
On Wed, 2017-09-13 at 17:19 +0200, Michal Hocko wrote:
> On Wed 13-09-17 15:07:26, Jorgen S. Hansen wrote:
[...]
> > The patch below has been used to fix the above issue by other distros
> > - among them Redhat for the 3.10 kernel, so it should work for 3.16 as
> > well.
>
> Thanks for the confirm