aoe: kernel crash on blk_update_request: I/O error, BUG: scheduling while atomic

2021-04-13 Thread Valentin Kleibel
0x0 phys_seg 2 prio class 0 [ 408.620290] BUG: scheduling while atomic: swapper/16/0/0x0100 [ 408.620325] Modules linked in: sctp bridge 8021q garp stp mrp llc psmouse dlm configfs aoe ipmi_ssif amd64_edac_mod edac_mce_amd amd_energy kvm_amd kvm irqbypass ghash_clmulni_intel aesni_intel lib

Re: scheduling while atomic in z3fold

2020-12-08 Thread Mike Galbraith
On Wed, 2020-12-09 at 07:13 +0100, Mike Galbraith wrote: > On Wed, 2020-12-09 at 00:26 +0100, Vitaly Wool wrote: > > Hi Mike, > > > > On 2020-12-07 16:41, Mike Galbraith wrote: > > > On Mon, 2020-12-07 at 16:21 +0100, Vitaly Wool wrote: > > >> On Mon, Dec 7, 2020 at 1:34 PM Mike Galbraith wrote: >

Re: scheduling while atomic in z3fold

2020-12-08 Thread Mike Galbraith
On Wed, 2020-12-09 at 00:26 +0100, Vitaly Wool wrote: > Hi Mike, > > On 2020-12-07 16:41, Mike Galbraith wrote: > > On Mon, 2020-12-07 at 16:21 +0100, Vitaly Wool wrote: > >> On Mon, Dec 7, 2020 at 1:34 PM Mike Galbraith wrote: > >>> > >> > >>> Unfortunately, that made zero difference. > >> > >> O

Re: scheduling while atomic in z3fold

2020-12-08 Thread Vitaly Wool
Hi Mike, On 2020-12-07 16:41, Mike Galbraith wrote: On Mon, 2020-12-07 at 16:21 +0100, Vitaly Wool wrote: On Mon, Dec 7, 2020 at 1:34 PM Mike Galbraith wrote: Unfortunately, that made zero difference. Okay, I suggest that you submit the patch that changes read_lock() to write_lock() in

Re: scheduling while atomic in z3fold

2020-12-07 Thread Mike Galbraith
On Mon, 2020-12-07 at 16:21 +0100, Vitaly Wool wrote: > On Mon, Dec 7, 2020 at 1:34 PM Mike Galbraith wrote: > > > > > Unfortunately, that made zero difference. > > Okay, I suggest that you submit the patch that changes read_lock() to > write_lock() in __release_z3fold_page() and I'll ack it then.
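The read_lock() to write_lock() change being discussed can be sketched roughly as follows. This is a simplified illustration of the locking change, not the full `__release_z3fold_page()` from mm/z3fold.c; field names are abbreviated.

```c
/* Sketch only: with read_lock(), other readers could still traverse
 * the handle slots while the page is being torn down.  Taking the
 * rwlock for writing makes the release path exclusive. */
static void release_z3fold_page_sketch(struct z3fold_header *zhdr)
{
	write_lock(&zhdr->slots->lock);   /* was: read_lock() */
	/* ... detach handles, mark slots free ... */
	write_unlock(&zhdr->slots->lock); /* was: read_unlock() */
}
```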

Re: scheduling while atomic in z3fold

2020-12-07 Thread Sebastian Andrzej Siewior
On 2020-12-07 16:21:20 [+0100], Vitaly Wool wrote: > On Mon, Dec 7, 2020 at 1:34 PM Mike Galbraith wrote: > > > > Unfortunately, that made zero difference. > > Okay, I suggest that you submit the patch that changes read_lock() to > write_lock() in __release_z3fold_page() and I'll ack it then. > I

Re: scheduling while atomic in z3fold

2020-12-07 Thread Vitaly Wool
On Mon, Dec 7, 2020 at 1:34 PM Mike Galbraith wrote: > > On Mon, 2020-12-07 at 12:52 +0100, Vitaly Wool wrote: > > > > Thanks. This trace beats me because I don't quite get how this could > > have happened. > > I swear there's a mythical creature loose in there somewhere ;-) > Everything looks jus

Re: scheduling while atomic in z3fold

2020-12-07 Thread Mike Galbraith
On Mon, 2020-12-07 at 12:52 +0100, Vitaly Wool wrote: > > Thanks. This trace beats me because I don't quite get how this could > have happened. I swear there's a mythical creature loose in there somewhere ;-) Everything looks just peachy up to the instant it goes boom, then you find in the wreckag

Re: scheduling while atomic in z3fold

2020-12-07 Thread Vitaly Wool
On Mon, Dec 7, 2020 at 3:18 AM Mike Galbraith wrote: > > On Mon, 2020-12-07 at 02:05 +0100, Vitaly Wool wrote: > > > > Could you please try the following patch in your setup: > > crash> gdb list *z3fold_zpool_free+0x527 > 0xc0e14487 is in z3fold_zpool_free (mm/z3fold.c:341). > 336

Re: scheduling while atomic in z3fold

2020-12-06 Thread Mike Galbraith
On Mon, 2020-12-07 at 02:05 +0100, Vitaly Wool wrote: > > Could you please try the following patch in your setup: crash> gdb list *z3fold_zpool_free+0x527 0xc0e14487 is in z3fold_zpool_free (mm/z3fold.c:341). 336 if (slots->slot[i]) { 337 is_

Re: scheduling while atomic in z3fold

2020-12-06 Thread Mike Galbraith
On Thu, 2020-12-03 at 14:39 +0100, Sebastian Andrzej Siewior wrote: > On 2020-12-03 09:18:21 [+0100], Mike Galbraith wrote: > > On Thu, 2020-12-03 at 03:16 +0100, Mike Galbraith wrote: > > > On Wed, 2020-12-02 at 23:08 +0100, Sebastian Andrzej Siewior wrote: > > > Looks like... > > > > > > d8f117ab

Re: scheduling while atomic in z3fold

2020-12-03 Thread Vitaly Wool
On Thu, Dec 3, 2020 at 2:39 PM Sebastian Andrzej Siewior wrote: > > On 2020-12-03 09:18:21 [+0100], Mike Galbraith wrote: > > On Thu, 2020-12-03 at 03:16 +0100, Mike Galbraith wrote: > > > On Wed, 2020-12-02 at 23:08 +0100, Sebastian Andrzej Siewior wrote: > > > Looks like... > > > > > > d8f117abb

Re: scheduling while atomic in z3fold

2020-12-03 Thread Sebastian Andrzej Siewior
On 2020-12-03 09:18:21 [+0100], Mike Galbraith wrote: > On Thu, 2020-12-03 at 03:16 +0100, Mike Galbraith wrote: > > On Wed, 2020-12-02 at 23:08 +0100, Sebastian Andrzej Siewior wrote: > > Looks like... > > > > d8f117abb380 z3fold: fix use-after-free when freeing handles > > > > ...wasn't completel

Re: scheduling while atomic in z3fold

2020-12-03 Thread Mike Galbraith
On Thu, 2020-12-03 at 03:16 +0100, Mike Galbraith wrote: > On Wed, 2020-12-02 at 23:08 +0100, Sebastian Andrzej Siewior wrote: > Looks like... > > d8f117abb380 z3fold: fix use-after-free when freeing handles > > ...wasn't completely effective... The top two hunks seem to have rendered the thing RT

Re: scheduling while atomic in z3fold

2020-12-02 Thread Mike Galbraith
On Wed, 2020-12-02 at 23:08 +0100, Sebastian Andrzej Siewior wrote: > On 2020-12-02 03:30:27 [+0100], Mike Galbraith wrote: > > > What I'm seeing is the below. rt_mutex_has_waiters() says yup we have > > a waiter, rt_mutex_top_waiter() emits the missing cached leftmost, and > > rt_mutex_dequeue_pi

Re: scheduling while atomic in z3fold

2020-12-02 Thread Sebastian Andrzej Siewior
On 2020-12-02 03:30:27 [+0100], Mike Galbraith wrote: > > > In an LTP install, ./runltp -f mm. Shortly after box starts swapping > > > insanely, it explodes quite reliably here with either z3fold or > > > zsmalloc.. but not with zbud. > > What I'm seeing is the below. rt_mutex_has_waiters() says

Re: scheduling while atomic in z3fold

2020-12-01 Thread Mike Galbraith
On Mon, 2020-11-30 at 17:03 +0100, Sebastian Andrzej Siewior wrote: > On 2020-11-30 16:01:11 [+0100], Mike Galbraith wrote: > > On Mon, 2020-11-30 at 15:52 +0100, Sebastian Andrzej Siewior wrote: > > > How do you test this? I triggered a few oom-killer and I have here git > > > gc running for a few

Re: scheduling while atomic in z3fold

2020-11-30 Thread Mike Galbraith
On Mon, 2020-11-30 at 17:32 +0100, Sebastian Andrzej Siewior wrote: > On 2020-11-30 17:27:17 [+0100], Mike Galbraith wrote: > > > This just passed. It however killed my git-gc task which wasn't done. > > > Let me try tomorrow with your config. > > > > FYI, I tried 5.9-rt (after fixing 5.9.11), it e

Re: scheduling while atomic in z3fold

2020-11-30 Thread Mike Galbraith
On Mon, 2020-11-30 at 17:27 +0100, Mike Galbraith wrote: > On Mon, 2020-11-30 at 17:03 +0100, Sebastian Andrzej Siewior wrote: > > On 2020-11-30 16:01:11 [+0100], Mike Galbraith wrote: > > > On Mon, 2020-11-30 at 15:52 +0100, Sebastian Andrzej Siewior wrote: > > > > How do you test this? I triggere

Re: scheduling while atomic in z3fold

2020-11-30 Thread Sebastian Andrzej Siewior
On 2020-11-30 17:27:17 [+0100], Mike Galbraith wrote: > > This just passed. It however killed my git-gc task which wasn't done. > > Let me try tomorrow with your config. > > FYI, I tried 5.9-rt (after fixing 5.9.11), it exploded in the same way, > so (as expected) it's not some devel tree oopsie.

Re: scheduling while atomic in z3fold

2020-11-30 Thread Mike Galbraith
On Mon, 2020-11-30 at 17:03 +0100, Sebastian Andrzej Siewior wrote: > On 2020-11-30 16:01:11 [+0100], Mike Galbraith wrote: > > On Mon, 2020-11-30 at 15:52 +0100, Sebastian Andrzej Siewior wrote: > > > How do you test this? I triggered a few oom-killer and I have here git > > > gc running for a few

Re: scheduling while atomic in z3fold

2020-11-30 Thread Sebastian Andrzej Siewior
On 2020-11-30 16:01:11 [+0100], Mike Galbraith wrote: > On Mon, 2020-11-30 at 15:52 +0100, Sebastian Andrzej Siewior wrote: > > How do you test this? I triggered a few oom-killer and I have here git > > gc running for a few hours now… Everything is fine. > > In an LTP install, ./runltp -f mm. Sho

Re: scheduling while atomic in z3fold

2020-11-30 Thread Mike Galbraith
On Mon, 2020-11-30 at 16:01 +0100, Mike Galbraith wrote: > On Mon, 2020-11-30 at 15:52 +0100, Sebastian Andrzej Siewior wrote: > > How do you test this? I triggered a few oom-killer and I have here git > > gc running for a few hours now… Everything is fine. > > In an LTP install, ./runltp -f mm. S

Re: scheduling while atomic in z3fold

2020-11-30 Thread Mike Galbraith
On Mon, 2020-11-30 at 15:52 +0100, Sebastian Andrzej Siewior wrote: > How do you test this? I triggered a few oom-killer and I have here git > gc running for a few hours now… Everything is fine. In an LTP install, ./runltp -f mm. Shortly after box starts swapping insanely, it explodes quite relia

Re: scheduling while atomic in z3fold

2020-11-30 Thread Sebastian Andrzej Siewior
On 2020-11-30 15:42:46 [+0100], Mike Galbraith wrote: > This explodes in write_unlock() as mine did. Oleksandr's local_lock() > variant explodes in the lock he added. (ew, corruption) > > I think I'll try a stable-rt tree. This master tree _should_ be fine > given it seems to work just peachy

Re: scheduling while atomic in z3fold

2020-11-30 Thread Mike Galbraith
On Mon, 2020-11-30 at 14:20 +0100, Sebastian Andrzej Siewior wrote: > On 2020-11-29 12:41:14 [+0100], Mike Galbraith wrote: > > On Sun, 2020-11-29 at 12:29 +0100, Oleksandr Natalenko wrote: > > > > > > Ummm so do compressors explode under non-rt kernel in your tests as > > > well, or it is just -rt

Re: scheduling while atomic in z3fold

2020-11-30 Thread Sebastian Andrzej Siewior
On 2020-11-30 14:53:22 [+0100], Oleksandr Natalenko wrote: > > diff --git a/mm/zswap.c b/mm/zswap.c > > index 78a20f7b00f2c..b24f761b9241c 100644 > > --- a/mm/zswap.c > > +++ b/mm/zswap.c > > @@ -394,7 +394,9 @@ struct zswap_comp { > > u8 *dstmem; > > }; > > > > -static DEFINE_PER_CPU(struct

Re: scheduling while atomic in z3fold

2020-11-30 Thread Oleksandr Natalenko
On Mon, Nov 30, 2020 at 02:20:14PM +0100, Sebastian Andrzej Siewior wrote: > On 2020-11-29 12:41:14 [+0100], Mike Galbraith wrote: > > On Sun, 2020-11-29 at 12:29 +0100, Oleksandr Natalenko wrote: > > > > > > Ummm so do compressors explode under non-rt kernel in your tests as > > > well, or it is j

Re: scheduling while atomic in z3fold

2020-11-30 Thread Sebastian Andrzej Siewior
On 2020-11-29 12:41:14 [+0100], Mike Galbraith wrote: > On Sun, 2020-11-29 at 12:29 +0100, Oleksandr Natalenko wrote: > > > > Ummm so do compressors explode under non-rt kernel in your tests as > > well, or it is just -rt that triggers this? > > I only tested a non-rt kernel with z3fold, which wor

Re: scheduling while atomic in z3fold

2020-11-29 Thread Mike Galbraith
On Sun, 2020-11-29 at 12:29 +0100, Oleksandr Natalenko wrote: > > Ummm so do compressors explode under non-rt kernel in your tests as > well, or it is just -rt that triggers this? I only tested a non-rt kernel with z3fold, which worked just fine. -Mike

Re: scheduling while atomic in z3fold

2020-11-29 Thread Oleksandr Natalenko
On Sun, Nov 29, 2020 at 11:56:55AM +0100, Mike Galbraith wrote: > On Sun, 2020-11-29 at 10:21 +0100, Mike Galbraith wrote: > > On Sun, 2020-11-29 at 08:48 +0100, Mike Galbraith wrote: > > > On Sun, 2020-11-29 at 07:41 +0100, Mike Galbraith wrote: > > > > On Sat, 2020-11-28 at 15:27 +0100, Oleksandr

Re: scheduling while atomic in z3fold

2020-11-29 Thread Mike Galbraith
On Sun, 2020-11-29 at 10:21 +0100, Mike Galbraith wrote: > On Sun, 2020-11-29 at 08:48 +0100, Mike Galbraith wrote: > > On Sun, 2020-11-29 at 07:41 +0100, Mike Galbraith wrote: > > > On Sat, 2020-11-28 at 15:27 +0100, Oleksandr Natalenko wrote: > > > > > > > > > > Shouldn't the list manipulation be

Re: scheduling while atomic in z3fold

2020-11-29 Thread Mike Galbraith
On Sun, 2020-11-29 at 08:48 +0100, Mike Galbraith wrote: > On Sun, 2020-11-29 at 07:41 +0100, Mike Galbraith wrote: > > On Sat, 2020-11-28 at 15:27 +0100, Oleksandr Natalenko wrote: > > > > > > > > Shouldn't the list manipulation be protected with > > > > > local_lock+this_cpu_ptr instead of get_cp

Re: scheduling while atomic in z3fold

2020-11-28 Thread Mike Galbraith
On Sun, 2020-11-29 at 07:41 +0100, Mike Galbraith wrote: > On Sat, 2020-11-28 at 15:27 +0100, Oleksandr Natalenko wrote: > > > > > > Shouldn't the list manipulation be protected with > > > > local_lock+this_cpu_ptr instead of get_cpu_ptr+spin_lock? > > > > Totally untested: > > Hrm, the thing doesn

Re: scheduling while atomic in z3fold

2020-11-28 Thread Mike Galbraith
On Sat, 2020-11-28 at 15:27 +0100, Oleksandr Natalenko wrote: > > > > Shouldn't the list manipulation be protected with > > > local_lock+this_cpu_ptr instead of get_cpu_ptr+spin_lock? > > Totally untested: Hrm, the thing doesn't seem to care deeply about preemption being disabled, so adding anothe
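The two per-CPU protection schemes being compared in this subthread can be sketched like this. The structure and lock names are hypothetical stand-ins, not the actual z3fold fields; the point is the difference in preemption behaviour on PREEMPT_RT.

```c
/* Scheme 1: get_cpu_ptr() + spin_lock().  get_cpu_ptr() disables
 * preemption; on PREEMPT_RT the spinlock is a sleeping rt_mutex,
 * and sleeping with preemption disabled is exactly the
 * "scheduling while atomic" bug. */
struct unbuddied *ub = get_cpu_ptr(&pool->unbuddied);
spin_lock(&ub->lock);
/* ... per-CPU list manipulation ... */
spin_unlock(&ub->lock);
put_cpu_ptr(&pool->unbuddied);

/* Scheme 2: local_lock() + this_cpu_ptr().  On non-RT this still
 * disables preemption; on PREEMPT_RT it maps to a per-CPU sleeping
 * lock, so the section stays preemptible and no atomic-context
 * rule is violated. */
local_lock(&pool->llock);
ub = this_cpu_ptr(&pool->unbuddied);
/* ... per-CPU list manipulation ... */
local_unlock(&pool->llock);
```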

Re: scheduling while atomic in z3fold

2020-11-28 Thread Oleksandr Natalenko
On Sat, Nov 28, 2020 at 03:09:24PM +0100, Oleksandr Natalenko wrote: > > While running v5.10-rc5-rt11 I bumped into the following: > > > > ``` > > BUG: scheduling while atomic: git/18695/0x0002 > > Preemption disabled at: > > [] z3fold_zpool_mal

Re: scheduling while atomic in z3fold

2020-11-28 Thread Oleksandr Natalenko
On Sat, Nov 28, 2020 at 03:05:24PM +0100, Oleksandr Natalenko wrote: > Hi. > > While running v5.10-rc5-rt11 I bumped into the following: > > ``` > BUG: scheduling while atomic: git/18695/0x0002 > Preemption disabled at: > [] z3fold_zpool_malloc+0x463/0x6e0 > …

scheduling while atomic in z3fold

2020-11-28 Thread Oleksandr Natalenko
Hi. While running v5.10-rc5-rt11 I bumped into the following: ``` BUG: scheduling while atomic: git/18695/0x0002 Preemption disabled at: [] z3fold_zpool_malloc+0x463/0x6e0 … Call Trace: dump_stack+0x6d/0x88 __schedule_bug.cold+0x88/0x96 __schedule+0x69e/0x8c0 preempt_schedule_lock+0x51
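The trace in this report is the classic symptom of a sleeping operation executed inside a preemption-disabled region. A minimal kernel-style sketch of the anti-pattern on PREEMPT_RT (names are hypothetical, not the actual z3fold code paths):

```c
/* Illustrative sketch of the "scheduling while atomic" anti-pattern. */
struct pool_cpu_state *state;

state = get_cpu_ptr(&pool_state);   /* disables preemption */
spin_lock(&state->lock);            /* on PREEMPT_RT this is a sleeping
                                     * rt_mutex: blocking here with
                                     * preemption off triggers
                                     * "BUG: scheduling while atomic" */
/* ... critical section ... */
spin_unlock(&state->lock);
put_cpu_ptr(&pool_state);           /* re-enables preemption */
```

The `0x0002` in the BUG line is the preempt count, confirming preemption was disabled at the point the scheduler was entered.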

"scheduling while atomic" BUG in iscsid since commit 1b66d253610c7

2020-08-31 Thread Marc Dionne
The issue reported here: https://lkml.org/lkml/2020/7/28/1085 is still present as of 5.9-rc3; it was introduced in the 5.8 cycle. When the problem occurs, iscsid crashes and iscsi volumes fail to come up, which makes the machine quite sad if the volumes are critical to its function. Added CCs

Re: [PATCH v1] driver core: Fix scheduling while atomic warnings during device link deletion

2020-07-16 Thread Saravana Kannan
e link details in sysfs") caused sleeping/scheduling while > >> atomic warnings. > >> > >> BUG: sleeping function called from invalid context at > >> kernel/locking/mutex.c:935 > >> in_atomic(): 1, irqs_disabled(): 0, non_block: 0, pid: 12, name: > &

Re: [PATCH v1] driver core: Fix scheduling while atomic warnings during device link deletion

2020-07-15 Thread Marek Szyprowski
Hi On 16.07.2020 07:30, Guenter Roeck wrote: > On 7/15/20 10:08 PM, Saravana Kannan wrote: >> Marek and Guenter reported that commit 287905e68dd2 ("driver core: >> Expose device link details in sysfs") caused sleeping/scheduling while >> atomic warnings. >>

Re: [PATCH v1] driver core: Fix scheduling while atomic warnings during device link deletion

2020-07-15 Thread Guenter Roeck
On 7/15/20 10:08 PM, Saravana Kannan wrote: > Marek and Guenter reported that commit 287905e68dd2 ("driver core: > Expose device link details in sysfs") caused sleeping/scheduling while > atomic warnings. > > BUG: sleeping function called from invalid context at >

[PATCH v1] driver core: Fix scheduling while atomic warnings during device link deletion

2020-07-15 Thread Saravana Kannan
Marek and Guenter reported that commit 287905e68dd2 ("driver core: Expose device link details in sysfs") caused sleeping/scheduling while atomic warnings. BUG: sleeping function called from invalid context at kernel/locking/mutex.c:935 in_atomic(): 1, irqs_disabled(): 0, non_block:

[PATCH 5.7 278/477] scsi: ufs-qcom: Fix scheduling while atomic issue

2020-06-23 Thread Greg Kroah-Hartman
From: Jeffrey Hugo [ Upstream commit 3be60b564de49875e47974c37fabced893cd0931 ] ufs_qcom_dump_dbg_regs() uses usleep_range, a sleeping function, but can be called from atomic context in the following flow: ufshcd_intr -> ufshcd_sl_intr -> ufshcd_check_errors -> ufshcd_print_host_regs -> ufshcd_

[PATCH 5.4 190/314] scsi: ufs-qcom: Fix scheduling while atomic issue

2020-06-23 Thread Greg Kroah-Hartman
From: Jeffrey Hugo [ Upstream commit 3be60b564de49875e47974c37fabced893cd0931 ] ufs_qcom_dump_dbg_regs() uses usleep_range, a sleeping function, but can be called from atomic context in the following flow: ufshcd_intr -> ufshcd_sl_intr -> ufshcd_check_errors -> ufshcd_print_host_regs -> ufshcd_

[PATCH 4.14 078/136] scsi: ufs-qcom: Fix scheduling while atomic issue

2020-06-23 Thread Greg Kroah-Hartman
From: Jeffrey Hugo [ Upstream commit 3be60b564de49875e47974c37fabced893cd0931 ] ufs_qcom_dump_dbg_regs() uses usleep_range, a sleeping function, but can be called from atomic context in the following flow: ufshcd_intr -> ufshcd_sl_intr -> ufshcd_check_errors -> ufshcd_print_host_regs -> ufshcd_

[PATCH 4.19 116/206] scsi: ufs-qcom: Fix scheduling while atomic issue

2020-06-23 Thread Greg Kroah-Hartman
From: Jeffrey Hugo [ Upstream commit 3be60b564de49875e47974c37fabced893cd0931 ] ufs_qcom_dump_dbg_regs() uses usleep_range, a sleeping function, but can be called from atomic context in the following flow: ufshcd_intr -> ufshcd_sl_intr -> ufshcd_check_errors -> ufshcd_print_host_regs -> ufshcd_

[PATCH AUTOSEL 5.7 283/388] scsi: ufs-qcom: Fix scheduling while atomic issue

2020-06-17 Thread Sasha Levin
From: Jeffrey Hugo [ Upstream commit 3be60b564de49875e47974c37fabced893cd0931 ] ufs_qcom_dump_dbg_regs() uses usleep_range, a sleeping function, but can be called from atomic context in the following flow: ufshcd_intr -> ufshcd_sl_intr -> ufshcd_check_errors -> ufshcd_print_host_regs -> ufshcd_

[PATCH AUTOSEL 4.14 081/108] scsi: ufs-qcom: Fix scheduling while atomic issue

2020-06-17 Thread Sasha Levin
From: Jeffrey Hugo [ Upstream commit 3be60b564de49875e47974c37fabced893cd0931 ] ufs_qcom_dump_dbg_regs() uses usleep_range, a sleeping function, but can be called from atomic context in the following flow: ufshcd_intr -> ufshcd_sl_intr -> ufshcd_check_errors -> ufshcd_print_host_regs -> ufshcd_

[PATCH AUTOSEL 4.19 123/172] scsi: ufs-qcom: Fix scheduling while atomic issue

2020-06-17 Thread Sasha Levin
From: Jeffrey Hugo [ Upstream commit 3be60b564de49875e47974c37fabced893cd0931 ] ufs_qcom_dump_dbg_regs() uses usleep_range, a sleeping function, but can be called from atomic context in the following flow: ufshcd_intr -> ufshcd_sl_intr -> ufshcd_check_errors -> ufshcd_print_host_regs -> ufshcd_

[PATCH AUTOSEL 5.4 194/266] scsi: ufs-qcom: Fix scheduling while atomic issue

2020-06-17 Thread Sasha Levin
From: Jeffrey Hugo [ Upstream commit 3be60b564de49875e47974c37fabced893cd0931 ] ufs_qcom_dump_dbg_regs() uses usleep_range, a sleeping function, but can be called from atomic context in the following flow: ufshcd_intr -> ufshcd_sl_intr -> ufshcd_check_errors -> ufshcd_print_host_regs -> ufshcd_

Re: [PATCH] scsi: ufs-qcom: Fix scheduling while atomic issue

2020-05-26 Thread Martin K. Petersen
; ufshcd_print_host_regs -> ufshcd_vops_dbg_register_dump -> > ufs_qcom_dump_dbg_regs > > [...] Applied to 5.8/scsi-queue, thanks! [1/1] scsi: ufs-qcom: Fix scheduling while atomic issue https://git.kernel.org/mkp/scsi/c/3be60b564de4 -- Martin K. Petersen Oracle Linux Engineering

Re: [PATCH] scsi: ufs-qcom: Fix scheduling while atomic issue

2020-05-26 Thread Bean Huo
On Tue, 2020-05-26 at 06:25 +, Avri Altman wrote: > > > ufs_qcom_dump_dbg_regs() uses usleep_range, a sleeping function, > > but can > > be called from atomic context in the following flow: > > > > ufshcd_intr -> ufshcd_sl_intr -> ufshcd_check_errors -> > > ufshcd_print_host_regs -> ufshcd_v

RE: [PATCH] scsi: ufs-qcom: Fix scheduling while atomic issue

2020-05-25 Thread Avri Altman
> ufs_qcom_dump_dbg_regs() uses usleep_range, a sleeping function, but can > be called from atomic context in the following flow: > > ufshcd_intr -> ufshcd_sl_intr -> ufshcd_check_errors -> > ufshcd_print_host_regs -> ufshcd_vops_dbg_register_dump -> > ufs_qcom_dump_dbg_regs > > This causes a b

[PATCH] scsi: ufs-qcom: Fix scheduling while atomic issue

2020-05-25 Thread Jeffrey Hugo
ufs_qcom_dump_dbg_regs() uses usleep_range, a sleeping function, but can be called from atomic context in the following flow: ufshcd_intr -> ufshcd_sl_intr -> ufshcd_check_errors -> ufshcd_print_host_regs -> ufshcd_vops_dbg_register_dump -> ufs_qcom_dump_dbg_regs This causes a boot crash on the L
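The constraint this patch addresses: usleep_range() may only be called where sleeping is allowed, and the interrupt path listed above is atomic context. A generic sketch of the rule, with a hypothetical register-dump helper (not the actual patch contents):

```c
/* In interrupt context (ufshcd_intr -> ... -> register dump), sleeping
 * calls such as usleep_range() are forbidden.  When a short pause
 * between register accesses is unavoidable, a busy-wait delay is the
 * usual substitute, at the cost of spinning the CPU. */
static void dump_dbg_regs_sketch(void __iomem *base)
{
	/* BAD in atomic context: usleep_range(1000, 1100); */
	udelay(1000);   /* busy-waits; legal in atomic context, keep short */
	/* ... read and log registers ... */
}
```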

[PATCH v2 0/1] pwm: meson: fix scheduling while atomic issue

2019-04-01 Thread Martin Blumenstingl
Back in January a "BUG: scheduling while atomic" error showed up during boot on my Meson8b Odroid-C1 (which uses a PWM regulator as CPU supply). The call trace comes down to: __mutex_lock clk_prepare_lock clk_core_get_rate meson_pwm_apply .. dev_pm_opp_set_rate .. Jerom

Re: [PATCH 0/1] pwm: meson: fix scheduling while atomic issue

2019-04-01 Thread Neil Armstrong
On 30/03/2019 20:29, Martin Blumenstingl wrote: > Hello Uwe, > > On Mon, Mar 25, 2019 at 9:07 PM Uwe Kleine-König > wrote: > [...] - Does stopping the PWM (i.e. clearing MISC_{A,B}_EN in the MISC_AB register) freeze the output, or is the currently running period completed fi

Re: [PATCH 0/1] pwm: meson: fix scheduling while atomic issue

2019-03-31 Thread Uwe Kleine-König
On Sat, Mar 30, 2019 at 08:29:35PM +0100, Martin Blumenstingl wrote: > Hello Uwe, > > On Mon, Mar 25, 2019 at 9:07 PM Uwe Kleine-König > wrote: > [...] > > > > - Does stopping the PWM (i.e. clearing MISC_{A,B}_EN in the MISC_AB > > > >register) freeze the output, or is the currently running

Re: [PATCH 0/1] pwm: meson: fix scheduling while atomic issue

2019-03-30 Thread Martin Blumenstingl
Hello Uwe, On Mon, Mar 25, 2019 at 9:07 PM Uwe Kleine-König wrote: [...] > > > - Does stopping the PWM (i.e. clearing MISC_{A,B}_EN in the MISC_AB > > >register) freeze the output, or is the currently running period > > >completed first? (The latter is the right behaviour.) > > I don't k

Re: [PATCH 0/1] pwm: meson: fix scheduling while atomic issue

2019-03-26 Thread Martin Blumenstingl
Hi Jerome, On Tue, Mar 26, 2019 at 9:37 AM Jerome Brunet wrote: > > On Mon, 2019-03-25 at 19:04 +0100, Martin Blumenstingl wrote: > > > Thanks for fixing this Martin. > > you're welcome! > > > > > As for the future enhancement, I'd like to know what you have in mind. > > > As I have told you prev

Re: [PATCH 0/1] pwm: meson: fix scheduling while atomic issue

2019-03-26 Thread Martin Blumenstingl
Hello Uwe, On Mon, Mar 25, 2019 at 9:07 PM Uwe Kleine-König wrote: > > Hello Martin, > > On Mon, Mar 25, 2019 at 06:41:57PM +0100, Martin Blumenstingl wrote: > > On Mon, Mar 25, 2019 at 9:41 AM Uwe Kleine-König > > wrote: > > > On Sun, Mar 24, 2019 at 11:02:16PM +0100, Martin Blumenstingl wrote:

Re: [PATCH 0/1] pwm: meson: fix scheduling while atomic issue

2019-03-26 Thread Uwe Kleine-König
>>> Back in January a "BUG: scheduling while atomic" error showed up during > >>> boot on my Meson8b Odroid-C1 (which uses a PWM regulator as CPU supply). > >>> The call trace comes down to: > >>> __mutex_lock > >>> clk_prepare

Re: [PATCH 0/1] pwm: meson: fix scheduling while atomic issue

2019-03-26 Thread Neil Armstrong
On 25/03/2019 18:41, Martin Blumenstingl wrote: > Hello Uwe, > > On Mon, Mar 25, 2019 at 9:41 AM Uwe Kleine-König > wrote: >> >> Hello Martin, >> >> On Sun, Mar 24, 2019 at 11:02:16PM +0100, Martin Blumenstingl wrote: >>> Back in January a "BU

Re: [PATCH 0/1] pwm: meson: fix scheduling while atomic issue

2019-03-26 Thread Neil Armstrong
On 26/03/2019 09:37, Jerome Brunet wrote: > On Mon, 2019-03-25 at 19:04 +0100, Martin Blumenstingl wrote: >>> Thanks for fixing this Martin. >> you're welcome! >> >>> As for the future enhancement, I'd like to know what you have in mind. >>> As I have told you previously, I think the clock bindings

Re: [PATCH 0/1] pwm: meson: fix scheduling while atomic issue

2019-03-26 Thread Jerome Brunet
On Mon, 2019-03-25 at 19:04 +0100, Martin Blumenstingl wrote: > > Thanks for fixing this Martin. > you're welcome! > > > As for the future enhancement, I'd like to know what you have in mind. > > As I have told you previously, I think the clock bindings of this driver are > > not great. > > > > T

Re: [PATCH 0/1] pwm: meson: fix scheduling while atomic issue

2019-03-25 Thread Uwe Kleine-König
Hello Martin, On Mon, Mar 25, 2019 at 06:41:57PM +0100, Martin Blumenstingl wrote: > On Mon, Mar 25, 2019 at 9:41 AM Uwe Kleine-König > wrote: > > On Sun, Mar 24, 2019 at 11:02:16PM +0100, Martin Blumenstingl wrote: > > > Analyzing this issue helped me understand the pwm-meson driver better. > >

Re: [PATCH 0/1] pwm: meson: fix scheduling while atomic issue

2019-03-25 Thread Martin Blumenstingl
Hi Jerome, On Mon, Mar 25, 2019 at 10:35 AM Jerome Brunet wrote: > > On Sun, 2019-03-24 at 23:02 +0100, Martin Blumenstingl wrote: > > Back in January a "BUG: scheduling while atomic" error showed up during > > boot on my Meson8b Odroid-C1 (which uses a PWM regulator

Re: [PATCH 0/1] pwm: meson: fix scheduling while atomic issue

2019-03-25 Thread Martin Blumenstingl
Hello Uwe, On Mon, Mar 25, 2019 at 9:41 AM Uwe Kleine-König wrote: > > Hello Martin, > > On Sun, Mar 24, 2019 at 11:02:16PM +0100, Martin Blumenstingl wrote: > > Back in January a "BUG: scheduling while atomic" error showed up during > > boot on my Meson8b Odr

Re: [PATCH 0/1] pwm: meson: fix scheduling while atomic issue

2019-03-25 Thread Jerome Brunet
On Sun, 2019-03-24 at 23:02 +0100, Martin Blumenstingl wrote: > Back in January a "BUG: scheduling while atomic" error showed up during > boot on my Meson8b Odroid-C1 (which uses a PWM regulator as CPU supply). > The call trace comes down to: > __mutex_loc

Re: [PATCH 0/1] pwm: meson: fix scheduling while atomic issue

2019-03-25 Thread Uwe Kleine-König
Hello, On Mon, Mar 25, 2019 at 09:41:53AM +0100, Uwe Kleine-König wrote: > If you want to implement further cleanups, my questions and propositions > are: > > - Is there a publicly available manual for this hardware? If yes, you >can add a link to it in the header of the driver. > > - Why

Re: [PATCH 0/1] pwm: meson: fix scheduling while atomic issue

2019-03-25 Thread Uwe Kleine-König
Hello Martin, On Sun, Mar 24, 2019 at 11:02:16PM +0100, Martin Blumenstingl wrote: > Back in January a "BUG: scheduling while atomic" error showed up during > boot on my Meson8b Odroid-C1 (which uses a PWM regulator as CPU supply). > The call trace comes down t

[PATCH 0/1] pwm: meson: fix scheduling while atomic issue

2019-03-24 Thread Martin Blumenstingl
Back in January a "BUG: scheduling while atomic" error showed up during boot on my Meson8b Odroid-C1 (which uses a PWM regulator as CPU supply). The call trace comes down to: __mutex_lock clk_prepare_lock clk_core_get_rate meson_pwm_apply .. dev_pm_opp_set_rate .. Jerom

[PATCH 4.14 20/41] PCI: dwc: Fix scheduling while atomic issues

2018-10-18 Thread Greg Kroah-Hartman
4.14-stable review patch. If anyone has any objections, please let me know. -- From: Jisheng Zhang [ Upstream commit 9024143e700f89d74b8cdaf316a3499d74fc56fe ] When programming the inbound/outbound ATUs, we call usleep_range() after each checking PCIE_ATU_ENABLE bit. Unfortuna

[PATCH 4.18 30/53] PCI: dwc: Fix scheduling while atomic issues

2018-10-18 Thread Greg Kroah-Hartman
4.18-stable review patch. If anyone has any objections, please let me know. -- From: Jisheng Zhang [ Upstream commit 9024143e700f89d74b8cdaf316a3499d74fc56fe ] When programming the inbound/outbound ATUs, we call usleep_range() after each checking PCIE_ATU_ENABLE bit. Unfortuna

[PATCH AUTOSEL 4.18 37/58] PCI: dwc: Fix scheduling while atomic issues

2018-10-08 Thread Sasha Levin
From: Jisheng Zhang [ Upstream commit 9024143e700f89d74b8cdaf316a3499d74fc56fe ] When programming the inbound/outbound ATUs, we call usleep_range() after each checking PCIE_ATU_ENABLE bit. Unfortunately, the ATU programming can be executed in atomic context: inbound ATU programming could be cal

[PATCH AUTOSEL 4.14 22/32] PCI: dwc: Fix scheduling while atomic issues

2018-10-08 Thread Sasha Levin
From: Jisheng Zhang [ Upstream commit 9024143e700f89d74b8cdaf316a3499d74fc56fe ] When programming the inbound/outbound ATUs, we call usleep_range() after each checking PCIE_ATU_ENABLE bit. Unfortunately, the ATU programming can be executed in atomic context: inbound ATU programming could be cal

Re: [PATCH v3] PCI: dwc: fix scheduling while atomic issues

2018-09-20 Thread Bjorn Helgaas
On Thu, Sep 13, 2018 at 04:05:54PM +0100, Lorenzo Pieralisi wrote: > On Wed, Aug 29, 2018 at 11:04:08AM +0800, Jisheng Zhang wrote: > > When programming inbound/outbound atu, we call usleep_range() after > > each checking PCIE_ATU_ENABLE bit. Unfortunately, the atu programming > > can be called in

Re: [PATCH v3] PCI: dwc: fix scheduling while atomic issues

2018-09-13 Thread Lorenzo Pieralisi
On Wed, Aug 29, 2018 at 11:04:08AM +0800, Jisheng Zhang wrote: > When programming inbound/outbound atu, we call usleep_range() after > each checking PCIE_ATU_ENABLE bit. Unfortunately, the atu programming > can be called in atomic context: > > inbound atu programming could be called through > pci_

Re: [PATCH v3] PCI: dwc: fix scheduling while atomic issues

2018-09-13 Thread Lorenzo Pieralisi
On Thu, Sep 13, 2018 at 06:29:54PM +0800, Jisheng Zhang wrote: > Hi Lorenzo, > > On Thu, 13 Sep 2018 10:15:34 +0100 Lorenzo Pieralisi wrote: > > > On Mon, Sep 10, 2018 at 04:57:22PM +0800, Jisheng Zhang wrote: > > > Hi all, > > > > > > On Wed, 29 Aug 2018 11:04:08 +0800 Jisheng Zhang wrote: > >

Re: [PATCH v3] PCI: dwc: fix scheduling while atomic issues

2018-09-13 Thread Jisheng Zhang
Hi Lorenzo, On Thu, 13 Sep 2018 10:15:34 +0100 Lorenzo Pieralisi wrote: > On Mon, Sep 10, 2018 at 04:57:22PM +0800, Jisheng Zhang wrote: > > Hi all, > > > > On Wed, 29 Aug 2018 11:04:08 +0800 Jisheng Zhang wrote: > > > > > When programming inbound/outbound atu, we call usleep_range() after >

Re: [PATCH v3] PCI: dwc: fix scheduling while atomic issues

2018-09-13 Thread Lorenzo Pieralisi
On Mon, Sep 10, 2018 at 04:57:22PM +0800, Jisheng Zhang wrote: > Hi all, > > On Wed, 29 Aug 2018 11:04:08 +0800 Jisheng Zhang wrote: > > > When programming inbound/outbound atu, we call usleep_range() after > > each checking PCIE_ATU_ENABLE bit. Unfortunately, the atu programming > > can be calle

Re: [PATCH v3] PCI: dwc: fix scheduling while atomic issues

2018-09-10 Thread Jisheng Zhang
Hi all, On Wed, 29 Aug 2018 11:04:08 +0800 Jisheng Zhang wrote: > When programming inbound/outbound atu, we call usleep_range() after > each checking PCIE_ATU_ENABLE bit. Unfortunately, the atu programming > can be called in atomic context: > > inbound atu programming could be called through > p

[PATCH v3] PCI: dwc: fix scheduling while atomic issues

2018-08-28 Thread Jisheng Zhang
When programming inbound/outbound atu, we call usleep_range() after each checking PCIE_ATU_ENABLE bit. Unfortunately, the atu programming can be called in atomic context: inbound atu programming could be called through pci_epc_write_header() =>dw_pcie_ep_write_header() =>dw_pcie_prog_inbound

Re: [PATCH v2] PCI: dwc: fix scheduling while atomic issues

2018-08-28 Thread Lorenzo Pieralisi
On Tue, Aug 21, 2018 at 02:15:12PM +0800, Jisheng Zhang wrote: > When programming inbound/outbound atu, we call usleep_range() after > each checking PCIE_ATU_ENABLE bit. Unfortunately, the atu programming > can be called in atomic context: > > inbound atu programming could be called through > pci_

Re: [PATCH v2] PCI: dwc: fix scheduling while atomic issues

2018-08-22 Thread Gustavo Pimentel
Hi Jisheng On 21/08/2018 07:15, Jisheng Zhang wrote: > When programming inbound/outbound atu, we call usleep_range() after > each checking PCIE_ATU_ENABLE bit. Unfortunately, the atu programming > can be called in atomic context: > > inbound atu programming could be called through > pci_epc_write

[PATCH v2] PCI: dwc: fix scheduling while atomic issues

2018-08-20 Thread Jisheng Zhang
When programming inbound/outbound atu, we call usleep_range() after each checking PCIE_ATU_ENABLE bit. Unfortunately, the atu programming can be called in atomic context: inbound atu programming could be called through pci_epc_write_header() =>dw_pcie_ep_write_header() =>dw_pcie_prog_inbound

Re: [PATCH] PCI: dwc: fix scheduling while atomic issues

2018-08-20 Thread kbuild test robot
/Jisheng-Zhang/PCI-dwc-fix-scheduling-while-atomic-issues/20180821-110033 base: https://git.kernel.org/pub/scm/linux/kernel/git/helgaas/pci.git next config: arm-omap2plus_defconfig (attached as .config) compiler: arm-linux-gnueabi-gcc (Debian 7.2.0-11) 7.2.0 reproduce: wget https

[PATCH] PCI: dwc: fix scheduling while atomic issues

2018-08-20 Thread Jisheng Zhang
When programming inbound/outbound atu, we call usleep_range() after each checking PCIE_ATU_ENABLE bit. Unfortunately, the atu programming can be called in atomic context: inbound atu programming could be called through pci_epc_write_header() =>dw_pcie_ep_write_header() =>dw_pcie_prog_inbound

[PATCH RT 6/7] Revert "memcontrol: Prevent scheduling while atomic in cgroup code"

2018-04-04 Thread Daniel Wagner
From: "Steven Rostedt (VMware)" The commit "memcontrol: Prevent scheduling while atomic in cgroup code" fixed this issue: refill_stock() get_cpu_var() drain_stock() res_counter_uncharge() res_co

Re: [PATCH net] tg3: prevent scheduling while atomic splat

2018-03-14 Thread David Miller
From: Michael Chan Date: Wed, 14 Mar 2018 10:22:51 -0700 > On Wed, Mar 14, 2018 at 9:36 AM, Jonathan Toppins wrote: >> The problem was introduced in commit >> 506b0a395f26 ("[netdrv] tg3: APE heartbeat changes"). The bug occurs >> because tp->lock spinlock is held which is obtained in tg3_start

Re: [PATCH net] tg3: prevent scheduling while atomic splat

2018-03-14 Thread Jonathan Toppins
On 03/14/2018 01:22 PM, Michael Chan wrote: > On Wed, Mar 14, 2018 at 9:36 AM, Jonathan Toppins wrote: >> The problem was introduced in commit >> 506b0a395f26 ("[netdrv] tg3: APE heartbeat changes"). The bug occurs >> because tp->lock spinlock is held which is obtained in tg3_start >> by way of tg

Re: [PATCH net] tg3: prevent scheduling while atomic splat

2018-03-14 Thread Michael Chan
On Wed, Mar 14, 2018 at 9:36 AM, Jonathan Toppins wrote: > The problem was introduced in commit > 506b0a395f26 ("[netdrv] tg3: APE heartbeat changes"). The bug occurs > because tp->lock spinlock is held which is obtained in tg3_start > by way of tg3_full_lock(), line 11571. The documentation for u

[PATCH net] tg3: prevent scheduling while atomic splat

2018-03-14 Thread Jonathan Toppins
The problem was introduced in commit 506b0a395f26 ("[netdrv] tg3: APE heartbeat changes"). The bug occurs because tp->lock spinlock is held which is obtained in tg3_start by way of tg3_full_lock(), line 11571. The documentation for usleep_range() specifically states it cannot be used inside a spinl

[PATCH RT 01/15] Revert "memcontrol: Prevent scheduling while atomic in cgroup code"

2017-12-01 Thread Steven Rostedt
4.9.65-rt57-rc2 stable review patch. If anyone has any objections, please let me know. -- From: "Steven Rostedt (VMware)" The commit "memcontrol: Prevent scheduling while atomic in cgroup code" fixed this issue: refill_stock()

[PATCH RT 01/15] Revert "memcontrol: Prevent scheduling while atomic in cgroup code"

2017-12-01 Thread Steven Rostedt
4.9.65-rt57-rc1 stable review patch. If anyone has any objections, please let me know. -- From: "Steven Rostedt (VMware)" The commit "memcontrol: Prevent scheduling while atomic in cgroup code" fixed this issue: refill_stock()

Re: [PATCH RT] Revert "memcontrol: Prevent scheduling while atomic in cgroup code"

2017-11-23 Thread Sebastian Andrzej Siewior
On 2017-11-22 07:31:19 [-0500], Steven Rostedt wrote: > From: "Steven Rostedt (VMware)" > > The commit "memcontrol: Prevent scheduling while atomic in cgroup code" > fixed this issue: > >refill_stock() >

[PATCH RT] Revert "memcontrol: Prevent scheduling while atomic in cgroup code"

2017-11-22 Thread Steven Rostedt
From: "Steven Rostedt (VMware)" The commit "memcontrol: Prevent scheduling while atomic in cgroup code" fixed this issue: refill_stock() get_cpu_var() drain_stock() res_counter_uncharge() res_co

Re: scheduling while atomic from vmci_transport_recv_stream_cb in 3.16 kernels

2017-11-21 Thread Ben Hutchings
On Wed, 2017-09-13 at 17:19 +0200, Michal Hocko wrote: > On Wed 13-09-17 15:07:26, Jorgen S. Hansen wrote: [...] > > The patch below has been used to fix the above issue by other distros > > - among them Redhat for the 3.10 kernel, so it should work for 3.16 as > > well. > > Thanks for the confirm
