Hi,

Today I booted the RT kernel (v5.12-rc3-rt3) with KASAN enabled for the
first time and hit the splat below:

[    2.670635] BUG: sleeping function called from invalid context at kernel/locking/rtmutex.c:951
[    2.670638] in_atomic(): 0, irqs_disabled(): 1, non_block: 0, pid: 19, name: pgdatinit0
[    2.670649] 8 locks held by pgdatinit0/19:
[    2.670651]  #0: ffffffffb6e0a1e0 (tasklist_lock){+.+.}-{0:0}, at: release_task+0x110/0x480
[    2.670666]  #1: ffffffffb7364d80 (rcu_read_lock){....}-{1:2}, at: rt_write_lock+0x292/0x3a0
[    2.670683]  #2: ffff888100364860 (&sighand->siglock){+.+.}-{0:0}, at: __exit_signal+0x11d/0x1180
[    2.670690]  #3: ffffffffb7364d80 (rcu_read_lock){....}-{1:2}, at: rt_spin_lock+0x5/0xb0
[    2.670696]  #4: ffff888100395e10 (&(&sig->stats_lock)->lock){+.+.}-{0:0}, at: __exit_signal+0x276/0x1180
[    2.670701]  #5: ffffffffb7364d80 (rcu_read_lock){....}-{1:2}, at: rt_spin_lock+0x5/0xb0
[    2.670707]  #6: ffff888100395d38 (&____s->seqcount#3){+.+.}-{0:0}, at: release_task+0x1d6/0x480
[    2.670713]  #7: ffffffffb77516c0 (depot_lock){+.+.}-{2:2}, at: stack_depot_save+0x1b9/0x440
[    2.670736] irq event stamp: 31790
[    2.670738] hardirqs last  enabled at (31789): [<ffffffffb5a58cbd>] _raw_spin_unlock_irqrestore+0x2d/0xe0
[    2.670741] hardirqs last disabled at (31790): [<ffffffffb3dc5d86>] __call_rcu+0x436/0x880
[    2.670746] softirqs last  enabled at (0): [<ffffffffb3be1737>] copy_process+0x1357/0x4f90
[    2.670751] softirqs last disabled at (0): [<0000000000000000>] 0x0
[    2.670763] CPU: 0 PID: 19 Comm: pgdatinit0 Not tainted 5.12.0-rc3-rt3 #1
[    2.670766] Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011
[    2.670768] Call Trace:
[    2.670790]  ? unwind_next_frame+0x11e/0x1ce0
[    2.670800]  dump_stack+0x93/0xc2
[    2.670826]  ___might_sleep.cold+0x1b2/0x1f1
[    2.670838]  rt_spin_lock+0x3b/0xb0
[    2.670838]  ? stack_depot_save+0x1b9/0x440
[    2.670838]  stack_depot_save+0x1b9/0x440
[    2.670838]  kasan_save_stack+0x32/0x40
[    2.670838]  ? kasan_save_stack+0x1b/0x40
[    2.670838]  ? kasan_record_aux_stack+0xa5/0xb0
[    2.670838]  ? __call_rcu+0x117/0x880
[    2.670838]  ? __exit_signal+0xafb/0x1180
[    2.670838]  ? release_task+0x1d6/0x480
[    2.670838]  ? exit_notify+0x303/0x750
[    2.670838]  ? do_exit+0x678/0xcf0
[    2.670838]  ? kthread+0x364/0x4f0
[    2.670838]  ? ret_from_fork+0x22/0x30
[    2.670838]  ? mark_held_locks+0xa5/0xe0
[    2.670838]  ? lockdep_hardirqs_on_prepare.part.0+0x18a/0x370
[    2.670838]  ? _raw_spin_unlock_irqrestore+0x2d/0xe0
[    2.670838]  ? lockdep_hardirqs_on+0x77/0x100
[    2.670838]  ? _raw_spin_unlock_irqrestore+0x38/0xe0
[    2.670838]  ? debug_object_active_state+0x273/0x370
[    2.670838]  ? debug_object_activate+0x380/0x460
[    2.670838]  ? alloc_object+0x960/0x960
[    2.670838]  ? lockdep_hardirqs_on+0x77/0x100
[    2.670838]  ? _raw_spin_unlock_irqrestore+0x38/0xe0
[    2.670838]  ? __call_rcu+0x436/0x880
[    2.670838]  ? lockdep_hardirqs_off+0x90/0xd0
[    2.670838]  kasan_record_aux_stack+0xa5/0xb0
[    2.670838]  __call_rcu+0x117/0x880
[    2.670838]  ? put_pid+0x10/0x10
[    2.670838]  ? rt_spin_unlock+0x31/0x80
[    2.670838]  ? rcu_implicit_dynticks_qs+0xab0/0xab0
[    2.670838]  ? free_pid+0x19c/0x260
[    2.670838]  __exit_signal+0xafb/0x1180
[    2.670838]  ? trace_sched_process_exit+0x1b0/0x1b0
[    2.670838]  ? rcu_is_watching+0xf1/0x160
[    2.670838]  ? rt_write_lock+0x306/0x3a0
[    2.670838]  ? release_task+0x23/0x480
[    2.670838]  release_task+0x1d6/0x480
[    2.670838]  exit_notify+0x303/0x750
[    2.670838]  ? cgroup_exit+0x306/0x830
[    2.670838]  ? forget_original_parent+0xb80/0xb80
[    2.670838]  ? perf_event_exit_task+0x1b3/0x2d0
[    2.670838]  ? rcu_read_lock_sched_held+0x3f/0x70
[    2.670838]  do_exit+0x678/0xcf0
[    2.670838]  ? exit_mm+0x5b0/0x5b0
[    2.670838]  ? __kthread_parkme+0xc9/0x280
[    2.670838]  ? setup_nr_node_ids+0x2a/0x2a
[    2.670838]  kthread+0x364/0x4f0
[    2.670838]  ? __kthread_parkme+0x280/0x280
[    2.670838]  ret_from_fork+0x22/0x30
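
If it helps, from the trace this looks like stack_depot_save() taking a
sleeping lock (rt_spin_lock on depot_lock) via kasan_record_aux_stack()
inside __call_rcu(), where hardirqs had just been disabled. For
illustration only, a minimal, entirely hypothetical module that trips the
same class of warning on PREEMPT_RT (not this exact path) would be
something like:

/*
 * Hypothetical sketch: on PREEMPT_RT a spinlock_t is a sleeping rtmutex,
 * so taking one with interrupts disabled triggers
 * "BUG: sleeping function called from invalid context".
 */
#include <linux/module.h>
#include <linux/spinlock.h>
#include <linux/irqflags.h>

static DEFINE_SPINLOCK(demo_lock);      /* sleeping lock on PREEMPT_RT */

static int __init demo_init(void)
{
        unsigned long flags;

        local_irq_save(flags);          /* irqs_disabled() == 1 ...         */
        spin_lock(&demo_lock);          /* ... and this may sleep -> splat  */
        spin_unlock(&demo_lock);
        local_irq_restore(flags);

        return 0;
}

static void __exit demo_exit(void)
{
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");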

Please let me know if you want any more info.

Thanks,
Andrew