On Tue, 22 Oct 2024 at 21:12, Peter Zijlstra <[email protected]> wrote:
>
> On Tue, Oct 22, 2024 at 03:40:52PM +0200, Marco Elver wrote:
>
> > Which gives us:
> >
> > | ==================================================================
> > | BUG: KCSAN: assert: race in dequeue_entities / ttwu_do_activate
> > |
> > | write (marked) to 0xffff9e100329c628 of 4 bytes by interrupt on cpu 0:
> > | activate_task kernel/sched/core.c:2064 [inline]
>
> This is this one:
>
> void activate_task(struct rq *rq, struct task_struct *p, int flags)
> {
>         if (task_on_rq_migrating(p))
>                 flags |= ENQUEUE_MIGRATED;
>         if (flags & ENQUEUE_MIGRATED)
>                 sched_mm_cid_migrate_to(rq, p);
>
>         enqueue_task(rq, p, flags);
>
>         WRITE_ONCE(p->on_rq, TASK_ON_RQ_QUEUED);
>         ASSERT_EXCLUSIVE_WRITER(p->on_rq);
> }
>
> > | ttwu_do_activate+0x153/0x3e0 kernel/sched/core.c:3671
> > | ttwu_queue kernel/sched/core.c:3944 [inline]
> > | try_to_wake_up+0x60f/0xaf0 kernel/sched/core.c:4270
>
> > | assert no writes to 0xffff9e100329c628 of 4 bytes by task 10571 on cpu 3:
> > | __block_task kernel/sched/sched.h:2770 [inline]
>
> And that's:
>
> static inline void __block_task(struct rq *rq, struct task_struct *p)
> {
>         WRITE_ONCE(p->on_rq, 0);
>         ASSERT_EXCLUSIVE_WRITER(p->on_rq);
>         if (p->sched_contributes_to_load)
>                 rq->nr_uninterruptible++;
>
> > | dequeue_entities+0xd83/0xe70 kernel/sched/fair.c:7177
> > | pick_next_entity kernel/sched/fair.c:5627 [inline]
> > | pick_task_fair kernel/sched/fair.c:8856 [inline]
> > | pick_next_task_fair+0xaf/0x710 kernel/sched/fair.c:8876
> > | __pick_next_task kernel/sched/core.c:5955 [inline]
> > | pick_next_task kernel/sched/core.c:6477 [inline]
> > | __schedule+0x47a/0x1130 kernel/sched/core.c:6629
> > | __schedule_loop kernel/sched/core.c:6752 [inline]
> > | schedule+0x7b/0x130 kernel/sched/core.c:6767
>
> So KCSAN is trying to tell me these two paths run concurrently on the
> same 'p' ?!? That would be a horrible bug -- both these call chains
> should be holding rq->__lock (for task_rq(p)).
Yes, correct. And just to confirm this is no false positive: the way
KCSAN works _requires_ the race to actually happen before anything is
reported. This can also be seen in Alexander's report, which has only
one stack trace: KCSAN saw the value transition from 0 to 1
(TASK_ON_RQ_QUEUED), but could not tell who did the write, because
kernel/sched was uninstrumented.
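
To make that concrete, here is a rough userspace model of the
value-change detection KCSAN falls back on in that case. This is NOT
the real implementation (that lives in kernel/kcsan/); all names below
are made up for illustration:

/*
 * Toy model of KCSAN's value-change detection, in userspace.
 * Build with: gcc -pthread toy.c
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static volatile int on_rq;      /* stand-in for p->on_rq */

/* An uninstrumented writer, like kernel/sched in Alexander's build. */
static void *plain_writer(void *arg)
{
        (void)arg;
        usleep(1000);           /* land inside the watch window below */
        on_rq = 1;              /* plain write; trips no watchpoint */
        return NULL;
}

/* What KCSAN conceptually does for a sampled ("watched") access: */
static void watched_access(volatile int *ptr)
{
        int old = *ptr;         /* snapshot the current value */
        usleep(10000);          /* stall the access, like KCSAN's delay */
        if (*ptr != old)        /* value changed, no watchpoint fired */
                printf("race of unknown origin: 0x%x -> 0x%x\n", old, *ptr);
}

int main(void)
{
        pthread_t t;

        pthread_create(&t, NULL, plain_writer, NULL);
        watched_access(&on_rq); /* reports the 0 -> 1 transition */
        pthread_join(t, NULL);
        return 0;
}

When both racing accesses are instrumented, the concurrent access trips
the watchpoint during the stall instead, and the report carries both
stack traces, as in the report quoted above.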

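For completeness, the invariant that those ASSERT_EXCLUSIVE_WRITER()
annotations encode is exactly the one Peter states: every writer of
p->on_rq must hold task_rq(p)->__lock. In generic form the pattern
looks like this -- just a sketch, with made-up 'obj'/'set_state' names
and the usual <linux/spinlock.h> + <linux/kcsan-checks.h> includes
assumed:

struct obj {
        spinlock_t lock;
        int state;
};

/* Rule: all writers hold o->lock; readers may be lockless. */
static void set_state(struct obj *o, int new_state)
{
        lockdep_assert_held(&o->lock);

        WRITE_ONCE(o->state, new_state);
        /*
         * Have KCSAN watch o->state for a short window: any concurrent
         * write -- even a marked WRITE_ONCE() -- is reported, while
         * concurrent marked reads remain allowed. A report here
         * therefore means some writer ran without holding the lock.
         */
        ASSERT_EXCLUSIVE_WRITER(o->state);
}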