From: Thomas Gleixner <t...@linutronix.de>

CONFIG_PREEMPT_COUNT is now unconditionally enabled and will be removed.
Clean up the leftovers before doing so.
Signed-off-by: Thomas Gleixner <t...@linutronix.de>
Cc: Ingo Molnar <mi...@redhat.com>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: Juri Lelli <juri.le...@redhat.com>
Cc: Vincent Guittot <vincent.guit...@linaro.org>
Cc: Dietmar Eggemann <dietmar.eggem...@arm.com>
Cc: Steven Rostedt <rost...@goodmis.org>
Cc: Ben Segall <bseg...@google.com>
Cc: Mel Gorman <mgor...@suse.de>
Cc: Daniel Bristot de Oliveira <bris...@redhat.com>
Signed-off-by: Uladzislau Rezki (Sony) <ure...@gmail.com>
---
 kernel/sched/core.c | 6 +-----
 lib/Kconfig.debug   | 1 -
 2 files changed, 1 insertion(+), 6 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index d2003a7d5ab5..e172f2ddfa16 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -3702,8 +3702,7 @@ asmlinkage __visible void schedule_tail(struct task_struct *prev)
	 * finish_task_switch() for details.
	 *
	 * finish_task_switch() will drop rq->lock() and lower preempt_count
-	 * and the preempt_enable() will end up enabling preemption (on
-	 * PREEMPT_COUNT kernels).
+	 * and the preempt_enable() will end up enabling preemption.
	 */
	rq = finish_task_switch(prev);
@@ -7307,9 +7306,6 @@ void __cant_sleep(const char *file, int line, int preempt_offset)
	if (irqs_disabled())
		return;

-	if (!IS_ENABLED(CONFIG_PREEMPT_COUNT))
-		return;
-
	if (preempt_count() > preempt_offset)
		return;

diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 03a85065805e..d62806c81f6d 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1318,7 +1318,6 @@ config DEBUG_LOCKDEP

 config DEBUG_ATOMIC_SLEEP
	bool "Sleep inside atomic section checking"
-	select PREEMPT_COUNT
	depends on DEBUG_KERNEL
	help
	  If you say Y here, various routines which may sleep will become very
--
2.20.1