The following commit has been merged into the sched/urgent branch of tip:

Commit-ID:     a49b4f4012ef233143c5f7ce44f97851e54d5ef9
Gitweb:        https://git.kernel.org/tip/a49b4f4012ef233143c5f7ce44f97851e54d5ef9
Author:        Valentin Schneider <valentin.schnei...@arm.com>
AuthorDate:    Mon, 23 Sep 2019 15:36:12 +01:00
Committer:     Ingo Molnar <mi...@kernel.org>
CommitterDate: Wed, 25 Sep 2019 17:42:32 +02:00
sched/core: Fix preempt_schedule() interrupt return comment

preempt_schedule_irq() is the one that should be called on return from
interrupt, clean up the comment to avoid any ambiguity.

Signed-off-by: Valentin Schneider <valentin.schnei...@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <pet...@infradead.org>
Acked-by: Thomas Gleixner <t...@linutronix.de>
Cc: Linus Torvalds <torva...@linux-foundation.org>
Cc: Peter Zijlstra <pet...@infradead.org>
Cc: linux-m...@lists.linux-m68k.org
Cc: linux-ri...@lists.infradead.org
Cc: uclinux-h8-de...@lists.sourceforge.jp
Link: https://lkml.kernel.org/r/20190923143620.29334-2-valentin.schnei...@arm.com
Signed-off-by: Ingo Molnar <mi...@kernel.org>
---
 kernel/sched/core.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 83ea23e..00ef44c 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4218,9 +4218,8 @@ static void __sched notrace preempt_schedule_common(void)
 
 #ifdef CONFIG_PREEMPTION
 /*
- * this is the entry point to schedule() from in-kernel preemption
- * off of preempt_enable. Kernel preemptions off return from interrupt
- * occur there and call schedule directly.
+ * This is the entry point to schedule() from in-kernel preemption
+ * off of preempt_enable.
  */
 asmlinkage __visible void __sched notrace preempt_schedule(void)
 {
@@ -4291,7 +4290,7 @@ EXPORT_SYMBOL_GPL(preempt_schedule_notrace);
 #endif /* CONFIG_PREEMPTION */
 
 /*
- * this is the entry point to schedule() from kernel preemption
+ * This is the entry point to schedule() from kernel preemption
  * off of irq context.
  * Note, that this is called and return with irqs disabled. This will
  * protect us against recursive calling from irq.
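
As background for the comments being fixed: preempt_schedule() is reached when
preempt_enable() drops the preempt count to zero with a reschedule pending,
while preempt_schedule_irq() is what an architecture's interrupt-return path
calls, with irqs still disabled on entry and exit. The toy userspace C model
below is only a sketch of that split; it is not kernel code, and the flags,
the irq_exit_to_kernel() helper and the simplified checks are illustrative
assumptions, not the real implementation.

/*
 * Toy userspace model of the two schedule() entry points the comments
 * describe: preempt_schedule() off of preempt_enable(), and
 * preempt_schedule_irq() off of the interrupt-return path.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

static int  preempt_count = 0;     /* models the preempt count          */
static bool irqs_disabled = false; /* models the CPU interrupt flag     */
static bool need_resched  = false; /* models a pending reschedule       */

static void __schedule(void)
{
	printf("  __schedule(): switching tasks\n");
	need_resched = false;
}

/* Entry point from in-kernel preemption off of preempt_enable(). */
static void preempt_schedule(void)
{
	/* Only act when preemption is enabled and irqs are on. */
	if (preempt_count != 0 || irqs_disabled)
		return;
	__schedule();
}

static void preempt_enable(void)
{
	if (--preempt_count == 0 && need_resched)
		preempt_schedule();
}

/*
 * Entry point from kernel preemption off of irq context:
 * called, and returns, with irqs disabled.
 */
static void preempt_schedule_irq(void)
{
	assert(irqs_disabled);
	do {
		irqs_disabled = false;   /* local_irq_enable()  */
		__schedule();
		irqs_disabled = true;    /* local_irq_disable() */
	} while (need_resched);
}

/* What an arch's interrupt-return path does, conceptually. */
static void irq_exit_to_kernel(void)
{
	irqs_disabled = true;            /* irqs are off on the return path */
	if (preempt_count == 0 && need_resched)
		preempt_schedule_irq();
	irqs_disabled = false;           /* return from interrupt re-enables them */
}

int main(void)
{
	/* Path 1: a preempt_disable()/preempt_enable() pair. */
	preempt_count++;                 /* preempt_disable() */
	need_resched = true;             /* a higher-priority task was woken */
	printf("preempt_enable() path:\n");
	preempt_enable();                /* -> preempt_schedule() */

	/* Path 2: return from interrupt into kernel code. */
	need_resched = true;
	printf("interrupt-return path:\n");
	irq_exit_to_kernel();            /* -> preempt_schedule_irq() */
	return 0;
}

In the real kernel the preempt count, the reschedule flag and the irq state
live in per-thread and per-CPU state and the code re-checks them in a loop,
but the division of labour shown here is the one the updated comments
describe: preempt_enable() leads to preempt_schedule(), while return from
interrupt leads to preempt_schedule_irq().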