On Thu, Jul 11, 2024 at 03:00:04PM +0200, Valentin Schneider wrote:
> +static inline void task_throttle_cancel_work(struct task_struct *p, int dst_cpu)
> +{
> +       /*
> +     * The calling context may be holding p->pi_lock, which is also acquired
> +     * by task_work_cancel_match().
> +     *
> +     * Lock recursion is prevented by punting the work cancellation to the
> +     * next IRQ enable. This is sent to the destination CPU rather than
> +     * >this< CPU to prevent the task from resuming execution and getting
> +     * throttled in its return to userspace.
> +     */


You're having whitespace trouble there.. :-)

> +       irq_work_queue_on(&p->unthrottle_irq_work, dst_cpu);
> +}
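
For reference, a rough sketch of what the handler side of this could look
like; the handler name, the throttle_work_match() helper and the
sched_throttle_work field below are illustrative, not taken from the
actual patch:

static bool throttle_work_match(struct callback_head *cb, void *data)
{
	/* Illustrative helper: only cancel the throttle work, nothing else. */
	return cb == data;
}

static void unthrottle_irq_work_fn(struct irq_work *work)
{
	struct task_struct *p = container_of(work, struct task_struct,
					     unthrottle_irq_work);

	/*
	 * p->pi_lock is only ever held with IRQs disabled, so by the time
	 * this hardirq-context handler runs on dst_cpu, that CPU cannot be
	 * holding p->pi_lock itself. The acquisition inside
	 * task_work_cancel_match() is then at worst contention with another
	 * CPU, never recursion on a lock the current context already holds.
	 */
	task_work_cancel_match(p, throttle_work_match, &p->sched_throttle_work);
}

That is, punting to the irq_work moves the cancellation into a context
that cannot be holding pi_lock, which is what makes it safe.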
