On Wed, May 13, 2020 at 04:30:23PM +0100, Mel Gorman wrote:
> Complete shot in the dark but restore adjust_numa_imbalance() and try
> this
> 
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 1a9983da4408..0b31f4468d5b 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -2393,7 +2393,7 @@ static void ttwu_queue(struct task_struct *p, int cpu, int wake_flags)
>       struct rq_flags rf;
>  
>  #if defined(CONFIG_SMP)
> -     if (sched_feat(TTWU_QUEUE) && !cpus_share_cache(smp_processor_id(), cpu)) {
> +     if (sched_feat(TTWU_QUEUE)) {

just saying that this has the risk of regressing other workloads, see:

  518cd6234178 ("sched: Only queue remote wakeups when crossing cache boundaries")

>               sched_clock_cpu(cpu); /* Sync clocks across CPUs */
>               ttwu_queue_remote(p, cpu, wake_flags);
>               return;
> 
> -- 
> Mel Gorman
> SUSE Labs
