Hi, Peter

Could you make a patch for it, please? Jason J. Herne's test showed that we
addressed the bug, but the fix is not in the kernel yet, and some new,
closely related reports have come up again.

I don't want to argue any more; whatever form the patch takes, I will
accept it.  Please add the following tags to your patch:

Reported-by: Sasha Levin <sasha.le...@oracle.com>
Reported-by: Jason J. Herne <jjhe...@linux.vnet.ibm.com>
Tested-by: Jason J. Herne <jjhe...@linux.vnet.ibm.com>
Acked-by: Lai Jiangshan <la...@cn.fujitsu.com>


Thanks,
Lai

On 06/06/2014 09:36 PM, Peter Zijlstra wrote:
> On Thu, Jun 05, 2014 at 06:54:35PM +0800, Lai Jiangshan wrote:
>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>> index 268a45e..d05a5a1 100644
>> --- a/kernel/sched/core.c
>> +++ b/kernel/sched/core.c
>> @@ -1474,20 +1474,24 @@ static int ttwu_remote(struct task_struct *p, int wake_flags)
>>  }
>>  
>>  #ifdef CONFIG_SMP
>> -static void sched_ttwu_pending(void)
>> +static void sched_ttwu_pending_locked(struct rq *rq)
>>  {
>> -    struct rq *rq = this_rq();
>>      struct llist_node *llist = llist_del_all(&rq->wake_list);
>>      struct task_struct *p;
>>  
>> -    raw_spin_lock(&rq->lock);
>> -
>>      while (llist) {
>>              p = llist_entry(llist, struct task_struct, wake_entry);
>>              llist = llist_next(llist);
>>              ttwu_do_activate(rq, p, 0);
>>      }
>> +}
>>  
>> +static void sched_ttwu_pending(void)
>> +{
>> +    struct rq *rq = this_rq();
>> +
>> +    raw_spin_lock(&rq->lock);
>> +    sched_ttwu_pending_locked(rq);
>>      raw_spin_unlock(&rq->lock);
>>  }
> 
> OK, so this won't apply to a recent kernel.
> 
>> @@ -4530,6 +4534,11 @@ int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask)
>>              goto out;
>>  
>>      dest_cpu = cpumask_any_and(cpu_active_mask, new_mask);
>> +
>> +    /* Ensure it is on rq for migration if it is waking */
>> +    if (p->state == TASK_WAKING)
>> +            sched_ttwu_pending_locked(rq);
> 
> So I would really rather like to avoid this if possible, it's doing full
> remote queueing, exactly what we tried to avoid.
> 
>> +
>>      if (p->on_rq) {
>>              struct migration_arg arg = { p, dest_cpu };
>>              /* Need help from migration thread: drop lock and wait. */
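
To spell out the window that check closes, here is a sketch of the
interleaving as I read it (function and field names as in
kernel/sched/core.c; the exact sequence is my reconstruction, so treat
it as an assumption):

	/*
	 * CPU0 (waker, in try_to_wake_up)    CPU1 (set_cpus_allowed_ptr)
	 *
	 * p->state = TASK_WAKING;
	 * llist_add(&p->wake_entry,
	 *           &cpu_rq(target)->wake_list);
	 *                                     rq = task_rq_lock(p, &flags);
	 *                                     p->on_rq is still 0 here, so
	 *                                     without the drain above the
	 *                                     migration path is skipped and
	 *                                     p can keep running on a CPU
	 *                                     that is no longer allowed.
	 *                                     task_rq_unlock(rq, p, &flags);
	 * sched_ttwu_pending() on the
	 * target CPU activates p -- too late.
	 */
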
>> @@ -4576,6 +4585,10 @@ static int __migrate_task(struct task_struct *p, int src_cpu, int dest_cpu)
>>      if (!cpumask_test_cpu(dest_cpu, tsk_cpus_allowed(p)))
>>              goto fail;
>>  
>> +    /* Ensure it is on rq for migration if it is waking */
>> +    if (p->state == TASK_WAKING)
>> +            sched_ttwu_pending_locked(rq_src);
>> +
>>      /*
>>       * If we're not on a rq, the next wake-up will ensure we're
>>       * placed properly.
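
For reference, __migrate_task() already holds rq_src->lock at this
point (it takes both runqueue locks near the top), which is why the
_locked variant can be called directly.  A minimal sketch of that
surrounding context, reconstructed from memory of the 3.15-era
kernel/sched/core.c, so the details are assumptions rather than a
verbatim quote:

	static int __migrate_task(struct task_struct *p, int src_cpu, int dest_cpu)
	{
		struct rq *rq_src = cpu_rq(src_cpu);
		struct rq *rq_dest = cpu_rq(dest_cpu);
		int ret = 0;

		raw_spin_lock(&p->pi_lock);
		double_rq_lock(rq_src, rq_dest);	/* rq_src->lock held from here */

		/* ... task_cpu() and affinity checks, then the hunk above ... */

		double_rq_unlock(rq_src, rq_dest);
		raw_spin_unlock(&p->pi_lock);
		return ret;
	}
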
> 
> Oh man, another variant... why did you change it again? And without an
> explanation for why you changed it.
> 
> I don't see a reason to call sched_ttwu_pending() with rq->lock held,
> seeing as how we append to that list without it held.
> 
> I'm still thinking the previous version is good, can you explain why you
> changed it?
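
For readers following along, the lockless append Peter refers to looks
roughly like this (modeled on ttwu_queue_remote() in
kernel/sched/core.c; a sketch, so the details may differ between
kernel versions):

	static void ttwu_queue_remote(struct task_struct *p, int cpu)
	{
		struct rq *rq = cpu_rq(cpu);

		/*
		 * llist_add() is a lock-free cmpxchg loop, so entries are
		 * pushed onto rq->wake_list without taking rq->lock; only
		 * the consumer that drains the list takes the lock.
		 */
		if (llist_add(&p->wake_entry, &rq->wake_list))
			smp_send_reschedule(cpu);	/* IPI the remote CPU to drain */
	}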
