On 07/14/2013 11:56 PM, Rafael J. Wysocki wrote:
[snip]
>> +
>> +    /*
>> +     * Since there is no lock to prevent re-queueing of
>> +     * the cancelled work, some early-cancelled work might
>> +     * have been queued again by later-cancelled work.
>> +     *
>> +     * Flush the work again with dbs_data->queue_stop
>> +     * enabled; this time there will be no survivors.
>> +     */
>> +    if (round)
>> +            goto redo;
> 
> Well, what about doing:
> 
>       for (round = 2; round; round--)
>               for_each_cpu(i, policy->cpus) {
>                       cdbs = dbs_data->cdata->get_cpu_cdbs(i);
>                       cancel_delayed_work_sync(&cdbs->work);
>               }
> 
> instead?
> 

It could work, though I'd rather avoid the nested 'for' logic... a flatter version is sketched below.
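
Just for reference, a rough sketch of the flat version I had in mind (the
gov_cancel_all_work helper is hypothetical; the other names follow the
patch above):

	/* Hypothetical helper: one cancellation pass over all CPUs. */
	static void gov_cancel_all_work(struct dbs_data *dbs_data,
					struct cpufreq_policy *policy)
	{
		struct cpu_dbs_common_info *cdbs;
		int i;

		for_each_cpu(i, policy->cpus) {
			cdbs = dbs_data->cdata->get_cpu_cdbs(i);
			cancel_delayed_work_sync(&cdbs->work);
		}
	}

	static void gov_cancel_work(struct dbs_data *dbs_data,
				    struct cpufreq_policy *policy)
	{
		/* Stop the work functions from re-arming themselves. */
		dbs_data->queue_stop = 1;

		/*
		 * The first pass may leave survivors: work cancelled
		 * early can be re-queued by work cancelled later.  The
		 * second pass runs with queue_stop already set, so
		 * nothing survives it.
		 */
		gov_cancel_all_work(dbs_data, policy);
		gov_cancel_all_work(dbs_data, policy);

		dbs_data->queue_stop = 0;
	}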

Anyway, it seems we haven't solved the issue yet, so let's set this aside
and focus on the fix first ;-)

Regards,
Michael Wang

>> +    dbs_data->queue_stop = 0;
>>  }
>>  
>>  /* Will return if we need to evaluate cpu load again or not */
>> diff --git a/drivers/cpufreq/cpufreq_governor.h b/drivers/cpufreq/cpufreq_governor.h
>> index e16a961..9116135 100644
>> --- a/drivers/cpufreq/cpufreq_governor.h
>> +++ b/drivers/cpufreq/cpufreq_governor.h
>> @@ -213,6 +213,7 @@ struct dbs_data {
>>      unsigned int min_sampling_rate;
>>      int usage_count;
>>      void *tuners;
>> +    int queue_stop;
>>  
>>      /* dbs_mutex protects dbs_enable in governor start/stop */
>>      struct mutex mutex;
>>
> 
> 
