On Wed, Oct 08, 2014 at 03:43:11PM +0900, Yasuaki Ishimatsu wrote:
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index bfa3c86..fb7dc3f 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -1496,18 +1496,26 @@ static void update_task_scan_period(struct task_struct *p,
>                       slot = 1;
>               diff = slot * period_slot;
>       } else {
> -             diff = -(NUMA_PERIOD_THRESHOLD - ratio) * period_slot;
> +             if (unlikely((private + shared) == 0))
> +                     /*
> +                      * This is a rare case. The trigger is node offline.
> +                      */
> +                     diff = 0;
> +             else {
> +                     diff = -(NUMA_PERIOD_THRESHOLD - ratio) * period_slot;
> 
> -             /*
> -              * Scale scan rate increases based on sharing. There is an
> -              * inverse relationship between the degree of sharing and
> -              * the adjustment made to the scanning period. Broadly
> -              * speaking the intent is that there is little point
> -              * scanning faster if shared accesses dominate as it may
> -              * simply bounce migrations uselessly
> -              */
> -             ratio = DIV_ROUND_UP(private * NUMA_PERIOD_SLOTS, (private + shared));
> -             diff = (diff * ratio) / NUMA_PERIOD_SLOTS;
> +                     /*
> +                      * Scale scan rate increases based on sharing. There is
> +                      * an inverse relationship between the degree of sharing
> +                      * and the adjustment made to the scanning period.
> +                      * Broadly speaking the intent is that there is little
> +                      * point scanning faster if shared accesses dominate as
> +                      * it may simply bounce migrations uselessly
> +                      */
> +                     ratio = DIV_ROUND_UP(private * NUMA_PERIOD_SLOTS,
> +                                                     (private + shared));
> +                     diff = (diff * ratio) / NUMA_PERIOD_SLOTS;
> +             }
>       }
> 
>       p->numa_scan_period = clamp(p->numa_scan_period + diff,

Yeah, so I don't like the patch nor do I really like the function as it
stands -- which I suppose is part of why I don't like the patch.

The problem I have with the function is that it's very inconsistent in
behaviour. In the early return path it sets numa_scan_period and
numa_next_scan, while in the later return path it sets numa_scan_period
and numa_faults_locality.

I feel both return paths should affect the same set of variables;
especially the non-clearing of numa_faults_locality in the early path
seems weird.
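
(A minimal sketch of what making the two paths consistent could look
like, assuming the early-return block in the current fair.c; whether
clearing the locality counters on the slow-down path is actually the
right semantics is a separate question:)

	if (local + shared == 0) {
		p->numa_scan_period = min(p->numa_scan_period_max,
				p->numa_scan_period << 1);

		p->mm->numa_next_scan = jiffies +
				msecs_to_jiffies(p->numa_scan_period);

		/* mirror the late return path and reset the counters */
		memset(p->numa_faults_locality, 0,
				sizeof(p->numa_faults_locality));
		return;
	}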

The thing I suppose I don't like about the patch is its added
indentation and the fact that the simple +1 thing wasn't considered.
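
(For reference, the "simple +1" presumably means biasing the divisor so
it can never be zero, something along these lines; a sketch only, and
the rounding differs slightly from the current code:)

	/* +1 keeps the divisor nonzero even when there were no faults */
	ratio = DIV_ROUND_UP(private * NUMA_PERIOD_SLOTS,
				private + shared + 1);
	diff = (diff * ratio) / NUMA_PERIOD_SLOTS;

With private == shared == 0 that makes ratio, and hence diff, come out
as 0, which is what the patch achieves with the extra branch, but
without the added indentation.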