Hello,

On Tue, 20 Jan 2015, Chris Caputo wrote:

> On Tue, 20 Jan 2015, Julian Anastasov wrote:
> > > +                      (u64)dr * (u64)lwgt < (u64)lr * (u64)dwgt ||
> [...]
> > > +                            (dr == lr && dwgt > lwgt)) {
> > 
> >     Above check is redundant.
> 
> I accepted your feedback and applied it to the below, except for this
> item.  I believe that if dr and lr are both zero (no traffic), we still
> want to choose the destination with the higher weight, so a separate
> comparison is needed.

        ok
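
        For illustration only, a minimal sketch of that selection test,
using the variable names from the quoted patch (is_better() is a
hypothetical helper, not something in the patch): when dr and lr are
both 0, the cross-multiplied comparison is 0 < 0, false either way, so
the explicit weight test is the only thing that lets the heavier
destination win.

        /*
         * Hypothetical helper, for illustration only: should the
         * candidate destination (rate dr, weight dwgt) replace the
         * current pick (rate lr, weight lwgt)?  Cross-multiplying in
         * u64 avoids both division and 32-bit overflow:
         * dr/dwgt < lr/lwgt  <=>  dr*lwgt < lr*dwgt  (weights > 0 here).
         */
        static bool is_better(u32 dr, u32 dwgt, u32 lr, u32 lwgt)
        {
                /* With dr == lr == 0 the product comparison is false in
                 * both directions, so fall back to the higher weight.
                 */
                return (u64)dr * (u64)lwgt < (u64)lr * (u64)dwgt ||
                       (dr == lr && dwgt > lwgt);
        }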

> +     spin_lock_bh(&svc->sched_lock);
> +     p = (struct list_head *)svc->sched_data;
> +     last = dest = list_entry(p, struct ip_vs_dest, n_list);
> +
> +     do {
> +             list_for_each_entry_continue_rcu(dest,
> +                                              &svc->destinations,
> +                                              n_list) {
> +                     dwgt = (u32)atomic_read(&dest->weight);
> +                     if (!(dest->flags & IP_VS_DEST_F_OVERLOAD) &&
> +                         dwgt > 0) {
> +                               spin_lock(&dest->stats.lock);

        Maybe there is a way to avoid this spin_lock
by using u64_stats_fetch_begin and the corresponding
u64_stats_update_begin in estimation_timer().  We could
even remove this ->lock entirely; it would be replaced
by ->syncp.  The benefit is on 64-bit platforms, where
we avoid taking a lock here in the scheduler.  Otherwise,
I don't see other implementation problems in this patch,
and I'll check it more carefully this weekend.
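
        A rough sketch of what that could look like, assuming a
struct u64_stats_sync syncp is added next to the counters (the ->syncp
name is from above; the counter and variable names below are only
placeholders):

        /* Writer side, e.g. in estimation_timer().  Updates are already
         * serialized there, so only the seqcount needs to be bumped
         * around the stores.
         */
        u64_stats_update_begin(&stats->syncp);
        stats->cps = new_cps;           /* placeholder counter names */
        stats->inbps = new_inbps;
        u64_stats_update_end(&stats->syncp);

        /* Reader side, in the scheduler: retry if a writer was active. */
        unsigned int start;
        u64 cps;

        do {
                start = u64_stats_fetch_begin(&stats->syncp);
                cps = stats->cps;
        } while (u64_stats_fetch_retry(&stats->syncp, start));

On 64-bit kernels u64_stats_fetch_begin()/u64_stats_fetch_retry()
compile away, so the scheduler reads the estimates with no lock at all;
on 32-bit it degrades to a seqcount retry loop instead of a spinlock.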

Regards

--
Julian Anastasov <j...@ssi.bg>