Ingo Molnar <[EMAIL PROTECTED]> wrote:
>
> 
> * Andrew Morton <[EMAIL PROTECTED]> wrote:
> 
> > Christoph Lameter <[EMAIL PROTECTED]> wrote:
> > >
> > > When a potential periodic timer is deleted through del_timer_sync(), all
> > >  cpus are scanned to determine if the timer is running on that cpu. In a
> > >  NUMA configuration doing so will cause NUMA interlink traffic, which
> > >  limits the scalability of timers.
> > > 
> > >  The following patch makes the timer remember the CPU on which it was
> > >  last started. It is then possible to wait for the completion of the
> > >  timer on that specific cpu only.
> 
> i'm not sure about this. The patch adds one more pointer to a very
> frequently used and frequently embedded data structure (struct
> timer_list), for the benefit of a rarely used API variant
(del_timer_sync()).

True.

(We can delete timer.magic and check_timer() now.  They have served their purpose)

> Furthermore, timer->base itself is affine too, a timer always runs on
> the CPU belonging to timer->base. So a more scalable variant of
> del_timer_sync() would perhaps be possible by carefully deleting the
> timer and only going into the full loop if we actually deleted a
> pending timer. 
> (and in which case semantics require us to synchronize on timer
> execution.) Or we could skip the full loop altogether and just
> synchronize with timer execution if _we_ deleted the timer. This should
> work fine as far as itimers are concerned.

If we're prepared to rule that a timer handler is not allowed to do
add_timer_on() then a recurring timer is permanently pinned to a CPU, isn't
it?

That should make things simpler?
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/