Christoph Lameter <[EMAIL PROTECTED]> wrote:
>
> On Sun, 20 Mar 2005, Andrew Morton wrote:
>
> > > Hope Andrew is going to take the patch this time.
> >
> > Hope Kenneth is going to test the alternate del_timer_sync patches in next
> > -mm ;)
>
> BTW, why are we going through this? Oleg has posted a much better solution
> to this issue yesterday AFAIK.
Christoph Lameter <[EMAIL PROTECTED]> wrote:
>
> On Sun, 20 Mar 2005, Andrew Morton wrote:
>
> > "Chen, Kenneth W" <[EMAIL PROTECTED]> wrote:
> > >
> > > We did exactly the same thing about 10 months back. Nice to
> > > see that independent people came up with exactly the same
> > > solution that
On Sun, 20 Mar 2005, Andrew Morton wrote:
> > Hope Andrew is going to take the patch this time.
>
> Hope Kenneth is going to test the alternate del_timer_sync patches in next
> -mm ;)
BTW, why are we going through this? Oleg has posted a much better solution
to this issue yesterday AFAIK.
On Sun, 20 Mar 2005, Andrew Morton wrote:
> "Chen, Kenneth W" <[EMAIL PROTECTED]> wrote:
> >
> > We did exactly the same thing about 10 months back. Nice to
> > see that independent people came up with exactly the same
> > solution that we proposed 10 months back.
>
> Well the same question applies.
"Chen, Kenneth W" <[EMAIL PROTECTED]> wrote:
>
> We did exactly the same thing about 10 months back. Nice to
> see that independent people came up with exactly the same
> solution that we proposed 10 months back.
Well the same question applies. Christoph, which code is calling
del_timer_sync() s
We did exactly the same thing about 10 months back. Nice to
see that independent people came up with exactly the same
solution that we proposed 10 months back. In fact, this patch
is line-by-line identical to the one we posted.
Hope Andrew is going to take the patch this time.
See our original po
* Christoph Lameter <[EMAIL PROTECTED]> wrote:
> The following patch removes the magic in the timer_list structure
> (Andrew suggested that we may not need it anymore) and replaces it
> with two u8 variables that give us some additional state of the timer
The 'remove the magic' observation is no
Christoph Lameter wrote:
>
> @@ -476,6 +454,7 @@ repeat:
> }
> }
> spin_lock_irq(&base->lock);
> + timer->running = 0;
^^
> goto repeat;
>
Christoph Lameter wrote:
>
> On Sun, 13 Mar 2005, Oleg Nesterov wrote:
>
> > I suspect that del_timer_sync() in its current form is racy.
> >
...snip...
> > next timer interrupt, __run_timers() picks
> > this timer again, sets timer->base = NULL
^^^
> >
> >
How about this take on the problem?
When a potentially periodic timer is deleted through del_timer_sync(), all cpus
are scanned to determine whether the timer is running on that cpu. In a NUMA
configuration this scan causes NUMA interlink traffic, which limits the
scalability of timers.
The following pa
On Sun, 13 Mar 2005, Oleg Nesterov wrote:
> I suspect that del_timer_sync() in its current form is racy.
>
> CPU 0 CPU 1
>
> __run_timers() sets timer->base = NULL
>
> del_timer_sync() starts, calls
> del_timer(), it returns
I suspect that del_timer_sync() in its current form is racy.
CPU 0 CPU 1
__run_timers() sets timer->base = NULL
del_timer_sync() starts, calls
del_timer(), it returns
On Fri, 11 Mar 2005, Oleg Nesterov wrote:
> I think it is not enough to exchange these 2 lines in
> __run_timers, we also need barriers.
Maybe it's best to drop last_running_timer, as Ingo suggested.
Replace the magic with a flag that can be set to stop scheduling the timer
again.
Then del_timer_sy
Hello.
I am not sure, but I think this patch is incorrect.
> @@ -466,6 +482,7 @@ repeat:
> set_running_timer(base, timer);
> smp_wmb();
> timer->base = NULL;
--> WINDOW <--
> + set_last_running(timer, base)
On Tue, 8 Mar 2005, Ingo Molnar wrote:
> > > The following patch makes the timer remember where the timer was last
> > > started. It is then possible to only wait for the completion of the timer
> > > on that specific cpu.
>
> i'm not sure about this. The patch adds one more pointer to a very
>
On Tue, 8 Mar 2005, Andrew Morton wrote:
> If we're prepared to rule that a timer handler is not allowed to do
> add_timer_on() then a recurring timer is permanently pinned to a CPU, isn't
> it?
The process may be rescheduled to run on a different processor. Then the
add_timer() function (called fr
Ingo Molnar <[EMAIL PROTECTED]> wrote:
>
>
> * Andrew Morton <[EMAIL PROTECTED]> wrote:
>
> > Christoph Lameter <[EMAIL PROTECTED]> wrote:
> > >
> > > When a potentially periodic timer is deleted through del_timer_sync(), all
> > > cpus are scanned to determine whether the timer is running on that cpu. I
* Andrew Morton <[EMAIL PROTECTED]> wrote:
> Christoph Lameter <[EMAIL PROTECTED]> wrote:
> >
> > When a potentially periodic timer is deleted through del_timer_sync(), all
> > cpus are scanned to determine whether the timer is running on that cpu. In a
> > NUMA configuration this scan causes NUMA interlink traffic, which limits
> > the scalability of timers.
Christoph Lameter <[EMAIL PROTECTED]> wrote:
>
> When a potentially periodic timer is deleted through del_timer_sync(), all
> cpus are scanned to determine whether the timer is running on that cpu. In a
> NUMA configuration this scan causes NUMA interlink traffic, which limits
> the scalability of timers.
When a potentially periodic timer is deleted through del_timer_sync(), all
cpus are scanned to determine whether the timer is running on that cpu. In a
NUMA configuration this scan causes NUMA interlink traffic, which limits
the scalability of timers.
The following patch makes the timer remember where the timer was last
started. It is then possible to only wait for the completion of the timer
on that specific cpu.