Re: [PATCH 1/3] Updated dynamic tick patches - Fix lost tick calculation in timer_pm.c

2005-09-06 Thread john stultz
On Sat, 2005-09-03 at 00:05 -0400, Lee Revell wrote:
> On Wed, 2005-08-31 at 22:42 +0530, Srivatsa Vaddagiri wrote:
> > With this patch, time had kept up really well on one particular
> > machine (Intel 4way Pentium 3 box) overnight, while
> > on another newer machine (Intel 4way Xeon with HT) it didn't do so
> > well (time sped up after 3 or 4 hours). Hence I consider that this
> > particular patch will need more review/work.
> > 
> 
> Are lost ticks really that common?  If so, any idea what's disabling
> interrupts for so long (or if it's a hardware issue)?  And if not, it
> seems like you'd need an artificial way to simulate lost ticks in order
> to test this stuff.

Pavel came up with a pretty good test for this awhile back.

http://marc.theaimsgroup.com/?l=linux-kernel&m=110519095425851&w=2

Adding:
unsigned long mask = 0x1;
sched_setaffinity(0, sizeof(mask), &mask);

to the top helps it work on SMP systems.
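For reference, a minimal sketch of the same pin using the newer glibc
cpu_set_t interface (an alternative to the raw unsigned long mask above;
the helper name is hypothetical):

#define _GNU_SOURCE
#include <sched.h>

/* Hypothetical helper: keep the test on CPU 0 so a single CPU's
 * timekeeping path is the one being stressed. */
static int pin_to_cpu0(void)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(0, &set);
	return sched_setaffinity(0, sizeof(set), &set);
}

Call it at the top of main(), before iopl(3).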

thanks
-john



Re: [PATCH 1/3] Updated dynamic tick patches - Fix lost tick calculation in timer_pm.c

2005-09-06 Thread Srivatsa Vaddagiri
On Tue, Sep 06, 2005 at 12:32:32PM +0200, Pavel Machek wrote:
> Try running this from userspace (and watch for time going completely
> crazy). Try it in mainline, too; it broke even vanilla some time
> ago. Need to run as root. 

Note that the kernel relies on some backing time source (like the TSC or
the ACPI PM timer) to recover lost ticks (& time). And these backing time
sources have their own limit on how many lost ticks they can recover,
which in turn bounds how long you can have interrupts blocked.
In the case of the TSC, since only a 32-bit previous snapshot is maintained
(on x86 at least), it allows ticks to be lost only for up to a second (if I
remember correctly), while the 24-bit ACPI PM timer allows for up to 3-4
seconds.

I found that the while loop below takes 3.66 seconds running
on a 1.8GHz P4 CPU. That may be too much if the kernel is using
a (32-bit snapshot of the) TSC to recover ticks, while it may be just
at the maximum limit allowed for the ACPI PM timer.
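A rough back-of-the-envelope check of those limits (assuming the standard
3.579545 MHz ACPI PM timer rate; the TSC window depends on the CPU clock,
so it shrinks toward a second on faster processors):

#include <stdio.h>

int main(void)
{
	double pmtmr_hz = 3579545.0;   /* nominal ACPI PM timer frequency */
	double cpu_hz   = 1.8e9;       /* example: the 1.8GHz P4 above */

	/* how long interrupts can stay off before each counter wraps */
	printf("24-bit PM timer wraps after %.2f s\n",
	       (double)(1 << 24) / pmtmr_hz);          /* ~4.7 s */
	printf("32-bit TSC snapshot wraps after %.2f s\n",
	       4294967296.0 / cpu_hz);                 /* ~2.4 s at 1.8 GHz */
	return 0;
}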

I will test this code with the lost-tick recovery fixes
for ACPI PM timer that I sent out and let you know
how it performs!

> for (i=0; i<10; i++)
> asm volatile("");

-- 


Thanks and Regards,
Srivatsa Vaddagiri,
Linux Technology Center,
IBM Software Labs,
Bangalore, INDIA - 560017


Re: [PATCH 1/3] Updated dynamic tick patches - Fix lost tick calculation in timer_pm.c

2005-09-06 Thread Pavel Machek
Hi!

> > With this patch, time had kept up really well on one particular
> > machine (Intel 4way Pentium 3 box) overnight, while
> > on another newer machine (Intel 4way Xeon with HT) it didn't do so
> > well (time sped up after 3 or 4 hours). Hence I consider that this
> > particular patch will need more review/work.
> > 
> 
> Are lost ticks really that common?  If so, any idea what's disabling
> interrupts for so long (or if it's a hardware issue)?  And if not, it
> seems like you'd need an artificial way to simulate lost ticks in order
> to test this stuff.

Try running this from userspace (and watch for time going completely
crazy). Try it in mainline, too; it broke even vanilla some time
ago. Need to run as root. 

Pavel

/* Needs root (for iopl) and, on SMP, pinning to one CPU as noted above. */
#include <unistd.h>   /* sleep() */
#include <sys/io.h>   /* iopl()  */

int
main(void)
{
	int i;

	iopl(3);                     /* allow cli/sti from ring 3 */
	while (1) {
		asm volatile("cli"); /* turn interrupts off ... */
//		for (i=0; i<2000; i++)
		for (i=0; i<10; i++)
			asm volatile("");
		asm volatile("sti"); /* ... and back on; the kernel must now recover the lost ticks */
		sleep(1);
	}
}


-- 
if you have sharp zaurus hardware you don't need... you know my address


Re: [PATCH 1/3] Updated dynamic tick patches - Fix lost tick calculation in timer_pm.c

2005-09-02 Thread Lee Revell
On Sat, 2005-09-03 at 01:15 -0400, Parag Warudkar wrote:
> Lee Revell wrote:
> 
> > Are lost ticks really that common? If so, any idea what's disabling
> >
> >interrupts for so long (or if it's a hardware issue)?  And if not, it
> >seems like you'd need an artificial way to simulate lost ticks in order
> >to test this stuff.
> >
> >Lee
> >  
> >
> Yes - I know many people with laptops who have this lost ticks problem. 
> So no simulation and/or
> special efforts required.  If anyone wants a test bed - my laptop is the 
> perfect instrument.
> 
> In my case the rip is always at acpi_processor_idle nowadays. Earlier 
> it used to be at acpi_ec_read.

Ah, OK, I forgot about SMM traps.

Lee



Re: [PATCH 1/3] Updated dynamic tick patches - Fix lost tick calculation in timer_pm.c

2005-09-02 Thread Srivatsa Vaddagiri
On Sat, Sep 03, 2005 at 12:05:00AM -0400, Lee Revell wrote:
> Are lost ticks really that common?  If so, any idea what's disabling

It becomes common with a patch like dynamic ticks, where we purposefully
skip ticks when the CPU is idle. When the CPU wakes up, we have to regain
the lost/skipped ticks, and that's where I ran into incorrect lost-tick
calculation issues.

> interrupts for so long (or if it's a hardware issue)?  And if not, it
> seems like you'd need an artificial way to simulate lost ticks in order
> to test this stuff.

The dyn-tick patch is enough to simulate these lost ticks!

-- 


Thanks and Regards,
Srivatsa Vaddagiri,
Linux Technology Center,
IBM Software Labs,
Bangalore, INDIA - 560017


Re: [PATCH 1/3] Updated dynamic tick patches - Fix lost tick calculation in timer_pm.c

2005-09-02 Thread Parag Warudkar

Lee Revell wrote:


Are lost ticks really that common? If so, any idea what's disabling

interrupts for so long (or if it's a hardware issue)?  And if not, it
seems like you'd need an artificial way to simulate lost ticks in order
to test this stuff.

Lee
 

Yes - I know many people with laptops who have this lost-ticks problem, 
so no simulation and/or special efforts are required.  If anyone wants a 
test bed, my laptop is the perfect instrument.


In my case the rip is always at acpi_processor_idle nowadays. Earlier 
it used to be at acpi_ec_read.


Parag


Re: [PATCH 1/3] Updated dynamic tick patches - Fix lost tick calculation in timer_pm.c

2005-09-02 Thread Peter Williams

Lee Revell wrote:

On Sat, 2005-09-03 at 14:18 +1000, Peter Williams wrote:

In my experience, turning off DMA for IDE disks is a pretty good way to 
generate lost ticks :-)



For this to "work" you have to unset "unmask IRQ" with hdparm, right?


I'm not familiar with that method.  When I've experienced this it's been 
due to me accidentally leaving IDE DMA out of the kernel configuration.


Peter
--
Peter Williams   [EMAIL PROTECTED]

"Learning, n. The kind of ignorance distinguishing the studious."
 -- Ambrose Bierce


Re: [PATCH 1/3] Updated dynamic tick patches - Fix lost tick calculation in timer_pm.c

2005-09-02 Thread Lee Revell
On Sat, 2005-09-03 at 14:18 +1000, Peter Williams wrote:
> In my experience, turning off DMA for IDE disks is a pretty good way to 
> generate lost ticks :-)

For this to "work" you have to unset "unmask IRQ" with hdparm, right?

Lee



Re: [PATCH 1/3] Updated dynamic tick patches - Fix lost tick calculation in timer_pm.c

2005-09-02 Thread Peter Williams

Lee Revell wrote:

On Wed, 2005-08-31 at 22:42 +0530, Srivatsa Vaddagiri wrote:


With this patch, time had kept up really well on one particular
machine (Intel 4way Pentium 3 box) overnight, while
on another newer machine (Intel 4way Xeon with HT) it didn't do so
well (time sped up after 3 or 4 hours). Hence I consider that this
particular patch will need more review/work.




Are lost ticks really that common?  If so, any idea what's disabling
interrupts for so long (or if it's a hardware issue)?  And if not, it
seems like you'd need an artificial way to simulate lost ticks in order
to test this stuff.


In my experience, turning off DMA for IDE disks is a pretty good way to 
generate lost ticks :-)


Peter
--
Peter Williams   [EMAIL PROTECTED]

"Learning, n. The kind of ignorance distinguishing the studious."
 -- Ambrose Bierce


Re: [PATCH 1/3] Updated dynamic tick patches - Fix lost tick calculation in timer_pm.c

2005-09-02 Thread Lee Revell
On Wed, 2005-08-31 at 22:42 +0530, Srivatsa Vaddagiri wrote:
> With this patch, time had kept up really well on one particular
> machine (Intel 4way Pentium 3 box) overnight, while
> on another newer machine (Intel 4way Xeon with HT) it didn't do so
> well (time sped up after 3 or 4 hours). Hence I consider that this
> particular patch will need more review/work.
> 

Are lost ticks really that common?  If so, any idea what's disabling
interrupts for so long (or if it's a hardware issue)?  And if not, it
seems like you'd need an artificial way to simulate lost ticks in order
to test this stuff.

Lee



Re: [PATCH 1/3] Updated dynamic tick patches - Fix lost tick calculation in timer_pm.c

2005-08-31 Thread john stultz
On Wed, 2005-08-31 at 15:36 -0700, Zachary Amsden wrote:
> >I feel lost ticks can be based on the cycle difference directly
> >rather than on the number of microseconds that have elapsed.
> >
> >Following patch is in that direction. 
> >
> >With this patch, time had kept up really well on one particular
> >machine (Intel 4way Pentium 3 box) overnight, while
> >on another newer machine (Intel 4way Xeon with HT) it didn't do so
> >well (time sped up after 3 or 4 hours). Hence I consider that this
> >particular patch will need more review/work.
> >
> >  
> >
> 
> Does this patch help address the issues pointed out here?
> 
> http://bugzilla.kernel.org/show_bug.cgi?id=5127

Unfortunately no. The issue there is that once the lost tick
compensation code has fired, should those "lost" ticks appear later we
end up over-compensating.
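A toy model of that scenario (not the actual kernel code path; the constant
is borrowed from the patch below and the interrupt timings are invented for
illustration). Three ticks' worth of interrupts are merely delayed, not
lost: the first late interrupt triggers the compensation for the whole gap,
then the delayed interrupts still arrive and are counted again, so jiffies
ends up ahead of real time.

#include <stdio.h>

#define TICKS_PER_JIFFY 14319   /* PM-timer ticks per jiffy, from the patch */

static unsigned long jiffies;
static unsigned int last_offset;

static void timer_interrupt(unsigned int pmtmr_now)
{
	unsigned int lost = (pmtmr_now - last_offset) / TICKS_PER_JIFFY;

	if (lost >= 2)          /* lost-tick compensation fires */
		jiffies += lost - 1;
	last_offset = pmtmr_now;
	jiffies++;              /* normal per-interrupt increment */
}

int main(void)
{
	/* 4 jiffies of real time pass, but the first 3 interrupts were only
	 * delayed: all 4 still arrive, bunched together at the end. */
	unsigned int arrivals[] = {
		4 * TICKS_PER_JIFFY,
		4 * TICKS_PER_JIFFY + 1,
		4 * TICKS_PER_JIFFY + 2,
		4 * TICKS_PER_JIFFY + 3,
	};

	for (int i = 0; i < 4; i++)
		timer_interrupt(arrivals[i]);

	printf("jiffies advanced by %lu for 4 real jiffies\n", jiffies); /* 7 */
	return 0;
}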

This patch however does help to make sure that when the lost tick code
fires, the error from converting to usecs doesn't bite us. And could
probably go into mainline independent of the dynamic ticks patch (with
further testing, of course).

thanks
-john



Re: [PATCH 1/3] Updated dynamic tick patches - Fix lost tick calculation in timer_pm.c

2005-08-31 Thread Zachary Amsden

Srivatsa Vaddagiri wrote:


On Wed, Aug 31, 2005 at 10:28:43PM +0530, Srivatsa Vaddagiri wrote:
 


Following patches related to dynamic tick are posted in separate mails,
for convenience of review. The first patch probably applies w/o dynamic
tick consideration also.

Patch 1/3  -> Fixup lost tick calculation in timer_pm.c
   



Currently, the lost tick calculation in timer_pm.c is based on the number
of microseconds that have elapsed since the last tick. Calculating
the number of microseconds is approximated by cyc2us, which
basically does:

microsec = (cycles * 286) / 1024

Consider 10 ticks lost. This amounts to 14319*10 = 143190 cycles
(14319 = PMTMR_EXPECTED_RATE/(CALIBRATE_LATCH/LATCH)).
This amounts to 39992 microseconds as per the above equation,
or 39992 / 4000 = 9 lost ticks, which is incorrect.


I feel lost ticks can be based on the cycle difference directly
rather than on the number of microseconds that have elapsed.

Following patch is in that direction. 


With this patch, time had kept up really well on one particular
machine (Intel 4way Pentium 3 box) overnight, while
on another newer machine (Intel 4way Xeon with HT) it didn't do so
well (time sped up after 3 or 4 hours). Hence I consider that this
particular patch will need more review/work.

 



Does this patch help address the issues pointed out here?

http://bugzilla.kernel.org/show_bug.cgi?id=5127


[PATCH 1/3] Updated dynamic tick patches - Fix lost tick calculation in timer_pm.c

2005-08-31 Thread Srivatsa Vaddagiri
On Wed, Aug 31, 2005 at 10:28:43PM +0530, Srivatsa Vaddagiri wrote:
> Following patches related to dynamic tick are posted in separate mails,
> for convenience of review. The first patch probably applies w/o dynamic
> tick consideration also.
> 
> Patch 1/3  -> Fixup lost tick calculation in timer_pm.c

Currently, the lost tick calculation in timer_pm.c is based on the number
of microseconds that have elapsed since the last tick. Calculating
the number of microseconds is approximated by cyc2us, which
basically does:

microsec = (cycles * 286) / 1024

Consider 10 ticks lost. This amounts to 14319*10 = 143190 cycles
(14319 = PMTMR_EXPECTED_RATE/(CALIBRATE_LATCH/LATCH)).
This amounts to 39992 microseconds as per the above equation,
or 39992 / 4000 = 9 lost ticks, which is incorrect.
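The same arithmetic as a small standalone check (HZ assumed to be 250 here,
so one jiffy is 4000 usec; the other constants are the ones quoted above):

#include <stdio.h>

int main(void)
{
	unsigned int pm_ticks_per_jiffy = 14319; /* PMTMR_EXPECTED_RATE/(CALIBRATE_LATCH/LATCH) */
	unsigned int usec_per_jiffy     = 4000;  /* USEC_PER_SEC / HZ, with HZ = 250 */
	unsigned int delta = 10 * pm_ticks_per_jiffy;   /* 10 ticks lost */

	unsigned int usecs = (delta * 286) / 1024;      /* the cyc2us() approximation */

	printf("via usecs : %u lost ticks\n", usecs / usec_per_jiffy);      /* 9  */
	printf("via cycles: %u lost ticks\n", delta / pm_ticks_per_jiffy);  /* 10 */
	return 0;
}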

I feel lost ticks can be based on the cycle difference directly
rather than on the number of microseconds that have elapsed.

Following patch is in that direction. 

With this patch, time had kept up really well on one particular
machine (Intel 4way Pentium 3 box) overnight, while
on another newer machine (Intel 4way Xeon with HT) it didn't do so
well (time sped up after 3 or 4 hours). Hence I consider that this
particular patch will need more review/work.

Patch is against 2.6.13-rc6-mm2.



Fix lost tick calculation in timer_pm.c

---

 linux-2.6.13-rc6-mm2-root/arch/i386/kernel/timers/timer_pm.c |   44 +--
 1 files changed, 20 insertions(+), 24 deletions(-)

diff -puN arch/i386/kernel/timers/timer_pm.c~pm_timer_fix 
arch/i386/kernel/timers/timer_pm.c
--- linux-2.6.13-rc6-mm2/arch/i386/kernel/timers/timer_pm.c~pm_timer_fix
2005-08-31 16:31:52.0 +0530
+++ linux-2.6.13-rc6-mm2-root/arch/i386/kernel/timers/timer_pm.c
2005-08-31 16:32:51.0 +0530
@@ -30,6 +30,8 @@
   ((CALIBRATE_LATCH * (PMTMR_TICKS_PER_SEC >> 10)) / (CLOCK_TICK_RATE>>10))
 
 
+static int pm_ticks_per_jiffy = PMTMR_EXPECTED_RATE / (CALIBRATE_LATCH/LATCH);
+
 /* The I/O port the PMTMR resides at.
  * The location is detected during setup_arch(),
  * in arch/i386/acpi/boot.c */
@@ -37,8 +39,7 @@ u32 pmtmr_ioport = 0;
 
 
 /* value of the Power timer at last timer interrupt */
-static u32 offset_tick;
-static u32 offset_delay;
+static u32 offset_last;
 
 static unsigned long long monotonic_base;
 static seqlock_t monotonic_lock = SEQLOCK_UNLOCKED;
@@ -127,6 +128,11 @@ pm_good:
if (verify_pmtmr_rate() != 0)
return -ENODEV;
 
+   printk ("Using %u PM timer ticks per jiffy \n", pm_ticks_per_jiffy);
+
+   offset_last = read_pmtmr();
+   setup_pit_timer();
+
init_cpu_khz();
return 0;
 }
@@ -150,47 +156,37 @@ static inline u32 cyc2us(u32 cycles)
  */
 static void mark_offset_pmtmr(void)
 {
-   u32 lost, delta, last_offset;
-   static int first_run = 1;
-   last_offset = offset_tick;
+   u32 lost, delta, deltaus, offset_now;
 
write_seqlock(&monotonic_lock);
 
-   offset_tick = read_pmtmr();
+   offset_now = read_pmtmr();
 
/* calculate tick interval */
-   delta = (offset_tick - last_offset) & ACPI_PM_MASK;
+   delta = (offset_now - offset_last) & ACPI_PM_MASK;
 
/* convert to usecs */
-   delta = cyc2us(delta);
+   deltaus = cyc2us(delta);
 
/* update the monotonic base value */
-   monotonic_base += delta * NSEC_PER_USEC;
+   monotonic_base += deltaus * NSEC_PER_USEC;
write_sequnlock(&monotonic_lock);
 
/* convert to ticks */
-   delta += offset_delay;
-   lost = delta / (USEC_PER_SEC / HZ);
-   offset_delay = delta % (USEC_PER_SEC / HZ);
-
+   lost = delta / pm_ticks_per_jiffy;
+   offset_last += lost * pm_ticks_per_jiffy;
+   offset_last &= ACPI_PM_MASK;
 
/* compensate for lost ticks */
if (lost >= 2)
jiffies_64 += lost - 1;
-
-   /* don't calculate delay for first run,
-  or if we've got less then a tick */
-   if (first_run || (lost < 1)) {
-   first_run = 0;
-   offset_delay = 0;
-   }
 }
 
 static int pmtmr_resume(void)
 {
write_seqlock(&monotonic_lock);
/* Assume this is the last mark offset time */
-   offset_tick = read_pmtmr();
+   offset_last = read_pmtmr();
write_sequnlock(&monotonic_lock);
return 0;
 }
@@ -205,7 +201,7 @@ static unsigned long long monotonic_cloc
/* atomically read monotonic base & last_offset */
do {
seq = read_seqbegin(&monotonic_lock);
-   last_offset = offset_tick;
+   last_offset = offset_last;
base = monotonic_base;
} while (read_seqretry(&monotonic_lock, seq));
 
@@ -239,11 +235,11 @@ static unsigned long get_offset_pmtmr(vo
 {
u32 now, offset, delta = 0;
 
-   offset = offset_tick;
+   offset = offset_last;
now = read_pmtmr();
delta = (now - offset) & ACPI_PM_MASK;
 
-   return (unsigned long) offset_delay + cyc2us(delta);
+   return (unsigned long) cyc2us(delta);
 }
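Condensed, the bookkeeping the patch switches to looks roughly like this
(standalone sketch, not the kernel function itself): the sub-jiffy
remainder is left in the counter rather than tracked in a separate
offset_delay, so nothing is rounded away per interrupt.

#include <stdio.h>

#define ACPI_PM_MASK    0xFFFFFF  /* the PM timer is a 24-bit counter */
#define TICKS_PER_JIFFY 14319

static unsigned int offset_last;  /* PM-timer value at the last fully accounted jiffy */

static unsigned int account_ticks(unsigned int offset_now)
{
	unsigned int delta = (offset_now - offset_last) & ACPI_PM_MASK;
	unsigned int lost  = delta / TICKS_PER_JIFFY;

	/* advance only by whole jiffies; the remainder stays in the
	 * counter and is picked up on the next interrupt */
	offset_last = (offset_last + lost * TICKS_PER_JIFFY) & ACPI_PM_MASK;
	return lost;
}

int main(void)
{
	/* 1.5 jiffies elapse, then another 1.5: nothing is rounded away,
	 * so the two calls account for 1 + 2 = 3 jiffies in total. */
	printf("%u\n", account_ticks(3 * TICKS_PER_JIFFY / 2));  /* 1 */
	printf("%u\n", account_ticks(3 * TICKS_PER_JIFFY));      /* 2 */
	return 0;
}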
 
 

