Re: [linux-kernel] Re: [PATCH] x86: use explicit timing delay for pit accesses in kernel and pcspkr driver

2008-02-20 Thread Rene Herman

On 20-02-08 21:13, David P. Reed wrote:

Actually, disparaging things as "one idiotic system" doesn't seem like a 
long-term thoughtful process - it's not even accurate.


Whatever we think about systems using port 0x80, the fact of the matter is that 
they do, and outside of legacy stuff that isn't applicable to these systems, 
Linux needs to stop using it (post-ACPI init at least) to be able to run on 
them.


As options for doing so we have:

1) Replace the port 0x80 I/O delay with nothing. Determined to be unsafe.

2) Replace 0x80 with another port. None are really available, and DMI-based 
switching gets to be unmaintainable and has as such been shot down.


3) Replace the port 0x80 I/O delay with a udelay(2). Should work well enough 
in practice for the remaining users outside legacy drivers after (Alan's) 
cleanup.
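
For reference, a minimal sketch of what option 3 amounts to (illustrative
names only, not the actual kernel symbols):

#include <linux/delay.h>
#include <asm/io.h>

/* Sketch only: burn ~2us instead of writing to port 0x80. */
static inline void io_delay_2us(void)
{
	udelay(2);		/* replaces the historical outb(0, 0x80) */
}

static inline unsigned char inb_delayed(unsigned int port)
{
	unsigned char value = inb(port);

	io_delay_2us();
	return value;
}

static inline void outb_delayed(unsigned char value, unsigned int port)
{
	outb(value, port);
	io_delay_2us();
}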


The remaining (possible) problem is that pre-calibration microseconds are a 
total fiction and at least PIT init happens before calibration. In practice 
I believe this might not be much of a real-world problem since, for whatever 
initial value of loops_per_jiffy, we at least get our old double short jump, 
which is enough of a delay for 386 and 486, but I sympathise with anyone, such 
as HPA, who'd consider my beliefs not a particular guarantee.


So, we need a more useful pre-calibration udelay. Ugly as it might be, maybe 
through something like the attached. Alan, could you perhaps comment?


With the problem surfacing only post-ACPI init, the _last_ remaining option 
is talking to the PIT using port 0x80 at init and using udelay after, but I 
myself will not be submitting a patch to do so. Insane mess.


It would be good to get this crap sorted relatively quickly so we can do 
away with the io_delay mongering again, even pre .26. It introduces boot 
parameters and as such becomes part of the API somewhat, so if it's not going 
to stay, let's kill it quickly.


Rene.
commit 9c679215248e837b34242632d5a22adf9a247021
Author: Rene Herman <[EMAIL PROTECTED]>
Date:   Wed Feb 20 12:52:30 2008 +0100

x86: per CPU family loops_per_jiffy initialization

Following the current port 0x80 I/O delay replacements we need
microseconds to be somewhat usefully defined pre calibration.

Initialize 386, 486 and Pentium 1 as fastest in their families
and higher CPUs (including 64-bit) at 1 GHz. Note that trouble
should be absent past family 5 systems anyway.

Signed-off-by: Rene Herman <[EMAIL PROTECTED]>

diff --git a/arch/x86/kernel/time_32.c b/arch/x86/kernel/time_32.c
index 1a89e93..e33e70b 100644
--- a/arch/x86/kernel/time_32.c
+++ b/arch/x86/kernel/time_32.c
@@ -32,6 +32,7 @@
 #include <linux/interrupt.h>
 #include <linux/time.h>
 #include <linux/mca.h>
+#include <linux/delay.h>
 
 #include <asm/arch_hooks.h>
 #include <asm/hpet.h>
@@ -134,6 +135,17 @@ void __init hpet_time_init(void)
  */
 void __init time_init(void)
 {
+   switch (boot_cpu_data.x86) {
+   case 3:
+   loops_per_jiffy = LOOPS_PER_JIFFY_386;
+   break;
+   case 4:
+   loops_per_jiffy = LOOPS_PER_JIFFY_486;
+   break;
+   case 5:
+   loops_per_jiffy = LOOPS_PER_JIFFY_586;
+   break;
+   }
tsc_init();
late_time_init = choose_time_init();
 }
diff --git a/include/asm-x86/delay.h b/include/asm-x86/delay.h
index 409a649..d0fbaf6 100644
--- a/include/asm-x86/delay.h
+++ b/include/asm-x86/delay.h
@@ -7,6 +7,11 @@
  * Delay routines calling functions in arch/x86/lib/delay.c
  */
 
+#define LOOPS_PER_JIFFY_386	(4000000 / HZ)		/* 386 at 40 MHz */
+#define LOOPS_PER_JIFFY_486	(30000000 / HZ)		/* 486 at 120 MHz */
+#define LOOPS_PER_JIFFY_586	(233000000 / HZ)	/* Pentium 1 at 233 MHz */
+#define LOOPS_PER_JIFFY		(1000000000 / HZ)	/* P6+ at 1 GHz */
+
 /* Undefined functions to get compile-time errors */
 extern void __bad_udelay(void);
 extern void __bad_ndelay(void);
diff --git a/init/main.c b/init/main.c
index 8b19820..94862c8 100644
--- a/init/main.c
+++ b/init/main.c
@@ -228,12 +228,11 @@ static int __init obsolete_checksetup(char *line)
return had_early_param;
 }
 
-/*
- * This should be approx 2 Bo*oMips to start (note initial shift), and will
- * still work even if initially too large, it will just take slightly longer
- */
-unsigned long loops_per_jiffy = (1<<12);
+#ifndef LOOPS_PER_JIFFY
+#define LOOPS_PER_JIFFY		(1 << 12)
+#endif
 
+unsigned long loops_per_jiffy = LOOPS_PER_JIFFY;
 EXPORT_SYMBOL(loops_per_jiffy);
 
 static int __init debug_kernel(char *str)
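
For scale (not part of the patch), assuming HZ=250 the defines above work out
as follows, using BogoMIPS = loops_per_jiffy * HZ / 500000:

/*
 * LOOPS_PER_JIFFY     = 1000000000 / 250 = 4000000 loops per tick
 *                       -> 4000000 * 250 / 500000 = 2000 BogoMIPS
 * LOOPS_PER_JIFFY_386 =    4000000 / 250 =   16000 loops per tick
 *                       ->   16000 * 250 / 500000 =    8 BogoMIPS
 *
 * i.e. roughly what a 1 GHz P6-class machine and a 386DX-40 report after
 * real calibration.
 */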


Re: [linux-kernel] Re: [PATCH] x86: use explicit timing delay for pit accesses in kernel and pcspkr driver

2008-02-20 Thread David P. Reed
Actually, disparaging things as "one idiotic system" doesn't seem like a 
long-term thoughtful process - it's not even accurate.  There are more 
such systems that are running code today than the total number of 486 
systems ever manufactured.  The production rate is $1M/month.


a) ENE chips are "documented" to receive port 80, and it is also the 
case that modern chipsets will happily diagnose writes to non-existent 
ports as MCEs.  Using side effects that depend on non-existent ports 
just creates a brittle failure mode down the road.  And it's not just 
post-ACPI "initialization".  The pcspkr use of port 80 caused solid 
freezes if you typed "tab" to complete a command line and there was 
more than one choice, leading to beeps.


b) sad to say, Linux is not what hardware vendors use as the system that 
their BIOSes MUST work with.  That's Windows, and Windows, whether we 
like it or not, does not require hardware vendors to stay away from port 80.


IMHO, calling something "idiotic" is hardly evidence-based decision 
making.   Maybe you love to hate Microsoft, but until Intel writes an 
architecture standard that says explicitly that a "standard PC" must not 
use port 80 for any peripheral, the port 80 thing is folklore, and one 
that is solely Linux-defined.


Rene Herman wrote:

On 20-02-08 18:05, H. Peter Anvin wrote:
 

Rene Herman wrote:


_Something_ like this would seem to be the only remaining option. It 
seems fairly unuseful to #ifdef around that switch statement for 
kernels without support for the earlier families, but if you insist...




"Only remaining option" other than the one we've had all along.  Even 
on the one idiotic set of systems which break, it only breaks 
post-ACPI initialization, IIRC.


Linus vetoed the DMI switch.

Rene.




Re: [PATCH] x86: use explicit timing delay for pit accesses in kernel and pcspkr driver

2008-02-20 Thread Rene Herman

On 20-02-08 18:05, H. Peter Anvin wrote:


Rene Herman wrote:


_Something_ like this would seem to be the only remaining option. It 
seems fairly unuseful to #ifdef around that switch statement for 
kernels without support for the earlier families, but if you insist...




"Only remaining option" other than the one we've had all along.  Even on 
the one idiotic set of systems which break, it only breaks post-ACPI 
initialization, IIRC.


Linus vetoed the DMI switch.

Rene.


Re: [PATCH] x86: use explicit timing delay for pit accesses in kernel and pcspkr driver

2008-02-20 Thread H. Peter Anvin

Rene Herman wrote:


_Something_ like this would seem to be the only remaining option. It 
seems fairly unuseful to #ifdef around that switch statement for kernels 
without support for the earlier families, but if you insist...




"Only remaining option" other than the one we've had all along.  Even on 
the one idiotic set of systems which break, it only breaks post-ACPI 
initialization, IIRC.


-hpa


Re: [PATCH] x86: use explicit timing delay for pit accesses in kernel and pcspkr driver

2008-02-20 Thread Rene Herman

On 18-02-08 23:44, H. Peter Anvin wrote:


Rene Herman wrote:


Yes, but generally not any P5+ system is going to need the PIT 
delay in the first place meaning it just doesn't matter. There were 
the VIA issues with the PIC but unless I missed it not with the PIT.




Uhm, I'm not sure I believe that's safe.

The PIT is particularly pissy in this case -- the semantics of the 
PIT are ill-defined if there hasn't been a PIT clock between two 
adjacent accesses, so I fully expect that there are chipsets out 
there which will do very bad things in this case.


Okay. Now that they're isolated, do you have a suggestion for 
{in,out}b_pit? You say a PIT clock, so do you think we can bounce off 
the PIT itself in this case after all?


Am I correct that channel 1 is never used? A simple read from 0x41?



Channel 1 is available for the system.  In modern systems, it's pretty 
much available for the OS, although that's never formally stated (in the 
original PC, it was used for DRAM refresh.)


However, I could very easily see a chipset have issues with that kind of 
stuff.


I couldn't really, but clean it isn't either. Okay, so you just want something 
like this? This initializes loops_per_jiffy somewhat more usefully -- at a 
1 GHz CPU for P6 and 64-bit, and tuning it down again for 386/486/586.


The values taken are for what I believe to be the fastest CPUs within each 
specific family. Alan?


386-40 and P1-233 were verified, the 486-120 value was scaled from a 486-40.

_Something_ like this would seem to be the only remaining option. It seems 
fairly unuseful to #ifdef around that switch statement for kernels without 
support for the earlier families, but if you insist...


Rene.
commit 9c679215248e837b34242632d5a22adf9a247021
Author: Rene Herman <[EMAIL PROTECTED]>
Date:   Wed Feb 20 12:52:30 2008 +0100

x86: per CPU family loops_per_jiffy initialization

Following the current port 0x80 I/O delay replacements we need
microseconds to be somewhat usefully defined pre calibration.

Initialize 386, 486 and Pentium 1 as fastest in their families
and higher CPUs (including 64-bit) at 1 GHz. Note that trouble
should be absent past family 5 systems anyway.

Signed-off-by: Rene Herman <[EMAIL PROTECTED]>

diff --git a/arch/x86/kernel/time_32.c b/arch/x86/kernel/time_32.c
index 1a89e93..e33e70b 100644
--- a/arch/x86/kernel/time_32.c
+++ b/arch/x86/kernel/time_32.c
@@ -32,6 +32,7 @@
 #include <linux/interrupt.h>
 #include <linux/time.h>
 #include <linux/mca.h>
+#include <linux/delay.h>
 
 #include <asm/arch_hooks.h>
 #include <asm/hpet.h>
@@ -134,6 +135,17 @@ void __init hpet_time_init(void)
  */
 void __init time_init(void)
 {
+   switch (boot_cpu_data.x86) {
+   case 3:
+   loops_per_jiffy = LOOPS_PER_JIFFY_386;
+   break;
+   case 4:
+   loops_per_jiffy = LOOPS_PER_JIFFY_486;
+   break;
+   case 5:
+   loops_per_jiffy = LOOPS_PER_JIFFY_586;
+   break;
+   }
tsc_init();
late_time_init = choose_time_init();
 }
diff --git a/include/asm-x86/delay.h b/include/asm-x86/delay.h
index 409a649..d0fbaf6 100644
--- a/include/asm-x86/delay.h
+++ b/include/asm-x86/delay.h
@@ -7,6 +7,11 @@
  * Delay routines calling functions in arch/x86/lib/delay.c
  */
 
+#define LOOPS_PER_JIFFY_386	(4000000 / HZ)		/* 386 at 40 MHz */
+#define LOOPS_PER_JIFFY_486	(30000000 / HZ)		/* 486 at 120 MHz */
+#define LOOPS_PER_JIFFY_586	(233000000 / HZ)	/* Pentium 1 at 233 MHz */
+#define LOOPS_PER_JIFFY		(1000000000 / HZ)	/* P6+ at 1 GHz */
+
 /* Undefined functions to get compile-time errors */
 extern void __bad_udelay(void);
 extern void __bad_ndelay(void);
diff --git a/init/main.c b/init/main.c
index 8b19820..94862c8 100644
--- a/init/main.c
+++ b/init/main.c
@@ -228,12 +228,11 @@ static int __init obsolete_checksetup(char *line)
return had_early_param;
 }
 
-/*
- * This should be approx 2 Bo*oMips to start (note initial shift), and will
- * still work even if initially too large, it will just take slightly longer
- */
-unsigned long loops_per_jiffy = (1<<12);
+#ifndef LOOPS_PER_JIFFY
+#define LOOPS_PER_JIFFY		(1 << 12)
+#endif
 
+unsigned long loops_per_jiffy = LOOPS_PER_JIFFY;
 EXPORT_SYMBOL(loops_per_jiffy);
 
 static int __init debug_kernel(char *str)


Re: [PATCH] x86: use explicit timing delay for pit accesses in kernel and pcspkr driver

2008-02-19 Thread Ingo Molnar

* David P. Reed <[EMAIL PROTECTED]> wrote:

> x86: use explicit timing delay for pit accesses in kernel and pcspkr 
> driver

thanks, applied.

Ingo


Re: [PATCH] x86: use explicit timing delay for pit accesses in kernel and pcspkr driver

2008-02-18 Thread H. Peter Anvin

Rene Herman wrote:

On 18-02-08 23:07, Rene Herman wrote:


On 18-02-08 23:01, H. Peter Anvin wrote:


Rene Herman wrote:


Yes, but generally not any P5+ system is going to need the PIT delay 
in the first place meaning it just doesn't matter. There were the 
VIA issues with the PIC but unless I missed it not with the PIT.




Uhm, I'm not sure I believe that's safe.

The PIT is particularly pissy in this case -- the semantics of the 
PIT are ill-defined if there hasn't been a PIT clock between two 
adjacent accesses, so I fully expect that there are chipsets out 
there which will do very bad things in this case.


Okay. Now that they're isolated, do you have a suggestion for 
{in,out}b_pit? You say a PIT clock, so do you think we can bounce off 
the PIT itself in this case after all?


Am I correct that channel 1 is never used? A simple read from 0x41?



Channel 1 is available for the system.  In modern systems, it's pretty 
much available for the OS, although that's never formally stated (in the 
original PC, it was used for DRAM refresh.)


However, I could very easily see a chipset have issues with that kind of 
stuff.


-hpa


Re: [PATCH] x86: use explicit timing delay for pit accesses in kernel and pcspkr driver

2008-02-18 Thread H. Peter Anvin

Rene Herman wrote:


Uhm, I'm not sure I believe that's safe.

The PIT is particularly pissy in this case -- the semantics of the PIT 
are ill-defined if there hasn't been a PIT clock between two adjacent 
accesses, so I fully expect that there are chipsets out there which 
will do very bad things in this case.


Okay. Now that they're isolated, do you have a suggestion for 
{in,out}b_pit? You say a PIT clock, so do you think we can bounce off the 
PIT itself in this case after all?


No, I don't think so.

-hpa


Re: [PATCH] x86: use explicit timing delay for pit accesses in kernel and pcspkr driver

2008-02-18 Thread Rene Herman

On 18-02-08 23:07, Rene Herman wrote:


On 18-02-08 23:01, H. Peter Anvin wrote:


Rene Herman wrote:


Yes, but generally not any P5+ system is going to need the PIT delay 
in the first place meaning it just doesn't matter. There were the VIA 
issues with the PIC but unless I missed it not with the PIT.




Uhm, I'm not sure I believe that's safe.

The PIT is particularly pissy in this case -- the semantics of the PIT 
are ill-defined if there hasn't been a PIT clock between two adjacent 
accesses, so I fully expect that there are chipsets out there which 
will do very bad things in this case.


Okay. Now that they're isolated, do you have a suggestion for 
{in,out}b_pit? You say a PIT clock, so do you think we can bounce off the 
PIT itself in this case after all?


Am I correct that channel 1 is never used? A simple read from 0x41?

Rene.
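
For concreteness, the idea being floated here, as a sketch (illustrative
name; not what was eventually done):

#include <asm/io.h>

/* Sketch only: a dummy read of PIT channel 1 (port 0x41) as the
 * inter-access delay, instead of the port 0x80 write. */
static inline void pit_io_delay(void)
{
	inb(0x41);		/* value discarded */
}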


Re: [PATCH] x86: use explicit timing delay for pit accesses in kernel and pcspkr driver

2008-02-18 Thread Rene Herman

On 18-02-08 23:01, H. Peter Anvin wrote:


Rene Herman wrote:


Yes, but generally not any P5+ system is going to need the PIT delay 
in the first place meaning it just doesn't matter. There were the VIA 
issues with the PIC but unless I missed it not with the PIT.




Uhm, I'm not sure I believe that's safe.

The PIT is particularly pissy in this case -- the semantics of the PIT 
are ill-defined if there hasn't been a PIT clock between two adjacent 
accesses, so I fully expect that there are chipsets out there which will 
do very bad things in this case.


Okay. Now that they're isolated, do you have a suggestion for {in,out}b_pit? 
You say a PIT clock, so do you think we can bounce off the PIT itself in this 
case after all?


Rene.


Re: [PATCH] x86: use explicit timing delay for pit accesses in kernel and pcspkr driver

2008-02-18 Thread H. Peter Anvin

Rene Herman wrote:


Yes, but generally not any P5+ system is going to need the PIT delay in 
the first place meaning it just doesn't matter. There were the VIA 
issues with the PIC but unless I missed it not with the PIT.




Uhm, I'm not sure I believe that's safe.

The PIT is particularly pissy in this case -- the semantics of the PIT 
are ill-defined if there hasn't been a PIT clock between two adjacent 
accesses, so I fully expect that there are chipsets out there which will 
do very bad things in this case.


-hpa


Re: [PATCH] x86: use explicit timing delay for pit accesses in kernel and pcspkr driver

2008-02-18 Thread Rene Herman

On 18-02-08 22:44, H. Peter Anvin wrote:

Rene Herman wrote:


I mean that before the linux kernel used a port 0x80 write as an I/O 
delay it used a short jump (two in a row actually...) as such and this 
was at the time that it actually ran on the old legacy stuff that is 
of most concern here.


No, if I'm not mistaken, those two jumps are actually what the 
udelay() is going to do anyway as part of delay_loop() at that early 
stage so that even before loops_per_jiffy calibration, I believe we 
should still be okay.




That doesn't make any sense at all.  The whole point why the two jumps 
were obsoleted with the P5 (or even late P4, if I'm not mistaken) was 
because they were utterly insufficient when the CPU ran at something 
much higher than the external speed.


Yes, but generally not any P5+ system is going to need the PIT delay in the 
first place meaning it just doesn't matter. There were the VIA issues with 
the PIC but unless I missed it not with the PIT.


That's the point. It's fairly unclean to say udelay(2) and then not delay 
for 2 microseconds, but you _have_ done the two short jumps, meaning 386 and 
486 systems are okay and later systems were okay to start with.


Rene.


Re: [PATCH] x86: use explicit timing delay for pit accesses in kernel and pcspkr driver

2008-02-18 Thread H. Peter Anvin

Rene Herman wrote:


I mean that before the linux kernel used a port 0x80 write as an I/O 
delay it used a short jump (two in a row actually...) as such and this 
was at the time that it actually ran on the old legacy stuff that is of 
most concern here.


No, if I'm not mistaken, those two jumps are actually what the udelay() 
is going to do anyway as part of delay_loop() at that early stage so 
that even before loops_per_jiffy calibration, I believe we should still 
be okay.




That doesn't make any sense at all.  The whole point why the two jumps 
were obsoleted with the P5 (or even late P4, if I'm not mistaken) was 
because they were utterly insufficient when the CPU ran at something 
much higher than the external speed.


Yes, it's a bit of a "well, hrrm" thing, but, well... loops_per_jiffy 
can be initialised a bit more conservatively than today as well (and as 
discussed earlier, possibly per CPU family) but I believe it's actually 
sort of fine not to worry much about it...


Uhm... no.  Quite the contrary, I would say.

-hpa


Re: [PATCH] x86: use explicit timing delay for pit accesses in kernel and pcspkr driver

2008-02-18 Thread Rene Herman

On 18-02-08 22:04, Rene Herman wrote:

On 18-02-08 21:43, H. Peter Anvin wrote:


Rene Herman wrote:


Now with respect to the original pre port 80 "jmp $+2" I/O delay 
(which the Pentium obsoleted) I suppose it'll probably be okay even 
without fixing that specifically but do note such -- it's a vital 
part of the problem.




Sorry, that paragraph didn't parse for me.


I mean that before the linux kernel used a port 0x80 write as an I/O 
delay it used a short jump (two in a row actually...) as such and this 
was at the time that it actually ran on the old legacy stuff that is of 
most concern here.


No, if I'm not mistaken, those two jumps are actually what the udelay() 


_Now_, if I'm ...

is going to do anyway as part of delay_loop() at that early stage so 
that even before loops_per_jiffy calibration, I believe we should still 
be okay.


Yes, it's a bit of a "well, hrrm" thing, but, well... loops_per_jiffy 
can be initialised a bit more conservatively than today as well (and as 
discussed earlier, possibly per CPU family) but I believe it's actually 
sort of fine not to worry much about it...


Rene.


Re: [PATCH] x86: use explicit timing delay for pit accesses in kernel and pcspkr driver

2008-02-18 Thread Rene Herman

On 18-02-08 21:43, H. Peter Anvin wrote:


Rene Herman wrote:


Now with respect to the original pre port 80 "jmp $+2" I/O delay 
(which the Pentium obsoleted) I suppose it'll probably be okay even 
without fixing that specifically but do note such -- it's a vital part 
of the problem.




Sorry, that paragraph didn't parse for me.


I mean that before the linux kernel used a port 0x80 write as an I/O delay 
it used a short jump (two in a row actually...) as such and this was at the 
time that it actually ran on the old legacy stuff that is of most concern here.


No, if I'm not mistaken, those two jumps are actually what the udelay() is 
going to do anyway as part of delay_loop() at that early stage so that even 
before loops_per_jiffy calibration, I believe we should still be okay.
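
Roughly (assuming HZ=250 and the default loops_per_jiffy of 1 << 12 = 4096):

/*
 * loops ~ usecs * loops_per_jiffy * HZ / 1000000
 *       ~ 2 * 4096 * 250 / 1000000
 *       ~ 2 iterations of delay_loop()
 *
 * i.e. a pre-calibration udelay(2) degenerates to about the same amount
 * of work as the old pair of short jumps.
 */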


Yes, it's a bit of a "well, hrrm" thing, but, well... loops_per_jiffy can be 
initialised a bit more conservatively than today as well (and as discussed 
earlier, possibly per CPU family) but I believe it's actually sort of fine 
not to worry much about it...


Rene.


Re: [PATCH] x86: use explicit timing delay for pit accesses in kernel and pcspkr driver

2008-02-18 Thread H. Peter Anvin

Rene Herman wrote:


Now with respect to the original pre port 80 "jmp $+2" I/O delay (which 
the Pentium obsoleted) I suppose it'll probably be okay even without 
fixing that specifically but do note such -- it's a vital part of the 
problem.




Sorry, that paragraph didn't parse for me.

-hpa


Re: [PATCH] x86: use explicit timing delay for pit accesses in kernel and pcspkr driver

2008-02-18 Thread Rene Herman

On 18-02-08 19:58, David P. Reed wrote:


--- linux-2.6.orig/include/asm-x86/i8253.h
+++ linux-2.6/include/asm-x86/i8253.h
@@ -12,7 +12,25 @@ extern struct clock_event_device *global
 
 extern void setup_pit_timer(void);
 
-#define inb_pit		inb_p
-#define outb_pit	outb_p
+/* accesses to PIT registers need careful delays on some platforms. Define
+   them here in a common place */
+static inline unsigned char inb_pit(unsigned int port)
+{
+   /* delay for some accesses to PIT on motherboard or in chipset must be
+  at least one microsecond, but be safe here. */
+   unsigned char value = inb(port);
+   udelay(2);
+   return value;
+}
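
For context, a sketch of how these PIT accessors end up being used by the
pcspkr driver (from memory, not verbatim; details may differ):

#include <linux/spinlock.h>
#include <asm/io.h>
#include <asm/i8253.h>		/* i8253_lock, inb_pit/outb_pit */
#include <asm/timex.h>		/* PIT_TICK_RATE */

static void pcspkr_set_tone(unsigned int hz)
{
	unsigned int count = hz ? PIT_TICK_RATE / hz : 0;
	unsigned long flags;

	spin_lock_irqsave(&i8253_lock, flags);
	if (count) {
		/* open the speaker gate and enable counter 2 */
		outb_p(inb_p(0x61) | 3, 0x61);
		/* counter 2: mode 3 (square wave), two-byte write */
		outb_pit(0xB6, 0x43);
		/* program the divisor for the requested frequency */
		outb_pit(count & 0xff, 0x42);
		outb_pit((count >> 8) & 0xff, 0x42);
	} else {
		/* silence: close the speaker gate */
		outb(inb_p(0x61) & 0xFC, 0x61);
	}
	spin_unlock_irqrestore(&i8253_lock, flags);
}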


With the remark that (at least) the PIT is accessed at a time when 
microseconds, and hence udelay, are still a total fiction, this looks obvious 
otherwise.


Now with respect to the original pre port 80 "jmp $+2" I/O delay (which the 
Pentium obsoleted) I suppose it'll probably be okay even without fixing that 
specifically but do note such -- it's a vital part of the problem.
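
The delay being referred to was an asm macro along these lines (historical
sketch; exact name and placement varied over time):

/* Two short jumps in a row; on a 386/486 each one just flushes the
 * prefetch queue, which was considered delay enough for slow ISA I/O. */
#define SLOW_DOWN_IO_JMP \
	__asm__ __volatile__("jmp 1f\n1:\tjmp 1f\n1:")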


Rene.


Re: [PATCH] x86: use explicit timing delay for pit accesses in kernel and pcspkr driver

2008-02-18 Thread Alan Cox
On Mon, 18 Feb 2008 13:58:41 -0500 (EST)
"David P. Reed" <[EMAIL PROTECTED]> wrote:

> x86: use explicit timing delay for pit accesses in kernel and pcspkr driver

Both look good to me now