Maybe I am wrong, but if you are detecting 40-wire cables in order to set
them to DMA/33, why does the check also include 80-wire cables, configuring
them to DMA/33 too?
With this patch my nvidia4 IDE controller detects and configures correctly:
DMA/100 for my HD and DMA/33 for my DVD (the first
On Wed, 2 May 2007, Daniel J Blueman wrote:
Date: Wed, 2 May 2007 16:43:44 +0100
From: Daniel J Blueman <[EMAIL PROTECTED]>
To: Rajib Majumder <[EMAIL PROTECTED]>
Cc: Linux Kernel
Subject: Re: Kernel Scalability
Resent-Date: Wed, 02 May 2007 17:44:58 +0200
Resent-From: <[EMAIL PROTECTED]>
On
Here is something I don't understand...
It seems there is a maintainer, the Namesys people, which is what I was
supposing, and they are probably the ones most qualified for reiser4,
but you also seem to imply that they are not interested right now in
kernel inclusion, since they are not asking "in
Hi,
I reported this also for the 2.6.20 kernel.
The new libata with the nVidia CK804 controller initializes the disk in
DMA/33, while with 2.6.19.5 and earlier the disk is correctly initialized
in DMA/100.
The cable is OK, and with older kernels the disks run without trouble.
The system has two sata
Well, the cable is OK, of course I checked.
On Wed, 7 Feb 2007, Robert Hancock wrote:
Date: Wed, 07 Feb 2007 22:36:58 -0600
From: Robert Hancock <[EMAIL PROTECTED]>
To: Alan <[EMAIL PROTECTED]>,
linux-kernel
Cc: Luigi Genoni <[EMAIL PROTECTED]>
Subject: Re: [BUG?] ata disk running maximum
I did the test you asked and yes, it is consistently booting at DMA33 with
2.6.20 and DMA100 with 2.6.19.3 (20 reboots, 10 2.6.20 and 10 2.6.19 in
sparse order).
I am compiling a 2.6.20 kernel with the older pata_amd.c driver and will
let you know. Seeing the diff, I do expect it to compile
Hi,
since upgrading to kernel 2.6.20 my PATA disk, using the new pata driver,
is initialized at DMA33 mode at most, as you can see from:
pata_amd :00:06.0: version 0.2.7
PCI: Setting latency timer of device :00:06.0 to 64
ata5: PATA max UDMA/133 cmd 0x1F0 ctl 0x3F6 bmdma 0xF000 irq 14
As I reported when I tested this patch, it works, but I could see an
abnormally high load average while triggering the error message. Anyway, it
is better to have a load average three or four times higher than what
you would expect than a crash/reboot, isn't it? :)
Luigi Genoni
p.s.
will
Reproduced.
It took more or less one hour to reproduce it. I could reproduce it only
by also running irqbalance 0.55 and commenting out the sleep 1. The
message in syslog is the same, and then, after a few seconds I think,
KABOOM! System crash and reboot.
I also tested a similar system that has 4
On Mon, 8 Jan 2007, Dirk wrote:
Alright. I came to discuss an idea I had because I realized that installing
Windows and running Linux in VMware is the only _fun_ way to play "real"
Games and have Linux at the same time.
And everyone who says I'm a troll doesn't like Games or simple things.
Just curious why on a dual core 2600MHz Opteron I get:
phoenix:{root}:/tmp> gcc -DCMOV -Wall -O2 t.c
phoenix:{root}:/tmp>time ./a.out
6
real    0m0.117s
user    0m0.120s
sys     0m0.000s
phoenix:{root}:/tmp>gcc -Wall -O2 t.c
phoenix:{root}:/tmp> time ./a.out
6
real    0m0.136s
Just to make it clearer why I am so curious, this is from an X86_64 X2 3800+:
DarkStar:{venom}:/tmp> gcc -DCMOV -Wall -O2 t.c
DarkStar:{venom}:/tmp>time ./a.out
6
real    0m0.151s
user    0m0.150s
sys     0m0.000s
DarkStar:{venom}:/tmp> gcc -Wall -O2 t.c
DarkStar:{venom}:/tmp> time ./a.out