From: Jeff Garzik <[EMAIL PROTECTED]>
Date: Wed, 09 May 2007 18:46:16 -0400
> Bartlomiej Zolnierkiewicz wrote:
> > Bartlomiej Zolnierkiewicz (11):
> > ide: fix UDMA/MWDMA/SWDMA masks (v3)
> > ide: rework the code for selecting the best DMA transfer mode (v3)
> > ide: add
Bartlomiej Zolnierkiewicz wrote:
On Thursday 10 May 2007, Jeff Garzik wrote:
The limit was raised to 400K IIRC.
That's (good) news to me, here goes the actual 150K patch:
Thanks. I did in fact receive copies from vger, so it went through.
Jeff
-
To unsubscribe from this list:
Andrew Morton wrote:
Jeff Garzik <[EMAIL PROTECTED]> wrote:
Has this seen testing/exposure in -mm tree?
argh. If this was in a file called
ide-rework-the-code-for-selecting-the-best-DMA-transfer-mode.patch then it
would be so easy.
logs into hera
greps
ah, it's hidden in ide-max-dma-mode-v3.patch.
On Wed, 09 May 2007 18:47:23 -0400
Jeff Garzik <[EMAIL PROTECTED]> wrote:
> Bartlomiej Zolnierkiewicz wrote:
> > * the code for selecting and programming the best DMA transfer mode
> > has been reworked to be cleaner, more generic and more libata-like,
> > (> 500 LOCs gone and this change
Bartlomiej Zolnierkiewicz wrote:
Bartlomiej Zolnierkiewicz (11):
ide: fix UDMA/MWDMA/SWDMA masks (v3)
ide: rework the code for selecting the best DMA transfer mode (v3)
ide: add ide_tune_dma() helper
ide: make /proc/ide/ optional
ide: split off ioctl handling from
Bartlomiej Zolnierkiewicz wrote:
* the code for selecting and programming the best DMA transfer mode
has been reworked to be cleaner, more generic and more libata-like,
(> 500 LOCs gone and this change allows the change described below)
Bartlomiej Zolnierkiewicz (11):
ide: rework
On 8/20/05, Bartlomiej Zolnierkiewicz <[EMAIL PROTECTED]> wrote:
> On 8/19/05, Alan Cox <[EMAIL PROTECTED]> wrote:
> > On Gwe, 2005-08-19 at 11:02 +0200, Bartlomiej Zolnierkiewicz wrote:
> > > lkml.org/lkml/2005/1/27/20
> > >
> > > AFAIK CS5535 driver was never ported to 2.6.x. Somebody needs to
On Gwe, 2005-08-19 at 11:02 +0200, Bartlomiej Zolnierkiewicz wrote:
> lkml.org/lkml/2005/1/27/20
>
> AFAIK CS5535 driver was never ported to 2.6.x. Somebody needs to
> port it to 2.6.x kernel, cleanup to match kernel coding standards and test.
That was done some time ago and posted to various
On 8/19/05, Alan Cox <[EMAIL PROTECTED]> wrote:
> On Iau, 2005-08-18 at 23:37 +0200, Bartlomiej Zolnierkiewicz wrote:
> > +	},{	/* 14 */
> > +		.name		= "Revolution",
> > +		.init_hwif	= init_hwif_generic,
> > +		.channels	= 2,
> > +		.autodma	= AUTODMA,
Linus Torvalds wrote:
Btw, things like this:
+#define IDEFLOPPY_TICKS_DELAY HZ/20 /* default delay for ZIP 100
(50ms) */
are just bugs waiting to happen.
Needs parentheses: ((HZ)/20)
Or one could just use the msecs_to_jiffies() macro.
Cheers
On 8/18/05, Linus Torvalds <[EMAIL PROTECTED]> wrote:
>
>
> On Thu, 18 Aug 2005, Bartlomiej Zolnierkiewicz wrote:
> >
> > 3 obvious fixes + support for 2 new controllers
> > (just new PCI IDs).
>
> Btw, things like this:
>
> +#define IDEFLOPPY_TICKS_DELAY HZ/20 /* default delay for
On Maw, 2005-07-05 at 20:14, Jens Axboe wrote:
> IDE still has much lower overhead per command than your average SCSI
> hardware. SATA with FIS even improves on this, definitely a good thing!
But SCSI overlaps them while in PATA they are dead time. That's why PATA
is so demanding of large I/O
Jens Axboe <[EMAIL PROTECTED]> wrote:
> >>> Some more investigation - it appears to be broken read-ahead, actually.
> >>>
> >>> --- mm/readahead.c~	2005-07-08 11:16:14.0 +0200
> >>> +++ mm/readahead.c	2005-07-08 11:17:49.0 +0200
> >>> @@ -351,7 +351,9 @@
> >>> 	ra->cache_hit += nr_to_read;
On Fri, Jul 08 2005, Steven Pratt wrote:
> Jens Axboe wrote:
>
> >On Fri, Jul 08 2005, Andrew Morton wrote:
> >
> >
> >>Jens Axboe <[EMAIL PROTECTED]> wrote:
> >>
> >>
> >>>Some more investigation - it appears to be broken read-ahead, actually.
> >>>hdparm does repeated read(), lseek() loops
On Fri, Jul 08 2005, Ingo Molnar wrote:
>
> * Jens Axboe <[EMAIL PROTECTED]> wrote:
>
> > But! I used hdparm -t solely, 2.6 was always ~5% faster than 2.4. But
> > using -Tt slowed down the hd speed by about 30%. So it looks like some
> > scheduler interaction, perhaps the memory timing loops
On Fri, 2005-07-08 at 10:06 +1000, Grant Coady wrote:
> I've not been able to get dual channel I/O speed faster than single
> interface speed, either as 'md' RAID0 or simultaneous reading or
> writing done the other day:
>
> Time to write or read 500MB file:
>
> summary 2.4.31-hf1
Jens Axboe <[EMAIL PROTECTED]> wrote:
>
> Some more investigation - it appears to be broken read-ahead, actually.
> hdparm does repeated read(), lseek() loops which causes the read-ahead
> logic to mark the file as being in cache (since it reads the same chunk
> every time). Killing the INCACHE
On Fri, Jul 08 2005, Jens Axboe wrote:
> On Tue, Jul 05 2005, Linus Torvalds wrote:
> > So my gut feel is that the reason hdparm and dd from the raw partition
> > gives different performance is not so much the driver, but probably that
> > we've tweaked read-ahead for file access or something
On Thu, 07 Jul 2005 18:32:52 -0400, Mark Lord <[EMAIL PROTECTED]> wrote:
>
>hdparm can also use O_DIRECT for the -t timing test.
I've not been able to get dual channel I/O speed faster than single
interface speed, either as 'md' RAID0 or simultaneous reading or
writing done the other day:
Time
Note:
hdparm can also use O_DIRECT for the -t timing test.
Eg. hdparm --direct -t /dev/hda
Bartlomiej Zolnierkiewicz wrote:
BIOS setting is irrelevant and ~14MB/s for UDMA33 is OK.
CPU cycles are wasted somewhere else...
After seeing how poorly Linux copes with bad info coming out of ACPI, I
no longer assume that BIOS information is ignored. Thought it was worth
mentioning.
--
On Wed, 6 Jul 2005, Grant Coady wrote:
>
> Sure, take a while longer to vary by block size. One effect seems
> to be wrong is interaction between /dev/hda and /dev/hdc in 'peetoo',
> the IDE channels not independent?
Well, looking at your numbers for "silly" and "tosh", which were perhaps
On Tue, 5 Jul 2005 17:51:50 -0700 (PDT), Linus Torvalds <[EMAIL PROTECTED]>
wrote:
>
>Btw, can you try this same thing (or at least a subset) with a large file
>on a filesystem? Does that show the same pattern, or is it always just the
>raw device?
>
Sure, take a while longer to vary by block
On Tue, 5 Jul 2005 16:21:26 +0200, Jens Axboe <[EMAIL PROTECTED]> wrote:
> # gcc -Wall -O2 -o oread oread.c
> # time ./oread /dev/hda
Executive Summary
Comparing 'oread' with hdparm -tT on latest 2.4 vs 2.6 stable on
various x86 boxen. Performance drops for 2.6, sometimes:
Jens Axboe wrote:
On Tue, Jul 05 2005, Ondrej Zary wrote:
oread is faster than dd, but still not as fast as 2.4. In 2.6.12, HDD
led is blinking, in 2.4 it's solid on during the read.
Oh, and please do test 2.6 by first setting the deadline scheduler for
hda. I can see you are using the 'as' scheduler right now.
Jens Axboe wrote:
On Tue, Jul 05 2005, Linus Torvalds wrote:
On Tue, 5 Jul 2005, Jens Axboe wrote:
Looks interesting, 2.6 spends oodles of times copying to user space.
Lets check if raw reads perform ok, please try and time this app in 2.4
and 2.6 as well.
I think it's just that 2.4.x
On Tue, 5 Jul 2005, Jens Axboe wrote:
>
> Looks interesting, 2.6 spends oodles of times copying to user space.
> Lets check if raw reads perform ok, please try and time this app in 2.4
> and 2.6 as well.
I think it's just that 2.4.x used to allow longer command queues. I think
MAX_NR_REQUESTS
Jens Axboe wrote:
On Tue, 2005-07-05 at 14:35 +0200, Ondrej Zary wrote:
2.4.26
[EMAIL PROTECTED]:/home/rainbow# time dd if=/dev/hda of=/dev/null bs=512
count=1048576
1048576+0 records in
1048576+0 records out
real	0m23.858s
user	0m1.750s
sys	0m15.180s
Perhaps some read-ahead bug.
André Tomt wrote:
Al Boldi wrote:
Bartlomiej Zolnierkiewicz wrote: {
On 7/4/05, Al Boldi <[EMAIL PROTECTED]> wrote:
Hdparm -tT gives 38mb/s in 2.4.31
Cat /dev/hda > /dev/null gives 2% user 33% sys 65% idle
Hdparm -tT gives 28mb/s in 2.6.12
Cat /dev/hda > /dev/null gives 2% user 25% sys 0% idle
On Tue, 2005-07-05 at 15:02 +0200, Ondrej Zary wrote:
Ok, looks alright for both. Your machine is quite slow, perhaps that is
showing the slower performance. Can you try and make HZ 100 in 2.6 and
test again? 2.6.13-recent has it as a config option, otherwise edit
include/asm/param.h