Re: 2.6.20-rc3 regression? hdparm shows 1/2...1/3 the throughput

2007-01-06 Thread Stefan Richter
> Tim Schmielau wrote:
>> See
>>   http://lkml.org/lkml/2007/1/2/75
>> for the solution.

OK; the fix was already committed a few days ago. I assume this closes
the issue; I will follow up after -rc4 in the unlikely case that it doesn't.
-- 
Stefan Richter
-=-=-=== ---= --==-
http://arcgraph.de/sr/


Re: 2.6.20-rc3 regression? hdparm shows 1/2...1/3 the throughput

2007-01-06 Thread Stefan Richter
Tim Schmielau wrote:
> On Sat, 6 Jan 2007, Stefan Richter wrote:
> 
>> Did anybody else notice this?  The result of "hdparm -t" under 2.6.20-rc
>> seems to be less than half of what you get on 2.6.19.  However, disk I/O
>> did *not* get slower according to bonnie++.
> 
> yes. See
>   http://lkml.org/lkml/2007/1/2/75
> for the solution.
> 
> Tim

Thanks. I should have remembered that.

# hdparm -tT /dev/hda

/dev/hda:
 Timing cached reads:   10864 MB in  2.00 seconds = 5440.10 MB/sec
 Timing buffered disk reads:   58 MB in  3.06 seconds =  18.94 MB/sec
# cat /sys/block/hda/queue/scheduler
noop anticipatory deadline [cfq]
# echo anticipatory > /sys/block/hda/queue/scheduler
# hdparm -tT /dev/hda

/dev/hda:
 Timing cached reads:   10680 MB in  2.00 seconds = 5347.20 MB/sec
 Timing buffered disk reads:  148 MB in  3.02 seconds =  49.03 MB/sec
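
In case anyone wants to repeat the comparison on another box, here is a
rough shell sketch (untested as-is) that cycles through all available
schedulers and times each one.  Run it as root on an otherwise idle drive;
/dev/hda is just the device from the example above, adjust it for your
setup, and note that it leaves the last scheduler active:

  # cycle through all I/O schedulers and time buffered reads under each
  dev=hda
  for sched in $(tr -d '[]' < /sys/block/$dev/queue/scheduler); do
      echo "$sched" > /sys/block/$dev/queue/scheduler
      echo "=== $sched ==="
      hdparm -t /dev/$dev
  done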

-- 
Stefan Richter
-=-=-=== ---= --==-
http://arcgraph.de/sr/


Re: 2.6.20-rc3 regression? hdparm shows 1/2...1/3 the throughput

2007-01-06 Thread Tim Schmielau
On Sat, 6 Jan 2007, Stefan Richter wrote:

> Did anybody else notice this?  The result of "hdparm -t" under 2.6.20-rc
> seems to be less than half of what you get on 2.6.19.  However, disk I/O
> did *not* get slower according to bonnie++.

yes. See
  http://lkml.org/lkml/2007/1/2/75
for the solution.

Tim


2.6.20-rc3 regression? hdparm shows 1/2...1/3 the throughput

2007-01-06 Thread Stefan Richter
Did anybody else notice this?  The result of "hdparm -t" under 2.6.20-rc
seems to be less than half of what you get on 2.6.19.  However, disk I/O
did *not* get slower according to bonnie++.

That is, there is no harm beyond user confusion; hdparm is lying for some
reason.

Below is the output of hdparm v6.6 and bonnie++ 1.93c.  I ran hdparm three
times per configuration and bonnie++ twice, on an otherwise idle system, to
make sure I got typical readings.

hdparm -tT /dev/???:

2.6.19.1, IDE:
 Timing cached reads:   10728 MB in  2.00 seconds = 5371.95 MB/sec
 Timing buffered disk reads:  154 MB in  3.01 seconds =  51.11 MB/sec

2.6.19.1, SATA:
 Timing cached reads:   8028 MB in  2.00 seconds = 4017.69 MB/sec
 Timing buffered disk reads:  220 MB in  3.01 seconds =  73.18 MB/sec

2.6.19.1 + latest 1394 drivers, FireWire 800:
 Timing cached reads:   10216 MB in  2.00 seconds = 5114.73 MB/sec
 Timing buffered disk reads:  214 MB in  3.03 seconds =  70.71 MB/sec

2.6.19.1 + latest 1394 drivers, FireWire 400:
 Timing cached reads:   8892 MB in  2.00 seconds = 4449.76 MB/sec
 Timing buffered disk reads:   74 MB in  3.08 seconds =  24.02 MB/sec

2.6.20-rc3, IDE:
 Timing cached reads:   11492 MB in  2.00 seconds = 5753.76 MB/sec
 Timing buffered disk reads:   56 MB in  3.10 seconds =  18.09 MB/sec

2.6.20-rc3, SATA:
 Timing cached reads:   10736 MB in  2.00 seconds = 5374.70 MB/sec
 Timing buffered disk reads:  102 MB in  3.00 seconds =  33.99 MB/sec

2.6.20-rc3 + latest 1394 drivers, FireWire 800:
 Timing cached reads:   9476 MB in  2.00 seconds = 4742.33 MB/sec
 Timing buffered disk reads:   70 MB in  3.08 seconds =  22.69 MB/sec

2.6.20-rc3 + latest 1394 drivers, FireWire 400:
 Timing cached reads:   10336 MB in  2.00 seconds = 5173.65 MB/sec
 Timing buffered disk reads:   40 MB in  3.09 seconds =  12.95 MB/sec



bonnie++ on the SATA disk with a single 47% filled reiserfs partition:

2.6.19.1, SATA:
> Version 1.93c       --Sequential Output-- --Sequential Input- --Random-
> Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> stein            2G   203  99 57389  18 26943   9   856  93 66032  17 190.8   6
> Latency             48553us   295ms  1130ms   196ms 18075us   500ms
> Version 1.93c       --Sequential Create-- --Random Create--
> stein               -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>               files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>                  16 16541  55 +++++ +++ 18794  71 22933  79 +++++ +++ 24005  95
> Latency               107ms   129us   175ms 16613us    76us   236us
That is, 56.0 MB/s write and 64.5 MB/s read performance for sequential block I/O.
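
(The MB/s figures here and below are simply the sequential block K/sec
columns divided by 1024, assuming bonnie++ reports binary kilobytes per
second: 57389 / 1024 ~= 56.0 MB/s write and 66032 / 1024 ~= 64.5 MB/s read.)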

2.6.20-rc3, SATA:
> Version 1.93c       --Sequential Output-- --Sequential Input- --Random-
> Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> stein            2G   210  99 57340  18 26920  10  1166  89 66805  17 195.3   5
> Latency             47794us   284ms   869ms   249ms 36229us   533ms
> Version 1.93c       --Sequential Create-- --Random Create--
> stein               -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>               files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>                  16 19645  67 +++++ +++ 21767  83 26951  94 +++++ +++ 24859  98
> Latency               231ms    85us   120us   469us    69us   118us
That is, 56.0 MB/s write and 65.2 MB/s read performance for sequential block I/O.



It's a 32-bit kernel on a dual-core x86 PC:
> $ uname -a
> Linux stein 2.6.20-rc3 #7 SMP PREEMPT Sat Jan 6 17:07:30 CET 2007 i686 Intel(R) Core(TM)2 CPU T7200 @ 2.00GHz GenuineIntel GNU/Linux

The motherboard is i945GT-based.
> $ /usr/sbin/lspci
> 00:00.0 Host bridge: Intel Corporation Mobile Memory Controller Hub (rev 03)
> 00:01.0 PCI bridge: Intel Corporation Mobile PCI Express Graphics Port (rev 03)
> 00:02.0 VGA compatible controller: Intel Corporation Mobile Integrated Graphics Controller (rev 03)
> 00:1b.0 Class 0403: Intel Corporation 82801G (ICH7 Family) High Definition Audio Controller (rev 01)
> 00:1c.0 PCI bridge: Intel Corporation 82801G (ICH7 Family) PCI Express Port 1 (rev 01)
> 00:1c.4 PCI bridge: Intel Corporation 82801GR/GH/GHM (ICH7 Family) PCI Express Port 5 (rev 01)
> 00:1d.0 
