Tom Brinkman wrote:
> 
> /dev/hdb:  [8.4g WD Caviar, Mdk 6.0 and swap are on the first 3g's.
>  Timing buffer-cache reads:   128 MB in  1.01 seconds =126.73 MB/sec
>  Timing buffered disk reads:  64 MB in 12.62 seconds = 5.07 MB/sec

Looks sane...

> hdparm v:
> /dev/hdb:
>  multcount    =  0 (off)
>  I/O support  =  0 (default 16-bit)
>  unmaskirq    =  0 (off)
>  using_dma    =  0 (off)
>  keepsettings =  0 (off)
>  nowerr       =  0 (off)
>  readonly     =  0 (off)
>  readahead    =  8 (on)
>  geometry     = 1027/255/63, sectors = 16514064, start = 0

Run:  hdparm -m16 -c1 -u1 -d1 -k1 -a128 /dev/hdb

If it runs without error, put the command at the end of /etc/rc.d/rc.local
so the settings are reapplied at every boot.
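For example, the tail of your rc.local might end up looking something like
this (the device name and the hdparm path are assumptions -- match them to
your own system):

```shell
# /etc/rc.d/rc.local (excerpt) -- re-tune /dev/hdb at every boot
/sbin/hdparm -m16 -c1 -u1 -d1 -k1 -a128 /dev/hdb
```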

The command will set up 16-sector multi-sector transfers, turn on
32-bit interface support, turn on interrupt unmasking (which allows
other interrupts to be serviced while the hard drive is busy), turn on
DMA transfers, tell the driver to keep these settings over a reset, and
set filesystem readahead to 128 sectors.

After running this, rerun hdparm -tT a few times to see what the new
rates are.  If they still seem low, tweak the -m (multi-sector count)
and -a (readahead) values.


> --
>        Altho I'm runnin a 133.6mhz FSB, my pci bus is in spec at
> 33.4mhz (133.6/4), so that shouldn't be a factor.  When I saw the
> Mdk update to fix the 'fs busy' problem at shutdown, I installed
> the 6.1 kernel, 2.2.13-7mdk with the 6.1 'initscripts' rpm also.
> WD's EZbios is _not_ installed.  LBA is enabled, Award 4.6 bios,
> Aopen ax6bc r1.2 motherboard. 128mb, 125mhz PC100 cas2 7ns ram set
> to 133.6mhz cas3. System's been 'bulletproof' stable for 11 months.
> SafBench reports no cpu/L2/cache/ram errors, no matter how long
> I let it run.  At 467.7mhz, I'm well within the Deschutes core
> limit of ~500mhz.  I did adjust the L2 latency from 5 to 7, but
> with no performance hit. That doesn't happen till L2,8.

Way too much detail, but I can't help but LOVE that you've got your
ducks in a row on this one!  Thanks!
 
> Questions:  I thought the recent kernels had DMA support? .. and
>   that the HDD's were 'optimized' at boot, no? ...sure doesn't
>   look like it, ..doesn't hardly look like anything's enabled
>   according to hdparm v (?) ... but according to dmesg, DMA is
>   enabled (?)  I have compiled/installed UNIXbench 4.0.1.  Maybe
>   that would be a better reference?

Under Mandrake 6.1, yes, DMA transfers are enabled by default at
boot-time.  No such tricks under 6.0.
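If you want to confirm what the kernel actually did at boot, grep the boot
messages for the DMA mode.  On a live system you'd pipe dmesg itself
(dmesg | grep -i dma); the line echoed below is just an illustrative sample
of what to look for, not output from your machine:

```shell
# Grep a sample boot-log line for the negotiated DMA mode; on a real
# system, replace the echo with: dmesg
echo "hdb: 16514064 sectors (8455 MB), CHS=1027/255/63, UDMA(33)" \
  | grep -o 'UDMA([0-9]*)'
```

If nothing like UDMA(33) (or DMA) shows up for your drive, the kernel left
DMA off and hdparm -d1 is doing the enabling.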
 
>     I don't use Linux for anything heavy duty now, but I'm gonna
>   need it optimized shortly 'cause I'm fixin' to install an UNIX
>   flight sim, Flight Gear FS.  If this is all a little OT for this
>   group, mea culpa and I apologize, but I'd still appreciate it
>   if someone would then email me and point me in the right
>   direction.

Actually, this is a great newbie topic, and one that doesn't get
discussed very much.  hdparm seems to be hidden enough that people just
don't notice it (or realize the difference it can make!).

Here are the readings I get from a dual Celeron 400 with a Quantum
Bigfoot 6.4G drive on a PIIX4 interface:

        /dev/hda:
         multcount    =  0 (off)
         I/O support  =  0 (default 16-bit)
         unmaskirq    =  0 (off)
         using_dma    =  0 (off)
         keepsettings =  0 (off)
         nowerr       =  0 (off)
         readonly     =  0 (off)
         readahead    =  8 (on)
         geometry     = 13446/15/63, sectors = 12706470, start = 0
        [root@localhost /root]# hdparm -tT /dev/hda
         
        /dev/hda:
         Timing buffer-cache reads:   128 MB in  1.78 seconds =71.91 MB/sec
         Timing buffered disk reads:  64 MB in 18.70 seconds = 3.42 MB/sec

And after running the hdparm optimizations:

        /dev/hda:
         setting fs readahead to 128
         setting 32-bit I/O support flag to 1
         setting multcount to 16
         setting unmaskirq to 1 (on)
         setting using_dma to 1 (on)
         setting keep_settings to 1 (on)
         multcount    = 16 (on)
         I/O support  =  1 (32-bit)
         unmaskirq    =  1 (on)
         using_dma    =  1 (on)
         keepsettings =  1 (on)
         readahead    = 128 (on)
        [root@localhost /root]# hdparm -tT /dev/hda
         
        /dev/hda:
         Timing buffer-cache reads:   128 MB in  1.78 seconds =71.91 MB/sec
         Timing buffered disk reads:  64 MB in  9.31 seconds = 6.87 MB/sec

Roughly a doubling in performance for this extremely slow drive.
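For what it's worth, the MB/sec figures hdparm reports are just size
divided by elapsed time, so the speedup is easy to double-check from the
timings above; a quick sketch:

```shell
# Recompute the buffered-disk-read rates and the resulting speedup
# from the elapsed times in the two runs above:
awk 'BEGIN {
    before = 64 / 18.70              # MB / seconds, before tuning
    after  = 64 / 9.31               # and after
    printf "before: %.2f MB/sec, after: %.2f MB/sec, speedup: %.2fx\n",
           before, after, after / before
}'
# prints: before: 3.42 MB/sec, after: 6.87 MB/sec, speedup: 2.01x
```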


Good luck!

-- 
Steve Philp
Network Administrator
Advance Packaging Corporation
[EMAIL PROTECTED]
