IDE IRQ/DMA timeouts under load - HPT366 - observations

1999-11-13 Thread Glenn McGrath

Hi, I'm experiencing timeout problems with my HPT366 controller card. This is
a known problem (I think), but I thought you might be interested in my
observations anyway.

Yesterday I was creating an IDE RAID0 array using
kernel 2.2.13 with
raid0145-19990824-2.2.11 patch (for RAID v0.90)
ide.2.2.13.19991104 patch (for my HPT366 controller)
on
AMD K6-2-266
128MB SDRAM
hda = Quantum KA 18GB (bonnie = 20MB/s)
hdc = Quantum EX 6.4GB (bonnie = 14MB/s)
hde = Quantum CR 6.4GB (~16MB/s, I think)
hdg = Quantum CR 6.4GB (ditto)
Onboard VIA chipset (Apollo MVP3) controlling hda + hdc
PCI HPT366 (rev 1) IDE card (Abit Hotrod) controlling hde + hdg
Other hardware: Adaptec 2940 controlling 2 CD-ROMs, Realtek 8139, Riva TNT

Anyway, I was benchmarking away using bonnie, and it was going great guns with
a 2-way RAID0: linear performance increase, good speed. I tried various
combinations:
1) 20MB/s on hda + hdc
2) 29MB/s on hde + hdg
3) 25MB/s on hda + hde
4) 25MB/s on hdc + hdg
I would have thought 3 and 4 would be different, as hda is faster than hdc.
Anyway, the point is that I could use any combination of two drives in a RAID0
without a timeout problem.
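
(For reference, each 2-way set was a plain raidtools-0.90 RAID0. An
/etc/raidtab for the hde + hdg pair looks roughly like the sketch below; the
partition numbers and chunk size are only illustrative, not my exact config:

  # 2-way stripe across the two HPT366 drives
  raiddev /dev/md0
      raid-level              0
      nr-raid-disks           2
      persistent-superblock   1
      chunk-size              32
      device                  /dev/hde1
      raid-disk               0
      device                  /dev/hdg1
      raid-disk               1

After that it is just "mkraid /dev/md0", "mke2fs /dev/md0" and a mount before
running bonnie on it.)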

Next I was going to do some benchmarks with a 3-way RAID0, but I kept getting
those pesky DMA and IRQ timeouts, like:

hde: timeout waiting for DMA
hde: irq timeout: status=0x58 { DriveReady SeekComplete DataRequest }
hde: DMA disabled
and also: read_intr: status=0x50 { DriveReady SeekComplete }

I only get them on drives hde and hdg (the HPT366 controller).
I got bonnie to run without timeouts on hda + hdc + hde at 25MB/s.
When I tried hde + hdg plus either hda or hdc, I consistently got timeouts.
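
(The bonnie runs themselves were nothing special, roughly the following; the
mount point and label are just examples, and 256MB is about twice RAM so the
buffer cache doesn't hide the disks:

  # assumes the array is mounted on /mnt/md0
  bonnie -d /mnt/md0 -s 256 -m raid0-test
)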

I tried compiling the kernel with the HPT366 fast-interrupt option, but I
didn't see any difference (and I wasn't really sure what it was supposed to
do, but the name seemed to suggest it was appropriate).

It seems to me it may just be a problem at high load (>29MB/s?), as the 3-way
RAID that I did get to work was limited in speed because it used the slowest
drive, and the slowest drive was on a stable controller (the VIA).
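
One more test I could try is dropping the HPT366 drives to a lower UDMA mode
(or to PIO) with hdparm and seeing whether the timeouts go away. Something
like the following, going by the hdparm man page; I haven't run this on the
box yet:

  # keep DMA on but force UDMA mode 2 (-X66 = 64 + mode) on both HPT366 drives
  hdparm -d1 -X66 /dev/hde /dev/hdg
  # or give up on DMA entirely and fall back to PIO
  hdparm -d0 /dev/hde /dev/hdg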

I figure it's a known problem, but since I did the tests I may as well share
the info.

I did have this problem a while back, but then I destroyed my HPT366
controller card; I've just started playing with its replacement again.

I'm not on the linux-kernel mailing list, so please CC me if you wish to
discuss it further with me (maybe there's another test I could do).

Thanks

Glenn McGrath




Re: I2O support?

1999-11-13 Thread Stephen Waters

As an FYI,

Alan Cox has been doing work on I2O for a DPT card... see
http://www.linux.org.uk/diary/

-s

Matthew Clark wrote:
> 
> Hi guys - could someone please tell me if any Linux kernel currently has
> support for I2O (Intelligent Input/Output)?
> 
> Sorry to cross-post this - but I need to know urgently...
> 
> While I'm here - does anyone know what is a decent amount of I2O cache for a
> RAID controller?  We only seem to have 16MB out of a possible 128MB... it has
> been suggested that this could be the cause of an IO bottleneck we're having
> with an HP NetServer LH4...
> 
> Regards,
> 
> Matthew Clark.
> --
> NetDespatch Ltd - The Internet Call Centre.
> http://www.netdespatch.com

-- 
Pound for pound, the amoeba is the most vicious animal on earth.



RAID 5 Array fails if first disk is missing

1999-11-13 Thread Marc Haber

Hi!

I have a RAID 5 array running off three disks: sda, sdb and sdc. The
system is set up so that it can boot from sda or sdb; both disks have
identical copies of the root fs. The kernel is 2.2.13 with the RAID
patches from August 24 and raidtools-0.90 from the same date.

For a test, I disconnected sda while system power was off and expected
the system to come up on the remaining disks. However, the RAID array
wasn't detected:

|autodetecting RAID arrays
|autorun...
|   ... autorun DONE.

/proc/mdstat shows no active RAID devices, raidstart /dev/md0 gives:
|blkdev_open() failed: -6
|md: could not lock sdp15, zero-size= Marking faulty.
|could not import sdp15!
|autostart sdp14 failed!
|/dev/md0: Invalid argument

When I plug the first disk back in, everything works again.

I suspect that there is something wrong with the persistent
superblocks on the second and/or the third disk. Can I rewrite the
persistent superblock?
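
If I understand the raidtools-0.90 docs correctly, rewriting them would mean
re-running mkraid over the existing array, roughly like this (assuming my
/etc/raidtab still matches the on-disk layout exactly; otherwise this would
destroy the array, so I haven't dared to try it yet):

  # re-create the persistent superblocks from /etc/raidtab
  # (only safe if raidtab matches the existing layout exactly)
  mkraid --really-force /dev/md0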

How am I supposed to get the system booted in case of a disk failure?

Greetings
Marc

-- 
-- !! No courtesy copies, please !! -
Marc Haber  |   " Questions are the | Mail address in header
Karlsruhe, Germany  | Beginning of Wisdom " | Phone: *49 721 966 32 15
Nordisch by Nature  | Lt. Worf, TNG "Rightful Heir" | Fax: *49 721 966 31 29



FDisk 2.9

1999-11-13 Thread Lauri Tischler

Where can I find fdisk 2.9?
Preferably as a .deb.
The Mylex DAC requires 2.9.
Cheers...
-- 
[EMAIL PROTECTED]  * Whut? Whut? Suuurfff.. *



Re: I2O support?

1999-11-13 Thread Alan Cox

> Hi guys - could someone please tell me if any Linux kernel currently has
> support for I2O (Intelligent Input/Output)?

In 2.3.x we have support for PCI-based I2O devices talking I2O LAN, I2O SCSI
and I2O block, to I2O spec 1.5.

Linux 2.2 has a board-specific driver for the Red Creek VPN card, and you can
get a board-specific driver for the DPT Decade/Century/Millennium series cards.

> While I'm here - does anyone know what is a decent amount of I2O cache for a
> RAID controller?  We only seem to have 16MB out of a possible 128MB... it has
> been suggested that this could be the cause of an IO bottleneck we're having
> with an HP NetServer LH4...

Unless the system is unusual, you should get better performance by increasing
main memory rather than disk-controller cache size. How much you want on the
board is obviously vendor-specific.

Alan