Skip Harrison wrote:
> My main drive with all Linux files on it is a WD 4.3g UDMA 33
> model.  I have
> UDMA turned on for this drive (again, a program from WD web site).  Using
> "hdparm" to check that dma mode is turned on for _both_ drives (turned on
> automatically by option in make config), I get 12 to 13 MB/sec on the WD.
> Using "hdparm" on the IBM drive, I get 17 MB/sec. Granted, 17 is
> faster than 13, but is it UDMA 66?

First off, I don't have a Promise ultra/66 controller, so perhaps I
shouldn't respond.  But I've been playing with udma/33, the udma patches and
raid for a year now, so I might be able to help with bits and pieces.

17MB/sec is a very good max transfer speed for an IBM 7200rpm drive.  I get
12-13MB/sec from Maxtor 5400rpm drives under udma33, and 15-16MB/sec from an
IBM 7200rpm drive under udma33 (all non-raided).

People have said to expect a 10% or so increase from moving a udma66 drive
from a udma33 channel to a udma66 channel.  My results with a friend's
HPT-366 (udma66) controller and a udma66 Western Digital drive didn't show
that much improvement.  We saw at most a 0.2MB/sec increase, though we did
not perform serious benchmarks, and the actual results seemed to vary quite
a bit from run to run (up to +/-1MB/sec, if I remember correctly).

Logically, I'm not sure how much performance increase you should expect when
doing long sustained reads from a single drive, as the drive's own maximum
throughput should really be the limiting factor.  You might see a more
measurable increase with udma66 if you ran two drives per channel (though we
have all seen reports of increases with a single drive when going from
udma33 to udma66).

Running two drives per channel and running hdparm on both at the same time
might be a useful test: can you see a difference between udma33 and udma66
under those circumstances?
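The concurrent-read test I have in mind might look something like the sketch
below.  The device names /dev/hdc and /dev/hdd are only placeholders for two
drives sharing one channel; substitute your own.

```shell
#!/bin/sh
# Sketch of the two-drives-one-channel test described above.
# /dev/hdc and /dev/hdd are assumed placeholders -- substitute the
# two drives that actually share a channel on your controller.

# Start a buffered-read timing test on both drives at the same time.
# hdparm -t reads from the drive itself, bypassing the filesystem cache,
# so running two at once loads the shared channel.
hdparm -t /dev/hdc &
hdparm -t /dev/hdd &

# Wait for both background tests to finish before comparing the numbers.
wait
```

Comparing those figures under udma33 vs. udma66 should show whether the
channel is the bottleneck.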

> I saw a "PDC20262.warning" file at the
> ftp.us.kernel.org/pub/linux/kernel/people/hedrick/ page.  (The PDC20262 is
> the chip on the Promise Ultra 66 card).  In it Andre Hedrick said
> that "you will have to call idex=ata66 to inform the driver of the
> existence".  I assume he means that one would have to do an "append
> ide2=ata66" at boot time to the LILO prompt, or set up a LILO boot option
> for the kernel that I complied in support for the Promise card.  I have
> tried both, but still do not get any higher than 17 MB/sec.

That is indeed what it means.  Actually, as far as I remember, "append blah
blah blah" is just for lilo.conf; at the actual LILO prompt you don't need
the append.  This is manual IDE tuning.  In the past, when Andre has
suggested such a fix, he has also indicated that you need to turn
CONFIG_IDEDMA_AUTO off in the kernel configuration.  I forget which menu
option that is, but make sure it reads CONFIG_IDEDMA_AUTO=n in your .config
before you do a make.  Note that this also turns auto-tuning off for your
onboard udma controller, so you'd want to boot with "ide=dma ide1=dma
ide2=ata66 ide3=ata66".  You may also need to run hdparm to tune up the
specific drives.  I don't know the -X setting for udma66 offhand, but I
believe it's in the hdparm code, so you would then directly tune each drive
up to udma66 with something like "hdparm -X66 /dev/hdc" (but that's entirely
from memory).
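As I recall it from the hdparm documentation, the -X value for a UDMA mode
is 64 plus the mode number, so udma33 (UDMA mode 2) would be -X66 and udma66
(UDMA mode 4) would be -X68.  A sketch of the manual tuning under that
assumption, with /dev/hde as a placeholder for a drive on the Promise card:

```shell
#!/bin/sh
# Sketch of manual drive tuning, assuming hdparm's -X value for a
# UDMA mode is 64 + the mode number (from memory of the hdparm docs;
# check yours).  /dev/hde is a placeholder device name.

UDMA66_MODE=4
XVAL=$((64 + UDMA66_MODE))
echo "hdparm -X value for udma66: -X$XVAL"

# With that value, as root you would enable DMA and request the mode:
#   hdparm -d1 -X$XVAL /dev/hde
# and then measure the result:
#   hdparm -t /dev/hde
```

By the same arithmetic, -X66 in my example above would actually select UDMA
mode 2 (udma33), so double-check the number before trusting my memory.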

It's worth noting that the PDC20262 code is basically brand new; I think it
first appeared in late July.  I tried to use the HPT366 (also brand new)
code with an existing array (with udma33 drives) and ended up with
significant corruption in the array, thanks to the HPT366 driver.  The moral
of the story is that Andre's code releases can sometimes take time to shake
out and perform well.  Until recently I was running 2.2.7 with the 2.2.6 ide
patches, as anything newer would fail to boot on my machine (onboard
controller + two Promise Ultra33's).  So it's possible you aren't seeing an
increase in throughput because there is none to see at this point.

I would personally suggest going ahead and building the raid array as is,
since you are at most looking at a 10% increase in speed and can always get
that increase through tuning after you've built the array.  You can still
run hdparm on a single drive in a running array without issues, so you can
still test.  I would expect the 10% maximum throughput increase to end up
even lower in the composite raid array anyway, so you'd be doing a lot of
fighting for not very much of a gain.

> I have tried to write to Andre Hedrick, but he is no
> longer at Vanderbilt -- someone that responded to one of my
> messages said he
> had gone to work for Suse..I have not tried to contact him there yet.

I know that Andre monitors the linux-kernel mailing list, that might be the
place to track him down.  He's popped up in here from time to time, but I
don't think he normally reads this list.

I would definitely try to get in touch with him, however, maybe by reposting
to the linux-kernel list.  He's very approachable and can be very helpful.

> I think maybe we need a separate UDMA mailing list since
> about 25 to 40% of posts there seem to be about UDMA questions/problems.

A certain amount of discussion goes on in the linux-kernel list.  I'd find a
dedicated linux-ide list helpful, though; I'd subscribe and participate.

Tom
