(I really hate how Outlook makes you answer in FRONT of the message,
what a dumb design...)

Well, without spending the time I should spend thinking about my answer,
I'll say there are many things which impact performance, most of which
we've seen talked about here:

        1 - how fast can you get data off the media?
        2 - related - does the data just happen to be in drive cache?
        3 - how fast can you get data from the drive to the controller?
        4 - how fast can you get data from the controller into system RAM?
        5 - how fast can you get that data to the user?

(assuming reads - writes are similar, but reversed - for the most part)

Number 1 relates to rotational delay, seek times, and other things I'm
probably forgetting.  (Like sector skewing (is that the right term? I
forget!) - where you try to put the 'next' sector where it's ready to be
read by the head (on a sequential read) just after the system has gotten
around to asking for that sector.  Boy, I can see the flames coming
already! ;-)
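
To put rough numbers on the rotational-delay piece, here's a tiny
back-of-the-envelope sketch in C.  The 7200 RPM spindle speed and 8.5 ms
average seek are made-up illustrative figures, not anybody's actual
drive:

    /* Back-of-the-envelope access-time math for item 1.
     * The drive parameters are illustrative assumptions only. */
    #include <stdio.h>

    int main(void)
    {
        double rpm      = 7200.0;   /* assumed spindle speed         */
        double avg_seek = 8.5;      /* assumed average seek time, ms */

        /* One full rotation in ms; on average you wait half of it
         * for the sector you want to come around under the head.  */
        double rotation_ms = 60000.0 / rpm;
        double avg_rot_ms  = rotation_ms / 2.0;

        printf("full rotation : %.2f ms\n", rotation_ms);
        printf("avg rotational: %.2f ms\n", avg_rot_ms);
        printf("avg access    : %.2f ms (seek + avg rotation)\n",
               avg_seek + avg_rot_ms);
        return 0;
    }

(So even a zero-cost interface still eats ~12-13 ms per random read
with those numbers - the mechanics dominate.)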

Number 2 relates to how smart your drive is - and too smart a drive can
actually slow you down, by being in the wrong place reading data you
don't want when you go ask it for data somewhere else.

Number 3 relates to not only the obvious issue of how fast the SCSI bus
is, but how congested it is.  If you have 15 devices which can each
sustain a data rate (including rotational delays and cache hits) of 10
megabytes/sec, and your SCSI bus can only pass 20 MB/sec, then you
should not put more than 2 of those devices on that bus - thus requiring
more and more controllers...  (and I'm ignoring any issues of
contention, as I'm not familiar enough with the low-level details of
SCSI to know about them)
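
Here's that arithmetic as a minimal C sketch, using the same made-up
figures from the paragraph above (and, like the text, still ignoring
contention and protocol overhead):

    /* Rough bus-budget math for item 3: how many devices can share
     * one bus before the bus itself becomes the bottleneck?        */
    #include <stdio.h>

    int main(void)
    {
        double bus_mb_s   = 20.0;   /* what the SCSI bus can pass */
        double dev_mb_s   = 10.0;   /* sustained rate per device  */
        int    total_devs = 15;
        int    devs_per_bus, controllers;

        devs_per_bus = (int)(bus_mb_s / dev_mb_s);            /* 2 */
        controllers  = (total_devs + devs_per_bus - 1)
                       / devs_per_bus;                        /* 8 */

        printf("devices per bus   : %d\n", devs_per_bus);
        printf("controllers needed: %d for %d devices\n",
               controllers, total_devs);
        return 0;
    }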

Number 4 relates to your system bus bandwidth, DMA speed, system bus
loading, etc.

Number 5 relates to how fast your CPU is, how well written the driver
is, and other things I'm probably forgetting.  (Like, can the OS
actually HANDLE 2 things going on at once - and do floppy accesses take
priority over later requests for hard disk accesses?)

So maximizing performance is not a 1-variable exercise.  And you don't
always have the control you'd like over all the variables.

And paying too much attention to only one while ignoring the others can
easily cause you to make really silly statements like: "Wow, I've really
got a fast system here - I have an ULTRA DMA 66 drive on my P133 -
really screams.  And with that nice new 199x cdrom drive with it as
secondary - wow, I really SCREAM through those CDs!"  Um, well, sure,
uh-huh.  Most of you on this list see the obvious errors there - I've
seen some pretty smart people do similar things (but not so obvious to
most folks) by missing some of the above issues.
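
To make that concrete: your end-to-end throughput is the MINIMUM across
stages 1-5, not the best number on any one spec sheet.  A tiny C sketch
- every per-stage rate here is invented purely for illustration, not
measured from any real hardware:

    /* The "fast drive on a slow system" trap: the pipeline runs at
     * the speed of its slowest stage.  All rates are assumptions.  */
    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical sustained rates, in MB/s, for each stage. */
        double stages[] = {
            15.0,  /* media (platter) transfer rate            */
            66.0,  /* drive-to-controller interface ("DMA 66") */
             8.0,  /* what an old system bus/chipset can DMA   */
            20.0,  /* CPU + driver getting data to the user    */
        };
        int i, n = sizeof(stages) / sizeof(stages[0]);
        double bottleneck = stages[0];

        for (i = 1; i < n; i++)
            if (stages[i] < bottleneck)
                bottleneck = stages[i];

        printf("effective throughput: %.1f MB/s\n", bottleneck);
        return 0;
    }

Note the 66 MB/sec interface never matters - with these (made-up)
numbers, the old system bus caps the whole thing at 8 MB/sec.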

Well, this is way longer than I expected, so I'll quit before I get
into any MORE trouble than I probably am already!

rusty


-----Original Message-----
From: Bob Gustafson [mailto:[EMAIL PROTECTED]]
Sent: Thursday, May 04, 2000 5:18 PM
To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: Re: performance limitations of linux raid


I think the original answer was more to the point of Performance Limitation.

The mechanical delays inherent in the disk rotation are much slower than
the electronic or optical speeds in the connection between disk and
computer.

If you had a huge bank of semiconductor memory, or a huge cache or
buffer which was really helping (i.e., you wanted the information that
was already in the cache or buffer), then things would get more
complicated.

BobG
