RE: performance limitations of linux raid

2000-05-05 Thread Carruth, Rusty

(I really hate how Outlook makes you answer in FRONT of the message,
what a dumb design...)

Well, without spending the time I should thinking about my answer, I'll say
there are many things which impact performance, most of which we've seen
talked about here:

1 - how fast can you get data off the media?
2 - related - does the data just happen to be in drive cache?
3 - how fast can you get data from the drive to the controller?
4 - how fast can you get data from the controller into system RAM?
5 - how fast can you get that data to the user?

(assuming reads - writes are similar, but reversed - for the most part)

Number 1 relates to rotational delay, seek times, and other things I'm
probably forgetting.  (Like sector skewing (is that the right term? I
forget!) - where you try to put the 'next' sector where it's ready to be
read by the head (on a sequential read) just after the system has gotten
around to asking for that sector (boy, I can see the flames coming
already! ;-))

Number 2 relates to how smart your drive is, and too smart a drive can
actually
slow you down by being in the wrong place reading data you don't want when
you go ask it for data somewhere else.

Number 3 relates to not only the obvious issue of how fast the scsi bus is,
but how congested it is.  If you have 15 devices which can sustain a data
rate (including rotational delays and cache hits) of 10 megabytes/sec, and
your scsi bus can only pass 20 MB/sec, then you should not put more than 2
of those devices on that bus - thus requiring more and more controllers...
(and I'm ignoring any issues of contention, as I'm not familiar enough with
the low level of scsi to know about it)
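
As a sanity check, the arithmetic for that example is just a division
(figures are the made-up ones above, not measurements):

    BUS_MBS=20; DRIVE_MBS=10
    echo $((BUS_MBS / DRIVE_MBS))   # -> 2 drives per bus before you saturate it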

Number 4 relates to your system bus bandwidth, DMA speed, system bus
loading,
etc.

Number 5 relates to how fast your cpu is, how well written the driver is,
and other things I'm probably forgetting.  (Like, can the OS actually
HANDLE 2 things going on at once - and do floppy accesses take priority
over later requests for hard disk accesses?)

So maximizing performance is not a 1-variable exercise.   And you don't
always
have the control you'd like over all the variables.

And paying too much attention to only one while ignoring others can easily
cause you to make really silly statements like: "Wow, I've really got a
fast system here - I have an ULTRA DMA 66 drive on my P133 - really screams.
And with that nice new 199x cdrom drive with it as secondary - wow, I
really SCREAM through those CD's!"  Um, well, sure, uh-huh.  Most of you
on this list see the obvious errors there - I've seen some pretty smart
people do similar things (but not so obvious to most folks) by missing
some of the above issues.

Well, this is way longer than I expected, so I'll quit before I get
into any MORE trouble than I probably am already!

rusty


-Original Message-
From: Bob Gustafson [mailto:[EMAIL PROTECTED]]
Sent: Thursday, May 04, 2000 5:18 PM
To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
Subject: Re: performance limitations of linux raid


I think the original answer was more to the point of Performance Limitation.

The mechanical delays inherent in the disk rotation are much slower than
the electronic or optical speeds in the connection between disk and
computer.

If you had a huge bank of semiconductor memory, or a huge cache or buffer
which was really helping (i.e., you wanted the information that was already
in the cache or buffer), then things get more complicated.

BobG



RE: performance limitations of linux raid

2000-05-05 Thread Carruth, Rusty


 From: Gregory Leblanc [mailto:[EMAIL PROTECTED]]

 ..., that would suck up a lot more host CPU processing power than
 the 3 SCSI channels that you'd need to get 12 drives and avoid bus
saturation.  

not to mention the obvious bus slot loading problem ;-)

rc



Re: performance limitations of linux raid

2000-05-05 Thread Michael Robinton

   
 Not entirely, there is a fair bit more CPU overhead running an
   IDE bus than a proper SCSI one.
  
  A "fair" bit on a 500mhz+ processor is really negligible.
 
 
   Ehem, a fair bit on a 500Mhz CPU is ~ 30%.  I have watched a
 *single* UDMA66 drive (with read ahead, multiblock io, 32bit mode, and
 dma transfers enabled) on a 2.2.14 + IDE + RAID patched take over 30%
 of the CPU during disk activity.  The same system with a 4 x 28G RAID0
 set running would be  .1% idle during large copies.  An exactly
 configured system with UltraWide SCSI instead of IDE sits ~ 95% idle
 during the same ops.

Try turning on DMA
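
For example, something along these lines (device names are only examples;
check hdparm(8) for what your drive and kernel actually support):

    # DMA, 32-bit I/O, 16-sector multcount and a little read-ahead, per drive
    hdparm -d1 -c1 -m16 -a8 /dev/hda
    hdparm -d1 -c1 -m16 -a8 /dev/hde
    hdparm /dev/hda    # show which settings actually stuck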



Re: performance limitations of linux raid

2000-05-05 Thread Christopher E. Brown

On Fri, 5 May 2000, Michael Robinton wrote:


Not entirely, there is a fair bit more CPU overhead running an
IDE bus than a proper SCSI one.
   
   A "fair" bit on a 500mhz+ processor is really negligible.
  
  
  Ehem, a fair bit on a 500Mhz CPU is ~ 30%.  I have watched a
  *single* UDMA66 drive (with read ahead, multiblock io, 32bit mode, and
  dma transfers enabled) on a 2.2.14 + IDE + RAID patched take over 30%
  of the CPU during disk activity.  The same system with a 4 x 28G RAID0
  set running would be  .1% idle during large copies.  An exactly
  configured system with UltraWide SCSI instead of IDE sits ~ 95% idle
  during the same ops.
 
 Try turning on DMA

Ahem, try re-reading the above, line 3 first word!


---
As folks might have suspected, not much survives except roaches, 
and they don't carry large enough packets fast enough...
--About the Internet and nuclear war.





RE: performance limitations of linux raid

2000-05-04 Thread Carruth, Rusty



 The primary limitation is probably the rotational speed of the disks and 
 how fast you can rip data off the drives. For instance, ...

Well, yeah, and so whatever happened to optical scsi?  I heard that you
could get 1 gbit/sec (or maybe gByte?) xfer, and you could go 1000 meters -
or is this not coming down the pike?

(optical scsi - meaning using fiber instead of ribbon cable to interconnect
controller to drive)

rusty



Re: performance limitations of linux raid

2000-05-04 Thread phil



On Thu, May 04, 2000 at 08:35:52AM -0700, Carruth, Rusty wrote:
 
  The primary limitation is probably the rotational speed of the disks and 
  how fast you can rip data off the drives. For instance, ...
 
 Well, yeah, and so whatever happened to optical scsi?  I heard that you
 could get 1 gbit/sec (or maybe gByte?) xfer, and you could go 1000 meters -
 or is this not coming down the pike?
 
 (optical scsi - meaning using fiber instead of ribbon cable to interconnect
 controller to drive)

Optical is an interesting idea.  Fiber has a better throughput (I
think the limiting factor is the LED/laser modulation), but fiber has
a longer latency.  I.e., light travels slower in fiber than
electricity through wire.  (No, light doesn't travel at 'light speed'
through a dense medium like plastic or glass.)  But you can pack more
data in a fiber.  (I.e., more on-off transitions per second.)

I think fiber is mostly used for running long distances, or for
isolating (electrically) between nodes/networks.  Some NSP's require
that your link be fiber to prevent the possibility of you frying their
equipment if you, say, wire your DS1 (T1) pairs to an electrical
outlet.. ;')

For shorter distances, I think differential pairs of wires are
faster/cheaper/easier for most things.


Phil

-- 
Philip Edelbrock -- IS Manager -- Edge Design, Corvallis, OR
   [EMAIL PROTECTED] -- http://www.netroedge.com/~phil
 PGP F16: 01 D2 FD 01 B5 46 F4 F0  3A 8B 9D 7E 14 7F FB 7A



RE: performance limitations of linux raid

2000-05-04 Thread Gregory Leblanc

 -Original Message-
 From: Carruth, Rusty [mailto:[EMAIL PROTECTED]]
 Sent: Thursday, May 04, 2000 8:36 AM
 To: [EMAIL PROTECTED]
 Subject: RE: performance limitations of linux raid
 
  The primary limitation is probably the rotational speed of 
 the disks and 
  how fast you can rip data off the drives. For instance, ...
 
 Well, yeah, and so whatever happened to optical scsi?  I 
 heard that you
 could get 1 gbit/sec (or maybe gByte?) xfer, and you could go 
 1000 meters -
 or is this not coming down the pike?

1 Gbit/sec is approximately equal to 120Mbytes/sec.  Ultra160 SCSI is faster
than that.  
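
(Roughly: 1 Gbit/s over 8 bits per byte is 125 Mbytes/s decimal, or about
119 Mbytes/s if you count in 2^20 - hence the ~120 figure.)

    echo $((1000000000 / 8))             # 125000000 bytes/sec
    echo $((1000000000 / 8 / 1048576))   # ~119 'binary' Mbytes/sec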

 (optical scsi - meaning using fiber instead of ribbon cable 
 to interconnect
 controller to drive)

Fiber Channel, or SCSI over Fiber Channel?  Sun has an FC array that allows
you to have 500metres between the server and the array, at 100MegaBytes/sec
data transfer.  I don't know what they're going to do with FC, but it's been
outpaced for speed by SCSI.  
Greg



Re: performance limitations of linux raid

2000-05-04 Thread Christopher E. Brown

On Wed, 3 May 2000, Michael Robinton wrote:

 The primary limitation is probably the rotational speed of the disks and 
 how fast you can rip data off the drives. For instance, the big IBM 
 drives (20 - 40 gigs) have a limitation of about 27mbs for both the 7200 
 and 10k rpm models. The Drives to come will have to make trade-offs 
 between density and speed, as the technology's in the works have upper 
 constraints on one or the other. So... given enough controllers (either 
 scsii on disk or ide individual) the limit will be related to the 
 bandwidth of the disk interface rather than the speed of the processor 
 it's talking too.


Not entirely, there is a fair bit more CPU overhead running an
IDE bus than a proper SCSI one.

---
As folks might have suspected, not much survives except roaches, 
and they don't carry large enough packets fast enough...
--About the Internet and nuclear war.





Re: performance limitations of linux raid

2000-05-04 Thread Michael Robinton

On Thu, 4 May 2000, Christopher E. Brown wrote:

 On Wed, 3 May 2000, Michael Robinton wrote:
 
  The primary limitation is probably the rotational speed of the disks and 
  how fast you can rip data off the drives. For instance, the big IBM 
  drives (20 - 40 gigs) have a limitation of about 27mbs for both the 7200 
  and 10k rpm models. The Drives to come will have to make trade-offs 
  between density and speed, as the technology's in the works have upper 
  constraints on one or the other. So... given enough controllers (either 
  scsii on disk or ide individual) the limit will be related to the 
  bandwidth of the disk interface rather than the speed of the processor 
  it's talking too.
 
 
   Not entirely, there is a fair bit more CPU overhead running an
 IDE bus than a proper SCSI one.

A "fair" bit on a 500mhz+ processor is really negligible.
 



Re: performance limitations of linux raid

2000-05-03 Thread Christopher E. Brown

On Sun, 23 Apr 2000, Chris Mauritz wrote:

  I wonder what the fastest speed any linux software raid has gotten, it
  would be great if the limitation was a hardware limitation i.e. cpu,
  (scsi/ide) interface speed, number of (scsi/ide) interfaces, drive
  speed. It would be interesting to see how close software raid could get
  to its hardware limitations.
 
 RAID0 seemed to scale rather linearly.  I don't think there would be
 much of a problem getting over 100mbits/sec on an array of 8-10 ultra2
 wide drives.  I ultimately stopped fiddling with software RAID on my
 production boxes as I needed something that would reliably do hot
 swapping of dead drives.  So I've switched to using Mylex ExtremeRAID
 1100 cards instead (certainly not the card you want to use for low 
 budget applications...heh).


Umm, I can get 13,000K/sec to/from ext2 from a *single*
UltraWide Cheeta (best case, *long* reads, no seeks).  100Mbit is only
12,500K/sec.


A 4 drive UltraWide Cheeta array will top out an UltraWide bus
at 40MByte/sec, over 3 times the max rate of a 100Mbit ethernet.

---
As folks might have suspected, not much survives except roaches, 
and they don't carry large enough packets fast enough...
--About the Internet and nuclear war.





Re: performance limitations of linux raid

2000-05-03 Thread Chris Mauritz

 From [EMAIL PROTECTED] Wed May  3 20:38:05 2000
 
   Umm, I can get 13,000K/sec to/from ext2 from a *single*
 UltraWide Cheeta (best case, *long* reads, no seeks).  100Mbit is only
 12,500K/sec.
 
 
   A 4 drive UltraWide Cheeta array will top out an UltraWide bus
 at 40MByte/sec, over 3 times the max rate of a 100Mbit ethernet.

Yup, that's why all my RAID boxen use gigabit ether cards.  8-)  The
Netgear cards are only about $300.  The switches are a bit expensive
though.  8-)

Cheers,

Chris
-- 
Christopher Mauritz
[EMAIL PROTECTED]



Re: performance limitations of linux raid

2000-05-03 Thread Michael Robinton

The primary limitation is probably the rotational speed of the disks and 
how fast you can rip data off the drives. For instance, the big IBM 
drives (20 - 40 gigs) have a limitation of about 27MB/s for both the 7200 
and 10k rpm models. The drives to come will have to make trade-offs 
between density and speed, as the technologies in the works have upper 
constraints on one or the other. So... given enough controllers (either 
SCSI on disk or IDE individual) the limit will be related to the 
bandwidth of the disk interface rather than the speed of the processor 
it's talking to.

On Wed, 3 May 2000, Christopher E. Brown wrote:

 On Sun, 23 Apr 2000, Chris Mauritz wrote:
 
   I wonder what the fastest speed any linux software raid has gotten, it
   would be great if the limitation was a hardware limitation i.e. cpu,
   (scsi/ide) interface speed, number of (scsi/ide) interfaces, drive
   speed. It would be interesting to see how close software raid could get
   to its hardware limitations.
  
  RAID0 seemed to scale rather linearly.  I don't think there would be
  much of a problem getting over 100mbits/sec on an array of 8-10 ultra2
  wide drives.  I ultimately stopped fiddling with software RAID on my
  production boxes as I needed something that would reliably do hot
  swapping of dead drives.  So I've switched to using Mylex ExtremeRAID
  1100 cards instead (certainly not the card you want to use for low 
  budget applications...heh).
 
 
   Umm, I can get 13,000K/sec to/from ext2 from a *single*
 UltraWide Cheeta (best case, *long* reads, no seeks).  100Mbit is only
 12,500K/sec.
 
 
   A 4 drive UltraWide Cheeta array will top out an UltraWide bus
 at 40MByte/sec, over 3 times the max rate of a 100Mbit ethernet.
 
 ---
 As folks might have suspected, not much survives except roaches, 
 and they don't carry large enough packets fast enough...
 --About the Internet and nuclear war.
 
 



Re: performance limitations of linux raid

2000-05-02 Thread john kidd
If you are looking for one of the highest performance systems we have ever
seen, visit www.raidzone.com. These are real systems.

John





Re: performance limitations of linux raid

2000-05-01 Thread Clay Claiborne


More notes on the 8 IDE drive raid5 system I built, and the 3ware controller.

Edwin Hakkennes wrote:

  May I ask how these ide-ports and the attached disks show up under
  Redhat 6.2? Are they just standard ATA33 or ATA66 controllers which can
  be used in software raid? Or is only the sum of the attached disks
  usable as one scsi disk?
  The 3-ware site doesn't mention anything like this. Did you need any
  quirks to get this running?
The 3ware controller is supported in the latest 2.2.X kernels. I used
2.2.15pre17.  It is listed with the scsi modules and indeed, that is how the
drives show up. In my case, I had a 9.1GB LVD drive on a tekram dc390u2
controller as my system drive (sda) so the four Maxtor DMA66 drives became
sdb - sde.  I didn't use the 3ware bios or setup at all. 3ware expects you to
use the controller as advertised - with their setup software to create a
raid0, raid1 or raid1-0 array. I haven't tried that yet. I just wanted a lot
of fast dma66 controllers in a single slot. I figured out how to address the
individual drives, which as you can see, was pretty straight forward, and
let the linux software handle the raid.
3ware doesn't mention anything like this, because it is news to them.
They have no experience using their boards with software raid or raid5.
Unfortunately, their boards are somewhat pricey, reflecting, I suppose,
their status as raid controllers, as opposed to UDMA66 controllers. At
Linux Beach we have just become dealers for 3ware and can offer these boards
at slightly less than the 3ware prices:
QTY  ITEM NO.       DESCRIPTION                                  PRICE
----------------------------------------------------------------------
     RAID-3W5200L   3WARE ESCALADE 2PORT IDE RAID CONTROLLER     $119.00
     RAID-3W5400L   3WARE ESCALADE 4PORT IDE RAID CONTROLLER     $219.00
     RAID-3W5800L   3WARE ESCALADE 8PORT IDE RAID CONTROLLER     $419.00
  So my question basically is: Could you send me the output of
    cat /proc/pci
    cat /proc/partitions
    cat /proc/mdstat
    ls -laR /proc/ide

This system is currently a production server at my client's site, so I don't
have easy access to it and can't send you the proc outputs you requested.
However, they have promised to let me have it for a while at some time in
the future for further benchmarking and tuning. Hopefully I can get it then.
Here is the raidtab:
/etc/raidtab

raiddev /dev/md0
        raid-level              5
        nr-raid-disks           8
        nr-spare-disks          0
        persistent-superblock   1
        chunk-size              128
        parity-algorithm        left-symmetric
        device                  /dev/hda1
        raid-disk               0
        device                  /dev/hde1
        raid-disk               1
        device                  /dev/hdc1
        raid-disk               2
        device                  /dev/hdg1
        raid-disk               3
        device                  /dev/sdb1
        raid-disk               4
        device                  /dev/sdc1
        raid-disk               5
        device                  /dev/sdd1
        raid-disk               6
        device                  /dev/sde1
        raid-disk               7
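
With a raidtab like that in place, the rest is roughly the usual raidtools
sequence (a sketch only - the mount point and mke2fs options are examples,
not what was used on that box):

    mkraid /dev/md0           # build the array described in /etc/raidtab
    cat /proc/mdstat          # watch the initial resync
    mke2fs -b 4096 /dev/md0   # make the filesystem once the array is up
    mount /dev/md0 /mnt/raid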

Benno Senoner [EMAIL PROTECTED] wrote:

I went to the 3WARE site.

That is http://www.3ware.com

really nice the 8 IDE channel version and cheap too.
:-)



I noticed that they do not support UDMA/66. (at least the PDF says so).

On second look, this is correct. I know they used UDMA66 drives in their
benchmarks:
http://3ware.com/products/3wareEscaladePerformanceAnalysis_kc.pdf
and recommend UDMA66 drives for best performance.

Do you think that the impact is negligible when there is only one drive per
channel? (At least I think that there are not that many EIDE disks which can
sustain 33MB/sec all the time.) The load (in terms of bandwidth) generated on
the bus/CPU by multiple (8) disks is quite high IMHO, therefore 66MB/sec * 8
(even if every drive would be able to deliver that kind of bandwidth) would
likely saturate your mobo/CPU, shifting the bottleneck from the disks to the
memory/CPU subsystem.


The controller ships with single drive 80 wire cables, and the numbers
I got were equivalent to those I got from the other DMA66 controllers running
master only.


BTW, do you need a 2.3.x to work with these 3WARE monsters ? Or are there patches backported to 2.2.x floating around ?

I know it's in 2.2.15pre14 and above and modules are available at the 3ware
site for RH6.1, RH6.2 and Suse 6.3.
TTFN
Clay J. Claiborne, Jr., President
Cosmos Engineering Company
1550 South Dunsmuir Ave.
Los Angeles, CA 90019
(323) 930-2540 (323) 930-1393 Fax
http:www.CosmosEng.com
Email: [EMAIL PROTECTED]



Re: performance limitations of linux raid

2000-04-26 Thread Jure Pečar


 In the last 24 hours ive been getting them when e2fsck runs after
 rebooting. Usual cause of rebooting is irq causeing lockup, or endlessly
 trying looping trying to get an irq.
 
 Im convinced its my hpt366 controller, ive mentioned my problem in a few
 channels, no luck yet.
 
 I used to think it was the raid code, but i get it with lvm as well, it
 happens more often from reading than writting via the HPT366, the more
 load placed on the controller the more likely it is to lockup, one drive
 by itself it just losses interrupts (sometimes it can recover), if use
 three or four channels, using both my onboard hpt366 and my pci card it
 locks up hard in a fraction of a second
 
 I want to try and work this problem out, im not a kernel hacker though.

me too ... have the same problem over here with hpt366 (abit
hotrod) and 4 ibm 25gb disks. i thought this happens when array
i/o is high, but recently i've seen some (with successful recovery)
when disks were quiet. i guess we really need some expert kernel
hacker to sort this out...

btw, i'm new to this list and have some (already answered, i guess)
questions. what's the current status of linux raid? the 2.2 patch seems
to be stuck at .11. is there any general recipe to make it work in
recent kernels? i need to have it working in 2.2.15pre19 with the ide
patch also. please enlighten me :)

|Pegasus|



Re: performance limitations of linux raid

2000-04-26 Thread Clay Claiborne



The coolest guy you know wrote:

 Clay Claiborne wrote:
 
  For what its worth, we recently built an 8 ide drive 280GB raid5 system.
  Benchmarking with HDBENCH we got  35.7MB/sec read and 29.87MB/sec write. With
  DBENCH and 1 client we got 44.5 MB/sec with 3 clients it dropped down to about
  43MB/sec.
  The system is a 600Mhz P-3 on a ASUS P3C2000 with 256MB of ram, the raid drives
  are 40GB Maxtor DMA66, 7200 RPM, and each is run as master on its own channel.
 
  Turning on DMA seems to be the key. Benchmarking the individual drives with
  HDBENCH we got numbers like 2.57MB/sec read and 3.27MB/sec write with DMA off
  and it jumped up to 24.7MB/sec read and 24.2MB/sec write with it on.
 
  That, and  enough processing power to see that paritity calc is not a
  bottleneck.
 
  --
 
  Clay J. Claiborne, Jr., President
 
  Cosmos Engineering Company
  1550 South Dunsmuir Ave.
  Los Angeles, CA 90019
 
  (323) 930-2540   (323) 930-1393 Fax
 
  http:www.CosmosEng.com
 
  Email: [EMAIL PROTECTED]

 You have some pretty impressive numbers there.
 What controllers did you use?

 Chris
 --

2 Masters from the ASUS P3C2000 CDU Board, 2 Masters on a CMD648 based PCI controller
and 4 Masters on a 3ware 4 port RAID Controller. I don't use the 3ware as a RAID
controller though, just as a very good 4 channel Ultra66 board. They also make an 8
channel board, and that's what I'm going to use next time. I can put two in a system
and mount 16 IDE drives for raid while saving the on board IDE's for the usual stuff.

Clay.

Cosmos Engineering Company
1550 South Dunsmuir Ave.
Los Angeles, CA 90019

(323) 930-2540   (323) 930-1393 Fax

http:www.CosmosEng.com

Email: [EMAIL PROTECTED]





Re: performance limitations of linux raid

2000-04-26 Thread Michael

 2 Masters from the ASUS P3C2000 CDU Board, 2 Masters on a CMD648
 based PCI controller and 4 Masters on a 3Wave 4 port RAID
 Controller. I don't use the 3Wave as a RAID controller though, just
 as a very good 4 channel Ultra66 board. They also make an 8 channel
 board, and that's what I'm going to use next time. I can put two in
 a system an mount 16 IDE drives for raid while saving the on board
 IDE's for the usual stuff.

So do the standard IDE drivers in the kernel detect and handle 
all these master controllers?? or is a special driver required?

Very Interested!!
Michael
[EMAIL PROTECTED]



Re: performance limitations of linux raid

2000-04-25 Thread remo strotkamp

bug1 wrote:
 
 Clay Claiborne wrote:
 
  For what its worth, we recently built an 8 ide drive 280GB raid5 system.
  Benchmarking with HDBENCH we got  35.7MB/sec read and 29.87MB/sec write. With
  DBENCH and 1 client we got 44.5 MB/sec with 3 clients it dropped down to about
  43MB/sec.
  The system is a 600Mhz P-3 on a ASUS P3C2000 with 256MB of ram, the raid drives
  are 40GB Maxtor DMA66, 7200 RPM, and each is run as master on its own channel.
 
  Turning on DMA seems to be the key. Benchmarking the individual drives with
  HDBENCH we got numbers like 2.57MB/sec read and 3.27MB/sec write with DMA off
  and it jumped up to 24.7MB/sec read and 24.2MB/sec write with it on.
 
  That, and  enough processing power to see that paritity calc is not a
  bottleneck.
 
 
 Can you use your raid system with DMA turned on or do you get irq
 timouts like me ?


gonna jump on the bandwagon here...:-)

How often do you get them?? On all the disks, or just
some of them. In all the modes or just with ultra66???


i get some every couple of days, normally resulting in
the dma getting disabled for the specific drive...:-(

but the filesystems seem to be ok...


remo



Re: performance limitations of linux raid

2000-04-25 Thread Paul Jakma

On Mon, 24 Apr 2000, Frank Joerdens wrote:

  I've been toying with the idea of getting one of those for a while, but
  there doesn't seem to be a linux driver for the FastTrack66 (the RAID
  card), only for the Ultra66 (the not-hacked IDE controller), and that
  driver has only 'Experimental' status with current production kernels:


Clue: the Promise IDE RAID controller is NOT a hardware RAID
controller.

Promise IDE RAID == Software RAID where the software is written by
Promise and sitting on the ROM on the Promise card getting called by
the BIOS.
  

  I even wrote to Promise to ask when or if a linux driver might become
  available, but didn't get much of an answer (they replied that there
  was a driver available for the Ultra, although I had specifically asked
  for the RAID card).

Don't bother... A driver for Promise RAID == Software RAID. You
already have software RAID in linux.
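
A two-disk stripe set is just a short /etc/raidtab plus mkraid, roughly like
this (device names and chunk size are only examples):

    raiddev /dev/md0
            raid-level              0
            nr-raid-disks           2
            persistent-superblock   1
            chunk-size              32
            device                  /dev/hda1
            raid-disk               0
            device                  /dev/hdc1
            raid-disk               1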
  
  If anyone hears about a Linux driver for this card, I'd like to know.
  
  Cheers, Frank
  
  

-- 
Paul Jakma  [EMAIL PROTECTED]
PGP5 key: http://www.clubi.ie/jakma/publickey.txt
---
Fortune:
I use not only all the brains I have, but all those I can borrow as well.
-- Woodrow Wilson




Re: performance limitations of linux raid

2000-04-25 Thread Daniel Roesen

On Tue, Apr 25, 2000 at 10:28:46PM +0100, Paul Jakma wrote:
 Clue: the Promise IDE RAID controller is NOT a hardware RAID
 controller.
 
 Promise IDE RAID == Software RAID where the software is written by
 Promise and sitting on the ROM on the Promise card getting called by
 the BIOS.

Clue: this is the way every RAID controller I know of works these days.


PS: Linux doesn't use BIOS to access devices.



RE: performance limitations of linux raid

2000-04-25 Thread Gregory Leblanc

 -Original Message-
 From: Daniel Roesen [mailto:[EMAIL PROTECTED]]
 Sent: Tuesday, April 25, 2000 3:07 PM
 To: [EMAIL PROTECTED]
 Subject: Re: performance limitations of linux raid
 
 
 On Tue, Apr 25, 2000 at 10:28:46PM +0100, Paul Jakma wrote:
  Clue: the Promise IDE RAID controller is NOT a hardware RAID
  controller.
  
  Promise IDE RAID == Software RAID where the software is written by
  Promise and sitting on the ROM on the Promise card getting called by
  the BIOS.
 
 Clue: this is the way every RAID controller I know of works 
 these days.

Then you've never used a RAID card.  I've got a number of RAID cards here, 2
from compaq, 1 from DPT, and another from HP (really AMI), and all of them
implement RAID functions like striping, double writes (mirroring), and
parity calculations for RAID4/5 in firmware, using an onboard CPU.  All the
controllers here are i960 based, but I've heard that the StrongARM procs are
much faster at parity calculations.  The controllers that I've used that
are software are the Adaptec AAA series boards.  The other one that I know
of is this Promise thing.
Greg



Re: performance limitations of linux raid

2000-04-25 Thread Drake Diedrich

On Mon, Apr 24, 2000 at 09:13:20PM -0400, Scott M. Ransom wrote:
 
 Then I moved back to kernel 2.2.15-pre18 with the RAID and IDE patches
 and here are my results:
 
   RAID0 on Promise Card 2.2.15-pre18 (1200MB test)
 --
  ---Sequential Output ---Sequential Input-- --Random--
  -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
  K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
   6833 99.2 42532 44.4 18397 42.2  7227 98.3 47754 33.0 182.8  1.5
 **
 
 When doing _actual_ work (I/O bound reads on huge data sets), I often
 see sustained read performance as high as 50MB/s.
 
 Tests on the individual drives show 28+ MB/s.

   What stripe size, CPU and memory is used here?  I have a similar setup
(2.2.15pre19, IDE+RAID patches), 4 master IDE Deskstars, on board VIA and
offboard Promise controllers, and K6-2/500  256M ram and see 22MB/s native
but a max of 28 MB/s for all stripes.  dd/hdparm -t on multiple drives
simultaneously appears to show complete contention between the separate
chains.

hdparm -t /dev/hde
19.16 MB/sec

( hdparm -t /dev/hde ) & ( hdparm -t /dev/hdg )
11.43 MB/sec
10.47 MB/sec

   With all four drives throughput per drive is less than 7MB/sec. With
RAID0 across all four drives I get 28 MB/sec according to bonnie, vs 22
MB/sec on single drives.  I had been attributing that to a singly-entrant
IDE driver in 2.2, but your results make me think there's some other reason
I don't see linear speedups.  Is this a dual CPU system perhaps?  Something
unusual about the interrupt handling?  UDMA33 vs. UDMA 66 (I'm using 40
conductor cables, perhaps I need the 80s)?



Re: performance limitations of linux raid

2000-04-25 Thread bug1

remo strotkamp wrote:
 
 bug1 wrote:
 
  Clay Claiborne wrote:
  
   For what its worth, we recently built an 8 ide drive 280GB raid5 system.
   Benchmarking with HDBENCH we got  35.7MB/sec read and 29.87MB/sec write. With
   DBENCH and 1 client we got 44.5 MB/sec with 3 clients it dropped down to about
   43MB/sec.
   The system is a 600Mhz P-3 on a ASUS P3C2000 with 256MB of ram, the raid drives
   are 40GB Maxtor DMA66, 7200 RPM, and each is run as master on its own channel.
  
   Turning on DMA seems to be the key. Benchmarking the individual drives with
   HDBENCH we got numbers like 2.57MB/sec read and 3.27MB/sec write with DMA off
   and it jumped up to 24.7MB/sec read and 24.2MB/sec write with it on.
  
   That, and  enough processing power to see that paritity calc is not a
   bottleneck.
  
 
  Can you use your raid system with DMA turned on or do you get irq
  timouts like me ?
 
 gonna jump on the bandwagon here...:-)
 
 How often do you get them?? On all the disks, or just
 some of them. In all the modes or just with ultra66???
 
 i get some every couple of days, normally resulting in
 the dma getting disabled for the specific drive...:-(
 
 but the filesystems seem to be ok...
 
 remo

Yea, i get them on my disks (Quantum XA and KX, both my IBM DPTA
372050), they usually start at the last ide channel and work backwards.

In the last 24 hours i've been getting them when e2fsck runs after
rebooting. The usual cause of rebooting is an irq causing a lockup, or
endlessly looping trying to get an irq.

I'm convinced it's my hpt366 controller, i've mentioned my problem in a few
channels, no luck yet.

I used to think it was the raid code, but i get it with lvm as well. It
happens more often from reading than writing via the HPT366, and the more
load placed on the controller the more likely it is to lock up. One drive
by itself just loses interrupts (sometimes it can recover); if i use
three or four channels, using both my onboard hpt366 and my pci card, it
locks up hard in a fraction of a second.

I want to try and work this problem out, i'm not a kernel hacker though.

Anyone have any advice on how to get into kernel debugging? i know C, i
don't know the kernel though. I know how to use ksymoops, that's about it.

Glenn



Re: performance limitations of linux raid

2000-04-25 Thread Scott M. Ransom

 

   What stripe size, CPU and memory is used here?

System is a dual-cpu PII 450Mhz with 256MB RAM.
Disks are configured with chunk-size of 32kb (ext2 block-size is 4kb).
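
(That combination is usually handed to mke2fs as a stride of 8 blocks,
i.e. 32kb chunk / 4kb block - something like the following, as far as I
recall the mke2fs raid options:)

    mke2fs -b 4096 -R stride=8 /dev/md0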

 Is this a dual CPU system perhaps?  Something

Yes.  See above.

 unusual about the interrupt handling?  UDMA33 vs. UDMA 66 (I'm using 40
 conductor cables, perhaps I need the 80s)?

I am using UDMA66.  When testing against UDMA33, I found a 10-15% speed
difference.  To get that speed increase you must have the UDMA66
cables...

Scott

-- 
Scott M. Ransom   
Phone:  (781) 320-9867 Address:  75 Sanderson Ave.
email:  [EMAIL PROTECTED]   Dedham, MA  02026
PGP Fingerprint: D2 0E D0 10 CD 95 06 DA  EF 78 FE 2B CB 3A D3 53



Re: performance limitations of linux raid

2000-04-24 Thread Frank Joerdens

   I find those numbers rather hard to believe.  I've not yet heard of a
   disk (IDE or SCSI) that can reliably dump 22mb/sec which is what your
   2 drive setup implies.  Something isn't right.
  
  Check http://www.tomshardware.com , they review the Promise IDE RAID
  card
  (the hacked one for $30).  They get some pretty insane throughputs
  on some ATA66 drives.
 
 
 Gee, they look pretty awesome, excellent performance.
 The only drawback i can think of is that it wouldnt be as flexible as
 software raid.

I've been toying with the idea of getting one of those for a while, but
there doesn't seem to be a linux driver for the FastTrack66 (the RAID
card), only for the Ultra66 (the not-hacked IDE controller), and that
driver has only 'Experimental' status with current production kernels:

- snip ---
CONFIG_BLK_DEV_PDC4030:

This driver provides support for the secondary IDE interface and
cache of Promise IDE chipsets, e.g. DC4030 and DC5030. This driver
is known to incur timeouts/retries during heavy I/O to drives
attached to the secondary interface. CDROM and TAPE devices are not
supported yet. This driver is enabled at runtime using the
"ide0=dc4030" kernel boot parameter. See the Documentation/ide.txt
and drivers/block/pdc4030.c files for more info.
- snap ---

I even wrote to Promise to ask when or if a linux driver might become
available, but didn't get much of an answer (they replied that there
was a driver available for the Ultra, although I had specifically asked
for the RAID card).

If anyone hears about a Linux driver for this card, I'd like to know.

Cheers, Frank

-- 
frank joerdens   

joerdens new media
heinrich-roller str. 16/17
10405 berlin
germany

e: [EMAIL PROTECTED]
t: +49 (0)30 44055471
f: +49 (0)30 44055475
h: http://www.joerdens.de

pgp public key: http://www.joerdens.de/pgp/frank_joerdens.asc



Re: performance limitations of linux raid

2000-04-24 Thread Michael

I find those numbers rather hard to believe.  I've not yet heard of a
disk (IDE or SCSI) that can reliably dump 22mb/sec which is what your
2 drive setup implies.  Something isn't right.

Sure it is. go to the ibm site and look at the specs on all the new 
high capacity drives. Without regard to the RPM, they are all spec'd 
to rip data off the drive at around 27mb/sec continuous.
[EMAIL PROTECTED]



Re: performance limitations of linux raid

2000-04-24 Thread Chris Mauritz

There's "specs" and then there's real life.  I have never seen a hard drive
that could do this.  I've got brand new IBM 7200rpm ATA66 drives and I can't
seem to get them to do much better than 6-7mb/sec with either Win98,
Win2000, or Linux.  That's with Abit BH6, an Asus P3C2000, and Supermicro
PIIIDME boards.  And yes, I'm using an 80 conductor cable.  I'm using
Wintune on the windows platforms and bonnie on Linux to do benchmarks.

Cheers,

Chris

- Original Message -
From: "Michael" [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Sent: Monday, April 24, 2000 5:10 PM
Subject: Re: performance limitations of linux raid


 I find those numbers rather hard to believe.  I've not yet heard
of a
 disk (IDE or SCSI) that can reliably dump 22mb/sec which is what
your
 2 drive setup implies.  Something isn't right.

 Sure it is. go to the ibm site and look at the specs on all the new
 high capacity drives. Without regard to the RPM, they are all spec'd
 to rip data off the drive at around 27mb/sec continuous.
 [EMAIL PROTECTED]






Re: performance limitations of linux raid

2000-04-24 Thread Seth Vidal

 There's "specs" and then there's real life.  I have never seen a hard drive
 that could do this.  I've got brand new IBM 7200rpm ATA66 drives and I can't
 seem to get them to do much better than 6-7mb/sec with either Win98,
 Win2000, or Linux.  That's with Abit BH6, an Asus P3C2000, and Supermicro
 PIIIDME boards.  And yes, I'm using an 80 conductor cable.  I'm using
 Wintune on the windows platforms and bonnie on Linux to do benchmarks.

turn udma modes on in the bios and run hdparm -d 1 /dev/hda (where hda ==
drive device)

then re-run your specs

I think you'll find the speed is stepped up dramatically.
i'm getting 16MB/s write and 22MB/s read on the same drive.

I got for crap w/o the dma turned on via hdparm

-sv





RE: performance limitations of linux raid

2000-04-24 Thread Gregory Leblanc

 -Original Message-
 From: Chris Mauritz [mailto:[EMAIL PROTECTED]]
 Sent: Monday, April 24, 2000 2:30 PM
 To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
 Subject: Re: performance limitations of linux raid
 
 
 There's "specs" and then there's real life.  I have never 
 seen a hard drive
 that could do this.  I've got brand new IBM 7200rpm ATA66 
 drives and I can't
 seem to get them to do much better than 6-7mb/sec with either Win98,
 Win2000, or Linux.  That's with Abit BH6, an Asus P3C2000, 
 and Supermicro
 PIIIDME boards.  And yes, I'm using an 80 conductor cable.  I'm using
 Wintune on the windows platforms and bonnie on Linux to do benchmarks.

I don't believe the specs either, because they are for the "ideal" case.
However, I think that either your benchmark is flawed, or you've got a
crappy controller.  I have a (I think) 5400 RPM 4.5GB IBM SCA SCSI drive in
a machine at home, and I can easily read at 7MB/sec from it under Solaris.
Linux is slower, but that's because of the drivers for the SCSI controller.
I haven't done any benchmarks on my IDE drives because I already know that
they're SLOW.
Greg

 
 Cheers,
 
 Chris
 
 - Original Message -
 From: "Michael" [EMAIL PROTECTED]
 To: [EMAIL PROTECTED]
 Sent: Monday, April 24, 2000 5:10 PM
 Subject: Re: performance limitations of linux raid
 
 
  I find those numbers rather hard to believe.  I've 
 not yet heard
 of a
  disk (IDE or SCSI) that can reliably dump 22mb/sec 
 which is what
 your
  2 drive setup implies.  Something isn't right.
 
  Sure it is. go to the ibm site and look at the specs on all the new
  high capacity drives. Without regard to the RPM, they are all spec'd
  to rip data off the drive at around 27mb/sec continuous.
  [EMAIL PROTECTED]
 
 
 



Re: performance limitations of linux raid

2000-04-24 Thread Michael

  There's "specs" and then there's real life.  I have never seen a hard drive
  that could do this.  I've got brand new IBM 7200rpm ATA66 drives and I can't
  seem to get them to do much better than 6-7mb/sec with either Win98,
  Win2000, or Linux.  That's with Abit BH6, an Asus P3C2000, and Supermicro
  PIIIDME boards.  And yes, I'm using an 80 conductor cable.  I'm using
  Wintune on the windows platforms and bonnie on Linux to do benchmarks.
 
 turn udma modes on in the bios and run hdparm -d 1 /dev/hda (where
 hda == drive device)
 

just FYI, 

3 disk raid 5 on udma33 asus mother board + 1 udma33 promise 
controller.

disks are maxtor 87000D8's spec'd at a maximum media-to-interface
transfer rate of 14.7 MB/s -- that doesn't include any seek time.

my box has 128 megs so I ran a 500 meg bonnie.
results:
block write = 7.9 MB/s
block read = 10.5 MB/s
pretty good for an old 166 K6
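
(for anyone wanting to repeat it, the invocation would be roughly the
following - the directory is just an example:)

    bonnie -d /mnt/raid5 -s 500   # 500MB test file, about 4x RAM, on the array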

Michael
[EMAIL PROTECTED]



Re: performance limitations of linux raid

2000-04-24 Thread Clay Claiborne

For what its worth, we recently built an 8 ide drive 280GB raid5 system.
Benchmarking with HDBENCH we got  35.7MB/sec read and 29.87MB/sec write. With
DBENCH and 1 client we got 44.5 MB/sec with 3 clients it dropped down to about
43MB/sec.
The system is a 600Mhz P-3 on a ASUS P3C2000 with 256MB of ram, the raid drives
are 40GB Maxtor DMA66, 7200 RPM, and each is run as master on its own channel.

Turning on DMA seems to be the key. Benchmarking the individual drives with
HDBENCH we got numbers like 2.57MB/sec read and 3.27MB/sec write with DMA off
and it jumped up to 24.7MB/sec read and 24.2MB/sec write with it on.
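
A quick way to see the same before/after effect without HDBENCH is hdparm's
own timing test (drive name is only an example):

    hdparm -d0 /dev/hde && hdparm -t /dev/hde   # timed read with DMA off
    hdparm -d1 /dev/hde && hdparm -t /dev/hde   # and again with DMA on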

That, and enough processing power to see that parity calc is not a
bottleneck.

--

Clay J. Claiborne, Jr., President

Cosmos Engineering Company
1550 South Dunsmuir Ave.
Los Angeles, CA 90019

(323) 930-2540   (323) 930-1393 Fax

http:www.CosmosEng.com

Email: [EMAIL PROTECTED]





RE: performance limitations of linux raid

2000-04-24 Thread Gregory Leblanc

 -Original Message-
 From: Scott M. Ransom [mailto:[EMAIL PROTECTED]]
 Sent: Monday, April 24, 2000 6:13 PM
 To: [EMAIL PROTECTED]
 Cc: [EMAIL PROTECTED]; [EMAIL PROTECTED]; Gregory Leblanc; bug1
 Subject: RE: performance limitations of linux raid
 
 
  There's "specs" and then there's real life.  I have never 
  seen a hard drive
  that could do this.  I've got brand new IBM 7200rpm ATA66 
  drives and I can't
  seem to get them to do much better than 6-7mb/sec with 
 either Win98,
  Win2000, or Linux.  That's with Abit BH6, an Asus P3C2000, 
  and Supermicro
  PIIIDME boards.  And yes, I'm using an 80 conductor cable. 
  I'm using
  Wintune on the windows platforms and bonnie on Linux to do 
 benchmarks.
 
  I don't believe the specs either, because they are for the 
 "ideal" case.
 
 Believe it.  I was getting about 45MB/s writes and 14 MB/s reads using
 RAID0 with the 2.3.99pre kernels on a Dual PII 450 with two 30G
 DiamondMax (7200rpm Maxtor) ATA-66 drives connected to a 
 Promise Ultra66
 controller. 
 
 Then I moved back to kernel 2.2.15-pre18 with the RAID and IDE patches
 and here are my results:
 
   RAID0 on Promise Card 2.2.15-pre18 (1200MB test)
 --
  ---Sequential Output ---Sequential Input-- --Random--
  -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
  K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
   6833 99.2 42532 44.4 18397 42.2  7227 98.3 47754 33.0 182.8  1.5
 **
 
 When doing _actual_ work (I/O bound reads on huge data sets), I often
 see sustained read performance as high as 50MB/s.
 
 Tests on the individual drives show 28+ MB/s.

Sounds dang good, but I don't have any of those yet...  When I can get a
1350 MHz proc, I'll grab a new machine an correlate these results for
myself.  :-)

 
 The performance is simply amazing -- even during real work (at least
 mine -- YMMV).  And best of all, the whole set-up (Promise card + 2X
 Maxtor drives only cost me $550)
 
 I simply can't see how SCSI can compete with that.

Easy, SCSI still competes.  It's called redundancy and scalability.  It's
hard to get more than 4 (maybe 8 with a big system) IDE drives attached to
one box.  That same thing is trivial with SCSI, and you can even go with far
more than that.  Here's one example.  At the office, I've got a single
machine with 4 internal, hot-swap drives, and two external 5 disk chassis
that are both full, as well as a tape drive, a CD-ROM, and a CD-RW.  The
tape is about 3 feet away, and the drive chassis are more like 12,
everything is well within spec for the SCSI on this machine.  With IDE, I
couldn't get that much space if I tried, and I wouldn't be likely to have
the kind of online redundancy that I have with this machine.  I'll admit
that this is the biggest machine that we have, but we're only taking care of
250 people, with about a dozen people outside of the Information Services
department who actually utilize the computing resources.  Any remotely larger
shop, or one with competent employees, could easily need servers that scale
well beyond this machine.  I don't think that SCSI has a really good place
on desktops, and its use is limited when GOOD IDE is available for a
workstation, but servers still have a demand for SCSI.
Greg

P.S. My employer probably wouldn't take kindly to those words, so I'm
obviousally not representing them here.



Re: performance limitations of linux raid

2000-04-24 Thread bug1

 
 I don't believe the specs either, because they are for the "ideal" case.
 However, I think that either your benchmark is flawed, or you've got a
 crappy controller.  I have a (I think) 5400 RPM 4.5GB IBM SCA SCSI drive in
 a machine at home, and I can easily read at 7MB/sec from it under Solaris.
 Linux is slower, but that's because of the drivers for the SCSI controller.
 I haven't done any benchmarks on my IDE drives because I already know that
 they're SLOW.
 Greg
 

Whatever you think of the interface (ide vs scsi) you have to accept
that a drive's speed is dependent on its rotation speed.

A 7200RPM IDE drive is faster than a 5400RPM SCSI drive, and a 10,000RPM
SCSI drive is faster than a 7200RPM drive.

If you have two 7200RPM drives, one scsi and one ide, each on their own
channel, then they should be about the same speed.

Multiple drives per channel give SCSI an edge purely because that's what
the scsi bus was designed for. You pay big dollars for this advantage
though.

Glenn McGrath



Re: performance limitations of linux raid

2000-04-24 Thread bug1

Clay Claiborne wrote:
 
 For what its worth, we recently built an 8 ide drive 280GB raid5 system.
 Benchmarking with HDBENCH we got  35.7MB/sec read and 29.87MB/sec write. With
 DBENCH and 1 client we got 44.5 MB/sec with 3 clients it dropped down to about
 43MB/sec.
 The system is a 600Mhz P-3 on a ASUS P3C2000 with 256MB of ram, the raid drives
 are 40GB Maxtor DMA66, 7200 RPM, and each is run as master on its own channel.
 
 Turning on DMA seems to be the key. Benchmarking the individual drives with
 HDBENCH we got numbers like 2.57MB/sec read and 3.27MB/sec write with DMA off
 and it jumped up to 24.7MB/sec read and 24.2MB/sec write with it on.
 
 That, and  enough processing power to see that paritity calc is not a
 bottleneck.
 

Can you use your raid system with DMA turned on or do you get irq
timeouts like me?



Re: performance limitations of linux raid

2000-04-24 Thread Seth Vidal

 A 7200RPM IDE drive is faster than a 5400RPM SCSI drive, and a 10,000RPM
 SCSI drive is faster than a 7200RPM drive.
 
 If you have two 7200RPM drives, one scsi and one ide, each on there own
 channel, then they should be about the same speed.
 

Not entirely true - the DMA capabilities of IDE could provide faster
transfer modes than your avg scsi card could generate.

I have a 7200 RPM LVD scsi drive and a 7200RPM UDMA ide drive and the IDE
wins EVERY SINGLE TIME.

-sv





Re: performance limitations of linux raid

2000-04-24 Thread Bill Anderson

Seth Vidal wrote:
 
  A 7200RPM IDE drive is faster than a 5400RPM SCSI drive, and a 10,000RPM
  SCSI drive is faster than a 7200RPM drive.
 
  If you have two 7200RPM drives, one scsi and one ide, each on there own
  channel, then they should be about the same speed.
 
 
 Not entirely true - the DMA capabilities of IDE could provide faster
 transfer modes than your avg scsi card could generate.
 
 I have a 7200 RPM LVD scsi drive and a 7200RPM UDMA ide drive and the IDE
 wins EVERY SINGLE TIME.
 
 -sv


I think everyone seems to be missing the _point_ of SCSI. SCSI is NOT
just about raw speed, even though some SCSI has a speed of 160MB/second.
SCSI has many advantages over IDE, or EIDE, such as command queuing.
Let's see your IDE drive handle 10 or 15 *simultaneous* reads/writes.
Let's see it have an MTBF of over 1 Million hours. Let's see it connected
along with 13 other drives on the same chain. Let's see it run on cables
running for several yards or meters. Let's see it connected to multiple
computers. Heck, just for fun, let's run IP over IDE. Oh, wait, that's
SCSI that can do that ;)

The point is, comparing speed of SCSI vs any IDE variant is like
comparing apples and oranges. That said, comparing two drives of any
variant, and basing their performance upon the rotational speed is also
an error. RPMs are not the sole determining factor. Other factors
include the size of the drive in MB/GB, and the seek or access time.

I have, for example, a 5400RPM SCSI2 drive that still outperforms
7200RPM IDE drives colleagues have. I have seen 6X SCSI CDROMs that
perform better than 8X EIDEs. I have a 7200RPM SCSI outperforming 10Krpm
IDE drives.

The determining factor in the choice of SCSI vs. IDE should be in what
the machine will be doing. If all you need is a desktop machine for
gaming and/or basic surfing/office type work, you probably don't need
SCSI. If you are going to run a server that may be running some
intensive I/O, and need reliability and longevity, you should be looking
at SCSI, not IDE.

After all, if speed is your sole criterion, chances are you are missing
something. ;^)

-- 
In flying I have learned that carelessness and overconfidence are 
usually far more dangerous than deliberately accepted risks. 
  -- Wilbur Wright in a letter to his father, September 1900



Re: performance limitations of linux raid

2000-04-24 Thread Bill Anderson

bug1 wrote:
 
 
  I don't believe the specs either, because they are for the "ideal" case.
  However, I think that either your benchmark is flawed, or you've got a
  crappy controller.  I have a (I think) 5400 RPM 4.5GB IBM SCA SCSI drive in
  a machine at home, and I can easily read at 7MB/sec from it under Solaris.
  Linux is slower, but that's because of the drivers for the SCSI controller.
  I haven't done any benchmarks on my IDE drives because I already know that
  they're SLOW.
  Greg
 
 
 Whatever you think of the interface (ide vs scsi) you have to accept
 that a drives speed is dependent on its rotation speed.

Not entirely true. RPMs is but _one_ factor, not the determining factor.

 
 A 7200RPM IDE drive is faster than a 5400RPM SCSI drive, and a 10,000RPM
 SCSI drive is faster than a 7200RPM drive.

Not always true, if you are talking about data throughput and access
speed.

 
 If you have two 7200RPM drives, one scsi and one ide, each on there own
 channel, then they should be about the same speed.
 
 Multiple drives per channel give SCSI an edge purely because thats what
 the scsi bus was designed for. You pay a big dollars for this advantage
 though.

As well as various _other_ advantages, see my other post.


-- 
In flying I have learned that carelessness and overconfidence are 
usually far more dangerous than deliberately accepted risks. 
  -- Wilbur Wright in a letter to his father, September 1900



Re: performance limitations of linux raid

2000-04-23 Thread Edward Schernau

Chris Mauritz wrote:

  Ive done some superficial performance tests using dd, 55MB/s write
  12MB/s read, interestingly i did get 42MB/s write using just a 2 way ide
  raid0, and got 55MB/s write with one drive per channel on four channels
  (i had no problem writing, just reading) so surprisingly i dont think
  the drive interface is my bottleneck.
 
 I find those numbers rather hard to believe.  I've not yet heard of a
 disk (IDE or SCSI) that can reliably dump 22mb/sec which is what your
 2 drive setup implies.  Something isn't right.

Check http://www.tomshardware.com , they review the Promise IDE RAID
card
(the hacked one for $30).  They get some pretty insane throughputs
on some ATA66 drives.

 RAID0 seemed to scale rather linearly.  I don't think there would be
 much of a problem getting over 100mbits/sec on an array of 8-10 ultra2

I tested a 3 drive RAID0 set where 1 disk = 1.7 M/sec, 2 disks = 3.4
M/sec,
but 3 disks only = 3.9 M/sec.  These were 5 M/sec disks on a AHA2940.
I think SCSI bus overhead and contention issues play an important role.

Ed
-- 
Edward Schernau http://www.schernau.com
Network Architect   mailto:[EMAIL PROTECTED]
Rational Computing  Providence, RI, USA



Re: performance limitations of linux raid

2000-04-23 Thread bug1

Edward Schernau wrote:
 
 Chris Mauritz wrote:
 
   Ive done some superficial performance tests using dd, 55MB/s write
   12MB/s read, interestingly i did get 42MB/s write using just a 2 way ide
   raid0, and got 55MB/s write with one drive per channel on four channels
   (i had no problem writing, just reading) so surprisingly i dont think
   the drive interface is my bottleneck.
 
  I find those numbers rather hard to believe.  I've not yet heard of a
  disk (IDE or SCSI) that can reliably dump 22mb/sec which is what your
  2 drive setup implies.  Something isn't right.
 
 Check http://www.tomshardware.com , they review the Promise IDE RAID
 card
 (the hacked one for $30).  They get some pretty insane throughputs
 on some ATA66 drives.


Gee, they look pretty awesome, excellent performance.
The only drawback i can think of is that it wouldn't be as flexible as
software raid.
 
  RAID0 seemed to scale rather linearly.  I don't think there would be
  much of a problem getting over 100mbits/sec on an array of 8-10 ultra2
 
 I tested a 3 drive RAID0 set where 1 disk = 1.7 M/sec, 2 disks = 3.4
 M/sec,
 but 3 disks only = 3.9 M/sec.  These were 5 M/sec disks on a AHA2940.
 I think SCSI bus overhead and contention issues play an important role.
 
 Ed

I also see a linear performance increase with two drives, but after that it
tails off.

Glenn



Re: performance limitations of linux raid

2000-04-23 Thread Michael Robinton


On Sun, 23 Apr 2000, bug1 wrote:
 Chris Mauritz wrote:
  
   Hi, im just wondering has anyone really explored the performance
   limitations of linux raid ?
  
 
 Ive just managed to setup a 4 way ide raid0 that works.
 
 The only way i can get it working is to use *two* drives per channel.
 I have to do this as i have concluded that i cannot use both my onboard
 hpt366 channels and my pci hpt366 channels together.
 
 Ive done some superficial performance tests using dd, 55MB/s write
 12MB/s read, interestingly i did get 42MB/s write using just a 2 way ide
 raid0, and got 55MB/s write with one drive per channel on four channels
 (i had no problem writing, just reading) so surprisingly i dont think
 the drive interface is my bottleneck.

 IDE will always beat SCSI hands down for price/performance, but scsi is
 clearly the winner if you want 4 or more drives on one machine, as ide
 just doesnt scale.

one of the ABIT boards comes with 4 onboard IDE controllers instead of 
the usual 2. As I recall, the kernel software will allow you to run up to 
8 ide pci channels, and the 2.4 kernel reportedly will do more.
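
A quick way to see how many channels the kernel actually found:

    ls /proc/ide          # one ideN entry per detected channel
    dmesg | grep -i ide   # the IDE probe messages from boot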

Michael



performance limitations of linux raid

2000-04-22 Thread bug1

Hi, i'm just wondering has anyone really explored the performance
limitations of linux raid ?

Recognising one's limitations is the first step to overcoming them.

I've found that relative performance increases are better with fewer
drives.

I've been using raid for a year or so, i've never managed to get a 4-way
ide raid working efficiently (no timeouts), 2.3.99pre6pre5 is the best
i've had though.

Anyone have any thoughts on where the bottleneck is, or other
experiences with raid limitations ?


Glenn McGrath



Re: performance limitations of linux raid

2000-04-22 Thread Chris Mauritz

 From [EMAIL PROTECTED] Sun Apr 23 01:06:24 2000
 
 Chris Mauritz wrote:
  
   From [EMAIL PROTECTED] Sat Apr 22 21:37:37 2000
  
   Hi, im just wondering has anyone really explored the performance
   limitations of linux raid ?
  
   Recognising ones limitations is the first step to overcomming them.
  
   Ive found that relative performance increases are better with less
   drives.
  
   Ive been using raid for a year or so, ive never managed to get a 4-way
   ide raid working efficiently (no timeouts), 2.3.99pre6pre5 is the best
   ive had though.
  
   Anyone have any thoughts on where the bottleneck is, or orther
   experiences with raid limitations ?
  
  I've not had any real problems using striped sets of SCSI drives as
  RAID 0 and RAID 5.  You're always going to get rather crappy performance
  with lots of IDE drives unless you have only 1 drive per channel.  By
  the time you buy that many controllers, the cost is pretty much a
  wash with SCSI.
  
 
 Ive just managed to setup a 4 way ide raid0 that works.
 
 The only way i can get it working is to use *two* drives per channel.
 I have to do this as i have concluded that i cannot use both my onboard
 hpt366 channels and my pci hpt366 channels together.
 
 Ive done some superficial performance tests using dd, 55MB/s write
 12MB/s read, interestingly i did get 42MB/s write using just a 2 way ide
 raid0, and got 55MB/s write with one drive per channel on four channels
 (i had no problem writing, just reading) so surprisingly i dont think
 the drive interface is my bottleneck.

I find those numbers rather hard to believe.  I've not yet heard of a
disk (IDE or SCSI) that can reliably dump 22mb/sec which is what your
2 drive setup implies.  Something isn't right.

 I think read performance is a known problem, but at least i dont get
 lockups or timeouts anymore.
 
 IDE will always beat SCSI hands down for price/performance, but scsi is
 clearly the winner if you want 4 or more drives on one machine, as ide
 just doesnt scale.

It's not so hands down anymore.  SCSI drives are becoming quite cheap.
Yes, IDE is cheaper, but not THAT much cheaper...especially if you want
to use more than a couple of disks.

 SCSI drives are 50% dearer than ide, aren't they?
 
 What sort of performance do you get from your scsi sets ?

I was getting 45-50mb/sec with a striped set of 3 50gig Seagate
barracudas using software RAID0 and an Adaptec 2940U2W controller.
I used bonnie to test and used 4-5 times the system memory for 
the data size.

 I wonder what the fastest speed any linux software raid has gotten, it
 would be great if the limitation was a hardware limitation i.e. cpu,
 (scsi/ide) interface speed, number of (scsi/ide) interfaces, drive
 speed. It would be interesting to see how close software raid could get
 to its hardware limitations.

RAID0 seemed to scale rather linearly.  I don't think there would be
much of a problem getting over 100mbits/sec on an array of 8-10 ultra2
wide drives.  I ultimately stopped fiddling with software RAID on my
production boxes as I needed something that would reliably do hot
swapping of dead drives.  So I've switched to using Mylex ExtremeRAID
1100 cards instead (certainly not the card you want to use for low 
budget applications...heh).

Cheers,

Chris

-- 
Christopher Mauritz
[EMAIL PROTECTED]