Re: performance limitations of linux raid

2000-05-05 Thread Christopher E. Brown

On Fri, 5 May 2000, Michael Robinton wrote:


Not entirely, there is a fair bit more CPU overhead running an
IDE bus than a proper SCSI one.
   
   A "fair" bit on a 500mhz+ processor is really negligible.
  
  
  Ehem, a fair bit on a 500MHz CPU is ~30%.  I have watched a
  *single* UDMA66 drive (with read-ahead, multiblock I/O, 32-bit mode, and
  DMA transfers enabled) on a 2.2.14 + IDE + RAID patched kernel take over 30%
  of the CPU during disk activity.  The same system with a 4 x 28G RAID0
  set running would be < .1% idle during large copies.  An identically
  configured system with UltraWide SCSI instead of IDE sits ~95% idle
  during the same ops.
 
 Try turning on DMA

Ahem, try re-reading the above, line 3 first word!
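
For reference, the drive settings mentioned above (read-ahead, multiblock
I/O, 32-bit mode, DMA) are the usual hdparm tunables.  A rough sketch, with
the device name and counts as examples only:

hdparm -d1 -c1 -m16 -a8 /dev/hda   # DMA, 32-bit I/O, 16-sector multcount, read-ahead of 8
hdparm -t /dev/hda                 # quick sequential-read sanity check afterwards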


---
As folks might have suspected, not much survives except roaches, 
and they don't carry large enough packets fast enough...
--About the Internet and nuclear war.





Re: performance limitations of linux raid

2000-05-04 Thread Christopher E. Brown

On Wed, 3 May 2000, Michael Robinton wrote:

 The primary limitation is probably the rotational speed of the disks and 
 how fast you can rip data off the drives.  For instance, the big IBM 
 drives (20 - 40 gigs) have a limitation of about 27MB/s for both the 7200 
 and 10k rpm models.  The drives to come will have to make trade-offs 
 between density and speed, as the technologies in the works have upper 
 constraints on one or the other.  So... given enough controllers (either 
 SCSI on-disk or individual IDE) the limit will be related to the 
 bandwidth of the disk interface rather than the speed of the processor 
 it's talking to.


Not entirely, there is a fair bit more CPU overhead running an
IDE bus than a proper SCSI one.

---
As folks might have suspected, not much survives except roaches, 
and they don't carry large enough packets fast enough...
--About the Internet and nuclear war.





Re: performance limitations of linux raid

2000-05-03 Thread Christopher E. Brown

On Sun, 23 Apr 2000, Chris Mauritz wrote:

  I wonder what the fastest speed any linux software raid has gotten, it
  would be great if the limitation was a hardware limitation i.e. cpu,
  (scsi/ide) interface speed, number of (scsi/ide) interfaces, drive
  speed. It would be interesting to see how close software raid could get
  to its hardware limitations.
 
 RAID0 seemed to scale rather linearly.  I don't think there would be
 much of a problem getting over 100mbits/sec on an array of 8-10 ultra2
 wide drives.  I ultimately stopped fiddling with software RAID on my
 production boxes as I needed something that would reliably do hot
 swapping of dead drives.  So I've switched to using Mylex ExtremeRAID
 1100 cards instead (certainly not the card you want to use for low 
 budget applications...heh).


Umm, I can get 13,000K/sec to/from ext2 from a *single*
UltraWide Cheetah (best case, *long* reads, no seeks).  100Mbit is only
12,500K/sec.


A 4-drive UltraWide Cheetah array will top out an UltraWide bus
at 40MByte/sec, over 3 times the max rate of 100Mbit Ethernet.
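
Working the units out (taking K as 1000 bytes for the comparison):

100 Mbit/s Ethernet = 100,000,000 bits/s / 8 = 12,500,000 bytes/s = 12,500 K/sec
UltraWide SCSI bus  = 40 MByte/s = 40,000 K/sec, a bit over 3 x 12,500 K/sec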

---
As folks might have suspected, not much survives except roaches, 
and they don't carry large enough packets fast enough...
--About the Internet and nuclear war.





SCSI - IDE RAID Adapters

2000-04-13 Thread Christopher E. Brown



Anyone here worked with one of those devices that plug into
the SCSI bus on one side and emulate one disk, and on the other do
hardware raid5 across 4 - 8 UDMA buses?


I ask because, while not normally something I would do, I need
to rig a large storage array in an evil environment.  No way am I mounting
eight > 1K$-each drives in a mobile application, but 5 28G UDMA drives
are < 1K total, and who cares if you kill a few per year.


Any leads, preferred units, etc?

---
As folks might have suspected, not much survives except roaches, 
and they don't carry large enough packets fast enough...
--About the Internet and nuclear war.





Re: raid-2.2.14-B1.14 doesn't patch 2.2.14 properly

2000-01-13 Thread Christopher E. Brown

On Wed, 12 Jan 2000, James Manning wrote:

 [ Wednesday, January 12, 2000 ] Scott Thomson wrote:
  Am I missing something here? 
  The source has just been freshly untarred from linux-2.2.14.tgz 
  This is just the first prompt.  It goes on and on...  
  
  patching file `linux/init/main.c' 
  Hunk #1 FAILED at 19. 
  Hunk #2 FAILED at 488. 
  Hunk #3 FAILED at 928. 
  Hunk #4 FAILED at 1426. 
  4 out of 4 hunks FAILED -- saving rejects to linux/init/main.c.rej 
  The next patch would create the file 
  `linux/include/linux/raid/linear.h', 
  which already exists!  Assume -R? [n] 
 
 Hmm works fine from here... maybe rm -rf linux or try checking
 your .tar.gz file:

I get

patching file `init/main.c'
Hunk #2 succeeded at 497 (offset 9 lines).
Hunk #3 succeeded at 931 (offset 3 lines).
Hunk #4 succeeded at 1435 (offset 9 lines).

from a clean linux-2.2.14.tar.gz
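
For the record, the sequence that gives the clean result above, starting
from a pristine tree (the patch filename is abbreviated from the subject
line; adjust names and paths to whatever you downloaded):

tar xzf linux-2.2.14.tar.gz
cd linux
patch -p1 --dry-run < ../raid-2.2.14-B1   # look for FAILED hunks before touching anything
patch -p1 < ../raid-2.2.14-B1             # -p1 strips the leading linux/ from the patch paths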

---
As folks might have suspected, not much survives except roaches, 
and they don't carry large enough packets fast enough...
--About the Internet and nuclear war.




RE: stripes of raid5s - crash

1999-10-19 Thread Christopher E. Brown

On Thu, 14 Oct 1999, Tom Livingston wrote:

 Florian Lohoff wrote:
  I went a bit further - Hung the machine - Couldn't log in (All Terms
  hang immediately) - Tried to reboot and when it hung at
  "Unmounting file..."
  I got a term, hit SysRq-T and saw many processes stuck in the D state.
 
  Seems something produces a deadlock (ll_rw_blk ?)  and all processes
  trying to access disk get stuck.
 
 Can you duplicate this using only one of the raid5 sets? I tried to cause
 the same behavior with a single raid5 set and it worked fine... but I did not
 layer raid on raid, perhaps this is where the issue is?


When working with a 5 x 18G RAID5 (0 spare) using 2.2.12SMP +
raid 2.2.11 (compiled in, not modules) I would get an endless stream
of messages about buffers when trying to mount the device; mke2fs and
e2fsck worked fine.  It seemed to happen when the array was in the
beginning of the reconstruct.


With 2.2.13pre15SMP + raid 2.2.11 I managed to get this a
couple of times, but only if I mount it right after the reconstruct starts
on a just-mkraided array.  If I wait till the reconstruct hits 2-3% it
mounts just fine.  I have not seen this on arrays smaller than 50G
(but this is not hard data; it could just be the faster reconstruct).
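
In other words, letting the resync get going before mounting avoids it;
a rough sketch with the raidtools of the day, device and mount point being
examples only:

mkraid /dev/md0
cat /proc/mdstat        # repeat until the resync shows a few percent done
mke2fs /dev/md0
mount /dev/md0 /mnt/array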

---
As folks might have suspected, not much survives except roaches, 
and they don't carry large enough packets fast enough...
--About the Internet and nuclear war.




Re: FW: Dream RAID System (fwd)

1999-10-12 Thread Christopher E. Brown


 Date: Wed, 6 Oct 1999 14:03:11 -0400 (EDT)
 From: [EMAIL PROTECTED]
 To: Kenneth Cornetet [EMAIL PROTECTED]
 Cc: "'[EMAIL PROTECTED]'" [EMAIL PROTECTED]
 Subject: Re: FW: Dream RAID System
 
 On Wed, 6 Oct 1999, Kenneth Cornetet wrote:
  Hardware RAID does not necessarily preclude speed. I put a Mylex extremeraid
  1164 (32MB cache, 2 channels) in a dual 450MHz P3 connected to 4 18GB 10K
  RPM seagate wide low voltage differential SCSI disks in a raid 5 config and
  got about 22 MB/sec reads and writes as reported by bonnie.
 
 Were you really getting 22MB/s writes (just as fast as reads)?  Was the
 bonnie test size at least 2x physical memory?  Those are encouraging
 numbers.  I'm looking at building a big file server using more drives and
 an extremeraid.  I was worried about write performance and wondering if
 I'd have to do RAID10 rather than RAID5.


File './Bonnie.1103', size: 1073741824
Writing with putc()...done
Rewriting...done
Writing intelligently...done
Reading with getc()...done
Reading intelligently...done
Seeker 1...Seeker 2...Seeker 3...start 'em...done...done...done...
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
fenris   1024 15716 96.7 45169 70.2 17574 45.8 18524 71.5 47481 38.5 509.5 13.2


System is a dual Xeon 500/1MB w/ 512MB main memory.  It has 3
SCSI busses, one UltraWide (DLT tape), and 2 Ultra2 LVD units (one
used, one for expansion).  There are 7 Seagate 18.2G Cheetah drives
installed; the last 6 are a 0-spare RAID5 array running left-symmetric
parity and a chunk size of 128.  The filesystem was created as follows.

mke2fs -b 4096 -i 16384 -R stride=32 -s 1 /dev/md0
tune2fs -c 1 -i 1d -m 10 /dev/md0
e2fsck -yf /dev/md0
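
For reference, the run above is a plain bonnie invocation along these
lines; the mount point is an example, and the 1GB file size is 2x the
512MB of main memory:

bonnie -d /mnt/md0 -s 1024 -m fenris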


As you can see, the read vs. write performance is close; not
exact, but close.  Of course this is software RAID5 on a high-end box
(that was serving as a Samba disk/printer server for several hundred
stations and 15 printers, and as a Squid proxy for the stations, during
the test).  Why was it I was looking at hardware RAID again?

15.3MB/sec Char Write
18.0MB/sec Char Read

44.1MB/sec Block Write
46.3MB/sec Block Read

17.1MB/sec ReWrite







---
As folks might have suspected, not much survives except roaches, 
and they don't carry large enough packets fast enough...
--About the Internet and nuclear war.




Re: Why RAID1 half-speed?

1999-08-31 Thread Christopher E. Brown

On Mon, 30 Aug 1999, Andy Poling wrote:

 On Mon, 30 Aug 1999, Mike Black wrote:
  But this is doing disk "reads" not "writes".  It shouldn't need to read from
  both disks.
 
 It does read balancing.  That gives the best performance for an optimized
 configuration (in this case that would mean the two disks split across two
 IDE busses).
 
 
  Read performance will usually scale close to N*P, while write performance
  is the same as on one device, or perhaps even less. Reads can be done in
  parallel, but when writing, the CPU must transfer N times as much data to
  the disks as it usually would (remember, N identical copies of all data must
  be sent to the disks).
 
 This is in the abstract, and probably assumes an optimized configuration.
 
 
  I should expect at least the same performance (when disks are on the same
  bus) -- that's why I'm confused.  I would actually expect better performance
  given that the limit is the disk i/o speed and not the data bus.  If disks
  were on separate buses I would expect better performance.
 
 I don't know much about IDE, but I don't think it will do well when getting
 pounded with requests for both disks (due to read balancing).  It doesn't
 have a capability analogous to SCSI's command tag queueing to make a shared
 disk I/O bus practical.  My understanding of IDE is that each command must
 completely finish before another can be begun.  



Remember, IDE has a master/slave control scheme: you can only
talk to one drive at a time.  There is no connect/disconnect, no
"batch commands/data to one drive, disconnect, batch the other drive,
repeat".  Instead it is: select the master, write these blocks, wait
until finished, select the other drive, write these blocks, wait,
repeat.  Basically, the IDE interface sucks for this.
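
Roughly, the access pattern difference looks like this (simplified sketch,
not the exact bus protocol):

IDE, two drives on one cable (strictly serialized):
    select master -> issue command -> wait for it to finish ->
    select slave  -> issue command -> wait for it to finish -> repeat

SCSI with tagged queueing (disconnect/reconnect):
    queue command to disk 0, drive disconnects and works on it
    queue command to disk 1, drive disconnects and works on it
    drives reconnect and return data as they finish, so seeks overlap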







 
 Add in some overhead for using both devices, and I'm really not surprised
 it's slower than the single disk.
 
 -Andy
 
 

---
As folks might have suspected, not much survives except roaches, 
and they don't carry large enough packets fast enough...
--About the Internet and nuclear war.




2.2.12 and raid0145-19990824-2.2.11.gz

1999-08-27 Thread Christopher E. Brown




Any issues with this patch and the 2.2.12 release?  Everything
applied except for changes to fs.h, and it appears those changes were
already there.

Or should I just revert to patch-2.2.12-final.gz, the last
2.2.12 pre-patch to have RAID 0.90 in it?



First Law of System Requirements:
 "Anything is possible if you don't know what you're talking about..."




Re: RAID under 2.2.10

1999-07-05 Thread Christopher E. Brown

On Sun, 4 Jul 1999, John E. Adams wrote:

 Tom Livingston wrote:
 
 As others have pointed out recently on this list, you can get raid working
 with a 2.2.10 kernel.  Ingo posted a fix, which involves changing just one
 line. 
 
 The fix is only one line, BUT that one line occurs TWICE.  Change both
 occurrences of 'current->priority = 0' to 'current->priority = 1'
 in /usr/src/linux/drivers/block/md.c.  Ideally, that constant should
 have a symbolic name like LOWEST_PRIORITY.
 
   johna


So if I am distilling the correct data here, one patches 2.2.10
with the latest 2.2.6 raid patch, ignores the rejects, changes those
two lines, and then has a working raid system?
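
(i.e., mechanically something along these lines, assuming the usual
/usr/src/linux layout; check by hand that exactly two lines change:)

cd /usr/src/linux
grep -n 'current->priority = 0' drivers/block/md.c    # should show the two occurrences
perl -pi -e 's/current->priority = 0/current->priority = 1/' drivers/block/md.c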

Are there any issues with the AC patches?


First Law of System Requirements:
 "Anything is possible if you don't know what you're talking about..."



Re: Upgrading RAID

1999-01-04 Thread Christopher E. Brown

On Thu, 4 Nov 1999, Sean Roe wrote:

 Is there a procedure for adding more drives to a RAID system and increasing
 the size of the partitions?  We have Mylex AcceleRAID 250s driving
 the RAID.  I am a little lost as to how to do it.  I mean when and if the
 MySQL server ever breaks 10-12 gig of data I would like to have an easy way
 out.


Those of us running insane DB sizes have a simple out: one
RAID5 spool per database.  This is of course useless if you must
stuff that many tables into one database (though with the database.table
syntax available you don't really have to care).


Personally, all my major databases are on their own filesystems;
I want the massive number of rewrites in the different DBs to take
place on different filesystems, preferably on different spindles/sets of
spindles.
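
One way to do the one-filesystem-per-database layout (paths and names are
examples only; MySQL, at least in the usual Linux setup, will follow a
symlinked database directory):

mount /dev/md1 /vol/db1                     # RAID5 set dedicated to this database
mv /var/lib/mysql/bigdb /vol/db1/bigdb      # relocate the database directory
ln -s /vol/db1/bigdb /var/lib/mysql/bigdb   # MySQL sees it in the usual place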


---
As folks might have suspected, not much survives except roaches, 
and they don't carry large enough packets fast enough...
--About the Internet and nuclear war.