Re: large ide raid system

2000-01-14 Thread Thomas Waldmann

 Cable length is not so much a pain as the number of cables. Of course with
 scsi you want multiple channels anyway for performance, so the situation
 is very similar to ide. A cable mess.

Well, it is at least only half or a third (or less) of the cable count of
"tuned" single-device-on-a-cable EIDE RAID systems (and you don't have those
big problems with cable length).

I didn't try LVD/U2W SCSI yet, but using UW SCSI you can put e.g. 2 or 3 IBM
DNES 9GB on a single UW cable without losing too much performance (these
drives are FAST while being affordable; each one does ~15MB/s).

Did anybody measure how this behaves with U2W/LVD?

How is performance when putting e.g. 4, 6 or 8 IBM DNES 9GB LVD drives on a
single U2W channel, compared to spreading them over multiple U2W channels?
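
One crude way to measure that yourself is to time big sequential reads,
one drive alone and then several on the same channel in parallel (a
sketch; device names and sizes are assumptions):

# one drive alone
time dd if=/dev/sda of=/dev/null bs=1024k count=256

# two drives on the same channel in parallel; if the channel
# saturates, per-drive throughput drops versus the single run
( time dd if=/dev/sda of=/dev/null bs=1024k count=256 ) &
( time dd if=/dev/sdb of=/dev/null bs=1024k count=256 ) &
wait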

Thomas



Hardware RAID chips

2000-01-14 Thread Gregory Leblanc

Since this list appears to be a good place for general RAID on Linux (or
Linux on RAID?) questions, I thought I'd ask: what do people think of
the StrongARM vs. the i960?  Our i960-based cards scream, but we don't
have any StrongARM-based ones yet (although I could probably get some if
the performance is better).
Greg



[profmad@mindspring.com: Raid Help]

2000-01-14 Thread James Manning

I haven't dealt with linear mode, but the stock RH kernel comes
with 0.90 RAID, so I'm not sure where the issue is here.


- Forwarded message from "Prof. Mad Crazy" [EMAIL PROTECTED] -

From: "Prof. Mad Crazy" [EMAIL PROTECTED]
To: [EMAIL PROTECTED]
Subject: Raid Help
Date: Thu, 13 Jan 2000 23:51:43 -0500

I followed the instructions as per the readme file contained in the raidtools
documentation.  I've created my raidtab file from the example, but cannot get
it to initialize the drives.  Is it possible that I am not specifying the
proper devices?

I have done a "dmesg|more" to look at my available devices and it clearly
shows what I have.  But when I specify them in my raidtab it doesn't work.
Example: /dev/hda1 and /dev/hdb1 are my two devices, but when I set them in
my raidtab it doesn't initialize squat!

The exact raidtab file looks like this:

raiddev /dev/md0
raid-level  linear
nr-raid-disks   2
persistent-superblock   0
chunk-size  4
device  /dev/hda1
raid-disk   0
device  /dev/hdb1
raid-disk   1

When I run the "mkraid --really-force /dev/md0" command, it aborts with the
following error:
"analyzing superblock
 mkraid: aborted, see the syslog and the /proc/mdstat for potential clues."
I know how to do a "cat /proc/mdstat", but where the hell is the 'syslog'?
And what clues?

Someone please enlighten me.
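
[On a stock Red Hat system, syslogd normally writes kernel and daemon
messages to /var/log/messages; the exact file depends on /etc/syslog.conf,
so treat the path as an assumption:]

# watch the log while re-running mkraid in another terminal
tail -f /var/log/messages
# or fish out the md driver's recent complaints after the fact
grep -i md /var/log/messages | tail -20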



- End forwarded message -



RAID0 problem

2000-01-14 Thread Edward Schernau

Note the crosspost.

Patching with the ALPHA kernel patch and recompiling gives much more
verbose, but equally unhelpful, messages about WHY my RAID-0 array
fails.

I have raidtools-0.90 and the most recent RAID kernel patch.
Supposedly 2.2 handles this, but I guess that's a lie.

Does anyone have this working?
-- 
Edward Schernau http://www.schernau.com
Network Architect   mailto:[EMAIL PROTECTED]
Rational Computing  Providence, RI, USA, Earth



Re: kernel patch?

2000-01-14 Thread Danilo Godec

On Thu, 13 Jan 2000, Edward Schernau wrote:

 I am running 2.2.13, whose config script has options for RAID.  I have
 raidtools-0.90.  Why/Do I need to patch?  Pointers appreciated.

You have to patch because the plain 2.2.13 kernel has 'old style' RAID,
while raidtools-0.90 is designed for 'new style' RAID (which adds
autodetection and other nice stuff).

For 2.2.13 you can use the patch for 2.2.11, while for 2.2.14 you have to
get a new patch from http://www.redhat.com/~mingo/raid-2.2.14-B1
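
Applying it is quick; a sketch, where the exact 2.2.11 patch file name
is an assumption (check the archive directory for the current one):

cd /usr/src/linux
# dry run first: the 2.2.11 patch against 2.2.13 may produce rejects
patch -p0 --dry-run < ../raid0145-19990824-2.2.11
patch -p0 < ../raid0145-19990824-2.2.11
# then re-run make menuconfig, enable the RAID options, and rebuild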

D.





Re: kernel patch?

2000-01-14 Thread Gregory Leblanc

Edward Schernau wrote:
 
 I am running 2.2.13, whose config script has options for RAID.  I have
 raidtools-0.90.  Why/Do I need to patch?  Pointers appreciated.

The kernel config options in the 2.2.x series are not compatible with
raidtools-0.90.  For kernel 2.2.13, you can download the patch from
kernel.org (or your closest mirror) under the people directory for the
linux kernel.



Re: kernel patch?

2000-01-14 Thread James Manning

[ Thursday, January 13, 2000 ] Edward Schernau wrote:
 I am running 2.2.13, whose config script has options for RAID.  I have
 raidtools-0.90.  Why/Do I need to patch?  Pointers appreciated.

Yes, you do... all stock kernels ship with 0.4x RAID.
Use the 2.2.11 patch from kernel.org's linux/daemons/raid/alpha

*or*

Use an -ac patch (linux/kernel/alan, I believe), which will include
the RAID 0.90 support.
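
Either way, you can verify which driver the running kernel has (a
sketch; the exact message text varies by version):

# the 0.90 driver announces itself in the boot messages
dmesg | grep -i raid
# the /proc/mdstat format also differs visibly between 0.4x and 0.90
cat /proc/mdstat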

James
-- 
Miscellaneous Engineer --- IBM Netfinity Performance Development



Compaq SmartArray 221

2000-01-14 Thread Mindaugas Riauba


  Does anyone have any experience with the Compaq SmartArray 221 RAID
controller in a non-Compaq server?
  We are trying to run it on the Intel L440GX motherboard with no success.
The Intel BIOS seems to detect it, and Compaq ROMpaq detects it, but the
Array Configuration Utility v1.20 cannot detect it.
  Do any Linux/DOS utilities for array configuration exist?
Or is the only solution to return the controller and forget Compaq like a
bad dream?

  Mindaugas




Re: raid145 patches for 2.2.14 anywhere?

2000-01-14 Thread James Manning

[ Thursday, January 13, 2000 ] Thomas Gebhardt wrote:
 just looked for the raid patches for 2.2.13 or 2.2.14 in the kernel
 archive.  The last patches that I have found are for 2.2.11, and at least
 one hunk cannot be applied to the newer kernel sources without getting
 your hands dirty.  Can I get the patches for the newer kernels anywhere?

the 2.2.11 patch can apply to .12 and .13 (ignore the rejects)
the .14 patch is at www.redhat.com/~mingo (the file name starts with "raid")

James
-- 
Miscellaneous Engineer --- IBM Netfinity Performance Development



benchmark results + questions

2000-01-14 Thread Holger Kiehl

Here are some results I got with bonnie:

        -------Sequential Output-------- ---Sequential Input-- --Random--
        -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU  /sec %CPU
1 768  3328 46.2  3058  2.8  3601  7.9  7764 98.5 31810 21.1 284.4  3.6
2 768  6610 98.8 28498 38.2 12835 37.1  7323 93.7 28203 20.4 313.5  4.4
3 768  6525 96.9 40281 86.6 18871 57.6  7511 97.0 47529 37.2 344.8  5.3
4 768  6608 98.7 39775 63.1 17404 53.8  7463 96.0 41187 32.2 341.5  4.9
5 768  6582 97.6 38862 61.6 17510 51.0  7497 96.3 41476 30.5 346.0  5.5
6 768  6575 97.4 39471 60.1 15493 44.2  7577 97.3 40808 33.3 371.9  6.3
7 768  6625 97.6 40965 59.9 14219 40.3  7592 97.7 42571 35.9 271.1  4.9
8 768  7121 99.8 72177 73.5 21740 49.5  7711 97.9 64926 45.6 265.8  3.8


1) GDT 6518 RD (32MB EDO-RAM) Raid 5 (chunksize=? stride=8) 2.2.12
2) Software Raid 5 on one Adaptec U2W Controller (chunksize=64 stride=8)
   One disk set to 10MB/s 2.2.12
3) Software Raid 5 on one Adaptec U2W Controller (4 disks) and a Symbios 875
   (2 disks) (chunksize=64 stride=16) 2.2.12
4) Software Raid 5 on one Adaptec U2W (chunksize=64 stride=16) 2.2.12
5) Software Raid 5 on one Adaptec U2W (chunksize=64 stride=16) 2.2.14
6) Software Raid 5 on one Adaptec U2W (chunksize=32 stride=8) 2.2.14
7) Software Raid 5 on one Adaptec U2W (chunksize=8 stride=2) 2.2.14
8) Software Raid 0 on one Adaptec U2W (chunksize=4) 2.2.14

2 PIII 450MHz on an Asus P2B-DS with 256 MB ECC RAM and 6 IBM DNES 9GB LVD
drives. Distribution is RH 6.1 and the block size of the file system is
always 4K.
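
Numbers of this shape come from an invocation along these lines (a
sketch; the mount point is an assumption, and 768MB is three times the
256MB of RAM, so the buffer cache cannot hide the disks):

bonnie -d /mnt/raid -s 768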

Although the Symbios 875 is only a UW controller, it does not make much
difference.  In fact I did some other benchmarks where lots of small files
are copied and ftp'ed (locally) around (for 7 hours!), and those showed
better results for the single controller.

What puzzles me are the bad results for the hardware raid controller when
using Bonnie.  Doing the 7-hour benchmark with lots of small files, I come
close to the results of the software raid system.

However, just writing a 30MB file to disk would produce the following
results on a hardware raid 5:

 Write file ( 512Byte)  :  0.365 s  84335 KB/s
 With sync  :  6.252 s  4913 KB/s
 Write file (1024Byte)  :  0.260 s  118260 KB/s
 With sync  :  5.935 s  5176 KB/s
 Write file (4096Byte)  :  0.182 s  169125 KB/s
 With sync  :  5.867 s  5236 KB/s
 Write file (8192Byte)  :  0.192 s  160496 KB/s
 With sync  :  5.872 s  5232 KB/s

and the same thing done on a software raid 5:

 Write file ( 512Byte)  :  0.465 s  66086 KB/s
 With sync  :  1.117 s  27521 KB/s
 Write file (1024Byte)  :  0.327 s  94183 KB/s
 With sync  :  1.20 s  30131 KB/s
 Write file (4096Byte)  :  0.322 s  95614 KB/s
 With sync  :  0.940 s  32699 KB/s
 Write file (8192Byte)  :  0.262 s  117377 KB/s
 With sync  :  0.947 s  32463 KB/s

[The "With sync" part is when putting an fsync() after the 30MB are written]

Why is the hardware raid so much slower when syncing to disk?

Holger




Re: [FAQ-answer] Re: soft RAID5 + journalled FS + power failure = problems?

2000-01-14 Thread Benno Senoner

Chris Wedgwood wrote:

  In the power+disk failure case, there is a very narrow window in which
  parity may be incorrect, so loss of the disk may result in inability to
  correctly restore the lost data.

 For some people, this very narrow window may still be a problem.
 Especially when you consider the case of a disk failing because of a
 power surge -- which also kills a drive.

  This may affect data which was not being written at the time of the
  crash.  Only raid 5 is affected.

 Long term -- if you journal to something outside the RAID5 array (ie.
 to raid-1 protected log disks) then you should be safe against this
 type of failure?

 -cw

Wow, really good idea to journal to a RAID1 array!

Do you think it is possible to do the following:

- N disks holding a soft RAID5 array.
- reserve a small partition on at least 2 disks of the array to hold a
  RAID1 array.
- keep the journal on this RAID1 partition.

Do you think that this will be possible? Are ext3 / reiserfs capable of
keeping the journal on a different partition than the one holding the FS?
(A raidtab sketch of such a layout follows below.)
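
In raidtab terms the layout could look something like this (a sketch
only; device names, disk count and chunk sizes are assumptions):

raiddev /dev/md0
raid-level              5
nr-raid-disks           3
persistent-superblock   1
chunk-size              64
device                  /dev/sda2
raid-disk               0
device                  /dev/sdb2
raid-disk               1
device                  /dev/sdc2
raid-disk               2

raiddev /dev/md1
raid-level              1
nr-raid-disks           2
persistent-superblock   1
chunk-size              4
device                  /dev/sda1
raid-disk               0
device                  /dev/sdb1
raid-disk               1

Here /dev/md0 is the main soft RAID5 array over the large partitions, and
/dev/md1 is a small RAID1 over leftover partitions on two of the same
disks, intended to hold the journal.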

That would really be great!

Benno.




Re: [FAQ-answer] Re: soft RAID5 + journalled FS + power failure = problems?

2000-01-14 Thread D. Lance Robinson

Ingo,

I can fairly regularly generate corruption (data or ext2 filesystem) on a
busy RAID-5 by adding a spare drive to a degraded array and letting it
rebuild the parity.  Could the problem be from the bad (illegal) buffer
interactions you mentioned, or are there other areas that need fixing as
well?  I have been looking into this issue for a long time without
resolution.  Since you may be aware of possible problem areas: any ideas,
code or encouragement are greatly welcome.
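
Reproducing that scenario presumably amounts to the usual raidtools 0.90
hot-add (a sketch; device names are assumptions):

# /dev/md0 is running degraded; add the spare and let it rebuild
raidhotadd /dev/md0 /dev/sde1
# watch reconstruction progress while the filesystem stays busy
cat /proc/mdstat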

 Lance.


Ingo Molnar wrote:

 On Wed, 12 Jan 2000, Gadi Oxman wrote:

  As far as I know, we took care not to poke into the buffer cache to
  find clean buffers -- in raid5.c, the only code which does a find_buffer()
  is:

 yep, this is still the case. (Sorry Stephen, my bad.) We will have these
 problems once we try to eliminate the current copying overhead.
 Nevertheless there are bad (illegal) interactions between the RAID code
 and the buffer cache; I'm cleaning this up for 2.3 right now.  Especially
 the reconstruction code is a rathole.  Unfortunately, blocking
 reconstruction if b_count == 0 is not acceptable, because several
 filesystems (such as ext2fs) keep metadata caches around (e.g. the block
 group descriptors in the ext2fs case) which have b_count == 1 for a longer
 time.



Re: raid145 patches for 2.2.14 anywhere?

2000-01-14 Thread Ingo Molnar


On Thu, 13 Jan 2000, Thomas Gebhardt wrote:

 just looked for the raid patches for 2.2.13 or 2.2.14 in the kernel
 archive.  The last patches that I have found are for 2.2.11, and at least
 one hunk cannot be applied to the newer kernel sources without getting
 your hands dirty.  Can I get the patches for the newer kernels anywhere?

it's at:

http://www.redhat.com/~mingo/raid-2.2.14-B1

it applies cleanly to vanilla 2.2.14; do a 'patch -p0 < raid-2.2.14-B1'.

-- mingo