Re: Is Raid as fraught as it looks?

1999-04-21 Thread Piete Brooks

 Any chance of you providing a 'cookbook' on how to do this? Just an
 illustration of the commands to execute and what to do in what order?

Note that the fancy / clean way of doing this (rather than just ZAPping
the partition type, the superblock or some such) needs patches to the
patches ...

Also, things get a *lot* more hairy if it's the root FS (and to some extent
harder if the FS is one which cannot be unmounted), so take care to note
whether it's a data FS or a root FS ...
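
For the simple, non-in-place route on a data FS (back everything up, build the
array, make a new filesystem, restore), a rough sketch with raidtools 0.90
might look like the following. The device names /dev/sdc1 and /dev/sdd1 and
the mount point are placeholders, and this is not the 'clean' in-place
conversion mentioned above:

  # /etc/raidtab entry for the new mirror (placeholder devices)
  raiddev /dev/md0
      raid-level            1
      nr-raid-disks         2
      nr-spare-disks        0
      persistent-superblock 1
      chunk-size            4     # required by mkraid even for RAID-1
      device                /dev/sdc1
      raid-disk             0
      device                /dev/sdd1
      raid-disk             1

  mkraid /dev/md0              # builds the mirror; destroys data on both partitions
  mke2fs /dev/md0              # new filesystem on the md device
  mount /dev/md0 /mnt/data     # mount it and restore the backed-up data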



Re: Help with mirroring and Re: Raid 0 - mkraid aborted...

1999-04-21 Thread Osma Ahvenlampi

"Bruno Prior" [EMAIL PROTECTED] writes:
 The fact that the linux source includes legacy raid code which is incompatible with
 the latest raidtools seems to cause a lot of misunderstandings. Can't the legacy
 stuff be taken out and turned into a patch for those who like the older tools?

I understand the current alpha code is scheduled for integration into
the base kernel soon. However, if the error diagnostics of the
raidtools were a bit better, we wouldn't have this question pop up all
the time. "mkraid aborted" doesn't tell anyone anything; "protocol
error, upgrade kernel raid support" would be much more informative.

-- 
Osma Ahvenlampi



benchmarks

1999-04-21 Thread Seth Vidal

I've mostly been a lurker, but recent changes at my company have piqued my
interest in the performance of sw vs hw raid.

Does anyone have any statistics online for sw raid (1,5) vs hw raid
(1,5) on a Linux system?

Also, is there any way to have a hot-swappable sw raid system (IDE or SCSI)?
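
For what it's worth, I gather the 0.90 raidtools provide raidhotremove and
raidhotadd for pulling a member disk out of a running array and adding a
replacement once the kernel has marked it failed; whether the bus and drive
cage cope with the physical swap is a separate question. A rough sketch, with
placeholder device names (I haven't tried this myself):

  raidhotremove /dev/md0 /dev/sdc1   # remove the failed member from the array
  # physically swap the disk and repartition it to match the old one
  raidhotadd /dev/md0 /dev/sdc1      # add the new disk; reconstruction starts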

RTFM's and web page pointers are gladly accepted.

thanks
-sv



Re: benchmarks

1999-04-21 Thread Josh Fishman

Seth Vidal wrote:
 
 I've mostly been a lurker, but recent changes at my company have piqued my
 interest in the performance of sw vs hw raid.

 Does anyone have any statistics online for sw raid (1,5) vs hw raid
 (1,5) on a Linux system?

We have a DPT midrange SmartRAID-V, and we're going to do testing on two
7 x 17.5 GB RAID 5 arrays, one software, one hardware. We'll post the
results as soon as they're available. (Testing will happen on a dual PII
350 w/ 256 MB RAM & a cheesy IDE disk for /, running 2.2.6 (or later).)

What kind of tests would people like to see run? The main test I'm
going for is simply stability under load on biggish file systems &
biggish file operations.
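
A couple of simple tests that might be worth including, as a sketch only (the
mount point /raid and the sizes are placeholders; the bonnie file should be at
least twice RAM so the cache doesn't hide the disks):

  # raw sequential write and read of a large file
  dd if=/dev/zero of=/raid/bigfile bs=1024k count=1024
  dd if=/raid/bigfile of=/dev/null bs=1024k

  # bonnie: per-character and block I/O plus seeks; -s is the file size in MB
  bonnie -d /raid -s 512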

 -- Josh Fishman
NYU / RLab



Re: benchmarks

1999-04-21 Thread Seth Vidal

  I've mostly been a lurker, but recent changes at my company have piqued my
  interest in the performance of sw vs hw raid.

  Does anyone have any statistics online for sw raid (1,5) vs hw raid
  (1,5) on a Linux system?
 
 We have a DPT midrange SmartRAID-V, and we're going to do testing on two
 7 x 17.5 GB RAID 5 arrays, one software, one hardware. We'll post the
 results as soon as they're available. (Testing will happen on a dual PII
 350 w/ 256 MB RAM & a cheesy IDE disk for /, running 2.2.6 (or later).)

 What kind of tests would people like to see run? The main test I'm
 going for is simply stability under load on biggish file systems &
 biggish file operations.

Stability, plus read performance and write performance.

Possibly also tuning for mostly-read workloads, mostly-write workloads,
and mixed read/write workloads.

-sv



Re: RELEASE: RAID-0,1,4,5 patch 1999.04.21, 2.0.36/2.2.6

1999-04-21 Thread Hans-Georg v. Zezschwitz


Hi!

I'm a bit disappointed that Martin Bene's patches are not included.

We are now running 3 systems based on 19990309 + his patches
+ 2.0.36, and everything has worked fine, from setting up
to recovering RAID1. Another RAID5 system is working fine,
but I have not really stressed it yet.

Though "alpha" for a disk-I/O subsystem should mean something
different from "alpha" for a GUI application, and though I
can't judge how many parts of the code remain "ugly", I'm
pretty happy with the way things work so far.

What makes me most unhappy is that I have to tell our guys
to

  - take the 2.0/2.x kernel
  - add the alpha patches
  - add the Martin Bene patches on top of the alpha stuff

just to get the kernel we need (roughly the sequence sketched below).
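
For reference, the build sequence I mean is roughly the following; the patch
file names are only examples (in particular the name of Martin Bene's patch is
a placeholder), and the exact snapshot dates will vary:

  cd /usr/src
  tar xzf linux-2.0.36.tar.gz
  cd linux
  # apply the alpha RAID patch for this kernel version
  patch -p1 < ../raid0145-19990309-2.0.36
  # apply Martin Bene's fixes on top of the alpha patch
  patch -p1 < ../martin-bene-raid-fixes.diff
  make menuconfig
  make dep && make bzImage && make modules modules_install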

To conclude, I suppose that if the stuff is still considered
alpha, it would not be such a high risk to merge Martin's
patches in.

My personal experience was: RAID1/5 is reliable and working!

Congratulations and thanks, 


Georg



Re: Is Raid as fraught as it looks?

1999-04-21 Thread Michael

  Any chance of you providing a 'cookbook' on how to do this? Just an
  illustration of the commands to execute and what to do in what order?
 
Probably not until I migrate to the new raid tools. I did all this 
with the 0.42 tool set and am waiting for the current stuff to 
stabilize a little more. If you want to mess with it, steal one of 
your swap partitions, segment it into little 5 MB pieces, and 
experiment. That's how I figured out how to do it the first time, 
since the old tools don't really want you to mkraid a degraded set. I 
just put the same disk in the set twice and the tools were happy. As 
soon as the mkraid was complete, I removed the duplicate entry and 
created the file system on the degraded md device. I don't know if 
this will work with the new tools. I've asked Mingo if this 
capability will be added to the tool set.
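
As a rough illustration of that duplicate-entry trick, written here in the
newer raidtab syntax used elsewhere in this thread (the 0.42 tools use a
different configuration format, the partition name is a placeholder, and I
have not verified this against the new tools):

  raiddev /dev/md0
      raid-level      1
      nr-raid-disks   2
      nr-spare-disks  0
      chunk-size      4
      device          /dev/sda9
      raid-disk       0
      device          /dev/sda9     # same partition listed twice to satisfy mkraid
      raid-disk       1

  mkraid /dev/md0      # then remove the duplicate entry from the raidtab
  mke2fs /dev/md0      # create the filesystem on the now-degraded mirror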

Michael
[EMAIL PROTECTED]



Re: System hang at shutdown

1999-04-21 Thread Paul Jakma

Hi Aaron,

Have you tried upgrading glibc?

There was a problem with older glibc versions, where they would never close
/etc/ld.so.cache, and hence the disk could not be unmounted.

Try upgrading to the latest glibc for RH 5.2. (I think it's in updates.)
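
If you want to check whether that is what's happening, something along these
lines run just before the unmount should show what is still holding the
filesystem open (a sketch; both tools ship with RH 5.2):

  # list processes with files open on the root filesystem
  fuser -vm /

  # or look specifically for the cached linker map
  lsof | grep ld.so.cache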


On Wed, 21 Apr 1999, Aaron D. Turner wrote:

  
  Not sure if this is a RAID issue, but I'm running out of things to blame
  it on.
  
  P2 450Mhz
  Genuine SymBios 53c895
  2x Quantum Atlas III 9GB
  RAID 1 for all partitions except for swap
  2.0.36 with raid0145-19990108-2.0.36
  RAID tools 0.90
  RH 5.2
  Execute an init 0 or init 6 and it starts the shutdown process.  The last
  lines it displays:
  
  Stopping kernel services: kerneld
  INIT: no more processes left in this runlevel
  
  And then it just hangs until I hit the reset button.  It then boots,
  fsck's all the partitions, and then RAID syncs the secondary to the
  master.  Luckily this doesn't seem to cause any corruption.  However, since
  the machine is about a 45-minute drive away, it's not very optimal to have
  to hit reset every time I do an init 6 after a new kernel is compiled.
  
  Thoughts anyone?  (raidtab/mdstat follow)
  
  Thanks!
  
  -- /etc/raidtab
  # Root partition
  raiddev /dev/md0
  raid-level  1
  nr-raid-disks   2
  nr-spare-disks  0
  chunk-size  128
  device  /dev/sda2
  raid-disk   0
  
  device  /dev/sdb2
  raid-disk   1
  
  # /var
  raiddev /dev/md1
  raid-level  1
  nr-raid-disks   2
  nr-spare-disks  0
  chunk-size  128
  
  device  /dev/sda3
  raid-disk   0
  
  device  /dev/sdb3
  raid-disk   1
  
  # /tmp
  raiddev /dev/md2
  raid-level  1
  nr-raid-disks   2
  nr-spare-disks  0
  chunk-size  128
  
  device  /dev/sda5
  raid-disk   0
  
  device  /dev/sdb5
  raid-disk   1
  
  # /usr
  raiddev /dev/md3
  raid-level  1
  nr-raid-disks   2
  nr-spare-disks  0
  chunk-size  128
  device  /dev/sda6
  raid-disk   0
  
  device  /dev/sdb6
  raid-disk   1
  
  # /usr/local
  raiddev /dev/md4
  raid-level  1
  nr-raid-disks   2
  nr-spare-disks  0
  chunk-size  128
  
  device  /dev/sda7
  raid-disk   0
  
  device  /dev/sdb7
  raid-disk   1
  
  # /home
  raiddev /dev/md5
  raid-level  1
  nr-raid-disks   2
  nr-spare-disks  0
  chunk-size  128
  
  device  /dev/sda8
  raid-disk   0
  
  device  /dev/sdb8
  raid-disk   1
  
  -- /proc/mdstat
  
  Personalities : [raid1] 
  read_ahead 1024 sectors
  md0 : active raid1 sdb2[1] sda2[0] 264960 blocks [2/2] [UU]
  md1 : active raid1 sdb3[1] sda3[0] 530048 blocks [2/2] [UU]
  md2 : active raid1 sdb5[1] sda5[0] 200704 blocks [2/2] [UU]
  md3 : active raid1 sdb6[1] sda6[0] 1052160 blocks [2/2] [UU]
  md4 : active raid1 sdb7[1] sda7[0] 5180800 blocks [2/2] [UU]
  md5 : active raid1 sdb8[1] sda8[0] 1052160 blocks [2/2] [UU]
  unused devices: <none>
  
  
  

-- 
Paul Jakma
[EMAIL PROTECTED]   http://hibernia.clubi.ie
PGP5 key: http://www.clubi.ie/jakma/publickey.txt
---
Fortune:
Anti-trust laws should be approached with exactly that attitude.