You asked for experiences:

Pfeiffer University uses Linux for NFS, SMB, mail and web services. We wanted
security for our data but could not afford RAID controllers, so we have been
using Red Hat 5.0 boxes with a 2.0.35 or 2.0.36 kernel and the ancient
raidtools (0.40) on three different machines, with a combined uptime of over a year.

Never any errors or any problems, but I think that may have to do with the fact
that we never reboot, which is the most dangerous time for these old initrd- or
rc.sysinit-started arrays.

Recently, one of these was replaced with a Pentium II 350, 128 MB RAM, two 8.4 GB
IBM IDE disks, Red Hat 5.2, kernel 2.2.1 and Samba 2.0.0, using the first 2.2 RAID
patch, with a lot of RAID partitions across the two disks.

Again, never any errors or any problems, but this machine has been rebooted
many times because we change NICs and such for testing. The new auto-start code
rules. The performance of this code is outstanding compared to any hardware
controller that we could afford (we have an AMI MegaRAID from a Dell NT server
here).

We trust all our users' home directories, Maildirs, websites, databases and Samba
shares to the 1998-10-05 RAID code, or the 1999-01-28 code.

With almost 2000 users, and using qmail's Maildir-in-the-homedir setup, this
code gets hit hard, and it stands up, day in and day out. Thanks mingo, Martin Bene,
Jakob, Linus, RMS, et al.

Allan Noah
Pfeiffer University
Linux/network admin

"Tony Wildish, exts 77103 / 71207" <[EMAIL PROTECTED]> said: 

> Hello,
> 
> > [root@xxxxxx raidtools-0.90]# mkraid --force /dev/md0
> > DESTROYING the contents of /dev/md0 in 5 seconds, Ctrl-C if unsure!
> > handling MD device /dev/md0
> > analyzing super-block
> > disk 0: /dev/sdb1, 4188937kB, raid superblock at 4188864kB
> > disk 1: /dev/sdc1, 4188937kB, raid superblock at 4188864kB
> > mkraid: aborted
> 
>  I've been playing with raid-1 too, with 0.90-0399 on RH 5.2. I have been
> creating/destroying raid partitions quite a lot and have had this error a
> few times. I now have what I believe to be a recipe for success:
> 
> - fdisk your disks to remove the partitions that will be in the raid.
> Completely remove them, don't just change the partition type.
> 
> - reboot.
> 
> - fdisk your partitions into existence, with type 'fd' as advertised in
> the doc. Do not reboot.
> 
> - run 'mkraid --really-force /dev/md<n>'. I have found that if I run
> mkraid on the device without the --really-force option, then try it with
> it, it doesn't work. I need to really-force it the first time.
> 
> - wait until the raid is created ('cat /proc/mdstat' to check it), then
> 'mke2fs /dev/md<n>'.
> 
> - reboot, and for me it works, I see my device started.
> 
>  Hope this helps. It may be that I am doing more than necessary, but I
> have had this method work a few times without fail. Comments welcomed!
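The quoted recipe can be condensed into a shell sketch. The device name /dev/md0
follows the quoted mkraid output; the /proc/mdstat polling loop is an illustrative
addition of mine, not something from the original post:

```shell
#!/bin/sh
# Sketch of the recipe above -- DESTRUCTIVE, placeholder device names,
# do not run verbatim.
#
# 1) fdisk: delete the old partitions entirely (don't just change their
#    type), then reboot.
# 2) fdisk: recreate them with type 'fd' (Linux raid autodetect); do NOT
#    reboot this time.
# 3) build the array, really-forcing it on the very first attempt:
#        mkraid --really-force /dev/md0
# 4) wait for the initial resync to finish before making a filesystem;
#    /proc/mdstat shows a "resync" line while the mirror is being built:
while grep -q resync /proc/mdstat 2>/dev/null; do
    sleep 10    # array still syncing; check again shortly
done
#        mke2fs /dev/md0
# 5) reboot; the autostart code should bring /dev/md0 up by itself.
```

The loop simply replaces the manual "cat /proc/mdstat until it looks done"
step from the recipe with an automated check.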
> 
> 
>  While I\'m here, I see a lot of questions in this list about how to get
> RAID working, but none from people reporting bugs in the software once it
> is up and running, or serious loss of data from nasty incidents etc. I
> want to install Linux-RAID in a production environment, but I cannot find
> any solid information about other people's experiences in such an
> environment.
> 
>  If anyone out there has any long-term experience with Linux-RAID in a
> production environment would they please let me know (either directly or
> via this list). Obviously, with the tools being updated constantly,
> 'long-term' might mean only a couple of weeks, but I still think it would
> be useful to get some of this information on the web. I am willing to
> put up a page summarising the results; on the other hand, if anyone
> already has such a page, could they please let me know where it is!
> 
>  Cheers,
>   Tony.