Re: A view from the other side...

1999-07-09 Thread Brian Leeper

> On Thu, 8 Jul 1999, Brian Leeper wrote:
> > If the drives are the same size, the following command works very well to
> > copy a partition table from one to the other:
> > 
> > dd if=/dev/sda of=/dev/sdb bs=1024 count=5
> I am curious to know if this can help in creating a RAID1 mirror from an
> existing single disk setup?  I would like to mirror the drive on my server
> without having to reinstall anything.

About the only way I know of to do this, though I haven't tried it and don't
know if it works, is to create the mirror in degraded mode, copy the data
from your existing drive to it, then add your existing drive to the mirror
and resync.
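
An untested, rough sketch of that sequence with raidtools 0.90 (the device
names, chunk-size and data path below are only placeholders):

  # /etc/raidtab -- the still-mounted original disk is marked failed-disk
  raiddev /dev/md0
      raid-level            1
      nr-raid-disks         2
      nr-spare-disks        0
      chunk-size            4
      persistent-superblock 1
      device                /dev/hdc1
      raid-disk             0
      device                /dev/hda1
      failed-disk           1

  mkraid /dev/md0                    # the array comes up in degraded mode
  mke2fs /dev/md0
  mount /dev/md0 /mnt
  cp -a /home/. /mnt                 # copy the data from the existing drive
  raidhotadd /dev/md0 /dev/hda1      # add the old drive and let it resync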

I'm sure someone else has a better idea about this than I do..

Brian




raid-problems after sync

1999-07-09 Thread Peter Bartosch

hello

After a reboot (caused by a power failure) my RAID was checked with ckraid and
brought back into sync, but e2fsck says that the md device partition has
zero length?

The problem is that my /usr, /home and /var reside on the md device.

I used the md-tools from Debian 2.0 (with a self-built 2.2.6-ac2 kernel;
the crash happened before switching to 2.2.7 with raidtools 0.90 from April).


any hints?


peter



Re: Sftwr Raid w/Redhat 6.0

1999-07-09 Thread Alvin Oga


hi jim

> I'm trying to set up RAID 1 in software using Redhat 6.0 with 2 identical
> EIDE drives (13GB) as hda & hdc, but can't get it to work.  Is there any
> definitive doc somewhere that describes how to do this with the current
> kernel?

http://ostenfeld.dk/~jakob/Software-RAID.HOWTO

I use linux-2.2.6 or 2.2.10 for raid0/1... w/ rh-6.0

have fun
alvin
http://www.Linux-Consulting.com/Raid/Docs
- have running debian-2.2, rh-6.0, slack-4.0, suse-6.1, cald-2.2, pht, turbocluster soon



Sftwr Raid w/Redhat 6.0

1999-07-09 Thread J. Handsel

I'm trying to set up RAID 1 in software using Redhat 6.0 with 2 identical EIDE
drives (13GB) as hda & hdc, but can't get it to work.  Is there any definitive
doc somewhere that describes how to do this with the current kernel?

Thanks . . .

jim



Re: linux-raid 0.9 on SUSE-Linux 6.1

1999-07-09 Thread Mark Ferrell


I installed the raid 0.90 tools and patched the 2.2.6 kernel.  Though I
did it from source... don't know if that matters.  Of course, getting
YaST to believe its root fs was /dev/md0 is a completely different story.

Schackel, Fa. Integrata, ZRZ DA wrote:

> Hello everybody,
>
> I'm using SUSE Linux 6.1 with kernel 2.2.5.
> Shipped with SUSE is mdtools 0.42.
>
> So I downloaded the 0.90 rpm package.
> I installed it, and by calling any raid tool
> I get a segmentation fault.
>
> Is there anybody who has managed this problem
> and could provide me any help?
>
> Thx
> Barney



Re: FAQ

1999-07-09 Thread Lawrence Dickson

These questions are from the point of view of 0.90 or higher
(i.e. RH 6.0).
- How do you recover a RAID1 or a RAID5 with a bad disk when
you have no spares, i.e. how do you hotremove and hotadd? Please
go through it step by step because many paths seem to lead to
hangs.
- How do you recover a RAID1 or a RAID5 when you do have a
spare? Does it or can it work automatically?
- How do you keep your raidtab file sync'd with your actual
RAID when persistent-superblock is 1? Is there a translator 
for the numerical values, i.e. for parity-algorithm, found in
/proc/mdstat?
   Thanks,
   Larry Dickson
   Land-5 Corporation


At 12:06 PM 7/9/99 +0100, you wrote:
>It strikes me that this list desperately needs a FAQ. I'm off on holiday for the
>next two weeks, but unless someone else wants to volunteer, I'm willing to put
>one together when I get back. If people would like me to do this, I would
>welcome suggestions for questions to go in the FAQ.
>
>Cheers,
>
>
>Bruno Prior [EMAIL PROTECTED]
>



[Fwd: backup/redundancy solutions]

1999-07-09 Thread Jonathan F. Dill

"Jonathan F. Dill" wrote:
> 
> Gordon Henderson wrote:
> >
> > So does no-one apart from me use Amanda?
> >
> > I've been using it for many years on different systems and it's never
> > let me down. It emails me every day with a report and to remind me to
> > change tapes if needed.
> >
> > I have 2 main servers here, one running Solaris, the other Linux, both
> > with DLT tapes (The Solaris has a DLT4000 - 40GB compressed, the Linux
> 
> I use AMANDA--that is the "nightly backup" that I mentioned in my other
> post.  I have about 70 SGI workstations with about 500 GB disk space of
> which about 300 GB is actually used.  The capacity of the 10h works out
> to about 126 GB using AMANDA gzip compression and the 8505-XL hardware
> compression turned off (I tried with hw compression turned on and I
> actually got LESS data on each tape since the data was already gzipped).
> 
> AMANDA takes care of figuring out what level backup to do when, and once
> you've got the cycle set up, which tape to reuse when, so I can actually
> keep all 300 GB backed up nightly on the "puny" 10h, with level-0
> backups at least once a week.  Recently, I acquired an Exabyte Mammoth
> to add to the setup, but I haven't received the tapes that I ordered for
> it yet.  The AMANDA check program runs once a day via a cron job and
> sends me e-mail if there are any problems that I need to fix before the
> run, otherwise it's completely automated.  There is no bullshit about
> trying to figure out what time to schedule what backup, or having to
> type in any commands to do it, because all of that is figured out and
> optimized by the AMANDA scheduler.
> 
> --
> "Jonathan F. Dill" ([EMAIL PROTECTED])
> CARB Systems and Network Administrator
> Home Page:  http://www.umbi.umd.edu/~dill

-- 
"Jonathan F. Dill" ([EMAIL PROTECTED])
CARB Systems and Network Administrator
Home Page:  http://www.umbi.umd.edu/~dill



Re: A view from the other side...

1999-07-09 Thread Mike Frisch

On Thu, 8 Jul 1999, Brian Leeper wrote:

> If the drives are the same size, the following command works very well to
> copy a partition table from one to the other:
> 
> dd if=/dev/sda of=/dev/sdb bs=1024 count=5

I am curious to know if this can help in creating a RAID1 mirror from an
existing single disk setup?  I would like to mirror the drive on my server
without having to reinstall anything.

Thanks,

Mike.

-- 
===
  Mike Frisch
  Software Engineer
  Hummingbird Communications Ltd.   Toronto, Ontario, CANADA

Disclaimer:  I speak for myself, not my employer.



Re: 2.2.9 Patch

1999-07-09 Thread MadHat

MadHat wrote:
> 
> I did all the fixes you have been talking about and made a newer
> patch.   I just used it on a new system and it compiles and works for
> me.
> 
> http://www.unspecific.com/pub/linux/raid/raid-0145-2.2.9-patch.gz
> 

Same patch works for 2.2.10

I will not guarantee anything.  This is just what has been discussed.
I am running this on a few of my boxes and will report any problems I
see.


-- 
  Lee Heath (aka MadHat) 
"The purpose of life is not to see how long you can keep doing it. It's 
to see how well you can do it. There are things worse than death, and 
chief among them is living without honor."  -- Woodchuck, DC-stuff list



Re: mkraid problems (v0.90/2.2.10/RedHat)

1999-07-09 Thread Luca Berra

On Thu, Jul 08, 1999 at 01:00:42PM -0700, Zack Hobson wrote:
> Hello RAID hackers,
> 
> RedHat 6.0 w/ 2.2.10 kernel (compiled with RAID-1 support)
> raidtools 0.90 compiled from distributed source (ie, non-RedHat)
raidtools 0.90 does not work with stock kernels:
either you use the old raidtools or you patch the kernel.
But...
the kernel patches do not work with kernels > 2.2.7
(yes, I know you can modify a couple of lines, but even though I do it
on my own boxes, I will not suggest anybody take the risk).

Besides that, 2.2.10 has the infamous fs corruption bug.

I'd suggest downgrading to the last 'stable' RedHat kernel.

Luca

P.S. (for Felix) You can build an N-way mirror (N>2) for
added redundancy. Also, please quote only relevant parts
of a message when replying.

-- 
Luca Berra -- [EMAIL PROTECTED]
Communications Media & Services S.r.l.



Linux root mirror recipe (repost)

1999-07-09 Thread A James Lewis


I posted this a while back and I think some people found it useful... so
here goes again.

Since I wrote this, however, I have decided that it makes sense, where
possible, to use md device numbers that match the partitions they are built
from, for ease of maintenance.

So a mirror built from sda2 and sdb2 would be md2, even if that isn't the
next available md number.  Of course this isn't always going to be possible,
but it makes things simpler if you can.

A.J. ([EMAIL PROTECTED])
Sometimes you're ahead, sometimes you're behind.
The race is long, and in the end it's only with yourself.

-- Forwarded message --
Date: Fri, 09 Jul 1999 12:49:10 +0100
From: A James Lewis <[EMAIL PROTECTED]>
To: [EMAIL PROTECTED]
Subject: Linux root mirror

This guide assumes that there are two disks hda & hdc, and that "/" is
/dev/hda2

It also assumes that /boot (The kernel) is NOT on /.  /boot requires
special handling but I suggest simply maintaining a copy on both disks
until LILO is raid aware.

This example was done with Martin Bene's failed disk patch and raidtools
snapshot 19990421 (+kernel patches) and Linux 2.2.6.  All of which must be
installed before you start!

1. Create the "raidtab" file. (Note: the deliberately failed disk is the currently mounted "/".)

raiddev /dev/md0
  raid-level            1
  chunk-size            64
  nr-raid-disks         2
  nr-spare-disks        0

  device                /dev/hdc2
  raid-disk             0
  device                /dev/hda2
  failed-disk           1

2. run mkraid.

DESTROYING the contents of /dev/md0 in 5 seconds, Ctrl-C if unsure!
handling MD device /dev/md0
analyzing super-block
disk 0: /dev/hdc2, 700560kB, raid superblock at 700480kB
disk 1: /dev/hda2, failed

3. mke2fs the new array

mke2fs 1.14, 9-Jan-1999 for EXT2 FS 0.5b, 95/08/09
Linux ext2 filesystem format
Filesystem label=
175440 inodes, 700480 blocks
35024 blocks (5.00%) reserved for the super user
First data block=1
Block size=1024 (log=0)
Fragment size=1024 (log=0)
86 block groups
8192 blocks per group, 8192 fragments per group
2040 inodes per group
Superblock backups stored on blocks:
8193, 16385, 24577, 32769, 40961, 49153, 57345, 65537, 73729, 81921,
90113, 98305, 106497, 114689, 122881, 131073, 139265, 147457, 155649,
163841, 172033, 180225, 188417, 196609, 204801, 212993, 221185, 229377,
237569, 245761, 253953, 262145, 270337, 278529, 286721, 294913, 303105,
311297, 319489, 327681, 335873, 344065, 352257, 360449, 368641, 376833,
385025, 393217, 401409, 409601, 417793, 425985, 434177, 442369, 450561,
458753, 466945, 475137, 483329, 491521, 499713, 507905, 516097, 524289,
532481, 540673, 548865, 557057, 565249, 573441, 581633, 589825, 598017,
606209, 614401, 622593, 630785, 638977, 647169, 655361, 663553, 671745,
679937, 688129, 696321

Writing inode tables: done
Writing superblocks and filesystem accounting information: done

4. Change root to /dev/md0 in /etc/fstab.

5. Copy filesystem to new array, mounted on /mnt.

moo:/# for i in `ls | grep -v mnt | grep -v proc`
> do
> cp -dpR $i /mnt
> done

6. Edit /etc/lilo.conf on the old copy to specify "root=/dev/md0" (the new
array); see the lilo.conf sketch at the end of this recipe.

7. Run lilo to reinstall the bootblock

moo:/# lilo
Added Linux *
moo:/#

8. Use fdisk to mark the new partition as type "0xfd" so it can autostart.

9. Reboot

10. Pray, time passes.

11. Run "df" to ensure that your new array is indeed mounted as "/".

12. Edit lilo.conf (you have an old copy now!) to refer to your array.
(You could have edited this before booting, but I think it's better to do
it now, since you are making it match what your system looks like now, not
what you think it might look like in the future.  You can predict the
future if you like once you become an expert at this.)

13. Run lilo again to reinstall the bootblock, and ensure it doesn't error!
If you get an error, return to step 11.

14. mkdir /proc & /mnt (You excluded these earlier to avoid recursive
copy)

15. mount /proc

16. raidhotadd /dev/md0 /dev/hda2 (Add your old root volume to the
array... like ODS/SDS metareplace.)

17. cat /proc/mdstat and see your reconstruction in progress...

moo:~# more /proc/mdstat
Personalities : [raid0] [raid1] [raid5]
read_ahead 1024 sectors
md0 : active raid1 hda2[2] hdc2[0] 700480 blocks [2/1] [U_] recovery=2%
finish=17.3min
unused devices: <none>

18. Use fdisk to mark old "/" partition as type "0xfd" so it can also
autostart.

19. Wait until resync complete and reboot.

20. Congratulations, you now have a mirrored "/".

Now think about swap and /boot...
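
For reference, here is a minimal lilo.conf of the kind steps 6 and 12 aim at
(the kernel image path and label are placeholders for whatever your system
already uses):

  boot=/dev/hda
  image=/boot/vmlinuz
          label=linux
          root=/dev/md0
          read-only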





Re: Resync Priority.

1999-07-09 Thread A James Lewis


Although your point is very valid, the stability of the filesystem is not
in question: although the array is running in "degraded" mode, the
filesystem's stability is assured by fsck rather than by the resync
process.  Surely your data is more important than your OS.

If a second disk were to fail before the resync was complete, then only
the resynced data would be recoverable... so the OS is probably the least
important... although it should be small enough that resyncing it is
trivial.

I'd be interested to know what answer you get... maybe it's based on the
device numbers, although the fact that it starts on md2 suggests that this
is not the case.

I think the most likely explanation is that it is related to the order in
which the devices were either created or detected... if detected, then it's
probably the device with the most recent timestamp in its superblock,
which is probably the one that took longest to unmount.

James

On Fri, 9 Jul 1999, Matt Coene wrote:

> 
> Quick question concerning resyncing / regeneration...
> 
> I have 4 9.1GB U2W SCSI drives, each with 6 partitions, in a setup as
> follows:
> 
> /dev/md0 (sda2, sdb2, sdc2, sdd2)
> /dev/md1 (sda3, sdb3, sdc3, sdd3)
> /dev/md2 (sda5, sdb5, sdc5, sdd5)
> /dev/md3 (sda6, sdb6, sdc6, sdd6)
> /dev/md4 (sda7, sdb7, sdc7, sdd7)
> 
> plus the swap partitions.. sda1, sdb1, sdc1, sdd1...
> 
> Is there any way to set a preference on which md devices to rebuild first
> in the event of a failure?  I have been testing the odd power-failure
> scenario, and it seems that when the system comes back up it tries to
> resync /dev/md2 first, which for me is a massive database partition that
> takes over half an hour to regenerate.  It would be more important to check
> the stability of the OS and make sure the system comes back online properly
> first, then go back and recover /dev/md2.
> 
> Any ideas?
> 
> Regards,
> 
> 
> Matt.C.
> -Systems Admin
> Alliance
> 
> 

A.J. ([EMAIL PROTECTED])
Sometimes you're ahead, sometimes you're behind.
The race is long, and in the end it's only with yourself.



Re: FAQ

1999-07-09 Thread Kelley Spoon

On Fri, 9 Jul 1999, Marc Mutz wrote:
> Bruno Prior wrote:
> > 
> > It strikes me that this list desperately needs a FAQ. I'm off on holiday for the
> > next two weeks, but unless someone else wants to volunteer, I'm willing to put
> > one together when I get back. If people would like me to do this, I would
> > welcome suggestions for questions to go in the FAQ.
 
> Whoever volunteers: The first answer should summarize which version of
> {md,raid}tools works with which kernel, patched with{,out} patch XY.
> Can't think of a question for that, though.

Hrm.  "I have kernel x.y.z and I'm trying to use raidtools a.b.c,
but it isn't working.  What's up with that?"  Maybe also include a few
more subquestions like "I'm using the version of raidtools that came with
my Linux distro, but it doesn't seem to be working."

> IMO it is very necessary to clear the fog that has laid itself across
> raid-with-linux in the last few weeks or so.

Having inherited the responsibility of maintaining a kernel RPM after
2.2.8 and discovering that software raid stuff no longer worked, I'd
have to agree.  ;-)

--
Kelley Spoon  <[EMAIL PROTECTED]>
Sic Semper Tyrannis.



Resync Priority.

1999-07-09 Thread Matt Coene


Quick question concerning resyncing / regeneration...

I have 4 9.1GB U2W SCSI drives, each with 6 partitions, in a setup as
follows:

/dev/md0 (sda2, sdb2, sdc2, sdd2)
/dev/md1 (sda3, sdb3, sdc3, sdd3)
/dev/md2 (sda5, sdb5, sdc5, sdd5)
/dev/md3 (sda6, sdb6, sdc6, sdd6)
/dev/md4 (sda7, sdb7, sdc7, sdd7)

plus the swap partitions.. sda1, sdb1, sdc1, sdd1...

Is there any way to set a preference on which md devices to rebuild first
in the event of a failure?  I have been testing the odd power-failure
scenario, and it seems that when the system comes back up it tries to
resync /dev/md2 first, which for me is a massive database partition that
takes over half an hour to regenerate.  It would be more important to check
the stability of the OS and make sure the system comes back online properly
first, then go back and recover /dev/md2.

Any ideas?

Regards,


Matt.C.
-Systems Admin
Alliance




Re: FAQ

1999-07-09 Thread Marc Mutz

Bruno Prior wrote:
> 
> It strikes me that this list desperately needs a FAQ. I'm off on holiday for the
> next two weeks, but unless someone else wants to volunteer, I'm willing to put
> one together when I get back. If people would like me to do this, I would
> welcome suggestions for questions to go in the FAQ.
> 
Whoever volunteers: The first answer should summarize which version of
{md,raid}tools works with which kernel, patched with{,out} patch XY.
Can't think of a question for that, though.

IMO it is very necessary to clear the fog that has laid itself across
raid-with-linux in the last few weeks or so.

Marc

-- 
Marc Mutz <[EMAIL PROTECTED]>http://marc.mutz.com/
University of Bielefeld, Dep. of Mathematics / Dep. of Physics

PGP-keyID's:   0xd46ce9ab (RSA), 0x7ae55b9e (DSS/DH)



RE: Problem implementing raid-1 filesystems

1999-07-09 Thread Bruno Prior

Here's your problem:

> request_module[md-personality-3]: Root fs not mounted

It looks like you are using a RedHat stock kernel, not one you built yourself.
RedHat's kernels use modular raid support, not built-in. The modules sit in
/lib/modules which is on root and, in your case therefore, on /dev/md8. So when
the kernel tries to start the raid arrays, it looks for the module to provide
raid support, but can't find it because it is on a raid device, which can't be
started because the module to support it isn't available. Catch-22.

There are two solutions. Build raid support into a new kernel (more stable). Or
use an initrd that includes raid support. To do this, do "mkinitrd --with raid1
/boot/initrd 2.2.5-15". Then edit /etc/lilo.conf to add an initrd=/boot/initrd
for the relevant image section. Now run lilo to make sure you use the new initrd
from the next reboot.
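
Putting that together, it would look roughly like this (the image path and
label below are placeholders for whatever your lilo.conf already uses; only
the initrd line is new, and /dev/md8 is your root array):

  mkinitrd --with raid1 /boot/initrd 2.2.5-15

  # in /etc/lilo.conf, in the relevant image section:
  image=/boot/vmlinuz-2.2.5-15
          label=linux
          initrd=/boot/initrd
          root=/dev/md8
          read-only

  lilo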

I think the messages in syslog are from when you mkraid'ed the arrays, not from
the reboot (hence the time discrepancy).

I still think this system is too "belt-and-braces" and an unnecessary waste of
disk space, by the way.

Cheers,


Bruno Prior [EMAIL PROTECTED]



FAQ

1999-07-09 Thread Bruno Prior

It strikes me that this list desperately needs a FAQ. I'm off on holiday for the
next two weeks, but unless someone else wants to volunteer, I'm willing to put
one together when I get back. If people would like me to do this, I would
welcome suggestions for questions to go in the FAQ.

Cheers,


Bruno Prior [EMAIL PROTECTED]



RE: linux-raid 0.9 on SUSE-Linux 6.1

1999-07-09 Thread Bruno Prior

> I'm using SUSE Linux 6.1 with kernel 2.2.5.
> Shipped with SUSE is mdtools 0.42.
>
> So I downloaded the 0.90 rpm package.
> I installed it, and by calling any raid tool
> I get a segmentation fault.

mdtools 0.42 doesn't require a kernel patch to work, so I assume the SuSE kernel
doesn't include the new raid patch. You need to rebuild the kernel with the
raid0145 patch, which you can get from
http://www.kernel.org/pub/linux/daemons/raid/alpha. You might be
better off getting the matching raidtools from there as well, instead of using
the rpm.
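
In outline, that rebuild looks something like this (the patch file name is
only a placeholder; use whichever patch on that site matches your kernel
version):

  cd /usr/src/linux
  zcat raid0145-19990421-2.2.6.gz | patch -p1   # apply the raid0145 patch
  make menuconfig       # enable the RAID levels you need under the md options
  make dep bzImage modules modules_install
  # install the new bzImage, point lilo.conf at it, run lilo and reboot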

Alternatively, you could use mdtools 0.42 or raidtools 0.50 and not bother with
the patch. But you would probably be better off with the latest raid support.

Cheers,


Bruno Prior [EMAIL PROTECTED]



RE: mkraid problems (v0.90/2.2.10/RedHat)

1999-07-09 Thread Bruno Prior

> I can't figure out why mkraid is aborting. No messages show up in the
> syslog, and I get what looks like a typical response from /proc/mdstat

You are missing the persistent-superblock lines in your raidtab. I thought this
shouldn't matter, as it should default to "persistent-superblock 1", but mkraid
is very sensitive to the exact order of items in raidtab, so maybe try putting
it in to see if that helps.
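
For example, for the first array (the second would get the same extra line;
placing it between nr-spare-disks and chunk-size is only the usual convention,
and since mkraid is picky about ordering you may need to experiment):

  raiddev /dev/md0
      raid-level              1
      nr-raid-disks           3
      nr-spare-disks          0
      persistent-superblock   1
      chunk-size              4

      device                  /dev/sda2
      raid-disk               0
      device                  /dev/sdb2
      raid-disk               1
      device                  /dev/sdc2
      raid-disk               2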

Also, did you follow the advice given recently on this list about using kernel
versions > 2.2.7 with raid? Check the archives
(http://www.kernelnotes.org/lnxlists/linux-raid/) for the last couple of weeks
for a lot of advice about the complications with using 2.2.8+ kernels.

Cheers,


Bruno Prior [EMAIL PROTECTED]



RE: mkraid problems (v0.90/2.2.10/RedHat)

1999-07-09 Thread Bruno Prior

> raid-level 1 with 3 disks?
> You have to use raid-level 5 (or 4).

No you don't. You can have as many mirrors in a RAID-1 mirror set as you want.
The setup described will protect against the simultaneous failure of two disks
(i.e. the failure of a second disk before you are able to replace the first).
But you could argue the case that this was an unnecessary level of paranoia. It
depends on the circumstances.

Cheers,


Bruno Prior [EMAIL PROTECTED]



RE: raidtools compilation problem

1999-07-09 Thread Bruno Prior

> I run Redhat 6.0/kernel 2.2.5-15 and installed raidtools-0.50b.

I believe RH6.0 comes with raidtools-0.90, so there is no point trying to
install raidtools-0.50b. Just set up your /etc/raidtab for the configuration
you want, and then run mkraid. For rebooting, you may want to include raid support
in your initrd. Do "mkinitrd --with raid0 /boot/initrd 2.2.5-15", replacing
raid0 with whatever raid level you need. Then edit /etc/lilo.conf to make sure
the initrd line is pointing to /boot/initrd. Now run lilo to make sure you use
the new initrd from the next reboot.

As long as you don't want / on raid, that's all you should need to do. See the
Software-RAID HOWTO in the raidtools package or at
http://ostenfeld.dk/~jakob/Software-RAID.HOWTO/ for more details.

> p.s. Where can I get the higher version of raidtools than 0.50b?

http://www.kernel.org/pub/linux/daemons/raid/alpha

Cheers,


Bruno Prior [EMAIL PROTECTED]



linux-raid 0.9 on SUSE-Linux 6.1

1999-07-09 Thread Schackel, Fa. Integrata, ZRZ DA

Hello everybody,

I'm using SUSE Linux 6.1 with kernel 2.2.5.
Shipped with SUSE is mdtools 0.42.

So I downloaded the 0.90 rpm package.
I installed it, and by calling any raid tool
I get a segmentation fault.

Is there anybody who has managed this problem
and could provide me any help?

Thx
Barney



Problem implementing raid-1 filesystems

1999-07-09 Thread Joel Fowler

Software levels: Redhat 6.0, kernel 2.2.5-22, raid-tools-0.90

I just configured 5 raid-1 filesystems:
md5  /usr
md6  /home
md7  /var
md8  /
md10 /var/lib/mysql

Configuration was performed from a separate config system on hdb
 (same filesystems as system under construction). 

 The following procedure was used:

1. defined partitions on hda and hdc
2. copied dd'ed hdb:/boot to hda and hdc
3. constructed hdb:/etc/raidtab (see below)
4. ran:  mkraid --force /dev/md5  (then: 6, 7, 8, 10)
5. ran:  mke2fs /dev/md5  (then: 6, 7, 8, 10)
6. modified /etc/lilo.conf to point root=/dev/md8 (mbr=hda)
7. ran:  lilo -v
8. copied hdb:filesystems to corresponding md filesystems
9. modified: md8:/etc/fstab to point to md filesystems
10. ran:  fdisk on /dev/hda -and- /dev/hdc  -- changed raid partition types
to 'fd'
11. booted newly constructed md system

Received following messages during boot:
==

Messages visible on console display:

md6 stopped.
considering hdc5 ...
  adding hdc5 ...
  adding hda5 ...
created md5
bind
bind
running: 
now!
hdc5's event counter: 0002
hda5's event counter: 0002
request_module[md-personality-3]: Root fs not mounted
do_md_run() returned -22
unbind
export_rdev(hdc5)
unbind
export_rdev(hda5)
md5 stopped.
... autorun DONE.
Bad md_map in ll_rw_block
EXT2-fs: unable to read superblock
Bad md_map in ll_rw_block
isofs_read_super: bread failed, dev=09:08, iso_blknum=16, block=32
Kernel panic: VFS: Unable to mount root fs on 09:08




From /var/log/messages:
Note: I'm not really sure about these messages - the time doesn't seem right;
however, I only attempted to boot the system once.

Jul  8 21:39:13 iServ kernel: hdc5's event counter:  
Jul  8 21:39:13 iServ kernel: hda5's event counter:  
Jul  8 21:39:13 iServ kernel: md: md5: raid array is not clean -- starting
background reconstruction 
Jul  8 21:39:14 iServ kernel: raid1 personality registered 
Jul  8 21:39:14 iServ kernel: md5: max total readahead window set to 128k 
Jul  8 21:39:14 iServ kernel: md5: 1 data-disks, max readahead per
data-disk: 128k 
Jul  8 21:39:14 iServ kernel: raid1: device hdc5 operational as mirror 1 
Jul  8 21:39:14 iServ kernel: raid1: device hda5 operational as mirror 0 
Jul  8 21:39:14 iServ kernel: raid1: raid set md5 not clean; reconstructing
mirrors 
Jul  8 21:39:14 iServ kernel: raid1: raid set md5 active with 2 out of 2
mirrors 
Jul  8 21:39:14 iServ kernel: md: updating md5 RAID superblock on device 
Jul  8 21:39:14 iServ kernel: hdc5 [events: 0001](write) hdc5's sb
offset: 610368 
Jul  8 21:39:14 iServ kernel: md: syncing RAID array md5 
Jul  8 21:39:14 iServ kernel: md: minimum _guaranteed_ reconstruction
speed: 100 KB/sec. 
Jul  8 21:39:14 iServ kernel: md: using maximum available idle IO bandwith
for reconstruction. 
Jul  8 21:39:14 iServ kernel: md: using 128k window. 
Jul  8 21:39:14 iServ kernel: hda5 [events: 0001](write) hda5's sb
offset: 610368 
Jul  8 21:39:14 iServ kernel: . 
Jul  8 21:39:41 iServ kernel: bind 
Jul  8 21:39:41 iServ kernel: bind 
Jul  8 21:39:41 iServ kernel: hdc6's event counter:  
Jul  8 21:39:41 iServ kernel: hda6's event counter:  
Jul  8 21:39:41 iServ kernel: md: md6: raid array is not clean -- starting
background reconstruction 
Jul  8 21:39:41 iServ kernel: md6: max total readahead window set to 128k 
Jul  8 21:39:41 iServ kernel: md6: 1 data-disks, max readahead per
data-disk: 128k 
Jul  8 21:39:41 iServ kernel: raid1: device hdc6 operational as mirror 1 
Jul  8 21:39:41 iServ kernel: raid1: device hda6 operational as mirror 0 
Jul  8 21:39:42 iServ kernel: raid1: raid set md6 not clean; reconstructing
mirrors 
Jul  8 21:39:42 iServ kernel: raid1: raid set md6 active with 2 out of 2
mirrors 
Jul  8 21:39:42 iServ kernel: md: updating md6 RAID superblock on device 
Jul  8 21:39:42 iServ kernel: hdc6 [events: 0001](write) hdc6's sb
offset: 513984 
Jul  8 21:39:42 iServ kernel: md: serializing resync, md6 has overlapping
physical units with md5! 

Jul  8 21:39:42 iServ kernel: hda6 [events: 0001](write) hda6's sb
offset: 513984 
Jul  8 21:39:42 iServ kernel: . 
Jul  8 21:40:24 iServ kernel: bind 
Jul  8 21:40:24 iServ kernel: bind 
Jul  8 21:40:24 iServ kernel: hdc7's event counter:  
Jul  8 21:40:24 iServ kernel: hda7's event counter:  
Jul  8 21:40:24 iServ kernel: md: md7: raid array is not clean -- starting
background reconstruction 
Jul  8 21:40:24 iServ kernel: md7: max total readahead window set to 128k 
Jul  8 21:40:24 iServ kernel: md7: 1 data-disks, max readahead per
data-disk: 128k 
Jul  8 21:40:24 iServ kernel: raid1: device hdc7 operational as mirror 1 
Jul  8 21:40:24 iServ kernel: raid1: device hda7 operational as mirror 0 
Jul  8 21:40:24 iServ kernel: raid1: raid set md7 not clean; reconstructing
mirrors 
Jul  8 21:40:24 iServ kernel: raid1: raid set md7 active with 2 out of 2
mirrors 
Jul  8 21:40:24 iServ kernel: md: updating md7 RAID superblock on device 
Jul  8 21:40

Re: mkraid problems (v0.90/2.2.10/RedHat)

1999-07-09 Thread Felix Egli


> My /etc/raidtab looks like this:
> raiddev /dev/md0
> raid-level  1
> nr-raid-disks   3
> nr-spare-disks  0
> chunk-size  4
> 
> device  /dev/sda2
> raid-disk   0
> device  /dev/sdb2
> raid-disk   1
> device  /dev/sdc2
> raid-disk   2
> 
> raiddev /dev/md1
> raid-level  1
> nr-raid-disks   3
> nr-spare-disks  0
> chunk-size  4
> 
> device  /dev/sda4
> raid-disk   0
> device  /dev/sdb4
> raid-disk   1
> device  /dev/sdc4
> raid-disk   2

raid-level 1 with 3 disks?
You have to use raid-level 5 (or 4).

-Felix
-- 
Felix Egli  | E-Mail: [EMAIL PROTECTED]
Communication Systems Inc. AG   | Phone:  +41-1-926 61 42
GoldNet | Fax:+41-1-926 61 45
Grundstrasse 66, CH-8712 Stäfa, Switzerland