You don't even need that; just do this:

1. echo 'DEVICE /dev/hd*[0-9] /dev/sd*[0-9]' > mdadm.conf
2. mdadm --examine --scan --config=mdadm.conf

This will search all the partitions and report the relevant software RAID information:

ARRAY /dev/md/4 level=raid5 metadata=1 num-devices=5 UUID=7f453e1889:3e4dd96e:8103724c:724f49 name=4
ARRAY /dev/md/3 level=raid5 metadata=1 num-devices=4 UUID=b3d7134904:52828f3d:0f0245a2:e8226d name=3
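
If you'd rather not retype those lines, one convenient shortcut (assuming
you are still in the directory holding mdadm.conf) is to append the scan
output directly to the file and edit it there:

mdadm --examine --scan --config=mdadm.conf >> mdadm.conf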

Then edit these lines so they look like this:

ARRAY /dev/md3 level=raid5 metadata=1 num-devices=4 UUID=b3d7134904:52828f3d:0f0245a2:e8226d name=3
ARRAY /dev/md4 level=raid5 metadata=1 num-devices=5 UUID=7f453e1889:3e4dd96e:8103724c:724f49 name=4
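
The only substantive change is renaming /dev/md/N to /dev/mdN; if you
like, a sed one-liner such as the following (a sketch, adjust to taste)
does the renaming, though you may still want to reorder the lines by hand:

sed -i 's|/dev/md/|/dev/md|' mdadm.conf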

Then run:

1. mdadm -As /dev/md3
2. mdadm -As /dev/md4

p34:/etc/mdadm# mdadm -As /dev/md3
mdadm: /dev/md3 has been started with 4 drives.
p34:/etc/mdadm# mdadm -As /dev/md4
mdadm: /dev/md4 has been started with 5 drives.

Done.
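
Since -A is short for --assemble and -s for --scan, mdadm reads those
ARRAY lines from the config file to find the member devices. To have the
arrays assemble automatically at boot, copy the edited file to wherever
your distribution expects it (typically /etc/mdadm.conf, or
/etc/mdadm/mdadm.conf on Debian-style systems), and you can verify the
result with:

cat /proc/mdstat
mdadm --detail /dev/md3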


On Wed, 13 Jun 2007, Thorsten Wolf wrote:

Dear Neil,
hi to everyone else.

I've been reading a lot about the mdadm tools lately. I believe what
I want to do is possible, but I haven't found the right documentation
yet.

Is it possible to re-mount the RAID I have from another Linux
installation? Do I need to save more than the output of
'mdadm --detail --scan > /etc/mdadm.conf' to another location
(so I can re-import it)?

Regards,

Thorsten

On Tuesday June 12, [EMAIL PROTECTED] wrote:
Hello everyone.

I've got a SLES9 SP3 system running, and I've been quite happy with
it so far.

Recently, I created a RAID-5 spanning four disks on our company
server. It runs quite nicely and we're happy with that too. I created
that RAID using the SLES mdadm package (1.4, I believe).

After discovering that there is a much newer mdadm out there (2.6.1),
I decided to upgrade. It went just fine; the RAID is still running at
120 MB/sec.

Then I added a disk to the RAID, which went fine as well... BUT:

The added disk /dev/sda1 shows up in /proc/mdstat, but does not have
the spare "(S)" flag.

Plus, --grow doesn't work. I get the error

mdadm: /dev/md0: Cannot get array details from sysfs

which has been discussed before. Could this be caused by the
2.6.5-7.2xx kernel? Any ideas?

Yes. All of your issues are caused by using a 2.6.5-based kernel.
However, even upgrading to SLES10 would not get you raid5 --grow;
that came a little later. You would need to compile a mainline
kernel or wait for SLES11.
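
(A quick way to check what you are running: uname -r prints the kernel
version and mdadm --version the tool version. RAID5 reshape support only
appeared in mainline around 2.6.17.)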

NeilBrown

--
Contact me on ICQ: 7656468
skype://sysfried

