Jack Schneider wrote:
> Bob Proulx wrote:
> > But, and this is an important but, did you previously add the new disk
> > array to the LVM volume group on the above array?  If so then you are
> > not done yet.  The LVM volume group won't be able to assemble without
> > the new disk.  If you did then you need to fix up LVM next.
>
> NO! I did NOT add /dev/sdb and /dev/sdd to the LVM. So that is not a
> problem. I was about to do that when the machine failed.
Oh good.  Then you are good to go.  Run these commands to stop the
arrays and to reassemble them with the new names:

  mdadm --stop /dev/md125
  mdadm --assemble /dev/md0 --update=super-minor /dev/sda1 /dev/sdc1
  mdadm --stop /dev/md126
  mdadm --assemble /dev/md1 --update=super-minor /dev/sda5 /dev/sdc5

Then try rebooting into the system.  I think at that point all should
be okay and it should boot up into the previous system.

> Bob, You cannot know how much I appreciate the time and effort you
> and others have given to this, hopefully a few more steps and all will
> be well.

I have my fingers crossed for you that it will all be okay.

> I have not done the things you have suggested above. I'll wait for your
> response and then go!!!

Please go ahead and run the above commands to rename the arrays and
then reboot into the previous system.  I believe that should work.
Hope so.  These things can be finicky though.

> One other thing I am bothered by, md0, md1 were built using mdadm
> v0.90, md2 was built with the current mdadm v 3.1.4. which changed
> the md names. Does this matter????

Yes.  I am a little worried about that problem too.  But we were at a
good stopping point and I didn't want to get ahead of things.

Let's assume that the above renaming of the raid arrays works and you
can boot into your system again.  Then what should be done about the
new disks?  Let me talk about the new disks, but hold off working this
part of the problem until you have the first part done.  Just do one
thing at a time.

  /dev/md127  /dev/sdb /dev/sdd  (465G)  as yet unformatted

  ARRAY /dev/md/Speeduke:2 metadata=1.2 name=Speeduke:2 UUID=91ae6046:969bad93:92136016:116577fd

This array was created using newer metadata.  I think that is going to
be a problem for Lenny/Squeeze.  It says 1.2, but Lenny/Squeeze uses
0.90.  (A major difference is where the metadata is located.  1.0 is
in a similar location to 0.90, but 1.1 and 1.2 use locations near the
start of the device.)
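In case it helps, here is the same stop/reassemble sequence wrapped in
a small dry-run script, so you can review exactly what would be
executed before doing it for real as root.  The `run` helper and the
`DRY_RUN` flag are just my own convention here, not anything mdadm
provides; the device names are the ones from this thread.

```shell
# Dry-run wrapper: commands are only printed until DRY_RUN is cleared.
DRY_RUN=yes
run() {
  if [ "$DRY_RUN" = yes ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

run mdadm --stop /dev/md125
run mdadm --assemble /dev/md0 --update=super-minor /dev/sda1 /dev/sdc1
run mdadm --stop /dev/md126
run mdadm --assemble /dev/md1 --update=super-minor /dev/sda5 /dev/sdc5

# After running it for real, check the result before rebooting:
run cat /proc/mdstat
```

Set DRY_RUN to anything other than "yes" (and run as root) when you
are satisfied the printed commands are what you intend.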
Plus, you assigned the entire drive (/dev/sdb) instead of using a
partition on it (/dev/sdb1).  I personally don't prefer that and
always set up using a partition instead of the whole disk.

I am not sure of the best course of action for the new disks.  I
suggest stopping the new array, partitioning the drives so each has a
partition instead of using the raw disk, and then recreating the
array using the newly created partitions.  Do that under your
(hopefully now booting) Squeeze system and then you are assured of
compatibility.

It is possible that because of the new metadata the metadata=1.2
array won't be recognized under Squeeze at all.  I don't know; I
haven't been in that situation yet.  That would actually be
convenient, because the drives would just look like raw disks again
without needing to stop the array, if it never got started.  Then you
could partition them and so forth.  The future is hard to see here.

So that is my advice.  If the new array is running, stop it:

  mdadm --stop /dev/md127

Then partition /dev/sdb into /dev/sdb1 and /dev/sdd into /dev/sdd1.
Then create the array using the new sdb1 and sdd1 partitions.  Then
decide how to make use of it.

Note that if you add the new disks to the LVM root volume group then
you also need to rebuild the initrd, or your system won't be able to
assemble the array at boot time and will fail to boot.  (Saying that
mostly for people who find this in the archives later.)

Bob
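For the archives, here is a hypothetical sketch of that
partition-then-recreate plan, again using a dry-run helper so nothing
destructive happens until you clear DRY_RUN.  Several things here are
my assumptions rather than anything stated in the thread: the
--metadata=0.90 choice (to stay compatible with the old mdadm on
Squeeze), RAID1 across the two disks, the /dev/md2 name, and the exact
parted invocations.  Adjust to taste before running for real.

```shell
# Dry-run wrapper: commands are only printed until DRY_RUN is cleared.
DRY_RUN=yes
run() {
  if [ "$DRY_RUN" = yes ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

# Stop the array built on the raw disks.
run mdadm --stop /dev/md127

# Give each disk a single full-size partition flagged for RAID.
# (Illustrative parted commands; double-check device names first!)
for disk in /dev/sdb /dev/sdd; do
  run parted -s "$disk" mklabel msdos
  run parted -s "$disk" mkpart primary 0% 100%
  run parted -s "$disk" set 1 raid on
done

# Recreate the mirror on the partitions, with old-style metadata so
# the Squeeze-era mdadm can assemble it (assumed RAID1 here).
run mdadm --create /dev/md2 --metadata=0.90 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdd1

# If this array later joins the root volume group, rebuild the initrd
# so it can be assembled at boot time (Debian):
run update-initramfs -u
```

The update-initramfs step is the "rebuild the initrd" note from above;
skip it if the new array never touches the root volume group.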