Re: Grub2 reinstall on raid1 system. Corrections!!!!!

2011-01-18 Thread Jack Schneider
On Mon, 17 Jan 2011 20:43:16 -0700
Bob Proulx b...@proulx.com wrote:

 Jack Schneider wrote:
  Bob Proulx wrote:
 mdadm --stop /dev/md125
 mdadm --assemble /dev/md0 --update=super-minor /dev/sda1 /dev/sdc1
  
 mdadm --stop /dev/md126
 mdadm --assemble /dev/md1 --update=super-minor /dev/sda5 /dev/sdc5
 
  Bob, a small glitch.  mdadm: /dev/sda1 exists but is not an md array.
  mdadm --stop was successful, before the above.
 
 If mdadm --stop was successful then it must have been an array before
 that point.  So that doesn't make sense.  Double check everything.
 
   mdadm --examine /dev/sda1
   mdadm --examine /dev/sdc1
   mdadm --detail /dev/md0
 
  It appears that a --create-like command is needed.  Looks like
  md125 is md0 overwritten somewhere...
 
 If you create an array it will destroy the data that is on the
 array.  Unless you want to discard your data you don't want to do
 that.  You want to assemble an array from the components.  That is
 an important distinction.
 
 You really want to be able to assemble the array.  Do so with one disk
 only if that is the only way (would need the mdadm forcing options to
 start an array without all of the components) and then add the other
 disk back in.  But if the array was up a moment before then it should
 still be okay.  So I am suspicious about the problem.  Poke around a
 little more with --examine and --detail first.  Something does not
 seem right.
 
  Additionally, maybe I'm in the wrong config.  Running from a
  sysrescuecd.  I do have a current Debian-AMD64-rescue-live CD,
  which I made this AM.
 
 That would definitely improve things.  Because then you will have
 compatible versions of all of the tools.
 
 Is your system amd64?
 
Yes, a Supermicro X7DAL-E M/B with dual XEON quad core 3.2 GHz
processors and 4 Seagate Barracuda drives.  8 GB of RAM.



  I need to find out what's there...  
  further:
  Can I execute the mdadm commands from a su out of a busybox
  prompt? 
 
 If you are in a busybox prompt at boot time then you are already root
 and don't need an explicit 'su'.  You should be able to execute root
 commands.  The question is whether the mdadm command is available at
 that point.  The reason for busybox is that it is a self-contained set
 of small unix commands.  'mdadm' isn't one of those and so probably
 isn't available.  Normally you can edit files and the like.  Normally
 I would mount and chroot to the system.  But you don't yet have a
 system.  So that is problematic at that point.
 
 Bob

This AM when I booted (I power down with init 0 each PM to save power &
hassle from S/O), the machine did not come up with the grub-rescue prompt.
It booted to the correct grub menu, then to BusyBox.  I am thinking it
goes to BusyBox because it can't find /var and/or /usr on the md1/sda5 LVM
partition.  I checked /proc/mdstat and lo & behold there was md1: active
with correct partitions, and md0: active, also with correct partitions...  I
must have been seeing md125 et al from only the sysrescuecd 2.0.0.  So
here I sit with a root prompt from BusyBox.  I checked mdadm --examine
for all known partitions and mdadm --detail /md0 & /md1, and all seems
normal and correct.  No errors. 

I seem to need a way of rerunning grub-install or update-grub to
fix this setup.  What say you?  I am thinking of trying to run the
/etc/grub.d scripts.

Jack









Re: Grub2 reinstall on raid1 system. Corrections!!!!!

2011-01-18 Thread Bob Proulx
Jack Schneider wrote:
 It booted to the correct grub menu, then to BusyBox. I am thinking it
 goes to BusyBox because it can't find /var and/or /usr on the md1/sda5 LVM
 partition.

Very likely.

 I checked /proc/mdstat and lo & behold there was md1: active
 with correct partitions, and md0: active, also with correct partitions...

That is good news to hear.  Because it should mean that all of your
data is okay on those disks.  That is always a comfort to know.

 So here I sit with a root prompt from BusyBox.  I checked mdadm
 --examine for all known partitions and mdadm --detail /md0 & /md1,
 and all seems normal and correct.  No errors.

Yeah!  :-)

 Both /etc/fstab and /etc/mtab show entries for /dev/md126..
 What the ... ?

That does seem strange.  Could the tool you used previously have
edited that file?

You said you were using /dev/md1 as an lvm volume for /var, /home,
swap and other.  As I read this it means you would only have /dev/md0
for /boot in your /etc/fstab.  Right?  Something like this from my
system:

  /dev/md0    /boot    ext2    defaults    0    2

Your /var, /home and swap would use the lvm, right?  So from my system
I have the following:

  /dev/mapper/v1-var   /var    ext3    defaults    0    2
  /dev/mapper/v1-home  /home   ext3    defaults    0    2

Those don't mention /dev/md1 (which showed up for you as /dev/md126)
at all.  They would only show up in the volume group display.

If you are seeing /dev/md126 in /etc/fstab then it is conflicting
information.  You will have to sort out the information conflict.  Do
you really have LVM in there?
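
One quick way to check, assuming the LVM2 tools are on whatever you
have booted (a hedged aside, not a prescription):

  pvs   # physical volumes -- e.g. /dev/md126 in this thread
  vgs   # volume groups -- should list the group holding /var, /home
  lvs   # logical volumes within each group

If pvs lists the md device then LVM really is in there and only the
fstab entries need sorting out.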

Certainly if the "/dev/md0 /boot" boot line is incorrect then you
should correct it.  Edit the file and fix it.  If your filesystem is
mounted read-only at that point you will need to remount it
read-write.

  mount -n -o remount,rw /
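
To sort out which information is stale, a sketch (assuming mdadm and
the usual util-linux tools are available) is to compare what is
actually running against what the files claim:

  cat /proc/mdstat         # arrays the kernel is running right now
  mdadm --detail --scan    # names and UUIDs as mdadm sees them
  grep -v '^#' /etc/fstab  # what the boot process will try to mount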

Bob




Re: Grub2 reinstall on raid1 system. Corrections!!!!!

2011-01-17 Thread Jack Schneider
On Sun, 16 Jan 2011 18:42:49 -0700
Bob Proulx b...@proulx.com wrote:

 Jack,
 
 With your pastebin information and the mdstat information (that last
 information in your mail and pastebins was critical good stuff) I
 found this old posting from you too:  :-)
 
   http://lists.debian.org/debian-user/2009/10/msg00808.html
 
 With all of that I deduce the following:
 
   /dev/md125 /dev/sda1 /dev/sdc1 (10G) root partition with no lvm
   /dev/md126 /dev/sda5 /dev/sdc5 (288G) LVM for /home, /var, swap, ...
   /dev/md127 /dev/sdb /dev/sdd (465G) as yet unformatted
 
 Jack, If that is wrong please correct me.  But I think that is right.
 

That is Exactly correct.


 The mdstat data showed that the arrays are sync'd.  The UUIDs are as
 follows.
 
   ARRAY /dev/md/125_0 metadata=0.90 UUID=e45b34d8:50614884:1f1d6a6a:d9c6914c
   ARRAY /dev/md/126_0 metadata=0.90 UUID=c06c0ea6:5780b170:ea2fd86a:09558bd1
   ARRAY /dev/md/Speeduke:2 metadata=1.2 name=Speeduke:2 UUID=91ae6046:969bad93:92136016:116577fd
 
 The desired state:
 
   /dev/md0 /dev/sda1 /dev/sdc1 (10G) root partition with no lvm
   /dev/md1 /dev/sda5 /dev/sdc5 (288G) LVM for /home, /var, swap, ...
 
 Will get to /dev/md2 later...
 
  My thinking is that I should rerun mdadm and reassemble the arrays
  to the original definitions...  /md0 from sda1 & sdc1,
  /md1 from sda5 & sdc5.  Note: sda2 &
  sdc2 are legacy msdos extended partitions.
  I would not build a md device with msdos extended partitions under
  LVM2 at this time..   Agree?
 
 Agreed.  You want to rename the arrays.  Don't touch the msdos
 partitions.
 
  Is the above doable?  If I can figure the right mdadm commands...8-)
 
 Yes.  It is doable.  You can rename the array.  First stop the array.
 Then assemble it again with the new desired name.  Here is what you
 want to do.  Tom, Henrique, others, Please double check me on these.
 
   mdadm --stop /dev/md125
   mdadm --assemble /dev/md0 --update=super-minor /dev/sda1 /dev/sdc1
 
   mdadm --stop /dev/md126
   mdadm --assemble /dev/md1 --update=super-minor /dev/sda5 /dev/sdc5
 
 That should by itself be enough to get the arrays going.
 
 But, and this is an important but, did you previously add the new disk
 array to the LVM volume group on the above array?  If so then you are
 not done yet.  The LVM volume group won't be able to assemble without
 the new disk.  If you did then you need to fix up LVM next.


NO!  I did NOT add /dev/sdb and /dev/sdd to the LVM..  So that is not a
problem.. I was about to do that when the machine failed..
 
 I think you should try to get back to where you were before when your
 system was working.  Therefore I would remove the new disks from the
 LVM volume group.  But I don't know if you did or did not add it yet.
 So I must stop here and wait for further information from you.
 

 I don't know if your rescue disk has lvm automatically configured or
 not.  You may need to load the device mapper module dm_mod.  I don't
 know.  If you do then here is a hint:
 
   modprobe dm_mod
 
 To scan for volume groups:
 
   vgscan
 
Found volume group "Speeduke" using metadata type lvm2


 To activate a volume group:
 
   vgchange -ay

5 logical volume(s) in volume group "Speeduke" now active

 
 To display the physical volumes associated with a volume group:
 
   pvdisplay


PV Name               /dev/md126
VG Name               Speeduke

Other data omitted

PV UUID               kUoBgV-R9n6-exZ1-fdIk-aqlb-7Ue1-R3B1PD 




 If the new disks haven't been added to the volume group (I am hoping
 not) then you should be home free.  But if they are then I think you
 will need to remove them first.
 
 I don't know if the LVM actions above are going to be needed.  I am
 just trying to proactively give some possible hints.
 
 Bob



 Bob, You cannot know how much I appreciate the time and effort you
 and others have given to this, hopefully a few more steps and all will
 be well..
 I have not done the things you have suggested above. I'll wait for your
 response and then go!!!

 One other thing I am bothered by: md0, md1 were built using mdadm
 v0.90, md2 was built with the current mdadm v3.1.4, which changed
 the md names.  Does this matter?

Jack





Re: Grub2 reinstall on raid1 system. Corrections!!!!!

2011-01-17 Thread Bob Proulx
Jack Schneider wrote:
 Bob Proulx wrote:
  But, and this is an important but, did you previously add the new disk
  array to the LVM volume group on the above array?  If so then you are
  not done yet.  The LVM volume group won't be able to assemble without
  the new disk.  If you did then you need to fix up LVM next.
 
 NO!  I did NOT add /dev/sdb and /dev/sdd to the LVM..  So that is not a
 problem.. I was about to do that when the machine failed..

Oh good.  Then you are good to go.  Run these commands to stop the
arrays and to reassemble them with the new names.

  mdadm --stop /dev/md125
  mdadm --assemble /dev/md0 --update=super-minor /dev/sda1 /dev/sdc1

  mdadm --stop /dev/md126
  mdadm --assemble /dev/md1 --update=super-minor /dev/sda5 /dev/sdc5

Then try rebooting to the system.  I think at that point that all
should be okay and that it should boot up into the previous system.
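
One extra step worth considering (a hedged aside; the initrd side of
this is covered further down the thread): /etc/mdadm/mdadm.conf may
still carry ARRAY lines under the old names, so it is worth comparing
it against the renamed arrays:

  mdadm --detail --scan
  # replace any stale md125/md126 ARRAY lines in /etc/mdadm/mdadm.conf
  # with the md0/md1 lines this prints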

  Bob, You cannot know how much I appreciate the time and effort you
  and others have given to this, hopefully a few more steps and all will
  be well..

I have my fingers crossed for you that it will all be okay.

  I have not done the things you have suggested above. I'll wait for your
  response and then go!!!

Please go ahead and do the above commands to rename the arrays and to
reboot to the previous system.  I believe that should work.  Hope so.
These things can be finicky though.

  One other thing I am bothered by: md0, md1 were built using mdadm
  v0.90, md2 was built with the current mdadm v3.1.4, which changed
  the md names.  Does this matter?

Yes.  I am a little worried about that problem too.  But we were at a
good stopping point and I didn't want to get ahead of things.  But
let's assume that the above renaming of the raid arrays works and you
can boot to your system again.  Then what should be done about the new
disks?  Let me talk about the new disks.  But hold off working this
part of the problem until you have the first part done.  Just do one
thing at a time.

  /dev/md127 /dev/sdb /dev/sdd (465G) as yet unformatted
  ARRAY /dev/md/Speeduke:2 metadata=1.2 name=Speeduke:2 UUID=91ae6046:969bad93:92136016:116577fd

This was created using newer metadata.  I think that is going to be a
problem for Lenny/Squeeze.  It says 1.2 but Lenny/Squeeze is 0.90.  (A
major difference is where the metadata is located.  1.0 is in a
similar location to 0.90 but 1.1 and 1.2 use locations near the start
of the device.)  Plus you assigned the entire drive (/dev/sdb) instead
of using a partition for it (/dev/sdb1).  I personally don't prefer
that and always set up using a partition instead of the whole disk.

I am not sure the best course of action for the new disks.  I suggest
stopping the new array, partitioning the drives to a partition instead
of the raw disk, then recreating it using the newly created
partitions.  Do that under your (hopefully now booting) Squeeze system
and then you are assured of compatibility.  It is perhaps possible
that, because of the new metadata, the metadata=1.2 array won't be
recognized under Squeeze.  I don't know.  I haven't been in that
situation yet.  I think that would be good though because it would
mean that they would just look like raw disks again without needing to
stop the array, if it never got started.  Then you could partition and
so forth.  The future is hard to see here.

So that is my advice.  If the new array is running then I would stop
it.  (mdadm --stop /dev/md127) Then partition it, partition /dev/sdb
into /dev/sdb1 and /dev/sdd into /dev/sdd1.  Then create the array
using the new sdb1 and sdd1 partitions.  Then decide how to make use
of it.
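
Sketched out (hedged: device names follow this thread, --metadata=0.90
is an assumption chosen to match the old arrays, and --zero-superblock
destroys the existing md127 metadata, so check twice before running):

  mdadm --stop /dev/md127
  mdadm --zero-superblock /dev/sdb /dev/sdd  # wipe the old 1.2 superblocks
  # create one full-size partition of type fd on each disk with fdisk
  mdadm --create /dev/md2 --level=mirror --metadata=0.90 \
      --raid-devices=2 /dev/sdb1 /dev/sdd1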

Note that if you add a new disk to the lvm root volume group then you
also need to rebuild the initrd or your system won't be able to
assemble the array at boot time and will fail to boot.  (Saying that
mostly for people who find this in the archive later.)

Bob





Re: Grub2 reinstall on raid1 system. Corrections!!!!!

2011-01-17 Thread Jack Schneider
On Mon, 17 Jan 2011 12:48:29 -0700
Bob Proulx b...@proulx.com wrote:

 Note that if you add a new disk to the lvm root volume group then you
 also need to rebuild the initrd or your system won't be able to
 assemble the array at boot time and will fail to boot.  (Saying that
 mostly for people who find this in the archive later.)
 
 Bob
 

Thanks, Bob 

What is the command to rebuild initrd? From what directory?
Just mostly for people who find this in the archive later.   8-)

Jack






Re: Grub2 reinstall on raid1 system. Corrections!!!!!

2011-01-17 Thread Bob Proulx
Jack Schneider wrote:
 Bob Proulx wrote:
  Note that if you add new disk to the lvm root volume group then you
  also need to rebuild the initrd or your system won't be able to
  assemble the array at boot time and will fail to boot.  (Saying that
  mostly for people who find this in the archive later.)

 What is the command to rebuild initrd? From what directory?
 Just mostly for people who find this in the archive later.   8-)

You can do this most easily by reconfiguring the kernel package.

  dpkg-reconfigure linux-image-2.6.32-5-amd64

Adjust that for your currently installed kernel.  That will rebuild
the initrd as part of the postinst script process.

Doing so will take the updated /etc/mdadm/mdadm.conf information and
update the stored copy in the initrd.  (In the mdadm.conf file stored
in the /boot/initrd.img-2.6.32-5-amd64 initial ram disk filesystem.)
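
Equivalently, assuming Debian's initramfs-tools, the image can be
rebuilt and then inspected directly:

  update-initramfs -u -k 2.6.32-5-amd64
  # the initrd is a gzipped cpio archive; confirm mdadm.conf went in
  zcat /boot/initrd.img-2.6.32-5-amd64 | cpio -it | grep mdadm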

Bob




Re: Grub2 reinstall on raid1 system. Corrections!!!!!

2011-01-17 Thread Jack Schneider
On Mon, 17 Jan 2011 12:48:29 -0700
Bob Proulx b...@proulx.com wrote:

 Jack Schneider wrote:
  Bob Proulx wrote:
   But, and this is an important but, did you previously add the new
   disk array to the LVM volume group on the above array?  If so
   then you are not done yet.  The LVM volume group won't be able to
   assemble without the new disk.  If you did then you need to fix
   up LVM next.
  
  NO!  I did NOT add /dev/sdb and /dev/sdd to the LVM..  So that is
  not a problem.. I was about to do that when the machine failed..
 
 Oh good.  Then you are good to go.  Run these commands to stop the
 arrays and to reassemble them with the new names.
 
   mdadm --stop /dev/md125
   mdadm --assemble /dev/md0 --update=super-minor /dev/sda1 /dev/sdc1
 
    mdadm --stop /dev/md126
   mdadm --assemble /dev/md1 --update=super-minor /dev/sda5 /dev/sdc5
 
 Then try rebooting to the system.  I think at that point that all
 should be okay and that it should boot up into the previous system.
 
 Bob
 

Bob, a small glitch.  mdadm: /dev/sda1 exists but is not an md array.
mdadm --stop was successful, before the above. 

It appears that a --create-like command is needed.  Looks like
md125 is md0 overwritten somewhere...  

One of the problems of my "no problem found" mentality..

Jack  





Re: Grub2 reinstall on raid1 system. Corrections!!!!!

2011-01-17 Thread Jack Schneider
On Mon, 17 Jan 2011 15:50:12 -0600
Jack Schneider p...@dp-indexing.com wrote:

  Oh good.  Then you are good to go.  Run these commands to stop the
  arrays and to reassemble them with the new names.
  
mdadm --stop /dev/md125
mdadm --assemble /dev/md0 --update=super-minor /dev/sda1 /dev/sdc1
  
    mdadm --stop /dev/md126
mdadm --assemble /dev/md1 --update=super-minor /dev/sda5 /dev/sdc5
  
 Bob, a small glitch.  mdadm: /dev/sda1 exists but is not an md array.
 mdadm --stop was successful, before the above. 
 
 It appears that a --create-like command is needed.  Looks like
 md125 is md0 overwritten somewhere...  
 
 One of the problems of my "no problem found" mentality..
 
 Jack  
 
 
Additionally, maybe I'm in the wrong config.  Running from a
sysrescuecd.  I do have a current Debian-AMD64-rescue-live CD, which
I made this AM.  I need to find out what's there...
Further:
Can I execute the mdadm commands from a su out of a BusyBox prompt? 

Jack



Re: Grub2 reinstall on raid1 system. Corrections!!!!!

2011-01-17 Thread Bob Proulx
Jack Schneider wrote:
 Bob Proulx wrote:
mdadm --stop /dev/md125
mdadm --assemble /dev/md0 --update=super-minor /dev/sda1 /dev/sdc1
 
    mdadm --stop /dev/md126
mdadm --assemble /dev/md1 --update=super-minor /dev/sda5 /dev/sdc5

 Bob, a small glitch.  mdadm: /dev/sda1 exists but is not an md array.
 mdadm --stop was successful, before the above.

If mdadm --stop was successful then it must have been an array before
that point.  So that doesn't make sense.  Double check everything.

  mdadm --examine /dev/sda1
  mdadm --examine /dev/sdc1
  mdadm --detail /dev/md0

 It appears that a --create-like command is needed.  Looks like
 md125 is md0 overwritten somewhere...

If you create an array it will destroy the data that is on the
array.  Unless you want to discard your data you don't want to do
that.  You want to assemble an array from the components.  That is
an important distinction.

You really want to be able to assemble the array.  Do so with one disk
only if that is the only way (would need the mdadm forcing options to
start an array without all of the components) and then add the other
disk back in.  But if the array was up a moment before then it should
still be okay.  So I am suspicious about the problem.  Poke around a
little more with --examine and --detail first.  Something does not
seem right.
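
For reference, the forcing options would look something like this (a
hedged sketch, and only a last resort after --examine has confirmed
the superblocks are intact):

  mdadm --assemble --run --force /dev/md0 /dev/sda1
  mdadm /dev/md0 --add /dev/sdc1  # re-add the second half once it runs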

 Additionally, maybe I'm in the wrong config.  Running from a
 sysrescuecd.  I do have a current Debian-AMD64-rescue-live CD, which
 I made this AM.

That would definitely improve things.  Because then you will have
compatible versions of all of the tools.

Is your system amd64?

 I need to find out what's there...  
 further:
 Can I execute the mdadm commands from a su out of a busybox prompt? 

If you are in a busybox prompt at boot time then you are already root
and don't need an explicit 'su'.  You should be able to execute root
commands.  The question is whether the mdadm command is available at
that point.  The reason for busybox is that it is a self-contained set
of small unix commands.  'mdadm' isn't one of those and so probably
isn't available.  Normally you can edit files and the like.  Normally
I would mount and chroot to the system.  But you don't yet have a
system.  So that is problematic at that point.
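
For the archive, the mount-and-chroot route from a full rescue
environment goes roughly like this (a sketch assuming this thread's
layout, with root on /dev/md0 and grub wanted on both mirror members):

  mount /dev/md0 /mnt
  mount --bind /dev  /mnt/dev
  mount --bind /proc /mnt/proc
  mount --bind /sys  /mnt/sys
  chroot /mnt
  update-grub             # regenerate grub.cfg from /etc/grub.d
  grub-install /dev/sda   # reinstall grub on each disk of the mirror
  grub-install /dev/sdc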

Bob




Re: Grub2 reinstall on raid1 system. Corrections!!!!!

2011-01-16 Thread Jack Schneider
On Sat, 15 Jan 2011 16:57:46 -0700
Bob Proulx b...@proulx.com wrote:

 Jack Schneider wrote:
  Bob Proulx wrote:
   Jack Schneider wrote:
     I have a raid1 based W/S running Debian Squeeze, up to date (was,
     until ~7 days ago).  There are 4 drives, 2 of which had never been
used or formatted. I configured a new array using Disk Utility
from a live Ubuntu CD. That's where I screwed up... The end
result was the names of the arrays were changed on the working
     2 drives. IE: /dev/md0 to /dev/md126 and /dev/md1 became md127.
   
   Something else must have happened too.  Because normally just
   adding arrays will not rename the existing arrays.  I am not
   familiar with the Disk Utility that you mention.
 
  Hi, Bob 
  Thanks for your encouraging advice...
 
 I believe you should be able to completely recover from the current
 problems.  But it may be tedious and not completely trivial.  You will
 just have to work through it.
 
 Now that there is more information available, and knowing that you are
 using software raid and lvm, let me guess.  You added another physical
 extent (a new /dev/md2 partition) to the root volume group?  If so
 that is a common problem.  I have hit it myself on a number of
 occasions.  You need to update the mdadm.conf file and rebuild the
 initrd.  I will say more details about it as I go here in this
 message.
 
  As I mentioned in a prior post, Grub was leaving me at a Grub
  rescue prompt.  
  
  I followed this procedure:
  http://www.gnu.org/software/grub/manual/html_node/GRUB-only-offers-a-rescue-shell.html#GRUB-only-offers-a-rescue-shell
 
 That seems reasonable.  It talks about how to drive the grub boot
 prompt to manually set up the boot.
 
 But you were talking about using a disk utility from a live cd to
 configure a new array with two new drives and that is where I was
 thinking that you had been modifying the arrays.  It sounded like it
 anyway.
 
 Gosh it would be a lot easier if we could just pop in for a quick peek
 at the system in person.  But we will just have to make do with the
 correspondence course.  :-)
 
  Booting now leaves me at a BusyBox prompt; however, the Grub menu is
  correct, with the correct kernels.  So it appears that grub is now
  finding the root/boot partitions and files. 
 
 That sounds good.  Hopefully not too bad off then.
 
   Next time instead you might just use mdadm directly.  It really is
   quite easy to create new arrays using it.  Here is an example that
   will create a new device /dev/md9 mirrored from two other devices
   /dev/sdy5 and /dev/sdz5.
   
 mdadm --create /dev/md9 --level=mirror \
   --raid-devices=2 /dev/sdy5 /dev/sdz5
   
Strangely the md2 array which I setup on the added drives
remains as /dev/md2. My root partition is/was on /dev/md0. The
result is that Grub2 fails to boot the / array.
 
  This is how I created /dev/md2.
 
 Then that explains why it didn't change.  Probably the HOMEHOST
 parameter is involved on the ones that changed.  Using mdadm from the
 command line doesn't set that parameter.
 
There was a long discussion about this topic just recently.
 You might want to jump into it in the middle here and read our
 learnings with HOMEHOST.
 
   http://lists.debian.org/debian-user/2010/12/msg01105.html
 
 mdadm --examine /dev/sda1 & /dev/sda2 gives, I think, a clean result. 
  I have posted the output at : http://pastebin.com/pHpKjgK3
 
 That looks good to me.  And healthy and normal.  Looks good to me for
 that part.
 
 But that is only the first partition.  That is just /dev/md0.  Do you
 have any information on the other partitions?
 
 You can look at /proc/partitions to get a list of all of the
 partitions that the kernel knows about.
 
   cat /proc/partitions
 
 Then you can poke at the other ones too.  But it looks like the
 filesystems are there okay.
 
 mdadm --detail /dev/md0 -- gives "mdadm: md device /dev/md0 does
 not appear to be active." 
  
  There is no /proc/mdstat  data output.  
 
 So it looks like the raid data is there on the disks but that the
 multidevice (md) module is not starting up in the kernel.  Because it
 isn't starting then there aren't any /dev/md* devices and no status
 output in /proc/mdstat.
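
From a rescue shell the pieces can be kicked off by hand, roughly (a
hedged sketch; module names as in mainline kernels of that era):

  modprobe raid1           # the md mirror personality
  mdadm --assemble --scan  # start every array mdadm can find
  cat /proc/mdstat         # confirm the md devices came up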
 
   I would boot a rescue image and then inspect the current
   configuration using the above commands.  Hopefully that will show
   something wrong that can be fixed after you know what it is.
 
 I still think this is the best course of action for you.  Boot a
 rescue disk into the system and then go from there.  Do you have a
 Debian install disk #1 or Debian netinst or other installation disk?
 Any of those will have a rescue system that should boot your system
 okay.  The Debian rescue disk will automatically search for raid
 partitions and automatically start the md modules.
 
  So it appears that I must rebuild my arrays.
 
 I think your arrays might be fine.  More information is needed.
 
 You said your boot partition was /dev/md0.  I assume that your 

Re: Grub2 reinstall on raid1 system. Corrections!!!!!

2011-01-16 Thread Bob Proulx
Jack,

With your pastebin information and the mdstat information (that last
information in your mail and pastebins was critical good stuff) I
found this old posting from you too:  :-)

  http://lists.debian.org/debian-user/2009/10/msg00808.html

With all of that I deduce the following:

  /dev/md125 /dev/sda1 /dev/sdc1 (10G) root partition with no lvm
  /dev/md126 /dev/sda5 /dev/sdc5 (288G) LVM for /home, /var, swap, ...
  /dev/md127 /dev/sdb /dev/sdd (465G) as yet unformatted

Jack, If that is wrong please correct me.  But I think that is right.

The mdstat data showed that the arrays are sync'd.  The UUIDs are as
follows.

  ARRAY /dev/md/125_0 metadata=0.90 UUID=e45b34d8:50614884:1f1d6a6a:d9c6914c
  ARRAY /dev/md/126_0 metadata=0.90 UUID=c06c0ea6:5780b170:ea2fd86a:09558bd1
  ARRAY /dev/md/Speeduke:2 metadata=1.2 name=Speeduke:2 UUID=91ae6046:969bad93:92136016:116577fd

The desired state:

  /dev/md0 /dev/sda1 /dev/sdc1 (10G) root partition with no lvm
  /dev/md1 /dev/sda5 /dev/sdc5 (288G) LVM for /home, /var, swap, ...

Will get to /dev/md2 later...

 My thinking is that I should rerun mdadm and reassemble the arrays to
 the original definitions...  /md0 from sda1 & sdc1,
 /md1 from sda5 & sdc5.  Note: sda2 & sdc2
 are legacy msdos extended partitions.
 I would not build a md device with msdos extended partitions under LVM2
 at this time..   Agree?

Agreed.  You want to rename the arrays.  Don't touch the msdos
partitions.

 Is the above doable?  If I can figure the right mdadm commands...8-)

Yes.  It is doable.  You can rename the array.  First stop the array.
Then assemble it again with the new desired name.  Here is what you
want to do.  Tom, Henrique, others, Please double check me on these.

  mdadm --stop /dev/md125
  mdadm --assemble /dev/md0 --update=super-minor /dev/sda1 /dev/sdc1

  mdadm --stop /dev/md126
  mdadm --assemble /dev/md1 --update=super-minor /dev/sda5 /dev/sdc5

That should by itself be enough to get the arrays going.

But, and this is an important but, did you previously add the new disk
array to the LVM volume group on the above array?  If so then you are
not done yet.  The LVM volume group won't be able to assemble without
the new disk.  If you did then you need to fix up LVM next.

I think you should try to get back to where you were before when your
system was working.  Therefore I would remove the new disks from the
LVM volume group.  But I don't know if you did or did not add it yet.
So I must stop here and wait for further information from you.

I don't know if your rescue disk has lvm automatically configured or
not.  You may need to load the device mapper module dm_mod.  I don't
know.  If you do then here is a hint:

  modprobe dm_mod

To scan for volume groups:

  vgscan

To activate a volume group:

  vgchange -ay

To display the physical volumes associated with a volume group:

  pvdisplay

If the new disks haven't been added to the volume group (I am hoping
not) then you should be home free.  But if they are then I think you
will need to remove them first.
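
If it did come to that, removal would go roughly like this (a hedged
sketch; pvmove is only needed if extents were already allocated on the
new physical volume):

  pvmove /dev/md127             # migrate any allocated extents off it
  vgreduce Speeduke /dev/md127  # drop it from the volume group
  pvremove /dev/md127           # clear the PV label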

I don't know if the LVM actions above are going to be needed.  I am
just trying to proactively give some possible hints.

Bob

