I've had to force this hundreds of times over the years, working for a
hosting provider where we used software RAID / LVM on almost all of our
servers.

I've found the following commands quite helpful in a situation like the
one you describe.  I'd usually use a CentOS rescue environment, mainly
because I have it all set up to PXE boot, do installs, etc., but the
Debian rescue environment works just fine too.

lvm vgscan
lvm vgchange -ay
lvm lvs

I'll always follow up with the following (assuming the commands worked)

e2fsck /dev/volumegroupname/logicalvolumename
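
If e2fsck comes back clean, I'll usually mount the logical volume
somewhere temporary just to confirm the data is intact before doing
anything else.  Something along these lines (the mount point is just an
example):

mkdir -p /mnt/recovery
mount /dev/volumegroupname/logicalvolumename /mnt/recovery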

<--snip-->

The array wasn't just in a degraded state when the new drive was added;
it could not be started at all. The system had crashed and would not
boot fully. It would load the kernel and whatever it could load from
md0, but once it came time to mount the volumes from md1, it stalled. I
had to go into a live-boot environment, manually assemble the array
(with --force), and add the new drive. It synced, and then I had to
re-apply the correct name and UUID to the array. Then it was detected
and started by the system on md0, but it still could not mount the LVM
volumes.
<--snip-->
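
For the record, the forced assembly described above usually boils down
to something like this from a rescue environment (device names here are
only placeholders; substitute the actual member partitions and the new
drive):

mdadm --assemble --force /dev/md1 /dev/sdb2 /dev/sdc2
mdadm --manage /dev/md1 --add /dev/sdd2
cat /proc/mdstat

Once the resync finishes, recording the array so it assembles on boot
is typically a matter of appending the output of "mdadm --detail --scan"
to /etc/mdadm/mdadm.conf and rebuilding the initramfs (update-initramfs
-u on Debian).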

--
David
