Re: SOLVED: Software-RAID1 on sarge (AMD64)

2006-05-30 Thread Gabor Gombas
On Mon, May 29, 2006 at 04:32:17AM +0200, Goswin von Brederlow wrote:

 If RAID is built into the kernel, and the drivers for all the disks in
 the RAID are built in too, then partition type 0xFD causes the kernel
 to detect and start the RAID by itself. So you need no initrd and no
 mdadm to boot. So in your case you DO need that. (PS: I prefer a
 non-initrd boot too and have the same setup.)

Yeah. I tried initramfs a couple of weeks ago using etch, but

- initramfs-tools assembled the arrays in the wrong order and therefore
  tried to mount the swap device as / - k...
- yaird failed to assemble the root array when the components of the
  array got different names than when the initramfs was created (e.g.
  due to moving the disks to a different controller). Btw, for the same
  reason yaird is horribly broken in the non-RAID case as well.

So I decided that initramfs support in etch is definitely not ready for
production use and went back to the good old no init(rd|ramfs) setup.

Gabor

-- 
 -----------------------------------------------------
 MTA SZTAKI Computer and Automation Research Institute
     Hungarian Academy of Sciences
 -----------------------------------------------------





SOLVED: Software-RAID1 on sarge (AMD64) (was: Re: install-mbr on amd64?)

2006-05-28 Thread Kilian
In the last few days, I was struggling to convert a remote machine with 
two identical SATA disks (sda and sdb) to software RAID 1. The boot 
part especially was tricky, as I had no console access to the machine; 
the whole procedure was done remotely via SSH. I use the md tools 
(mdadm) and LILO as the bootloader. I chose LILO because IMHO it's more 
straightforward in this setup than GRUB, and I have no other operating 
systems I would want to boot.


The system was installed on the first disk; the second one had not been 
used before. These are the steps I went through:



1.  Install a Software-RAID capable kernel and boot the system with it;
Install the md tools: 'apt-get install mdadm';


2.  Partition the second hard drive (sdb). I created two partitions, a
large one at the beginning of the disk (sdb1) and a small
swap-partition at the end (sdb2). I do not use separate /boot
partitions.

NOTE: I do not use two swap spaces on the two disks; instead, I
create a RAID array consisting of the two smaller partitions on the
two disks and create the swap space on it. In case of a disk
failure, I don't need to reboot the system because the swap space
is also on RAID. Otherwise, a disk failure would toast one swap
space, probably leaving the system in an unusable state until
rebooted.

Important: both partitions need to be of type 0xFD (Linux raid
autodetect).
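
For example, the type can be set non-interactively with sfdisk (a
sketch; double-check the partition numbers before running this):

$ sfdisk --change-id /dev/sdb 1 fd
$ sfdisk --change-id /dev/sdb 2 fd

Or use fdisk's 't' command interactively.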


3.  Create the RAID arrays:

$ mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/sdb1
$ mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb2

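To confirm that both arrays came up (each degraded, with one member
still missing), a quick check:

$ cat /proc/mdstat
$ mdadm --detail /dev/md0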

4.  Create filesystems

$ mkfs -t xfs /dev/md0
$ mkswap /dev/md1

I use XFS as the filesystem because it has nice features such as
online resizing and is, IMHO, very stable and mature. Of course you
can use whatever you like.
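
For the record, the online resizing works on the mounted filesystem
(XFS can grow online, but not shrink):

$ xfs_growfs /mnt/newroot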


5.  Copy the existing Debian system to the new RAID

$ mkdir -p /mnt/newroot
$ mount /dev/md0 /mnt/newroot
$ cd /
$ find . -xdev | cpio -pm /mnt/newroot


6.  To see if the new RAID array comes up properly after a reboot,
add the following line to /etc/fstab of the system still running
from sda:

 /dev/md0   /mnt/newroot   xfs   defaults   0  0

 Reboot and check with mount if /dev/md0 is mounted properly.
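
A successful mount looks something like this (output illustrative):

$ mount | grep md0
/dev/md0 on /mnt/newroot type xfs (rw)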


7.  I now modified /etc/lilo.conf of the system still running from sda
so that on the next reboot, /dev/md0 would be mounted as the root
filesystem while LILO would still read /boot from sda:

# START /etc/lilo.conf
lba32
delay=50
map=/boot/map
boot=/dev/sda
image=/boot/vmlinuz-2.6.16.18
  label=RAID
  root=/dev/md0
  read-only
  alias=1

image=/boot/vmlinuz-OLD
  label=LinuxOLD
  root=/dev/sda1
  read-only
  alias=2
# END /etc/lilo.conf

This way, we still have a working boot (LinuxOLD) which uses
/dev/sda1 as root in case anything goes wrong (NOTE: sda1 is the
root partition of the old, non-RAID system).


8.  Run LILO
First, run

$ lilo -t -v

to see what lilo would do. If everything is OK do:

$ lilo -v
$ lilo -v -R RAID

This way, LILO boots the image labeled RAID on the next reboot, and
only that once: the image specified with -R is used exactly one time,
and every later reboot falls back to the normal default. So if the
system doesn't come up, you can reset it and LILO will boot the
other image.


9.  Edit the new fstab
The new fstab, located at /mnt/newroot/etc/fstab, must now be
changed so that /dev/md0 gets mounted as root filesystem:

# START /mnt/newroot/etc/fstab
/dev/md0   /       xfs    defaults   0  0
/dev/md1   swap    swap   defaults   0  0
proc       /proc   proc   defaults   0  0
# END /mnt/newroot/etc/fstab


10. Reboot the system. If it comes up, check with mount that /dev/md0
is mounted as the root filesystem. If it doesn't come up properly,
just reset / reboot the machine and it will boot the other
image.


11. Integrate sda into the RAID arrays
First, repartition sda exactly like sdb. The partitions must be
at least as large as those on sdb. Also, the partition type must
be 0xFD.
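
A quick way to clone the partition table from sdb to sda (a sketch;
be absolutely sure which disk is the source before running it):

$ sfdisk -d /dev/sdb | sfdisk /dev/sda
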
Then, integrate the partitions into the existing RAID array:

$ mdadm --add /dev/md0 /dev/sda1
$ mdadm --add /dev/md1 /dev/sda2

Now the arrays are being synchronized. Check with

$ watch cat /proc/mdstat

that the sync process is running. You must wait for this
process to complete on both arrays.
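
While a resync is running, /proc/mdstat shows something like this
(illustrative; sizes, percentage and finish time will differ):

md0 : active raid1 sda1[2] sdb1[0]
      97659008 blocks [2/1] [_U]
      [==>.................]  recovery = 12.4% finish=21.4min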


12. Modify lilo.conf
Now we want to boot completely from /dev/md0:

# START /etc/lilo.conf
lba32
boot=/dev/md0
root=/dev/md0
install=/boot/boot-menu.b
map=/boot/map
prompt
delay=50
timeout=50
vga=normal
raid-extra-boot=/dev/sda,/dev/sdb
default=RAID

image=/boot/vmlinuz-2.6.16.18
label=RAID
read-only
root=/dev/md0
alias=1

image=/boot/vmlinuz-OLD
label=LinuxOLD
read-only

Re: SOLVED: Software-RAID1 on sarge (AMD64)

2006-05-28 Thread Goswin von Brederlow
Kilian [EMAIL PROTECTED] writes:

 In the last few days, I was struggling to convert a remote machine
 with two identical SATA disks (sda and sdb) to software RAID 1. The
 boot part especially was tricky, as I had no console access to the
 machine; the whole procedure was done remotely via SSH. I use the md
 tools (mdadm) and LILO as the bootloader. I chose LILO because IMHO
 it's more straightforward in this setup than GRUB, and I have no other
 operating systems I would want to boot.

 The system was installed on the first disk; the second one had not
 been used before. These are the steps I went through:


 1.  Install a Software-RAID capable kernel and boot the system with it;
  Install the md tools: 'apt-get install mdadm';

Meaning any Debian kernel. :)

 2.  Partition the second hard drive (sdb). I created two partitions, a
  large one at the beginning of the disk (sdb1) and a small
  swap-partition at the end (sdb2). I do not use separate /boot
  partitions.

NOTE: disk speed differs by around a factor of 2 between the start and
the end of a disk. Which end is the fast one can depend on the disk,
but usually the start is. Better to put swap there.
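
This is easy to measure with dd, e.g. (the skip value assumes a disk
of roughly 200 GB; adjust it to land near the end of yours):

$ dd if=/dev/sdb of=/dev/null bs=1M count=256
$ dd if=/dev/sdb of=/dev/null bs=1M count=256 skip=190000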

  NOTE: I do not use two swap spaces on the two disks; instead, I
  create a RAID array consisting of the two smaller partitions on the
  two disks and create the swap space on it. In case of a disk
  failure, I don't need to reboot the system because the swap space
  is also on RAID. Otherwise, a disk failure would toast one swap
  space, probably leaving the system in an unusable state until
  rebooted.

It would cause processes to segfault all over and take down the system.

  Important: both partitions need to be of type 0xFD (Linux raid
  autodetect).

Actually not. mdadm can work just as well without it. It doesn't hurt, though.

 3.  Create the RAID arrays:

  $ mdadm --create /dev/md0 --level=1 --raid-disks=2 missing /dev/sdb1
  $ mdadm --create /dev/md1 --level=1 --raid-disks=2 missing /dev/sdb2


 4.  Create filesystems

  $ mkfs -t xfs /dev/md0
  $ mkswap /dev/md1

  I use XFS as the filesystem because it has nice features such as
  online resizing and is, IMHO, very stable and mature. Of course you
  can use whatever you like.

As does ext3, even more so.

 5.  Copy the existing Debian system to the new RAID

  $ mkdir -p /mnt/newroot
  $ mount /dev/md0 /mnt/newroot
  $ cd /
  $ find . -xdev | cpio -pm /mnt/newroot

Fun, fun. A copy of /proc. That's a few Gig wasted depending on the
size of /proc/kcore.

MfG
Goswin





Re: SOLVED: Software-RAID1 on sarge (AMD64)

2006-05-28 Thread Michal Schmidt

Goswin von Brederlow wrote:

Kilian [EMAIL PROTECTED] writes:

5.  Copy the existing Debian system to the new RAID

 $ mkdir -p /mnt/newroot
 $ mount /dev/md0 /mnt/newroot
 $ cd /
 $ find . -xdev | cpio -pm /mnt/newroot


Fun, fun. A copy of /proc. That's a few Gig wasted depending on the
size of /proc/kcore.


Umm, that's prevented by the -xdev option, isn't it?

Michal





Re: SOLVED: Software-RAID1 on sarge (AMD64)

2006-05-28 Thread Kilian

Michal Schmidt wrote:

Goswin von Brederlow wrote:

Kilian [EMAIL PROTECTED] writes:

5.  Copy the existing Debian system to the new RAID

 $ mkdir -p /mnt/newroot
 $ mount /dev/md0 /mnt/newroot
 $ cd /
 $ find . -xdev | cpio -pm /mnt/newroot


Fun, fun. A copy of /proc. That's a few Gig wasted depending on the
size of /proc/kcore.


Umm, that's prevented by the -xdev option, isn't it?


It is, since -xdev prevents find from descending into directories on 
other filesystems, which is exactly what /proc is.
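
Easy to verify:

$ mount | grep /proc
proc on /proc type proc (rw)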


-- Kilian





Re: SOLVED: Software-RAID1 on sarge (AMD64)

2006-05-28 Thread Kilian

Goswin von Brederlow wrote:

Kilian [EMAIL PROTECTED] writes:


In the last few days, I was struggling to convert a remote machine
with two identical SATA disks (sda and sdb) to software RAID 1. The
boot part especially was tricky, as I had no console access to the
machine; the whole procedure was done remotely via SSH. I use the md
tools (mdadm) and LILO as the bootloader. I chose LILO because IMHO
it's more straightforward in this setup than GRUB, and I have no other
operating systems I would want to boot.

The system was installed on the first disk; the second one had not
been used before. These are the steps I went through:


1.  Install a Software-RAID capable kernel and boot the system with it;
 Install the md tools: 'apt-get install mdadm';


Meaning any Debian kernel. :)


True, mine had it as a module though, which meant an initrd; and since 
I was working remotely, I didn't want to bring in another pitfall by 
compiling a kernel with RAID support built into it.



2.  Partition the second hard drive (sdb). I created two partitions, a
 large one at the beginning of the disk (sdb1) and a small
 swap-partition at the end (sdb2). I do not use separate /boot
 partitions.


NOTE: disk speed differs by around a factor of 2 between the start and
the end of a disk. Which end is the fast one can depend on the disk,
but usually the start is. Better to put swap there.


I didn't know that, thanks for the hint!


 NOTE: I do not use two swap spaces on the two disks; instead, I
 create a RAID array consisting of the two smaller partitions on the
 two disks and create the swap space on it. In case of a disk
 failure, I don't need to reboot the system because the swap space
 is also on RAID. Otherwise, a disk failure would toast one swap
 space, probably leaving the system in an unusable state until
 rebooted.


It would cause processes to segfault all over and take down the system.


I knew there was a reason ;-)


 Important: both partitions need to be of type 0xFD (Linux raid
 autodetect).


Actually not. mdadm can work just as well without it. It doesn't hurt, though.


Didn't know that either, thanks.

[...]

 I use XFS as the filesystem because it has nice features such as
 online resizing and is, IMHO, very stable and mature. Of course you
 can use whatever you like.


As does ext3, even more so.


Let's not start a filesystem flamewar, you'd probably win ;-)


5.  Copy the existing Debian system to the new RAID

 $ mkdir -p /mnt/newroot
 $ mount /dev/md0 /mnt/newroot
 $ cd /
 $ find . -xdev | cpio -pm /mnt/newroot


Fun, fun. A copy of /proc. That's a few Gig wasted depending on the
size of /proc/kcore.


As pointed out by Michal Schmidt, -xdev takes care of that. Of course, 
if there are several filesystems on the original disk, you'd have to 
copy each one separately.
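
For example, with a separate /home on its own array (device name
purely illustrative):

$ mount /dev/md2 /mnt/newroot/home
$ cd /home
$ find . -xdev | cpio -pm /mnt/newroot/home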


Thanks for your suggestions!

-- Kilian





Re: SOLVED: Software-RAID1 on sarge (AMD64)

2006-05-28 Thread Goswin von Brederlow
Michal Schmidt [EMAIL PROTECTED] writes:

 Goswin von Brederlow wrote:
 Kilian [EMAIL PROTECTED] writes:
 5.  Copy the existing Debian system to the new RAID

  $ mkdir -p /mnt/newroot
  $ mount /dev/md0 /mnt/newroot
  $ cd /
  $ find . -xdev | cpio -pm /mnt/newroot
 Fun, fun. A copy of /proc. That's a few Gig wasted depending on the
 size of /proc/kcore.

 Umm, that's prevented by the -xdev option, isn't it?

 Michal

Oh, my bad. I thought it was -xdir to exclude the dev directory
from udev. But yes, it will omit /proc and any other mounted
filesystems.

MfG
Goswin





Re: SOLVED: Software-RAID1 on sarge (AMD64)

2006-05-28 Thread Goswin von Brederlow
Kilian [EMAIL PROTECTED] writes:

 Goswin von Brederlow wrote:
 Kilian [EMAIL PROTECTED] writes:

 In the last few days, I was struggling to convert a remote machine
 with two identical SATA disks (sda and sdb) to software RAID 1. The
 boot part especially was tricky, as I had no console access to the
 machine; the whole procedure was done remotely via SSH. I use the md
 tools (mdadm) and LILO as the bootloader. I chose LILO because IMHO
 it's more straightforward in this setup than GRUB, and I have no other
 operating systems I would want to boot.

 The system was installed on the first disk; the second one had not
 been used before. These are the steps I went through:


 1.  Install a Software-RAID capable kernel and boot the system with it;
  Install the md tools: 'apt-get install mdadm';
 Meaning any Debian kernel. :)

 True, mine had it as a module though, which meant an initrd; and since
 I was working remotely, I didn't want to bring in another pitfall by
 compiling a kernel with RAID support built into it.

  Important: both partitions need to be of type 0xFD (Linux raid
  autodetect).
 Actually not. mdadm can work just as well without it. It doesn't hurt,
 though.

 Didn't know that either, thanks.

If RAID is built into the kernel, and the drivers for all the disks in
the RAID are built in too, then partition type 0xFD causes the kernel
to detect and start the RAID by itself. So you need no initrd and no
mdadm to boot. So in your case you DO need that. (PS: I prefer a
non-initrd boot too and have the same setup.)
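
On a Debian kernel you can check which way it was built (y = built in,
m = module):

$ grep CONFIG_MD_RAID1 /boot/config-$(uname -r)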

MfG
Goswin

