Re: Raid 1 and 2 Disks: kernel panic with init: not found when reboot into broken mirror
On 5/20/06, Joachim Schipper [EMAIL PROTECTED] wrote:
> > ...
> > Kernelized RAIDFrame activated
> > dkcsum wd0 matches BIOS drive 0x80
> > dkcsum wd1 matches BIOS drive 0x81
> > root on wd1a
> > rootdev=0x10 rrootdev=0x320 rawdev=0x312
> > warning: /dev/console does not exist
> > init: not found
> > panic: no init
> > ...
> > ddb
>
> This looks like the RAID is not autoconfiguring at all. Maybe you
> didn't build the kernel with the proper options, or maybe you forgot
> raidctl -A root?
>
> 		Joachim

Hi Joachim,

It seems so, but I haven't forgotten the above command, and the kernel
config file options are:

# GENERIC.RAID
include "arch/i386/conf/GENERIC"
option RAID_AUTOCONFIG
pseudo-device raid 4

However, this week I will restart from zero and I'll write you. Thanks ;-)

ip
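For the archives: the config fragment above still has to be turned into a running kernel with the usual config(8)/make cycle. A minimal sketch, assuming the 3.9 source tree is unpacked under /usr/src and GENERIC.RAID is placed in the i386 conf directory:

```shell
# Build a RAID-autoconfiguring kernel from the GENERIC.RAID fragment.
cd /usr/src/sys/arch/i386/conf
config GENERIC.RAID
cd ../compile/GENERIC.RAID
make clean && make depend && make

# Keep the old kernel around as a fallback, then install the new one.
cp /bsd /obsd
cp bsd /bsd
```

If the machine was never rebooted onto the freshly built kernel before `raidctl -A root` was run, the autoconfiguration support simply is not there at boot, which matches the symptoms above.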
Re: Raid 1 and 2 Disks: kernel panic with init: not found when reboot into broken mirror
On Wed, May 17, 2006 at 12:39:57AM +0200, ip wrote:
> On 5/14/06, Joachim Schipper [EMAIL PROTECTED] wrote:
> > While wd1a does have a kernel, it does not have a proper root
> > filesystem - for instance, no /dev directory, or more specifically
> > no /dev/console.
> >
> > Fix this, and also have a look at daily(8), which documents the
> > altroot mechanism; it is quite useful to ensure backup kernels can
> > always be found in a RAIDed system.
> >
> > 		Joachim
>
> Hello misc,
>
> Hi Joachim, and thanks for the tips. However, I don't understand why I
> receive these errors. From the raidctl man page, section
> "Auto-configuration and Root on RAID":
>
> ...
>      RAID sets which are auto-configurable will be configured before
>      the root file system is mounted.  These RAID sets are thus
>      available for use as a root file system, or for any other file
>      system.
> [snip]
>      Note that kernels can't be directly read from a RAID component.
>      To support the root file system on RAID sets, some mechanism must
>      be used to get a kernel booting.  For example, a small partition
>      containing only the secondary boot-blocks and an alternate kernel
>      (or two) could be used.  Once a kernel is booting however, and an
>      auto-configured RAID set is found that is eligible to be root,
>      then that RAID set will be auto-configured and its `a' partition
>      (aka raid[0..n]a) will be used as the root file system.
> ...
>
> So when I made the wd1a partition, I thought that the bsd and boot
> files would be sufficient for the goal. Instead, during the reboot in
> degraded mode, the error messages seem to indicate that the RAID
> autoconfiguration has not been activated. In fact, I have fixed
> /dev/console and the other mechanisms, but that solution amounts to
> recreating a minimal complete installation on wd1a...

(Sorry for the slow reaction, but the OP might not have this one
figured out yet, or at least it'll be useful for the archives...)

When you booted - and this was in your original message you snipped
above - you posted that you received:

...
Kernelized RAIDFrame activated
dkcsum wd0 matches BIOS drive 0x80
dkcsum wd1 matches BIOS drive 0x81
root on wd1a
rootdev=0x10 rrootdev=0x320 rawdev=0x312
warning: /dev/console does not exist
init: not found
panic: no init
...
ddb

This looks like the RAID is not autoconfiguring at all. Maybe you
didn't build the kernel with the proper options, or maybe you forgot
raidctl -A root?

		Joachim
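The two things Joachim suspects can both be checked from the running system. A sketch, assuming OpenBSD 3.9's raidctl behaves like its NetBSD ancestor, where `-s` prints an "Autoconfig:" and a "Root:" line in the component-label section:

```shell
# Show the set's status; for root-on-RAID both the autoconfigure and
# the root flags must be reported as "Yes" in the component labels.
raidctl -s raid0

# If either flag is "No", set both at once: -A root implies
# autoconfiguration and additionally marks raid0a as root-eligible.
raidctl -A root raid0
```

The flags live in the component labels on disk, not in the kernel, so they survive reboots once written; if `-s` never shows them set, the `-A root` step was lost somewhere (e.g. run against the wrong device).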
Re: Raid 1 and 2 Disks: kernel panic with init: not found when reboot into broken mirror
On 5/14/06, Joachim Schipper [EMAIL PROTECTED] wrote:
> While wd1a does have a kernel, it does not have a proper root
> filesystem - for instance, no /dev directory, or more specifically no
> /dev/console.
>
> Fix this, and also have a look at daily(8), which documents the
> altroot mechanism; it is quite useful to ensure backup kernels can
> always be found in a RAIDed system.
>
> 		Joachim

Hello misc,

Hi Joachim, and thanks for the tips. However, I don't understand why I
receive these errors. From the raidctl man page, section
"Auto-configuration and Root on RAID":

...
     RAID sets which are auto-configurable will be configured before
     the root file system is mounted.  These RAID sets are thus
     available for use as a root file system, or for any other file
     system.
[snip]
     Note that kernels can't be directly read from a RAID component.
     To support the root file system on RAID sets, some mechanism must
     be used to get a kernel booting.  For example, a small partition
     containing only the secondary boot-blocks and an alternate kernel
     (or two) could be used.  Once a kernel is booting however, and an
     auto-configured RAID set is found that is eligible to be root,
     then that RAID set will be auto-configured and its `a' partition
     (aka raid[0..n]a) will be used as the root file system.
...

So when I made the wd1a partition, I thought that the bsd and boot
files would be sufficient for the goal. Instead, during the reboot in
degraded mode, the error messages seem to indicate that the RAID
autoconfiguration has not been activated. In fact, I have fixed
/dev/console and the other mechanisms, but that solution amounts to
recreating a minimal complete installation on wd1a...

Help :\

--
ip
Re: Raid 1 and 2 Disks: kernel panic with init: not found when reboot into broken mirror
On Sat, May 13, 2006 at 01:38:40PM +0200, ip wrote:
> Hello misc,
>
> I spent two days reading man pages and how-tos, but I still have not
> succeeded in making RAID 1 work. I want to install OpenBSD 3.9 on two
> 10 GB IDE disks (wd0, wd1) with RAIDframe RAID 1.
>
> These are the main steps that I executed:
>
> 1. regular installation of OpenBSD 3.9 on wd0
> 2. compiled a new kernel with RAIDframe autoconfiguration support
> 3. rebooted with the new kernel on wd0
> 4. initialized and partitioned wd1
> 5. made wd1 bootable
>
> # newfs wd1a
> # mount /dev/wd1a /mnt
> # cp /bsd.raid /usr/mdec/boot /mnt
> # /usr/mdec/installboot -v /mnt/boot /usr/mdec/biosboot wd1
> # umount /mnt
>
> 6. created and initialized the array
> 7. partitioned and populated raid0
>
> # disklabel -E raid0
>   raid0a  /      4.2BSD
>   raid0b  swap
>   raid0d  /usr   4.2BSD
>   raid0e  /tmp   4.2BSD
>   raid0f  /var   4.2BSD
>   raid0g  /home  4.2BSD
> # for i in a d e f g; do newfs raid0${i}; done
> # mount /dev/raid0a /mnt
> # cd /mnt; mkdir usr tmp var home
> # mount /dev/raid0d /mnt/usr
> # mount /dev/raid0e /mnt/tmp
> etc...
> # (cd /; tar -Xcpf - .) | (cd /mnt; tar -xpf -)
> # (cd /usr; tar -cpf - .) | (cd /mnt/usr; tar -xpf -)
> etc...
> # cat /mnt/etc/fstab
> /dev/raid0a /    ffs rw 1 1
> /dev/raid0d /usr ffs rw 1 2
> etc...
>
> OK... unmount everything and reboot into the broken mirror. At the
> prompt:
>
> boot wd1a:/bsd
>
> The kernel comes up successfully until I receive:
>
> ...
> Kernelized RAIDFrame activated
> dkcsum wd0 matches BIOS drive 0x80
> dkcsum wd1 matches BIOS drive 0x81
> root on wd1a
> rootdev=0x10 rrootdev=0x320 rawdev=0x312
> warning: /dev/console does not exist
> init: not found
> panic: no init
> ...
> ddb
>
> Well, where have I made a mistake? Thanks in advance, and sorry for
> the English...

While wd1a does have a kernel, it does not have a proper root
filesystem - for instance, no /dev directory, or more specifically no
/dev/console.

Fix this, and also have a look at daily(8), which documents the altroot
mechanism; it is quite useful to ensure backup kernels can always be
found in a RAIDed system.

		Joachim
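Both of Joachim's suggestions can be sketched concretely. This assumes the stock /dev/MAKEDEV script and the ROOTBACKUP knob described in OpenBSD's daily(8); treat the exact fstab options as something to verify against your own man page:

```shell
# Give the fallback root on wd1a a usable /dev (at minimum a console),
# so a degraded boot can reach init instead of panicking:
mount /dev/wd1a /mnt
mkdir -p /mnt/dev
cp /dev/MAKEDEV /mnt/dev/
(cd /mnt/dev && sh MAKEDEV all)
umount /mnt

# altroot mechanism: an fstab entry with options "xx" marks the backup
# root partition, and ROOTBACKUP=1 makes the daily(8) script copy the
# live root filesystem onto it every night.
echo '/dev/wd1a /altroot ffs xx 0 0' >> /etc/fstab
echo 'ROOTBACKUP=1' >> /etc/daily.local
```

With the nightly altroot copy in place, wd1a is a small but complete root filesystem, which is exactly what the panic above says is missing.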
Raid 1 and 2 Disks: kernel panic with init: not found when reboot into broken mirror
Hello misc,

I spent two days reading man pages and how-tos, but I still have not
succeeded in making RAID 1 work. I want to install OpenBSD 3.9 on two
10 GB IDE disks (wd0, wd1) with RAIDframe RAID 1.

These are the main steps that I executed:

1. regular installation of OpenBSD 3.9 on wd0

2. compiled a new kernel with RAIDframe autoconfiguration support

3. rebooted with the new kernel on wd0

4. initialized and partitioned wd1

# fdisk -I wd1
# disklabel -E wd1
  wd1a  4.2BSD  64MB
  wd1d  RAID    *

5. made wd1 bootable

# newfs wd1a
# mount /dev/wd1a /mnt
# cp /bsd.raid /usr/mdec/boot /mnt
# /usr/mdec/installboot -v /mnt/boot /usr/mdec/biosboot wd1
# umount /mnt

6. created and initialized the array

# cat /etc/raid0.conf
START array
1 2 0

START disks
/dev/wd2d
/dev/wd1d

START layout
128 1 1 1

START queue
fifo 100

# raidctl -C /etc/raid0.conf
# raidctl -I 060500 raid0
# raidctl -A root raid0

7. partitioned and populated raid0

# disklabel -E raid0
  raid0a  /      4.2BSD
  raid0b  swap
  raid0d  /usr   4.2BSD
  raid0e  /tmp   4.2BSD
  raid0f  /var   4.2BSD
  raid0g  /home  4.2BSD
# for i in a d e f g; do newfs raid0${i}; done
# mount /dev/raid0a /mnt
# cd /mnt; mkdir usr tmp var home
# mount /dev/raid0d /mnt/usr
# mount /dev/raid0e /mnt/tmp
etc...
# (cd /; tar -Xcpf - .) | (cd /mnt; tar -xpf -)
# (cd /usr; tar -cpf - .) | (cd /mnt/usr; tar -xpf -)
etc...
# cat /mnt/etc/fstab
/dev/raid0a /    ffs rw 1 1
/dev/raid0d /usr ffs rw 1 2
etc...

OK... unmount everything and reboot into the broken mirror. At the
prompt:

boot wd1a:/bsd

The kernel comes up successfully until I receive:

...
Kernelized RAIDFrame activated
dkcsum wd0 matches BIOS drive 0x80
dkcsum wd1 matches BIOS drive 0x81
root on wd1a
rootdev=0x10 rrootdev=0x320 rawdev=0x312
warning: /dev/console does not exist
init: not found
panic: no init
...
ddb

Well, where have I made a mistake? Thanks in advance, and sorry for
the English...

--
ip
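For completeness: once a degraded boot succeeds, the mirror still has to be reattached before the set is redundant again. A sketch, assuming the NetBSD-derived reconstruction options of raidctl; the device name below is a placeholder and must be replaced with whichever component `-s` actually reports as failed:

```shell
# See which component the set considers failed.
raidctl -s raid0

# Rebuild the failed component in place, copying from the surviving
# half (/dev/wd0d is assumed; substitute the failed component).
raidctl -R /dev/wd0d raid0

# Check (and if needed rewrite) parity once reconstruction finishes.
raidctl -P raid0
```

Reconstruction of a full 10 GB mirror takes a while; `raidctl -S raid0` can be used to watch its progress.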