Beautiful, or so it seems!
I tried it here with two 300 GB U320 drives, and the setup went through
without any warnings (odd -- don't most users encounter some?).
What I did was (my system disk is sd0):
fdisk -iy sd1
fdisk -iy sd2
printf "a\n\n\n\nRAID\nw\nq\n\n" | disklabel -E sd1
printf "a\n\n\n\nRAID\nw\nq\n\n" | disklabel -E sd2
bioctl -c 1 -l /dev/sd1a,/dev/sd2a softraid0
dd if=/dev/zero of=/dev/rsd3c bs=1m count=1
disklabel sd3 (creating my partitions/slices)
newfs /dev/rsd3a
newfs /dev/rsd3b
mount /dev/sd3b /mnt/
cd /mnt/
[pull one hot-swap out]
echo Nonsense > testo
[push the disk back in]
[pull the other disk]
# ls -l
total 4
-rw-r--r-- 1 root wheel 9 May 13 12:00 testo
[everything okay until here]
# rm testo
rm: testo: Input/output error
[I guess this is to be expected]
But now my question: all the posts say that everything is in 'man
softraid' and 'man bioctl', yet there is nothing about *warnings* in
there. I also tried bioctl -a/-q, but neither would indicate that
anything was wrong while one of the drives was pulled.
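In the meantime I hacked up a crude watchdog. This is only a sketch, and
it assumes that bioctl's display output carries a status word such as
Online, Degraded, Failed or Offline per line -- those exact strings are
my assumption, not something I have confirmed against the source:

```shell
#!/bin/sh
# raidwatch.sh -- crude watchdog sketch. Assumes `bioctl sd3` prints a
# status word (Online/Degraded/Failed/Offline/Rebuild -- my assumption)
# on each volume and chunk line.
check_raid() {
    # Read bioctl output on stdin; print any line whose status is not
    # Online and return nonzero so cron/mail can pick it up.
    if grep -E 'Degraded|Failed|Offline|Rebuild'; then
        return 1
    fi
    return 0
}

# From root's crontab, something like:
#   bioctl sd3 | check_raid || mail -s "softraid degraded" root
```

Of course this is worthless if bioctl really does report nothing while a
drive is missing, which is exactly what I seemed to observe above.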
This will be a production server; it can tolerate some downtime if need
be. However:
1. I *need to know* when a disk goes offline.
2. I need to know, in practice(!), whether I can simply use the
surviving half of the broken mirror to save my data, and how to mount
it in another machine. Alas, the softraid and bioctl man pages are
silent on these two points.
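For the record, here is what I *would* try on the rescue machine. This
is completely untested and rests on my assumption that softraid keeps
its metadata on the chunk itself, so a single surviving RAID 1 member
can be attached on its own; the device names are just the ones from my
setup:

```shell
# Untested sketch: attach the lone surviving mirror half; softraid
# should (assumption!) assemble it as a degraded RAID 1 volume and
# attach a new sd device -- watch dmesg for its name, here assumed sd3.
bioctl -c 1 -l /dev/sd1a softraid0

# Then check the filesystem and mount it read-only to copy the data off:
fsck -n /dev/rsd3b
mount -r /dev/sd3b /mnt
```

If someone who has actually done this could confirm or correct it, that
would answer point 2.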
Another reason for asking:
Next I issued 'reboot', whereupon the machine hung and I got to play
hangman :(
After the reboot, I got:
...
softraid0 at root
softraid0: sd3 was not shutdown properly
scsibus3 at softraid0: 1 targets, initiator 1
sd3 at scsibus3 targ 0 lun 0: <OPENBSD, SR RAID 1, 003> SCSI2 0/direct fixed
sd3: 286094MB, 36471 cyl, 255 head, 63 sec, 512 bytes/sec, 585922538 sec total
Now I wonder what to do: will a traditional fsck do, or do I have to
recreate the softraid volume?
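In case it helps anyone answering: my own guess (an assumption, not
verified) is that the "not shutdown properly" notice only means the
volume's dirty flag was still set, and that an ordinary fsck of the
filesystems on the assembled volume is all that is needed, i.e.:

```shell
# Assumption: the softraid metadata itself is intact after the unclean
# shutdown; only the filesystems on the assembled volume (sd3 here)
# need the usual preen-mode check.
fsck -p /dev/rsd3a
fsck -p /dev/rsd3b
```

Please correct me if the volume really has to be recreated instead.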
Can anyone please help me further?
Uwe