On 02/09/13 15:06, Stefan Sperling wrote:
> On Sat, Feb 09, 2013 at 03:52:12AM -0500, Scott McEachern wrote:
>> On 02/09/13 03:09, Andy Bradford wrote:
>>> Thus said Joel Sing on Sat, 09 Feb 2013 16:44:11 +1100:
>>>
>>>> umount via DUID does not work currently - this will be fixed shortly
>>>> after the next release freeze has ended.
>>>
>>> Will that also include shutdown of softraid via DUID? e.g.,
>>>
>>> bioctl -d DUID
>>>
>>> Or is this not even possible?
>>>
>>> Thanks,
>>>
>>> Andy
>
> Oddly enough, no.
> See http://marc.info/?l=openbsd-tech&m=133513662106783&w=2 for a patch.
> It hasn't been committed yet because jsing didn't ok it. Perhaps he
> will change his mind if we ask again nicely :)


The patch applied cleanly; I rebuilt the system and rebooted. All looked good.
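
(For anyone wanting to try the same diff, the rebuild is roughly the usual faq5-style dance; the diff filename below is just a placeholder for wherever you saved the patch, and GENERIC.MP is assumed:)

cd /usr/src
patch -p0 < bioctl-duid.diff                 # placeholder name for the saved diff
cd sys/arch/$(machine)/conf
config GENERIC.MP                            # assuming the MP kernel
cd ../compile/GENERIC.MP
make clean && make depend && make && make install
reboot
# then rebuild userland (or at least bioctl) against the patched tree:
cd /usr/src && make obj && make build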

Then I adjusted my /etc/rc.shutdown to this:

umount -f /st7
umount -f /home

#bioctl -d sd10  <-- this was used before
bioctl -d 485a9f963f9cf9ea
#bioctl -d 485a9f963f9cf9ea.a

#bioctl -d sd11  <-- this was used before
bioctl -d 36d18f2cde909b01
#bioctl -d 36d18f2cde909b01.a

and executed a reboot.
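
(The same sequence can also be tested by hand from a shell first; something like the following, assuming sd10, DUID 485a9f963f9cf9ea, is the crypto volume mounted on /st7:)

umount /st7
bioctl -d 485a9f963f9cf9ea     # detach by DUID, which is what the patch adds
echo $?                        # non-zero here would mean the DUID form failed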

The bad news?  I got the same error as before:

syncing disks... done
sd3 detached
softraid0: I/O error 5 on dev 0x433 at block 16
softraid0: could not write metadata to sd3d
sd4 detached
rebooting...

At least, I think that's what it said; it went by rather quickly. I definitely saw the "could not write metadata" part.

At this point I figured no harm, no foul.  Was I ever wrong.

Upon reboot the system shit all over the place and dropped me to single user mode. The offending partitions were /dev/sd8a and /dev/sd9a. In my fstab, I have the following:

6be798121798a5a7.b none swap sw
6be798121798a5a7.a / ffs rw,softdep 1 1
6be798121798a5a7.d /tmp ffs rw,nodev,nosuid,softdep 1 2
6be798121798a5a7.f /usr ffs rw,nodev,softdep 1 2
6be798121798a5a7.g /usr/X11R6 ffs rw,nodev,softdep 1 2
6be798121798a5a7.i /usr/local ffs rw,nodev,softdep 1 2
6be798121798a5a7.h /usr/obj ffs rw,nodev,nosuid,softdep 1 2
6be798121798a5a7.e /var ffs rw,nodev,nosuid,softdep 1 2
e1d635ac777ed919.a /st5 ffs rw,nodev,nosuid,noexec,noatime,softdep 1 2
3131dc858bdefd32.a /st6 ffs rw,nodev,nosuid,noexec,noatime,softdep 1 2
darkon:/st1/ /st1 nfs rw,nodev,soft,intr 0 0

See the /st5 (e1d..919.a, aka sd8a) and /st6 (313..f32.a, aka sd9a) mount points? Those are my two 3TB RAID1 volumes. Or should I say, *were*. You can see where this is going, right?
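
(For anyone trying to keep the DUID-to-device mapping straight, on a recent snapshot the DUID shows up in both of these, so it's easy to cross-check against fstab:)

sysctl hw.disknames            # each disk is listed as name:duid
disklabel sd5 | grep duid      # the duid: line should match the fstab entry for /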

I used ed(1) to comment those lines out and rebooted. Things seemed to come up normally, and I figured I might have to fsck the big drives when... oh *fuck*. sd8 and sd9 no longer exist.
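
(The single-user edit amounts to a couple of substitutions, roughly:)

mount -uw /          # root may still be read-only in single-user mode
ed /etc/fstab        # then, at ed's prompt:
/st5/s/^/#/
/st6/s/^/#/
w
q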

The tail end of my dmesg normally looks like this (before I added the crypto volumes):

softraid0 at root
scsibus4 at softraid0: 256 targets
sd8 at scsibus4 targ 1 lun 0: <OPENBSD, SR RAID 1, 005> SCSI2 0/direct fixed
sd8: 2861588MB, 512 bytes/sector, 5860532576 sectors
sd9 at scsibus4 targ 2 lun 0: <OPENBSD, SR RAID 1, 005> SCSI2 0/direct fixed
sd9: 2861588MB, 512 bytes/sector, 5860532576 sectors
root on sd5a (6be798121798a5a7.a) swap on sd5b dump on sd5b

Now it looks like this:

softraid0 at root
scsibus4 at softraid0: 256 targets
root on sd5a (6be798121798a5a7.a) swap on sd5b dump on sd5b

I didn't know what to wipe first, the sweat off my forehead or ... well, you get the idea.

I'm tempted to try "bioctl -c 1 -l /dev/sd0,/dev/sd1 softraid0" and "bioctl -c 1 -l /dev/sd2,/dev/sd3 softraid0" to recreate the volumes (just as I created them the first time around) and *hope like hell* I can get my shit back, but before I do that, I wanted to get your advice to make sure that's my best possible move.
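
(Before touching anything, the least destructive check seems to be whether the component disks still look like softraid chunks at all; sd0 through sd3 being the disks named above:)

disklabel sd0 | grep -i raid   # the RAID partition should still show up
disklabel sd1 | grep -i raid
disklabel sd2 | grep -i raid
disklabel sd3 | grep -i raid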

Hey, you know, maybe it would be best if I reinstalled my previous snapshot (Feb 7, I think) and used _that_ version of bioctl, no?

--
Scott McEachern

https://www.blackstaff.ca
