Ok, your instructions worked like a charm. So I'm running my nice 4-member SCSI gvinum RAID5 array (with softupdates turned on), and it's zipping along. Now I need to test just how robust this is. camcontrol is too nice; I want to test a more real-world failure. So I run dbench and just pull one of the drives. My expectation is that I'd see a minor pause and then the array would continue in some slower, degraded mode. What I get instead is a kernel trap 12 (boom!). I reboot, and it will not mount the degraded set until I replace the drive.

I turned off softupdates and had the same thing happen. Is this a bogus test? Is it reasonable to expect that a SCSI drive failure should have been tolerated without crashing?
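(For reference, the "nice" version of this test that I'm contrasting it with would be something like the following; 'da2' is just a placeholder for one of the array members:

   # dbench 10 &
   # camcontrol stop da2      (graceful spin-down)

versus physically yanking the drive mid-run, which is what produced the panic below.)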

(bunch of SCSI error messages to the console)
sub-disk down
plex degraded
g_access failed:6

Fatal trap 12: page fault while in kernel mode
cpuid = 1; apic id = 01
fault virtual address   = 0x18c
fault code              = supervisor write, page not present
instruction pointer     = 0x8:0xc043d72c
stack pointer           = 0x10:0xcbb17bf0
code segment            = base 0x0, limit 0xfff, type 0x1b
                        = DPL 0, pres 1, def32 1, gran 1
processor eflags        = interrupt enabled, resume, IOPL = 0
current process         = 22 (irq11: ahc1)


Matthias Schuendehuette wrote:

gvinum> start <plexname>

This (as far as I investigated :-) does one of two things:

a) initializes a newly created RAID5-plex, or

b) recalculates the parity information on a degraded RAID5-plex with
   a newly replaced subdisk.

So, 'gvinum start raid5.p0' initializes my RAID5-plex if it was newly created. You can monitor the initialization process with subsequent 'gvinum list' commands.
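For example (just a sketch of the session; 'raid5.p0' is the plex name from my setup, adjust to yours):

   gvinum> start raid5.p0
   gvinum> list               (repeat to watch the initialization progress)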

If you degrade a RAID5-plex with 'camcontrol stop <diskname>' (in the case of SCSI disks) and 'repair' it afterwards with 'camcontrol start <diskname>', then 'gvinum start raid5.p0' (my volume here is called 'raid5') recalculates the parity and revives the subdisk that was on disk <diskname>.
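Put together, the whole degrade/repair cycle looks roughly like this ('da2' is just a placeholder for whichever array member you stop):

   # camcontrol stop da2      (subdisk goes down, plex runs degraded)
   # camcontrol start da2     (disk is back)
   # gvinum
   gvinum> start raid5.p0     (recalculates parity, revives the subdisk)
   gvinum> list               (watch the revive progress)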



