Re: Git pkgsrc - setting file locations

2020-07-15 Thread Bob Bernstein

On Wed, 15 Jul 2020, Greg A. Woods wrote:

These are my relevant hacks which I make directly to 
pkgsrc/mk/defaults/mk.conf:



PKGMAKECONF = /etc/mk.conf
PKGSRC_MAKE_ENV +=  USER=${USER:Q}
WRKOBJDIR ?=/var/package-obj/${USER}
DISTDIR ?=  /var/package-distfiles
PACKAGES ?= /var/packages/${USER}/${OPSYS}/${OS_VERSION:C/9.0.*/9.0/:C/9.99..*/9.99/}/${MACHINE_ARCH}


Okay, I think I can wrap my brain around those.

I may slow-walk my adoption of these others. I am in my current 
fix because I played fast and loose with /etc/mk.conf without 
understanding the full implications of the changes I was making. 
Always looking for quick 'n dirty...but enough already with my 
moralizing.


Thank you Greg@

--
 A test of right and wrong must be the means, one would
 think, of ascertaining what is right or wrong, and not a
 consequence of having already ascertained it.

  J. S. Mill


Re: Git pkgsrc - setting file locations

2020-07-15 Thread Greg A. Woods
At Wed, 15 Jul 2020 15:40:40 -0400 (EDT), Bob Bernstein  wrote:
Subject: Git pkgsrc - setting file locations
>
> I have been working awhile now with the Git pkgsrc, and it occurs to
> me that it might be an advantage to provide different locations for
> things such as distfiles, packages, and...what else?
>
> What is suggested, and where/how is the optimum method for altering
> these values?

These are my relevant hacks which I make directly to
pkgsrc/mk/defaults/mk.conf:


PKGMAKECONF = /etc/mk.conf
PKGSRC_MAKE_ENV +=  USER=${USER:Q}
WRKOBJDIR ?=/var/package-obj/${USER}
DISTDIR ?=  /var/package-distfiles
PACKAGES ?= /var/packages/${USER}/${OPSYS}/${OS_VERSION:C/9.0.*/9.0/:C/9.99..*/9.99/}/${MACHINE_ARCH}
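The :C modifiers there collapse all the 9.0.x and 9.99.x patch kernels
into a single package directory per branch. A quick way to check what
the expression produces, as a sketch using nothing but stock bmake:

$ make -f /dev/null -V '${OS_VERSION:C/9.0.*/9.0/:C/9.99..*/9.99/}' OS_VERSION=9.99.69
9.99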


On my build servers these /var directories are then often symlinks to
separate directories on either shared or other local filesystem(s).
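For example, a minimal sketch of that layout (the /build paths here
are made up for illustration):

# large local or NFS filesystem mounted at /build
mkdir -p /build/distfiles /build/packages /build/package-obj
ln -s /build/distfiles /var/package-distfiles
ln -s /build/packages /var/packages
ln -s /build/package-obj /var/package-obj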


Then in /etc/mk.conf I wrap local pkgsrc-only things in an if testing
BSD_PKG_MK, e.g. as follows:

.if defined(BSD_PKG_MK)

# I.e. the rest is for pkgsrc things that are truly local to this host
# environment:  (as opposed to the site-specific stuff in /usr/pkgsrc/mk)

# XXX N.B.:  It is expected that mk/defaults/mk.conf will have set
#
#   PKGMAKECONF =   /etc/mk.conf

# use pkgtools/autoswc to cache some autoconf results
#
.sinclude "/usr/pkg/share/autoswc/autoswc.mk"

PKG_SYSCONFBASE = /etc
PKG_RCD_SCRIPTS =   YES # install rc.d scripts immediately.

.endif
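
Once that's in place, the effective values can be sanity-checked from
any package directory with pkgsrc's show-var target, e.g. (the package
chosen here is arbitrary):

$ cd /usr/pkgsrc/misc/figlet
$ make show-var VARNAME=DISTDIR
/var/package-distfiles
$ make show-var VARNAME=PACKAGES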

--
Greg A. Woods 

Kelowna, BC +1 250 762-7675   RoboHack 
Planix, Inc.  Avoncote Farms 




Git pkgsrc - setting file locations

2020-07-15 Thread Bob Bernstein
I have been working awhile now with the Git pkgsrc, and it 
occurs to me that it might be an advantage to provide different 
locations for things such as distfiles, packages, and...what 
else?


What is suggested, and where/how is the optimum method for 
altering these values?


Thank you

--
 A test of right and wrong must be the means, one would
 think, of ascertaining what is right or wrong, and not a
 consequence of having already ascertained it.

  J. S. Mill


Re: NVMM not working, NetBSD 9x amd64

2020-07-15 Thread Chavdar Ivanov
Checking once more, it would appear I haven't tried qemu-nvmm on the
kernel from the 12th of July; the last successful execution was on the
11th with a kernel and system from the 9th of July, so the window is a
bit wider than initially expected.

On Wed, 15 Jul 2020 at 14:20, Chavdar Ivanov  wrote:
>
> On Wed, 15 Jul 2020 at 11:08, Chavdar Ivanov  wrote:
> >
> > Hi,
> >
> > I decided to reuse this thread; nvmm again ceased to work from yesterday.
> >
> > On
> >
> > # uname -a
> > NetBSD ymir 9.99.69 NetBSD 9.99.69 (GENERIC) #15: Tue Jul 14 11:07:52
> > BST 2020  sysbuild@ymir:/home/sysbuild/amd64/obj/home/sysbuild/src/sys
> > /arch/amd64/compile/GENERIC amd64
> >
> > I can 'modload nvmm', but when I try to start a vm with nvmm
> > acceleration, I get a hard lock, immediately after the message about
> > the interface being initialized. I cannot break into the debugger to
> > trace and I don't get a dump on reboot. It appears the machine is in a
> > deep CPU loop, although it doesn't appear too hot.
> >
> > I then tried booting onetbsd, which is from the 12th of July and on
> > which nvmm used to work just fine. It is also the same micro version -
> > 9.99.69, so in theory should work - but in this case I get a panic when
> > I 'modload nvmm' - again, I see the short panic message on the screen
> > and the machine apparently gets into another loop here, which I cannot
> > break the usual way into the debugger and the only thing I can do is
> > hit the power button. There weren't that many kernel changes in this
> > period, most notably the per-CPU IDT patch, but I don't know if it is
> > relevant.
> >
>
> I rebuilt my system again today, this time I managed to get a core
> dump after the panic:
>
>  crash -M netbsd.22.core -N netbsd.22
> Crash version 9.99.69, image version 9.99.69.
> crash: _kvm_kvatop(0)
> Kernel compiled without options LOCKDEBUG.
> System panicked: trap
> Backtrace from time of crash is available.
> crash> bt
> _KERNEL_OPT_NARCNET() at 0
> ?() at a0819ba16000
> sys_reboot() at sys_reboot
> vpanic() at vpanic+0x15b
> snprintf() at snprintf
> startlwp() at startlwp
> calltrap() at calltrap+0x19
> kqueue_register() at kqueue_register+0x43e
> kevent1() at kevent1+0x138
> sys___kevent50() at sys___kevent50+0x33
> syscall() at syscall+0x26e
> --- syscall (number 435) ---
> syscall+0x26e:
>
> Any ideas?
>
> The dmesg shows, BTW:
>
> Jul 15 14:09:33 ymir /netbsd: [ 108.7517032] nvmm0: attached, using
> backend x86-vmx
> Jul 15 14:11:40 ymir syslogd[946]: restart
> Jul 15 14:11:40 ymir /netbsd: [ 131.2116186] fatal protection fault in
> supervisor mode
> Jul 15 14:11:40 ymir /netbsd: [ 131.2116186] trap type 4 code 0x323
> rip 0x80c89e21 cs 0x8 rflags 0x10282 cr2 0x784321f9f000 ilevel 0
> rsp 0xa0819ba1ac50
> Jul 15 14:11:40 ymir /netbsd: [ 131.2116186] curlwp 0xd066fc45b100
> pid 2869.2869 lowest kstack 0xa0819ba162c0
> Jul 15 14:11:40 ymir /netbsd: [ 131.2116186] panic: trap
> Jul 15 14:11:40 ymir /netbsd: [ 131.2116186] cpu0: Begin traceback...
> Jul 15 14:11:40 ymir /netbsd: [ 131.2116186] vpanic() at netbsd:vpanic+0x152
> Jul 15 14:11:40 ymir /netbsd: [ 131.2116186] snprintf() at netbsd:snprintf
> Jul 15 14:11:40 ymir /netbsd: [ 131.2116186] startlwp() at netbsd:startlwp
> Jul 15 14:11:40 ymir /netbsd: [ 131.2116186] alltraps() at 
> netbsd:alltraps+0xc3
> Jul 15 14:11:40 ymir /netbsd: [ 131.2116186] kqueue_register() at
> netbsd:kqueue_register+0x43e
> Jul 15 14:11:40 ymir /netbsd: [ 131.2116186] kevent1() at netbsd:kevent1+0x138
> Jul 15 14:11:40 ymir /netbsd: [ 131.2116186] sys___kevent50() at
> netbsd:sys___kevent50+0x33
> Jul 15 14:11:40 ymir /netbsd: [ 131.2116186] syscall() at netbsd:syscall+0x26e
> Jul 15 14:11:40 ymir /netbsd: [ 131.2116186] --- syscall (number 435) ---
> Jul 15 14:11:40 ymir /netbsd: [ 131.2216185] netbsd:syscall+0x26e:
> Jul 15 14:11:40 ymir /netbsd: [ 131.2216185] cpu0: End traceback...
> Jul 15 14:11:40 ymir /netbsd:
> Jul 15 14:11:40 ymir /netbsd: [ 131.2216185] dumping to dev 168,2
> (offset=8, size=5225879):
> Jul 15 14:11:40 ymir /netbsd: [ 131.2216185] dump <5>ktrace timeout
> Jul 15 14:11:40 ymir /netbsd: ktrace timeout
> Jul 15 14:11:40 ymir /netbsd: [ 131.2216185] ktrace timeout
> Jul 15 14:11:40 ymir syslogd[946]: last message repeated 2 times
>
> > Chavdar
> >
> > On Wed, 20 May 2020 at 22:09, Maxime Villard  wrote:
> > >
> > > > On 09/05/2020 at 10:54, Maxime Villard wrote:
> > > > > On 01/05/2020 at 19:13, Chavdar Ivanov wrote:
> > > >> On Fri, 1 May 2020 at 13:59, Rhialto  wrote:
> > > >>>
> > > >>> On Sun 26 Apr 2020 at 21:39:12 +0200, Maxime Villard wrote:
> > >  Maybe I should add a note in the man page to say that you cannot 
> > >  expect a CPU
> > >  from before ~2010 to have virtualization support.
> > > >>>
> > > >>> Or even better, what one should look for in the output of, for 
> > > >>> example,
> > > >>> "cpuctl identify 0". Since I didn't exactly know, I made some guesses
> > > >>> and assumed that my

Re: NVMM not working, NetBSD 9x amd64

2020-07-15 Thread Chavdar Ivanov
On Wed, 15 Jul 2020 at 11:08, Chavdar Ivanov  wrote:
>
> Hi,
>
> I decided to reuse this thread; nvmm again ceased to work from yesterday.
>
> On
>
> # uname -a
> NetBSD ymir 9.99.69 NetBSD 9.99.69 (GENERIC) #15: Tue Jul 14 11:07:52
> BST 2020  sysbuild@ymir:/home/sysbuild/amd64/obj/home/sysbuild/src/sys
> /arch/amd64/compile/GENERIC amd64
>
> I can 'modload nvmm', but when I try to start a vm with nvmm
> acceleration, I get a hard lock, immediately after the message about
> the interface being initialized. I cannot break into the debugger to
> trace and I don't get a dump on reboot. It appears the machine is in a
> deep CPU loop, although it doesn't appear too hot.
>
> I then tried booting onetbsd, which is from the 12th of July and on
> which nvmm used to work just fine. It is also the same micro version -
> 9.99.69, so in theory should work - but in this case I get a panic when
> I 'modload nvmm' - again, I see the short panic message on the screen
> and the machine apparently gets into another loop here, which I cannot
> break the usual way into the debugger and the only thing I can do is
> hit the power button. There weren't that many kernel changes in this
> period, most notably the per-CPU IDT patch, but I don't know if it is
> relevant.
>

I rebuilt my system again today, this time I managed to get a core
dump after the panic:

 crash -M netbsd.22.core -N netbsd.22
Crash version 9.99.69, image version 9.99.69.
crash: _kvm_kvatop(0)
Kernel compiled without options LOCKDEBUG.
System panicked: trap
Backtrace from time of crash is available.
crash> bt
_KERNEL_OPT_NARCNET() at 0
?() at a0819ba16000
sys_reboot() at sys_reboot
vpanic() at vpanic+0x15b
snprintf() at snprintf
startlwp() at startlwp
calltrap() at calltrap+0x19
kqueue_register() at kqueue_register+0x43e
kevent1() at kevent1+0x138
sys___kevent50() at sys___kevent50+0x33
syscall() at syscall+0x26e
--- syscall (number 435) ---
syscall+0x26e:

Any ideas?

The dmesg shows, BTW:

Jul 15 14:09:33 ymir /netbsd: [ 108.7517032] nvmm0: attached, using
backend x86-vmx
Jul 15 14:11:40 ymir syslogd[946]: restart
Jul 15 14:11:40 ymir /netbsd: [ 131.2116186] fatal protection fault in
supervisor mode
Jul 15 14:11:40 ymir /netbsd: [ 131.2116186] trap type 4 code 0x323
rip 0x80c89e21 cs 0x8 rflags 0x10282 cr2 0x784321f9f000 ilevel 0
rsp 0xa0819ba1ac50
Jul 15 14:11:40 ymir /netbsd: [ 131.2116186] curlwp 0xd066fc45b100
pid 2869.2869 lowest kstack 0xa0819ba162c0
Jul 15 14:11:40 ymir /netbsd: [ 131.2116186] panic: trap
Jul 15 14:11:40 ymir /netbsd: [ 131.2116186] cpu0: Begin traceback...
Jul 15 14:11:40 ymir /netbsd: [ 131.2116186] vpanic() at netbsd:vpanic+0x152
Jul 15 14:11:40 ymir /netbsd: [ 131.2116186] snprintf() at netbsd:snprintf
Jul 15 14:11:40 ymir /netbsd: [ 131.2116186] startlwp() at netbsd:startlwp
Jul 15 14:11:40 ymir /netbsd: [ 131.2116186] alltraps() at netbsd:alltraps+0xc3
Jul 15 14:11:40 ymir /netbsd: [ 131.2116186] kqueue_register() at
netbsd:kqueue_register+0x43e
Jul 15 14:11:40 ymir /netbsd: [ 131.2116186] kevent1() at netbsd:kevent1+0x138
Jul 15 14:11:40 ymir /netbsd: [ 131.2116186] sys___kevent50() at
netbsd:sys___kevent50+0x33
Jul 15 14:11:40 ymir /netbsd: [ 131.2116186] syscall() at netbsd:syscall+0x26e
Jul 15 14:11:40 ymir /netbsd: [ 131.2116186] --- syscall (number 435) ---
Jul 15 14:11:40 ymir /netbsd: [ 131.2216185] netbsd:syscall+0x26e:
Jul 15 14:11:40 ymir /netbsd: [ 131.2216185] cpu0: End traceback...
Jul 15 14:11:40 ymir /netbsd:
Jul 15 14:11:40 ymir /netbsd: [ 131.2216185] dumping to dev 168,2
(offset=8, size=5225879):
Jul 15 14:11:40 ymir /netbsd: [ 131.2216185] dump <5>ktrace timeout
Jul 15 14:11:40 ymir /netbsd: ktrace timeout
Jul 15 14:11:40 ymir /netbsd: [ 131.2216185] ktrace timeout
Jul 15 14:11:40 ymir syslogd[946]: last message repeated 2 times
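
One way to narrow down where it trapped is to resolve the rip from the
trap message against the unstripped kernel image, e.g. (a sketch;
<rip> stands for the full address from the trap message, and
netbsd.gdb is the debug kernel left in the build's obj directory):

$ addr2line -f -e netbsd.gdb <rip>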

> Chavdar
>
> On Wed, 20 May 2020 at 22:09, Maxime Villard  wrote:
> >
> > On 09/05/2020 at 10:54, Maxime Villard wrote:
> > > On 01/05/2020 at 19:13, Chavdar Ivanov wrote:
> > >> On Fri, 1 May 2020 at 13:59, Rhialto  wrote:
> > >>>
> > >>> On Sun 26 Apr 2020 at 21:39:12 +0200, Maxime Villard wrote:
> >  Maybe I should add a note in the man page to say that you cannot 
> >  expect a CPU
> >  from before ~2010 to have virtualization support.
> > >>>
> > >>> Or even better, what one should look for in the output of, for example,
> > >>> "cpuctl identify 0". Since I didn't exactly know, I made some guesses
> > >>> and assumed that my cpu ("Intel(R) Core(TM) i3-2120 CPU @ 3.30GHz")
> > >>> didn't have the required features (it is from 2009 or so).  But this
> > >>> thread inspired me to modload nvmm, which actually helped, so I found
> > >>> out that it even works on this cpu.
> > >
> > > On Intel CPUs the information is hidden in privileged registers that 
> > > cpuctl
> > > cannot access, so no, it won't be possible.
> > >
> > > However the day before I had added clear warnings:
> > >
> > >  
> > > https://mail-index.netbsd.org/source-changes

Re: Working ZFS RAID/SAS controller support/mpii

2020-07-15 Thread Sad Clouds
On Wed, 15 Jul 2020 08:59:00 -0400
Jason Mitchell  wrote:

> There's also a FreeBSD version of the utility:
> 
> https://www.freebsd.org/cgi/man.cgi?query=mfiutil&sektion=8
> 
> I'd think the FreeBSD version of the utility would work better given 
> that FreeBSD and NetBSD are similar.

You would still need the FreeBSD emulation packages installed, same as
with Linux. It has been a very long time since I tried anything like
this on NetBSD, so I'm not sure which emulation works better.
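
From what I remember the setup is roughly the following (package names
are from memory and version-dependent, so treat this as a sketch):

# kernel side: Linux binary compatibility
modload compat_linux
# userland side: a Linux base environment from pkgsrc emulators/
pkg_add suse131_base
# then the vendor tool can be run, e.g.
./MegaCli64 -AdpAllInfo -aALL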

I don't understand why manufacturers can't provide binary tools for all
major OSes; there aren't that many, and it's mostly x86. Especially
if the tools are meant to manage RAID cards, etc.


Re: Working ZFS RAID/SAS controller support/mpii

2020-07-15 Thread Jason Mitchell

On 7/15/20 6:44 AM, Peter Kay wrote:

On Wed, 15 Jul 2020 at 09:59, Sad Clouds  wrote:

On Wed, 15 Jul 2020 00:20:33 +0100
Peter Kay  wrote:


Configuration : Boot drive on SATA, other drives on LSI 3008 8i SAS in
JBOD, boot ROM disabled. The mpii driver gets very upset (causes a
kernel panic on boot, even though the boot drive is on SATA [1]) if
some of the drive bays aren't occupied, throws unhappy messages about
drives disappearing from bays, and generally doesn't provide any
confidence that I could ever remove a drive from a running system and
have it work.

So the issue only happens when you remove drives from a live system? If
that's the case, the obvious workaround would be to power off the
system and then replace the faulty drive.

No, it also causes a problem if the system is booted up from cold with
drives missing/in a different order than before (the boot drive still
being in the same location). It'd be nice to have the ability to hot
swap, but if it was a cold boot only issue for failed drives that
would be ok.


There is an LSI binary Linux command-line tool (MegaCli64), so I imagine
you could offline/online individual disks, but you'd need the Linux
emulation packages set up on NetBSD.

Interesting, thank you.


There's also a FreeBSD version of the utility:

https://www.freebsd.org/cgi/man.cgi?query=mfiutil&sektion=8

I'd think the FreeBSD version of the utility would work better given 
that FreeBSD and NetBSD are similar.


--
Thanks,

*Jason Mitchell*


Re: Working ZFS RAID/SAS controller support/mpii

2020-07-15 Thread Peter Kay
On Wed, 15 Jul 2020 at 09:59, Sad Clouds  wrote:
>
> On Wed, 15 Jul 2020 00:20:33 +0100
> Peter Kay  wrote:
>
> > Configuration : Boot drive on SATA, other drives on LSI 3008 8i SAS in
> > JBOD, boot ROM disabled. The mpii driver gets very upset (causes a
> > kernel panic on boot, even though the boot drive is on SATA [1]) if
> > some of the drive bays aren't occupied, throws unhappy messages about
> > drives disappearing from bays, and generally doesn't provide any
> > confidence that I could ever remove a drive from a running system and
> > have it work.
>
> So the issue only happens when you remove drives from a live system? If
> that's the case, the obvious workaround would be to power off the
> system and then replace the faulty drive.
No, it also causes a problem if the system is booted up from cold with
drives missing/in a different order than before (the boot drive still
being in the same location). It'd be nice to have the ability to hot
swap, but if it was a cold boot only issue for failed drives that
would be ok.

> There is an LSI binary Linux command-line tool (MegaCli64), so I imagine
> you could offline/online individual disks, but you'd need the Linux
> emulation packages set up on NetBSD.
Interesting, thank you.


Re: NVMM not working, NetBSD 9x amd64

2020-07-15 Thread Chavdar Ivanov
Hi,

I decided to reuse this thread; nvmm again ceased to work from yesterday.

On

# uname -a
NetBSD ymir 9.99.69 NetBSD 9.99.69 (GENERIC) #15: Tue Jul 14 11:07:52
BST 2020  sysbuild@ymir:/home/sysbuild/amd64/obj/home/sysbuild/src/sys
/arch/amd64/compile/GENERIC amd64

I can 'modload nvmm', but when I try to start a vm with nvmm
acceleration, I get a hard lock, immediately after the message about
the interface being initialized. I cannot break into the debugger to
trace and I don't get a dump on reboot. It appears the machine is in a
deep CPU loop, although it doesn't appear too hot.

I then tried booting onetbsd, which is from the 12th of July and on
which nvmm used to work just fine. It is also the same micro version -
9.99.69, so in theory should work - but in this case I get a panic when
I 'modload nvmm' - again, I see the short panic message on the screen
and the machine apparently gets into another loop here, which I cannot
break the usual way into the debugger and the only thing I can do is
hit the power button. There weren't that many kernel changes in this
period, most notably the per-CPU IDT patch, but I don't know if it is
relevant.
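
For the record, the invocation is nothing unusual, along the lines of
(the ISO name is just an example):

# modload nvmm
# qemu-system-x86_64 -accel nvmm -m 2G -cdrom install.iso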

Chavdar

On Wed, 20 May 2020 at 22:09, Maxime Villard  wrote:
>
> On 09/05/2020 at 10:54, Maxime Villard wrote:
> > On 01/05/2020 at 19:13, Chavdar Ivanov wrote:
> >> On Fri, 1 May 2020 at 13:59, Rhialto  wrote:
> >>>
> >>> On Sun 26 Apr 2020 at 21:39:12 +0200, Maxime Villard wrote:
>  Maybe I should add a note in the man page to say that you cannot expect 
>  a CPU
>  from before ~2010 to have virtualization support.
> >>>
> >>> Or even better, what one should look for in the output of, for example,
> >>> "cpuctl identify 0". Since I didn't exactly know, I made some guesses
> >>> and assumed that my cpu ("Intel(R) Core(TM) i3-2120 CPU @ 3.30GHz")
> >>> didn't have the required features (it is from 2009 or so).  But this
> >>> thread inspired me to modload nvmm, which actually helped, so I found
> >>> out that it even works on this cpu.
> >
> > On Intel CPUs the information is hidden in privileged registers that cpuctl
> > cannot access, so no, it won't be possible.
> >
> > However the day before I had added clear warnings:
> >
> >  https://mail-index.netbsd.org/source-changes/2020/04/30/msg116878.html
> >
> > So now it will tell you what's missing.
> >
> >>> Of course I immediately tried it with Haiku (the BeOS clone) from
> >>> https://download.haiku-os.org/nightly-images/x86_64/ and I got mixed
> >>> results. Once it manages to boot it works fine and nicely fast (much
> >>> better than without nvmm), but quite often it crashes into its kernel
> >>> debugger during the first 10 seconds of booting, with different messages
> >>> (I have seen "General Protection Exception" and "ASSERT failed ...
> >>> fCPUCount >= 0").  ("qemu-system-x86_64 -accel nvmm -m 2G -cdrom
> >>> haiku-master-hrev54106-x86_64-anyboot.iso" on a 9.0 GENERIC kernel)
> >
> > This was a missing filtering in the CPU identification, on CPUs that have 
> > SMT,
> > leading Haiku to believe it had SMT threads that it didn't.
> >
> >  https://mail-index.netbsd.org/source-changes/2020/05/09/msg117188.html
> >
> > As far as I can tell, your CPU has SMT.
> >
> >> I've never used Haiku so far; upon reading this I decided to try it on
> >> my NetBSD-current laptop with nvmm.
> >>
> >> So far, with several attempts, it works with no problem whatsoever,
> >> directly booting the newest image on the site pointed above.
> >>
> >> Another OS to play with...
> >>
> >> The host cpu is Intel(R) Core(TM) i7-3820QM CPU @ 2.70GHz, id 0x306a9.
> >
> > This CPU too has SMT.
> >
> > On 01/05/2020 at 20:10, Rhialto wrote:
> >> There might well be an improvement between 9.0 and -current, of course.
> >> It's good to hear that it works for you; I might upgrade to a -current
> >> kernel.
> >
> > Overall, no, each improvement in -current is propagated to 9, so you should
> > get the same results on both (modulo kernel bugs added in places not
> > related to NVMM).
> >
> > On 01/05/2020 at 20:52, Chavdar Ivanov wrote:
> >> Earlier I had similar issues with OmniOS under qemu-nvmm - sometimes
> >> it worked without a problem, sometimes I couldn't even boot. I still
> >> have no idea why.
> >
> > Maybe that's the same problem, I'll test.
>
> I tested the other day, and I saw no problem. With debugging I noticed that
> OmniOS, too, uses the CPU information that used to be mis-reported by NVMM,
> so my fix probably helped.
>
> Please confirm the issues are fixed (HaikuOS+OmniOS).



--



Re: Working ZFS RAID/SAS controller support/mpii

2020-07-15 Thread Sad Clouds
On Wed, 15 Jul 2020 00:20:33 +0100
Peter Kay  wrote:

> Configuration : Boot drive on SATA, other drives on LSI 3008 8i SAS in
> JBOD, boot ROM disabled. The mpii driver gets very upset (causes a
> kernel panic on boot, even though the boot drive is on SATA [1]) if
> some of the drive bays aren't occupied, throws unhappy messages about
> drives disappearing from bays, and generally doesn't provide any
> confidence that I could ever remove a drive from a running system and
> have it work.

So the issue only happens when you remove drives from a live system? If
that's the case, the obvious workaround would be to power off the
system and then replace the faulty drive.

I've used "LSI 9260-8i" on NetBSD, but it's 6Gb/s per port and no JBOD.
Not actually tried hotplugging with this card. The only issue with not
having JBOD is the virtual disc provided by the card doesn't support
SSD TRIM. The card has flexible config where you can disable cached
I/O, etc, so data is passed directly to SSD without getting slowed down
by onboard cache. I've not used ZFS with it, but I think configuring
each disk individually in RAID0 and then passing them to ZFS would work.
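
Something along these lines, assuming the card's single-disk volumes
attach as ld(4) devices (device names made up):

# one single-disk RAID0 volume per physical disk; ZFS does the redundancy
zpool create tank raidz ld1 ld2 ld3 ld4
zpool status tank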

There is an LSI binary Linux command-line tool (MegaCli64), so I imagine
you could offline/online individual disks, but you'd need the Linux
emulation packages set up on NetBSD.