I've been hitting a panic for some time that's been driving me bonkers. One of
my servers (currently running r151020) has a JBOD enclosure with 84 x 8TB
Seagate Archive drives. I now know we should've avoided those horribly slow
drives, but the price was right and now I'm stuck with them. The problem is
that after about 30 minutes of very heavy reads on those disks (no writes at
all), the system panics with the following backtrace:
ffffff00f58e1920 genunix:vmem_hash_delete+9b ()
ffffff00f58e1980 genunix:vmem_xfree+4b ()
ffffff00f58e19b0 genunix:vmem_free+23 ()
ffffff00f58e1a00 genunix:rmfree+6e ()
ffffff00f58e1a30 mpt_sas:mptsas_pkt_destroy_extern+cf ()
ffffff00f58e1a60 mpt_sas:mptsas_scsi_destroy_pkt+75 ()
ffffff00f58e1a80 scsi:scsi_destroy_pkt+1a ()
ffffff00f58e1ad0 ses:ses_callback+c1 ()
ffffff00f58e1b00 mpt_sas:mptsas_pkt_comp+2b ()
ffffff00f58e1b50 mpt_sas:mptsas_doneq_empty+ae ()
ffffff00f58e1b90 mpt_sas:mptsas_intr+177 ()
ffffff00f58e1be0 apix:apix_dispatch_by_vector+8c ()
ffffff00f58e1c20 apix:apix_dispatch_lowlevel+25 ()
ffffff00f58999e0 unix:switch_sp_and_call+13 ()
ffffff00f5899a40 apix:apix_do_interrupt+387 ()
ffffff00f5899a50 unix:_interrupt+ba ()
ffffff00f5899bc0 unix:acpi_cpu_cstate+11b ()
ffffff00f5899bf0 unix:cpu_acpi_idle+8d ()
ffffff00f5899c00 unix:cpu_idle_adaptive+13 ()
ffffff00f5899c20 unix:idle+a7 ()
ffffff00f5899c30 unix:thread_start+8 ()
After some googling I found the following issue, which I think addresses the
underlying problem:
https://www.illumos.org/issues/5698
via this commit:
https://github.com/illumos/illumos-gate/commit/2482ae1b96a558eec551575934d5f06c87b807af
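If I'm reading it right, applying that change to a local source tree should
just be a git cherry-pick of that hash. This is my guess at the workflow, not
something I've tested, and I assume I'd really want to apply it to whatever
illumos-omnios tree r151020 was built from rather than current illumos-gate:

  # clone the gate (or, presumably, the matching illumos-omnios branch)
  git clone https://github.com/illumos/illumos-gate.git
  cd illumos-gate

  # pull in just the mpt_sas fix referenced above
  git cherry-pick 2482ae1b96a558eec551575934d5f06c87b807af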
I'm not very familiar with the illumos build system itself, though, so I was
hoping someone could give me a pointer on how to build just this updated
mpt_sas driver (can I build it on its own, or do I have to build the entire
system?). Of course, if that fix could be released as an OmniOS update, that'd
be even better ;)
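For what it's worth, here's the incremental build I've pieced together from
the illumos build docs. The env file, dmake targets and proto paths below are
my best guesses (I haven't actually run any of this), so please correct
whatever is wrong:

  # from the top of the source tree, with the build prerequisites installed:
  # copy and tweak the sample environment file, then enter a build shell
  cp usr/src/tools/env/illumos.sh .
  vi illumos.sh                        # point CODEMGR_WS at this checkout
  ./usr/src/tools/scripts/bldenv illumos.sh

  # inside the bldenv shell: build the tools and headers once
  cd usr/src
  dmake setup

  # then build only the mpt_sas kernel module
  cd uts/intel/mpt_sas
  dmake all
  dmake install                        # puts the module in the proto area

If the module that lands under proto/root_i386/kernel/drv/amd64/ can then
simply be copied over /kernel/drv/amd64/mpt_sas (after backing up the
original) before a reboot, that would be perfect, though I don't know whether
a module built from current gate is safe to drop into the r151020 kernel,
which is partly why I'm asking.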
Thanks,
Michael