Hi, I upgraded our home NAS from the old stable CE release to the new stable one. The upgrade itself went fine, but since rebooting the machine has crashed twice.
This is the info I can see with fmdump:

# fmdump -Vp -u 95f72642-1099-ca7a-f1ca-952f0f5a9906
TIME                           UUID                                 SUNW-MSG-ID
Nov 19 2017 18:01:08.485243000 95f72642-1099-ca7a-f1ca-952f0f5a9906 SUNOS-8000-KL

  TIME                 CLASS                                         ENA
  Nov 19 18:01:08.3130 ireport.os.sunos.panic.dump_available         0x0000000000000000
  Nov 19 18:00:19.3332 ireport.os.sunos.panic.dump_pending_on_device 0x0000000000000000

nvlist version: 0
        version = 0x0
        class = list.suspect
        uuid = 95f72642-1099-ca7a-f1ca-952f0f5a9906
        code = SUNOS-8000-KL
        diag-time = 1511110868 390045
        de = fmd:///module/software-diagnosis
        fault-list-sz = 0x1
        fault-list = (array of embedded nvlists)
        (start fault-list[0])
        nvlist version: 0
                version = 0x0
                class = defect.sunos.kernel.panic
                certainty = 0x64
                asru = sw:///:path=/var/crash/unknown/.95f72642-1099-ca7a-f1ca-952f0f5a9906
                resource = sw:///:path=/var/crash/unknown/.95f72642-1099-ca7a-f1ca-952f0f5a9906
                savecore-succcess = 1
                dump-dir = /var/crash/unknown
                dump-files = vmdump.1
                os-instance-uuid = 95f72642-1099-ca7a-f1ca-952f0f5a9906
                panicstr = BAD TRAP: type=e (#pf Page fault) rp=ffffff0016065cc0 addr=0 occurred in module "unix" due to a NULL pointer dereference
                panicstack = unix:die+df () | unix:trap+e08 () | unix:_cmntrap+e6 () | unix:strncpy+28 () | lx_brand:lx_prctl+48a () | lx_brand:lx_syscall_enter+16f () | unix:brand_sys_syscall+1bd () |
                crashtime = 1511110541
                panic-time = Sun Nov 19 17:55:41 2017 CET
        (end fault-list[0])
        fault-status = 0x1
        severity = Major
        __ttl = 0x1
        __tod = 0x5a11b8d4 0x1cec3878

The first time it crashed right after re-attaching an ipkg zone, and the second time after I started a VMware VM hosted on an NFS share on the OmniOS host. I still have the crash dump files, in case anyone is willing to take a look at them.

This is an HP ProLiant MicroServer N40L. It has been rock stable for 4 years, though of course the hardware could still be at fault. I have not started any ipkg zone or any VM since, and the host is now running stable.

Thanks in advance.

--
Groeten,
natxo
_______________________________________________ OmniOS-discuss mailing list OmniOS-discuss@lists.omniti.com http://lists.omniti.com/mailman/listinfo/omnios-discuss