Re: [OpenIndiana-discuss] System hangs on boot: No space on device msg.

2011-07-08 Thread Kees Nuyt
On Fri, 8 Jul 2011 12:44:44 -0700, Nick wrote: > Hi everyone! > This is my first post / install of OI and I just ran into > a curious issue. While trying to mount an NFS share > (exported from the OI server) on a client box, the OI > server crashed. I got the black screen of death, what > loo

[OpenIndiana-discuss] System hangs on boot: No space on device msg.

2011-07-08 Thread Nick Faraday
Hi everyone! This is my first post / install of OI and I just ran into a curious issue. While trying to mount an NFS share (exported from the OI server) on a client box, the OI server crashed. I got the black screen of death, what looked like a log dump, then the system restarted. It never cam
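
For readers trying to reproduce the setup being described, sharing a ZFS dataset over NFS from OpenIndiana and mounting it on a client usually looks something like the sketch below; the pool, dataset, and host names are placeholders for illustration, not the ones from this report.

  # on the OI server: export the dataset read/write over NFS
  zfs set sharenfs=rw tank/export/data
  # verify the export is active
  share
  # on a Solaris-family client
  mount -F nfs oi-server:/tank/export/data /mnt
  # on a Linux client the equivalent is
  mount -t nfs oi-server:/tank/export/data /mnt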

Re: [OpenIndiana-discuss] Shorter subjects for mailing lists

2011-07-08 Thread Nikola M
Nikola M wrote: > I would like to propose shorter subjects in the mailing lists. > (Caiman-team, G11n-team, Jds-team, Sfw-team; the rest are OK, I think) PLEASE consider shortening Subject lines for mailing list subjects. PLEASE. > > Like: > [Jds-team] [OpenIndiana Distribution - Bug #1054] dev-148b Gnome

Re: [OpenIndiana-discuss] zpool in sorry state

2011-07-08 Thread Eric Pierce
Indeed, right now zpool status -v is reporting only 1 unrecoverable error. However, other LUNs aren't recognized by VMware as VMFS volumes anymore. The server does have ECC memory, and an LSI SAS controller (no RAID, ZFS handles everything). We've had this in production for about 4 months without
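
To see exactly what that unrecoverable error touches, zpool status -v prints a "Permanent errors" list at the end of its output. The pool name below matches the one quoted later in this thread, but treat the snippet as an illustrative sketch rather than the poster's actual commands.

  # show per-device read/write/checksum counters plus any permanent errors
  zpool status -v vmstorage
  # after the underlying problem has been dealt with, reset the error counters
  zpool clear vmstorage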

Re: [OpenIndiana-discuss] zpool in sorry state

2011-07-08 Thread Lucas Van Tol
I think the resilver should have looked at all the data and given you the entire list of bad data, but I'm not entirely sure if resilvers look outside of the vdev they are fixing. A scrub would look at all the data and verify it. I note that your drives are out due to too many errors. Normall
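
The scrub being suggested here is a one-line operation: it walks every block in the pool, verifies checksums, and repairs from redundancy where it can. Pool name as above, purely illustrative.

  # start a full verification pass over the whole pool
  zpool scrub vmstorage
  # watch progress and see any errors it turns up
  zpool status -v vmstorage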

[OpenIndiana-discuss] zpool in sorry state

2011-07-08 Thread Eric Pierce
I'm posting this in hopes someone can help me out. Yesterday, it appears we lost 2-3 drives in our pool. The pool is 22 drives mirrored with 2 hot spares, both of which activated: Here's the current state of the pool: pool: vmstorage state: DEGRADED status: One or more devices has experienced
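
For a degraded mirror pool with hot spares already swapped in, the usual recovery path is to replace the failed disks and let the resilver finish. The device names below are placeholders for illustration, not the actual members of this pool.

  # quick summary of any pools with problems
  zpool status -x
  # swap a faulted disk for a new one in the same slot
  zpool replace vmstorage c1t5d0
  # or replace it with a disk in a different slot
  zpool replace vmstorage c1t5d0 c1t9d0
  # once the resilver completes, the activated spare should return to
  # the spare list; zpool status will confirm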