On Tue, May 29, 2012 at 5:46 PM, Richard Elling <[email protected]> wrote:
> idea at the bottom...
>
> On May 29, 2012, at 12:56 PM, Jason Cox wrote:
>
>> Let me start by saying that I am very new to OpenIndiana and Solaris
>> 10/11 in general; I normally deal with Red Hat Linux. I wanted to use
>> OI for its ZFS support on a VMware shared-storage server, mounting
>> LUNs from my SAN.
>>
>> Setup:
>>
>> Two servers, each with multiple Fibre Channel connections directly
>> attached to my SAN (no SAN switch).
>>
>> Situation:
>>
>> Multipath is working, and I can create the zpool on the multipath
>> disk device with no problem:
>> ---
>> [email protected]:~# zpool create lun00 c2t60060E80104B8F6004F327FE00000000d0
>> [email protected]:~# zpool status lun00
>>   pool: lun00
>>  state: ONLINE
>>   scan: none requested
>> config:
>>
>>         NAME                                     STATE     READ WRITE CKSUM
>>         lun00                                    ONLINE       0     0     0
>>           c2t60060E80104B8F6004F327FE00000000d0  ONLINE       0     0     0
>>
>> errors: No known data errors
>> ---
>>
>> I can then export the pool from nfs01 and import it on nfs02 with no
>> problem:
>> ---
>> [email protected]:~# zpool status lun00
>>   pool: lun00
>>  state: ONLINE
>>   scan: none requested
>> config:
>>
>>         NAME                                     STATE     READ WRITE CKSUM
>>         lun00                                    ONLINE       0     0     0
>>           c2t60060E80104B8F6004F327FE00000000d0  ONLINE       0     0     0
>>
>> errors: No known data errors
>> ---
>>
>> The issue comes up when I export the pool from nfs02 and try to
>> import it back on nfs01:
>> ---
>> [email protected]:~# zpool import lun00
>> Assertion failed: rn->rn_nozpool == B_FALSE, file
>> ../common/libzfs_import.c, line 1093, function zpool_open_func
>> Abort (core dumped)
>> ---
>>
>> No matter how many times I try, the import fails on nfs01 the same
>> way. Both servers run the same version of OI with all the same
>> updates; they were purchased and specced at the same time for this
>> project.
>>
>> Any guidance would be appreciated.
>
> This can occur if the disk label does not look the same from both
> systems. Compare the output of
>
>     prtvtoc /dev/rdsk/c2t60060E80104B8F6004F327FE00000000d0s0
>
> as run on each host.
>  -- richard
>
> --
> ZFS Performance and Training
> [email protected]
> +1-760-896-4422
>
>
> _______________________________________________
> OpenIndiana-discuss mailing list
> [email protected]
> http://openindiana.org/mailman/listinfo/openindiana-discuss
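Richard's check boils down to capturing the disk label as each host sees it and diffing the two captures. A minimal sketch of that comparison follows; the label text in the here-documents is fabricated stand-in data so the diff step can run, and the hostnames and device path are the ones from this thread. On the real systems you would capture actual `prtvtoc` output over ssh instead.

```shell
# On the real systems the captures would be (assuming ssh access):
#   ssh nfs01 prtvtoc /dev/rdsk/c2t60060E80104B8F6004F327FE00000000d0s0 > /tmp/label.nfs01
#   ssh nfs02 prtvtoc /dev/rdsk/c2t60060E80104B8F6004F327FE00000000d0s0 > /tmp/label.nfs02
# Fabricated stand-in label dumps, illustrating a mismatch:
cat > /tmp/label.nfs01 <<'EOF'
* 512 bytes/sector
* first usable sector: 34
EOF
cat > /tmp/label.nfs02 <<'EOF'
* 512 bytes/sector
* first usable sector: 256
EOF
# If the labels differ, that mismatch is the likely cause of the failure.
if diff /tmp/label.nfs01 /tmp/label.nfs02 >/dev/null; then
    echo "labels match"
else
    echo "labels differ between hosts"
fi
```

A plain `diff` is enough here because `prtvtoc` output is line-oriented text; any difference at all between the two hosts' views of the label is suspect.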
I have an update on this. After talking to a friend who deals with
Solaris all the time, he suspected a bug in the EFI label handling.
Based on what Richard was thinking too, I took a smaller LUN (below
2 TB) and changed its label from EFI to SMI. It works now: I can move
the LUN back and forth between the two servers. So I guess I either
relabel all my LUNs as SMI and let ZFS RAID them together, or I wait
and see whether a patch comes out for this issue.

--
Jason Cox
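For anyone hitting the same thing, the workaround described above looks roughly like this. This is a sketch only: `format -e` is interactive (the prompt transcript below is indicative, not verbatim), relabeling destroys any pool on the LUN, SMI (VTOC) labels only address LUNs below 2 TB, and the device name is the one from this thread.

```shell
# WARNING: relabeling destroys the pool on the LUN -- destroy/back up first.
zpool destroy lun00

# Expert mode lets you pick the label type interactively:
format -e c2t60060E80104B8F6004F327FE00000000d0
#   format> label
#   [0] SMI Label
#   [1] EFI Label
#   Specify Label type[1]: 0

# Recreate the pool on slice 0 of the now-SMI-labeled disk. Passing the
# whole-disk name would let ZFS re-apply an EFI label, so use the slice:
zpool create lun00 c2t60060E80104B8F6004F327FE00000000d0s0
```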
