On Tue, May 29, 2012 at 5:46 PM, Richard Elling <[email protected]> wrote:
> idea at the bottom...
>
> On May 29, 2012, at 12:56 PM, Jason Cox wrote:
>
>> Let me start by saying that I am very new to OpenIndiana and Solaris
>> 10/11 in general. I normally deal with Red Hat Linux. I wanted to use
>> OI for ZFS support on a VMware shared-storage server, mounting LUNs from
>> my SAN.
>>
>> Setup:
>>
>> 2 servers, each with multiple fibre-channel connections directly attached
>> to my SAN (no SAN switch).
>>
>> Situation:
>>
>> I have multipath working, and I can create the zpool with no problem using
>> the multipath disk device:
>> ---
>> [email protected]:~# zpool create lun00 c2t60060E80104B8F6004F327FE00000000d0
>> [email protected]:~# zpool status lun00
>>   pool: lun00
>>  state: ONLINE
>>   scan: none requested
>> config:
>>
>>         NAME                                     STATE     READ WRITE CKSUM
>>         lun00                                    ONLINE       0     0     0
>>           c2t60060E80104B8F6004F327FE00000000d0  ONLINE       0     0     0
>>
>> errors: No known data errors
>> ---
>>
>> Now I can export the pool from nfs01 and import it on nfs02 with no problem:
>>
>> ---
>> [email protected]:~# zpool status lun00
>>   pool: lun00
>>  state: ONLINE
>>   scan: none requested
>> config:
>>
>>         NAME                                     STATE     READ WRITE CKSUM
>>         lun00                                    ONLINE       0     0     0
>>           c2t60060E80104B8F6004F327FE00000000d0  ONLINE       0     0     0
>>
>> errors: No known data errors
>> ---
>>
>> The issue comes up when I then export the pool from nfs02 and try to
>> import it back on nfs01:
>>
>> ---
>> [email protected]:~# zpool import lun00
>> Assertion failed: rn->rn_nozpool == B_FALSE, file
>> ../common/libzfs_import.c, line 1093, function zpool_open_func
>> Abort (core dumped)
>> ---
>>
>> No matter how many times I try to import the pool on nfs01, I hit this
>> assertion. Both servers are running the same version of OI with all the
>> same updates. They are also identical servers, purchased and spec'ed at
>> the same time for this project.
>>
>> Any guidance would be appreciated.
>
> This can occur if the disk label does not look the same from both systems.
> Run
>
>     prtvtoc /dev/rdsk/c2t60060E80104B8F6004F327FE00000000d0s0
>
> on each host and compare the output.
>  -- richard
>
> --
> ZFS Performance and Training
> [email protected]
> +1-760-896-4422
Richard,

Ok, I ran that on both machines and they look the same:

[email protected]:~# prtvtoc /dev/rdsk/c2t60060E80104B8F6004F327FE00000000d0s0
* /dev/rdsk/c2t60060E80104B8F6004F327FE00000000d0s0 partition map
*
* Dimensions:
*         512 bytes/sector
* 22978074624 sectors
* 22978074557 accessible sectors
*
* Flags:
*   1: unmountable
*  10: read-only
*
* Unallocated space:
*       First     Sector    Last
*       Sector     Count    Sector
*          34       222       255
*
*                          First        Sector       Last
* Partition  Tag  Flags    Sector        Count       Sector  Mount Directory
       0      4    00         256  22978057951  22978058206
       8     11    00  22978058207        16384  22978074590
[email protected]:~#

---

[email protected]:~# prtvtoc /dev/rdsk/c2t60060E80104B8F6004F327FE00000000d0s0
* /dev/rdsk/c2t60060E80104B8F6004F327FE00000000d0s0 partition map
*
* Dimensions:
*         512 bytes/sector
* 22978074624 sectors
* 22978074557 accessible sectors
*
* Flags:
*   1: unmountable
*  10: read-only
*
* Unallocated space:
*       First     Sector    Last
*       Sector     Count    Sector
*          34       222       255
*
*                          First        Sector       Last
* Partition  Tag  Flags    Sector        Count       Sector  Mount Directory
       0      4    00         256  22978057951  22978058206
       8     11    00  22978058207        16384  22978074590
[email protected]:~#

--
Jason Cox

_______________________________________________
OpenIndiana-discuss mailing list
[email protected]
http://openindiana.org/mailman/listinfo/openindiana-discuss
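[Editorial note: the two prtvtoc listings above are long and easy to mis-read by eye. A small sketch of scripting the comparison Richard suggested, with diff doing the checking — the vtoc.* filenames are made up for illustration, and the two printf lines stand in for the real captures (contents abbreviated from the listings above):]

```shell
# On each host you would capture the label listing, e.g.:
#   prtvtoc /dev/rdsk/c2t60060E80104B8F6004F327FE00000000d0s0 > vtoc.$(hostname)
# then copy both files to one machine. Stand-in captures for this sketch:
printf '*         512 bytes/sector\n* 22978074624 sectors\n' > vtoc.nfs01
printf '*         512 bytes/sector\n* 22978074624 sectors\n' > vtoc.nfs02

# diff exits 0 and prints nothing when the labels are identical,
# so any difference between the two hosts is flagged immediately.
diff vtoc.nfs01 vtoc.nfs02 && echo "labels match"
```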
