I have a simple Fibre Channel SAN setup, with 2 disc arrays and 2 SunFire boxes attached to an FC switch. Each disc array holds a ZFS pool which should be mounted by one OpenSolaris system, and not the other.
One of the two pairs was a recent addition to the FC switch (it was previously direct-attached), and on boot the default filesystem/local SMF service failed. We tracked it down to "zfs mount -a" being executed in /lib/svc/method/fs-local and failing while trying to read the ZFS pool already open and locked by the other OpenSolaris system. A suggested permanent resolution was to do a "zpool export" on each system for the pools that should not be mounted there.

A simple diagram to illustrate our setup:

d1(zpool1) <- switch1 -> comp1
d2(zpool2) <- switch1 -> comp2

Because comp1 already has zpool1 mounted, comp2 seems unable to export zpool1 to prevent messiness on boot (the presumption being that recording the export in zpool.cache would prevent the failure). I can enable WWN/LUN masking on d1 and d2 so that comp1 and comp2 see only those LUNs they should be mounting, but I thought I'd ask whether the "best" way to handle this would be a temporary export of zpool1 on comp1, then an import/export of zpool1 on comp2 (and likewise an import/export of zpool2 on comp1), or if there is some other, more "zfs" way to handle this.

If the somewhat convoluted technique described above is the only way to handle things, would submitting an RFE for an "export this pool even if you don't technically know about it" option be amiss? While it's not a huge issue for me to temporarily export zpool1 in this case, I could see it becoming a problem as more pools get added to the SAN.

Thanks,
Jeff Bachtel

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
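P.S. In concrete terms, the export/import dance I'm describing would look something like the following (a sketch only, assuming the pool names above and that nothing is actively using zpool1 during the window; the -f on comp2 is needed because the pool was last in use on another host):

```shell
# On comp1: temporarily release zpool1 so comp2 can see it cleanly
zpool export zpool1

# On comp2: forcibly import the pool (it was last active on comp1),
# then export it immediately so it is recorded as exported and
# drops out of comp2's zpool.cache
zpool import -f zpool1
zpool export zpool1

# Back on comp1: re-import the pool for normal use
zpool import zpool1

# Then the mirror image for zpool2: export on comp2,
# import -f / export on comp1, re-import on comp2
```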