I got around that some time ago with a little hack:
Maintain a directory with soft links to disks of interest:


ls -l .../mydsklist
total 50
lrwxrwxrwx   1 cx158393 staff         17 Apr 29  2006 c1t0d0s1 -> /dev/dsk/c1t0d0s1
lrwxrwxrwx   1 cx158393 staff         18 Apr 29  2006 c1t16d0s1 -> /dev/dsk/c1t16d0s1
lrwxrwxrwx   1 cx158393 staff         18 Apr 29  2006 c1t17d0s1 -> /dev/dsk/c1t17d0s1
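
The links are plain symlinks; roughly how a directory like that gets
set up (using the same devices as in my listing above, substitute
your own):

        mkdir .../mydsklist
        ln -s /dev/dsk/c1t0d0s1  .../mydsklist/c1t0d0s1
        ln -s /dev/dsk/c1t16d0s1 .../mydsklist/c1t16d0s1
        ln -s /dev/dsk/c1t17d0s1 .../mydsklist/c1t17d0s1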


Then:


        zpool import -d .../mydsklist mypool
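
Leaving off the pool name should make the same -d form just list
whatever importable pools it finds through those links, which is a
quick way to check that the directory is right:

        zpool import -d .../mydsklist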

Hope that helps...

-r

Jason J. W. Williams writes:
 > Hi Luke,
 > 
 > That's terrific!
 > 
 > You know, you might be able to tell ZFS which disks to look at. I'm not
 > sure. It would be interesting if anyone with a Thumper could comment
 > on whether or not they see the import-time issue. What are your load
 > times now with MPxIO?
 > 
 > Best Regards,
 > Jason
 > 
 > 
 > 
 > On 12/7/06, Luke Schwab <[EMAIL PROTECTED]> wrote:
 > > Jason,
 > >
 > > Sorry, I don't have IM. I did make some progress on
 > > testing.
 > >
 > > The Solaris 10 OS lets you do a kind of LUN masking
 > > within the fp.conf file by creating a list of LUNs you
 > > don't want to see. This has improved my times greatly.
 > > I can now create/export within a second, and import
 > > only takes about 10 seconds. What a difference compared
 > > to the 5-8 minutes I've been seeing! But this is not
 > > good if my machine needs to see LOTS of LUNs.
 > >
 > > It would be nice if there were a feature in ZFS where
 > > you could specify which disks the pool resides on,
 > > instead of ZFS looking through every disk attached to
 > > the machine.
 > >
 > > Luke Schwab
 > >
 > >
 > > --- "Jason J. W. Williams" <[EMAIL PROTECTED]>
 > > wrote:
 > >
 > > > Hey Luke,
 > > >
 > > > Do you have IM?
 > > >
 > > > My Yahoo IM ID is [EMAIL PROTECTED]
 > > > -J
 > > >
 > > > On 12/6/06, Luke Schwab <[EMAIL PROTECTED]>
 > > > wrote:
 > > > > Rats, I think I know where you're going. We use LSIs
 > > > > exclusively.
 > > > >
 > > > > LSI performs LUN masking at the driver level. You can
 > > > > specifically tell the LSI HBA to only bind to specific
 > > > > LUNs on the array. The array doesn't appear to support
 > > > > LUN masking by itself.
 > > > >
 > > > > I believe you can also mask at the SAN switch, but most
 > > > > of our connections are direct connects to the array.
 > > > >
 > > > > I just thought you might know a quick and easy way to
 > > > > mask in Solaris 10. I tried the fp.conf file with a
 > > > > "blackout list" to prevent certain LUNs from being
 > > > > seen, but I couldn't get the OS to take a list of the
 > > > > only LUNs it should allow.
 > > > >
 > > > > Thanks,
 > > > > Luke
 > > > >
 > > > >
 > > > > --- "Jason J. W. Williams"
 > > > <[EMAIL PROTECTED]>
 > > > > wrote:
 > > > >
 > > > > > Hi Luke,
 > > > > >
 > > > > > Who makes your array? IBM, SGI or StorageTek?
 > > > > >
 > > > > > Best Regards,
 > > > > > Jason
 > > > > >
 > > > > > On 12/6/06, Luke Schwab <[EMAIL PROTECTED]>
 > > > > > wrote:
 > > > > > > Jason,
 > > > > > >
 > > > > > > Could you give me a tip on how to do LUN
 > > > > > > masking? I used to do it via the
 > > > > > > /kernel/drv/ssd.conf file with an LSI HBA. Now
 > > > > > > I have Emulex HBAs with the Leadville stack.
 > > > > > >
 > > > > > > I saw on SunSolve a way to mask using a "black
 > > > > > > list" in the /kernel/drv/fp.conf file, but that
 > > > > > > isn't what I was looking for.
 > > > > > >
 > > > > > > Do you know any other ways?
 > > > > > >
 > > > > > > Thanks,
 > > > > > >
 > > > > > > Luke Schwab
 > > > > > > --- "Jason J. W. Williams"
 > > > > > <[EMAIL PROTECTED]>
 > > > > > > wrote:
 > > > > > >
 > > > > > > > Hi Luke,
 > > > > > > >
 > > > > > > > I think you'll really like it. We moved from
 > > > > > > > UFS/SVM, and it's a night-and-day management
 > > > > > > > difference. Though I understand SVM itself is
 > > > > > > > easier to deal with than VxVM, so coming from
 > > > > > > > VxVM it may be an order of magnitude easier.
 > > > > > > >
 > > > > > > > Best Regards,
 > > > > > > > Jason
 > > > > > > >
 > > > > > > > On 12/6/06, Luke Schwab <[EMAIL PROTECTED]>
 > > > > > > > wrote:
 > > > > > > > > The 4884 and the V280 server are using two
 > > > > > > > > ports each. I don't have any FSs at this
 > > > > > > > > point; I'm trying to keep it simple for now.
 > > > > > > > >
 > > > > > > > > We are beta testing to move away from VxVM
 > > > > > > > > and such.
 > > > > > > > >
 > > > > > > > >
 > > > > > > > >
 > > > > > > > > --- "Jason J. W. Williams"
 > > > > > > > <[EMAIL PROTECTED]>
 > > > > > > > > wrote:
 > > > > > > > >
 > > > > > > > > > Hi Luke,
 > > > > > > > > >
 > > > > > > > > > Is the 4884 using two or four ports? Also,
 > > > > > > > > > how many FSs are involved?
 > > > > > > > > >
 > > > > > > > > > Best Regards,
 > > > > > > > > > Jason
 > > > > > > > > >
 > > > > > > > > > On 12/6/06, Luke Schwab <[EMAIL PROTECTED]>
 > > > > > > > > > wrote:
 > > > > > > > > > > I, too, experienced a long delay while
 > > > > > > > > > > importing a zpool on a second machine. I
 > > > > > > > > > > do not have any filesystems in the pool.
 > > > > > > > > > > Just the Solaris 10 Operating System, an
 > > > > > > > > > > Emulex 10002DC HBA, and a 4884 LSI array
 > > > > > > > > > > (dual attached).
 > > > > > > > > > >
 > > > > > > > > > > I don't have any file systems created,
 > > > > > > > > > > but when STMS (MPxIO) is enabled I see
 > > > > > > > > > >
 > > > > > > > > > > # time zpool import testpool
 > > > > > > > > > > real 6m41.01s
 > > > > > > > > > > user 0m0.30s
 > > > > > > > > > > sys 0m0.14s
 > > > > > > > > > >
 > > > > > > > > > > When I disable STMS (MPxIO), the times
 > > > > > > > > > > are much better, but still not that great:
 > > > > > > > > > >
 > > > > > > > > > > # time zpool import testpool
 > > > > > > > > > > real 1m15.01s
 > > > > > > > > > > user 0m0.15s
 > > > > > > > > > > sys 0m0.35s
 > > > > > > > > > >
 > > > > > > > > > > Are these normal symptoms?
 > > > > > > > > > >
 > > > > > > > > > > Can anyone explain why I too see delays
 > > > > > > > > > > even though I don't have any file systems
 > > > > > > > > > > in the zpool?
 > > > > > > > > > >
 > > > > > > > > > > This message posted from opensolaris.org
 > > > > > > > > > >
 > > === message truncated ===
 > >

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
