Re[2]: [zfs-discuss] Re: zpool import takes too long with large numbers of file systems

2006-12-12 Thread Robert Milkowski
Hello Jason,

Thursday, December 7, 2006, 11:18:17 PM, you wrote:

JJWW> Hi Luke,

JJWW> That's terrific!

JJWW> You know you might be able to tell ZFS which disks to look at. I'm not
JJWW> sure. It would be interesting, if anyone with a Thumper could comment
JJWW> on whether or not they see the import time issue. What are your load
JJWW> times now with MPXIO?

On an x4500, importing a pool made of 44 disks takes about 13 seconds.



-- 
Best regards,
 Robert                            mailto:[EMAIL PROTECTED]
   http://milek.blogspot.com



Re: [zfs-discuss] Re: zpool import takes too long with large numbers of file systems

2006-12-08 Thread Jason J. W. Williams

Hi Roch,

That's a pretty cool idea!

-J

On 12/8/06, Roch - PAE <[EMAIL PROTECTED]> wrote:


I got around that some time ago with a little hack: maintain a directory
with soft links to the disks of interest:


ls -l .../mydsklist
total 50
lrwxrwxrwx   1 cx158393 staff 17 Apr 29  2006 c1t0d0s1 -> /dev/dsk/c1t0d0s1
lrwxrwxrwx   1 cx158393 staff 18 Apr 29  2006 c1t16d0s1 -> /dev/dsk/c1t16d0s1
lrwxrwxrwx   1 cx158393 staff 18 Apr 29  2006 c1t17d0s1 -> /dev/dsk/c1t17d0s1


Then:


zpool import -d /mydsklist mypool

Hope that helps...

-r
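
A minimal sketch of scripting the same trick, assuming the disks backing the
pool are already known (the device names are just the ones from the listing
above, i.e. placeholders for your own):

mkdir -p /mydsklist
for d in c1t0d0s1 c1t16d0s1 c1t17d0s1
do
    ln -s /dev/dsk/$d /mydsklist/$d
done
zpool import -d /mydsklist mypool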


Re: [zfs-discuss] Re: zpool import takes too long with large numbers of file systems

2006-12-08 Thread Roch - PAE

I got around that some time ago with a little hack: maintain a directory
with soft links to the disks of interest:


ls -l .../mydsklist
total 50
lrwxrwxrwx   1 cx158393 staff 17 Apr 29  2006 c1t0d0s1 -> /dev/dsk/c1t0d0s1
lrwxrwxrwx   1 cx158393 staff 18 Apr 29  2006 c1t16d0s1 -> /dev/dsk/c1t16d0s1
lrwxrwxrwx   1 cx158393 staff 18 Apr 29  2006 c1t17d0s1 -> /dev/dsk/c1t17d0s1


Then:


zpool import -d /mydsklist mypool

Hope that helps...

-r

Jason J. W. Williams writes:
 > Hi Luke,
 > 
 > That's terrific!
 > 
 > You know you might be able to tell ZFS which disks to look at. I'm not
 > sure. It would be interesting, if anyone with a Thumper could comment
 > on whether or not they see the import time issue. What are your load
 > times now with MPXIO?
 > 
 > Best Regards,
 > Jason

Re: [zfs-discuss] Re: zpool import takes too long with large numbers of file systems

2006-12-07 Thread Jason J. W. Williams

Hi Luke,

That's terrific!

You know you might be able to tell ZFS which disks to look at. I'm not
sure. It would be interesting, if anyone with a Thumper could comment
on whether or not they see the import time issue. What are your load
times now with MPXIO?

Best Regards,
Jason



On 12/7/06, Luke Schwab <[EMAIL PROTECTED]> wrote:

Jason,

Sorry, I don't have IM. I did make some progress on
testing.

The Solaris 10 OS lets you do a kind of LUN masking in the fp.conf file by
creating a list of LUNs you don't want to see. This has improved my time
greatly. I can now create/export within a second, and import only takes about
10 seconds. What a difference compared to the 5-8 minutes I've been seeing!
But this is not good if my machine needs to see lots of LUNs.

It would be nice if there were a feature in ZFS where you could specify which
disks the pool resides on, instead of ZFS looking through every disk attached
to the machine.

Luke Schwab
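
For reference, the fp.conf "blackout list" described above appears to be the
pwwn-lun-blacklist property documented in fp(7D). A rough sketch of an entry,
with a made-up target-port WWN and LUN numbers as placeholders (verify the
exact syntax against fp(7D) before using it):

pwwn-lun-blacklist=
"210000e08b123456,0,1,2",
"210100e08b654321,5";

As with other driver.conf files, the change only takes effect when the fp
driver re-reads its configuration, typically at a reconfiguration reboot.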


--- "Jason J. W. Williams" <[EMAIL PROTECTED]> wrote:

> Hey Luke,
>
> Do you have IM?
>
> My Yahoo IM ID is [EMAIL PROTECTED]
> -J
>
> On 12/6/06, Luke Schwab <[EMAIL PROTECTED]> wrote:
> > Rats, I think I know where you're going. We use LSIs exclusively.
> >
> > LSI performs LUN masking at the driver level. You can specifically tell
> > the LSI HBA to only bind to specific LUNs on the array. The array doesn't
> > appear to support LUN masking by itself.
> >
> > I believe you can also mask at the SAN switch, but most of our
> > connections are direct connect to the array.
> >
> > I just thought you might know a quick and easy way to mask in Solaris 10.
> > I tried the fp.conf file with a "blackout list" to prevent certain LUNs
> > from being viewed, but I couldn't get the OS to take a list of only the
> > LUNs it should allow.
> >
> > Thanks,
> > Luke
> >
> > --- "Jason J. W. Williams" <[EMAIL PROTECTED]> wrote:
> >
> > > Hi Luke,
> > >
> > > Who makes your array? IBM, SGI or StorageTek?
> > >
> > > Best Regards,
> > > Jason
> > >
> > > On 12/6/06, Luke Schwab <[EMAIL PROTECTED]> wrote:
> > > > Jason,
> > > >
> > > > Could you give me a tip on how to do LUN masking? I used to do it via
> > > > the /kernel/drv/ssd.conf file with an LSI HBA. Now I have Emulex HBAs
> > > > with the Leadville stack.
> > > >
> > > > I saw on SunSolve a way to mask using a "black list" in the
> > > > /kernel/drv/fp.conf file, but that isn't what I was looking for.
> > > >
> > > > Do you know any other ways?
> > > >
> > > > Thanks,
> > > >
> > > > Luke Schwab
> > > > --- "Jason J. W. Williams" <[EMAIL PROTECTED]> wrote:
> > > >
> > > > > Hi Luke,
> > > > >
> > > > > I think you'll really like it. We moved from UFS/SVM and it's a
> > > > > night and day management difference. Though I understand SVM itself
> > > > > is easier to deal with than VxVM, so it may be an order of magnitude
> > > > > easier.
> > > > >
> > > > > Best Regards,
> > > > > Jason
> > > > >
> > > > > On 12/6/06, Luke Schwab <[EMAIL PROTECTED]> wrote:
> > > > > > The 4884 as well as the V280 server is using 2 ports each. I don't
> > > > > > have any FSs at this point. I'm trying to keep it simple for now.
> > > > > >
> > > > > > We are beta testing to move away from VxVM and such.
> > > > > >
> > > > > > --- "Jason J. W. Williams" <[EMAIL PROTECTED]> wrote:
> > > > > >
> > > > > > > Hi Luke,
> > > > > > >
> > > > > > > Is the 4884 using two or four ports? Also, how many FSs are
> > > > > > > involved?
> > > > > > >
> > > > > > > Best Regards,
> > > > > > > Jason
> > > > > > >
> > > > > > > On 12/6/06, Luke Schwab <[EMAIL PROTECTED]> wrote:
> > > > > > > > I, too, experienced a long delay while importing a zpool on a
> > > > > > > > second machine. I do not have any filesystems in the pool.
> > > > > > > > Just the Solaris 10 operating system, an Emulex 10002DC HBA,
> > > > > > > > and a 4884 LSI array (dual attached).
> > > > > > > >
> > > > > > > > I don't have any file systems created, but when STMS (mpxio)
> > > > > > > > is enabled I see:
> > > > > > > >
> > > > > > > > # time zpool import testpool
> > > > > > > > real 6m41.01s
> > > > > > > > user 0m.30s
> > > > > > > > sys 0m0.14s
> > > > > > > >
> > > > > > > > When I disable STMS (mpxio), the times are much better but
> > > > > > > > still not that great:
> > > > > > > >
> > > > > > > > # time zpool import testpool
> > > > > > > > real 1m15.01s
> > > > > > > > user 0m.15s
> > > > > > > > sys 0m0.35s
> > > > > > > >
> > > > > > > > Are these normal symptoms?
> > > > > > > >
> > > > > > > > Can anyone explain why I too see delays even though I don't
> > > > > > > > have any file systems in the zpool?

Re: [zfs-discuss] Re: zpool import takes too long with large numbers of file systems

2006-12-06 Thread Jason J. W. Williams

Hi Luke,

Is the 4884 using two or four ports? Also, how many FSs are involved?

Best Regards,
Jason

On 12/6/06, Luke Schwab <[EMAIL PROTECTED]> wrote:

I, too, experienced a long delay while importing a zpool on a second machine. I
do not have any filesystems in the pool. Just the Solaris 10 operating system,
an Emulex 10002DC HBA, and a 4884 LSI array (dual attached).

I don't have any file systems created, but when STMS (mpxio) is enabled I see:

# time zpool import testpool
real 6m41.01s
user 0m.30s
sys 0m0.14s

When I disable STMS (mpxio), the times are much better but still not that great:

# time zpool import testpool
real 1m15.01s
user 0m.15s
sys 0m0.35s

Are these normal symptoms?

Can anyone explain why I too see delays even though I don't have any file 
systems in the zpool?


[zfs-discuss] Re: zpool import takes too long with large numbers of file systems

2006-12-06 Thread Luke Schwab
I, too, experienced a long delay while importing a zpool on a second machine. I
do not have any filesystems in the pool. Just the Solaris 10 operating system,
an Emulex 10002DC HBA, and a 4884 LSI array (dual attached).

I don't have any file systems created, but when STMS (mpxio) is enabled I see:

# time zpool import testpool
real 6m41.01s
user 0m.30s
sys 0m0.14s

When I disable STMS (mpxio), the times are much better but still not that great:

# time zpool import testpool
real 1m15.01s
user 0m.15s
sys 0m0.35s

Are these normal symptoms?

Can anyone explain why I too see delays even though I don't have any file 
systems in the zpool?
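
One quick way to get a feel for how much probing a bare import has to do is to
count the device nodes it will walk, and then compare against an import that
is pointed at a directory holding links to only the pool's disks (the
directory name below is a placeholder; see Roch's symlink-directory suggestion
earlier in this digest):

ls /dev/dsk | wc -l
time zpool import testpool
time zpool import -d /mydsklist testpool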
 
 
Re: [zfs-discuss] Re: zpool import takes too long with large numbers of file systems

2006-10-27 Thread Roch - PAE


As an alternative, I thought this would be relevant to the discussion:

Bug ID: 6478980
Synopsis: zfs should support automount property

In other words, do we really need to mount 1 FS in a snap, or do we just need
the system to be up quickly and then mount on demand?

-r
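
Something close to that can already be approximated with legacy mountpoints,
at the cost of managing the mounts yourself. A sketch only; the dataset name
and mount path below are placeholders:

zfs set mountpoint=legacy zpool1/home/user1
zpool export zpool1
zpool import zpool1
mount -F zfs zpool1/home/user1 /export/home/user1

With thousands of filesystems, the per-mount work is then paid only when a
filesystem is actually used, which is roughly what the automount RFE above is
asking for, minus the automation.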


Chris Gerhard writes:
 > Thank you, Eric and Doug.
 > 
 > Is there any more information about the sharemgr project out on
 > opensolaris.org? Searching for it just finds this thread.


[zfs-discuss] Re: zpool import takes too long with large numbers of file systems

2006-10-27 Thread Chris Gerhard
Thank you, Eric and Doug.

Is there any more information about the sharemgr project out on
opensolaris.org? Searching for it just finds this thread.
 
 
[zfs-discuss] Re: zpool import takes too long with large numbers of file systems

2006-10-26 Thread Douglas R. McCallum
>> If you share the file systems the time increases even further but as I
>> understand it that issue is being worked:
>>
>> [EMAIL PROTECTED] # time zpool import zpool1
>>
>> real 7h6m28.62s
>> user 14m55.28s
>> sys 5h58m13.24s
>> [EMAIL PROTECTED] #
>
> Yes, this is a limitation of the antiquated NFS share subsystem. This is being
> worked on as part of the sharemgr work.

The initial putback of sharemgr won't help this much, but we understand the
issues. A good part of the time is in the popen() used to run the share
command. The plan is to use the sharemgr interfaces to avoid the two forks and
two execs for every share being started.

Doug
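
A rough way to see the per-share process-creation cost described above is to
time a loop of plain share invocations (the count of 500 is arbitrary; a
popen() per share pays roughly double this, since it forks a shell which then
execs the command):

time ksh -c 'i=0; while (( i < 500 )); do /usr/sbin/share > /dev/null; (( i = i + 1 )); done'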
 
 