Re: [zfs-discuss] Re: [Fwd: [zones-discuss] Zone boot problems after installing patches]

2006-08-04 Thread Enda o'Connor - Sun Microsystems Ireland - Software Engineer




Hi
I logged CR 6457216 to track this for now.


Enda

Enda o'Connor - Sun Microsystems Ireland - Software Engineer wrote:

 Enda o'Connor - Sun Microsystems Ireland - Software Engineer wrote:

  Hi
  I guess the problem is that David is using smpatch (our automated
  patching system), so in theory he is up to date on his patches.
  (He has since removed 122660-02, 122658-02 and 122640-05.)

  So when I install the following onto a system (SPARC Solaris 10 1/06,
  not FCS as I first typed) with two zones already running:
 
  
  119254-25 (patchutilities patch)
  119578-26
  118822-30
  118833-18
  122650-02
  122640-05
  And reboot, I too have the same issue: there is no /dev/zfs in my local
  zones.
  
  # zonename
  global
  # 
  # cat /etc/release
     Solaris 10 1/06 s10s_u1wos_19a SPARC
     
  # ls /var/sadm/patch
  118822-30  119254-26  120900-04  122640-05
  118833-18  119578-26  121133-02  122650-02
  # uptime
    5:48pm  up 2 min(s),  1 user,  load average: 0.58, 0.29, 0.11
  # ls /export/zones/sparse-1/dev/zfs
  /export/zones/sparse-1/dev/zfs: No such file or directory
  # zlogin sparse-1 ls /dev/zfs
  /dev/zfs: No such file or directory
  # 
  I rebooted the zone and then the system, touching /reconfigure, all to
  no avail. I then added the rest of the patches you suggested, rebooted
  my zones, and then I had /dev/zfs. Strange.
  
  But David had all the patches added and still did not get /dev/zfs in
  the non-global zones.
  Enda
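
The symptom above (a zone root missing /dev/zfs) can be re-checked mechanically. A hedged sketch, not part of the thread; check_zfs_dev is a hypothetical helper, and the /export/zones zonepath layout is the one from the session above:

```shell
# Hypothetical helper: report whether a zone root has the zfs device node.
check_zfs_dev() {
  # $1 = zone root directory, e.g. /export/zones/sparse-1
  if [ -e "$1/dev/zfs" ]; then
    echo "present"
  else
    echo "missing"
  fi
}

# On the affected system this would print "missing" for the sparse zones:
# check_zfs_dev /export/zones/sparse-1
```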
  
  
  George Wilson wrote:

   Apologies for the internal URL; I'm including the list of patches for
   everyone's benefit:
  sparc Patches 
      * ZFS Patches 
    o 118833-17 SunOS 5.10: kernel patch 
    o 118925-02 SunOS 5.10: unistd header file patch 
    o 119578-20 SunOS 5.10: FMA Patch 
    o 119982-05 SunOS 5.10: ufsboot patch 
    o 120986-04 SunOS 5.10: mkfs and newfs patch 
    o 122172-06 SunOS 5.10: swap swapadd isaexec patch 
    o 122174-03 SunOS 5.10: dumpadm patch 
    o 122637-01 SunOS 5.10: zonename patch 
    o 122640-05 SunOS 5.10: zfs genesis patch 
    o 122644-01 SunOS 5.10: zfs header file patch 
    o 122646-01 SunOS 5.10: zlogin patch 
    o 122650-02 SunOS 5.10: zfs tools patch 
    o 122652-03 SunOS 5.10: zfs utilities patch 
    o 122658-02 SunOS 5.10: zonecfg patch 
    o 122660-03 SunOS 5.10: zoneadm zoneadmd patch 
    o 122662-02 SunOS 5.10: libzonecfg patch 
      * Man Pages 
    o 119246-15 SunOS 5.10: Manual Page updates for Solaris 10 
      * Other Patches 
    o 119986-03 SunOS 5.10: clri patch 
    o 123358-01 SunOS 5.10: jumpstart and live upgrade compliance 
    o 121430-11 SunOS 5.8 5.9 5.10: Live Upgrade Patch 
   
  i386 Patches 
      * ZFS Patches 
    o 118344-11 SunOS 5.10_x86: Fault Manager Patch 
    o 118855-15 SunOS 5.10_x86: kernel patch 
    o 118919-16 SunOS 5.10_x86: Solaris Crypto Framework patch 
    o 120987-04 SunOS 5.10_x86: mkfs, newfs, other ufs utils patch 
    o 122173-04 SunOS 5.10_x86: swap swapadd patch 
    o 122175-03 SunOS 5.10_x86: dumpadm patch 
    o 122638-01 SunOS 5.10_x86: zonename patch 
    o 122641-06 SunOS 5.10_x86: zfs genesis patch 
    o 122647-03 SunOS 5.10_x86: zlogin patch 
    o 122653-03 SunOS 5.10_x86: utilities patch 
    o 122659-03 SunOS 5.10_x86: zonecfg patch 
    o 122661-02 SunOS 5.10_x86: zoneadm patch 
    o 122663-04 SunOS 5.10_x86: libzonecfg patch 
    o 122665-02 SunOS 5.10_x86: rnode.h/systm.h/zone.h header file 
      * Man Pages 
    o 119247-15 SunOS 5.10_x86: Manual Page updates for Solaris 10 
      * Other Patches 
    o 118997-03 SunOS 5.10_x86: format patch 
    o 119987-03 SunOS 5.10_x86: clri patch 
    o 122655-05 SunOS 5.10_x86: jumpstart and live upgrade compliance patch 
    o 121431-11 SunOS 5.8_x86 5.9_x86 5.10_x86: Live Upgrade Patch 
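
The patch lists above lend themselves to a mechanical check against what /var/sadm/patch reports. A hedged sketch (missing_patches is a hypothetical helper, not a Sun tool) that compares required patch IDs against installed entries, ignoring revisions:

```shell
# Hypothetical helper: report required patch IDs absent from an installed set.
missing_patches() {
  # $1 = space-separated required patch IDs (no revision)
  # $2 = space-separated installed entries such as 122640-05
  req=$1
  # strip the -NN revision suffix from each installed entry
  inst=$(printf '%s\n' $2 | cut -d- -f1)
  for p in $req; do
    printf '%s\n' "$inst" | grep -qx "$p" || echo "missing: $p"
  done
}

# Against the global-zone listing shown earlier in the thread:
missing_patches "122640 122650 122660" \
  "118822-30 118833-18 119254-26 119578-26 120900-04 121133-02 122640-05 122650-02"
# -> missing: 122660
```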

   
   
  Thanks, 
  George 
   
   
  George Wilson wrote: 
 
  Dave, 
   
  I'm copying the zfs-discuss alias on this as well... 
   
  It's possible that not all the necessary patches have been installed, or
  they may be hitting CR# 6428258. If you reboot the zone, does it continue
  to end up in maintenance mode? Also, do you know if the necessary
  ZFS/Zones patches have been updated?

  Take a look at our webpage, which includes the patch list required for
  Solaris 10:
   
  http://rpe.sfbay/bin/view/Tech/ZFS 
  
   
  Thanks, 
  George 
   
  Mahesh Siddheshwar wrote: 
 
 
   
 Original Message  
Subject: [zones-discuss] Zone boot problems after installing patches 
Date: Wed, 02 Aug 2006 13:47:46 -0400 
From: Dave Bevans <[EMAIL PROTECTED]> 
To: zones-discuss@opensolaris.org, [EMAIL PROTECTED],  [EMAIL PROTECTED] 
  
 
 
 
Hi, 
 
I  have a customer with the following problem. 
 
He has a V440 running Solaris 10 1/06 with zones. In the case notes he says
that he installed a couple of Sol 10 patches and now has problems booting
his zones. After doing some checking, he found that it appears to be related
to a couple of ZFS patches (122650 and 122640). I found a bug (6271309 /
lack of zvol breaks all ZFS commands), but I am not sure if it applies to
this situation. Any ideas on this?

Re: [zfs-discuss] query re share and zfs

2006-07-05 Thread Enda o'Connor - Sun Microsystems Ireland - Software Engineer




Hi Eric 
Thanks for the update.
Basically I am just trying to have tank mounted onto /export, and I'm not
concerned about what is already in /export. I'm just doing some zfs testing
as part of an ongoing patch release, trying to script a quick and dirty way
of making /export a zfs pool without having to reboot. So personally I am
not too concerned about the warning about the non-empty directory; I know
what it is, and I forced the mount to get tank onto /export.
As part of the script I destroy the pool as well; to do that I currently
get the mountpoints and then zfs umount starting at the parent, which does
the trick as you explained.

So at this point I have just sharenfs=on for tank, so that home and home1
will be shared by default.
I wasn't aware that zfs create was also doing a mount, which was being
covered over by my explicit mount -O after the zfs create. I've reversed
the order so that the mount -O goes in first, before the subsequent zfs
creates.

thanks
Enda




Eric Schrock wrote:

  Yes, this is a known bug, or rather a clever variation of a known bug.
I don't have the ID handy, but the problem is that 'zfs unmount -a' (and
'zpool destroy') both try to unmount filesystems in DSL order, rather
than consulting /etc/mnttab.  It should just unmount filesystems
according to /etc/mnttab.  Otherwise, an I/O error in the DSL can render
a pool undestroyable.
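
Eric's point, that teardown should follow /etc/mnttab rather than DSL order, amounts to unmounting the deepest mountpoints first. A hedged sketch of that ordering (unmount_order is a hypothetical helper; the real tooling would read mnttab directly):

```shell
# Hypothetical helper: order mountpoints deepest-first, the safe unmount order.
unmount_order() {
  # reads mountpoints on stdin, one per line
  awk '{ print gsub("/", "/"), $0 }' |  # prefix each path with its slash count
    sort -k1,1nr |                      # deepest paths first
    cut -d' ' -f2-
}

# For the overlaid layout from the session below (/export on top of its
# children), /export comes out last, i.e. the parent is unmounted last:
printf '%s\n' /export /export/home /export/home1 | unmount_order
```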

Now in this case, you have mounted a parent container _on top of_ two of
its children so that they are no longer visible in the namespace.  I'm
not sure how the tools would be able to deal with this situation.  The
problem is that /export/home and /export/home1 appear mounted, but they
have been obscured by later mounting /export on top of them (!).  I will
have to play around with this scenario to see if there's a way for the
utility to know that 'tank' needs to be unmounted _before_ 'tank/home'
or 'tank/home1'.  Casual investigation leads me to believe this is not
possible.

Despite its occasional annoyances, the warning about the non-empty directory
is there for a reason. In this case you mounted 'tank' on top of existing
filesystems, totally obscuring them from view. I can try to make the
utilities behave a little more sanely in this circumstance, but it doesn't
change the fact that '/export/home' was unavailable because of the
'zfs mount -O'.

- Eric

On Tue, Jul 04, 2006 at 04:10:34PM +0100, Enda o'Connor - Sun Microsystems Ireland - Software Engineer wrote:
  
  
Hi
I was trying to overlay a pool onto an existing mount



# cat /etc/release
  Solaris 10 6/06 s10s_u2wos_09a SPARC
# df -k /export
Filesystem            kbytes    used    avail capacity  Mounted on
/dev/dsk/c1t0d0s3   20174761 3329445 16643569    17%    /export
# share
#
# zpool create -f tank raidz c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
# zfs create tank/home
# zfs create tank/home1
# zfs set mountpoint=/export tank
cannot mount '/export': directory is not empty
use legacy mountpoint to allow this behavior, or use the -O flag
# zfs set sharenfs=on tank/home
# zfs set sharenfs=on tank/home1
# share
-   /export/home   rw   ""
-   /export/home1   rw   ""
#


Now I ran the following to force the mount

# df -k /export
Filesystem            kbytes    used    avail capacity  Mounted on
/dev/dsk/c1t0d0s3   20174761 3329445 16643569    17%    /export
# zfs mount -O tank
# df -k /export
Filesystem            kbytes    used    avail capacity  Mounted on
tank               701890560      53 701890286     1%    /export
#

Then further down the line I tried
# zpool destroy tank
cannot unshare 'tank/home': /export/home: not shared
cannot unshare 'tank/home1': /export/home1: not shared
could not destroy 'tank': could not unmount datasets
#

I eventually got this to go with
# zfs umount tank/home
# zfs umount tank/home1
# zpool destroy -f tank
#

Is this normal, and if so why?


Enda




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

  
  
--
Eric Schrock, Solaris Kernel Development   http://blogs.sun.com/eschrock
  






[zfs-discuss] Re: query re share and zfs

2006-07-04 Thread Enda o'Connor - Sun Microsystems Ireland - Software Engineer

Slight typo

I had to run

# zfs umount tank
cannot unmount 'tank': not currently mounted
# zfs umount /export/home1
# zfs umount /export/home
#

in order to get zpool destroy to run


Enda





Re: [zfs-discuss] assertion failure when destroy zpool on tmpfs

2006-06-27 Thread Enda o'Connor - Sun Microsystems Ireland - Software Engineer




Hi
Looks like the same stack as 6413847, although that bug points more towards
hardware failure.

The stack below is from 5.11 snv_38, but the issue also seems to affect
update 2, as per the above bug.

Enda

Thomas Maier-Komor wrote:

  Hi,

my colleague is just testing ZFS and created a zpool which had a backing store file on a TMPFS filesystem. After deleting the file, everything still worked normally, but destroying the pool caused an assertion failure and a panic. I know this is neither a real-life scenario nor a good idea. The assertion failure occurred on Solaris 10 update 2.

Below is some mdb output, in case someone is interested in this.

BTW: great to have Solaris 10 update 2 with ZFS. I can't wait to deploy it.

Cheers,
Tom

  
  
> ::panicinfo
   cpu1
  thread  2a100ea7cc0
 message 
assertion failed: vdev_config_sync(rvd, txg) == 0, file: ../../common/fs/zfs/spa
.c, line: 2149
  tstate   4480001601
  g1  30037505c40
  g2   10
  g32
  g42
  g53
  g6   16
  g7  2a100ea7cc0
  o0  11eb1e8
  o1  2a100ea7928
  o2  306f5b0
  o3  30037505c50
  o4  3c7a000
  o5   15
  o6  2a100ea6ff1
  o7  105e560
  pc  104220c
 npc  1042210
   y   10 
  
  
> ::stack
  vpanic(11eb1e8, 13f01d8, 13f01f8, 865, 600026d4ef0, 60002793ac0)
assfail+0x7c(13f01d8, 13f01f8, 865, 183e000, 11eb000, 0)
spa_sync+0x190(60001f244c0, 3dd9, 600047f3500, 0, 2a100ea7cc4, 2a100ea7cbc)
txg_sync_thread+0x130(60001f9c580, 3dd9, 2a100ea7ab0, 60001f9c6a0, 60001f9c692, 
60001f9c690)
thread_start+4(60001f9c580, 0, 0, 0, 0, 0)
  
  
> ::status
  debugging crash dump vmcore.0 (64-bit) from ai
operating system: 5.11 snv_38 (sun4u)
panic message: 
assertion failed: vdev_config_sync(rvd, txg) == 0, file: ../../common/fs/zfs/spa
.c, line: 2149
dump content: kernel pages only
 
 
  






[zfs-discuss] query re availability of zfs in S10

2006-06-20 Thread Enda o'Connor - Sun Microsystems Ireland - Software Engineer

Hi
It is my understanding that zfs will be available to pre-S10 update 2
customers via patches, i.e. customers on FCS could install the necessary
zfs patches and thereby start using zfs.

But there seems to be confusion as to whether this is supported or not.
Some people say only zfs on update 2 and beyond is supported; others say
that those on pre-update 2 will also be supported via patches.

Could someone attempt to clarify this issue for me?

regards
Enda
