Re: [zfs-discuss] zpool import hangs indefinitely (retry post in parts; too long?)

2010-06-28 Thread Geoff Shipman

Andrew,

Looks like the zpool is telling you the devices are still doing work of
some kind, or that there are locks still held.


The errors are listed in the Intro(2) man page; error number 16 is
EBUSY:



     16 EBUSY    Device busy

         An attempt was made to mount a device that was already
         mounted, or an attempt was made to unmount a device on
         which there is an active file (open file, current
         directory, mounted-on file, active text segment). It will
         also occur if an attempt is made to enable accounting when
         it is already enabled. The device or resource is currently
         unavailable. EBUSY is also used by mutexes, semaphores,
         condition variables, and r/w locks, to indicate that a
         lock is held, and by the processor control function
         P_ONLINE.
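If you want to double-check the number-to-symbol mapping on your own
system, the errno header is one place to look (a hedged example; this
assumes the stock header location on Solaris/OpenSolaris, and the exact
comment text may differ):

   $ grep -w 16 /usr/include/sys/errno.h
   #define EBUSY   16      /* Mount device busy */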


On 06/28/10 01:50 PM, Andrew Jones wrote:

Just re-ran 'zdb -e tank' to confirm the CSV1 volume is still exhibiting error 
16:


Could not open tank/CSV1, error 16


Considering that my attempt to delete the CSV1 volume led to the failure
in the first place, I have to think that if I can either 1) complete the
deletion of this volume, 2) roll back to a transaction prior to the
deletion based on logging, or 3) repair whatever corruption this partial
deletion has caused, then I will be able to import the pool.

What does 'error 16' mean in the zdb output? Any suggestions?

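One avenue for options 1 and 2, assuming the build in use is recent
enough to have recovery-mode import (a hedged sketch, not a guaranteed
fix for this pool):

   # Dry run first: -n reports whether discarding the last few
   # transactions would make the pool importable, without doing it
   zpool import -nF tank

   # If the dry run looks sane, do the real recovery import
   zpool import -F tank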

--
Geoff Shipman | Senior Technical Support Engineer
Phone: +1 303 464 4710
Oracle Global Customer Services
500 Eldorado Blvd. UBRM-04 | Broomfield, CO 80021
Email: geoff.ship...@sun.com | Hours: 9am-5pm MT, Monday-Friday

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Any way to fix ZFS "sparse file bug" #6792701

2010-03-03 Thread Geoff Shipman
Per Francois's response of 11:02 MT: yes, adding the patch will free the
space on the file system.

From: Francois Napoleoni
To: Geoff Shipman
Cc: Edward Ned Harvey, zfs-discuss@opensolaris.org, 'Robert Loper'
Subject: Re: [zfs-discuss] Any way to fix ZFS "sparse file bug" #6792701
Date: Wed, 03 Mar 2010 19:02:13 +0100

Yes, it will ...

But this can be a good time to initiate/justify that useful backup which
we never had time to do before :) .
F.

> Geoff Shipman wrote:
>> Right, it would get rid of the CR, but does it free the disk space
>> from an event that occurred prior to patching?


On Wed, 2010-03-03 at 11:15, Robert Loper wrote:
> So the question remains... if I install patch 14144{4,5}-09 (via a LU
> to Solaris 10 upd 8) will that allow the filesystem (not an rpool) to
> free up the space?
>
>  - Robert
>
> On Wed, Mar 3, 2010 at 12:08 PM, Geoff Shipman wrote:
> > An IDR is an interim fix produced during the initial development of
> > a fix.  The patches noted are the official fixes.
> >
> > On Wed, 2010-03-03 at 11:03, Robert Loper wrote:
> > > Can you clarify what this "IDR" is?  I do not have Sun Support on
> > > this specific server.
> > >
> > >  - Robert Loper
> > >
> > > On Wed, Mar 3, 2010 at 11:51 AM, Francois Napoleoni wrote:
> > > > From my experience with customers hitting this bug, installing
> > > > the now obsolete IDR and rebooting was enough to get rid of this
> > > > sparse file bug.
> > > >
> > > > F.
> > > >
> > > > Geoff Shipman wrote:
> > > > > Solaris 10 Update 8 has the fix for 6792701 included.  This is
> > > > > part of kernel patches 141444-09 (sparc), 141445-09 (x86).
> > > > > OpenSolaris build 118 or later contains the fix, so it is in
> > > > > the development builds.
> > > > >
> > > > > This avoids future problems with the CR, but if you are
> > > > > currently affected by the problem the fix doesn't clear the
> > > > > troubles.
> > > > >
> > > > > I believe a backup of the data, then a destroy of the file
> > > > > system, then recreating it and restoring, is the method to
> > > > > clear the space.  With the later kernel the problem is avoided
> > > > > in the future.
> > > > >
> > > > > Geoff
> > > > >
> > > > > On Wed, 2010-03-03 at 10:14, Francois Napoleoni wrote:
> > > > > > If you have a valid Solaris Support contract you can ask for
> > > > > > the corresponding IDR to fix this issue.
> > > > > >
> > > > > > (Hi to Richard E. ... who must be boiling right now :) )
> > > > > >
> > > > > > F.
> > > > > >
> > > > > > Edward Ned Harvey wrote:
> > > > > > > I don't know the answer to your question, but I am running
> > > > > > > the same version of OS you are, and this bug could affect
> > > > > > > us.  Do you have any link to any documentation about this
> > > > > > > bug?  I'd like to forward something to inform the other
> > > > > > > admins at work.
> > > > > > >
> > > > > > > *From:* zfs-discuss-boun...@opensolaris.org
> > > > > > > [mailto:zfs-discuss-boun...@openso
Re: [zfs-discuss] Any way to fix ZFS "sparse file bug" #6792701

2010-03-03 Thread Geoff Shipman
An IDR is an interim fix produced during the initial development of a
fix.  The patches noted are the official fixes.

On Wed, 2010-03-03 at 11:03, Robert Loper wrote:
> Can you clarify what this "IDR" is?  I do not have Sun Support on this
> specific server.
>
>  - Robert Loper
>
> On Wed, Mar 3, 2010 at 11:51 AM, Francois Napoleoni wrote:
> > From my experience with customers hitting this bug, installing the
> > now obsolete IDR and rebooting was enough to get rid of this sparse
> > file bug.
> >
> > F.
> >
> > Geoff Shipman wrote:
> > > Solaris 10 Update 8 has the fix for 6792701 included.  This is
> > > part of kernel patches 141444-09 (sparc), 141445-09 (x86).
> > > OpenSolaris build 118 or later contains the fix, so it is in the
> > > development builds.
> > >
> > > This avoids future problems with the CR, but if you are currently
> > > affected by the problem the fix doesn't clear the troubles.
> > >
> > > I believe a backup of the data, then a destroy of the file system,
> > > then recreating it and restoring, is the method to clear the
> > > space.  With the later kernel the problem is avoided in the
> > > future.
> > >
> > > Geoff
> > >
> > > On Wed, 2010-03-03 at 10:14, Francois Napoleoni wrote:
> > > > If you have a valid Solaris Support contract you can ask for the
> > > > corresponding IDR to fix this issue.
> > > >
> > > > (Hi to Richard E. ... who must be boiling right now :) )
> > > >
> > > > F.
> > > >
> > > > Edward Ned Harvey wrote:
> > > > > I don't know the answer to your question, but I am running the
> > > > > same version of OS you are, and this bug could affect us.  Do
> > > > > you have any link to any documentation about this bug?  I'd
> > > > > like to forward something to inform the other admins at work.
> > > > >
> > > > > *From:* zfs-discuss-boun...@opensolaris.org
> > > > > [mailto:zfs-discuss-boun...@opensolaris.org] *On Behalf Of*
> > > > > Robert Loper
> > > > > *Sent:* Tuesday, March 02, 2010 12:09 PM
> > > > > *To:* zfs-discuss@opensolaris.org
> > > > > *Subject:* [zfs-discuss] Any way to fix ZFS "sparse file bug"
> > > > > #6792701
> > > > >
> > > > > I have a Solaris x86 server running update 6 (Solaris 10 10/08
> > > > > s10x_u6wos_07b X86).  I recently hit this "sparse file bug"
> > > > > when I deleted a 512GB sparse file from a 1.2TB filesystem and
> > > > > the space was never freed up.  What I am asking is would there
> > > > > be any way to recover the space in the filesystem without
> > > > > having to destroy and recreate it?  I am assuming before
> > > > > trying anything I would need to update the server to U8.
> > > > >
> > > > > Thanks in advance...
> > > > >
> > > > > --
> > > > > Robert Loper
> > > > > rlo...@gmail.com <mailto:rlo...@gmail.com>

Re: [zfs-discuss] Any way to fix ZFS "sparse file bug" #6792701

2010-03-03 Thread Geoff Shipman
Right, it would get rid of the CR, but does it free the disk space from
an event that occurred prior to patching?


On Wed, 2010-03-03 at 10:51, Francois Napoleoni wrote:
>  From my experience with customers hitting this bug, installing the
> now obsolete IDR and rebooting was enough to get rid of this sparse
> file bug.
> 
> F.
> 
> Geoff Shipman wrote:
> > Solaris 10 Update 8 has the fix for 6792701 included.  This is part
> > of kernel patches 141444-09 (sparc), 141445-09 (x86).  OpenSolaris
> > build 118 or later contains the fix, so it is in the development
> > builds.
> >
> > This avoids future problems with the CR, but if you are currently
> > affected by the problem the fix doesn't clear the troubles.
> >
> > I believe a backup of the data, then a destroy of the file system,
> > then recreating it and restoring, is the method to clear the space.
> > With the later kernel the problem is avoided in the future.
> >
> > Geoff
> > 
> > 
> > On Wed, 2010-03-03 at 10:14, Francois Napoleoni wrote:
> >> If you have a valid Solaris Support contract you can ask for the 
> >> corresponding IDR to fix this issue.
> >>
> >> (Hi to Richard E. ... who must be boiling right now :) )
> >>
> >> F.
> >>
> >> Edward Ned Harvey wrote:
> >>> I don't know the answer to your question, but I am running the same
> >>> version of OS you are, and this bug could affect us.  Do you have
> >>> any link to any documentation about this bug?  I'd like to forward
> >>> something to inform the other admins at work.
> >>>
> >>> *From:* zfs-discuss-boun...@opensolaris.org
> >>> [mailto:zfs-discuss-boun...@opensolaris.org] *On Behalf Of* Robert
> >>> Loper
> >>> *Sent:* Tuesday, March 02, 2010 12:09 PM
> >>> *To:* zfs-discuss@opensolaris.org
> >>> *Subject:* [zfs-discuss] Any way to fix ZFS "sparse file bug"
> >>> #6792701
> >>>  
> >>>
> >>> I have a Solaris x86 server running update 6 (Solaris 10 10/08
> >>> s10x_u6wos_07b X86).  I recently hit this "sparse file bug" when I
> >>> deleted a 512GB sparse file from a 1.2TB filesystem and the space
> >>> was never freed up.  What I am asking is would there be any way to
> >>> recover the space in the filesystem without having to destroy and
> >>> recreate it?  I am assuming before trying anything I would need to
> >>> update the server to U8.
> >>>
> >>> Thanks in advance...
> >>>
> >>> -- 
> >>> Robert Loper
> >>> rlo...@gmail.com <mailto:rlo...@gmail.com>
> >>>
> >>>
> >>>
> >>
> >> -- 
> >> Francois Napoleoni / Sun Support Engineer
> >> mail  : francois.napole...@sun.com
> >> phone : +33 (0)1 3403 1707
> >> fax   : +33 (0)1 3403 1114
> >>
> 
> -- 
> Francois Napoleoni / Sun Support Engineer
> mail  : francois.napole...@sun.com
> phone : +33 (0)1 3403 1707
> fax   : +33 (0)1 3403 1114
-- 
Geoff Shipman - (303) 223-6266
Systems Technology Service Center - Operating System
Solaris and Network Technology Domain
Americas Systems Technology Service Center

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Any way to fix ZFS "sparse file bug" #6792701

2010-03-03 Thread Geoff Shipman
Solaris 10 Update 8 has the fix for 6792701 included.  This is part of
kernel patches 141444-09 (sparc), 141445-09 (x86).  OpenSolaris build
118 or later contains the fix, so it is in the development builds.

This avoids future problems with the CR, but if you are currently
affected by the problem the fix doesn't clear the troubles.

I believe a backup of the data, then a destroy of the file system, then
recreating it and restoring, is the method to clear the space.  With the
later kernel the problem is avoided in the future.
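
A minimal sketch of that backup/destroy/recreate cycle, assuming a
hypothetical dataset tank/data mounted at /tank/data and enough space
elsewhere for the archive (a file-level backup is used here so the
leaked blocks are not carried along):

   cd /tank/data && tar cf /backup/data.tar .   # back up the files
   cd / && zfs destroy tank/data                # leave the fs, then destroy it
   zfs create tank/data                         # recreate it on the patched kernel
   cd /tank/data && tar xf /backup/data.tar     # restore the data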

Geoff 


On Wed, 2010-03-03 at 10:14, Francois Napoleoni wrote:
> If you have a valid Solaris Support contract you can ask for the 
> corresponding IDR to fix this issue.
> 
> (Hi to Richard E. ... who must be boiling right now :) )
> 
> F.
> 
> Edward Ned Harvey wrote:
> > I don't know the answer to your question, but I am running the same
> > version of OS you are, and this bug could affect us.  Do you have any
> > link to any documentation about this bug?  I'd like to forward
> > something to inform the other admins at work.
> >
> > *From:* zfs-discuss-boun...@opensolaris.org
> > [mailto:zfs-discuss-boun...@opensolaris.org] *On Behalf Of* Robert
> > Loper
> > *Sent:* Tuesday, March 02, 2010 12:09 PM
> > *To:* zfs-discuss@opensolaris.org
> > *Subject:* [zfs-discuss] Any way to fix ZFS "sparse file bug"
> > #6792701
> > 
> >  
> > 
> > I have a Solaris x86 server running update 6 (Solaris 10 10/08
> > s10x_u6wos_07b X86).  I recently hit this "sparse file bug" when I
> > deleted a 512GB sparse file from a 1.2TB filesystem and the space was
> > never freed up.  What I am asking is would there be any way to
> > recover the space in the filesystem without having to destroy and
> > recreate it?  I am assuming before trying anything I would need to
> > update the server to U8.
> > 
> > Thanks in advance...
> > 
> > -- 
> > Robert Loper
> > rlo...@gmail.com <mailto:rlo...@gmail.com>
> > 
> > 
> 
> -- 
> Francois Napoleoni / Sun Support Engineer
> mail  : francois.napole...@sun.com
> phone : +33 (0)1 3403 1707
> fax   : +33 (0)1 3403 1114
> 
-- 
Geoff Shipman - (303) 223-6266
Systems Technology Service Center - Operating System
Solaris and Network Technology Domain
Americas Systems Technology Service Center

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Importing zpool after one side of mirror was destroyed

2009-04-07 Thread Geoff Shipman
>version=4
>name='local'
>state=0
>txg=4
>pool_guid=x
>top_guid=xx
>guid=x
>vdev_tree
>  type='mirror'
>  id=0
>  guid=
>  metaslab_array=14
>  metaslab_shift=31
>  ashift=9
>  asize=
>  children[0]
>  type='disk'
>  id=0
>  guid=
>  path='/dev/dsk/c1d0s6'
>  devid='id1,c...@xxx/g'
>  whole_disk=0
>  children[1]
>  type='disk'
>  id=1
>  guid=
>  path='/dev/dsk/c2d0s6'
>  devid='id1,c...@xxx/g'
>  whole_disk=0
> 
> <  --- The remaining 3 labels on this disk are also valid --- >
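
(For reference: per-device label output like the above is typically
produced with zdb's label option; a hedged example using the device
path shown in the labels:)

   zdb -l /dev/dsk/c1d0s6   # prints all four vdev labels on the slice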
> 
> -- 
> Christopher West, OS Administration
> Email: christopher.w...@sun.com
> Phone: 1-800-USA-4SUN
> My Working Hours : 8am-5pm MT, Monday thru Friday
> My Manager : Michael Ventimiglia 
> 
> 
> 
-- 
Geoff Shipman - (303) 272-9955
Systems Technology Service Center - Operating System
Solaris and Network Technology Domain
Americas Systems Technology Service Center

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Error ZFS-8000-9P

2009-04-03 Thread Geoff Shipman
Joe,

I just checked the referenced document; it provides steps, via an
example, for replacing the failed/faulted device.

I found in the ZFS Administration guide the URL below on repairing a
device in a zpool.

http://docs.sun.com/app/docs/doc/819-5461/gbbvf?l=en&a=view

The above URL was linked from the Chapter 11 portion of the ZFS
Administration guide on troubleshooting problems.

http://docs.sun.com/app/docs/doc/819-5461/gavwg?l=en&a=view

The link was in the paragraph below.

Physically Reattaching the Device
Exactly how a missing device is reattached depends on the device in
question. If the device is a network-attached drive, connectivity should
be restored. If the device is a USB or other removable media, it should
be reattached to the system. If the device is a local disk, a controller
might have failed such that the device is no longer visible to the
system. In this case, the controller should be replaced at which point
the disks will again be available. Other pathologies can exist and
depend on the type of hardware and its configuration. If a drive fails
and it is no longer visible to the system (an unlikely event), the
device should be treated as a damaged device. Follow the procedures
outlined in Repairing a Damaged Device.

I do agree that when we (Sun) point people to additional steps, any
externally available documents should be referenced before an
internal-only link.
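
To the question of confirming whether the disk is dying, a couple of
hedged suggestions for pulling error detail from the OS (c2t3d0 is the
device flagged in the zpool status output below):

   iostat -En c2t3d0            # per-device soft/hard/transport error counters
   fmdump -eV | grep -i c2t3d0  # FMA error reports that mention the device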

Geoff


On Fri, 2009-04-03 at 11:45, Joe S wrote:
> On Fri, Apr 3, 2009 at 10:41 AM, Joe S  wrote:
> > Today, I noticed this:
> >
> > [...@coruscant$] zpool status
> >  pool: tank
> >  state: ONLINE
> > status: One or more devices has experienced an unrecoverable error.
> An
> >attempt was made to correct the error.  Applications are
> unaffected.
> > action: Determine if the device needs to be replaced, and clear the
> errors
> >using 'zpool clear' or replace the device with 'zpool
> replace'.
> >   see: http://www.sun.com/msg/ZFS-8000-9P
> >  scrub: resilver completed after 0h0m with 0 errors on Sat Apr  4
> 08:31:49 2009
> > config:
> >
> >NAMESTATE READ WRITE CKSUM
> >tankONLINE   0 0 0
> >  raidz1ONLINE   0 0 0
> >c2t0d0  ONLINE   0 0 0
> >c2t1d0  ONLINE   0 0 0
> >c2t4d0  ONLINE   0 0 0
> >  raidz1ONLINE   0 0 0
> >c2t2d0  ONLINE   0 0 0
> >c2t3d0  ONLINE   0 0 4  36K resilvered
> >c2t5d0  ONLINE   0 0 0
> >
> > errors: No known data errors
> >
> >
> > I think this means a disk is failing and that ZFS did a good job of
> > keeping everything sane.
> >
> > According to http://www.sun.com/msg/ZFS-8000-9P:
> >
> > The Message ID: ZFS-8000-9P indicates a device has exceeded the
> > acceptable limit of errors allowed by the system. See document
> 203768
> > for additional information.
> >
> > Unfortunately, I'm not *authorized* to see that document.
> >
> >
> > Question: I'm assuming the disk is dying. How can I get more
> > information from the OS to confirm?
> >
> > Rant: Sun, you suck for telling me to read a document for additional
> > information, and then denying me access.
> >
> 
> Running Nevada 105.
> 
> Incidentally, I tried upgrading to Nevada 110, but the OS wouldn't
> finish booting. It stopped at the part where it was trying to mount my
> ZFS filesystems. I booted back into 105 and it boots, but then as I
> ran a zpool status, I noticed that message.
-- 
Geoff Shipman - (303) 272-9955
Systems Technology Service Center - Operating System
Solaris and Network Technology Domain
Americas Systems Technology Service Center

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zfs using java

2009-03-30 Thread Geoff Shipman
Tim,

You are correct; it does sound like the Java Web Console ZFS
Administration tool.  The patch IDs below look to fix the two issues I
am aware of: one was the registration of the tool on the Web Console
page, the other a Java JAR file bug that displayed only a white screen
when the ZFS Administration tool was accessed.

Install the patch below for your architecture and restart the Web
Console, as Note 1 of the patch README describes, to get the
functionality back.

Geoff 

SunOS 5.10: ZFS Administration Java Web Console Patch
Document ID: 141104-01
Mar 25, 2009

SunOS 5.10_x86: ZFS Administration Java Web Console Patch
Document ID: 141105-01
Mar 25, 2009
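
A hedged sketch of applying the x86 patch and restarting the console
(the patch directory path is an example; the restart command assumes
the standard Solaris 10 Java Web Console tooling):

   patchadd /var/spool/patch/141105-01   # apply the unpacked patch
   /usr/sbin/smcwebserver restart        # restart the Java Web Console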

On Mon, 2009-03-30 at 16:35, Tim wrote:
> 
> On Mon, Mar 30, 2009 at 4:58 PM, Blake  wrote:
> Can you list the exact command you used to launch the control
> panel?
> I'm not sure what tool you are referring to.
> 
> 
> 
> 2009/3/25 Howard Huntley :
> > I once installed ZFS on my home Sun Blade 100 and it worked
> fine on the sun
> > blade 100 running solaris 10. I reinstalled Solaris 10 09
> version and
> > created a zpool which is not visible using the java control
> panel. When I
> > attempt to run the Java control panel to manage the zfs
> system I receive an
> > error message stating "!Launch Error, No application is
> registered with this
> > sun Java or I have no rights to use any applications that
> are registered,
> > see my sys admin." Can any one tell me how to get this
> straightened out. I
> > have been fooling around with it for some time now.
> >
> > Is any one is Jacksonville, Florida??
> 
> 
> ev0l top poster!  I'm assuming he means the web interface, but I could
> be crazy.
> 
> --Tim 
> 
> 
> 
-- 
Geoff Shipman - (303) 272-9955
Systems Technology Service Center - Operating System
Solaris and Network Technology Domain
Americas Systems Technology Service Center

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Questions on timing for root pools and vfstab entries

2009-03-25 Thread Geoff Shipman
Hello All,

I have a ZFS root system with /export and /export/home ZFS file systems
from the root pool.  I have additional non-ZFS mounts added to
/etc/vfstab for /export/install and /export/install-Sol10.  Upon boot I
get an error from the SMF service svc:/system/filesystem/local:default
that there is no mount point available for /export/install and
/export/install-Sol10.

This looks like a race condition, as I can issue mountall after I log
in and everything works.  So the root pool's mount of /export takes
longer than the filesystem/local SMF service takes to try to mount the
/etc/vfstab entries.

I am able to work around this by moving all the /export mounts under
/etc/vfstab control (legacy mode for ZFS) as common ground, but I wanted
to know if there is a bug or another modification that could be made to
keep the ZFS auto-mounting and management while using /etc/vfstab for
the other non-ZFS file systems?  A sketch of the legacy-mode workaround
follows.
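
A sketch of the workaround described above (rpool/export is assumed to
be the dataset behind /export; the ufs device names are hypothetical):

   # Put the dataset under /etc/vfstab control
   zfs set mountpoint=legacy rpool/export

   # /etc/vfstab: mount /export first, then the non-ZFS mounts under it
   rpool/export       -                   /export          zfs  -  yes  -
   /dev/dsk/c0t0d0s7  /dev/rdsk/c0t0d0s7  /export/install  ufs  2  yes  -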

Thanks

Geoff

Geoff Shipman - (303) 272-9955
Systems Technology Service Center - Operating System
Solaris and Network Technology Domain
Americas Systems Technology Service Center

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss