Re: [zfs-discuss] is this pool recoverable?
> Your original zpool status says that this pool was last accessed on
> another system, which I believe is what caused the pool to fail,
> particularly if it was accessed simultaneously from two systems.

The message "last accessed on another system" is the normal behavior if the pool is ungracefully offlined for some reason and you then boot back up on the same system. I learned that by using a pool on an external disk and accidentally knocking out the power cord of the external disk. The system hung. I power-cycled, couldn't boot normally, had to boot failsafe, and got the above message while trying to import.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] is this pool recoverable?
Patrick,

I'm happy that you were able to recover your pool.

Your original zpool status says that this pool was last accessed on another system, which I believe is what caused the pool to fail, particularly if it was accessed simultaneously from two systems. It is important to identify the cause of the original pool failure to prevent it from happening again. This rewind pool recovery is a last-ditch effort and might not recover all broken pools.

Thanks,

Cindy

On 04/02/10 12:32, Patrick Tiquet wrote:
> Thanks, that worked!! It needed "-Ff". The pool has been recovered with
> minimal loss of data.
Re: [zfs-discuss] is this pool recoverable?
Thanks, that worked!! It needed "-Ff". The pool has been recovered with minimal loss of data.
--
This message posted from opensolaris.org
Re: [zfs-discuss] is this pool recoverable?
On Fri, 2 Apr 2010, Patrick Tiquet wrote:
> I tried booting with b134 to attempt to recover the pool. I attempted
> with one disk of the mirror. Zpool tells me to use -F for import, which
> fails, but then tells me to use -f, which also fails and tells me to
> use -F again. Any thoughts?

It looks like it wants you to use both -f and -F at the same time. I don't see that you tried that. Good luck.

Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
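[Editor's note] The combined invocation Bob is suggesting would look something like the sketch below. The pool name "atomfs" comes from this thread; the DRY_RUN wrapper is my own addition so the commands can be previewed without root access or the faulted pool present.

```shell
# Sketch of the combined force (-f) + rewind (-F) import suggested above.
# DRY_RUN=1 (the default here) only prints the commands, since a real run
# needs root and the actual faulted pool.
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

# -f overrides the "pool was last accessed by another system" check.
# -F rewinds the pool to its last importable state; writes made after
# that point are discarded.
run zpool import -fF atomfs

# A scrub is strongly recommended after a rewind recovery.
run zpool scrub atomfs
```

With DRY_RUN=0, run it as root on the affected system instead of just printing.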
Re: [zfs-discuss] is this pool recoverable?
I tried booting with b134 to attempt to recover the pool. I attempted with one disk of the mirror. Zpool tells me to use -F for import, which fails, but then tells me to use -f, which also fails and tells me to use -F again. Any thoughts?

j...@opensolaris:~# zpool import
  pool: atomfs
    id: 1344695315736882
 state: FAULTED
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:

        atomfs      FAULTED  corrupted data
          mirror-0  FAULTED  corrupted data
            c4t5d0  ONLINE
            c9d0    UNAVAIL  cannot open

j...@opensolaris:~# zpool import -f
  pool: atomfs
    id: 1344695315736882
 state: FAULTED
status: The pool was last accessed by another system.
action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
   see: http://www.sun.com/msg/ZFS-8000-EY
config:

        atomfs      FAULTED  corrupted data
          mirror-0  FAULTED  corrupted data
            c4t5d0  ONLINE
            c9d0    UNAVAIL  cannot open

j...@opensolaris:~# zpool import -f 1344695315736882 newpool
cannot import 'atomfs' as 'newpool': one or more devices is currently unavailable
        Recovery is possible, but will result in some data loss.
        Returning the pool to its state as of March 12, 2010 09:08:29 AM PST
        should correct the problem.  Recovery can be attempted
        by executing 'zpool import -F atomfs'.  A scrub of the pool
        is strongly recommended after recovery.

j...@opensolaris:~# zpool import -F atomfs
cannot import 'atomfs': pool may be in use from other system, it was last
accessed by blue (hostid: 0x82aa00) on Fri Mar 12 09:08:29 2010
use '-f' to import anyway

j...@opensolaris:~# zpool status
no pools available

j...@opensolaris:~# zpool import -f 1344695315736882
cannot import 'atomfs': one or more devices is currently unavailable
        Recovery is possible, but will result in some data loss.
        Returning the pool to its state as of March 12, 2010 09:08:29 AM PST
        should correct the problem.  Recovery can be attempted
        by executing 'zpool import -F atomfs'.  A scrub of the pool
        is strongly recommended after recovery.
Re: [zfs-discuss] is this pool recoverable?
Thanks for the info. I'll try the live CD method when I have access to the system next week.
Re: [zfs-discuss] is this pool recoverable?
On Sun, Mar 21, 2010 at 12:32 AM, Miles Nordin wrote:
>> "sn" == Sriram Narayanan writes:
>
> sn> http://docs.sun.com/app/docs/doc/817-2271/ghbxs?a=view
>
> yeah, but he has no slog, and he says 'zpool clear' makes the system
> panic and reboot, so even from way over here that link looks useless.
>
> Patrick, maybe try a newer livecd from genunix.org like b130 or later
> and see if the panic is fixed so that you can import/clear/export the
> pool. The new livecds also have 'zpool import -F' for Fix Harder
> (see manpage first). Let us know what happens.

Yes, I realized that after I posted to the list, and I replied again asking him to use the OpenSolaris LiveCD. I just noticed that I replied directly rather than to the list.

-- Sriram
-
Belenix: www.belenix.org
Re: [zfs-discuss] is this pool recoverable?
> "sn" == Sriram Narayanan writes: sn> http://docs.sun.com/app/docs/doc/817-2271/ghbxs?a=view yeah, but he has no slog, and he says 'zpool clear' makes the system panic and reboot, so even from way over here that link looks useless. Patrick, maybe try a newer livecd from genunix.org like b130 or later and see if the panic is fixed so that you can import/clear/export the pool. The new livecd's also have 'zpool import -F' for Fix Harder (see manpage first). Let us know what happens. pgpT7dIOFPNUD.pgp Description: PGP signature ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Re: [zfs-discuss] is this pool recoverable?
On Sat, Mar 20, 2010 at 9:19 PM, Patrick Tiquet wrote:
> Also, I tried to run zpool clear, but the system crashes and reboots.

Please see if this link helps: http://docs.sun.com/app/docs/doc/817-2271/ghbxs?a=view

-- Sriram
-
Belenix: www.belenix.org
Re: [zfs-discuss] is this pool recoverable?
Also, I tried to run zpool clear, but the system crashes and reboots.
[zfs-discuss] is this pool recoverable?
This system is running stock 111b on an Intel Atom D945GCLF2 motherboard. The pool is two mirrored 1TB SATA disks. I noticed the system was locked up; after a reboot the pool status shows as follows:

  pool: atomfs
 state: FAULTED
status: An intent log record could not be read.
        Waiting for administrator intervention to fix the faulted pool.
action: Either restore the affected device(s) and run 'zpool online',
        or ignore the intent log records by running 'zpool clear'.
   see: http://www.sun.com/msg/ZFS-8000-K4
 scrub: none requested
config:

        NAME      STATE     READ WRITE CKSUM
        atomfs    FAULTED      0     0     1  bad intent log
          mirror  DEGRADED     0     0     6
            c8d0  DEGRADED     0     0     6  too many errors
            c9d0  DEGRADED     0     0     6  too many errors