Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-22 Thread Michael DeMan
I can not help but agree with Tim's comment below.

If you want a free version of ZFS, in which case you are still responsible for 
things like backups yourself, then maybe try:

www.freenas.org
www.zfsonlinux.org
www.openindiana.org

Meanwhile, it is grossly inappropriate to complain about lack of support 
when using an operating system / file system that you know has no support. 
Doubly so if your data is important, and doubly so again if you did not already 
back it up.

- mike

On Aug 19, 2011, at 6:54 AM, Tim Cook wrote:

> 
> 
> You digitally signed a license agreement stating the following:
> No Technical Support
> Our technical support organization will not provide technical support, phone 
> support, or updates to you for the Programs licensed under this agreement.
> 
> To turn around and keep repeating that they're "holding your data hostage" is 
> disingenuous at best.  Nobody is holding your data hostage.  You voluntarily 
> put it on an operating system that explicitly states it doesn't offer support 
> from the parent company.  Nobody from Oracle is going to show up with a patch 
> for you on this mailing list because none of the Oracle employees want to 
> lose their job and subsequently be subjected to a lawsuit.  If that's what 
> you're planning on waiting for, I'd suggest you take a new approach.
> 
> Sorry to be a downer, but that's reality.
> 
> --Tim

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-19 Thread John D Groenveld
In message <1313687977.77375.yahoomail...@web121903.mail.ne1.yahoo.com>, Stu Whitefish writes:
>Nope, not a clue how to do that and I have installed Windows on this box instead
>of Solaris since I can't get my data back from ZFS.
>I have my two drives the pool is on disconnected so if this ever gets resolved
>I can reinstall Solaris and start learning again.

I believe you can configure VirtualBox for Windows to pass thru
the disk with your unimportable rpool to guest OSs.
Can OpenIndiana or FreeBSD guest import the pool?
Does Solaris 11X crash at the same place when run from within
VirtualBox?

John
groenv...@acm.org
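
John's pass-through idea can be sketched as below. This is a hypothetical example, not a tested recipe: it assumes VirtualBox on the Windows host, that the pool's disks appear as PhysicalDrive1 and PhysicalDrive2 (both mirror halves must be attached to the guest), and that the raw-disk VMDKs are created from an Administrator prompt.

```shell
# On the Windows host: wrap each physical disk of the mirror in a raw VMDK
# (drive numbers are placeholders; check them with: wmic diskdrive list brief)
VBoxManage internalcommands createrawvmdk -filename C:\vm\zdisk1.vmdk -rawdisk \\.\PhysicalDrive1
VBoxManage internalcommands createrawvmdk -filename C:\vm\zdisk2.vmdk -rawdisk \\.\PhysicalDrive2

# Attach both VMDKs to an OpenIndiana or FreeBSD guest, then inside the guest:
zpool import                  # list pools visible on the passed-through disks
zpool import -f -R /mnt tank  # attempt the import under an alternate root
```

If the guest panics too, the crash at least happens inside a VM where the host survives and the console output can be captured.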


Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-19 Thread Tim Cook
On Fri, Aug 19, 2011 at 4:43 AM, Stu Whitefish  wrote:

>
> > It seems that obtaining an Oracle support contract or a contract renewal
> is equally frustrating.
>
> I don't have any axe to grind with Oracle. I'm new to the Solaris thing and
> wanted to see if it was for me.
>
> If I was using this box to make money then sure I wouldn't have any problem
> paying for support. I don't expect
> handouts and I don't mind paying.
>
> I trusted ZFS because I heard it's for enterprise use and now I have 200G
> of data offline and not a peep from Oracle.
> Looking on the net I found another guy who had the same exact failure.
>
> To my way of thinking somebody needs to stand up and get this fixed for us
> and make sure it doesn't happen to anybody
> else. If that happens I have no grudge against Oracle or Solaris. If it
> doesn't that's a pretty sour experience for someone
> to go through and it will definitely make me look at this whole thing in
> another light.
>
> I still believe somebody over there will do the right thing. I don't
> believe Oracle needs to hold people's data hostage to make money.
> I am sure they have enough good products and services to make money
> honestly.
>
> Jim
>
>

You digitally signed a license agreement stating the following:
*No Technical Support*
Our technical support organization will not provide technical support, phone
support, or updates to you for the Programs licensed under this agreement.

To turn around and keep repeating that they're "holding your data hostage"
is disingenuous at best.  Nobody is holding your data hostage.  You
voluntarily put it on an operating system that explicitly states it doesn't
offer support from the parent company.  Nobody from Oracle is going to show
up with a patch for you on this mailing list because none of the Oracle
employees want to lose their job and subsequently be subjected to a
lawsuit.  If that's what you're planning on waiting for, I'd suggest you
take a new approach.

Sorry to be a downer, but that's reality.

--Tim


Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-19 Thread Stu Whitefish


> It seems that obtaining an Oracle support contract or a contract renewal is 
> equally frustrating.
>
>I don't have any axe to grind with Oracle. I'm new to the Solaris thing and 
>wanted to see if it was for me.
>
>If I was using this box to make money then sure I wouldn't have any problem 
>paying for support. I don't expect
>handouts and I don't mind paying.
>
>I trusted ZFS because I heard it's for enterprise use and now I have 200G of 
>data offline and not a peep from Oracle.
>Looking on the net I found another guy who had the same exact failure.
>
>To my way of thinking somebody needs to stand up and get this fixed for us and 
>make sure it doesn't happen to anybody
>else. If that happens I have no grudge against Oracle or Solaris. If it 
>doesn't that's a pretty sour experience for someone
>to go through and it will definitely make me look at this whole thing in 
>another light.
>
>I still believe somebody over there will do the right thing. I don't believe 
>Oracle needs to hold people's data hostage to make money.
>I am sure they have enough good products and services to make money honestly.
>
>Jim
>


Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-19 Thread Stu Whitefish




>> lots of replies and no suggestion to try on FreeBSD. How about trying
>> on one? I believe if it crashed on FreeBSD, the developers would be
>> interested in helping to solve it. Try using the 9.0-beta1 since
>> 8.2-release has some problems importing certain zpools.
>
>I didn't think FreeBSD support could be ahead of Solaris ZFS. I downloaded what
>you suggested. I'll try it over the weekend and let you know.
>
>> Asking Oracle for help without a support contract would be like shouting
>> into a vacuum...
>
>From where I stand there's a big difference between some bug in Solaris that 
>doesn't do what it's supposed to, or not giving away new features,
>and losing somebody's data. I don't expect them to fix every minor problem I 
>come up with if I don't pay for a support contract; hell, I don't even
>expect them to fix major problems. But when it comes to data integrity I 
>really don't think it's on the same level of discussion.
>
>As I say, I can't believe Oracle would hold somebody's data hostage. That is 
>bad business and I just don't believe they would act that way.
>Here's a chance for somebody to be a real standup guy and get this fixed. 
>Imagine what kind of impression that would make. Now that would
>be good marketing!
>
>Oracle's silence on this serious data integrity issue (and I'm not the only 
>one having it) is pretty disappointing.
>
>
>


Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-18 Thread Bob Friesenhahn

On Fri, 19 Aug 2011, Edho Arief wrote:


Asking Oracle for help without a support contract would be like shouting
into a vacuum...


It seems that obtaining an Oracle support contract or a contract 
renewal is equally frustrating.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/


Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-18 Thread Edho Arief
On Fri, Aug 19, 2011 at 12:19 AM, Stu Whitefish  wrote:
>> From: Thomas Gouverneur 
>
>> To: zfs-discuss@opensolaris.org
>> Cc:
>> Sent: Thursday, August 18, 2011 5:11:16 PM
>> Subject: Re: [zfs-discuss] Kernel panic on zpool import. 200G of data 
>> inaccessible!
>>
>> Have you already extracted the core file of the kernel crash ?
>
> Nope, not a clue how to do that and I have installed Windows on this box 
> instead of Solaris since I can't get my data back from ZFS.
> I have my two drives the pool is on disconnected so if this ever gets 
> resolved I can reinstall Solaris and start learning again.
>
>> (and, btw, have you activated a dump device so such a dump happens at the next reboot...)
>
> This was a development box for me to see how I get along with Solaris. I'm 
> afraid I don't have any experience in Solaris to understand your question.
>
>> Have you also tried applying the latest kernel/zfs patches and try importing 
>> the pool afterwards ?
>
> Wish I had them and knew what to do with them if I had them. Somebody on OTN 
> noted this is supposed to be fixed by 142910 but
> I didn't hear back yet whether it fixes a pool ZFS won't import, or it only 
> stops it from happening in the first place. Don't have a service
> contract as I say this box was my first try with Solaris and it is a homebrew 
> system not on Oracle's support list.
>
> I am sure if there is a patch for this or a way to get my 200G of data back 
> some kind soul at Oracle will certainly help me since I lost
> my data and getting it back isn't a matter of convenience. What an 
> opportunity to generate some old fashioned goodwill!  :-)
>

Lots of replies and no suggestion to try it on FreeBSD. How about trying
that? I believe if it crashed on FreeBSD, the developers would be
interested in helping to solve it. Try using 9.0-BETA1, since
8.2-RELEASE has some problems importing certain zpools.

Asking Oracle for help without a support contract would be like shouting
into a vacuum...
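
The FreeBSD experiment might look like this from a 9.0-BETA1 live environment. A sketch only: the pool name `tank` is taken from the thread, and the `-o readonly=on` property is an assumption that requires a pool-version-28-era ZFS (it is not available on older releases).

```shell
# From a FreeBSD 9.0-BETA1 live shell: see what the kernel can find
zpool import                    # lists importable pools and their state

# Try a forced, read-only import under an alternate root so nothing
# is written to the damaged pool while inspecting it
zpool import -f -o readonly=on -R /mnt tank
```

A read-only import sidesteps any replay of damaged in-flight state, so it sometimes succeeds where a normal import panics.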


Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-18 Thread Stu Whitefish
> From: Thomas Gouverneur 

> To: zfs-discuss@opensolaris.org
> Cc: 
> Sent: Thursday, August 18, 2011 5:11:16 PM
> Subject: Re: [zfs-discuss] Kernel panic on zpool import. 200G of data 
> inaccessible!
> 
> Have you already extracted the core file of the kernel crash ?

Nope, not a clue how to do that and I have installed Windows on this box 
instead of Solaris since I can't get my data back from ZFS.
I have my two drives the pool is on disconnected so if this ever gets resolved 
I can reinstall Solaris and start learning again.

> (and, btw, have you activated a dump device so such a dump happens at the next reboot...)

This was a development box for me to see how I get along with Solaris. I'm 
afraid I don't have any experience in Solaris to understand your question.

> Have you also tried applying the latest kernel/zfs patches and try importing 
> the pool afterwards ?

Wish I had them and knew what to do with them if I had them. Somebody on OTN 
noted this is supposed to be fixed by 142910 but
I didn't hear back yet whether it fixes a pool ZFS won't import, or it only 
stops it from happening in the first place. Don't have a service
contract as I say this box was my first try with Solaris and it is a homebrew 
system not on Oracle's support list.
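
If a patched kernel covering this were available, applying it on Solaris 10 would look roughly like this. The thread only gives the base patch ID 142910; the `-17` revision suffix below is a placeholder, and obtaining the patch requires a My Oracle Support login, so this is a sketch rather than a confirmed fix.

```shell
# As root on the installed system, with the downloaded patch in /var/tmp:
cd /var/tmp
unzip 142910-17.zip            # revision -17 is a placeholder
patchadd /var/tmp/142910-17    # apply the kernel patch
init 6                         # reboot onto the patched kernel
```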

I am sure if there is a patch for this or a way to get my 200G of data back 
some kind soul at Oracle will certainly help me since I lost
my data and getting it back isn't a matter of convenience. What an opportunity 
to generate some old fashioned goodwill!  :-)

Jim

> 
> 
> Thomas
> 
> On 08/18/2011 06:40 PM, Stu Whitefish wrote:
>>  Hi Thomas,
>> 
>>  Thanks for that link. That's very similar but not identical. 
> There's a different line number in zfs_ioctl.c, mine and Preston's fail 
> on line 1815. It could be because of a difference in levels in that module of 
> course, but the traceback is not identical either. Ours show brand_sysenter 
> and 
> the one you linked to shows brand_sys_syscall. I don't know what all that 
> means but it is different. Anyway at least two of us have identical failures.
>> 
>>  I was not using crypto, just a plain jane mirror on 2 drives. Possibly I 
> had compression on a few file systems but everything else was allowed to 
> default.
>> 
>>  Here are our screenshots in case anybody doesn't want to go through the 
> thread.
>> 
>> 
>>  http://imageshack.us/photo/my-images/13/zfsimportfail.jpg/
>> 
>>  http://prestonconnors.com/zvol_get_stats.jpg
>> 
>> 
>>  I hope somebody can help with this. It's not a good feeling having so 
> much data gone.
>> 
>>  Thanks for your help. Oracle, are you listening?
>> 
>>  Jim
>> 
>> 
>> 
>>  - Original Message -
>>     
>>>  From: Thomas Gouverneur
>>>  To: zfs-discuss@opensolaris.org
>>>  Cc: Stu Whitefish
>>>  Sent: Thursday, August 18, 2011 1:57:29 PM
>>>  Subject: Re: [zfs-discuss] Kernel panic on zpool import. 200G of data 
> inaccessible!
>>> 
>>>  You're probably hitting bug 7056738 ->
>>>  http://wesunsolve.net/bugid/id/7056738
>>>  Looks like it's not fixed yet @ oracle anyway...
>>> 
>>>  Were you using crypto on your datasets ?
>>> 
>>> 
>>>  Regards,
>>> 
>>>  Thomas
>>>       


Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-18 Thread Thomas Gouverneur

Have you already extracted the core file from the kernel crash?
(And, btw, have you activated a dump device so such a dump happens at the next reboot...)

Have you also tried applying the latest kernel/zfs patches and then 
importing the pool afterwards?
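
On Solaris, capturing that crash data maps to roughly these commands. A hedged sketch: the dump slice shown is a placeholder, and running `dumpadm` with no arguments shows the current configuration first.

```shell
# Show the current crash-dump configuration
dumpadm

# Point the dump at a dedicated slice and set the savecore directory
# (c0t1d0s1 is a placeholder device)
dumpadm -d /dev/dsk/c0t1d0s1 -s /var/crash/`hostname`

# After the next panic and reboot, extract the saved kernel core and inspect it
savecore
mdb -k unix.0 vmcore.0    # then e.g. ::status and ::stack at the mdb prompt
```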



Thomas

On 08/18/2011 06:40 PM, Stu Whitefish wrote:

Hi Thomas,

Thanks for that link. That's very similar but not identical. There's a 
different line number in zfs_ioctl.c, mine and Preston's fail on line 1815. It 
could be because of a difference in levels in that module of course, but the 
traceback is not identical either. Ours show brand_sysenter and the one you 
linked to shows brand_sys_syscall. I don't know what all that means but it is 
different. Anyway at least two of us have identical failures.

I was not using crypto, just a plain jane mirror on 2 drives. Possibly I had 
compression on a few file systems but everything else was allowed to default.

Here are our screenshots in case anybody doesn't want to go through the thread.


http://imageshack.us/photo/my-images/13/zfsimportfail.jpg/

http://prestonconnors.com/zvol_get_stats.jpg


I hope somebody can help with this. It's not a good feeling having so much data 
gone.

Thanks for your help. Oracle, are you listening?

Jim



- Original Message -
   

From: Thomas Gouverneur
To: zfs-discuss@opensolaris.org
Cc: Stu Whitefish
Sent: Thursday, August 18, 2011 1:57:29 PM
Subject: Re: [zfs-discuss] Kernel panic on zpool import. 200G of data 
inaccessible!

You're probably hitting bug 7056738 ->
http://wesunsolve.net/bugid/id/7056738
Looks like it's not fixed yet @ oracle anyway...

Were you using crypto on your datasets ?


Regards,

Thomas
 



Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-18 Thread Stu Whitefish
Hi Thomas,

Thanks for that link. That's very similar but not identical. There's a 
different line number in zfs_ioctl.c, mine and Preston's fail on line 1815. It 
could be because of a difference in levels in that module of course, but the 
traceback is not identical either. Ours show brand_sysenter and the one you 
linked to shows brand_sys_syscall. I don't know what all that means but it is 
different. Anyway at least two of us have identical failures.

I was not using crypto, just a plain jane mirror on 2 drives. Possibly I had 
compression on a few file systems but everything else was allowed to default.

Here are our screenshots in case anybody doesn't want to go through the thread.


http://imageshack.us/photo/my-images/13/zfsimportfail.jpg/

http://prestonconnors.com/zvol_get_stats.jpg


I hope somebody can help with this. It's not a good feeling having so much data 
gone.

Thanks for your help. Oracle, are you listening?

Jim



- Original Message -
> From: Thomas Gouverneur 
> To: zfs-discuss@opensolaris.org
> Cc: Stu Whitefish 
> Sent: Thursday, August 18, 2011 1:57:29 PM
> Subject: Re: [zfs-discuss] Kernel panic on zpool import. 200G of data 
> inaccessible!
> 
> You're probably hitting bug 7056738 -> 
> http://wesunsolve.net/bugid/id/7056738
> Looks like it's not fixed yet @ oracle anyway...
> 
> Were you using crypto on your datasets ?
> 
> 
> Regards,
> 
> Thomas


Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-18 Thread Thomas Gouverneur
You're probably hitting bug 7056738 -> http://wesunsolve.net/bugid/id/7056738
Looks like it's not fixed yet @ oracle anyway...

Were you using crypto on your datasets ?


Regards,

Thomas

On Tue, 16 Aug 2011 09:33:34 -0700 (PDT)
Stu Whitefish  wrote:

> - Original Message -
> 
> > From: Alexander Lesle 
> > To: zfs-discuss@opensolaris.org
> > Cc: 
> > Sent: Monday, August 15, 2011 8:37:42 PM
> > Subject: Re: [zfs-discuss] Kernel panic on zpool import. 200G of data 
> > inaccessible!
> > 
> > Hello Stu Whitefish and List,
> > 
> > On August, 15 2011, 21:17  wrote in [1]:
> > 
> >>>  7. cannot import old rpool (c0t2d0s0 c0t3d0s0), any attempt causes a
> >>>  kernel panic, even when booted from different OS versions
> > 
> >>  Right. I have tried OpenIndiana 151 and Solaris 11 Express (latest
> >>  from Oracle) several times each as well as 2 new installs of Update 8.
> > 
> > If I understand you right, your primary interest is to recover the
> > data on your tank pool.
> > 
> > Have you checked whether you can boot from a Live-DVD, mount your "safe 
> > place",
> > and copy the data to another machine?
> 
> Hi Alexander,
> 
> Yes of course...the problem is no version of Solaris can import the pool. 
> Please refer to the first message in the thread.
> 
> Thanks,
> 
> Jim
> 


-- 
Gouverneur Thomas 


Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-16 Thread Stu Whitefish
- Original Message -

> From: John D Groenveld 
> To: "zfs-discuss@opensolaris.org" 
> Cc: 
> Sent: Monday, August 15, 2011 6:12:37 PM
> Subject: Re: [zfs-discuss] Kernel panic on zpool import. 200G of data 
> inaccessible!
> 
> In message <1313431448.5331.yahoomail...@web121911.mail.ne1.yahoo.com>, Stu Whitefish writes:
>> I'm sorry, I don't understand this suggestion.
>> 
>> The pool that won't import is a mirror on two drives.
> 
> Disconnect all but the two mirrored drives that you must import
> and try to import from a S11X LiveUSB.

Hi John,

Thanks for the suggestion, but it fails the same way. It panics and reboots too 
fast for me to capture the messages but they're the same as what I posted in 
the opening post of this thread.

This is a snap of zpool import before I tried importing it. Everything looks 
normal except it's odd the controller numbers keep changing.

http://imageshack.us/photo/my-images/705/sol11expresslive.jpg/

Thanks,

Jim


Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-16 Thread Stu Whitefish
- Original Message -

> From: Alexander Lesle 
> To: zfs-discuss@opensolaris.org
> Cc: 
> Sent: Monday, August 15, 2011 8:37:42 PM
> Subject: Re: [zfs-discuss] Kernel panic on zpool import. 200G of data 
> inaccessible!
> 
> Hello Stu Whitefish and List,
> 
> On August, 15 2011, 21:17  wrote in [1]:
> 
>>>  7. cannot import old rpool (c0t2d0s0 c0t3d0s0), any attempt causes a
>>>  kernel panic, even when booted from different OS versions
> 
>>  Right. I have tried OpenIndiana 151 and Solaris 11 Express (latest
>>  from Oracle) several times each as well as 2 new installs of Update 8.
> 
> If I understand you right, your primary interest is to recover the
> data on your tank pool.
> 
> Have you checked whether you can boot from a Live-DVD, mount your "safe place",
> and copy the data to another machine?

Hi Alexander,

Yes of course...the problem is no version of Solaris can import the pool. 
Please refer to the first message in the thread.

Thanks,

Jim



Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-15 Thread Alexander Lesle
Hello Stu Whitefish and List,

On August, 15 2011, 21:17  wrote in [1]:

>> 7. cannot import old rpool (c0t2d0s0 c0t3d0s0), any attempt causes a
>> kernel panic, even when booted from different OS versions

> Right. I have tried OpenIndiana 151 and Solaris 11 Express (latest
> from Oracle) several times each as well as 2 new installs of Update 8.

If I understand you right, your primary interest is to recover the
data on your tank pool.

Have you checked whether you can boot from a Live-DVD, mount your "safe place",
and copy the data to another machine?

-- 
Best Regards
Alexander
August, 15 2011

[1] mid:1313435871.14520.yahoomail...@web121919.mail.ne1.yahoo.com




Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-15 Thread Stu Whitefish
Given I can boot to single user mode and elect not to import or mount any 
pools, and that later I can issue an import against only the pool I need, I 
don't understand how this can help.

Still, given that nothing else seems to help I will try this and get back to 
you tomorrow.

Thanks,

Jim



- Original Message -
> From: John D Groenveld 
> To: "zfs-discuss@opensolaris.org" 
> Cc: 
> Sent: Monday, August 15, 2011 6:12:37 PM
> Subject: Re: [zfs-discuss] Kernel panic on zpool import. 200G of data 
> inaccessible!
> 
> In message <1313431448.5331.yahoomail...@web121911.mail.ne1.yahoo.com>, Stu Whitefish writes:
>> I'm sorry, I don't understand this suggestion.
>> 
>> The pool that won't import is a mirror on two drives.
> 
> Disconnect all but the two mirrored drives that you must import
> and try to import from a S11X LiveUSB.
> 
> John
> groenv...@acm.org


Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-15 Thread Stu Whitefish
Hi Paul,

> 1. Install system to pair of mirrored disks (c0t2d0s0 c0t3d0s0),

> system works fine

I don't remember at this point which disks were which, but I believe it was 0 
and 1, because during the first install there were only two drives in the box.

> 2. add two more disks (c0t0d0s0 c0t1d0s0), create zpool tank, test and
> determine these disks are fine

Again, probably was on disks 2 and 3 but in principle, correct.

> 3. copy data to save to rpool (c0t2d0s0 c0t3d0s0)

I did this in a few steps that probably don't make sense because I had only 2 
500G drives at the beginning when I did my install. Later I got two 320G and 
realized I should have the root pool on the smaller drives. But in the interim, 
I installed the new pair of 320G and moved a bunch of data onto that pool. 
After the initial installation when update 8 first came out, what happened next 
was something like:

1. I created tank mirror on the 2 320G drives and moved data from another 
system on to the tank. After I verified it was good I rebooted the box and 
checked again and everything was healthy, all pools were imported and mounted 
correctly.

2. Then I realized I should install on the 320s and use the 500s for storage so 
I copied everything I had just put on the 320s (tank) onto the 500s (root). I 
rebooted again and verified the data on root was good, then I deleted it from 
tank.

3. I installed a new install on the 320s (formerly tank)

4. I rebooted and it used my old root on the 500s as root, which surprised me 
but makes sense now because it was created as rpool during the very first 
install.

5. I rebooted in single user mode and tried to import the new install. It 
imported fine.

6. I don't know what happened next but I believe after that I rebooted again to 
see why Solaris didn't choose the new install, the tank pool could not be 
imported and I got the panic shown in the screenshot.

> 3. install OS to c0t0d0s0, c0t1d0s0
> 4. reboot, system still boots from old rpool (c0t2d0s0 c0t3d0s0)

Correct. At some point I read you can change the name of the pool so I imported 
rpool as tank and that much worked. At this point both pools were still good, 
and now the install was correctly called rpool and my tank was called tank.
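
The rename Jim describes is a standard `zpool import` feature: giving a second pool name imports the pool under that new name. A minimal sketch, with the pool names from the thread and the rest of the system state assumed:

```shell
# Import the pool recorded on disk as "rpool" under the new name "tank".
# -f forces the import if the pool looks in use by another system.
zpool import -f rpool tank

# Verify the result
zpool status tank
```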

> 5. change boot device and boot from new OS (c0t0d0s0 c0t1d0s0)

That was the surprising thing. I had already changed my BIOS to boot from the 
new pool, but that didn't stop Solaris from using the old install as the root 
pool, I guess because of the name. I thought originally as long as I specified 
the correct boot device I wouldn't have any problem, but even taking the old 
rpool out of the boot sequence and specifying only the newly installed pool as 
boot devices wasn't enough.

> 6. cannot import old rpool (c0t2d0s0 c0t3d0s0) with your data
> 
> At this point could you still boot from the old rpool (c0t2d0s0 c0t3d0s0) ?

Yes, I could use the newly installed pool to boot from, or import it from shell 
in several versions of Solaris/Sol 11, etc. Of course now I cannot, since I 
have installed so many times over that pool trying to get the other pool 
imported.

> 
>  and
> 
> 7. cannot import old rpool (c0t2d0s0 c0t3d0s0), any attempt causes a
> kernel panic, even when booted from different OS versions

Right. I have tried OpenIndiana 151 and Solaris 11 Express (latest from Oracle) 
several times each as well as 2 new installs of Update 8.

> Have you been using the same hardware for all of this ?

Yes, I have. 

Thanks for the help,

Jim




Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-15 Thread John D Groenveld
In message <1313431448.5331.yahoomail...@web121911.mail.ne1.yahoo.com>, Stu Whitefish writes:
>I'm sorry, I don't understand this suggestion.
>
>The pool that won't import is a mirror on two drives.

Disconnect all but the two mirrored drives that you must import
and try to import from a S11X LiveUSB.
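
Spelled out, that procedure is roughly the following from the S11X live environment, with only the two mirror disks attached. The pool name and altroot are assumptions:

```shell
# From the Solaris 11 Express LiveUSB shell, with only the mirror attached
zpool import                   # confirm the pool is visible; note its name/ID
zpool import -f -R /a tank     # forced import under altroot /a

# If the name is ambiguous, import by the numeric pool ID listed above:
# zpool import -f -R /a <pool-id> tank
```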

John
groenv...@acm.org


Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-15 Thread Stu Whitefish
I'm sorry, I don't understand this suggestion.

The pool that won't import is a mirror on two drives.



- Original Message -
> From: LaoTsao 
> To: Stu Whitefish 
> Cc: "zfs-discuss@opensolaris.org" 
> Sent: Monday, August 15, 2011 5:50:08 PM
> Subject: Re: [zfs-discuss] Kernel panic on zpool import. 200G of data 
> inaccessible!
> 
> IIRC if you use just the two HDDs, you can import the zpool.
> Can you try import -R with only the two HDDs attached at a time?
> 
> Sent from my iPad
> Hung-Sheng Tsao ( LaoTsao) Ph.D
> 
> On Aug 15, 2011, at 13:42, Stu Whitefish  wrote:
> 
>>  Unfortunately this panics the same exact way. Thanks for the suggestion 
> though.
>> 
>> 
>> 
>>  - Original Message -
>>>  From: ""Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D."" 
> 
>>>  To: zfs-discuss@opensolaris.org
>>>  Cc: 
>>>  Sent: Monday, August 15, 2011 3:06:20 PM
>>>  Subject: Re: [zfs-discuss] Kernel panic on zpool import. 200G of data 
> inaccessible!
>>> 
>>>  may be try the following
>>>  1)boot s10u8 cd into single user mode (when boot cdrom, choose Solaris 
>>>  then choose single user mode(6))
>>>  2)when ask to mount rpool just say no
>>>  3)mkdir /tmp/mnt1 /tmp/mnt2
>>>  4)zpool  import -f -R /tmp/mnt1 tank
>>>  5)zpool import -f -R /tmp/mnt2 rpool
>>> 


Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-15 Thread Paul Kraus
I am catching up here and wanted to see if I correctly understand the
chain of events...

1. Install system to pair of mirrored disks (c0t2d0s0 c0t3d0s0),
system works fine
2. add two more disks (c0t0d0s0 c0t1d0s0), create zpool tank, test and
determine these disks are fine
3. copy data to save to rpool (c0t2d0s0 c0t3d0s0)
3. install OS to c0t0d0s0, c0t1d0s0
4. reboot, system still boots from old rpool (c0t2d0s0 c0t3d0s0)
5. change boot device and boot from new OS (c0t0d0s0 c0t1d0s0)
6. cannot import old rpool (c0t2d0s0 c0t3d0s0) with your data

At this point could you still boot from the old rpool (c0t2d0s0 c0t3d0s0) ?

 and

7. cannot import old rpool (c0t2d0s0 c0t3d0s0), any attempt causes a
kernel panic, even when booted from different OS versions

Have you been using the same hardware for all of this ?

-- 
{1-2-3-4-5-6-7-}
Paul Kraus
-> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
-> Sound Designer: Frankenstein, A New Musical
(http://www.facebook.com/event.php?eid=123170297765140)
-> Sound Coordinator, Schenectady Light Opera Company (
http://www.sloctheater.org/ )
-> Technical Advisor, RPI Players


Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-15 Thread LaoTsao
IIRC if you use just the two HDDs, you can import the zpool.
Can you try import -R with only the two HDDs attached at a time?

Sent from my iPad
Hung-Sheng Tsao ( LaoTsao) Ph.D

On Aug 15, 2011, at 13:42, Stu Whitefish  wrote:

> Unfortunately this panics the same exact way. Thanks for the suggestion 
> though.
> 
> 
> 
> - Original Message -
>> From: ""Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D."" 
>> To: zfs-discuss@opensolaris.org
>> Cc: 
>> Sent: Monday, August 15, 2011 3:06:20 PM
>> Subject: Re: [zfs-discuss] Kernel panic on zpool import. 200G of data 
>> inaccessible!
>> 
>> Maybe try the following:
>> 1) Boot the s10u8 CD into single-user mode (when booting from the CD-ROM,
>> choose Solaris, then single user mode (6)).
>> 2) When asked to mount rpool, just say no.
>> 3) mkdir /tmp/mnt1 /tmp/mnt2
>> 4) zpool import -f -R /tmp/mnt1 tank
>> 5) zpool import -f -R /tmp/mnt2 rpool
>> 


Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-15 Thread Stu Whitefish
Unfortunately this panics the same exact way. Thanks for the suggestion though.



- Original Message -
> From: ""Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D."" 
> To: zfs-discuss@opensolaris.org
> Cc: 
> Sent: Monday, August 15, 2011 3:06:20 PM
> Subject: Re: [zfs-discuss] Kernel panic on zpool import. 200G of data 
> inaccessible!
> 
> Maybe try the following:
> 1) Boot the s10u8 CD into single-user mode (when booting from the CD-ROM,
> choose Solaris, then single user mode (6)).
> 2) When asked to mount rpool, just say no.
> 3) mkdir /tmp/mnt1 /tmp/mnt2
> 4) zpool import -f -R /tmp/mnt1 tank
> 5) zpool import -f -R /tmp/mnt2 rpool
>


Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-15 Thread Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.



On 8/15/2011 11:25 AM, Stu Whitefish wrote:


Hi. Thanks, I have tried this on update 8 and Sol 11 Express.

The import always results in a kernel panic as shown in the picture.

I did not try an alternate mountpoint though. Would it make that much 
difference?

try it



- Original Message -

From: ""Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.""
To: zfs-discuss@opensolaris.org
Cc:
Sent: Monday, August 15, 2011 3:06:20 PM
Subject: Re: [zfs-discuss] Kernel panic on zpool import. 200G of data 
inaccessible!

Maybe try the following:
1) Boot the s10u8 CD into single-user mode (when booting from the CD-ROM,
choose Solaris, then single user mode (6)).
2) When asked to mount rpool, just say no.
3) mkdir /tmp/mnt1 /tmp/mnt2
4) zpool import -f -R /tmp/mnt1 tank
5) zpool import -f -R /tmp/mnt2 rpool


On 8/15/2011 9:12 AM, Stu Whitefish wrote:

    On Thu, Aug 4, 2011 at 2:47 PM, Stuart James Whitefish wrote:

      # zpool import -f tank

      http://imageshack.us/photo/my-images/13/zfsimportfail.jpg/

    I encourage you to open a support case and ask for an escalation on
    CR 7056738.

    --
    Mike Gerdts

  Hi Mike,

  Unfortunately I don't have a support contract. I've been trying to set up a
  development system on Solaris and learn it.

  Until this happened, I was pretty happy with it. Even so, I don't have
  supported hardware so I couldn't buy a contract until I bought another
  machine and I really have enough machines so I cannot justify the expense
  right now. And I refuse to believe Oracle would hold people hostage in a
  situation like this, but I do believe they could generate a lot of goodwill
  by fixing this for me and whoever else it happened to and telling us what
  level of Solaris 10 this is fixed at so this doesn't continue happening.
  It's a pretty serious failure and I'm not the only one who it happened to.

  It's incredible but in all the years I have been using computers I don't
  ever recall losing data due to a filesystem or OS issue. That includes DOS,
  Windows, Linux, etc.

  I cannot believe ZFS on Intel is so fragile that people lose hundreds of
  gigs of data and that's just the way it is. There must be a way to recover
  this data and some advice on preventing it from happening again.

  Thanks,
  Jim



Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-15 Thread Stu Whitefish


Hi. Thanks, I have tried this on update 8 and Sol 11 Express.

The import always results in a kernel panic as shown in the picture.

I did not try an alternate mountpoint though. Would it make that much 
difference?


- Original Message -
> From: ""Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D."" 
> To: zfs-discuss@opensolaris.org
> Cc: 
> Sent: Monday, August 15, 2011 3:06:20 PM
> Subject: Re: [zfs-discuss] Kernel panic on zpool import. 200G of data 
> inaccessible!
> 
> Maybe try the following:
> 1) Boot the s10u8 CD into single-user mode (when booting from the CD-ROM,
> choose Solaris, then single user mode (6)).
> 2) When asked to mount rpool, just say no.
> 3) mkdir /tmp/mnt1 /tmp/mnt2
> 4) zpool import -f -R /tmp/mnt1 tank
> 5) zpool import -f -R /tmp/mnt2 rpool
> 
> 
> On 8/15/2011 9:12 AM, Stu Whitefish wrote:
>>> On Thu, Aug 4, 2011 at 2:47 PM, Stuart James Whitefish wrote:
>>>> # zpool import -f tank
>>>>
>>>> http://imageshack.us/photo/my-images/13/zfsimportfail.jpg/
>>> I encourage you to open a support case and ask for an escalation on CR
>>> 7056738.
>>>
>>> --
>>> Mike Gerdts
>>
>> Hi Mike,
>>
>> Unfortunately I don't have a support contract. I've been trying to set up a
>> development system on Solaris and learn it.
>>
>> Until this happened, I was pretty happy with it. Even so, I don't have
>> supported hardware so I couldn't buy a contract until I bought another
>> machine and I really have enough machines so I cannot justify the expense
>> right now. And I refuse to believe Oracle would hold people hostage in a
>> situation like this, but I do believe they could generate a lot of goodwill
>> by fixing this for me and whoever else it happened to and telling us what
>> level of Solaris 10 this is fixed at so this doesn't continue happening.
>> It's a pretty serious failure and I'm not the only one who it happened to.
>>
>> It's incredible but in all the years I have been using computers I don't
>> ever recall losing data due to a filesystem or OS issue. That includes DOS,
>> Windows, Linux, etc.
>>
>> I cannot believe ZFS on Intel is so fragile that people lose hundreds of
>> gigs of data and that's just the way it is. There must be a way to recover
>> this data and some advice on preventing it from happening again.
>>
>> Thanks,
>> Jim
> 


Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-15 Thread Hung-Sheng Tsao (Lao Tsao 老曹) Ph.D.

Maybe try the following:
1) Boot the s10u8 CD into single-user mode (when booting from the CD-ROM,
choose Solaris, then single user mode (6)).
2) When asked to mount rpool, just say no.
3) mkdir /tmp/mnt1 /tmp/mnt2
4) zpool import -f -R /tmp/mnt1 tank
5) zpool import -f -R /tmp/mnt2 rpool
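If the altroot import suggested above still panics, one read-only way to check whether the pool metadata is intact is zdb's exported-pool mode. This is offered only as a hedged sketch (it is not something suggested in the thread): zdb -e examines a pool that is not currently imported, so it avoids the kernel import path that is panicking.

```shell
# Hedged sketch: inspect the un-imported pool without importing it.
#   -e  examine an exported / un-imported pool
#   -d  list datasets and their objects
zdb -e tank        # dump the pool's configuration and top-level state
zdb -e -d tank     # enumerate datasets without importing the pool
```

zdb runs in user space, so even if it fails it should error out rather than panic the kernel, which makes it a comparatively safe diagnostic.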


On 8/15/2011 9:12 AM, Stu Whitefish wrote:

On Thu, Aug 4, 2011 at 2:47 PM, Stuart James Whitefish wrote:

  # zpool import -f tank

  http://imageshack.us/photo/my-images/13/zfsimportfail.jpg/

I encourage you to open a support case and ask for an escalation on CR 7056738.

--
Mike Gerdts

Hi Mike,

Unfortunately I don't have a support contract. I've been trying to set up a 
development system on Solaris and learn it.
Until this happened, I was pretty happy with it. Even so, I don't have 
supported hardware so I couldn't buy a contract
until I bought another machine and I really have enough machines so I cannot 
justify the expense right now. And I
refuse to believe Oracle would hold people hostage in a situation like this, 
but I do believe they could generate a lot of
goodwill by fixing this for me and whoever else it happened to and telling us 
what level of Solaris 10 this is fixed at so
this doesn't continue happening. It's a pretty serious failure and I'm not the 
only one who it happened to.

It's incredible but in all the years I have been using computers I don't ever 
recall losing data due to a filesystem or OS issue.
That includes DOS, Windows, Linux, etc.

I cannot believe ZFS on Intel is so fragile that people lose hundreds of gigs 
of data and that's just the way it is. There
must be a way to recover this data and some advice on preventing it from 
happening again.

Thanks,
Jim


Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-15 Thread Stu Whitefish
> On Thu, Aug 4, 2011 at 2:47 PM, Stuart James Whitefish wrote:
>>  # zpool import -f tank
>> 
>>  http://imageshack.us/photo/my-images/13/zfsimportfail.jpg/
> 
> I encourage you to open a support case and ask for an escalation on CR 
> 7056738.
> 
> -- 
> Mike Gerdts

Hi Mike,

Unfortunately I don't have a support contract. I've been trying to set up a 
development system on Solaris and learn it.
Until this happened, I was pretty happy with it. Even so, I don't have 
supported hardware so I couldn't buy a contract
until I bought another machine and I really have enough machines so I cannot 
justify the expense right now. And I
refuse to believe Oracle would hold people hostage in a situation like this, 
but I do believe they could generate a lot of
goodwill by fixing this for me and whoever else it happened to and telling us 
what level of Solaris 10 this is fixed at so
this doesn't continue happening. It's a pretty serious failure and I'm not the 
only one who it happened to.

It's incredible but in all the years I have been using computers I don't ever 
recall losing data due to a filesystem or OS issue.
That includes DOS, Windows, Linux, etc.

I cannot believe ZFS on Intel is so fragile that people lose hundreds of gigs 
of data and that's just the way it is. There
must be a way to recover this data and some advice on preventing it from 
happening again.

Thanks,
Jim


Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-05 Thread Mike Gerdts
On Thu, Aug 4, 2011 at 2:47 PM, Stuart James Whitefish wrote:
> # zpool import -f tank
>
> http://imageshack.us/photo/my-images/13/zfsimportfail.jpg/

I encourage you to open a support case and ask for an escalation on CR 7056738.

-- 
Mike Gerdts
http://mgerdts.blogspot.com/


[zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-05 Thread Stuart James Whitefish
I am opening a new thread since I found somebody else reported a similar
failure in May and I didn't see a resolution; hopefully this post will be
easier to find for people with similar problems. The original thread was
http://opensolaris.org/jive/thread.jspa?threadID=140861

System: snv_151a 64 bit on Intel.
Error: panic[cpu0] assertion failed: zvol_get_stats(os, nv) == 0,
file: ../../common/fs/zfs/zfs_ioctl.c, line: 1815

Failure first seen on Solaris 10, update 8

History:

I recently received two 320G drives and realized from reading this list it
would have been better if I would have done the install on the small drives
but I didn't have them at the time. I added the two 320G drives and created
tank mirror.

I moved some data from other sources to the tank and then decided to go
ahead and do a new install. In preparation for that I moved all the data I
wanted to save onto the rpool mirror and then installed Solaris 10 update 8
again on the 320G drives.

When my system rebooted after the installation, I saw that for some reason it
used my tank pool as root. I realize now that since it was originally a root
pool and had boot blocks, this didn't help. Anyway, I shut down, changed the
boot order, and then booted into my system. It panicked when trying to access
the tank and instantly rebooted. I had to go through this several times until
I caught a glimpse of one of the first messages:

assertion failed: zvol_get_stats(os, nv)

Here is what my system looks like when I boot into failsafe mode.

# zpool import
  pool: rpool
    id: 16453600103421700325
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        rpool         ONLINE
          mirror      ONLINE
            c0t2d0s0  ONLINE
            c0t3d0s0  ONLINE

  pool: tank
    id: 12861119534757646169
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        tank          ONLINE
          mirror      ONLINE
            c0t0d0s0  ONLINE
            c0t1d0s0  ONLINE

# zpool import tank
cannot import 'tank': pool may be in use from other system
use '-f' to import anyway

Here is a photo of my screen (hah hah, an old-fashioned "screen shot") when
Sol 11 starts; now that I have tried importing my pool, it fails constantly.

# zpool import -f tank

http://imageshack.us/photo/my-images/13/zfsimportfail.jpg/
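One option the thread does not mention, offered here only as a hedged suggestion: ZFS builds from around snv_148 onward (which includes Solaris 11 Express snv_151a) support read-only pool import. A read-only import skips replaying some on-disk state that a normal import processes, so it can sometimes get past a panic long enough to copy data off. It may be worth trying before anything destructive, if the option exists in your build:

```shell
# Hedged sketch, assuming a ZFS build with read-only import support:
zpool import -f -o readonly=on -R /a tank

# If the import succeeds, list the datasets and copy the data off
# immediately before attempting anything else:
zfs list -r tank
```

Because the pool is imported read-only, nothing is written to it, so a failed attempt should leave the on-disk state unchanged.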

I installed Solaris 11 Express USB via Hiroshi-san's Windows tool.
Unfortunately it also panics trying to import the pool, although zpool import
shows the pool online with no errors, just like in the above doc.

and here is an eerily identical photo capture made by somebody with a 
similar/identical error. http://prestonconnors.com/zvol_get_stats.jpg

At first I thought it was a copy of my screenshot but I see his terminal is 
white and mine is black.

Looks like the problem has been around since 2009 although my problem is with a 
newly created mirror pool that had plenty of space available (200G in use out 
of about 500G) and no snapshots were taken.

Similar discussion with discouraging lack of follow up:
http://opensolaris.org/jive/message.jspa?messageID=376366

This looks like the defect; it's closed and I see no resolution:

https://defect.opensolaris.org/bz/show_bug.cgi?id=5682

I have about 200G of data on the tank pool, about 100G or so I don't have
anywhere else. I created this pool specifically to make a "safe place" to
store data that I had accumulated over several years and didn't have
organized yet. I can't believe such a serious bug has been around for two years 
and hasn't been fixed. Can somebody please help me get this data back?

Thank you.

Jim
-- 
This message posted from opensolaris.org


[zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible! assertion failed: zvol_get_stats(os, nv) == 0

2011-08-05 Thread Stu Whitefish
System: snv_151a 64 bit on Intel.
Error: panic[cpu0] assertion failed: zvol_get_stats(os, nv) == 0,
file: ../../common/fs/zfs/zfs_ioctl.c, line: 1815

Failure first seen on Solaris 10, update 8

History:

I recently received two 320G drives and realized from reading this list it
would have been better if I would have done the install on the small drives
but I didn't have them at the time. I added the two 320G drives and created
tank mirror.

I moved some data from other sources to the tank and then decided to go
ahead and do a new install. In preparation for that I moved all the data I
wanted to save onto the rpool mirror and then installed Solaris 10 update 8
again on the 320G drives.

When my system rebooted after the installation, I saw that for some reason it
used my tank pool as root. I realize now that since it was originally a root
pool and had boot blocks, this didn't help. Anyway, I shut down, changed the
boot order, and then booted into my system. It panicked when trying to access
the tank and instantly rebooted. I had to go through this several times until
I caught a glimpse of one of the first messages:

assertion failed: zvol_get_stats(os, nv)

Here is what my system looks like when I boot into failsafe mode.

# zpool import
  pool: rpool
    id: 16453600103421700325
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        rpool         ONLINE
          mirror      ONLINE
            c0t2d0s0  ONLINE
            c0t3d0s0  ONLINE

  pool: tank
    id: 12861119534757646169
 state: ONLINE
action: The pool can be imported using its name or numeric identifier.
config:

        tank          ONLINE
          mirror      ONLINE
            c0t0d0s0  ONLINE
            c0t1d0s0  ONLINE

# zpool import tank
cannot import 'tank': pool may be in use from other system
use '-f' to import anyway

I installed Solaris 11 Express USB via Hiroshi-san's Windows tool.
Unfortunately it also panics trying to import the pool although zpool import
shows the pool online with no errors, just like in the above doc.

http://imageshack.us/photo/my-images/13/zfsimportfail.jpg/

and here is an eerily identical photo capture made by somebody with a
similar/identical error. http://prestonconnors.com/zvol_get_stats.jpg

At first I thought it was a copy of my screenshot but I see his terminal is 
white and mine is black.

Looks like the problem has been around since 2009, although my problem is with
a newly created mirror pool that had plenty of space available (200G in use
out of about 500G) and no snapshots were taken.

Similar discussion with discouraging lack of follow up:
http://opensolaris.org/jive/message.jspa?messageID=376366

This looks like the defect; it's closed and I see no resolution:

https://defect.opensolaris.org/bz/show_bug.cgi?id=5682

I have about 200G of data on the tank pool, about 100G or so I don't have
anywhere else. I created this pool specifically to make a "safe place" to
store data that I had accumulated over several years and didn't have
organized yet. I can't believe such a serious bug has been around for two
years and hasn't been fixed. Can somebody please help me get this data back?

Thank you.

Jim 


I joined the forums but I didn't see my post on the zfs-discuss mailing list,
which seems a lot more active than the forum. Sorry if this is a duplicate for
people on the mailing list.