Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-16 Thread Stu Whitefish
- Original Message -

> From: John D Groenveld 
> To: "zfs-discuss@opensolaris.org" 
> Cc: 
> Sent: Monday, August 15, 2011 6:12:37 PM
> Subject: Re: [zfs-discuss] Kernel panic on zpool import. 200G of data 
> inaccessible!
> 
> In message <1313431448.5331.yahoomail...@web121911.mail.ne1.yahoo.com>, Stu Whitefish writes:
>> I'm sorry, I don't understand this suggestion.
>> 
>> The pool that won't import is a mirror on two drives.
> 
> Disconnect all but the two mirrored drives that you must import
> and try to import from a S11X LiveUSB.

Hi John,

Thanks for the suggestion, but it fails the same way. It panics and reboots too 
fast for me to capture the messages but they're the same as what I posted in 
the opening post of this thread.

This is a screenshot of zpool import taken before I tried importing. Everything 
looks normal, except it's odd that the controller numbers keep changing.

http://imageshack.us/photo/my-images/705/sol11expresslive.jpg/

Thanks,

Jim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Kernel panic on zpool import. 200G of data inaccessible!

2011-08-16 Thread Stu Whitefish
- Original Message -

> From: Alexander Lesle 
> To: zfs-discuss@opensolaris.org
> Cc: 
> Sent: Monday, August 15, 2011 8:37:42 PM
> Subject: Re: [zfs-discuss] Kernel panic on zpool import. 200G of data 
> inaccessible!
> 
> Hello Stu Whitefish and List,
> 
> On August 15, 2011, 21:17  wrote in [1]:
> 
>>>  7. cannot import old rpool (c0t2d0s0 c0t3d0s0), any attempt causes a
>>>  kernel panic, even when booted from different OS versions
> 
>>  Right. I have tried OpenIndiana 151 and Solaris 11 Express (latest
>>  from Oracle) several times each as well as 2 new installs of Update 8.
> 
> If I understand you right, your primary interest is to recover the
> data on your tank pool.
> 
> Have you tried booting from a Live-DVD, mounting your "safe place"
> and copying the data to another machine?

Hi Alexander,

Yes, of course... the problem is that no version of Solaris can import the pool. 
Please refer to the first message in the thread.

Thanks,

Jim



Re: [zfs-discuss] Sudden drop in disk performance - WD20EURS & 4k sectors to blame?

2011-08-16 Thread krad
On 15 August 2011 15:55, Andrew Gabriel  wrote:

> David Wragg wrote:
>
>> I've not done anything different this time from when I created the
>> original (512b)  pool. How would I check ashift?
>>
>>
>
> For a zpool called "export"...
>
> # zdb export | grep ashift
> ashift: 12
> ^C
> #
>
> As far as I know (although I don't have any WD's), all the current 4k
> sectorsize hard drives claim to be 512b sectorsize, so if you didn't do
> anything special, you'll probably have ashift=9.
>
> I would look at a zpool iostat -v to see what the IOPS rate is (you may
> have bottomed out on that), and I would also work out average transfer size
> (although that alone doesn't necessarily tell you much - a dtrace quantize
> aggregation would be better). Also check service times on the disks (iostat)
> to see if there's one which is significantly worse and might be going bad.
>
> --
> Andrew Gabriel
>
>
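
The diagnostic steps Andrew suggests map onto commands something like the
following (a sketch, not verified on the poster's system: the pool name
"export" is assumed, and the dtrace one-liner needs root privileges):

```shell
# Per-vdev IOPS and bandwidth, sampled every 5 seconds:
zpool iostat -v export 5

# Per-disk service times (asvc_t column) to spot a disk going bad:
iostat -xn 5

# Distribution of I/O transfer sizes as a dtrace quantize aggregation:
dtrace -n 'io:::start { @bytes = quantize(args[0]->b_bcount); }'
```

The quantize aggregation prints a power-of-two histogram of transfer sizes,
which is more informative than a single average.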


From what I have read, you really do need to 4k-align your drives and use
ashift=12 with these Western Digitals. Unfortunately that probably means
you have to rebuild your pool. 4k-aligning is fairly easy: when you
partition a disk, just make sure the first sector and the size of each
partition are divisible by 8 (in 512-byte sectors). I.e. don't start the
first partition at sector 34 as normally happens; start it at, say, 40.
E.g. here are my 4k-aligned drives from a FreeBSD system:

# gpart show ada0
=>34  3907029101  ada0  GPT  (1.8T)
  34   6- free -  (3.0k)
  40 128 1  freebsd-boot  (64k)
 168 6291456 2  freebsd-swap  (3.0G)
 6291624  3900213229 3  freebsd-zfs  (1.8T)
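
As a sanity check, the alignment rule above can be verified arithmetically:
each partition's start sector (in 512-byte units) should be divisible by 8,
since 8 x 512 bytes = 4096 bytes. A minimal sketch, using the start sectors
from the gpart output above:

```shell
# Check that each partition start sector is a multiple of 8
# (8 x 512-byte sectors = 4096 bytes, i.e. 4k-aligned).
# Start sectors taken from the gpart output above.
for start in 40 168 6291624; do
  if [ $((start % 8)) -eq 0 ]; then
    echo "sector $start: 4k-aligned"
  else
    echo "sector $start: misaligned"
  fi
done
```

Note it is the start sectors that matter for alignment; the default GPT
start of sector 34 fails this test, which is why moving it to 40 helps.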


Making ashift=12 is a little more tricky. I have seen a patch posted on the
mailing lists for zpool which forces it. Alternatively, you could boot into a
FreeBSD live CD and create the pool with the 'gnop -S 4096' trick. It's
possible there is another way to do it now on opensolaris that I haven't
come across yet.
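
To make the gnop trick concrete, here is a sketch of the FreeBSD command
sequence (the device name ada0p3 is an assumption for illustration, and this
destroys existing data on that partition), plus a tiny helper showing why a
4096-byte sector corresponds to ashift=12 (ashift is log2 of the sector size):

```shell
# Hypothetical FreeBSD sequence for forcing ashift=12:
#   gnop create -S 4096 /dev/ada0p3    # fake a 4k-sector provider
#   zpool create tank /dev/ada0p3.nop  # pool is created with ashift=12
#   zpool export tank
#   gnop destroy /dev/ada0p3.nop
#   zpool import tank                  # reimports via the real device

# ashift is log2 of the sector size, so 512 -> 9 and 4096 -> 12:
sector_to_ashift() {
  size=$1
  shift_val=0
  while [ "$size" -gt 1 ]; do
    size=$((size / 2))
    shift_val=$((shift_val + 1))
  done
  echo "$shift_val"
}

sector_to_ashift 512    # prints 9
sector_to_ashift 4096   # prints 12
```

Since ashift is fixed at vdev creation time, the gnop device only needs to
exist while the pool is created; afterwards ZFS keeps ashift=12 on its own.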


Re: [zfs-discuss] Intel 320 as ZIL?

2011-08-16 Thread Eugen Leitl
On Mon, Aug 15, 2011 at 01:38:36PM -0700, Brandon High wrote:
> On Thu, Aug 11, 2011 at 1:00 PM, Ray Van Dolson  wrote:
> > Are any of you using the Intel 320 as ZIL?  It's MLC based, but I
> > understand its wear and performance characteristics can be bumped up
> > significantly by increasing the overprovisioning to 20% (dropping
> > usable capacity to 80%).
> 
> Intel recently added the 311, a small SLC-based drive for use as a
> temp cache with their Z68 platform. It's limited to 20GB, but it might
> be a better fit for use as a ZIL than the 320.

Works fine over here (Nexenta Core 3.1).
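
For anyone following along, attaching such an SSD as a dedicated log device is
a one-liner. A sketch only: the pool name "tank" and device name c2t1d0 are
made-up placeholders, and to get the ~20% overprovisioning mentioned above,
one common approach is to slice the drive and give ZFS only ~80% of it:

```shell
# Attach the SSD (placeholder device name) as a separate ZIL/slog device:
zpool add tank log c2t1d0

# Verify it appears under the "logs" section of the pool layout:
zpool status tank
```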

-- 
Eugen* Leitl http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE