Re: [zfs-discuss] Compatibility of Hitachi Deskstar 7K3000 HDS723030ALA640 with ZFS

2012-03-06 Thread Lou Picciano

- Original Message -
From: "Jordan McQuown"  
To: "Jan-Peter Koopmann"  
Cc: zfs-discuss@opensolaris.org 
Sent: Tuesday, March 6, 2012 1:36:54 PM 
Subject: Re: [zfs-discuss] Compatibility of Hitachi Deskstar 7K3000 
HDS723030ALA640 with ZFS 

> -Original Message- 
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- 
> boun...@opensolaris.org] On Behalf Of Brandon High 
> Sent: Tuesday, March 06, 2012 1:28 PM 
> To: Koopmann, Jan-Peter 
> Cc: zfs-discuss@opensolaris.org; luis Johnstone 
> Subject: Re: [zfs-discuss] Compatibility of Hitachi Deskstar 7K3000 
> HDS723030ALA640 with ZFS 
> 
> On Tue, Mar 6, 2012 at 2:40 AM, Koopmann, Jan-Peter <pe...@koopmann.eu> wrote: 
> > Do you or anyone else have experience with the 3TB 5K3000 drives 
> > (namely HDS5C3030ALA630)? I am thinking of replacing my current 4*1TB 
> > drives with 4*3TB drives (home server). Any issues with TLER or the like? 
> 
> I have been using 8 x 3TB 5k3000 in a raidz2 for about a year without issue. 
> 
> The 3TB Deskstars come off the same production line as the Ultrastar 5K3000. I 
> would avoid the 2TB and smaller 5K3000 models - they come off a separate 
> production line. 


Though I must say, in their defense, we've got a bunch of these (5K3000s) in 
one of our machines, constituting a ZFS array - and they've been perfectly 
reliable. Only paid about $70 for 'em at the time, too. 


Lou Picciano 



> -B 
> 
> -- 
> Brandon High : bh...@freaks.com 

We have been using around 24 of these on our backup targets for approximately 9 
months without issue. 


Re: [zfs-discuss] [zfs] Oddly-persistent file error on ZFS root pool

2012-01-31 Thread Lou Picciano
Bayard, 

>> You wouldn't happen to have preserved output that could be used to determine 
>> if/where there's a bug? 

Whoops - in fact, this was the gist of my email to the lists: that someone 
might be interested in diagnostic information. The short answer: after not 
hearing much interest in that - apart from your own response - and now 
having deleted the datasets in question: I hope so! I certainly do have the 
last crash dump, intact, as the machine hasn't crashed since... (sound of 
knocking on wood). Please tell me what debug info I can provide. 
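
For reference, a rough sketch of what I could pull together - assuming the 
stock illumos/OpenIndiana tooling and the default dump directory: 

# zpool status -v rpool          # whatever remains of the error list 
# fmdump -eV                     # FMA error telemetry from around the CKSUM events 
# ls -l /var/crash/$(hostname)   # confirm the crash dump is still on disk 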

The fix itself turned out to be quite easy, as you've indicated. My first 
concern was being a good 'Listizen' - identifying and reporting a bug, if one 
exists. 

Thanks again for your help, 

Lou Picciano 

- Original Message -
From: "Bayard G. Bell"  
To: z...@lists.illumos.org 
Cc: zfs-discuss@opensolaris.org 
Sent: Tuesday, January 31, 2012 7:01:53 AM 
Subject: Re: [zfs] Oddly-persistent file error on ZFS root pool 

On Mon, 2012-01-30 at 01:50 +, Lou Picciano wrote: 
> Bayard, 
> 
> Indeed, you did answer it - and thanks for getting back to me - your 
> suggestion was spot ON! 
> 
> However, the simple zpool clear/scrub cycle wouldn't work in our case - at 
> least initially. After multiple 'rinse/repeats', the offending file - or its 
> hex representation - would reappear, and the CKSUM errors would often mount. 
> Logically, this seems to make some sense: ZFS would attempt to reconstitute 
> the damaged file with each scrub...(?) 

As the truth is somewhere in between, I'll insert my comment 
accordingly. You should only see the errors continue if there's a 
dataset with a reference to the version of the file that creates those 
errors. I've seen this before: until all of the datasets are deleted, 
the errors will continue to be diagnosed, sometimes presented without 
dataset names, which might be considered a bug (it seems wrong that 
you don't get a dataset name for clones). You wouldn't happen to have 
preserved output that could be used to determine if/where there's a bug? 

> In any case, after gathering the nerve to start deleting old snapshots - 
> including the one with the offending file - the clear/scrub process worked a 
> charm. Many thanks again! 
> 
> Lou Picciano 
> 
> - Original Message - 
> From: "Bayard G. Bell"  
> To: z...@lists.illumos.org 
> Cc: zfs-discuss@opensolaris.org 
> Sent: Sunday, January 29, 2012 3:22:39 PM 
> Subject: Re: [zfs] Oddly-persistent file error on ZFS root pool 
> 
> Lou, 
> 
> Tried to answer this when you asked on IRC. Try a zpool clear and scrub 
> again to see if the errors persist. 
> 
> Cheers, 
> Bayard 





Re: [zfs-discuss] [zfs] Oddly-persistent file error on ZFS root pool

2012-01-29 Thread Lou Picciano
Bayard, 

Indeed, you did answer it - and thanks for getting back to me - your suggestion 
was spot ON! 

However, the simple zpool clear/scrub cycle wouldn't work in our case - at 
least initially. After multiple 'rinse/repeats', the offending file - or its 
hex representation - would reappear, and the CKSUM errors would often mount. 
Logically, this seems to make some sense: ZFS would attempt to reconstitute 
the damaged file with each scrub...(?) 

In any case, after gathering the nerve to start deleting old snapshots - 
including the one with the offending file - the clear/scrub process worked a 
charm. Many thanks again! 
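
For the archives, the sequence that finally worked looked roughly like this 
(using the snapshot named in the original report; I let each scrub finish 
before checking): 

# zfs destroy rpool/ROOT/openindiana-userland-154@zfs-auto-snap_monthly-2011-11-22-09h19 
# zpool clear rpool 
# zpool scrub rpool 
# zpool status -v rpool   # error list finally came back clean 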

Lou Picciano 

- Original Message -
From: "Bayard G. Bell"  
To: z...@lists.illumos.org 
Cc: zfs-discuss@opensolaris.org 
Sent: Sunday, January 29, 2012 3:22:39 PM 
Subject: Re: [zfs] Oddly-persistent file error on ZFS root pool 

Lou, 

Tried to answer this when you asked on IRC. Try a zpool clear and scrub 
again to see if the errors persist. 

Cheers, 
Bayard 

On Sat, 2012-01-28 at 17:52 +, Lou Picciano wrote: 
> 
> 
> 
> Hello ZFS wizards, 
> 
> Have an odd ZFS problem I'd like to run by you - 
> 
> Root pool on this machine is a 'simple' mirror - just two disks. 
> 
> # zpool status 
> 
>         NAME          STATE     READ WRITE CKSUM 
>         rpool         ONLINE       0     0     3 
>           mirror-0    ONLINE       0     0     6 
>             c2t0d0s0  ONLINE       0     0     6 
>             c2t1d0s0  ONLINE       0     0     6 
> 
> errors: Permanent errors have been detected in the following files: 
> 
> rpool/ROOT/openindiana-userland-154@zfs-auto-snap_monthly-2011-11-22-09h19:/etc/svc/repository-boot-tmpEdaGba 
> 
> ... or similar; CKSUM counts have varied, but were always in that 1x/2x 
> 'symmetrical' pattern. 
> 
> After working through the problems above, scrubbing and zfs destroying the 
> snapshot with 'permanent errors', the CKSUM errors clear up, but vestiges of 
> the file remain as hex addresses: 
> 
>         NAME          STATE     READ WRITE CKSUM 
>         rpool         ONLINE       0     0     0 
>           mirror-0    ONLINE       0     0     0 
>             c2t0d0s0  ONLINE       0     0     0 
>             c2t1d0s0  ONLINE       0     0     0 
> 
> errors: Permanent errors have been detected in the following files: 
> 
> <0x18e73>:<0x78007> 
> 
> I have no evidence that ZFS is itself the direct culprit here; it may just be 
> on the receiving end of one of the couple of problems we've recently worked 
> through on this machine: 
> 1. a defective CPU, managed by the fault manager, but without a 
> fully-configured crashdump (now rectified), then 
> 2. the SandyBridge 'interrupt storm' problem, which we seem to have now 
> worked around. 
> 
> The storage pools are scrubbed pretty regularly, and we generally have no 
> cksum errors at all. At one point, vmstat reported 7+ million interrupt 
> faults over 5 seconds! I've attempted to clear stats on the pool as well 
> (didn't expect this to work, but worth a try, right?) 
> 
> Important to note that Memtest86+ had been run, last time for ~14 hrs, with no 
> error reported. 
> 
> Don't think the storage controller is the culprit, either, as _all_ drives 
> are controlled by the P67A - and no other problems seen. And no errors 
> reported via smartctl. 
> 
> Would welcome input from two perspectives: 
> 
> 1) Before I rebuild the pool/reinstall/whatever, is anyone here interested in 
> any diagnostic output which might still be available? Is any of this useful 
> as a bug report? 
> 2) Then, would love to hear ideas on a solution. 
> 
> Proposed solutions include: 
> 1) creating new BE based on snap of root pool: 
> - Snapshot root pool 
> - (zfs send to datapool for safekeeping) 
> - Split rpool 
> - zpool create newpool (on Drive 'B') 
> - beadm create -p newpool NEWboot (being sure to use slice 0 of Drive 'B') 
> 
> 2) Simply deleting _all_ snapshots on the rpool. 
> 
> 3) complete re-install 
> 
> Tks for feedback. Lou Picciano 
> 
> 
> 


[zfs-discuss] Oddly-persistent file error on ZFS root pool

2012-01-28 Thread Lou Picciano




Hello ZFS wizards, 

Have an odd ZFS problem I'd like to run by you - 

Root pool on this machine is a 'simple' mirror - just two disks. 

# zpool status 

        NAME          STATE     READ WRITE CKSUM 
        rpool         ONLINE       0     0     3 
          mirror-0    ONLINE       0     0     6 
            c2t0d0s0  ONLINE       0     0     6 
            c2t1d0s0  ONLINE       0     0     6 

errors: Permanent errors have been detected in the following files: 

rpool/ROOT/openindiana-userland-154@zfs-auto-snap_monthly-2011-11-22-09h19:/etc/svc/repository-boot-tmpEdaGba 

... or similar; CKSUM counts have varied, but were always in that 1x/2x 
'symmetrical' pattern. 

After working through the problems above, scrubbing and zfs destroying the 
snapshot with 'permanent errors', the CKSUM errors clear up, but vestiges of 
the file remain as hex addresses: 

        NAME          STATE     READ WRITE CKSUM 
        rpool         ONLINE       0     0     0 
          mirror-0    ONLINE       0     0     0 
            c2t0d0s0  ONLINE       0     0     0 
            c2t1d0s0  ONLINE       0     0     0 

errors: Permanent errors have been detected in the following files: 

<0x18e73>:<0x78007> 

I have no evidence that ZFS is itself the direct culprit here; it may just be 
on the receiving end of one of the couple of problems we've recently worked 
through on this machine: 
1. a defective CPU, managed by the fault manager, but without a 
fully-configured crashdump (now rectified), then 
2. the SandyBridge 'interrupt storm' problem, which we seem to have now worked 
around. 

The storage pools are scrubbed pretty regularly, and we generally have no cksum 
errors at all. At one point, vmstat reported 7+ million interrupt faults over 
5 seconds! I've attempted to clear stats on the pool as well (didn't expect 
this to work, but worth a try, right?) 
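
(The stats-clearing attempt was along the lines of: 

# zpool clear rpool 

which resets the per-device READ/WRITE/CKSUM counters but doesn't repair 
anything on disk.) 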

Important to note that Memtest86+ had been run, last time for ~14 hrs, with no 
error reported. 

Don't think the storage controller is the culprit, either, as _all_ drives are 
controlled by the P67A - and no other problems seen. And no errors reported via 
smartctl. 

Would welcome input from two perspectives: 

1) Before I rebuild the pool/reinstall/whatever, is anyone here interested in 
any diagnostic output which might still be available? Is any of this useful as 
a bug report? 
2) Then, would love to hear ideas on a solution. 

Proposed solutions include: 
1) creating new BE based on snap of root pool (see the sketch after this list): 
- Snapshot root pool 
- (zfs send to datapool for safekeeping) 
- Split rpool 
- zpool create newpool (on Drive 'B') 
- beadm create -p newpool NEWboot (being sure to use slice 0 of Drive 'B') 

2) Simply deleting _all_ snapshots on the rpool. 

3) complete re-install 
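
For concreteness, a rough sketch of option 1 - assuming Drive 'B' is the 
c2t1d0s0 half of the mirror, and noting that zpool split itself creates the 
new pool, so the separate zpool create step shouldn't be needed: 

# zfs snapshot -r rpool@pre-rebuild 
# zfs send -R rpool@pre-rebuild | zfs receive -u datapool/rpool-backup 
# zpool split rpool newpool c2t1d0s0   # detach Drive 'B' into newpool 
# zpool import newpool 
# beadm create -p newpool NEWboot      # new BE, based on the active one, in newpool 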

Tks for feedback. Lou Picciano 