On Sat, 18 Nov 2006 [EMAIL PROTECTED] wrote:

> I'm new to this group, so hello everyone! I am having some issues with

Welcome!

> my Nexenta system I set up about two months ago as a zfs/zraid server. I
> have two new Maxtor 500GB Sata drives and an Adaptec controller which I
> believe has a Silicon Image chipset. Also I have a Seasonic 80+ power
> supply, so the power should be as clean as you can get. I had an issue

Just wondering (out loud) whether your PSU is capable of meeting the
demands of your current hardware - including the disk drives you just
added for ZFS - and whether the system is on a UPS.  Those are just
questions for you to answer, and off topic for this list, but you'll see
that this line of thinking is relevant to your particular problem - see
more below.

> with Nexenta where I had to reinstall, and since then every time I reboot
> I have to type
>
> zpool export amber
> zpool import amber
>
> to get my zfs volume mounted. A week ago I noticed a couple of CKSUM
> errors when I did a zpool status, so I did a zpool scrub. This is the
> output after:
>
> # zpool status
>   pool: amber
>  state: ONLINE
> status: One or more devices has experienced an unrecoverable error.  An
>         attempt was made to correct the error.  Applications are unaffected.
> action: Determine if the device needs to be replaced, and clear the errors
>         using 'zpool clear' or replace the device with 'zpool replace'.
>    see: http://www.sun.com/msg/ZFS-8000-9P
>  scrub: scrub completed with 0 errors on Mon Nov 13 04:49:35 2006
> config:
>
>         NAME        STATE     READ WRITE CKSUM
>         amber       ONLINE       0     0     0
>           raidz1    ONLINE       0     0     0
>             c4d0    ONLINE       0     0    51
>             c5d0    ONLINE       0     0    41
>
> errors: No known data errors
>
>
> I have md5sums on a lot of the files and it looks like maybe 5% of my
> files are corrupted. Does anyone have any ideas? I was under the
> impression that zfs was pretty reliable, but I guess, as with any
> software, it needs time to get the bugs ironed out.

[ I've seen the response where one astute list participant noticed you're
running a 2-way raidz device, when the documentation clearly states that
the minimum raidz volume consists of 3 devices ]
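
For reference, the documented minimum raidz layout looks like this - a
hypothetical sketch only (c6d0 is an assumed third disk, not one from
your output, and note that zpool create destroys any existing data on
its devices):

  # raidz1 needs at least 3 devices per the documentation
  zpool create amber raidz1 c4d0 c5d0 c6d0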

Going back to zero day (my terminology) for ZFS, when it was first
integrated: if you read the zfs related blogs, you'll realize that zfs is
arguably one of the most extensively tested bodies of software _ever_
added to (Open)Solaris.  If there were a basic issue with zfs, like the
one you describe above, zfs would never have been integrated (into
(Open)Solaris).  You can imagine that there were a lot of willing zfs
testers ("please can I be on the beta test...")[0] - but there were also
a few cases of "this issue has *got* to be ZFS related" - because there
was no other _rational_ explanation.  One such case is mentioned here:

http://blogs.sun.com/roller/page/elowe?anchor=zfs_saves_the_day_ta

I would suggest that you look for some basic hardware problems within your
system.  The first place to start is to download and burn a copy of the
Ultimate Boot CD (UBCD) [1] and run the latest version of memtest86 for
24 hours.  It's likely that you have hardware issues.
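
If memtest86 comes back clean, you can reset the pool's error counters
and re-verify it - a sketch only, using the pool name from your output:

  zpool clear amber       # zero the READ/WRITE/CKSUM counters
  zpool scrub amber       # re-read and verify every block in the pool
  zpool status -v amber   # -v also lists any files with known errors

If the CKSUM counters climb again on known-good memory, the controller
and cabling are the next suspects.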

Please keep the list informed....

[0] including this author who built hardware specifically to eval/test/use
ZFS and get it into production ASAP to solve a business storage problem
for $6k instead of $30k to $40k.

[1] http://www.ultimatebootcd.com/

Regards,

Al Hopper  Logical Approach Inc, Plano, TX.  [EMAIL PROTECTED]
           Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
OpenSolaris.Org Community Advisory Board (CAB) Member - Apr 2005
             OpenSolaris Governing Board (OGB) Member - Feb 2006
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
