Re: [zfs-discuss] ZFS corruption

2009-02-10 Thread Roodnitsky, Leonid
Could this be relevant? Notice the sd_cache_control mismatch message below. Thank
you everybody for any ideas or help; I really appreciate it.

Feb 06 2009 23:14:07.704531935 ereport.io.scsi.cmd.disk.dev.uderr
nvlist version: 0
class = ereport.io.scsi.cmd.disk.dev.uderr
ena = 0x2487a4cf2e00c01
detector = (embedded nvlist)
nvlist version: 0
version = 0x0
scheme = dev
device-path = /p...@0,0/pci10de,3...@f/pci108e,2...@0/d...@1,0
devid = id1,s...@tsun_stk_raid_int6db80b08
(end detector)

driver-assessment = fail
op-code = 0x1a
cdb = 0x1a 0x0 0x8 0x0 0x18 0x0
pkt-reason = 0x0
pkt-state = 0x1f
pkt-stats = 0x0
stat-code = 0x0
un-decode-info = sd_cache_control: Mode Sense caching page code mismatch 0
un-decode-value =
__ttl = 0x1
__tod = 0x498d189f 0x29fe4ddf


Leonid

-Original Message-
From: cindy.swearin...@sun.com [mailto:cindy.swearin...@sun.com] 
Sent: Tuesday, February 10, 2009 3:42 PM
To: Roodnitsky, Leonid
Cc: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] ZFS corruption

Leonid,

You could use the fmdump -eV command to look for problems with these
disks. This command might generate a lot of output, but it should make
clear whether the root cause is a problem accessing these devices.

I would also check /var/adm/messages for any driver-related messages.

Cindy

Leonid Roodnitsky wrote:
> Dear All,
> 
> Is there any way to figure out which piece is at fault? The Sun SAS RAID
> (Adaptec/Intel) controller is reporting that the drives are good, but ZFS
> is not happy and is reporting checksum errors. Is there any way to tell
> which component introduced the error?
> 
> Leonid


Re: [zfs-discuss] ZFS corruption

2009-02-10 Thread Cindy Swearingen

Leonid,

You could use the fmdump -eV command to look for problems with these
disks. This command might generate a lot of output, but it should make
clear whether the root cause is a problem accessing these devices.

I would also check /var/adm/messages for any driver-related messages.
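
For example, to get a quick overview and then drill into the disk events (the
event class below matches the uderr ereport Leonid posted; adjust as needed):

# fmdump -e
# fmdump -eV | more
# fmdump -eV -c ereport.io.scsi.cmd.disk.dev.uderr

If I remember correctly, -c filters on the event class, so the last command
should show only the uderr ereports; check the fmdump man page to be sure.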

Cindy

Leonid Roodnitsky wrote:

Dear All,

Is there any way to figure out which piece is at fault? The Sun SAS RAID
(Adaptec/Intel) controller is reporting that the drives are good, but ZFS is
not happy and is reporting checksum errors. Is there any way to tell which
component introduced the error?

Leonid



Re: [zfs-discuss] ZFS corruption

2009-02-10 Thread Leonid Roodnitsky
Dear All,

Is there any way to figure out which piece is at fault? The Sun SAS RAID
(Adaptec/Intel) controller is reporting that the drives are good, but ZFS is
not happy and is reporting checksum errors. Is there any way to tell which
component introduced the error?

Leonid


Re: [zfs-discuss] ZFS corruption

2009-02-09 Thread Richard Elling
Leonid Roodnitsky wrote:
> Dear All,
>
> I am receiving DEGRADED from zpool status -v. 3 out of 14 disks are reported
> as degraded with 'too many errors'. This is Build 99 running on an x4240 with
> an STK SAS RAID controller. The version of the AAC driver is 2.2.5. I am not
> sure even where to start. Any advice is very much appreciated. I am trying to
> convince management that ZFS is the way to go, and then I get this problem.
> The RAID controller does not report any problems with the drives. This is a
> RAIDZ (RAID5-style) zpool. Thank you everybody.
>

The zpool man page says:
 The health of the top-level vdev, such as  mirror  or  raidz
 device,  is potentially impacted by the state of its associ-
 ated vdevs, or component devices. A top-level vdev  or  com-
 ponent device is in one of the following states:

 DEGRADEDOne or more top-level vdevs is in  the  degraded
 state  because one or more component devices are
 offline. Sufficient replicas exist  to  continue
 functioning.

 One or more component devices is in the degraded
 or  faulted state, but sufficient replicas exist
 to continue functioning. The  underlying  condi-
 tions are as follows:

 oThe number of checksum  errors  exceeds
  acceptable  levels  and  the  device is
  degraded as an  indication  that  some-
  thing  may  be  wrong. ZFS continues to
  use the device as necessary.

 oThe  number  of  I/O   errors   exceeds
  acceptable levels. The device could not
  be marked as faulted because there  are
  insufficient replicas to continue func-
  tioning.

You should take this into consideration as you decide whether
to replace disks or not.
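
For example, to see which devices are accumulating errors and then act on them
(pool and device names below are placeholders):

# zpool status -v tank
# fmdump -e
# zpool clear tank c1t3d0     (resets the counters if the cause looks transient)
# zpool replace tank c1t3d0   (swap in a new disk if errors return after a scrub)

Whether to clear or replace depends on whether the checksum errors come back
on the next scrub.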
 -- richard



Re: [zfs-discuss] zfs corruption...

2008-06-20 Thread Akhilesh Mritunjai
If there was no redundancy configured in ZFS then you're mostly toast. RAID is
no protection against data errors, as the ZFS folks have said and as you have
just discovered.

I think your only option is to somehow set up a recent build of OpenSolaris
(2008.05 or SXCE), configure it to not panic on checksum failure (just return
an I/O error), and import the pool. Your data is mostly toast, though.
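
One approach that gets mentioned for this (an assumption on my part -- these
are undocumented recovery knobs, so verify them against your build and use
them at your own risk) is to relax the kernel's panic-on-error behaviour in
/etc/system before importing:

set aok=1
set zfs:zfs_recover=1

then reboot and try something like:

# zpool import -f tank   (pool name is a placeholder)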

Please don't use ZFS without configuring redundancy. If you do, please make
sure you have backups!
 
 


Re: [zfs-discuss] zfs corruption w/ sil3114 sata controllers

2007-11-04 Thread grant beattie
Ed Saipetch wrote:
> To answer a number of questions:
> 
> Regarding different controllers, I've tried 2 Syba Sil 3114 controllers 
> purchased about 4 months apart.  I've tried 5.4.3 firmware with one and 
> 5.4.13 with another.  Maybe Syba makes crappy Sil 3114 cards but it's the 
> same one that someone on blogs.sun.com used with success.  I had weird 
> problems flashing the first card I got, hence the order of another one.  I'm
> not sure how I could get 2 different controllers 4 months apart, use them in
> 2 completely different computers, and have both controllers be bad.

another data point..

I run two SiI 3114 based cards in my home fileserver running s10u3. I 
was having ZFS data corruption issues and I suspected the SiI cards - 
that was until I replaced the motherboard/CPU/memory. I didn't have the 
time or patience to try to determine which component was at fault, but I 
swapped the motherboard/CPU/memory and stressed it for a few hours and 
the data corruption problem was gone.

Before that, I was seeing data corruption issues within minutes. Maybe
it was just memory, but I'll never know. I junked the old kit after I
confirmed I had eliminated the problem.

grant.


Re: [zfs-discuss] zfs corruption w/ sil3114 sata controllers

2007-10-31 Thread Edward Saipetch
Mario,

I don't have any issues getting a new card.  The discussion started because
people did indeed post that they had good luck with these cards.  In fact,
when I googled to find which cards worked well, this one seemed to be at the
top of the list.  I'm interested to know if it's something I can help
resolve, so other people don't run into the same issue I did.

Mario Goebbels wrote:
> I haven't seen the beginning of this discussion, but seeing SiI sets the
> fire alarm off here.
>
> The Silicon Image chipsets are renowned for being crap and causing data
> corruption, at least the variants that usually go onto mainboards. Based
> on this, I suggest that you get a different card.
>
> -mg


Re: [zfs-discuss] zfs corruption w/ sil3114 sata controllers

2007-10-31 Thread Mario Goebbels
I haven't seen the beginning of this discussion, but seeing SiI sets the
fire alarm off here.

The Silicon Image chipsets are renowned for being crap and causing data
corruption, at least the variants that usually go onto mainboards. Based
on this, I suggest that you get a different card.

-mg





Re: [zfs-discuss] zfs corruption w/ sil3114 sata controllers

2007-10-31 Thread Edward Saipetch
Nigel,

Thanks for the response!  Basically my last method of testing was to
sftp a few 50-100MB files to /tank over a couple of minutes and force a
scrub afterwards.  The very first time this happened, I had been using it
as a NAS device, dumping data to it for over a week.  I went to a
customer's site to show him how cool ZFS was, and upon running zpool
status I saw the data corruption status telling me to restore from a
backup.  Running zpool status without a scrub shows no errors.
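
If anyone wants to reproduce that kind of test, something like this works
(the file name, size, and pool name are arbitrary):

# dd if=/dev/urandom of=/tank/testfile bs=1024k count=100
# zpool scrub tank
# zpool status -v tank

The scrub re-reads and re-checksums every block, so latent corruption shows
up in the CKSUM column even when normal reads look clean.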

I tried mirrored devices, no RAID whatsoever, and raidz, all with the
same results.  All the motherboards I've been using only have PCI, since
I was hoping I could create a low-cost solution as a POC.  I'll test
changing the transfer mode a bit later.  Other people have had better
luck; what other debugging can be done?  I'm even willing to let someone
have remote access to the box if they want.

Nigel Smith wrote:
> Ok, this is a strange problem!
> You seem to have tried & eliminated all the possible issues
> that the community has suggested!
>
> I was hoping you would see some errors logged in
> '/var/adm/messages' that would give a clue.
>
> Your original 'zpool status' said 140 errors.
> Over what time period are these occurring?
> I'm wondering if the errors are occurring at a
> constant steady rate or if there are bursts of errors?
> Maybe you could monitor zpool status while generating
> activity with "dd" or similar.
> You could use "zpool iostat " to monitor
> bandwidth and see if it is reasonably steady or erratic.
>
> From your "prtconf -D" we see the 3114 card is using
> the "ata" driver, as expected.
> I believe the driver can talk to the disk drive
> in either PIO or DMA mode, so you could try 
> changing that in the "ata.conf" file. See here for details:
> http://docs.sun.com/app/docs/doc/819-2254/ata-7d?a=view
>
> I've just had a quick look at the source code for
> the ata driver, and there does seem to be specific support
> for the Silicon Image chips in the drivers:
> http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/intel/io/dktp/controller/ata/sil3xxx.c
> and
> http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/intel/io/dktp/controller/ata/sil3xxx.h
> The file "sil3xxx.h" does mention:
>   "Errata Sil-AN-0109-B2 (Sil3114 Rev 0.3)
>   To prevent erroneous ERR set for queued DMA transfers
>   greater then 8k, FIS reception for FIS0cfg needs to be set
>   to Accept FIS without Interlock"
> ...which I read as meaning there have been some 'issues'
> with this chip. And it sounds similar to the issue mentioned on
> the link that Tomasz supplied:
> http://home-tj.org/wiki/index.php/Sil_m15w
>
> If you decide to try a different SATA controller card, possible options are:
>
> 1. The si3124 driver, which supports SiI-3132 (PCI-E)
>    and SiI-3124 (PCI-X) devices.
>
> 2. The AHCI driver, which supports the Intel ICH6 and later devices, often
>    found on motherboards.
>
> 3. The NV_SATA driver, which supports Nvidia ck804/mcp55 devices.
>
> Regards
> Nigel Smith


Re: [zfs-discuss] zfs corruption w/ sil3114 sata controllers

2007-10-31 Thread Nigel Smith
Ok, this is a strange problem!
You seem to have tried & eliminated all the possible issues
that the community has suggested!

I was hoping you would see some errors logged in
'/var/adm/messages' that would give a clue.

Your original 'zpool status' said 140 errors.
Over what time period are these occurring?
I'm wondering if the errors are occurring at a
constant steady rate or if there are bursts of errors?
Maybe you could monitor zpool status while generating
activity with "dd" or similar.
You could use "zpool iostat " to monitor
bandwidth and see if it is reasonably steady or erratic.
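
For example (the pool name and interval are placeholders):

# zpool iostat tank 5    (one line of read/write bandwidth every 5 seconds)
# zpool status -v tank   (re-check the error counters afterwards)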

From your "prtconf -D" we see the 3114 card is using
the "ata" driver, as expected.
I believe the driver can talk to the disk drive
in either PIO or DMA mode, so you could try 
changing that in the "ata.conf" file. See here for details:
http://docs.sun.com/app/docs/doc/819-2254/ata-7d?a=view
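
If I remember the property name right (an assumption on my part -- please
verify it against the ata(7D) page linked above), you would add this line to
/kernel/drv/ata.conf and reboot:

ata-dma-enabled=0;

Expect it to be painfully slow; it's only worthwhile as a diagnostic.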

I've just had a quick look at the source code for
the ata driver, and there does seem to be specific support
for the Silicon Image chips in the drivers:
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/intel/io/dktp/controller/ata/sil3xxx.c
and
http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/intel/io/dktp/controller/ata/sil3xxx.h
The file "sil3xxx.h" does mention:
  "Errata Sil-AN-0109-B2 (Sil3114 Rev 0.3)
  To prevent erroneous ERR set for queued DMA transfers
  greater then 8k, FIS reception for FIS0cfg needs to be set
  to Accept FIS without Interlock"
...which I read as meaning there have been some 'issues'
with this chip. And it sounds similar to the issue mentioned on
the link that Tomasz supplied:
http://home-tj.org/wiki/index.php/Sil_m15w

If you decide to try a different SATA controller card, possible options are:

1. The si3124 driver, which supports SiI-3132 (PCI-E)
   and SiI-3124 (PCI-X) devices.
   
2. The AHCI driver, which supports the Intel ICH6 and later devices, often
   found on motherboards.

3. The NV_SATA driver, which supports Nvidia ck804/mcp55 devices.

Regards
Nigel Smith
 
 


Re: [zfs-discuss] zfs corruption w/ sil3114 sata controllers

2007-10-30 Thread Mauro Mozzarelli
Hi,

I have the same sil3114-based controller, installed in a dual Opteron box. I
have installed Solaris x86 and have had no problem with it; however, I hardly
used that box with Solaris, as my installation was only to try out Solaris on
my Opteron workstation. Instead, I constantly run Linux on that workstation,
and twice in a few months I came across (while running Fedora Linux) several
I/O errors on the SATA disk attached to that controller. I thought at first
that the hard drive was gone, but then I swapped that controller with a
sil3112 and the I/O errors stopped. I swapped the sil3114 back in and have had
no errors since. I reckon that it might have been due to one of the SATA
cables (power or data?) not making perfect contact. SATA connectors are of
extremely poor quality and they fail to hold in place as well as the older
IDE, SCSI, or Molex power connectors. I noticed as well that they crack easily
if inadvertently pulled or pushed while working inside the computer case.
 
 


Re: [zfs-discuss] zfs corruption w/ sil3114 sata controllers

2007-10-30 Thread Ed Saipetch
Tried that... completely different cases with different power supplies.

On Oct 30, 2007, at 10:28 AM, Al Hopper wrote:

> On Mon, 29 Oct 2007, MC wrote:
>
>>> Here's what I've done so far:
>>
>> The obvious thing to test is the drive controller, so maybe you  
>> should do that :)
>>
>
> Also - while you're doing swapTronics - don't forget the Power Supply
> (PSU).  Ensure that your PSU has sufficient capacity on its 12V
> rails (older PSUs didn't even tell you how much current they can push
> out on the 12V outputs).
>
> See also: http://blogs.sun.com/elowe/entry/zfs_saves_the_day_ta
>
> Regards,
>
> Al Hopper  Logical Approach Inc, Plano, TX.  [EMAIL PROTECTED]
>Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
> OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
> http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/
> Graduate from "sugar-coating school"?  Sorry - I never attended! :)


Re: [zfs-discuss] zfs corruption w/ sil3114 sata controllers

2007-10-30 Thread Al Hopper
On Mon, 29 Oct 2007, MC wrote:

>> Here's what I've done so far:
>
> The obvious thing to test is the drive controller, so maybe you should do 
> that :)
>

Also - while you're doing swapTronics - don't forget the Power Supply 
(PSU).  Ensure that your PSU has sufficient capacity on its 12V
rails (older PSUs didn't even tell you how much current they can push
out on the 12V outputs).

See also: http://blogs.sun.com/elowe/entry/zfs_saves_the_day_ta

Regards,

Al Hopper  Logical Approach Inc, Plano, TX.  [EMAIL PROTECTED]
Voice: 972.379.2133 Fax: 972.379.2134  Timezone: US CDT
OpenSolaris Governing Board (OGB) Member - Apr 2005 to Mar 2007
http://www.opensolaris.org/os/community/ogb/ogb_2005-2007/
Graduate from "sugar-coating school"?  Sorry - I never attended! :)


Re: [zfs-discuss] zfs corruption w/ sil3114 sata controllers

2007-10-30 Thread Ed Saipetch
To answer a number of questions:

Regarding different controllers, I've tried 2 Syba Sil 3114 controllers 
purchased about 4 months apart.  I've tried 5.4.3 firmware with one and 5.4.13 
with another.  Maybe Syba makes crappy Sil 3114 cards but it's the same one 
that someone on blogs.sun.com used with success.  I had weird problems flashing 
the first card I got, hence the order of another one.  I'm not sure how I
could get 2 different controllers 4 months apart, use them in 2 completely
different computers, and have both controllers be bad.

Regarding cables, they aren't densely packed.  I've just got 1 drive attached
in this new instance.  In the old setup, I just had 4 unbundled cables (not
bound together) running between the card and the drives.

Here's an error on startup in /var/adm/messages; note, however, that this
error didn't come up on the old mb/cpu combo with the older 3114 HBA.  These
errors happen only during boot and don't happen during file transfers:

Sep 14 23:51:49 eknas genunix: [ID 936769 kern.info] sd0 is /[EMAIL 
PROTECTED],0/[EMAIL PROTECTED],1/[EMAIL PROTECTED]/[EMAIL PROTECTED],0
Sep 14 23:52:11 eknas scsi: [ID 107833 kern.warning] WARNING: /[EMAIL 
PROTECTED],0/[EMAIL PROTECTED]/[EMAIL PROTECTED] (ata0):
Sep 14 23:52:11 eknas   timeout: abort request, target=1 lun=0

Here's the scanpci output:
pci bus 0x cardnum 0x08 function 0x00: vendor 0x1095 device 0x3114
 Silicon Image, Inc. SiI 3114 [SATALink/SATARaid] Serial ATA Controller

and prtconf -pv:
subsystem-vendor-id:  1095
subsystem-id:  3114
unit-address:  '8'
class-code:  00018000
revision-id:  0002
vendor-id:  1095
device-id:  3114

and prtconf -D:
pci-ide, instance #0 (driver name: pci-ide)
ide, instance #0 (driver name: ata)

and pertinent modinfo:
 40 fbbf1250   1050 224   1  pci-ide (pciide nexus driver for 'PCI-ID)
 41 f783c000  10230 112   1  ata (ATA AT-bus attachment disk cont)
 
 


Re: [zfs-discuss] zfs corruption w/ sil3114 sata controllers

2007-10-30 Thread Frank Hofmann
On Tue, 30 Oct 2007, Tomasz Torcz wrote:

> On 10/30/07, Neal Pollack <[EMAIL PROTECTED]> wrote:
>>> I'm experiencing major checksum errors when using a syba silicon image 3114 
>>> based pci sata controller w/ nonraid firmware.  I've tested by copying data 
>>> via sftp and smb.  With everything I've swapped out, I can't fathom this 
>>> being a hardware problem.
>> Even before ZFS, I've had numerous situations where various si3112 and
>> 3114 chips
>> would corrupt data on UFS and PCFS, with very simple  copy and checksum
>> test scripts, doing large bulk transfers.
>
>  Those SiI chips are really broken when used with certain Seagate drives.
> But I have had data corrupted by them with a WD drive also.
> Linux can work around this bug by reducing transfer sizes (and thus
> dramatically impacting speed). Solaris probably doesn't have a workaround.

This might be slightly off-topic for the thread as a whole, but _this_
specific thing (reducing transfer sizes) is possible on Solaris as well,
as documented here:

http://docs.sun.com/app/docs/doc/819-2724/chapter2-29?a=view

You can also read a bit more on the following thread:

http://www.opensolaris.org/jive/thread.jspa?threadID=6866

It's possible to limit this system-wide or per-LUN.
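
System-wide, that would be an /etc/system entry along these lines (the value
is illustrative; see the tunable parameters guide linked above):

set maxphys=0x20000

i.e. cap physical transfers at 128KB, then reboot. The per-LUN variant goes
through the relevant driver .conf file instead.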

Best regards,
FrankH.

> With this quirk enabled (on Linux), I get at most 20 MB/s from the drives,
> but ZFS does not report any corruption. Before, I had corruption hourly.
>
> More info about the SiI issue: http://home-tj.org/wiki/index.php/Sil_m15w
> I have a SiI 3112, but despite SiI's claims, other chips seem to be affected
> as well.
>
>
> -- 
> Tomasz Torcz
> [EMAIL PROTECTED]

--
No good can come from selling your freedom, not for all the gold in the world,
for the value of this heavenly gift far exceeds that of any fortune on earth.
--


Re: [zfs-discuss] zfs corruption w/ sil3114 sata controllers

2007-10-30 Thread Stephen Usher
One thing to check before you blame your controller:

Are the SATA cables close together for an extended length?

Basically, most SATA cables will generate massive levels of cross-talk
between them if they're tied together or run parallel in close proximity for
part of their run length.

A friend found this sort of problem a couple of months ago, and it was cured
by separating the cables.

Steve
-- 
---
Computer Systems Administrator,E-Mail:[EMAIL PROTECTED]
Department of Earth Sciences, Tel:-  +44 (0)1865 282110
University of Oxford, Parks Road, Oxford, UK. Fax:-  +44 (0)1865 272072


Re: [zfs-discuss] zfs corruption w/ sil3114 sata controllers

2007-10-30 Thread Tomasz Torcz
On 10/30/07, Neal Pollack <[EMAIL PROTECTED]> wrote:
> > I'm experiencing major checksum errors when using a syba silicon image 3114 
> > based pci sata controller w/ nonraid firmware.  I've tested by copying data 
> > via sftp and smb.  With everything I've swapped out, I can't fathom this 
> > being a hardware problem.
> Even before ZFS, I've had numerous situations where various si3112 and
> 3114 chips
> would corrupt data on UFS and PCFS, with very simple  copy and checksum
> test scripts, doing large bulk transfers.

  Those SiI chips are really broken when used with certain Seagate drives.
But I have had data corrupted by them with a WD drive also.
Linux can work around this bug by reducing transfer sizes (and thus
dramatically impacting speed). Solaris probably doesn't have a workaround.
With this quirk enabled (on Linux), I get at most 20 MB/s from the drives,
but ZFS does not report any corruption. Before, I had corruption hourly.

More info about the SiI issue: http://home-tj.org/wiki/index.php/Sil_m15w
I have a SiI 3112, but despite SiI's claims, other chips seem to be affected
as well.


-- 
Tomasz Torcz
[EMAIL PROTECTED]


Re: [zfs-discuss] zfs corruption w/ sil3114 sata controllers

2007-10-30 Thread Nigel Smith
And are you seeing any error messages in '/var/adm/messages'
indicating any failure on the disk controller card?
If so, please post a sample back here to the forum.
 
 


Re: [zfs-discuss] zfs corruption w/ sil3114 sata controllers

2007-10-30 Thread Nigel Smith
First off, can we just confirm the exact version of the Silicon Image card
and which driver Solaris is using?

Use 'prtconf -pv' and '/usr/X11/bin/scanpci'
to get the PCI vendor & device ID information.

Use 'prtconf -D' to confirm which drivers are being used by which devices.

And 'modinfo' will tell you the version of the drivers.
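
For example, to pull out just the relevant bits (the grep patterns are only a
suggestion; 3114 matches the device ID that scanpci prints for these cards):

# /usr/X11/bin/scanpci | grep -i 3114
# prtconf -D | grep -i ata
# modinfo | grep ata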

The above commands will give details for all the devices
in the PC.  You may want to edit down the output before
posting it back here, or alternatively put the output into an
attached file.

See this link for an example of this sort of information
for a different hard disk controller card:
http://mail.opensolaris.org/pipermail/storage-discuss/2007-September/003399.html

Regards
Nigel Smith
 
 


Re: [zfs-discuss] zfs corruption w/ sil3114 sata controllers

2007-10-29 Thread Neal Pollack
Edward Saipetch wrote:
> Neal Pollack wrote:
>> Ed Saipetch wrote:
>>> Hello,
>>>
>>> I'm experiencing major checksum errors when using a syba silicon 
>>> image 3114 based pci sata controller w/ nonraid firmware.  I've 
>>> tested by copying data via sftp and smb.  With everything I've 
>>> swapped out, I can't fathom this being a hardware problem.  
>>
>> I can.  But I suppose it could also be in some unknown way a driver 
>> issue.
>> Even before ZFS, I've had numerous situations where various si3112 
>> and 3114 chips
>> would corrupt data on UFS and PCFS, with very simple  copy and checksum
>> test scripts, doing large bulk transfers.
>>
>> Si chips are best used to clean coffee grinders.  Go buy a real SATA 
>> controller.
>>
>> Neal
> I have no problem ponying up money for a better SATA controller.  I 
> saw a bunch of blog posts saying people had used the card successfully,
> so I thought maybe I had a bad card with corrupt firmware NVRAM.  Is
> it worth trying to trace down the bug?

Of course it is.  File a bug so someone on the SATA team can study it.

> If this type of corruption exists, nobody should be using this card.  
> As a side note, what SATA cards are people having luck with?

A lot of people are happy with the 8-port PCI SATA card made by SuperMicro
that has the Marvell chip on it.  Don't buy other Marvell cards on eBay,
because Marvell dumped a ton of cards that ended up with an earlier rev of
the silicon that can corrupt data.  But all the cards made and sold by
SuperMicro have the C rev or later silicon and work great.

That said, I wish someone would investigate the Silicon Image issues, but
there are only so many engineers, with so little time.
>>
>>> There have been quite a few blog posts out there with people having 
>>> a similar config and not having any problems.
>>>
>>> Here's what I've done so far:
>>> 1. Changed solaris releases from S10 U3 to NV 75a
>>> 2. Switched out motherboards and cpus from AMD sempron to a Celeron D
>>> 3. Switched out memory to use completely different dimms
>>> 4. Switched out sata drives (2-3 250gb hitachi's and seagates in 
>>> RAIDZ, 3x400GB seagates RAIDZ and 1x250GB hitachi with no raid)
>>>
>>> Here's output of a scrub and the status (ignore the date and time, I 
>>> haven't reset it on this new motherboard) and please point me in the 
>>> right direction if I'm barking up the wrong tree.
>>>
>>> # zpool scrub tank
>>> # zpool status
>>>   pool: tank
>>>  state: ONLINE
>>> status: One or more devices has experienced an error resulting in data
>>> corruption.  Applications may be affected.
>>> action: Restore the file in question if possible.  Otherwise restore 
>>> the
>>> entire pool from backup.
>>>see: http://www.sun.com/msg/ZFS-8000-8A
>>>  scrub: scrub completed with 140 errors on Sat Sep 15 02:07:35 2007
>>> config:
>>>
>>> NAMESTATE READ WRITE CKSUM
>>> tankONLINE   0 0   293
>>>   c0d1  ONLINE   0 0   293
>>>
>>> errors: 140 data errors, use '-v' for a list


Re: [zfs-discuss] zfs corruption w/ sil3114 sata controllers

2007-10-29 Thread James C. McPherson
Will Murnane wrote:
> On 10/30/07, Edward Saipetch <[EMAIL PROTECTED]> wrote:
>> As a side note, what SATA cards are people having luck with?
> Running b74, I'm very happy with the Marvell mv88sx6081-based Supermicro card:
> http://www.supermicro.com/products/accessories/addon/AoC-SAT2-MV8.cfm
> http://www.newegg.com/Product/Product.aspx?Item=N82E16815121009&Tpk=aoc-sat2
> http://www.wiredzone.com/xq/asp/ic.10016527/qx/itemdesc.htm
> It hypothetically supports port multipliers, but I haven't tested this myself.
> 
> On earlier releases (b69, specifically) I had problems with disks
> occasionally disappearing.  Those appear to have been completely
> resolved; the box has most recently been up for 16 days with no
> errors.

We don't currently have support for SATA port multipliers in
Solaris or OpenSolaris. I know this because people in my team
are working on it (no ETA as yet) and we discussed it last week.



James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems


Re: [zfs-discuss] zfs corruption w/ sil3114 sata controllers

2007-10-29 Thread Will Murnane
On 10/30/07, Edward Saipetch <[EMAIL PROTECTED]> wrote:
> As a side note, what SATA cards are people having luck with?
Running b74, I'm very happy with the Marvell mv88sx6081-based Supermicro card:
http://www.supermicro.com/products/accessories/addon/AoC-SAT2-MV8.cfm
http://www.newegg.com/Product/Product.aspx?Item=N82E16815121009&Tpk=aoc-sat2
http://www.wiredzone.com/xq/asp/ic.10016527/qx/itemdesc.htm
It hypothetically supports port multipliers, but I haven't tested this myself.

On earlier releases (b69, specifically) I had problems with disks
occasionally disappearing.  Those appear to have been completely
resolved; the box has most recently been up for 16 days with no
errors.

Will


Re: [zfs-discuss] zfs corruption w/ sil3114 sata controllers

2007-10-29 Thread MC
> Here's what I've done so far:

The obvious thing to test is the drive controller, so maybe you should do that :)
 
 


Re: [zfs-discuss] zfs corruption w/ sil3114 sata controllers

2007-10-29 Thread Edward Saipetch
Neal Pollack wrote:
> Ed Saipetch wrote:
>> Hello,
>>
>> I'm experiencing major checksum errors when using a syba silicon  
>> image 3114 based pci sata controller w/ nonraid firmware.  I've  
>> tested by copying data via sftp and smb.  With everything I've  
>> swapped out, I can't fathom this being a hardware problem.
>
> I can.  But I suppose it could also be in some unknown way a driver  
> issue.
> Even before ZFS, I've had numerous situations where various si3112  
> and 3114 chips
> would corrupt data on UFS and PCFS, with very simple  copy and  
> checksum
> test scripts, doing large bulk transfers.
>
> Si chips are best used to clean coffee grinders.  Go buy a real SATA  
> controller.
>
> Neal

I have no problem ponying up money for a better SATA controller.  I saw
a bunch of blog posts saying people had used the card successfully, so I
thought maybe I had a bad card with corrupt firmware NVRAM.  Is it worth
trying to trace down the bug?  If this type of corruption exists, nobody
should be using this card.  As a side note, what SATA cards are people
having luck with?

>
>> There have been quite a few blog posts out there with people having  
>> a similar config and not having any problems.
>>
>> Here's what I've done so far:
>> 1. Changed solaris releases from S10 U3 to NV 75a
>> 2. Switched out motherboards and cpus from AMD sempron to a Celeron D
>> 3. Switched out memory to use completely different dimms
>> 4. Switched out sata drives (2-3 250gb hitachi's and seagates in  
>> RAIDZ, 3x400GB seagates RAIDZ and 1x250GB hitachi with no raid)
>>
>> Here's output of a scrub and the status (ignore the date and time,  
>> I haven't reset it on this new motherboard) and please point me in  
>> the right direction if I'm barking up the wrong tree.
>>
>> # zpool scrub tank
>> # zpool status
>>  pool: tank
>> state: ONLINE
>> status: One or more devices has experienced an error resulting in  
>> data
>>corruption.  Applications may be affected.
>> action: Restore the file in question if possible.  Otherwise  
>> restore the
>>entire pool from backup.
>>   see: http://www.sun.com/msg/ZFS-8000-8A
>> scrub: scrub completed with 140 errors on Sat Sep 15 02:07:35 2007
>> config:
>>
>>NAMESTATE READ WRITE CKSUM
>>tankONLINE   0 0   293
>>  c0d1  ONLINE   0 0   293
>>
>> errors: 140 data errors, use '-v' for a list


Re: [zfs-discuss] zfs corruption w/ sil3114 sata controllers

2007-10-29 Thread Neal Pollack
Ed Saipetch wrote:
> Hello,
>
> I'm experiencing major checksum errors when using a syba silicon image 3114 
> based pci sata controller w/ nonraid firmware.  I've tested by copying data 
> via sftp and smb.  With everything I've swapped out, I can't fathom this 
> being a hardware problem.  

I can.  But I suppose it could also be in some unknown way a driver issue.
Even before ZFS, I've had numerous situations where various si3112 and 
3114 chips
would corrupt data on UFS and PCFS, with very simple  copy and checksum
test scripts, doing large bulk transfers.

Si chips are best used to clean coffee grinders.  Go buy a real SATA 
controller.

Neal

> There have been quite a few blog posts out there with people having a similar 
> config and not having any problems.
>
> Here's what I've done so far:
> 1. Changed solaris releases from S10 U3 to NV 75a
> 2. Switched out motherboards and cpus from AMD sempron to a Celeron D
> 3. Switched out memory to use completely different dimms
> 4. Switched out sata drives (2-3 250gb hitachi's and seagates in RAIDZ, 
> 3x400GB seagates RAIDZ and 1x250GB hitachi with no raid)
>
> Here's output of a scrub and the status (ignore the date and time, I haven't 
> reset it on this new motherboard) and please point me in the right direction 
> if I'm barking up the wrong tree.
>
> # zpool scrub tank
> # zpool status
>   pool: tank
>  state: ONLINE
> status: One or more devices has experienced an error resulting in data
> corruption.  Applications may be affected.
> action: Restore the file in question if possible.  Otherwise restore the
> entire pool from backup.
>see: http://www.sun.com/msg/ZFS-8000-8A
>  scrub: scrub completed with 140 errors on Sat Sep 15 02:07:35 2007
> config:
>
> NAMESTATE READ WRITE CKSUM
> tankONLINE   0 0   293
>   c0d1  ONLINE   0 0   293
>
> errors: 140 data errors, use '-v' for a list


Re: [zfs-discuss] zfs corruption w/ sil3114 sata controllers

2007-10-29 Thread Nathan Kroenert
You have not mentioned whether you have swapped the 3114-based HBA itself...?

Have you tried a different HBA? :)

Nathan.

Ed Saipetch wrote:
> Hello,
> 
> I'm experiencing major checksum errors when using a syba silicon image 3114 
> based pci sata controller w/ nonraid firmware.  I've tested by copying data 
> via sftp and smb.  With everything I've swapped out, I can't fathom this 
> being a hardware problem.  There have been quite a few blog posts out there 
> with people having a similar config and not having any problems.
> 
> Here's what I've done so far:
> 1. Changed solaris releases from S10 U3 to NV 75a
> 2. Switched out motherboards and cpus from AMD sempron to a Celeron D
> 3. Switched out memory to use completely different dimms
> 4. Switched out sata drives (2-3 250gb hitachi's and seagates in RAIDZ, 
> 3x400GB seagates RAIDZ and 1x250GB hitachi with no raid)
> 
> Here's output of a scrub and the status (ignore the date and time, I haven't 
> reset it on this new motherboard) and please point me in the right direction 
> if I'm barking up the wrong tree.
> 
> # zpool scrub tank
> # zpool status
>   pool: tank
>  state: ONLINE
> status: One or more devices has experienced an error resulting in data
> corruption.  Applications may be affected.
> action: Restore the file in question if possible.  Otherwise restore the
> entire pool from backup.
>see: http://www.sun.com/msg/ZFS-8000-8A
>  scrub: scrub completed with 140 errors on Sat Sep 15 02:07:35 2007
> config:
> 
> NAMESTATE READ WRITE CKSUM
> tankONLINE   0 0   293
>   c0d1  ONLINE   0 0   293
> 
> errors: 140 data errors, use '-v' for a list


Re: [zfs-discuss] zfs corruption -- odd inum?

2007-02-11 Thread Joe Little

On 2/11/07, Jeff Bonwick <[EMAIL PROTECTED]> wrote:

The object number is in hex.  21e382 hex is 2220930 decimal --
give that a whirl.

This is all better now thanks to some recent work by Eric Kustarz:

6410433 'zpool status -v' would be more useful with filenames

This was integrated into Nevada build 57.

Jeff

On Sat, Feb 10, 2007 at 05:18:05PM -0800, Joe Little wrote:
> So, I am attempting to find the inode from the result of a "zpool status -v":
>
> errors: The following persistent errors have been detected:
>
>  DATASET  OBJECT  RANGE
>  cc   21e382  lvl=0 blkid=0
>
>
> Well, 21e382 appears not to be a valid number for "find . -inum blah"
>
> Any suggestions?


OK... but using the hex as suggested gave me an even odder error result
that I can't parse:

zdb -vvv tier2 0x21e382
   version=3
   name='tier2'
   state=0
   txg=353444
   pool_guid=3320175367383032945
   vdev_tree
   type='root'
   id=0
   guid=3320175367383032945
   children[0]
   type='disk'
   id=0
   guid=1858965616559880189
   path='/dev/dsk/c3t4d0s0'
   devid='id1,[EMAIL PROTECTED]/a'
   whole_disk=1
   metaslab_array=16
   metaslab_shift=33
   ashift=9
   asize=1500336095232
   children[1]
   type='disk'
   id=1
   guid=2406851811694064278
   path='/dev/dsk/c3t5d0s0'
   devid='id1,[EMAIL PROTECTED]/a'
   whole_disk=1
   metaslab_array=13
   metaslab_shift=33
   ashift=9
   asize=1500336095232
   children[2]
   type='disk'
   id=2
   guid=4840324923103758504
   path='/dev/dsk/c3t6d0s0'
   devid='id1,[EMAIL PROTECTED]/a'
   whole_disk=1
   metaslab_array=4408
   metaslab_shift=33
   ashift=9
   asize=1500336095232
   children[3]
   type='disk'
   id=3
   guid=18356839793156279878
   path='/dev/dsk/c3t7d0s0'
   devid='id1,[EMAIL PROTECTED]/a'
   whole_disk=1
   metaslab_array=4407
   metaslab_shift=33
   ashift=9
   asize=1500336095232
Uberblock

   magic = 00bab10c
   version = 3
   txg = 2834960
   guid_sum = 12336413438187464178
   timestamp = 1171223485 UTC = Sun Feb 11 11:51:25 2007
   rootbp = [L0 DMU objset] 400L/200P DVA[0]=<2:3aa12a3600:200>
DVA[1]=<3:378957f000:200> DVA[2]=<0:7d2312f200:200> fletcher4 lzjb LE
contiguous birth=2834960 fill=3672
cksum=f65361601:5b3233d8018:117d616a33b47:24feff94a90701

Dataset mos [META], ID 0, cr_txg 4, 294M, 3672 objects, rootbp [L0 DMU
objset] 400L/200P DVA[0]=<2:3aa12a3600:200> DVA[1]=<3:378957f000:200>
DVA[2]=<0:7d2312f200:200> fletcher4 lzjb LE contiguous birth=2834960
fill=3672 cksum=f65361601:5b3233d8018:117d616a33b47:24feff94a90701

   Object  lvl   iblk   dblk  lsize  asize  type
zdb: dmu_bonus_hold(2220930) failed, errno 2





Re: [zfs-discuss] zfs corruption -- odd inum?

2007-02-11 Thread Tim Foster

Hi Joe,

Joe Little wrote:

So, I am attempting to find the inode from the result of a "zpool status -v":

errors: The following persistent errors have been detected:

 DATASET  OBJECT  RANGE
 cc   21e382  lvl=0 blkid=0

Well, 21e382 appears not to be a valid number for "find . -inum blah"


It's not an inode, it's a ZFS object -- see this thread:

http://www.opensolaris.org/jive/thread.jspa?messageID=39450&tstart=0

for info on how to track it down.

There was a recent putback by EricK for 6410433 which makes zpool
status -v a bit more useful in the future.

Hope this helps?

cheers,
tim


--
Tim Foster, Sun Microsystems Inc, Solaris Engineering Ops
  http://blogs.sun.com/timf


Re: [zfs-discuss] zfs corruption -- odd inum?

2007-02-11 Thread Tomas Ögren
On 10 February, 2007 - Joe Little sent me these 0,4K bytes:

> So, I am attempting to find the inode from the result of a "zpool status -v":
> 
> errors: The following persistent errors have been detected:
> 
>  DATASET  OBJECT  RANGE
>  cc   21e382  lvl=0 blkid=0
> 
> 
> Well, 21e382 appears not to be a valid number for "find . -inum blah"

Looks very hexadecimal to me.. Try 2220930 instead.

/Tomas
-- 
Tomas Ögren, [EMAIL PROTECTED], http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se


Re: [zfs-discuss] zfs corruption -- odd inum?

2007-02-11 Thread Jeff Bonwick
The object number is in hex.  21e382 hex is 2220930 decimal --
give that a whirl.
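
A quick way to do the conversion in the shell, and then run the find (the
mount point for dataset 'cc' below is a placeholder):

# printf '%d\n' 0x21e382
2220930
# find /cc -inum 2220930 -print

Note that this only finds objects visible in the filesystem namespace;
internal metadata objects won't show up this way.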

This is all better now thanks to some recent work by Eric Kustarz:

6410433 'zpool status -v' would be more useful with filenames

This was integrated into Nevada build 57.

Jeff

On Sat, Feb 10, 2007 at 05:18:05PM -0800, Joe Little wrote:
> So, I am attempting to find the inode from the result of a "zpool status -v":
> 
> errors: The following persistent errors have been detected:
> 
>  DATASET  OBJECT  RANGE
>  cc   21e382  lvl=0 blkid=0
> 
> 
> Well, 21e382 appears not to be a valid number for "find . -inum blah"
> 
> Any suggestions?


Re: [zfs-discuss] ZFS Corruption

2006-12-12 Thread eric kustarz

Bill Casale wrote:

Please reply directly to me. I am seeing the message below.

Is it possible to determine exactly which file is corrupted?
I was thinking the OBJECT/RANGE info may be pointing to it
but I don't know how to equate that to a file.


This is bug:
6410433 'zpool status -v' would be more useful with filenames

and I'm actually working on it right now!

eric




# zpool status -v
  pool: u01
 state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
u01 ONLINE   0 0 6
  c1t102d0  ONLINE   0 0 6

errors: The following persistent errors have been detected:

  DATASET  OBJECT   RANGE
  u01  4741362  600178688-600309760



Thanks,
Bill






Re: [zfs-discuss] ZFS Corruption

2006-12-12 Thread George Wilson

Bill,

If you want to find the file associated with the corruption, you could do
a "find /u01 -inum 4741362", or use the output of "zdb -d u01" to
find the object associated with that id.
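
For example (the object number comes from the zpool status output quoted
below; the extra -d flags raise zdb's verbosity, but double-check them
against your build):

# find /u01 -inum 4741362 -print
# zdb -dddd u01 4741362

The find works because, as far as I know, for a plain file the ZFS object
number is what stat and find see as the inode number.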


Thanks,
George

Bill Casale wrote:

Please reply directly to me. I am seeing the message below.

Is it possible to determine exactly which file is corrupted?
I was thinking the OBJECT/RANGE info may be pointing to it
but I don't know how to equate that to a file.


# zpool status -v
  pool: u01
 state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: none requested
config:

NAMESTATE READ WRITE CKSUM
u01 ONLINE   0 0 6
  c1t102d0  ONLINE   0 0 6

errors: The following persistent errors have been detected:

  DATASET  OBJECT   RANGE
  u01  4741362  600178688-600309760



Thanks,
Bill





Re: [zfs-discuss] ZFS Corruption

2006-12-12 Thread Robert Milkowski
Hello Bill,

Tuesday, December 12, 2006, 2:34:01 PM, you wrote:

BC> Please reply directly to me. I am seeing the message below.

BC> Is it possible to determine exactly which file is corrupted?
BC> I was thinking the OBJECT/RANGE info may be pointing to it
BC> but I don't know how to equate that to a file.


BC> # zpool status -v
BC>pool: u01
BC>   state: ONLINE
BC> status: One or more devices has experienced an error resulting in data
BC>  corruption.  Applications may be affected.
BC> action: Restore the file in question if possible.  Otherwise restore the
BC>  entire pool from backup.
BC> see: http://www.sun.com/msg/ZFS-8000-8A
BC>   scrub: none requested
BC> config:

BC>  NAMESTATE READ WRITE CKSUM
BC>  u01 ONLINE   0 0 6
BC>c1t102d0  ONLINE   0 0 6

BC> errors: The following persistent errors have been detected:

BC>DATASET  OBJECT   RANGE
BC>u01  4741362  600178688-600309760
^^^

This is the inode number, so just use find to locate the file.

There's an RFE for this, so eventually zpool status will give you actual
file names.



-- 
Best regards,
 Robertmailto:[EMAIL PROTECTED]
   http://milek.blogspot.com
