[zfs-discuss] zfs data corruption

2008-04-23 Thread Vic Engle
I'm hoping someone can help me understand a ZFS data corruption symptom. We have a zpool with checksums turned off. Zpool status shows that data corruption occurred. The application using the pool at the time reported a "read" error, and zpool status (see below) shows 2 read errors on a device. The
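With checksums disabled, ZFS cannot detect silent corruption itself; only device-level read errors surface in the counters. A minimal sketch of how to inspect the situation, assuming a pool named "tank" (the pool name is a placeholder, not from the original post):

```shell
# Check whether checksums are enabled on the pool's datasets
zfs get -r checksum tank

# Show per-device error counters and list any files flagged as corrupt
zpool status -v tank

# Re-enable checksums for future writes -- note this only protects
# newly written blocks; existing data remains unverifiable
zfs set checksum=on tank
```

Note that with checksum=off, a scrub can verify redundancy but cannot tell good data from bad, which is why the read error surfaced only through the application.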

[zfs-discuss] ZFS read error

2008-03-21 Thread Vic Engle
I have a zfs filesystem on a simple stripe pool which reported a read error to an application using it and zpool status shows the following... status: One or more devices has experienced an error resulting in data corruption. Applications may be affected. action: Restore the file in ques

Re: [zfs-discuss] ZFS hang and boot hang when iSCSI device removed

2008-02-05 Thread Vic Engle
I don't think this is so much a ZFS problem as an iSCSI initiator problem. Are you using static configs or SendTargets discovery? There are many reports of SendTargets discovery misbehavior in the storage-discuss forum. To recover: 1. Boot into single user from CD 2. mount the root slice on /a 3.
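The recovery steps above are truncated, but the discovery-mode question can be checked directly with the Solaris iSCSI initiator tooling. A sketch, assuming Solaris iscsiadm; the target IQN and address are placeholders:

```shell
# Inspect which discovery methods are currently enabled
iscsiadm list discovery

# List any configured SendTargets discovery addresses
iscsiadm list discovery-address

# If SendTargets discovery is misbehaving, switch to static configs:
iscsiadm modify discovery --sendtargets disable
iscsiadm add static-config iqn.1986-03.com.example:target0,192.168.1.10
iscsiadm modify discovery --static enable
```

Static configuration avoids the discovery step at boot, which is one way an unreachable discovery address can stop hanging the boot path.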

[zfs-discuss] Virtual IP Integration

2007-06-15 Thread Vic Engle
Has there been any discussion here about the idea of integrating a virtual IP into ZFS? It makes sense to me because of the integration of NFS and iSCSI with the sharenfs and shareiscsi properties. Since these are both dependent on an IP, it would be pretty cool if there was also a virtual IP that w
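To illustrate the proposal: the existing share properties are set per dataset, and a virtual IP could follow the same pattern. A purely hypothetical sketch; "shareip" is not a real ZFS property, it only mirrors the style of sharenfs/shareiscsi to show the idea:

```shell
# Existing, real properties:
zfs set sharenfs=on tank/home        # export over NFS
zfs set shareiscsi=on tank/vol1      # expose as an iSCSI target

# Hypothetical property sketched from the proposal (NOT real):
zfs set shareip=192.168.1.50 tank    # virtual IP that would move
                                     # with the pool on failover/import
```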

[zfs-discuss] zfs send/receive incremental

2007-06-02 Thread Vic Engle
Hi All, Just curious about how the incremental send works. Is it changed blocks or files, and how are the changed blocks or files identified? Regards, Vic
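Incremental send operates on blocks, not files: ZFS walks the later snapshot's block tree and transmits only blocks whose birth time is newer than the base snapshot, so unchanged subtrees are skipped without being read. A sketch with placeholder pool and dataset names:

```shell
# Take a base snapshot, then a later one after some writes
zfs snapshot tank/data@monday
# ... application writes happen here ...
zfs snapshot tank/data@tuesday

# Full stream of the base snapshot to a backup pool
zfs send tank/data@monday | zfs receive backup/data

# Incremental stream: only blocks born after @monday are sent
zfs send -i tank/data@monday tank/data@tuesday | zfs receive backup/data
```

Because block birth times are recorded in the on-disk tree, identifying the delta is cheap relative to a file-level comparison such as rsync.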

[zfs-discuss] zpool relayout

2007-05-31 Thread Vic Engle
Just a quick question. If I create a raidz pool but later find that I need more space, I can add another raidz set to the pool, but what happens to the data already in the pool? Does a relayout occur, or does ZFS work towards balancing I/O across the 2 raidz sets only as new data is wr
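The answer, at least for ZFS of this era, is the latter: adding a vdev triggers no relayout of existing data; ZFS simply biases new allocations toward the emptier vdev until usage evens out. A sketch with example device names:

```shell
# Original pool: one raidz vdev
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0

# Later: add a second raidz vdev -- existing data stays where it is;
# new writes are spread across both vdevs, favoring the emptier one
zpool add tank raidz c2t0d0 c2t1d0 c2t2d0

zpool status tank    # shows both raidz top-level vdevs
```

One consequence: reads of old data still hit only the first vdev, so the pool's I/O only balances gradually as data is rewritten.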