I'm hoping someone can help me understand a ZFS data corruption symptom. We
have a zpool with checksum turned off. zpool status shows that data corruption
occurred. The application using the pool at the time reported a "read" error,
and zpool status (see below) shows 2 read errors on a device.
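With checksums disabled, ZFS can still report I/O errors returned by the device,
but it cannot verify data integrity on its own. If it helps, this is roughly how
you would inspect and re-enable the property (the pool name "tank" is a placeholder):

    # Show the current checksum setting for every dataset in the pool
    zfs get -r checksum tank

    # Re-enable checksumming; this only protects blocks written from now on
    zfs set checksum=on tank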
I have a zfs filesystem on a simple stripe pool which reported a read error to
an application using it and zpool status shows the following...
status: One or more devices has experienced an error resulting in data
corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
entire pool from backup.
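In case it helps, zpool status -v lists the individual files with permanent
errors, and zpool clear resets the error counters once the underlying fault has
been addressed (the pool name "tank" is a placeholder):

    # List the files (or metadata objects) that have permanent errors
    zpool status -v tank

    # After restoring the affected files, clear the pool's error counters
    zpool clear tank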
I don't think this is so much a ZFS problem as an iSCSI initiator
problem. Are you using static configs or SendTargets discovery? There
are many reports of SendTargets discovery misbehaving in the
storage-discuss forum.
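For what it's worth, you can check which discovery method the initiator is using
and switch to a static config with iscsiadm; the target IQN and address below are
placeholders:

    # Show which discovery methods are currently enabled
    iscsiadm list discovery

    # Add a static target entry (IQN and IP:port are placeholders)
    iscsiadm add static-config iqn.1986-03.com.sun:02:example-target,192.168.1.10:3260

    # Enable static discovery and turn off SendTargets
    iscsiadm modify discovery --static enable
    iscsiadm modify discovery --sendtargets disable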
To recover:
1. Boot into single user from CD
2. Mount the root slice on /a (rough sketch below)
3.
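A minimal sketch of steps 1 and 2, assuming a SPARC box and a root slice on a
hypothetical device such as c0t0d0s0:

    # At the OpenBoot "ok" prompt, boot single-user from the install CD
    boot cdrom -s

    # Mount the root slice read/write on /a (device name is a placeholder)
    mount /dev/dsk/c0t0d0s0 /a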
Has there been any discussion here about the idea of integrating a virtual IP into
ZFS? It makes sense to me because of the integration of NFS and iSCSI via the
sharenfs and shareiscsi properties. Since these are both dependent on an IP, it
would be pretty cool if there was also a virtual IP that w
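For context, those properties are set per dataset: sharenfs on filesystems and
shareiscsi on volumes. The dataset and volume names here are placeholders:

    # Share a filesystem over NFS by setting its dataset property
    zfs set sharenfs=on tank/export/home

    # Create a volume and export it as an iSCSI target
    zfs create -V 10g tank/vols/vol1
    zfs set shareiscsi=on tank/vols/vol1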
Hi All,
Just curious about how the incremental send works. Is it changed blocks or
changed files, and how are they identified?
Regards,
Vic
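For reference, an incremental send streams only the blocks that changed between
two snapshots (ZFS tracks block birth times, so it does not need to compare
files). Snapshot, pool, and host names below are placeholders:

    # Take two snapshots some time apart
    zfs snapshot tank/data@snap1
    zfs snapshot tank/data@snap2

    # Send only what changed between snap1 and snap2 to another pool
    zfs send -i tank/data@snap1 tank/data@snap2 | ssh backuphost zfs receive backup/data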
Just a quick question. If I create a raidz pool but later find that I need
more space, I can add another raidz set to the pool, but what happens to the data
already in the pool? Does a relayout occur, or does ZFS work towards balancing
I/O across the two raidz sets only as new data is written?
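For illustration, adding a second raidz vdev looks like this (pool and disk names
are placeholders). Existing data is not re-laid out; ZFS simply spreads new
allocations across both vdevs, favoring the emptier one:

    # Original pool built from one raidz vdev
    zpool create tank raidz c1t0d0 c1t1d0 c1t2d0

    # Later, grow the pool by adding a second raidz vdev
    zpool add tank raidz c2t0d0 c2t1d0 c2t2d0

    # Both vdevs now show up in the pool layout
    zpool status tank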