I require a new high-capacity 8-disk zpool. The disks I will be
purchasing (Samsung or Hitachi) are 2TB and have a non-recoverable
read error rate of 1 in 10^14 bits. I'm staying clear of WD because
their new 4KB sectors don't play nice with ZFS at the moment.
My question is, how do I determine which of the following zpool and
vdev configurations I should run to maximize space while mitigating
rebuild failure risk?
1. 2x RAIDZ(3+1) vdev
2. 1x RAIDZ(7+1) vdev
3. 1x RAIDZ2(6+2) vdev
I just want
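The trade-off between those three layouts can be roughed out numerically. A back-of-the-envelope sketch (assuming 2TB ≈ 2×10^12 bytes and treating the 1-in-10^14 bit error rate as independent per bit, which is optimistic) of the chance of hitting at least one unrecoverable read error during a resilver:

```shell
# P(>=1 URE) = 1 - (1 - 1e-14)^bits_read ~= 1 - exp(-bits_read * 1e-14)
# RAIDZ(7+1): a rebuild must read the 7 surviving 2TB disks in full
awk 'BEGIN { printf "RAIDZ 7+1: %.2f\n", 1 - exp(-7 * 2e12 * 8 * 1e-14) }'
# RAIDZ(3+1): only the 3 surviving disks in the affected vdev are read
awk 'BEGIN { printf "RAIDZ 3+1: %.2f\n", 1 - exp(-3 * 2e12 * 8 * 1e-14) }'
```

By this estimate a 7+1 rebuild reads roughly 1.12×10^14 bits and hits a URE with probability ~0.67, versus ~0.38 for a 3+1 vdev. RAIDZ2(6+2) reads as much as 7+1 during a rebuild, but a single URE is still recoverable from the second parity, which is the usual argument for option 3.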
Hi Benji,
I did read your blog before posting this, but it didn't have the
exact answer I was looking for. OS version is Solaris 10 U9 x86.
Your blog was highly informative, but it didn't say whether you can
zpool replace onto a 4KB-sector drive -- it mostly discussed the
ability to detect the WD
Hi ZFS Discuss,
I have an 8x 1TB RAIDZ running on Samsung 1TB 5400rpm drives with 512-byte sectors.
I will be replacing all of these with 8x Western Digital 2TB drives
with support for 4K sectors. The replacement plan will be to swap out
each of the 8 drives until all are replaced and the new size
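A sketch of that drive-by-drive swap, assuming a pool named tank and illustrative c6tXd0 device names (both hypothetical; substitute your own):

```shell
# Replace one disk at a time and let each resilver finish before the next.
for disk in c6t0d0 c6t1d0 c6t2d0 c6t3d0 c6t4d0 c6t5d0 c6t6d0 c6t7d0; do
  zpool replace tank $disk          # new drive takes over the old one's slot
  while zpool status tank | grep -q 'resilver in progress'; do
    sleep 300                       # poll until the resilver completes
  done
done
```

Note that a vdev's ashift is fixed when the vdev is created, so replacing the 512-byte-sector disks one by one does not by itself switch the pool to 4K-aligned writes.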
Assume I have a zpool consisting of a simple 2-disk mirror.
How do I attach a third disk (disk3) to this zpool to mirror the existing
data, then split this mirror and remove disk0 and disk1, leaving a single-disk
zpool consisting of the new disk3? In other words, online data migration.
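In outline (pool name tank and the disk names are placeholders), this can be done with one attach followed by two detaches:

```shell
# Grow the mirror to three sides, then peel off the two old disks
# once disk3 holds a full copy of the data.
zpool attach tank disk1 disk3      # disk3 resilvers as a third mirror side
# wait until "zpool status tank" reports the resilver complete, then:
zpool detach tank disk0
zpool detach tank disk1            # pool is now a single-disk vdev on disk3
```

For comparison, `zpool split` (where available) creates a *new* pool from the detached half of a mirror; the attach/detach sequence above keeps the original pool's identity, which is what the question asks for.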
0h0m with 0 errors on Thu Mar 26 19:55:24 2009
config:

        NAME             STATE     READ WRITE CKSUM
        test             ONLINE       0     0     0
          /export/disk3  ONLINE       0     0     0  71.5K resilvered

errors: No known data errors
Hi there,
Is there a way to get as much data as possible off an existing, slightly
corrupted zpool? I have a 2-disk stripe which I'm moving to new storage. I
will be moving it to a ZFS mirror, but at the moment I'm having problems
with ZFS panicking the system during a zfs send | zfs recv.
I don't
Hello
I found myself in a curious situation regarding the state of a zpool inside
a VMware guest. I've run into CKSUM errors on the below infrastructure
stack.
Hitachi (HDS) 9570V SAN, FC Disks
SUN X4600 M2 (16 Core, 32GB Memory)
VMWare ESXi 3.5 U3
Single Extended Datastore, 4x 35GB FC
Hello,
We recently had SAN corruption (hard power outage), and we lost a few
transactions that were waiting to be written to real disk. The end result, as
we all know, is CKSUM errors on the zpool from a scrub, and we also had a few
corrupted files reported by ZFS.
My question is, what is the
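For what it's worth, the usual cleanup sequence after this kind of event looks like the following (pool name tank assumed):

```shell
zpool status -v tank     # -v lists the files with unrecoverable errors
# restore or delete each listed file from backup, then:
zpool clear tank         # reset the pool's error counters
zpool scrub tank         # re-verify every block against its checksum
```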
Another update.
Weekly cron kicked in again this week, but this time it failed with a lot
of CKSUM errors and also complained about corrupted files. The single
file it complained about is a new one I
The original disk failure was very explicit: high read errors and errors
in /var/adm/messages.
Since I replaced the disk, however, these have all gone and the resilver
completed okay. I am not seeing any read/write errors or /var/adm/messages
entries -- but for some reason I am seeing errors inside the
After performing the following steps in exact order, I am now seeing CKSUM
errors in my zpool. I've never seen any checksum errors in the zpool
before.
1. Existing running setup (RAIDZ 7D+1P), 8x 1TB. Solaris 10 Update 3
x86.
2. Disk 6 (c6t2d0) was dying; zpool status showed read errors, and