I've had an interesting time with this over the past few days ...

After the resilver completed, zpool status reported "no known data errors".

I guess the title of my post should have been "how permanent are permanent 
errors?". I don't know whether completing the resilver was what fixed the one 
remaining error (in the snapshot of the 'meerkat' zvol), or whether my looped 
zpool clear commands did it (roughly the loop sketched below). Anyhow, for 
space/noise reasons, I set the machine back up with the original eSATA cables, 
in its original tucked-away position, installed SXCE 119 to get me remotely 
up to date, and imported the pool.
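
For reference, the clear loop was nothing sophisticated - something along 
these lines (the pool name is real, the five-minute interval is just what I 
happened to pick):

# crude loop left running in another terminal during the resilver;
# it resets the per-device error counters so transient (cabling) errors
# don't keep accumulating
while true; do
    zpool clear zp
    sleep 300
done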

So far so good. I then powered up a load of my virtual machines. None of them 
reported errors from chkdsk, and SQL Server's 'DBCC CHECKDB' hasn't reported 
any problems yet. Things are looking promising on the corruption front - it 
feels like the errors that were reported while the resilvers were in progress 
have finally been fixed by the final (successful) resilver. Microsoft Exchange 
2003 did complain of corruption in its mailbox stores, but I have seen that a 
few times before as a result of unclean shutdowns, and I don't think it's 
related to the errors ZFS was reporting on the pool during the resilver.

Then: 'disk is gone' again. I think I can definitely put my original troubles 
down to cabling, which I'll sort out for good in the next few days. For now, 
I'm back on the same SATA cables that saw me through the resilvering operation.

One of the drives is now showing read errors in dmesg. I'm having one problem 
after another with this pool! I suspect the heavy I/O during the resilver has 
tipped this disk over the edge. I'll replace it ASAP (roughly as sketched 
below), then test the drive in a separate rig and RMA it.
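
The plan for the failing drive, for the record (the device name below is a 
placeholder, not the actual disk):

# per-device soft/hard/transport error counters, to pin down which disk it is
iostat -En

# per-vdev read/write/checksum counts and any affected files
zpool status -v zp

# then swap the disk and resilver onto the replacement
zpool replace zp c0t3d0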

Anyhow, there is one last thing I'm struggling with - getting the pool to 
expand to use the full size of the new disk. Before my original replace, I had 
3x 1TB disks and 1x 750GB disk. I replaced the 750GB with another 1TB, which 
by my reckoning should give me around 4TB of total size even after checksums 
and metadata. No:

# zpool list
NAME    SIZE   USED  AVAIL    CAP  HEALTH  ALTROOT
rpool    74G  8.81G  65.2G    11%  ONLINE  -
zp     2.73T  2.36T   379G    86%  ONLINE  -

2.73T? I'm convinced I've expanded a pool in this way before. What am I missing?
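
For what it's worth, 2.73T is roughly what 4x 750GB comes to, so it looks like 
the vdev is still sized to the old smallest disk. The things I'm planning to 
try, from what I've read rather than anything I've verified yet (device name 
is a placeholder again):

# newer builds (the autoexpand property appeared around snv_117) gate
# expansion behind a pool property
zpool set autoexpand=on zp

# or expand the replaced device explicitly
zpool online -e zp c0t3d0

# on older builds an export/import cycle is what picked up the new size
zpool export zp
zpool import zp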

Chris