From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Yaverot
rpool remains 1% in use; tank reports 100% full (with 1.44G free).
I recommend:
When creating your new pool, use slices of the new disks that are 99% of
the size of the disks, rather than the whole disks.
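A minimal sketch of that sizing advice, with a made-up disk size and placeholder device names (the thread doesn't name the actual disks). Undersizing the slice slightly guards against a future replacement disk that is a few sectors smaller than the originals:

```shell
# Illustrative only; the disk size and device names below are assumptions.
DISK_SECTORS=3907029168                     # e.g. a nominal 2 TB disk
SLICE_SECTORS=$((DISK_SECTORS * 99 / 100))  # target size for slice 0
echo "slice 0: $SLICE_SECTORS sectors"

# After sizing s0 on each disk with format(1M) or fmthard(1M), build the
# pool from the slices instead of the whole disks, e.g.:
#   zpool create newtank mirror c1t0d0s0 c1t1d0s0
```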
On Mar 5, 2011, at 9:14 PM, Yaverot wrote:
I'm (still) running snv_134 on a home server. My main pool, tank, filled up
last night (1G free remaining).
So today I bought new drives, adding them one at a time and running format
between each addition to see what name each one received.
We're heading into the 3rd hour of the zpool destroy on others.
The system isn't locked up, as it responds to local keyboard input, and
I bet you're in a semi-crashed state
There is (or was) a bug that would sometimes cause the system to crash
Why wouldn't they try a reboot -d? That would at least get some data in
the form of a crash dump if at all possible...
A power cycle seems a little medieval to me... At least in the first
instance.
The other thing I have noted is that sometimes things do get wedged, and
if you can find
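For reference, the reboot -d route suggested above could look roughly like the following on Solaris/OpenSolaris. The commands are echoed via a helper rather than executed, since they reboot or dump the machine:

```shell
# Dry-run helper: print each admin command instead of running it.
run() { echo "+ $*"; }

run reboot -d      # force a crash dump, then reboot
run dumpadm        # afterwards: show the dump device and savecore directory
run savecore -L    # alternative: take a live dump without rebooting
```

The dump itself is extracted by savecore(1M) on the next boot; dumpadm(1M) shows where it lands.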
Follow-up, and current status:
In the morning I cut power (before receiving the 4 replies). Turning it on
again, I got too impatient waiting for a text screen for diagnostics, and
overfilled the keyboard buffer. I forced it off again (to stop the beeps),
then waited longer before
As I had a pair of names, I ran zpool create/add newtank mirror cxxt0do
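A hedged sketch of that create/add sequence, with placeholder device names (the real ones are garbled above). The first pair becomes the initial mirror vdev; each later pair is appended as another mirror, striped with the first. Commands are echoed, not executed, since they are destructive:

```shell
# Dry-run helper: print each admin command instead of running it.
run() { echo "+ $*"; }

run zpool create newtank mirror c2t0d0s0 c2t1d0s0   # first mirror pair
run zpool add newtank mirror c3t0d0s0 c3t1d0s0      # second pair, striped in
```

Data could then be migrated with zfs send/recv of a recursive snapshot of tank.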