> I'm more worried about the availability of my data in the event of a
> controller failure. I plan on using 4-chan SATA controllers and
> creating multiple 4 disk RAIDZ vdevs. I want to use a single pool, but
> it looks like I can't as controller failure = ZERO access, although the
> same can be
If I have a pool that is made up of 2 raidz vdevs, is all data striped across them?
So if I somehow lose a vdev I lose all my data?!
If your vdevs are RAID-Z's, a rare coincidence would have to happen
to break the pool (two disks failing in the same RAID-Z)...
But yeah, ZFS spreads blocks across all of the top-level vdevs, so if
you lose an entire vdev you lose the pool.
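As a hedged sketch of one way to address the controller-failure worry from the original post (all device names here are hypothetical), each RAID-Z vdev can take one disk from each of the four controllers; a controller failure then costs every vdev a single disk, degrading the pool rather than destroying a whole vdev:

```shell
# Hypothetical layout: four 4-channel controllers (c0..c3),
# two RAID-Z vdevs in one pool. Each vdev takes one disk from
# each controller, so a dead controller removes only one disk
# per vdev -- the pool degrades but stays online.
zpool create tank \
    raidz c0t0d0 c1t0d0 c2t0d0 c3t0d0 \
    raidz c0t1d0 c1t1d0 c2t1d0 c3t1d0

# Inspect the resulting layout:
zpool status tank
```

If instead each RAID-Z sat entirely on one controller, a controller failure would take all four disks of that vdev offline at once, and the pool with it.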
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensola
Bill Sommerfeld wrote:
> On Wed, 2007-09-05 at 14:26 -0700, Richard Elling wrote:
>
>> AFAIK, nobody has characterized resilvering, though this is about the 4th
>> time this week someone has brought the topic up. Has anyone done work here
>> that we don't know about? If so, please speak up :-)
I haven't been conducting controlled experiments.
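For anyone who does want to characterize resilvering themselves, progress can be watched from the command line; a rough sketch (the pool and device names are assumed, not from this thread):

```shell
# Trigger a resilver by replacing a disk (hypothetical devices):
zpool replace tank c2t0d0 c2t4d0

# 'zpool status' reports resilver progress and an estimated
# time remaining while the operation runs:
zpool status -v tank

# Timestamped polling gives a crude resilver-rate measurement:
while true; do
    date
    zpool status tank | grep -i resilver
    sleep 60
done
```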
Solaris wrote:
> Is it possible to force ZFS to "nicely" re-organize data inside a zpool
> after a new root level vdev has been introduced?
Currently, ZFS will not reorganize the existing data for such cases.
You can force this to occur by copying the data and removing the old,
but that seems like a clumsy workaround.
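The copy-and-remove workaround mentioned above could be sketched roughly as follows (the dataset names are hypothetical); rewriting the data lets the allocator spread it over all vdevs, since new writes favor the emptier vdev:

```shell
# Copy the dataset within the pool so its blocks are
# reallocated across all current top-level vdevs:
zfs snapshot tank/data@rebalance
zfs send tank/data@rebalance | zfs receive tank/data.new

# Once the copy is verified, swap the datasets and
# remove the old, unbalanced copy:
zfs rename tank/data tank/data.old
zfs rename tank/data.new tank/data
zfs destroy -r tank/data.old
```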
Is it possible to force ZFS to "nicely" re-organize data inside a zpool
after a new root level vdev has been introduced?
e.g. Take a pool with 1 vdev consisting of a 2-disk mirror. Populate it
with arbitrary files using about 50% of the capacity. Then add another
2 mirrored disks to the pool.
It s