Hi,
I have seen a similar question on this list in the archive but
haven't seen the answer.
Can I avoid striping across top-level vdevs?
Suppose I use a zpool that is one LUN from the SAN, and when it
becomes full I add a new LUN to it.
But I cannot guarantee that the
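
For what it's worth, ZFS always stripes new writes across all top-level vdevs
in a pool, and there is no knob to pin data to a particular vdev. A minimal
sketch of the scenario described above, with hypothetical pool and LUN device
names:

   # pool starts out as a single LUN (one top-level vdev)
   zpool create tank c4t60060160ABCD0001d0

   # when it fills up, a second LUN is added as a new top-level vdev;
   # from then on new writes are striped across both vdevs
   zpool add tank c4t60060160ABCD0002d0

Existing data stays where it was originally written, but subsequent writes
will be spread over both LUNs.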
On Oct 16, 2010, at 4:57 AM, Edward Ned Harvey wrote:
>> From: Bob Friesenhahn [mailto:bfrie...@simple.dallas.tx.us]
>>
>>> raidzN takes a really long time to resilver (code written inefficiently,
>>> it's a known problem.) If you had a huge raidz3, it would literally never
>>> finish, bec
On Oct 16, 2010, at 9:48 PM, Derek G Nokes wrote:
> I tried using format to format the drive and got the following:
>
> Ready to format. Formatting cannot be interrupted
> and takes 5724 minutes (estimated). Continue? y
> Beginning format. The current time is Sat Oct 16 23:58:17 2010
>
> Formatt
On Oct 17, 2010, at 6:17 AM, Edward Ned Harvey wrote:
>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>>
>> If scrub is operating at a block-level (and I think it is), then how can
>> checksum failures be mapped to fi
On Oct 17, 2010, at 6:38 AM, Edward Ned Harvey wrote:
> The default blocksize is 128K. If you are using mirrors, then each block on
> disk will be 128K whenever possible. But if you're using raidzN with a
> capacity of M disks (M disks useful capacity + N disks redundancy) then the
> block si
On Sun, Oct 17, 2010 at 12:31 PM, Simon Breden wrote:
> OK, thanks Ian.
>
> Another example:
>
> Would you lose all pool data if you had two vdevs: (1) a RAID-Z2 vdev and (2)
> a two drive mirror vdev, and three drives in the RAID-Z2 vdev failed?
If you lose 1 vdev, you lose the pool.
Doesn't m
OK, thanks Ian.
Another example:
Would you lose all pool data if you had two vdevs: (1) a RAID-Z2 vdev and (2) a
two drive mirror vdev, and three drives in the RAID-Z2 vdev failed?
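
For anyone picturing that layout, here is a minimal sketch with hypothetical
disk names. Because data in the pool is striped across both top-level vdevs,
losing a third drive in the RAID-Z2 vdev takes out that vdev and, with it,
the whole pool, mirror included:

   # one 6-disk RAID-Z2 vdev (survives any two failures) plus a 2-way mirror
   zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
                     mirror c2t0d0 c2t1d0

   # after a third failure inside the raidz2 vdev, zpool status would show
   # the vdev (and therefore the pool) as unavailable
   zpool status tank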
On 10/18/10 06:28 AM, Simon Breden wrote:
I would just like to confirm whether or not a vdev failure would lead to
failure of the whole pool.
For example, if I created a pool from two RAID-Z2 vdevs, and three drives fail
within the first vdev, is all the data within the whole pool unrec
I would just like to confirm whether or not a vdev failure would lead to
failure of the whole pool.
For example, if I created a pool from two RAID-Z2 vdevs, and three drives fail
within the first vdev, is all the data within the whole pool unrecoverable?
On 10/17/2010 9:38 AM, Edward Ned Harvey wrote:
>
> The default blocksize is 128K. If you are using mirrors, then
> each block on disk will be 128K whenever possible. But if you're
> using raidzN with a capacity of M disks (M disks useful capacity
On Sun, 17 Oct 2010, Edward Ned Harvey wrote:
The default blocksize is 128K. If you are using mirrors, then each
block on disk will be 128K whenever possible. But if you're using
raidzN with a capacity of M disks (M disks useful capacity + N disks
redundancy) then the block size on each in
On Fri, 15 Oct 2010, Gerry Bragg wrote:
Is it possible for a read to bypass the write cache and fetch from
disk before the flush of the cache to disk occurs?
No. Zfs is fully coherent in memory. On a server, most accesses are
to the data in memory rather than from disk.
Bob
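
As a rough illustration of that coherence (file system and path are made up):
a read issued right after a write is satisfied from the in-memory copy, not
from whatever stale blocks may still be on disk waiting for the next
transaction group to be written out:

   # write without forcing a sync, then immediately read the file back;
   # the read returns the just-written data from memory
   dd if=/dev/urandom of=/tank/fs/testfile bs=128k count=8
   dd if=/tank/fs/testfile of=/dev/null bs=128k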
The default blocksize is 128K. If you are using mirrors, then each block on
disk will be 128K whenever possible. But if you're using raidzN with a
capacity of M disks (M disks useful capacity + N disks redundancy) then the
block size on each individual disk will be 128K / M. Right? This is one
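
Taking that arithmetic at face value with a made-up layout, a full 128K block
ends up spread much more thinly per spindle on a wide raidz vdev than on a
mirror:

   7-disk raidz2:                      M = 5 data disks, N = 2 parity disks
   per-disk share of one 128K block:   128K / 5 = 25.6K (plus parity)
   mirror, for comparison:             128K per disk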
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Edward Ned Harvey
>
> If scrub is operating at a block-level (and I think it is), then how can
> checksum failures be mapped to file names? For example, this is a
> long-requested feature of
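
For what it's worth, 'zpool status -v' does report file names for blocks with
unrecoverable errors where it can resolve them. A rough sketch of what that
can look like after a scrub (pool name and path are hypothetical):

   zpool scrub tank
   zpool status -v tank
     ...
     errors: Permanent errors have been detected in the following files:
             /tank/data/somefile.dat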
On Sun, 17 Oct 2010 03:05:34 PDT, Orvar Korvar wrote:
> here are some links.
Wow, that's a great overview, thanks!
budy,
here are some links. Remember, the reason you see corrupted files is that ZFS
detects the corruption. You probably had corruption earlier as well, but your
hardware did not notice it. This is called Silent Corruption, and ZFS is designed
to detect and correct Silent Corruption, which no normal ha
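
A minimal way to see this on one's own pool (pool name hypothetical): a scrub
reads and verifies every checksummed block, repairs what it can from
redundancy, and the per-device counters in the status output show what was
caught:

   zpool scrub tank
   zpool status tank     # watch the READ / WRITE / CKSUM columns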
I would definitely consider raidz2 or raidz3 in several vdevs. Maximum 8-9
drives in each vdev. Not a huge 20 disc vdev.
One vdev gives you roughly the IOPS of a single drive. If you have three vdevs,
you get the IOPS of three drives. That is better than a single vdev of 20 discs.
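
A sketch of that kind of layout with hypothetical disk names: three 8-disk
raidz2 vdevs in one pool instead of a single 20-disk vdev. Writes are striped
across the three vdevs, so random IOPS scale roughly with the number of vdevs:

   zpool create tank \
     raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 c1t7d0 \
     raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 \
     raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0 c3t7d0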
Does it support 3TB drives?