Subject: Re: [zfs-discuss] ZFS - Use h/w raid or not? Thoughts. Considerations.

Hello Richard,

RE> But I am curious as to why you believe 2x CF are necessary?
RE> I presume this is so that you can mirror. But the remaining memory
RE> in such systems is not mirrored. Comments and experiences are welcome.

I was thinking about mirroring - it's not clear from the comment above. [...]
Richard Elling wrote:
> But I am curious as to why you believe 2x CF are necessary?
> I presume this is so that you can mirror. But the remaining memory
> in such systems is not mirrored. Comments and experiences are welcome.

CF == bit-rot-prone disk, not RAM. You need to mirror it for all the [...]
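(A concrete sketch, since a CF pair mirrors like any other disk pair;
the pool and device names below are hypothetical:)

  # mirror the two CF devices so ZFS can self-heal from either copy
  zpool create syspool mirror c0t0d0 c0t1d0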
Ellis, Mike wrote:
> Also the unmirrored memory for the rest of the system has ECC and
> ChipKill, which provides at least SOME protection against random
> bit-flips.

CF devices, at least the ones we'd be interested in, do have ECC as
well as spare sectors and write verification.

Note: flash [...]
On Tue, 2007-05-29 at 18:48 -0700, Richard Elling wrote:
> The belief is that COW file systems which implement checksums and data
> redundancy (eg, ZFS and the ZFS copies option) will be redundant over
> CF's ECC and wear leveling *at the block level.* We believe ZFS will
> excel in this area, but [...]
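(For anyone who hasn't used the copies option: it keeps extra block-level
replicas of your data even within a single device. A minimal sketch; the
dataset name is hypothetical:)

  # store two copies of every data block on the CF device
  zfs set copies=2 syspool/data
  zfs get copies syspool/data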
Hello Richard,

Thursday, May 24, 2007, 6:10:34 PM, you wrote:

RE> Incidentally, thumper field reliability is better than we expected.
RE> This is causing me to do extra work, because I have to explain why.

I've got some thumpers and they're very reliable.
Even the disks aren't failing that much - [...]
> Depends on the guarantees. Some RAID systems have built-in block
> checksumming.

But we all know that block checksums stored with the blocks do not catch
a number of common errors (ghost writes, misdirected writes, misdirected
reads). A misdirected write leaves a stale but internally consistent
block at the intended address, so a checksum stored alongside that block
still verifies; a checksum kept in the parent block pointer, as ZFS does
it, catches the error.

Casper
Anton B. Rang wrote:
> Thumper seems to be designed as a file server (but curiously, not for
> high availability).

hmmm... Often people think that because a system is not clustered, it is
not designed to be highly available. Any system which provides a single
view of data (eg. a persistent storage device) must have at least one
single point of failure.

Please tell us how many storage arrays are required to meet a
theoretical I/O bandwidth of 244 GBytes/s?

Just considering disks, you need approximately 6,663 all streaming 50
MB/sec with RAID-5 3+1 (for example). That is assuming sustained large
block sequential I/O. If you have 8 KB [...]
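(For reference, the arithmetic behind that figure appears to work out as:

  244 GB/s x 1024     = 249,856 MB/s of user data to stream
  249,856 / 50 MB/s   = ~4,998 disks carrying data
  RAID-5 3+1 overhead : 4,998 x 4/3 = ~6,663 disks in total
)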
Richard wrote:
> Any system which provides a single view of data (eg. a persistent
> storage device) must have at least one single point of failure.

Why?

Consider this simple case: a two-drive mirrored array.
Use two dual-ported drives, two controllers, two power supplies,
arranged roughly as [...]
I did say "depends on the guarantees", right? :-) My point is that not
all HW RAID systems are created equal.

Nathan Kroenert wrote:
> Which is of little benefit if the HBA or the array internals change
> the meaning of the message...
> That's the whole point of ZFS's checksumming - it's end to end.
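(To see that end-to-end verification exercised, a scrub re-reads every
block and checks it against the checksums held in the block pointers;
the pool name is hypothetical:)

  zpool scrub tank       # walk and verify every block in the pool
  zpool status -v tank   # the CKSUM column counts failed verifications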
> At the moment, I'm hearing that using h/w raid under my zfs may be
> better for some workloads, and the h/w hot spare would be nice to
> have across multiple raid groups, but the checksum capabilities in
> zfs are basically nullified with single/multiple h/w LUNs,
> resulting in reduced protection.

If you've got the internal system bandwidth to drive all the drives,
then RAID-Z is definitely superior to HW RAID-5. Same with mirroring.
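(A minimal sketch of the ZFS-native alternative - hand ZFS the raw disks
and let it own the redundancy; device names are hypothetical:)

  # RAID-Z across four raw disks: parity, checksums and healing in ZFS
  zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0

  # or a two-way mirror, if small random reads dominate
  zpool create tank mirror c1t0d0 c1t1d0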
You'll need twice as much I/O bandwidth as with a hardware controller,
plus the redundancy, since the reconstruction is done by the host. For
instance, to [...]
What if your HW-RAID-controller dies, in say 2 years or more?
What will read your disks as a configured RAID? Do you know how to
(re)configure the controller or restore the config without destroying
your data? Do you know for sure that a spare part and firmware will be
identical, or at least [...]
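(This is the case ZFS's self-describing on-disk labels are meant to
cover: move the disks to any other host running ZFS and the pool can be
reassembled from the disks alone, with no controller config to restore.
The pool name is hypothetical:)

  zpool import        # scan attached disks and list importable pools
  zpool import tank   # reassemble and mount the pool found on them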
Thanks for the input. So, I'm trying to meld the two replies and come up
with a direction for my case, and maybe a rule of thumb that I can use
in the future (i.e., the near future, until new features come out in
zfs) when I have external storage arrays that have built-in RAID.

At the moment, I'm [...]
There isn't a global hot spare, but you can add a hot spare to multiple pools.
Paul
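(A minimal sketch of that, with hypothetical pool and device names - the
same disk can be designated as a spare in more than one pool:)

  zpool add tank1 spare c2t9d0
  zpool add tank2 spare c2t9d0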
Personally, I would go with ZFS entirely in most cases.

That's the rule of thumb :) If you have a fast enough CPU and enough
RAM, do everything with ZFS. This sounds koolaid-induced, but you'll
need nothing else because ZFS does it all.

My second personal rule of thumb concerns RAIDZ [...]