I bought a similar kit from them, but when I received the machine,
still uninstalled, I looked at the install manual for the Areca card and
found that it requires a manual driver add which is documented to
_occasionally hang_, and that you have to _kill it off manually_ if it does.
I'm really not having that in a
Hmm. That's kind of sad. I grabbed the latest Areca drivers and haven't had a
speck of trouble. Was the driver revision specified in the docs you read the
latest one?
Flash boot does seem nice in a way, since Solaris writes to the boot volume so
seldom on a machine that has enough RAM to
This Areca card is Solaris Certified (so says the HCL) and not that expensive:
http://www.sun.com/bigadmin/hcl/data/components/details/1179.html
Hello Blake,
did you end up purchasing this ? We're considering buying a SilMech K501
as our new fileserver with a pair of Areca controllers in JBOD mode. Any
experience would be appreciated.
Thanks,
Christophe Dupre
Blake Irvin wrote:
The only supported controller I've found is the Areca
We are currently using the 2-port Areca card SilMech offers for boot, and 2 of
the Supermicro/Marvell cards for our array. Silicon Mechanics gave us great
support and burn-in testing for Solaris 10. Talk to a sales rep there and I
don't think you will be disappointed.
cheers,
Blake
Jacob Ritorto wrote:
Right, a nice depiction of the failure modes involved and their
probabilities based on typical published mtbf of components and other
arguments/caveats, please? Does anyone have the cycles to actually
illustrate this or have urls to such studies?
Yes, this is what I
On Apr 15, 2008, at 13:18, Bob Friesenhahn wrote:
ZFS raidz1 and raidz2 are NOT directly equivalent to RAID5 and RAID6
so the failure statistics would be different. Regardless, single disk
failure in a raidz1 substantially increases the risk that something
won't be recoverable if there is a
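Since the thread keeps coming back to this point, here is a minimal
back-of-envelope sketch of the usual argument, in Python. Every number in it
is an assumption of mine (1 TB drives, the common vendor spec of one
unrecoverable read error per 10^14 bits read, a pool full enough that the
resilver reads the surviving disks more or less end to end), so treat it as an
illustration rather than a measurement:

    # Back-of-envelope sketch (all numbers are assumptions): chance of
    # hitting at least one unrecoverable read error (URE) while
    # resilvering a raidz1 vdev after a single disk failure.
    import math

    def p_ure_during_resilver(n_disks, drive_tb=1.0, ure_per_bit=1e-14):
        bits_read = (n_disks - 1) * drive_tb * 1e12 * 8   # surviving disks read end to end
        return 1.0 - math.exp(-ure_per_bit * bits_read)   # Poisson approximation

    for n in (5, 8, 12):
        print(f"{n}-disk raidz1: {p_ure_during_resilver(n):.0%} chance of a URE during resilver")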
On Wed, 16 Apr 2008, David Magda wrote:
RAID5 and RAID6 rebuild the entire disk, while raidz1 and raidz2 only
rebuild existing data blocks, so raidz1 and raidz2 are less likely to
experience media failure if the pool is not full.
While the failure statistics may be different, I think any
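David's point about only rebuilding allocated blocks is easy to make concrete
with the same kind of sketch (same illustrative assumptions as above, here an
8-disk vdev of 1 TB drives): the URE exposure scales with how full the pool
is, whereas a traditional RAID5/6 rebuild reads whole disks regardless.

    # Illustrative sketch: a raidz resilver only reads allocated blocks, so
    # URE exposure scales with pool occupancy (assumed: 8-disk vdev, 1 TB
    # drives, one URE per 1e14 bits read).
    import math

    for used in (0.25, 0.50, 1.00):
        bits_read = 7 * 1e12 * 8 * used            # data read from the 7 surviving disks
        p = 1.0 - math.exp(-1e-14 * bits_read)     # Poisson approximation
        print(f"pool {used:.0%} full: {p:.0%} chance of a URE during the resilver")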
On Mon, Apr 14, 2008 at 9:41 PM, Tim [EMAIL PROTECTED] wrote:
I'm sure you're already aware, but if not, 22 drives in a raid-6 is
absolutely SUICIDE when using SATA disks. 12 disks is the upper end of what
you want even with raid-6. The odds of you losing data in a 22 disk raid-6
is far too
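For what it's worth, here is one way to put rough numbers on the width
argument (again an illustrative sketch with assumed figures, not a claim about
anyone's actual hardware): assume a 3% annual failure rate per SATA drive and
a rebuild window on the order of two days for a wide, busy SATA array. The
chance of losing a further drive while the array is already rebuilding grows
roughly linearly with the number of surviving drives, and each extra loss
pushes a raid-6 closer to the point where a single unrecoverable read becomes
data loss.

    # Illustrative sketch (assumed numbers): chance that one or more of the
    # surviving drives also fails while a degraded array is rebuilding.
    import math

    AFR = 0.03            # assumed annual failure rate per SATA drive
    REBUILD_DAYS = 2.0    # assumed rebuild window for a wide, busy SATA array
    p_drive = 1.0 - math.exp(-AFR * REBUILD_DAYS / 365.0)   # per-drive failure in the window

    for total in (12, 22):
        survivors = total - 1
        p_another = 1.0 - (1.0 - p_drive) ** survivors
        print(f"{total}-disk raid-6, one drive down: "
              f"{p_another:.2%} chance of losing another during the rebuild")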
On Tue, Apr 15, 2008 at 10:09 AM, Maurice Volaski [EMAIL PROTECTED]
wrote:
I have 16 disks in RAID 5 and I'm not worried.
I'm sure you're already aware, but if not, 22 drives in a raid-6 is
absolutely SUICIDE when using SATA disks. 12 disks is the upper end of
what
you want even with
On Apr 15, 2008, at 10:58 AM, Tim wrote:
On Tue, Apr 15, 2008 at 10:09 AM, Maurice Volaski
[EMAIL PROTECTED] wrote:
I have 16 disks in RAID 5 and I'm not worried.
I'm sure you're already aware, but if not, 22 drives in a raid-6 is
absolutely SUICIDE when using SATA disks. 12 disks is
Right, a nice depiction of the failure modes involved and their
probabilities based on typical published mtbf of components and other
arguments/caveats, please? Does anyone have the cycles to actually
illustrate this or have urls to such studies?
On Tue, Apr 15, 2008 at 1:03 PM, Keith Bierman
On Tue, 15 Apr 2008, Keith Bierman wrote:
Perhaps providing the computations rather than the conclusions would be more
persuasive on a technical list ;
No doubt. The computations depend considerably on the size of the
disk drives involved. The odds of experiencing media failure on a
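Since Jacob asked for probabilities based on published MTBF figures, here is
the usual (and much-debated) conversion from MTBF to an annual failure
probability under an exponential lifetime assumption. The MTBF values below
are placeholders I made up for illustration, not anyone's datasheet numbers.

    # Rough MTBF -> yearly failure probability under the standard (and much
    # debated) exponential lifetime assumption.  MTBF figures below are
    # placeholders, not taken from any datasheet.
    import math

    HOURS_PER_YEAR = 8760

    def annual_failure_probability(mtbf_hours):
        return 1.0 - math.exp(-HOURS_PER_YEAR / mtbf_hours)

    for label, mtbf in (("consumer SATA (assumed)", 600_000),
                        ("enterprise (assumed)", 1_200_000)):
        print(f"{label}: MTBF {mtbf} h -> ~{annual_failure_probability(mtbf):.2%}/year")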
On Tue, Apr 15, 2008 at 12:03 PM, Keith Bierman [EMAIL PROTECTED] wrote:
On Apr 15, 2008, at 10:58 AM, Tim wrote:
On Tue, Apr 15, 2008 at 10:09 AM, Maurice Volaski [EMAIL PROTECTED]
wrote:
I have 16 disks in RAID 5 and I'm not worried.
I'm sure you're already aware, but if not, 22
Perhaps providing the computations rather than the conclusions would
be more persuasive on a technical list ;
2 16-disk SATA arrays in RAID 5
2 16-disk SATA arrays in RAID 6
1 9-disk SATA array in RAID 5.
4 drive failures over 5 years. Of course, YMMV, especially if you
drive drunk :-)
--
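Taken at face value, Maurice's figures imply a fairly ordinary per-drive
failure rate; a quick sanity check (my arithmetic, assuming every drive was in
service for the whole five years):

    # Implied per-drive annual failure rate from the figures quoted above,
    # assuming all drives were in service for the full five years.
    drives = 2 * 16 + 2 * 16 + 9      # the three array groups listed above: 73 drives
    drive_years = drives * 5
    failures = 4
    print(f"{failures} failures over {drive_years} drive-years "
          f"-> {failures / drive_years:.1%} per-drive annual failure rate")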
On Tue, 15 Apr 2008, Maurice Volaski wrote:
4 drive failures over 5 years. Of course, YMMV, especially if you
drive drunk :-)
Note that there is a difference between drive failure and media data
loss. In a system which has been running fine for a while, the chance
of a second drive failing
Maurice Volaski wrote:
Perhaps providing the computations rather than the conclusions would
be more persuasive on a technical list ;
2 16-disk SATA arrays in RAID 5
2 16-disk SATA arrays in RAID 6
1 9-disk SATA array in RAID 5.
4 drive failures over 5 years. Of course, YMMV,
On Apr 15, 2008, at 11:18 AM, Bob Friesenhahn wrote:
On Tue, 15 Apr 2008, Keith Bierman wrote:
Perhaps providing the computations rather than the conclusions
would be more persuasive on a technical list ;
No doubt. The computations depend considerably on the size of the
disk drives
Luke Scharf wrote:
Maurice Volaski wrote:
Perhaps providing the computations rather than the conclusions would
be more persuasive on a technical list ;
2 16-disk SATA arrays in RAID 5
2 16-disk SATA arrays in RAID 6
1 9-disk SATA array in RAID 5.
4 drive failures over
Tim wrote:
I'm sure you're already aware, but if not, 22 drives in a raid-6 is
absolutely SUICIDE when using SATA disks. 12 disks is the upper end of
what you want even with raid-6. The odds of you losing data in a 22
disk raid-6 is far too great to be worth it if you care about your
Truly :)
I was planning something like 3 pools concatenated. But we are only populating
12 bays at the moment.
Blake
Bob Friesenhahn wrote:
On Tue, 15 Apr 2008, Maurice Volaski wrote:
4 drive failures over 5 years. Of course, YMMV, especially if you
drive drunk :-)
Note that there is a difference between drive failure and media data
loss. In a system which has been running fine for a while, the
The only supported controller I've found is the Areca ARC-1280ML. I want to
put it in one of the 24-disk Supermicro chassis that Silicon Mechanics builds.
Has anyone had success with this card and this kind of chassis/number of drives?
cheers,
Blake
On Mon, 14 Apr 2008, Blake Irvin wrote:
The only supported controller I've found is the Areca ARC-1280ML.
I want to put it in one of the 24-disk Supermicro chassis that
Silicon Mechanics builds.
For obvious reasons (redundancy and throughput), it makes more sense
to purchase two 12 port
On Tue, Apr 15, 2008 at 1:25 AM, Bob Friesenhahn
[EMAIL PROTECTED] wrote:
For obvious reasons (redundancy and throughput), it makes more sense
to purchase two 12 port cards. I see that there is an option to
populate more cache RAM.
More RAM always helps ;)
I would be interested to know
On Mon, Apr 14, 2008 at 11:34 PM, Will Murnane [EMAIL PROTECTED]
wrote:
On Tue, Apr 15, 2008 at 1:25 AM, Bob Friesenhahn
[EMAIL PROTECTED] wrote:
For obvious reasons (redundancy and throughput), it makes more sense
to purchase two 12 port cards. I see that there is an option to