That's quite a blanket statement. MANY companies (including Oracle)
purchased Xserve RAID arrays for important applications because of their
price point and capabilities. You could easily buy two Xserve RAIDs and
mirror them for the price of a single comparable array at the time.
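On the ZFS side, mirroring across two arrays like that is a one-liner. A
minimal sketch, assuming one LUN exposed per array (device names below are
hypothetical):

    # Mirror one LUN from each array; ZFS keeps the two sides in sync
    zpool create tank mirror c2t0d0 c3t0d0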
-Aaron
On Wed, Jun 10, 20
Absolutely agree. I'd love to be able to free up some LUNs that I
don't need in the pool any more.
Also, concatenation of devices in a zpool would be great for devices
that have LUN limits. It also seems like it may be an easy thing to
implement.
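For contrast, what zpool add gives you today is dynamic striping across a
new top-level vdev rather than a true concatenation. A minimal sketch, with
a hypothetical device name:

    # Adds c4t1d0 as a new top-level vdev; ZFS stripes new writes
    # across it and the existing vdevs (no rebalance, no concat)
    zpool add tank c4t1d0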
-Aaron
On 2/28/09, Thomas Wagner wrote:
I've done some basic testing with an X4150 machine using 6 disks in a RAID 5
and a RAID-Z configuration. They perform very similarly, but RAID-Z definitely
has more system overhead. In many cases this won't be a big deal, but if
you need as many CPU cycles as you can muster, hardware RAID may be your
better option.
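For reference, the RAID-Z half of a test like that is a single command. A
sketch, assuming hypothetical device names:

    # 6-disk single-parity RAID-Z pool; parity is computed by the host
    # CPU, which accounts for the extra system overhead
    zpool create testpool raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0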
I actually ran into a situation where I needed to concatenate LUNs last
week. In my case, the Sun 2540 storage arrays don't yet have the ability to
create LUNs over 2TB, so to use all the storage within the array on one host
efficiently, I created two LUNs per RAID group, for a total of 4 LUNs.
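ZFS will stripe across however many LUNs you hand it, so the pool itself
stays simple. A sketch, with hypothetical device names:

    # Two LUNs from each of the two RAID groups on the 2540;
    # ZFS dynamically stripes across all four
    zpool create tank c4t0d0 c4t1d0 c4t2d0 c4t3d0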
On Wed, Sep 3, 2008 at 1:48 PM, Miles Nordin <[EMAIL PROTECTED]> wrote:
> I've never heard of a battery that's used for anything but RAID
> features. It's an interesting question, if you use the controller in
> ``JBOD mode'' will it use the write cache or not? I would guess not,
> but it might.
A couple of questions:
What version of Solaris are you using? (cat /etc/release)
If you're exposing each disk individually through a LUN/2540 Volume, you
don't really gain anything by having a spare on the 2540 (which I assume
you're doing by only exposing 11 LUNs instead of 12). Your best bet is to
expose all 12 disks as individual LUNs and let ZFS manage the hot spare itself.
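Something like this, assuming a pool named tank and a hypothetical device
name for the 12th disk:

    # Designate the 12th LUN as a ZFS hot spare instead of a 2540 spare
    zpool add tank spare c5t11d0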
I've heard (though I'd be really interested to read the studies if someone
has a link) that much of this human-error percentage comes from mistakes at
the hardware level: replacing the wrong physical disk in a RAID-5 disk group,
bumping cables, etc.
-Aaron
On Wed, Aug 20, 2008 at 3:40 PM, Bob Friesenhahn <[EMAIL PROTECTED]> wrote:
All,
I'm currently working out details on an upgrade from UFS/SDS on DAS to ZFS
on a SAN fabric. I'm interested in hearing how ZFS has behaved in more
traditional SAN environments using gear that scales vertically, like EMC
CLARiiON/HDS AMS/3PAR, etc. Do you experience issues with zpool integrity?
My take is that since RAID-Z creates a stripe for every block
(http://blogs.sun.com/bonwick/entry/raid_z), it should be able to
rebuild the bad sectors on a per-block basis. I'd assume that the
likelihood of having bad sectors in the same places on all the disks
is pretty low, since we're only reading the sectors each block actually uses.
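A quick way to see this in practice is to watch a resilver: it only scans
allocated blocks, so a lightly used pool rebuilds fast (pool name
hypothetical):

    # Shows resilver progress in terms of data actually scanned
    zpool status -v tank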
I've had great luck with my Supermicro AOC-SAT2-MV8 card so far. I'm
using it in an old PCI slot, so it's probably not as fast as it could
be, but it worked great right out of the box.
-Aaron
On Fri, May 23, 2008 at 12:09 AM, David Francis <[EMAIL PROTECTED]> wrote:
> Greetings all
>
> I was lo