Re: [zfs-discuss] Apple Removes Nearly All Reference To ZFS

2009-06-10 Thread Aaron Blew
That's quite a blanket statement. MANY companies (including Oracle) purchased Xserve RAID arrays for important applications because of their price point and capabilities. You could easily buy two Xserve RAIDs and mirror them for what comparable arrays of the time cost. -Aaron On Wed, Jun 10, 20…
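
A minimal sketch of mirroring two such arrays with ZFS, assuming each Xserve RAID exposes one LUN and using hypothetical Solaris device names:

    # Mirror one LUN from each Xserve RAID against the other
    # (c2t0d0 and c3t0d0 are placeholder device names)
    zpool create tank mirror c2t0d0 c3t0d0
    zpool status tank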

Re: [zfs-discuss] GSoC 09 zfs ideas?

2009-02-28 Thread Aaron Blew
Absolutely agree. I'd love to be able to free up some LUNs that I don't need in the pool any more. Also, concatenation of devices in a zpool would be great for devices that have LUN limits. It also seems like it may be an easy thing to implement. -Aaron On 2/28/09, Thomas Wagner wrote: …
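
The concatenation half already exists as zpool add, which stripes a new top-level vdev into the pool; the removal half only landed in OpenZFS years later as zpool remove. A sketch with hypothetical device names:

    # Grow a pool by striping in another LUN (concatenation-style growth)
    zpool add tank c4t0d0

    # Evacuate and remove a top-level vdev again (not possible in 2009;
    # modern OpenZFS only, and only for plain-disk or mirror vdevs)
    zpool remove tank c4t0d0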

Re: [zfs-discuss] Hardware Raid Vs ZFS implementation on Sun X4150/X4450

2008-12-03 Thread Aaron Blew
I've done some basic testing with an X4150 machine using six disks in RAID-5 and RAID-Z configurations. They perform very similarly, but RAID-Z definitely has more system overhead. In many cases this won't be a big deal, but if you need as many CPU cycles as you can muster, hardware RAID may be your…
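
For reference, a sketch of the two six-disk layouts being compared, with hypothetical device names:

    # Software parity: six-disk RAID-Z, parity computed by the host CPU
    zpool create ztest raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

    # Hardware parity: one LUN exported by the controller's RAID-5 set,
    # so the HBA spends the parity cycles instead of the host
    zpool create htest c2t0d0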

Re: [zfs-discuss] Disk Concatenation

2008-09-23 Thread Aaron Blew
I actually ran into a situation where I needed to concatenate LUNs last week. In my case, the Sun 2540 storage arrays don't yet have the ability to create LUNs over 2TB, so to use all the storage within the array efficiently on one host, I created two LUNs per RAID group, for a total of four LUNs. …
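
A minimal sketch of that layout, assuming the four sub-2TB LUNs appear as the hypothetical devices below:

    # Stripe the four 2540 LUNs (two per RAID group) into one pool;
    # redundancy comes from the array's RAID groups underneath
    zpool create tank c3t0d0 c3t0d1 c3t1d0 c3t1d1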

Re: [zfs-discuss] SAS or SATA HBA with write cache

2008-09-03 Thread Aaron Blew
On Wed, Sep 3, 2008 at 1:48 PM, Miles Nordin <[EMAIL PROTECTED]> wrote: > I've never heard of a battery that's used for anything but RAID > features. It's an interesting question: if you use the controller in > ``JBOD mode'', will it use the write cache or not? I would guess not, > but it might.
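
On Solaris you can at least inspect and toggle a drive's own on-disk write cache through format(1M) in expert mode; a sketch (the cache menu only shows up for some disk and HBA combinations):

    format -e            # expert mode, then select the disk
    # format> cache
    # cache> write_cache
    # write_cache> display
    # write_cache> enable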

Re: [zfs-discuss] ZFS Pools 1+TB

2008-08-27 Thread Aaron Blew
A couple of questions: what version of Solaris are you using? (cat /etc/release) If you're exposing each disk individually through a LUN/2540 volume, you don't really gain anything by having a spare on the 2540 (which I assume you're doing, since you expose only 11 LUNs instead of 12). Your best bet is to…
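
If all twelve disks were instead exposed as individual LUNs, letting ZFS own both the redundancy and the spare might look like this sketch (hypothetical device names):

    # raidz2 across eleven LUNs, with the twelfth as a ZFS-managed hot spare
    zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
        c2t6d0 c2t7d0 c2t8d0 c2t9d0 c2t10d0 \
        spare c2t11d0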

Re: [zfs-discuss] shrinking a zpool - roadmap

2008-08-20 Thread Aaron Blew
I've heard (though I'd be really interested to read the studies if someone has a link) that a lot of this human-error percentage comes in at the hardware level: replacing the wrong physical disk in a RAID-5 disk group, bumping cables, etc. -Aaron On Wed, Aug 20, 2008 at 3:40 PM, Bob Friesenhahn …

[zfs-discuss] ZFS with Traditional SAN

2008-08-20 Thread Aaron Blew
All, I'm currently working out details on an upgrade from UFS/SDS on DAS to ZFS on a SAN fabric. I'm interested in hearing how ZFS has behaved in more traditional SAN environments using gear that scales vertically, like EMC CLARiiON, HDS AMS, 3PAR, etc. Do you experience issues with zpool integrity be…
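
For what it's worth, the usual way to keep an eye on pool integrity after fabric events is a periodic scrub; a minimal sketch assuming a pool named tank:

    zpool status -x       # report only pools with problems
    zpool scrub tank      # re-verify every allocated block end to end
    zpool status -v tank  # scrub progress plus any checksum errors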

Re: [zfs-discuss] Why RAID 5 stops working in 2009

2008-07-03 Thread Aaron Blew
My take is that since RAID-Z creates a stripe for every block (http://blogs.sun.com/bonwick/entry/raid_z), it should be able to rebuild bad sectors on a per-block basis. I'd assume the likelihood of having bad sectors in the same places on all the disks is pretty low, since we're only read…
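
It also means a resilver walks only allocated blocks rather than every raw sector; a sketch of swapping out a suspect disk (hypothetical device names):

    # Resilver traverses live block pointers, not the whole raw disk
    zpool replace tank c1t5d0 c1t6d0
    zpool status tank    # shows resilver progress over allocated data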

Re: [zfs-discuss] ZFS Project Hardware

2008-05-23 Thread Aaron Blew
I've had great luck with my Supermicro AOC-SAT2-MV8 card so far. I'm using it in an old PCI slot, so it's probably not as fast as it could be, but it worked great right out of the box. -Aaron On Fri, May 23, 2008 at 12:09 AM, David Francis <[EMAIL PROTECTED]> wrote: > Greetings all > > I was lo…