Re: [zfs-discuss] Apple Removes Nearly All Reference To ZFS

2009-06-10 Thread Aaron Blew
That's quite a blanket statement.  MANY companies (including Oracle)
purchased Xserve RAID arrays for important applications because of their
price point and capabilities.  You could easily buy two Xserve RAIDs and
mirror them for what comparable arrays cost at the time.

-Aaron

On Wed, Jun 10, 2009 at 8:53 AM, Bob Friesenhahn <
bfrie...@simple.dallas.tx.us> wrote:

> On Wed, 10 Jun 2009, Rodrigo E. De León Plicet wrote:
>
>
>> http://hardware.slashdot.org/story/09/06/09/2336223/Apple-Removes-Nearly-All-Reference-To-ZFS
>>
>
> Maybe Apple will drop the server version of OS X and eliminate their
> only server hardware (Xserve), since all it manages to do is lose money for
> Apple and distract from releasing the next iPhone?
>
> Only a lunatic would rely on Apple for a mission-critical server
> application.
>
> Bob
> --
> Bob Friesenhahn
> bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] GSoC 09 zfs ideas?

2009-02-28 Thread Aaron Blew
Absolutely agree. I'd love to be able to free up some LUNs that I
don't need in the pool any more.
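
(For context, the kind of operation being asked for might look like the sketch
below; this is hypothetical syntax, since zpool can't yet evacuate a normal
top-level vdev, and the pool/device names are placeholders.)

  # migrate data off an unneeded LUN and remove it from the pool
  zpool remove tank c3t5d0
  # verify the pool is healthy afterwards
  zpool status tank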

Concatenation of devices in a zpool would also be great for storage arrays
that have LUN limits.  It seems like it may be an easy thing to
implement.

-Aaron

On 2/28/09, Thomas Wagner  wrote:
>> >> pool-shrinking (and an option to shrink disk A when I want disk B to
>> >> become a mirror, but A is a few blocks bigger)
>>  This may be interesting... I'm not sure how often you need to shrink a
>> pool
>>  though?  Could this be classified more as a Home or SME level feature?
>
> Enterprise-level users, especially in SAN environments, need this.
>
> Projects own their own pools and constantly grow and *shrink* space.
> And they have no downtime available for that.
>
> give a +1 if you agree
>
> Thomas
>

-- 
Sent from my mobile device
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Hardware Raid Vs ZFS implementation on Sun X4150/X4450

2008-12-03 Thread Aaron Blew
I've done some basic testing with an X4150 machine using 6 disks in both RAID-5
and RAID-Z configurations.  They perform very similarly, but RAID-Z definitely
has more system overhead.  In many cases this won't be a big deal, but if
you need as many CPU cycles as you can muster, hardware RAID may be your
better choice.
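
A rough sketch of that kind of comparison, with placeholder pool, device, and
file names (not the exact test that was run):

  # build a 6-disk RAID-Z pool on the raw disks
  zpool create testpool raidz c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0

  # push a large sequential write through the pool...
  dd if=/dev/zero of=/testpool/bigfile bs=1024k count=16384 &

  # ...while watching per-CPU utilization for the extra parity overhead
  mpstat 5

The same dd/mpstat run against a filesystem on the controller's RAID-5 volume
gives the hardware side of the comparison.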

-Aaron

On Tue, Dec 2, 2008 at 4:22 AM, Vikash Gupta <[EMAIL PROTECTED]> wrote:

>  Hi,
>
> Has anyone implemented hardware RAID 1/5 on Sun X4150/X4450 class servers?
>
> Also, is there any comparison between ZFS and H/W RAID?
>
> I would like to know about experiences (good/bad) and the pros/cons.
>
> Regards,
>
> Vikash
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Disk Concatenation

2008-09-23 Thread Aaron Blew
I actually ran into a situation where I needed to concatenate LUNs last
week.  In my case, the Sun 2540 storage arrays don't yet have the ability to
create LUNs over 2TB, so to use all the storage within the array efficiently on
one host, I created two LUNs per RAID group, for a total of 4 LUNs.  Then
we created two stripes (LUNs 0 and 2, and LUNs 1 and 3) and concatenated them.
This way the data is laid out contiguously on the RAID groups.
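
As an illustration only (not necessarily how it was done here), one way to
build that kind of stripe-then-concatenate layout below ZFS is with an SVM
concat/stripe metadevice; the metadevice and LUN names are placeholders:

  # stripe LUNs 0 and 2, stripe LUNs 1 and 3, then concatenate the two stripes
  metainit d30 2 2 c2t0d0s0 c2t2d0s0 2 c2t1d0s0 c2t3d0s0

  # the resulting metadevice can then be handed to ZFS as a single device
  zpool create tank /dev/md/dsk/d30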

-Aaron

On Mon, Sep 22, 2008 at 11:56 PM, Nils Goroll <[EMAIL PROTECTED]> wrote:

> Hi Darren,
>
> >> http://www.opensolaris.org/jive/thread.jspa?messageID=271983
> >>
> >> The case mentioned there is one where concatenation in vdevs would be
> >> useful.
> >
> > That case appears to be about trying to get a raidz sized properly
> > against disks of different sizes.  I don't see a similar issue for
> > someone preferring a concat over a stripe.
>
> I don't quite understand your comment.
>
> The question I was referring to was from someone who wanted a configuration
> which would optimally use the available physical disk space. The
> configuration which would yield maximum net capacity was to use concats, so
> IMHO this is a case where one might want a concat below a vdev.
>
> Were you asking for use cases of a concat at the pool layer?
>
> I think those exist when using RAID hardware where additional striping can
> lead to an increase of concurrent I/O on the same disks or I/Os being split
> up unnecessarily. All of this highly depends upon the configuration.
>
> Nils
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SAS or SATA HBA with write cache

2008-09-03 Thread Aaron Blew
On Wed, Sep 3, 2008 at 1:48 PM, Miles Nordin <[EMAIL PROTECTED]> wrote:

> I've never heard of a battery that's used for anything but RAID
> features.  It's an interesting question, if you use the controller in
> ``JBOD mode'' will it use the write cache or not?  I would guess not,
> but it might.  And if it doesn't, can you force it, even by doing
> sneaky things like making 2-disk mirrors where 1 disk happens to be
> missing thus wasting half the ports you bought, but turning on the
> damned write cache?  I don't know.
>

The X4150 SAS RAID controllers will use the on-board battery-backed cache
even when disks are presented as individual LUNs.  You can also globally
enable/disable the disk write caches.
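
For individual disks visible to Solaris, the write cache state can also be
inspected per-disk from format's expert mode; a rough sketch of the menu path
(the entries can vary by driver and device):

  format -e
    # select the disk, then:
    cache
    write_cache
    display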
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Pools 1+TB

2008-08-27 Thread Aaron Blew
A couple of questions:
What version of Solaris are you using? (cat /etc/release)
If you're exposing each disk individually through a LUN/2540 volume, you
don't really gain anything by having a spare on the 2540 (which I assume
you're doing, since you're exposing only 11 LUNs instead of 12).  Your best bet
is to configure no spares on the 2540 and then designate one of the LUNs as a
spare via ZFS.
How will you be using the storage?  This will help determine how your zpool
should be structured.
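
A minimal sketch of what that could look like, with placeholder LUN names and
one possible grouping (adjust the raidz layout for your workload):

  # 10 data LUNs in a raidz vdev, with the 11th LUN as a ZFS hot spare
  zpool create tank raidz c4t0d0 c4t1d0 c4t2d0 c4t3d0 c4t4d0 \
      c4t5d0 c4t6d0 c4t7d0 c4t8d0 c4t9d0 \
      spare c4t10d0

  # confirm the pool really comes out in the terabyte range
  zpool list tank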

-Aaron


On Wed, Aug 27, 2008 at 11:08 AM, Kenny <[EMAIL PROTECTED]> wrote:

> Has anyone had issues with creating ZFS pools greater than 1 terabyte (TB)?
>
> I've created 11 LUNs from a Sun 2540 disk array (approx 1 TB each).  The
> host system (Sun Enterprise 5220) recognizes the "disks" as each having
> 931GB of space, so that should be 10+ TB in total.  However, when I zpool
> them together (using raidz), zpool status reports 9GB instead of 9TB.
>
> Does ZFS have a problem reporting TB and default to GB instead?  Is my pool
> really terabytes in size?
>
> I've read in the best practices wiki that splitting them into smaller pools
> is recommended.  Any recommendations on this?  I'm desperate to keep as much
> space usable as possible.
>
> Thanks   --Kenny
>
>
> This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] shrinking a zpool - roadmap

2008-08-20 Thread Aaron Blew
I've heard (though I'd be really interested to read the studies if someone
has a link) that a lot of this human error percentage comes at the hardware
level.  Replacing the wrong physical disk in a RAID-5 disk group, bumping
cables, etc.

-Aaron



On Wed, Aug 20, 2008 at 3:40 PM, Bob Friesenhahn <
[EMAIL PROTECTED]> wrote:

> On Wed, 20 Aug 2008, Miles Nordin wrote:
>
> >> "j" == John  <[EMAIL PROTECTED]> writes:
> >
> > j> There is also the human error factor.  If someone accidentally
> > j> grows a zpool
> >
> > or worse, accidentally adds an unredundant vdev to a redundant pool.
> > Once you press return, all you can do is scramble to find mirrors for
> > it.
>
> Not to detract from the objective to be able to re-shuffle the zfs
> storage layout, any system administration related to storage is risky
> business.  Few people should be qualified to do it.  Studies show that
> 36% of data loss is due to human error.  Once zfs mirroring, raidz, or
> raidz2 are used to virtually eliminate loss due to hardware or system
> malfunction, this 36% is increased to a much higher percentage.  For
> example, if loss due to hardware or system malfunction is reduced to
> just 1% (still a big number) then the human error factor is increased
> to a whopping 84%.  Humans are like a ticking time bomb for data.
>
> The errant command which accidentally adds a vdev could just as easily
> be a command which scrambles up or erases all of the data.
>
> Bob
> ==
> Bob Friesenhahn
> [EMAIL PROTECTED], http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
>
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS with Traditional SAN

2008-08-20 Thread Aaron Blew
All,
I'm currently working out details on an upgrade from UFS/SDS on DAS to ZFS
on a SAN fabric.  I'm interested in hearing how ZFS has behaved in more
traditional SAN environments using gear that scales vertically, like EMC
CLARiiON, HDS AMS, 3PAR, etc.  Do you experience issues with zpool integrity
because of MPxIO events?  Has the zpool been reliable over your fabric?  Has
performance been where you would have expected it to be?
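
For reference, path state and pool health can be checked with commands along
these lines ('tank' is a placeholder pool name):

  # show MPxIO logical units and their path counts
  mpathadm list lu

  # check pool health and any checksum/I/O errors after a path event
  zpool status -v tank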

Thanks much,
-Aaron
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Why RAID 5 stops working in 2009

2008-07-03 Thread Aaron Blew
My take is that since RAID-Z creates a stripe for every block
(http://blogs.sun.com/bonwick/entry/raid_z), it should be able to
rebuild around bad sectors on a per-block basis.  I'd assume that the
likelihood of having bad sectors in the same places on all the disks
is pretty low, since we're only reading the sectors related to the
block being rebuilt.  It also seems that fragmentation would work in
your favor here, since the stripes would be distributed across more of
the platter(s), hopefully protecting you from a wonky manufacturing
defect that causes UREs in the same place on each disk.
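
As an aside on the scrub schedule mentioned in the quoted message below, a
weekly scrub can simply be driven from root's crontab.  A minimal sketch, with
'tank' as a placeholder pool name:

  # crontab entry: scrub every Sunday at 03:00
  0 3 * * 0 /usr/sbin/zpool scrub tank

  # check periodically whether scrubs are repairing anything
  /usr/sbin/zpool status -v tank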

-Aaron


On Thu, Jul 3, 2008 at 12:24 PM, Jim <[EMAIL PROTECTED]> wrote:
> Anyone here read the article "Why RAID 5 stops working in 2009" at 
> http://blogs.zdnet.com/storage/?p=162
>
> Does RAIDZ have the same chance of an unrecoverable read error as RAID5 in Linux
> if the array has to be rebuilt because of a faulty disk?  I imagine so, because
> of the physical constraints that plague our hard drives.  Granted, the chance of
> failure in my case shouldn't be nearly as high, since I will most likely use
> three or four 750GB drives, not something on the order of 10TB.
>
> With my OpenSolaris NAS, I will be scrubbing every week (for consumer-grade
> drives; every month for enterprise-grade) as recommended in the ZFS best
> practices guide.  If I run "zpool status" and see that scrubs are
> increasingly fixing errors, would that mean the disk is in fact headed
> toward failure, or perhaps that the natural expansion of disk usage is to
> blame?
>
>
> This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Project Hardware

2008-05-23 Thread Aaron Blew
I've had great luck with my Supermicro AOC-SAT2-MV8 card so far.  I'm
using it in an old PCI slot, so it's probably not as fast as it could
be, but it worked great right out of the box.

-Aaron


On Fri, May 23, 2008 at 12:09 AM, David Francis <[EMAIL PROTECTED]> wrote:
> Greetings all
>
> I was looking at creating a little ZFS storage box at home using the
> following SATA controller (Adaptec Serial ATA II RAID 1420SA) on an OpenSolaris
> x86 build.
>
> Just wanted to know if anyone out there is using these and can vouch for
> them. If not, is there something else you can recommend or suggest?
>
> Disks would be 6x Seagate 500GB drives.
>
> Thanks
>
> David
>
>
> This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss