Re: [zfs-discuss] Using WD Green drives?

2010-05-18 Thread Dan Pritts
On Tue, May 18, 2010 at 09:40:15AM +0300, Pasi Kärkkäinen wrote:
> > Thus, you'll get good throughput for resilver on these drives pretty
> > much in just ONE case:  large files with NO deletions.  If you're using
> > them for write-once/read-many/no-delete archives, then you're OK.
> > Anything else is going to suck.

Thanks for pointing out the obvious.  :)

Still, though, this is basically true for ANY drive.

It's worse for slower RPM drives, but it's not like resilvers will
exactly be fast with 7200rpm drives, either.

danno
--
Dan Pritts, Sr. Systems Engineer
Internet2
office: +1-734-352-4953 | mobile: +1-734-834-7224

Visit our website: www.internet2.edu
Follow us on Twitter: www.twitter.com/internet2
Become a Fan on Facebook: www.internet2.edu/facebook
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Using WD Green drives?

2010-05-17 Thread Dan Pritts
On Mon, May 17, 2010 at 06:25:18PM +0200, Tomas Ögren wrote:
> Resilver does a whole lot of random io itself, not bulk reads.. It reads
> the filesystem tree, not "block 0, block 1, block 2..". You won't get
> 60MB/s sustained, not even close.

Even with large, unfragmented files?  

danno
--
Dan Pritts, Sr. Systems Engineer
Internet2
office: +1-734-352-4953 | mobile: +1-734-834-7224

Visit our website: www.internet2.edu
Follow us on Twitter: www.twitter.com/internet2
Become a Fan on Facebook: www.internet2.edu/facebook
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Using WD Green drives?

2010-05-17 Thread Dan Pritts
On Thu, May 13, 2010 at 06:09:55PM +0200, Roy Sigurd Karlsbakk wrote:
> 1. even though they're 5900, not 7200, benchmarks I've seen show they are 
> quite good 

Minor correction: they are 5400rpm.  Seagate makes some 5900rpm drives.

The "green" drives have a reasonable raw throughput rate, thanks to the
extremely high platter density nowadays.  However, due to their low
spin speed, their average access time is significantly worse than that
of 7200rpm drives.

For bulk archive data containing large files, this is less of a concern.

Regarding slow resilvering times: in the absence of other disk activity,
I think that should really be limited by the throughput rate, not by the
relatively slow random i/o performance...again assuming large files
(and low fragmentation, which is what I'd expect if the archive is
write-and-never-delete).

One test I saw suggests 60MB/sec average throughput on the 2TB drives.
That works out to about 9.3 hours to read the entire 2TB.  At a
conservative 50MB/sec it's about 11 hours.  This assumes that you have
enough I/O bandwidth and CPU on the system to saturate all your disks.
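
A quick back-of-the-envelope check (assuming the marketing definition of
2TB, i.e. 2e12 bytes, and a purely sequential read):

    $ awk 'BEGIN { printf "%.1f hours\n", 2e12 / (60 * 1e6) / 3600 }'
    9.3 hours
    $ awk 'BEGIN { printf "%.1f hours\n", 2e12 / (50 * 1e6) / 3600 }'
    11.1 hours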

If there's other disk activity during a resilver, though, it turns into
random i/o, which is slow on these drives.

danno
--
Dan Pritts, Sr. Systems Engineer
Internet2
office: +1-734-352-4953 | mobile: +1-734-834-7224

Visit our website: www.internet2.edu
Follow us on Twitter: www.twitter.com/internet2
Become a Fan on Facebook: www.internet2.edu/facebook
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Consolidating a huge stack of DVDs using ZFS dedup: automation?

2010-03-04 Thread Dan Pritts
On Tue, Mar 02, 2010 at 05:35:07PM -0800, R.G. Keen wrote:
> And as to automation for reading: I recently ripped and archived my entire CD 
> collection, some 500 titles. Not the same issue in terms of data, but much 
> the same in terms of needing to load/unload the disks. I went as far as to 
> think of getting/renting an autoloader, but I found that it was much more 
> efficient to keep a stack by my desk and swap disks when the ripper beeped at 
> me. This was a very low priority task in my personal stack, but over a  few 
> weeks, there were enough beeps and minutes to swap the disks out. 

I did something very similar, but with over 1000 CDs.  If you can scare
up an external DVD drive, use it too - that way you'll only have to swap
discs half as often.

danno
--
Dan Pritts, Sr. Systems Engineer
Internet2
office: +1-734-352-4953 | mobile: +1-734-834-7224

Internet2 Spring Member Meeting
April 26-28, 2010 - Arlington, Virginia
http://events.internet2.edu/2010/spring-mm/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] verging OT: how to buy J4500 w/o overpriced

2010-02-11 Thread Dan Pritts
On Tue, Feb 09, 2010 at 03:44:02PM -0600, Wes Felter wrote:
> Have you considered Promise JBODs? They officially support 
> bring-your-own-drives.

Have you used these yourself, Wes?

I've been considering it, but I talked to a colleague at another
institution who had some really awful tales to tell about Promise
FC arrays.  They were clearly not ready for prime time.

OTOH, a SAS JBOD is a lot less complicated.

danno
--
Dan Pritts, Sr. Systems Engineer
Internet2
office: +1-734-352-4953 | mobile: +1-734-834-7224

Internet2 Spring Member Meeting
April 26-28, 2010 - Arlington, Virginia
http://events.internet2.edu/2010/spring-mm/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] need a few suggestions for a poor man's ZIL/SLOG device

2010-01-06 Thread Dan Pritts
On Mon, Jan 04, 2010 at 08:33:23PM -0600, Al Hopper wrote:
> On Mon, Jan 4, 2010 at 4:39 PM, Thomas Burgess  wrote:
> >
> > I'm PRETTY sure the kingston drives i ordered are as good/better
> >
> > i just didnt' know that they weren't "good enough"
> 
> I disagree that those drives are "good enough".  That particular drive
> uses the dreaded JMicron controller - which has a really bad
> reputation.  And a poor reputation that it *earned* and deserves.
> Even though these drives use a newer revision of the original JMicron
> part (that basically sucks) - this one is *not* much better.  Have a


meandering off topic here ...

I use one of those 64G Kingston JMicron/Toshiba drives in my Mac.

The "stuttering" problems attributed to the older JMicron drives are
non-existent with this one, in my experience.

I have not done anything to optimize for slow writes (e.g., disabling
the browser disk cache).

The overall performance improvement on my system is huge, due to the
very fast reads.

Mine is old enough and full enough that all cells have been written to
at this point.

Overall I am very pleased with the drive, especially for the price
paid.

I agree with Al that it probably isn't suitable as a ZIL.  Maybe as a
read cache (L2ARC), though.
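
For what it's worth, trying a spare SSD as a cache device is a one-liner;
the pool and device names below are just placeholders:

    # add the SSD as an L2ARC (read cache) device; it can be removed
    # again later with 'zpool remove' if it doesn't help
    zpool add tank cache c2t1d0

    # watch how much read traffic the cache device actually absorbs
    zpool iostat -v tank 5

(A log device would be "zpool add tank log ...", but that's exactly where
the JMicron write behavior would hurt.)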

danno
--
Dan Pritts, Sr. Systems Engineer
Internet2
office: +1-734-352-4953 | mobile: +1-734-834-7224

Winter 2010 ESCC/Internet2 Joint Techs
Hosted by the University of Utah - Salt Lake City, UT
January 31 - February 4, 2010
http://events.internet2.edu/2010/jt-slc/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] (home NAS) zfs and spinning down of drives

2009-11-23 Thread dan pritts
On Nov 4, 2009, at 6:02 PM, Jim Klimov wrote:
> Thanks for the link, but the main concern in spinning down drives of a ZFS 
> pool 
> is that ZFS by default is not so idle. Every 5 to 30 seconds it closes a 
> transaction 
> group (TXG) which requires a synchronous write of metadata to disk.

I'm running FreeBSD 7.2 with ZFS and have my data drives set to spin
down when idle.  They don't get re-spun back up all the time like you
fear; only when there is userland access.

As mentioned elsewhere, the disks spin up sequentially, which is a PITA.
I had a little script on my previous Linux box that spun up all the disks
0.5 seconds apart, but I can't figure out how to get FreeBSD to tell me
the current state of the disks.
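
(The Linux version was nothing fancy -- roughly the sketch below, not the
actual script; the device glob is whatever your data disks happen to be:)

    #!/bin/sh
    # stagger the spin-ups half a second apart instead of all at once
    for d in /dev/sd[b-g]; do
        # touching the disk (bypassing the page cache) forces a spin-up
        dd if=$d of=/dev/null bs=512 count=1 iflag=direct 2>/dev/null &
        sleep 0.5
    done
    wait
    # hdparm -C /dev/sdX is the Linux way to ask whether a drive is
    # active or in standby; that's the part I can't find on FreeBSD.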

I don't know if the FreeBSD folks made any mods to ZFS for this, but I
kinda doubt it.

I don't have anything spiffy like prefetching to a cache disk, etc.
When I am streaming music to a Squeezebox, all my disks are spinning.
I have thought about hacking SqueezeCenter to copy to a temp disk, but
the win is pretty minimal; by the time the disks spin down I'll want
them back to choose the next album.

danno
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] 7110 questions

2009-06-18 Thread Dan Pritts
Hi all,

(down to the wire here on EDU grant pricing :)

I'm looking at buying a pair of 7110s in the EDU grant sale.
The price is sure right.  I'd use them in a mirrored, cold-failover
config.

I'd primarily be using them to serve a VMware cluster; the current config
is two standalone ESX servers with local storage, 450G of SAS RAID10 each.

The 7110 price point is great, and I think I have a reasonable
understanding of how this stuff ought to work.

I'm curious about a couple of things that would be "unsupported."

Specifically, whether they are merely "not supported" or have been
specifically crippled in the software.

1) SSD's 

I can imagine buying an Intel SSD, slotting it into the 7110, and using
it as a ZFS L2ARC (i.e., the equivalent of "readzilla").

2) expandability

I can imagine buying a SAS card and a JBOD and hooking it up to
the 7110; it has plenty of PCI slots.

Finally, one question: I presume that I need to devote a pair of disks
to the OS, so I really only get 14 disks for data.  Correct?

thanks!

danno
--
Dan Pritts, Sr. Systems Engineer
Internet2
office: +1-734-352-4953 | mobile: +1-734-834-7224

ESCC/Internet2 Joint Techs
July 19-23, 2009 - Indianapolis, Indiana
http://jointtechs.es.net/indiana2009/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs on a raid box

2007-11-19 Thread Dan Pritts
On Mon, Nov 19, 2007 at 11:10:32AM +0100, Paul Boven wrote:
> Any suggestions on how to further investigate / fix this would be very
> much welcomed. I'm trying to determine whether this is a zfs bug or one
> with the Transtec raidbox, and whether to file a bug with either
> Transtec (Promise) or zfs.

The way I'd try to do this would be to use the same box under Solaris
software RAID, or better yet Linux or Windows software RAID (to make
sure it's not a Solaris device driver problem).

Does pulling the disk then get noticed?  If so, it's a ZFS bug.
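
For the record, the quick way to see whether ZFS noticed the pull at all
(the pool name here is just a placeholder):

    # list only pools that are degraded or faulted
    zpool status -x

    # or look at the pool directly; a noticed pull shows the device as
    # REMOVED/UNAVAIL and, if a spare is configured, a resilver underway
    zpool status tank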

danno
--
Dan Pritts, System Administrator
Internet2
office: +1-734-352-4953 | mobile: +1-734-834-7224
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs on a raid box

2007-11-16 Thread Dan Pritts
On Fri, Nov 16, 2007 at 11:31:00AM +0100, Paul Boven wrote:
> Thanks for your reply. The SCSI-card in the X4200 is a Sun Single
> Channel U320 card that came with the system, but the PCB artwork does
> sport a nice 'LSI LOGIC' imprint.

That is probably the same card I'm using; it's actually a "Sun" card
but, as you say, is OEMed by LSI.

> So, just to make sure we're talking about the same thing here - your
> drives are SATA, 

yes

> you're exporting each drive through the Western
> Scientific raidbox as a seperate volume, 

yes

> and zfs actually brings in a
> hot spare when you pull a drive?

yes

OS is Sol10U4, system is an X4200, original hardware rev.

> Over here, I've still not been able to accomplish that - even after
> installing Nevada b76 on the machine, removing a disk will not cause a
> hot-spare to become active, nor does resilvering start. Our Transtec
> raidbox seems to be based on a chipset by Promise, by the way.

I have heard some bad things about the Promise RAID boxes but I haven't
had any direct experience.  

I do own one Promise box that accepts 4 PATA drives and exports them to a
host as SCSI disks.  Shockingly, it uses a master/slave IDE configuration
rather than 4 separate IDE controllers.  It wasn't super expensive but
it wasn't dirt cheap, either, and it seems it would have cost another
$5 to manufacture the "right way."

I've had fine luck with Promise $25 ATA PCI cards :)

The Infortrend units, on the other hand, I have generally had quite good
luck with.  When I worked at UUNet in the late '90s we had hundreds of
their SCSI RAIDs deployed.

I do have an Infortrend FC-attached RAID with SATA disks, which basically
works fine.  It has an external JBOD, also with SATA disks, connecting to
the main RAID over FC.  Unfortunately, the RAID unit boots faster than
the JBOD.  So, if you turn them on at the same time, it thinks the JBOD
is gone and doesn't notice it's there until you reboot the controller.

That caused a little pucker for my colleagues when it happened while I
was on vacation.  The support guy at the reseller we were working with
(NOT Western Scientific) told them the RAID was hosed and they should
rebuild from scratch, and hope they had a backup.

danno
--
Dan Pritts, System Administrator
Internet2
office: +1-734-352-4953 | mobile: +1-734-834-7224
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs on a raid box

2007-11-15 Thread Dan Pritts
On Tue, Nov 13, 2007 at 12:25:24PM +0100, Paul Boven wrote:
> Hi everyone,
> 
> We've building a storage system that should have about 2TB of storage
> and good sequential write speed. The server side is a Sun X4200 running
> Solaris 10u4 (plus yesterday's recommended patch cluster), the array we
> bought is a Transtec Provigo 510 12-disk array. The disks are SATA, and
> it's connected to the Sun through U320-scsi.

We are doing basically the same thing with similar Western Scientific
(wsm.com) RAIDs, based on Infortrend controllers.  ZFS notices when we
pull a disk and goes on and does the right thing.

I wonder if you've got a SCSI card/driver problem.  We tried using an
Adaptec card with Solaris, with poor results; we switched to LSI and it
"just works".

danno
--
Dan Pritts, System Administrator
Internet2
office: +1-734-352-4953 | mobile: +1-734-834-7224
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Securing a risky situation with zfs

2007-11-15 Thread Dan Pritts
On Tue, Nov 13, 2007 at 01:20:14AM -0800, Gabriele Bulfon wrote:

> The basic idea was to have a zfs mirror of each iscsi disk on
> scsi-attached disks, so that in case of another panic of the SAN,
> everything should still work on the scsi-attached disks.

> My questions are:
> - is this a good idea?

It's a better idea than just trusting your flaky SAN.
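
Roughly what that looks like (the device names below are made up; pair
each local SCSI disk with its corresponding iSCSI LUN):

    # each mirror vdev pairs one local SCSI disk with one iSCSI LUN,
    # so the pool keeps running on the local halves if the SAN panics
    zpool create safepool \
        mirror c1t0d0 c4t600A0B800011110Ad0 \
        mirror c1t1d0 c4t600A0B800011110Bd0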

> - should I use zfs mirrors or normal solaris mirrors?

ZFS gives you the big advantage of on-the-fly data checksumming;
that's very helpful compared to DiskSuite mirrors.

> - is mirroring the best performance, or should I use zfs raid-z?

Mirroring is almost always going to give you the best performance.

> - is there any other possibility I don't see?

Call the Comcast hammer lady [1] and ask her to come take care
of your SAN.

> - The SAN includes 2 Sun-Solaris-10 machines, and 3 windows
> machinesis there any similar solution on the win machines?

None that I'm aware of; Windows does have software mirroring, of
course.  Make lots of backups :).



danno
--
Dan Pritts, System Administrator
Internet2
office: +1-734-352-4953 | mobile: +1-734-834-7224

[1] 
http://www.washingtonpost.com/wp-dyn/content/article/2007/10/17/AR2007101702359.html

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Mixing SATA & PATA Drives

2007-11-07 Thread Dan Pritts
On Thu, Nov 08, 2007 at 12:42:01PM +1300, Ian Collins wrote:
> True, but I'd image things go wonky if two PATA drives (master and
> slave) are used.

Absolutely.  Never use a PATA master/slave config if you care at all
about performance.

> > i/o to/from the disk's cache will be marginally slower but you want to
> > disable the write cache for data integrity anyway.
> >   
> Do you?  I though ZFS enabled the drive cache when it used the entire drive.

I think you're right; what I was thinking of was that ZFS (and anyone
else, really, but ZFS is where I've heard about it) wants to be awfully
sure that the drive actually flushes its write cache when you ask it to.

Regardless, the speed difference is marginal.

danno
--
Dan Pritts, System Administrator
Internet2
office: +1-734-352-4953 | mobile: +1-734-834-7224
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Mixing SATA & PATA Drives

2007-11-07 Thread Dan Pritts
On Fri, Sep 14, 2007 at 01:48:40PM -0500, Christopher Gibbs wrote:
> I suspect it's probably not a good idea but I was wondering if someone
> could clarify the details.
> 
> I have 4 250G SATA(150) disks and 1 250G PATA(133) disk.  Would it
> cause problems if I created a raidz1 pool across all 5 drives?
> 
> I know the PATA drive is slower so would it slow the access across the
> whole pool or just when accessing that disk?

...a late reply here, but I'm slightly surprised none of the
other respondents mentioned this.

The PATA drive is not any slower in raw throughput than the SATA disks.

A typical 250G disk has a max transfer rate of maybe 60MB/sec, so the
attachment speed (133MB/sec for ATA/133 vs. 150MB/sec for SATA150) will
not make a difference.

I/O to/from the disk's cache will be marginally slower, but you want to
disable the write cache for data integrity anyway.

If the SATA disks have NCQ, you'd lose out on some random i/o workloads
by adding the PATA disk.  But I think you need SATA300 to support that
feature.

danno
--
Dan Pritts, System Administrator
Internet2
office: +1-734-352-4953 | mobile: +1-734-834-7224
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss