Re: [zfs-discuss] alternative hardware configurations for zfs

2009-09-12 Thread Damjan Perenic
On Sat, Sep 12, 2009 at 7:25 AM, Tim Cook t...@cook.ms wrote:


 On Fri, Sep 11, 2009 at 4:46 PM, Chris Du dilid...@gmail.com wrote:

 You can optimize for better IOPS or for transfer speed. The NS2 SATA and SAS
 share most of the design, but they are still different: the cache, interface,
 and firmware are all different.

 And I'm asking you to provide a factual basis for the interface playing any
 role in IOPS.  I know for a fact it has nothing to do with error recovery or
 the command queue.

 Regardless, I've never seen either one provide any significant change in
 IOPS.  I feel fairly confident stating that within the storage industry
 there's a pretty well-known range of IOPS for 7200 rpm, 10K, and 15K
 drives respectively, regardless of interface.  You appear to be saying this
 isn't the case, so I'd like to know what data you're using as a reference
 point.
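
For context, those well-known ranges follow directly from average seek
time plus rotational latency, and the interface does not appear anywhere
in the arithmetic. A back-of-the-envelope sketch (the seek times are
typical catalog figures, assumed here, not measurements of any
particular drive):

# Rough random-IOPS ceiling for a single disk: one random I/O costs
# about one average seek plus half a rotation (the average rotational
# latency). Seek times below are typical catalog values, not data for
# any specific model.
drives = {
    "7200 rpm": (7200, 8.5),    # (spindle speed, avg seek in ms)
    "10K rpm": (10000, 4.5),
    "15K rpm": (15000, 3.5),
}

for name, (rpm, seek_ms) in drives.items():
    rotational_latency_ms = 60000.0 / rpm / 2   # half a rotation, on average
    iops = 1000.0 / (seek_ms + rotational_latency_ms)
    print(f"{name}: ~{iops:.0f} IOPS")

This prints roughly 79, 133, and 182 IOPS, which lands squarely in the
ranges usually quoted for the three spindle speeds.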

I shopped for 1 TB 7200 rpm drives recently and noticed that the Seagate
Barracuda ES.2 comes in a 1 TB version with either a SATA or a SAS
interface.

In the datasheet at
http://www.seagate.com/www/en-us/products/servers/barracuda_es/ and in
the product overview, they claim the following:

---
Choose SAS for the seamless Tier 2 enterprise experience, with
improved data integrity and a 135 percent average performance
boost over SATA. SAS also reduces integration complexity and
optimizes system performance for rich media, reference data
storage and enterprise backup applications.
---
With a choice of either SATA or SAS interfaces, the Barracuda ES.2 drive
utilizes perpendicular recording technology to deliver the industry’s
highest-capacity 4-platter drive. SAS delivers up to a 38 percent
IOPS/watt improvement over SATA.
---

And in the product overview:
---
• Full internal IOEDC/IOECC* data integrity protection on SAS models
• Dual-ported, multi-initiator SAS provides full-duplex compatibility
and a 135 percent average** performance improvement over SATA.

*IOEDC/IOECC on SATA (writes only), IOEDC/IOECC on SAS (both reads and writes)
**Averaged from random/sequential, read/write activities with write cache off
---

I admit I have no clue why the SAS version should be (or is) faster; I
am just passing on what I found. But I would be interested in opinions
on whether there is any substance to this marketing material.

Kind regards,
Damjan


Re: [zfs-discuss] zfs fragmentation

2009-08-12 Thread Damjan Perenic
On Tue, Aug 11, 2009 at 11:04 PM, Richard Elling
richard.ell...@gmail.com wrote:
 On Aug 11, 2009, at 7:39 AM, Ed Spencer wrote:

 I suspect that if we 'rsync' one of these filesystems to a second
 server/pool, we would also see a performance increase equal to what
 we see on the development server. (I don't know how zfs send and receive
 work, so I don't know whether it would address this Filesystem Entropy or
 specifically reorganize the files and directories.) However, when we
 created a testfs filesystem in the zfs pool on the production server
 and copied data to it, we saw the same performance as the other
 filesystems in the same pool.

 Directory walkers, like NetBackup or rsync, will not scale well as
 the number of files increases.  It doesn't matter what file system you
 use; the scalability will look more or less similar. For millions of files,
 ZFS send/receive works much better.  More details are in my paper.
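
For anyone unfamiliar with the pattern: an incremental send streams only
the blocks written between two snapshots, which ZFS identifies from
block birth times, so no directory walk happens at all. A minimal sketch
of the usual pipeline (the dataset, snapshot, and host names here are
made up for illustration):

import subprocess

DATASET = "tank/mail"              # hypothetical dataset
OLD, NEW = "backup-1", "backup-2"  # hypothetical snapshot names

# Taking a snapshot is O(1); it does not touch individual files.
subprocess.run(["zfs", "snapshot", f"{DATASET}@{NEW}"], check=True)

# Stream only what changed between OLD and NEW to another machine.
send = subprocess.Popen(
    ["zfs", "send", "-i", f"{DATASET}@{OLD}", f"{DATASET}@{NEW}"],
    stdout=subprocess.PIPE,
)
subprocess.run(
    ["ssh", "backuphost", "zfs", "receive", "tank/mail-copy"],
    stdin=send.stdout,
    check=True,
)
send.stdout.close()
if send.wait() != 0:
    raise RuntimeError("zfs send failed")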

It would be nice if ZFS had something similar to the VxFS File Change
Log (FCL). This feature is very useful for incremental backups and other
directory walkers, provided they support the FCL.
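
To illustrate why a change log helps (a conceptual sketch only; the
record format below is invented and is not the actual VxFS FCL
interface): a walker must stat() every file to find the changed ones,
while a change-log consumer reads only the changes themselves.

import os

def changed_files_by_walking(root, since):
    """Directory-walker approach: stat() every file; cost grows with
    the total number of files, changed or not."""
    changed = []
    for dirpath, _subdirs, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            if os.stat(path).st_mtime >= since:
                changed.append(path)
    return changed

def changed_files_from_log(change_log):
    """Change-log approach: the filesystem already recorded each change,
    so cost grows with the number of changes instead. change_log is a
    hypothetical iterable of (timestamp, path) records."""
    return [path for _timestamp, path in change_log]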

Damjan