G'Day Luke,

On Thu, Nov 29, 2007 at 08:18:09AM -0800, Luke Schwab wrote:
> Hi,
> 
> The question is a ZFS performance question regarding SAN traffic.
> 
> We are trying to benchmark ZFS vs. VxFS file systems, and I get the following
> performance results.
> 
> Test Setup: 
> Solaris 10: 11/06
> Dual-port QLogic HBA with SFCSM (for ZFS) and DMP (for VxFS)
> Sun Fire v490 server
> LSI Raid 3994 on backend
> ZFS Record Size: 128KB (default)
> VxFS Block Size: 8KB(default)
> 
> The only things different in the setup for the ZFS vs. VxFS tests are the file
> system itself and an array support module (ASM) that was installed for the
> RAID in the VxFS test case.
> 
> Test Case: Run 'iostat', then write a 1GB file using 'mkfile 1g testfile' and 
> then run iostat again.

It will probably be better to run iostat during the write, rather than
comparing iostat's summary-since-boot output before and after (a comparison
that would be better served by the raw kstats anyway).  E.g.: iostat -xnmpz 5
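
For example, run something like this in one window while the mkfile runs
in another (/tank/fs below is just a placeholder for wherever your test
file system is mounted):

        # window 1: per-5-second device stats during the run
        iostat -xnmpz 5

        # window 2: the write under test
        mkfile 1g /tank/fs/testfile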

ZFS comes with its own iostat version: zpool iostat -v pool
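
For example (the pool name "tank" is just a placeholder; adding an interval
gives you samples over time rather than the since-boot summary):

        zpool iostat -v tank 5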

> ZFS Test Results: The KB written per second averaged around 250KB.
> VxFS Test Results: The KB written per second averaged around 70KB. 

250 Kbytes/sec?  That sounds really wrong for a write benchmark - at that
rate a 1 Gbyte mkfile would take over an hour.  Single disks these days can
deliver between 10 Mbytes/sec and 75 Mbytes/sec for a single-stream write.

At least use a much larger file size than 1 Gbyte (much of which could fit
in a RAM-based file system cache if your system has multiple Gbytes of RAM).
The Kbytes-written-per-second value you are using isn't measured at the
application layer; it only counts what made it all the way to disk.  It
might be crude, but running "ptime mkfile ..." may give a better idea of
application throughput, as it shows the real time taken to create the
1 Gbyte file (though then you may end up comparing what different file
systems consider sync'd, rather than throughput)...

> When I fixed the ZFS record size to 8KB the KB written per second averaged 
> 110KB.
> 
> My questions may be too general to answer here, but I thought I would try.
> 
> Why does ZFS write more traffic to disk than VxFS? Why does ZFS write more 
> traffic to disk when the Record Size is variable instead of fixed in size?

I'd recommend running filebench for filesystem benchmarks, to see what
the results are:

        http://www.solarisinternals.com/wiki/index.php/FileBench

Filebench is able to purge the ZFS cache (export/import) between runs,
and can be customised to match real-world workloads.  It should improve
the accuracy of the numbers.  I'm expecting filebench to become *the*
standard tool for filesystem benchmarks.
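
A quick sketch of a run, just to show the shape of it (the workload name
and directory are only examples, and the exact commands may differ a little
between filebench versions):

        filebench
        filebench> load varmail
        filebench> set $dir=/tank/fs
        filebench> run 60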

Brendan

-- 
Brendan
[CA, USA]
