Bart Van Assche wrote:
> Hello,
> 
> I just made a setup in our lab which should make ZFS fly, but unfortunately 
> performance is significantly lower than expected: for large sequential data 
> transfers write speed is about 50 MB/s while I was expecting at least 150 
> MB/s.
> 
> Setup
> -----
> The setup consists of five servers in total: one OpenSolaris ZFS server and 
> four SAN servers. ZFS accesses the SAN servers via iSCSI and IPoIB.
> 
> * ZFS Server
> Operating system: OpenSolaris build 78.
> CPU: Two Intel Xeon CPUs, eight cores in total.
> RAM: 16 GB.
> Disks: not relevant for this test.
> 
> * SAN Servers
> Operating system: Linux 2.6.22.18 kernel, 64-bit + iSCSI Enterprise Target 
> (IET). IET has been configured such that it performs both read and write 
> caching.
> CPU: Intel Xeon CPU E5310, 1.60GHz, four cores in total.
> RAM: two servers with 8 GB RAM, one with 4 GB RAM, one with 2 GB RAM.
> Disks: 16 disks in total: two disks with the Linux OS and 14 set up in RAID-0 
> via LVM. The LVM volume is exported via iSCSI and used by ZFS.
> 
> These SAN servers give excellent performance results when accessed via Linux' 
> open-iscsi initiator.
> 
> * Network
> 4x SDR InfiniBand. The raw transfer speed of this network is 8 Gbit/s. 
> Netperf reports 1.6 Gbit/s between the ZFS server and one SAN server (IPoIB, 
> single-threaded). iSCSI transfer speed between the ZFS server and one SAN 
> server is about 150 MB/s.
> 
> 
> Performance test
> ----------------
> Software: xdd (see also http://www.ioperformance.com/products.htm). I 
> modified xdd such that the -dio command line option enables O_RSYNC and 
> O_DSYNC in open() instead of calling directio().
> Test command: xdd -verbose -processlock -dio -op write -targets 1 testfile 
> -reqsize 1 -blocksize $((2**20)) -mbytes 1000 -passes 3
> This test command triggers synchronous writes with a block size of 1 MB 
> (verified this with truss). I am using synchronous writes because these give 
> the same performance results as very large buffered writes (large compared to 
> ZFS' cache size).
> 
> Write performance reported by xdd for synchronous sequential writes: 50 MB/s, 
> which is lower than expected.
> 
> 
> Any help with improving the performance of this setup is highly appreciated.
> 
> 
> Bart Van Assche.
>  
>  
> This message posted from opensolaris.org
> _______________________________________________
> zfs-discuss mailing list
> [email protected]
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

If I understand this correctly, you've striped the disks together
w/ Linux LVM, then exported a single iSCSI volume to ZFS (or two for
mirroring; which isn't clear).

I don't know how many concurrent IOs Solaris thinks your iSCSI volumes
will handle, but that's one area to examine.  The only way to realize
full performance is going to be to get ZFS to issue multiple IOs to
the iSCSI boxes at once.
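On OpenSolaris of that vintage the per-vdev I/O queue depth is, as far
as I recall, governed by the zfs_vdev_max_pending kernel tunable (worth
verifying this is still the knob on build 78); something like the
following would show and raise it:

```shell
# Print the current per-vdev I/O queue depth (/D = decimal):
echo zfs_vdev_max_pending/D | mdb -k

# Experiment with a deeper queue, e.g. 64 outstanding I/Os per vdev
# (0t marks the value as decimal; -kw writes to the live kernel):
echo zfs_vdev_max_pending/W0t64 | mdb -kw
```

This only changes how many IOs ZFS will keep in flight per device; it
doesn't help if IET serializes them on the far side.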

I'd also suggest just exporting the raw disks to ZFS, and letting it
do the striping.
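That is, one iSCSI LUN per physical disk rather than one big LVM
stripe per box; the pool creation would then look something like this
(device names here are hypothetical; substitute the LUNs as they show
up on the ZFS host):

```shell
# Stripe across all the individual LUNs; ZFS schedules I/O per
# device, so it can keep every spindle busy at once:
zpool create tank \
    c2t1d0 c2t2d0 c2t3d0 c2t4d0 \
    c3t1d0 c3t2d0 c3t3d0 c3t4d0

# Watch per-device I/O to confirm the load is actually spread out:
zpool iostat -v tank 5
```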

On 4 commodity 500 GB SATA drives set up w/ RAID-Z, my 2.6 GHz
dual-core AMD box sustains 100+ MB/sec read or write.... it happily
saturates a gigabit NIC w/ multiple concurrent reads over Samba.

W/ 16 directly attached drives you should see close to 500 MB/sec
sustained IO throughput.


- Bart

-- 
Bart Smaalders                  Solaris Kernel Performance
[EMAIL PROTECTED]               http://blogs.sun.com/barts
"You will contribute more with mercurial than with thunderbird."