Guys,

I have an OpenSolaris x86 box running:

SunOS thsudfile01 5.11 snv_111b i86pc i386 i86pc Solaris

This has two old QLA2200 1Gbit FC cards attached. Each bus is connected to an old
transtec FC RAID array, which presents a couple of large LUNs that form a single
large zpool:

r...@thsudfile01:~# zpool status bucket
  pool: bucket
 state: ONLINE
 scrub: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        bucket      ONLINE       0     0     0
          c5t0d0    ONLINE       0     0     0
          c8t3d0    ONLINE       0     0     0

errors: No known data errors
r...@thsudfile01:~# zfs list bucket
NAME     USED  AVAIL  REFER  MOUNTPOINT
bucket  2.69T  5.31T    22K  /bucket

This is being used as an iSCSI target for an ESX 4.0 development environment. I
found the performance to be really poor, and the culprit seems to be write
performance to the raw zvol. For example, on this zvol:

r...@thsudfile01:~# zfs list bucket/iSCSI/lun1
NAME                USED  AVAIL  REFER  MOUNTPOINT
bucket/iSCSI/lun1   250G  5.55T  3.64G  -
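
In case the zvol's settings are part of this, I can post its properties; the
obvious ones to check would be something like:

zfs get volblocksize,compression,primarycache bucket/iSCSI/lun1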

r...@thsudfile01:~# dd if=/dev/zero of=/dev/zvol/rdsk/bucket/iSCSI/lun1 
bs=65536 count=102400
^C7729+0 records in
7729+0 records out
506527744 bytes (507 MB) copied, 241.707 s, 2.1 MB/s

Some samples from zpool iostat 1 1000 during that write:

bucket      2.44T  5.68T      0    203      0  2.73M
bucket      2.44T  5.68T      0    216      0  2.83M
bucket      2.44T  5.68T      0    120  63.4K  1.58M
bucket      2.44T  5.68T      2    350   190K  16.9M
bucket      2.44T  5.68T      0    123      0  1.64M
bucket      2.44T  5.68T      0    230      0  3.02M

Read performance from that zvol (assuming /dev/null behaves properly) is fine:

r...@thsudfile01:/bucket/transtec# dd of=/dev/null 
if=/dev/zvol/rdsk/bucket/iSCSI/lun1 bs=65536 count=204800
204800+0 records in
204800+0 records out
13421772800 bytes (13 GB) copied, 47.0256 s, 285 MB/s

That figure is somewhat optimistic, but iostat shows roughly 100 MB/s.
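
If the zpool-level numbers aren't granular enough, I can also capture
device-level stats during the read and post them, e.g.:

iostat -xnz 1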

Writing to a ZFS filesystem on that zpool is also fine; here is a write big
enough to exhaust the machine's 12GB of memory:

r...@thsudfile01:/bucket/transtec# dd if=/dev/zero of=FILE bs=65536 count=409600
^C
336645+0 records in
336645+0 records out
22062366720 bytes (22 GB) copied, 176.369 s, 125 MB/s

and zpool iostat shows bursts as the cache flushes:

bucket      2.44T  5.68T      0    342      0  38.7M
bucket      2.44T  5.68T      0  1.47K      0   188M
bucket      2.44T  5.68T      0    240      0  21.3M
bucket      2.44T  5.68T      0  1.54K      0   191M
bucket      2.44T  5.68T      0  1.49K      0   191M
bucket      2.44T  5.68T      0    434      0  44.2M

So we can get data down to disk via the cache at a reasonable rate, and reads
from a raw zvol are OK, but raw zvol writes are horribly slow.

Am I missing something obvious? Let me know what info would be diagnostic and
I'll post it...
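
For instance, these are the obvious ones I can grab straight away (happy to run
anything else):

zfs get all bucket/iSCSI/lun1
zpool get all bucket
iostat -xnz 1 30
fcinfo hba-port    (not sure this talks to these old QLA2200s)
echo "::arc" | mdb -k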

Cheers,

Leon