Re: [zfs-discuss] [osol-discuss] ZFS read performance terrible

2010-07-29 Thread Karol
Hi Eric - thanks for your reply.
Yes, zpool iostat -v

I've re-configured the setup into two pools for a test:
1st pool: 8 disk stripe vdev
2nd pool: 8 disk stripe vdev
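
Roughly what I ran to build them was something like this (from memory, so treat
it as a sketch rather than the exact commands; the device names are the ones
that show up in the iostat output below, one entry per member disk):

zpool create edit1 c0t5000C50020C7A44Bd0 c0t5000C50020C7C9DFd0 ...
zpool create test1 c0t5000C500103F48FFd0 c0t5000C500103F49ABd0 ...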

The SSDs are currently not in the pool, since I'm not even reaching what the
spinning rust is capable of; I believe I have a deeper issue, and the SSDs would
only complicate things for me at this point.
I can reconfigure the pool however needed, since this server is not yet in 
production.

My test runs over an 8Gb FC target, exported via COMSTAR, from a Windows workstation.
The pool is currently configured with a default 128k recordsize.

Then I:
touch /pool/file
stmfadm create-lu -p wcd=false -s 10T /pool/file
stmfadm add-view <LU GUID>
(The LU defaults to reporting a 512-byte block size.)
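As a side note, if I want the LU to report a larger block size to match the
NTFS cluster size, my understanding is that stmfadm can set it at creation time
via the blk property; I haven't done that for the numbers below, so treat this
as a sketch:
stmfadm create-lu -p wcd=false -p blk=4096 -s 10T /pool/file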

I formatted the volume NTFS with the default 4k cluster size.
I do that twice (two separate pools, two separate LUNs, etc.).

Then I copy a large file (700MB or so) from the local workstation to one of the
LUNs.
The read performance of my workstation's hard drive is about 100+ MB/s, and so
the file copies at about that speed.
Then I make a few copies of the file on that LUN so that I have 20+ GB of the
same file on one of the LUNs.
Then I reboot the OpenSolaris server (since by this point the cache is nicely
populated and everything is running fast).
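(If a full reboot ever gets inconvenient, I believe exporting and re-importing a
pool should also drop its data from the ARC; something like the following, though
I've stuck with rebooting to be safe:
zpool export test1
zpool import test1
and the same for the other pool.)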

Then I try copying the lot of those files from one LUN to the other.
The read performance appears to be limiting my write performance.

I have tried matching the recordsize to the NTFS cluster size at 4k, 16k, 32k, and 64k.
I have tried making the NTFS cluster size a multiple of the recordsize.
I have seen performance improvements as a result (I don't have numbers);
however, none of the cluster-size/recordsize combinations brought me to where I
should be on reads.
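
For reference, the recordsize changes were along these lines (and since
recordsize only applies to blocks written after the change, the LU backing file
has to be rewritten or recreated for the new size to actually take effect):

zfs set recordsize=64k pool
zfs get recordsize pool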

I've tried many configurations, and I've seen my performance fluctuate up and
down here and there.  However, it's never on par with what it should be, and
reads seem to be the limiting factor.

For clarity, here's some 'zpool iostat -v 1' output from my current
configuration, taken directly after a reboot of the server
while copying 13GB of those files from LUN to LUN:



                            capacity     operations    bandwidth
pool                     alloc   free   read  write   read  write
---  -  -  -  -  -  -

~snip~

edit1                    13.8G  16.3T    773      0  96.5M      0
  c0t5000C50020C7A44Bd0  1.54G  1.81T 75  0  9.38M  0
  c0t5000C50020C7C9DFd0  1.54G  1.81T 89  0  11.2M  0
  c0t5000C50020C7CE1Fd0  1.53G  1.81T 82  0  10.3M  0
  c0t5000C50020C7D86Bd0  1.53G  1.81T 85  0  10.6M  0
  c0t5000C50020C61ACBd0  1.55G  1.81T 83  0  10.4M  0
  c0t5000C50020C79DEFd0  1.54G  1.81T 92  0  11.5M  0
  c0t5000C50020CD3473d0  1.53G  1.81T 84  0  10.6M  0
  c0t5000C50020CD5873d0  1.53G  1.81T 87  0  11.0M  0
  c0t5000C500103F36BFd0  1.54G  1.81T 92  0  11.5M  0
---  -  -  -  -  -  -
syspool  35.1G  1.78T  0  0  0  0
  mirror 35.1G  1.78T  0  0  0  0
c0t5000C5001043D3BFd0s0  -  -  0  0  0  0
c0t5000C500104473EFd0s0  -  -  0  0  0  0
---  -  -  -  -  -  -
test1                    11.0G  16.3T    850      0   106M      0
  c0t5000C500103F48FFd0  1.23G  1.81T 95  0  12.0M  0
  c0t5000C500103F49ABd0  1.23G  1.81T 92  0  11.6M  0
  c0t5000C500104A3CD7d0  1.22G  1.81T 92  0  11.6M  0
  c0t5000C500104A5867d0  1.24G  1.81T 97  0  12.0M  0
  c0t5000C500104A7723d0  1.22G  1.81T 95  0  11.9M  0
  c0t5000C5001043A86Bd0  1.23G  1.81T 96  0  12.1M  0
  c0t5000C5001043C1BFd0  1.22G  1.81T 91  0  11.3M  0
  c0t5000C5001043D1A3d0  1.23G  1.81T 91  0  11.4M  0
  c0t5000C5001046534Fd0  1.23G  1.81T 97  0  12.2M  0
---  -  -  -  -  -  -

~snip~

Here's some zpool iostat (no -v) output over the same time:


              capacity     operations    bandwidth
pool      alloc   free   read  write   read  write
--------  -----  -----  -----  -----  -----  -----

~snip~

edit1     13.8G  16.3T      0      0      0      0
syspool   35.1G  1.78T      0      0      0      0
test1     11.9G  16.3T      0    956      0   120M
--------  -----  -----  -----  -----  -----  -----
edit1     13.8G  16.3T      0      0      0      0
syspool   35.1G  1.78T      0      0      0      0
test1     11.9G  16.3T    142    564  17.9M  52.8M
--------  -----  -----  -----  -----  -----  -----
edit1     13.8G  16.3T      0      0      0      0
syspool   35.1G  1.78T      0      0      0      0
test1     11.9G  16.3T    723      0  90.3M      0
--------  -----  -----  -----  -----  -----  -----
edit1  

Re: [zfs-discuss] [osol-discuss] ZFS read performance terrible

2010-07-29 Thread Karol
Sorry, I said the two iostats were run at the same time; the second was actually
run after the first, during the same file copy operation.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss