Test setup (a rough sketch of the corresponding commands follows this list):
  - E2900 with 12 US-IV+ 1.5GHz processors, 96GB memory, 2x2Gbps FC HBAs, and
    MPxIO in round-robin config.
  - 50x64GB EMC disks presented over both FC paths.
  - ZFS pool defined using all 50 disks.
  - Multiple ZFS filesystems built on the above pool.
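For context, the pool and filesystems were created roughly as below; the pool
name, device paths, and filesystem names are placeholders rather than the
exact ones used:

    # pool across all 50 LUNs (device names are illustrative)
    zpool create tank c2t0d0 c2t0d1 ... c2t0d49

    # a few filesystems on the pool; compression toggled per test run
    zfs create tank/fs1
    zfs create tank/fs2
    zfs set compression=on tank/fs1    # compression=off for the baseline run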

I'm observing the following (the load and measurement commands are sketched
after this list):
  - When the filesystems have compression=off and I do bulk reads/writes (8
    parallel 'cp's running between ZFS filesystems), I observe approximately
    200-250MB/s of consolidated I/O, with writes in the 100MB/s range. I get
    these numbers from 'zpool iostat 5', and the read/write ratio stays the
    same for the duration of the test.
  - When the filesystems have compression=on, reads from the compressed
    filesystems come in waves: zpool iostat reports no read activity for long
    stretches (60+ seconds) while write activity holds steady at 20MB/s, with
    no variation in the write rate throughout the test.
  - The machine is mostly idle during the entire test in both cases.
  - ZFS reports a 4:1 compression ratio for my filesystem.
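The load and measurement were roughly as follows (the paths are placeholders;
the real source and destination filesystems differ):

    # 8 parallel copies between ZFS filesystems
    for i in 1 2 3 4 5 6 7 8; do
        cp -r /tank/fs1/dir$i /tank/fs2/ &
    done

    # consolidated pool throughput, sampled every 5 seconds
    zpool iostat 5

    # compression ratio as reported by ZFS
    zfs get compressratio tank/fs1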

I'm puzzled by the following:
  - Why do reads come in waves with compression=on? It almost feels like ZFS
    reads a batch of data and then compresses it before writing it out. That
    suggests there is no read bottleneck and no starvation of the compression
    routine, since neither the CPUs nor the I/O path is saturated in any shape
    or form.
  - Why, then, does ZFS generate substantially lower write throughput (a
    magical 20MB/s spread evenly across the 50 disks, i.e. 0.4MB/s each)?

Can anybody shed any light on this anomaly? Mr. Bonwick, I hope you're
reading this post.

BTW, we love ZFS and are looking forward to rolling it out aggressively in
our new project. I'd like to take advantage of compression since we're mostly
I/O bound and have plenty of CPU and memory.

Thanks.
 
 