I have the following configuration.

My storage:
12 LUNs from a Clariion 3x80. Each LUN is a whole 6-disk RAID-6
group.

My host:
Sun T5240 with 32 hardware threads and 16 GB of RAM.

My zpool:
All 12 LUNs from the Clariion striped into one pool, with no
ZFS-level redundancy.
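
For what it is worth, the pool was created along these lines (the
pool and device names below are stand-ins, not the real c#t#d#
paths):

    # Pool-layout sketch: all 12 Clariion LUNs striped into one pool,
    # no ZFS-level redundancy (the array's RAID-6 provides it).
    zpool create tank \
        c2t0d0 c2t0d1 c2t0d2  c2t0d3 \
        c2t0d4 c2t0d5 c2t0d6  c2t0d7 \
        c2t0d8 c2t0d9 c2t0d10 c2t0d11
    zpool status tank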

My test data:
A 1 GB backup file of a ufsdump from /opt on a machine with lots of
mixed binary/text data.
A 15 GB file that is already tightly compressed.

I wrote some benchmarks and tested; the system is completely idle
except for this testing.
With the 1 GB file, I tested:
record sizes of 8k, 16k, 32k, and 128k
compression set to off, on (the lzjb default), and gzip
The 128k record size was fastest, and gzip compression was fastest.
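
The benchmark loop was essentially the following (the dataset name
and source path are placeholders for whatever you use; ptime is just
the Solaris wall-clock timer):

    #!/bin/ksh
    # Benchmark sketch: write the 1 GB ufsdump file under every
    # recordsize/compression combination and time each run.
    SRC=/var/tmp/opt-ufsdump-1g        # placeholder path
    for rs in 8k 16k 32k 128k; do
        for comp in off on gzip; do
            zfs create -o recordsize=$rs -o compression=$comp tank/bench
            ptime dd if=$SRC of=/tank/bench/testfile bs=128k
            sync
            zfs get -H -o value compressratio tank/bench
            zfs destroy tank/bench
        done
    done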

Using the best of those results (128k records, gzip), I then ran the
torture test: copying a file almost as large as system memory that
was already compressed. The result was the infamous lockup: stutter,
can't kill the cp/dd command, oh god, the system console is
unresponsive too, what has science done?!
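
For anyone who wants to reproduce it, the torture test was roughly
this (paths are placeholders):

    # Torture-test sketch: stream a ~15 GB already-compressed file
    # (nearly the size of the 16 GB of RAM) into a gzip dataset.
    zfs create -o recordsize=128k -o compression=gzip tank/torture
    ptime dd if=/var/tmp/precompressed-15g of=/tank/torture/bigfile bs=128k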

In the past threads I dug up, it seems people were hitting this with
wimpier hardware or with gzip-9. I ran into it with very capable
hardware.

I do not get this behavior with the default lzjb compression, but I
was able to reproduce it with the weaker gzip-3 setting.


Is there a fix for this that I am not aware of? A workaround? gzip
compression works wonderfully with the smaller uncompressed 1-4 GB
files I am trying; it would be a shame to fall back to the weaker
default compression because of this one test case.
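
The fallback I would rather avoid is splitting datasets by how
compressible their data is, something like this (dataset names are
made up):

    # Compromise sketch: gzip only where data is known to compress
    # well, the lzjb default everywhere else.
    zfs set compression=gzip tank/dumps    # mixed binary/text dumps
    zfs set compression=on   tank/media    # already-compressed files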