Brian wrote:
> Interesting comments.
> 
> But I am confused.
> 
> Performance for my backups (compression/deduplication) would most likely not be
> my #1 priority.
> 
> I want my VMs to run fast - so is it deduplication that really slows things
> down?
Dedup requires a fair amount of CPU, but it really wants a big L2ARC and RAM. I'd seriously consider no less than 8GB of RAM, and look at getting a smaller-sized (~40GB) SSD, something on the order of an Intel X25-M.
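To put a ballpark on the RAM side: each unique block in the pool needs a dedup-table (DDT) entry of somewhere around 300+ bytes, so 1TB of unique data at the default 128K recordsize is roughly 8 million blocks, i.e. on the order of 2-3GB of DDT that you'd like to keep in ARC/L2ARC. If you want a number for your actual data before committing, recent builds let you simulate it (the pool name below is just a placeholder):

  zdb -S tank

which walks the pool, builds a simulated DDT, and prints a histogram plus the projected dedup ratio.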

Also, iSCSI-served VMs tend to do mostly random I/O, which is better handled by a striped mirror than by RaidZ.
> Are you saying raidz2 would overwhelm current I/O controllers to the point where
> I could not saturate a 1 Gb network link?
No.

> Is the CPU I am looking at not capable of doing dedup and compression?  Or are
> no CPUs capable of doing that currently?  If I only enable it for the backup
> filesystem, will all my filesystems suffer performance-wise?
All the CPUs you indicate can handle the job; it's a matter of getting enough data to them.
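Also note that compression and dedup are both per-dataset properties, so you can enable them on just the backup filesystem and leave the VM datasets untouched, e.g. (pool/dataset names are placeholders):

  zfs set compression=on tank/backup
  zfs set dedup=on tank/backup

The one caveat is that the DDT itself is pool-wide and lives in ARC/L2ARC, so if it doesn't fit, the memory pressure from dedup can still be felt by the other filesystems' caching.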

> Where are the bottlenecks in a raidz2 system that I will only access over a
> single gigabit link?  Are they insurmountable?
RaidZ is good for large streaming writes, where you should get throughput roughly N times that of a single disk, with N being the number of data drives; likewise for streaming reads. Small random writes, however, generally limit the vdev to about the performance of a single disk, regardless of how many data drives are in the RaidZ. Small reads fall somewhere in between.
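To put rough numbers on it (assuming ordinary 7200rpm disks at ~100 MB/s streaming and ~100 random IOPS each, and a 6-drive raidz2, i.e. 4 data drives): streaming I/O scales to something like 4 x 100 MB/s = ~400 MB/s, far more than a single gigabit link (~110 MB/s) can carry, while small random writes stay around the ~100 IOPS of one disk. So the gigabit link isn't the problem for large sequential transfers; it's the small random I/O from VMs where RaidZ falls behind a striped mirror.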


Personally, I'd look into having 2 different zpools - a striped mirror for your iSCSI-shared VMs, and a raidz2 for your main storage. In any case, for dedup, you really should have an SSD for L2ARC, if at all possible. Being able to store all the metadata for the entire zpool in the L2ARC really, really helps speed up dedup.
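As a sketch (the disk device names below are just placeholders for whatever your controller enumerates), the two-pool layout could look like:

  zpool create vmpool mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0
  zpool create tank raidz2 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c1t9d0 c1t10d0 cache c1t8d0

i.e. a pool of striped mirrors for the iSCSI-shared VMs, and a separate raidz2 pool for bulk storage and backups, with the SSD attached as L2ARC to whichever pool you actually run dedup on (here, the backup pool).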


Also, about your CPU choices, look here for a good summary of the current AMD processor features:

http://en.wikipedia.org/wiki/List_of_AMD_Phenom_microprocessors

(this covers the Phenom, Phenom II, and Athlon II families).


The main difference between the various models comes down to the amount of L3 cache and the HyperTransport (HT) link speed. I'd be interested in doing some benchmarking to see exactly how those variations affect performance.


--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA
