I don't think CPU L2 cache is important for ZFS. A file server will mostly
read files that are not in cache, and an L2 cache measured in megabytes can
never hold a file server work load. You would need a several GB large cache
to do that. With that said, it is possible to later add/remove an SSD drive
as a cache device (L2ARC) to your zpool. With the right loads, you will get
a tremendous performance boost.
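A minimal sketch of adding and removing such an SSD cache device; the pool name "tank" and the device name c2t0d0 are made up for illustration:

```shell
# Add a hypothetical SSD (c2t0d0) as an L2ARC cache device to the pool "tank".
zpool add tank cache c2t0d0

# Cache devices are not part of the pool's redundancy, so they can be
# removed again at any time without risking the data.
zpool remove tank c2t0d0
```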

For a file server, I would use a CPU with a small L2 cache. The reason is that
a large L2 cache will not help a file server work load; it will only consume
power. A CPU with a small L2 cache can reach a 45W TDP. AMD has just recently
released such a CPU.

Use a 64-bit CPU with a small L2 cache, 2-4 cores, and 2GHz or so. That will
give you plenty of horsepower.


With 4 discs, you will get 120MB/sec or so. With 46 discs, you can get
2-3GB/sec read speed and 1GB/sec write speed. The more discs you use, the
faster it gets. With 7 discs, you will get roughly 400MB/sec read speed and
200MB/sec write speed:
http://opensolaris.org/jive/thread.jspa?threadID=54481&tstart=45



Also, a zpool is built up from groups of discs called vdevs. Each vdev can be
raidz1, raidz2, a mirror, etc. You can add more vdevs to a zpool at any time,
but you can never add or remove individual discs from an existing raidz vdev;
its width is fixed. You can, however, swap its discs for larger ones. The more
vdevs you have, the faster the pool, because ZFS stripes across them: it is
faster to have 2 vdevs than one large vdev.
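For example (pool and disc names are made up), the pool grows by whole vdevs, and discs within a vdev can only be swapped, never added:

```shell
# Create a pool from one raidz1 vdev of four discs.
zpool create tank raidz1 c1t0d0 c1t1d0 c1t2d0 c1t3d0

# Grow the pool by adding a second raidz1 vdev; ZFS now stripes across both.
zpool add tank raidz1 c2t0d0 c2t1d0 c2t2d0 c2t3d0

# You cannot add a fifth disc to either vdev, but you can replace a disc
# with a larger one (repeat for every disc to grow that vdev's capacity).
zpool replace tank c1t0d0 c3t0d0
```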

Also, use raidz2 if you plan to use large discs. The reason is that when a
disc breaks, it takes a long time to rebuild (resilver) the raid. With 2TB
discs it may take days; with 4TB discs it may take a week. This is because
drives keep getting larger, but not faster. During that repair time, stress
on the other drives increases, which can cause a second drive to crash. This
happens a lot, and raidz2 survives that second failure where raidz1 does not.
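A back-of-the-envelope sketch of why rebuilds take so long, assuming a sustained ~100MB/sec per disc (a typical figure for a 7200rpm drive, not a measurement):

```shell
# Best case: a resilver has to read/write the entire disc sequentially.
DISC_MB=$(( 4 * 1000 * 1000 ))   # a 4TB disc expressed in MB
RATE=100                         # assumed sustained MB/sec
echo "best case: $(( DISC_MB / RATE / 3600 )) hours"
# prints "best case: 11 hours"
```

And that is the best case; a real resilver does scattered random I/O on a live pool, which is why it stretches into days or a week.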
-- 
This message posted from opensolaris.org
