>I was planning to mirror them - mainly in the hope that I could hot swap a new 
>one in the event that an existing one started to degrade. I suppose I could 
>start with one of each and convert to a mirror later although the prospect of 
>losing either disk fills me with dread.

You do not need to mirror the L2ARC devices, as the system will just hit disk 
as necessary. Mirroring sounds like a good idea on the SLOG, but this has been 
much discussed on the forums.
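
For what it's worth, neither choice locks you in: cache and log devices can be 
added to an existing pool later. A rough sketch, assuming a pool called tank 
and made-up device names:

    # add a single L2ARC (read cache) device; no redundancy needed,
    # reads just fall back to the pool if it ever dies
    zpool add tank cache c3t0d0

    # add a mirrored SLOG pair for the ZIL
    zpool add tank log mirror c3t1d0 c3t2d0

(Cache devices can also be taken out again with zpool remove if plans change.)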

>> Why not larger capacity disks?

>We will run out of iops before we run out of space.

Interesting. In my experience IOPS demand tracks the number of VMs much more 
closely than it tracks disk space. 

User: I need a VM that will consume up to 80G in two years, so give me an 80G 
disk.
Me: OK, but recall we can expand disks and filesystems on the fly, without 
downtime (see the sketch after this exchange).
User: Well, that is cool, but 80G to start with please.
Me: <sigh> 
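
Re the on-the-fly growth above, a minimal sketch, assuming the VM sits on a 
zvol or filesystem named tank/vm01 (names invented for the example):

    # grow a backing zvol without downtime; the guest then extends its
    # own partition/filesystem into the new space
    zfs set volsize=80G tank/vm01

    # or, for file-backed VMs on a filesystem, just raise the quota
    zfs set quota=80G tank/vm01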

I also believe the SLOG and L2ARC will make high-RPM disks less necessary. 
But, from what I have read, higher-RPM disks will greatly help with scrubs and 
resilvers. Maybe two pools: one with fast mirrored SAS, another with big SATA. 
Or all SATA, but one pool with mirrors, another with raidz2. Many options. But 
measure to see what works for you; I find Iometer is great for that. 
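
If you go the two-pool route, a sketch of what that might look like (pool 
layout and device names are placeholders, not a recommendation):

    # fast pool: striped mirrors of high-RPM SAS for the IOPS-hungry VMs
    zpool create fast mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0

    # bulk pool: raidz2 of big SATA for space-hungry, lazier workloads
    zpool create bulk raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0

Random IOPS scale roughly with the number of vdevs, so the mirrored pool gets 
better the more (and faster) mirror pairs you feed it.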

>Any opinions on the use of battery backed SAS adapters?

Surely these will help with performance in write-back mode, but I have not done 
any hard measurements. Anecdotally, the PERC 5/i in my Dell 2950 seemed to 
greatly help with IOPS on a five-disk raidz. There are pros and cons. Search 
the forums, but off the top of my head: 1) SLOGs are much larger than 
controller caches; 2) only synchronous write activity goes through the ZIL, 
whereas a controller cache will cache everything, needed or not, and so runs 
out of space sooner; 3) SLOGs and L2ARC devices are specialized caches for 
write and read loads respectively, vs. the all-in-one cache of a controller; 
4) a controller *may* be faster, since it uses RAM for the cache.
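
Whichever way you lean, it is worth watching how much traffic actually lands 
on the log and cache devices before spending money; zpool iostat breaks it out 
per device. For example, with an assumed pool name of tank:

    # per-vdev I/O breakdown every 5 seconds, log and cache devices included
    zpool iostat -v tank 5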

One of the benefits of a SLOG on the shared SAS/SATA bus is for a cluster: if 
one node goes down, the other can bring up the pool, check the ZIL for any 
outstanding transactions, and apply them. To do this with battery-backed 
controller cache, you would need fancy interconnects between the nodes, cache 
mirroring, and so on: all the things that SAN array products do. 
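
The failover itself needs nothing exotic, since ZFS replays any outstanding 
ZIL records automatically when the pool is imported. A sketch, assuming a pool 
named tank that the failed head never exported cleanly:

    # on the surviving node: force the import, then let ZIL replay do its thing
    zpool import -f tank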

Sounds like you have a fun project.
-- 
This message posted from opensolaris.org
