> It is hard, as you note, to recommend a box without
> knowing the load. How many linux boxes are you
> talking about?

This box will act as a backing store for a cluster of 3 or 4 XenServers with 
upwards of 50 VMs running at any one time.

> Will you mirror your SLOG, or load balance them? I
> ask because perhaps one will be enough, IO wise. My
> box has one SLOG (X25-E) and can support about 2600
> IOPS using an iometer profile that closely
> approximates my work load. My ~100 VMs on 8 ESX boxes
> average around 1000 IOPS, but can peak 2-3x that
> during backups.

I was planning to mirror them - mainly in the hope that I could hot-swap in a 
replacement if an existing one started to degrade.  I suppose I could start 
with one of each and convert to a mirror later, although the prospect of losing 
either disk in the meantime fills me with dread.
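
For what it's worth, either approach is just a couple of zpool commands; a 
sketch (the pool name "tank" and the cXtYdZ device names are assumptions, not 
from this thread):

```shell
# Add a mirrored SLOG from day one:
zpool add tank log mirror c4t0d0 c4t1d0

# Or start with a single log device...
zpool add tank log c4t0d0
# ...and attach a second device later to convert it to a mirror:
zpool attach tank c4t0d0 c4t1d0
```

The attach route means the pool runs with an unmirrored log until the second 
device arrives, which is exactly the window you're dreading.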
 
> Don't discount NFS. I absolutely love NFS for
> management and thin provisioning reasons. Much easier
> (to me) than managing iSCSI, and performance is
> similar. I highly recommend load testing both iSCSI
> and NFS before you go live. Crash consistent backups
> of your VMs are possible using NFS, and recovering a
> VM from a snapshot is a little easier using NFS, I
> find.

That's interesting feedback.  Given how easy it is to create NFS and iSCSI 
shares in osol, I'll definitely try both and see how they compare.
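
For anyone following along, this is roughly what "easy" looks like in osol - a 
sketch with example dataset names and sizes (the iSCSI side assumes COMSTAR is 
installed):

```shell
# NFS: sharing a filesystem is a property on the dataset:
zfs create tank/vms
zfs set sharenfs=rw tank/vms

# iSCSI: carve out a zvol and export it through COMSTAR:
zfs create -V 200G tank/vm-lun0
sbdadm create-lu /dev/zvol/rdsk/tank/vm-lun0
# add a view for the LU GUID printed by the previous command,
# then create a target for initiators to log in to:
stmfadm add-view <LU-GUID>
itadm create-target
```

The NFS path also gives you per-VM visibility from the filer side, which helps 
when snapshotting or restoring a single VM.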
 
> Why not larger capacity disks?

We will run out of IOPS before we run out of space.  It's more likely that we 
will gradually replace some of the SATA drives with 6 Gbps SAS drives to help 
with that, and we've been mulling over an LSI SAS 9211-8i controller to 
provide that upgrade path:

http://www.lsi.com/storage_home/products_home/host_bus_adapters/sas_hbas/internal/sas9211-8i/index.html

> Hopefully your switches support NIC aggregation?

Yes, we're hoping that a bond of 4 x NICs will cope.
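
On osol that bond would look something like the following sketch (LACP mode 
and the e1000g interface names are assumptions - the switch ports need to be 
configured for LACP as well):

```shell
# Aggregate four physical NICs into one logical link using LACP:
dladm create-aggr -L active -l e1000g0 -l e1000g1 -l e1000g2 -l e1000g3 aggr1

# Plumb and address the aggregated interface (example address):
ifconfig aggr1 plumb
ifconfig aggr1 192.168.1.10/24 up
```

Worth remembering that a 4-way aggregation helps aggregate throughput across 
many clients; a single TCP stream still only uses one member link.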

Any opinions on the use of battery-backed SAS adapters?  It also occurred to 
me after writing this that we could perhaps use one configured to report 
writes as flushed to disk before they actually were.  That might give a slight 
performance edge in some cases, but I would prefer to have the data security 
instead, tbh.

Matt.
-- 
This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss