- Dennis J. denni...@conversis.de wrote:
> What I'm aiming for as a starting point is a 3-4 host cluster with
> about 10 VMs on each host and a 2 system DRBD based cluster as a
> redundant storage backend.
That's a good idea.
> The question that bugs me is how I can get enough bandwidth between the
> hosts and the storage to provide the VMs with reasonable I/O
> performance.
You may also want to investigate whether a criss-cross replication setup
(1A-2a, 2B-1b) is worth the complexity to you. That will spread the load
across both DRBD hosts and give you approximately the same fault tolerance at
slightly higher risk. (This is assuming the risk-performance tradeoff is
important enough to your project.)
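A minimal sketch of what that criss-cross layout could look like in drbd.conf
(hostnames, disks, and addresses are all placeholders, not taken from your
setup) — each node is primary for one resource and secondary for the other:

```text
# Resource ra: runs primary on store1, replicates to store2
resource ra {
  on store1 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.1:7788;
    meta-disk internal;
  }
  on store2 {
    device    /dev/drbd0;
    disk      /dev/sdb1;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}

# Resource rb: roles swapped, runs primary on store2
resource rb {
  on store1 {
    device    /dev/drbd1;
    disk      /dev/sdc1;
    address   10.0.0.1:7789;
    meta-disk internal;
  }
  on store2 {
    device    /dev/drbd1;
    disk      /dev/sdc1;
    address   10.0.0.2:7789;
    meta-disk internal;
  }
}
```

Normal operation then serves half the VMs from each node; on failure, the
surviving node takes over both resources.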
> If all the 40 VMs start copying files at the same time that would mean
> that the bandwidth share for each VM would be tiny.
Would they? It's a possibility, and fun to think about, but what are the
chances? You will usually run into this with backups, cron, and other scheduled
[non-business-load] tasks. These are far cheaper to fix by manually adjusting
the schedules than any other way, unless you are rolling in dough.
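For example, staggering is just a matter of offsetting the minute field in
each guest's crontab (job name and times here are made up):

```text
# guest 1: nightly backup at 01:00
0 1 * * * /usr/local/bin/backup.sh
# guest 2: same job, offset 20 minutes
20 1 * * * /usr/local/bin/backup.sh
# guest 3: offset another 20 minutes
40 1 * * * /usr/local/bin/backup.sh
```

Even a crude offset like this keeps 40 guests from hammering the SAN in the
same minute.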
> Would I maybe get away with 4 bonded gbit ethernet ports? Would I
> require fiber channel or 10gbit infrastructure?
Fuck FC, unless you want to get some out-of-date, used, gently broken, or
no-name stuff, or at least until FCoE comes out. (You're probably better off
getting unmanaged IB switches and using iSER.)
Can't say whether 10GbE would even be enough, but it's probably overkill. Add
up the PCI(-whatever) bus speeds of your hosts, then benchmark your current
load or realistically estimate what your 95th-percentile utilization would be
across the board, multiply the bus total by that percentage, and fudge the
result upward for SLAs and whatnot. Maybe go ahead and do some FMEA and see
whether losing a host or two is going to push the others past that bandwidth.
If you find that 10GbE may be necessary, a lot of mobos and SuperMicro gear
have a better price per port for DDR IB (maybe QDR now), and that may save you
some money. Again, probably overkill. Check your math. :)
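The back-of-the-envelope math above looks roughly like this — every number
here (host count, bus speed, utilization, fudge factor) is an illustrative
assumption, not a measurement; plug in your own:

```python
# Rough SAN bandwidth estimate, following the steps above.
# All inputs are made-up examples -- substitute real benchmarks.

hosts = 4                  # VM hosts sharing the storage backend
bus_gbit_per_host = 8.0    # usable bus bandwidth feeding the NICs, Gbit/s
p95_utilization = 0.30     # 95th-percentile share of that bus actually used
sla_fudge = 1.5            # headroom for SLAs, bursts, DRBD resync, etc.

# Per-host 95th-percentile demand, with fudge factor applied.
per_host_gbit = bus_gbit_per_host * p95_utilization * sla_fudge   # 3.6

# Aggregate demand the storage backend has to absorb.
total_gbit = hosts * per_host_gbit                                # 14.4

# Compare against four bonded GbE ports per host.
bond_gbit = 4 * 1.0
print(per_host_gbit, total_gbit, per_host_gbit <= bond_gbit)
```

With these toy numbers the four bonded GbE ports squeak by per host, but the
aggregate 14.4 Gbit/s is what the DRBD pair and switches have to carry — which
is where the FMEA step (re-run it with `hosts - 1` carrying the full load)
earns its keep.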
Definitely use bonding. Definitely make sure you aren't going to saturate the
bus that the card (or cards, if you are worried about losing an entire adapter)
is plugged into. If you're paranoid, get switches that can bond across
supervisors or across physically separate fixed-configuration switches. If you
can't afford those, you may want to opt for 2Nx2N bonding-bridging. That would
limit you to probably two quad-port 1GbE cards per host, just for your SAN, but
that's probably plenty. Don't waste your money on iSCSI adapters. Just get ones
with TOEs.
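For reference, a bonded quad on CentOS looks roughly like this (interface
names, addresses, and the 802.3ad mode are assumptions — your switch has to
support whatever mode you pick, and 802.3ad needs the switch configured for
it):

```text
# /etc/modprobe.conf
alias bond0 bonding
options bond0 mode=802.3ad miimon=100

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=10.0.0.10
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none

# /etc/sysconfig/network-scripts/ifcfg-eth0  (repeat for eth1..eth3)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none
```

Mode balance-alb is an option if your switches can't do 802.3ad, at the cost
of some per-flow balancing quirks.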
--
Christopher G. Stach II
http://ldsys.net/~cgs/
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt