Just to share my experience.
Since the beginning of my experience with OpenNebula I have changed
storage technology a few times, and I have now found a solution that
works pretty well for my needs.
First: there is no solution that is always right; every solution works
in a particular situation. My situation is simply a budget one, where
the purpose is to save money while still having a solution that can
handle my I/O load.
In my private cloud I have 5 KVM servers and a total of 20-30 VMs: some
are compute-only VMs (mail scanner, DNS, etc.), others are I/O
intensive (MySQL server for a LAMP stack, syslog server, etc.). The
latter are the problem, because the previous solutions didn't meet
their I/O needs.
My first solution was a Linux storage server exporting a (c)LVM volume
via AoE (chosen simply because it has less overhead than iSCSI): the
load on the server was very high and the I/O throughput was poor. The
lesson I learned: if you have to build shared storage, don't use a
commodity server; use an optimized one with high-speed disks, powerful
controllers, 10 Gb Ethernet... or simply buy an FC SAN ;-)
My second test was clustered storage with MooseFS, and here I had
conflicting results. I have a customer with a well-working setup of 10
storage servers and 10 Xen hypervisors; in my private cloud, with only
3 storage servers (one master and two chunkservers), the I/O was as
slow as with the first solution: no benefit from using two servers to
balance the I/O load, and two 1 Gb Ethernet cards in bonding were
another bottleneck. The lesson I learned here: to have powerful
clustered storage you have to use many servers.
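For completeness, such a setup is roughly this (a minimal sketch;
hostnames and paths are placeholders, and the real config files have
many more options):

    # On each chunkserver, /etc/mfs/mfshdd.cfg lists the local
    # directories that MooseFS may fill with chunks:
    /mnt/disk1
    /mnt/disk2

    # On each hypervisor, mount the filesystem from the master:
    mfsmount /mnt/mfs -H mfsmaster.example.local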
My third (and last) setup, which works pretty well, is... not to use a
single shared storage at all, but to distribute the storage across the
hypervisors.
My reasoning was this: my old hypervisors are Dell R410s with 2
quad-core CPUs and 32 GB of RAM; if I can fit into a 1U rack case 2
Mini-ITX motherboards, each with a Core i7 (not quite the Xeon of the
R410, but not too far off) and 16 GB of RAM... it's practically the
same. And if I can also fit 4 high-speed disks like WD VelociRaptors
for the data and two SSDs for the OS, I can use DRBD in Active/Active
mode with cLVM or GFS2 to have decent HA storage for the VMs
instantiated on this "double-server".
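A minimal sketch of the DRBD resource for one double-server (hostnames,
disks and IPs are examples; a real dual-primary setup also needs
fencing and a cluster manager for cLVM/GFS2):

    # /etc/drbd.d/vmdata.res -- Active/Active (dual-primary) resource
    resource vmdata {
      protocol C;                # synchronous replication, required here
      net {
        allow-two-primaries;     # both nodes may be Primary at once
        after-sb-0pri discard-zero-changes;
        after-sb-1pri discard-secondary;
        after-sb-2pri disconnect;
      }
      startup {
        become-primary-on both;
      }
      on nodeA {
        device    /dev/drbd0;
        disk      /dev/sdb;      # array of VelociRaptor data disks
        address   10.0.0.1:7788; # direct link between the two boards
        meta-disk internal;
      }
      on nodeB {
        device    /dev/drbd0;
        disk      /dev/sdb;
        address   10.0.0.2:7788;
        meta-disk internal;
      }
    }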
Now I have 4 Mini-ITX double-servers, each with this configuration; in
OpenNebula, each double-server is a separate "cluster" and the DRBD
disk backs the datastore associated with that cluster.
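In OpenNebula terms the mapping looks roughly like this (a sketch;
names are examples, and the exact flags and datastore attributes depend
on your OpenNebula version and transfer driver):

    # One cluster per double-server, with its two nodes
    onecluster create double1
    onehost create node-a --im kvm --vm kvm --net dummy
    onehost create node-b --im kvm --vm kvm --net dummy
    onecluster addhost double1 node-a
    onecluster addhost double1 node-b

    # A datastore on the GFS2 filesystem that sits on the DRBD device
    cat > ds_double1.tmpl <<EOF
    NAME   = "double1_ds"
    DS_MAD = fs
    TM_MAD = shared
    EOF
    onedatastore create ds_double1.tmpl
    onecluster adddatastore double1 double1_ds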
Certainly I can't migrate a VM across two clusters, but at the moment
this solution fits my performance needs pretty well, each
"double-server" costs less than a real 1U server, and the power
consumption in the datacenter has decreased.
My 2 cents,
Alberto
--
AZ Network Specialist
via Mare, 36A
36030 Lugo di Vicenza (VI)
ITALY
P.I. IT04310790284
http://www.azns.it
Tel +39.3286268626
Fax +39.0492106654