I've been experimenting with oVirt 3.2 on some old hardware, and I'm now preparing to buy new hardware to run oVirt 3.3 in production. I'd appreciate any feedback on what I plan to purchase. I want to keep the setup as simple as possible. Our current environment consists mostly of CentOS 6.4 systems.

The combined oVirt engine and file server will be a Dell R720 with dual Xeon E5-2660 CPUs and 64 GB of memory. The server would have an LSI 9207-8i HBA connected to the SAS backplane. The R720 enclosure has 16 x 2.5" disk slots: I would use 2 x 500 GB NL-SAS drives for a mirrored md root (RAID1), use 12 slots for a RAID10 array of 10K RPM SAS drives (either 600 GB or 900 GB), and keep the remaining 2 as hot spares. The data storage would hold the virtual machines and some associated data. The OS would be CentOS 6.4.
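For reference, here's a rough sketch of how I'd build the arrays with mdadm (the root mirror would really be set up in the installer; the device names here are assumptions, since the actual ordering depends on the backplane):

    # RAID1 root mirror across the two 500 GB NL-SAS drives
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

    # RAID10 data array: 12 active 10K SAS drives plus the 2 hot spares
    mdadm --create /dev/md1 --level=10 --raid-devices=12 --spare-devices=2 \
        /dev/sd[c-p]1

    # record the arrays so they assemble at boot
    mdadm --detail --scan >> /etc/mdadm.conf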

The nodes would be 3 x Dell R620, each with dual Xeon E5-2690 CPUs, 128 GB of memory, and just a single small NL-SAS root drive. There would be no other local storage; all VMs would use the file server as the datastore. The nodes would run oVirt Node.

In terms of networking, each machine would have 4 ports: 2 x 1 Gb (bonded) giving the machines access to the "public" network (which we do not control), and 2 x 10 Gb copper connected to a locally installed 10G copper switch that we fully control - one port used for storage, and one for management/consoles/VM migration.
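On CentOS 6 I'd expect the public bond to be the standard ifcfg setup, something like the following (the interface names are assumptions - on the R620s they'd likely be em1/em2 - and active-backup is just the conservative mode choice, since we don't control the public switches):

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    BONDING_OPTS="mode=active-backup miimon=100"
    ONBOOT=yes
    BOOTPROTO=dhcp
    NM_CONTROLLED=no

    # /etc/sysconfig/network-scripts/ifcfg-em1 (and the same for em2)
    DEVICE=em1
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none
    NM_CONTROLLED=no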

A few additional notes ...

I chose to stick with software RAID (md) on the file server, mostly for cost and simplicity. I have a lot of good experience with MD, and performance seems reasonable.
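Performance is easy enough to sanity-check once the array is built; here's a rough sketch of the kind of test I'd run (fio is in EPEL, and these parameters are just a starting point, not a tuned benchmark):

    # sequential write test against the RAID10 array (writes only to the test file)
    fio --name=seqwrite --filename=/data/fio.test --size=4g \
        --bs=1m --rw=write --direct=1

    # and keep an eye on array state while testing
    cat /proc/mdstat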

I would have gone with an SSD for the file server root disk, but the cost from Dell for their SSDs is prohibitive, and I want the whole system to be covered by the warranty. NL-SAS is the cheapest disk that will be supported for the duration of the warranty period (with Dell servers, SATA drives are only warranted for 1 year).

As for the nodes with one NL-SAS drive each: I've thought about replacing that with simply an SD card. It's not clear if this is the best solution, or how much space I would need on the card. At least when I configure via the Dell web site, the biggest SD card it seems I can purchase with a server is 2 GB, which doesn't seem like very much! I guess people buy bigger cards separately. I know a disk will work, and will give me more than enough space with no hassle.

I've chosen to keep the setup simple by using NFS on the file server, but I see a lot of people here experimenting with the new Gluster capabilities in oVirt 3.3. It's not clear whether that's being used in production, or how reliable it is. I really can't find information on performance tests with Gluster and oVirt, in particular comparing NFS against Gluster. Would there be a performance advantage to using Gluster here? How would it work - by adding disks to the nodes and getting rid of the file server (or at least turning it into a smaller engine-only server)? How would this impact the nodes in terms of their ability to handle VMs (performance)?

I presently have no experience with Gluster whatsoever, though I'm certainly never against learning something new, especially if it benefits my project. Unfortunately, as I'm sure everyone can attest, the trouble is finding enough hours in the day :) One thing is for sure - Gluster, while maybe not TOO complicated, is still more complicated than an NFS-only setup.
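For what it's worth, neither option looks hard to configure on paper. Here's a minimal sketch of each - the hostnames, paths, and replica count are all assumptions on my part, and I haven't tested the Gluster side at all:

    # NFS: export a directory from the file server
    mkdir -p /data/ovirt
    chown 36:36 /data/ovirt            # vdsm:kvm ownership, which oVirt expects
    echo '/data/ovirt *(rw,sync)' >> /etc/exports
    exportfs -ra

    # Gluster alternative: a replicated volume across the three nodes
    gluster peer probe node2
    gluster peer probe node3
    gluster volume create vmstore replica 3 \
        node1:/bricks/vmstore node2:/bricks/vmstore node3:/bricks/vmstore
    gluster volume start vmstore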

As I've mentioned before, we don't use LDAP for authentication, so I'll be restricted to one admin user for the moment, unless I set up a separate infrastructure just for oVirt authentication. That will be fine for a little while. I understand that work may be underway on pluggable authentication for oVirt. I'm not sure if that ties into any of the items on Itamar's list though. Itamar? :) I was hoping to see the pluggable authentication model sooner rather than later so that I could write something to work with our custom auth system.

In terms of power management - my existing machines are using a Raritan KVM with Raritan power management dongles and power bars. I haven't had an opportunity to see whether oVirt can manage these devices, but I guess if oVirt can't, I can continue to manage power through the KVM interface.
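Looking at it a little, the fence-agents package does seem to include a fence_raritan agent (for the Dominion PX, I believe), so it might be worth testing from a node's shell before deciding. Something like the following, where the address, credentials, and outlet number are all placeholders (and `fence_raritan --help` on the EL6 build would confirm the exact options):

    # query the power status of one outlet on the Raritan PDU
    fence_raritan --ip=pdu1.example.com --username=admin \
        --password=secret --plug=3 --action=status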

Any feedback would be much appreciated.

Thanks for your time.

Jason Keltz
