Re: [Hampshire] Virtualization Project advice
On Fri, Dec 12, 2008, Simon Capstick wrote:
> That's a good comprehensive summary by David. I'll only add our
> experience FWIW...

One more experience story FWIW. Summary: KVM for the adventurous; VirtualBox (ease of use) or VMware (Server for simplicity, ESX(i) for speed) for the less adventurous or those averse to non-GPL software.

I've used Xen both at work and at home, and generally I've been happy with it. However, the Xen host kernel is stuck at 2.6.18, and recently that's become a big problem (driver support at home, occasional spontaneous reboots at work).

VMware Server and VirtualBox work the same way - slowly (well, not too bad TBH). ESX and ESXi are better, but they aren't paravirtualised, though there are some paravirtual drivers for them. KVM can be paravirtualised, and everyone is starting to use it.

Our new Dell 2950 (like a 1950 but physically bigger, for more disks) supports hardware virtualisation - however, I had to turn it on in the BIOS (ditto for power management). It has a hardware RAID card with battery-backed cache (no point in hardware RAID unless you have that battery-backed cache IMO).

So I've set up half a dozen OSes (Dapper, Etch, RHEL 4 and Solaris 10) on this over the last week - all worked fine, some paravirtualised. I've also moved my six Xen domains at home over to KVM, also fine, all paravirtualised.

Whilst I'm confident that KVM is the way of the future (Ubuntu, and Red Hat, which has bought the developers of KVM), the management tools are very rough and immature ATM. The virt-manager GUI is almost useless. There are various gotchas - e.g. using libvirt to manage the domains, without adding "" to the libvirt XML file a "graceful" shutdown wasn't graceful (it just forced the machine straight off).

I found KVM networking very easy actually - it's just network bridging. However, one thing I miss from Xen is the ability to pass through PCI cards to underlying domains (this is being added to KVM but may depend upon quite rare hardware support).
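[Editor's sketch: the element stripped from the archive above (rendered as "") is plausibly the ACPI feature flag - without it a KVM guest typically never receives the shutdown request, so a "graceful" shutdown falls back to forcing the machine off. A hypothetical minimal fragment of a libvirt domain definition, names and values illustrative:]

```xml
<!-- Illustrative libvirt domain definition fragment (names assumed).
     The <acpi/> feature lets the guest receive graceful shutdown requests. -->
<domain type='kvm'>
  <name>example-guest</name>
  <memory>524288</memory>
  <features>
    <acpi/>
  </features>
</domain>
```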
Adrian -- Please post to: Hampshire@mailman.lug.org.uk Web Interface: https://mailman.lug.org.uk/mailman/listinfo/hampshire LUG URL: http://www.hantslug.org.uk --
Re: [Hampshire] Virtualization Project advice
David Bell wrote:
> Imran Chaudhry wrote:
>> Hey all, I'm embarking on a project involving virtualization and
>> thought I'd consult the list in a wisdom-of-crowds fashion :-) ...

That's a good comprehensive summary by David. I'll only add our experience FWIW...

We've opted for free/GPL software where possible for our virtualisation. We're using Xen as provided by Debian (an old version, I have to admit). It has served us well for several years.

We're using an HP DL380 with redundant PSUs, a SAS RAID and a remote (web) management card. It all works with Debian, including the management card's serial console access to the server. For even higher availability you can mirror DIMM pairs, or keep one DIMM pair as a spare, or just use the whole lot.

I've had better experience with HP hardware and Linux than with Dell hardware. HP even supports Debian (installation) on certain server hardware, and I believe they use it internally. They're my first port of call for new server hardware now.

The RAID has a battery-backed cache - a must for reliable operation. Also ensure you use a decent UPS and have the server shut down on a low-battery condition. This is easy with an APC SmartUPS with an Ethernet management card.

We run 11 (DomU) servers on the DL380, such as mail, Samba, proxy, database and our own server apps. We back up the DomUs to another box which is ready to run any of the backup DomUs with a single command.

Our biggest bottleneck has been storage I/O. The CPUs are barely touched in comparison when averaged over 24 hours. I could improve write performance by moving from RAID6 to RAID10, but we need the storage space. I could also upgrade to 15K SAS disks, but again we need the space, and you can buy bigger and cheaper 10K SAS drives. Much I/O performance is gained by enabling the RAID card's _write_ cache - but you must have a battery-backed cache for this, otherwise you risk losing data and breaking your filesystem _and_ journal.

I'm starting to take a more serious look at KVM.
It's proving a real boon on my desktop PC when I need access to a Windows install. I think KVM will be your ultimate solution, as it will be for us, although we're sticking firmly with Xen for now, at least for Linux VMs.

I believe the creators of KVM have a product to help manage VMs, but it seems to be orientated towards Windows XP desktops. There must be GPL graphical tools to help with KVM and Xen by now; it's worth a look around. As for Xen, the commercial offering has a management GUI. I'm afraid I've never really spent much time looking for management GUIs, so I can't help you there.

Your Dell is fine for most virtualisation purposes, but if you do buy hardware ensure the processor has virtualisation extensions - hard not to with servers, but do check. Buy as much RAM as you can afford - you don't want one VM to start swapping and affect your other VMs (done that!). This is where KVM and Xen differ: you allocate fixed amounts of memory to Xen VMs (DomUs), whereas KVM shares one big pool of memory, with the VMs being processes (others on the list will be able to explain the difference with more clarity/accuracy!).

Storage is easier to upgrade later should you need more. We've opted to use low-latency SAS disks in a RAID for VMs and back up to cheaper, slower SATA disks.

Simon C.
Re: [Hampshire] Virtualization Project advice
Imran Chaudhry wrote:
> Hey all, I'm embarking on a project involving virtualization and
> thought I'd consult the list in a wisdom-of-crowds fashion :-)

Hi Imran,

> At my workplace we use virtualization to support test and development
> of our products. That is, we have a team of about 8 staff creating VMs
> of a custom Linux distribution and sometimes many such VMs connected
> in virtual networks. There is also a requirement to add VMs of 4 or 5
> other staff (each of those with up to 200Gb of VMs) to this workload.

Yes, I'm vaguely familiar with it ;)

> Basically, I'm wondering what folks use to provide a reliable,
> fast and highly-available virtualization infrastructure for
> internal use only to serve the above usage scenario. In my research,
> it seems the big virtualization vendors are geared to folks
> using constantly running VMs that are serving websites, databases etc.

At the University of Southampton we currently use Microsoft Virtual Server for this purpose, but we're in the process of migrating to VMware ESX ("VMware Infrastructure") 3 on a number of servers, for both "production" virtual machines and development/testing virtual machine services.

> Our existing infrastructure is creaking a bit - solely for VMWare we
> have a single Dell 1750 1U with Dual Xeon 2.4Ghz (with hyper-threading
> stuff it acts like 4 cores), 4 GBs of RAM (10Gb of swap) and SCSI
> discs in hardware RAID5 using the Dell hardware raid gubbins. We have
> problems with slow response, weird ARP issues and suchlike. I should
> add that it has a Gigabit ethernet nic but our network is not all
> gigabit yet but thats coming soon. This is with VMWare Server 1.0.4
> (Free edition).

Although old, a PE1750 is still a very good server and that specification isn't "low-end". I'd recommend keeping it to supplement whatever you buy in future. It lacks on-CPU virtualisation extensions, but it will still do a good job. Are you still using the Debian Xen server for "internal" VMs?
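[Editor's aside: a quick way to check whether a host CPU advertises hardware virtualisation extensions is to look for the vmx (Intel VT-x) or svm (AMD-V) flags. A minimal sketch; note a BIOS switch can still leave the feature disabled even when the flag appears:]

```shell
# Print "yes" if /proc/cpuinfo advertises hardware virtualisation
# extensions (Intel vmx or AMD svm flags), "no" otherwise.
if grep -qE '\b(vmx|svm)\b' /proc/cpuinfo; then
    echo yes
else
    echo no
fi
```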
> We look to spend time to save money so the free versions of software
> from the "big 2 or 3" look very attractive. I have been looking at
> VMWare (Free or ESXi), Citrix Xen (XenServer 5 Express) and
> VirtualIron. My users need some kind of GUI client or agent to perform
> management and admin of their own VMs/networks (and ideally kept
> separate from everyone else). This means a Linux GUI client (for
> Ubuntu/Fedora would be lovely) is a MUST. I take it as read that each
> vendor has a Windows client. We're using the free VMWare Workstation
> at the moment which has a Linux client. I'm finding it hard to find
> out who exactly has a Linux client and their websites are often
> confusing if one is new to virtualization. (sometimes Wikipedia can
> help cut through the marketing puff in these kinds of cases :-)

You have a number of options open to you. I'm not going to mention paravirtualisation options or "operating system level" options because they won't work for your custom Linux setup (unless you heavily modify the kernel, which I doubt you'll want to do...). These options are all free to use:

- VMware ESXi
This option is easy to install, easy to use and totally free. It is probably the best-performing solution as well. However, it lacks a Linux GUI client, is limited when it comes to permissions (in that it has virtually none) and has local usernames and passwords (unless you hack it!).

- VMware Server
Will be, on average, about 20% slower than the ESXi solution. It is easy to install, however, easy to use, and again totally free. It has a Linux client (and a Windows client), and permissions are slightly better (either "Private" or "Shared" virtual machines). The new version, 2, also has a web interface. It uses whatever authentication system the host operating system uses.

- VirtualBox 2.0
VirtualBox will perform about the same as VMware Server, perhaps a little slower. It is easy to set up and easy to use, but networking can be a little tricky on GNU/Linux.
I'm not sure how multi-user works with VirtualBox. If you use the official version (not the Open Source version) you can use an RDP client to connect to the console of all of the virtual machines - a very nice feature.

- Xen on GNU/Linux
This is much harder to set up and maintain (unless you use something like CentOS 5.x). Performance isn't bad, but it isn't fantastic either - it won't be as fast as ESXi. Managing virtual machines and accessing their consoles is tough - there are a number of third-party add-ons, but none of them are really up to scratch. If you use CentOS/RHEL then you can use the Red Hat GUI tools. I'd also advise not going near Xen - the open source version has never really seen much love and attention; most work goes into the commercial XenServer.

- KVM on GNU/Linux
You could use KVM on Ubuntu 8.04. This has graphical tools, and setup isn't that difficult. Networking is a bit of a pain, but it is possible to get working. Performance might be a little better th
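[Editor's sketch: the KVM networking mentioned above is usually handled by putting the host's NIC into a software bridge that guest virtual NICs attach to. A minimal Debian/Ubuntu fragment, assuming interface names eth0/br0 and the bridge-utils package installed:]

```
# /etc/network/interfaces -- illustrative fragment (eth0/br0 assumed;
# requires bridge-utils). Guests attach their virtual NICs to br0.
auto br0
iface br0 inet dhcp
    bridge_ports eth0
```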
[Hampshire] Virtualization Project advice
Hey all, I'm embarking on a project involving virtualization and thought I'd consult the list in a wisdom-of-crowds fashion :-)

At my workplace we use virtualization to support test and development of our products. That is, we have a team of about 8 staff creating VMs of a custom Linux distribution, and sometimes many such VMs connected in virtual networks. There is also a requirement to add the VMs of 4 or 5 other staff (each of those with up to 200GB of VMs) to this workload.

Basically, I'm wondering what folks use to provide a reliable, fast and highly-available virtualization infrastructure, for internal use only, to serve the above usage scenario. In my research, it seems the big virtualization vendors are geared to folks using constantly running VMs that are serving websites, databases etc.

Our existing infrastructure is creaking a bit - solely for VMware we have a single Dell 1750 1U with dual Xeon 2.4GHz (with hyper-threading it acts like 4 cores), 4GB of RAM (10GB of swap) and SCSI discs in hardware RAID5 using the Dell hardware RAID gubbins. We have problems with slow response, weird ARP issues and suchlike. I should add that it has a Gigabit Ethernet NIC, but our network is not all gigabit yet - that's coming soon. This is with VMware Server 1.0.4 (Free edition).

We look to spend time to save money, so the free versions of software from the "big 2 or 3" look very attractive. I have been looking at VMware (Free or ESXi), Citrix Xen (XenServer 5 Express) and VirtualIron. My users need some kind of GUI client or agent to perform management and admin of their own VMs/networks (and ideally kept separate from everyone else). This means a Linux GUI client (for Ubuntu/Fedora would be lovely) is a MUST. I take it as read that each vendor has a Windows client. We're using the free VMware Workstation at the moment, which has a Linux client.
I'm finding it hard to find out who exactly has a Linux client, and their websites are often confusing if one is new to virtualization. (Sometimes Wikipedia can help cut through the marketing puff in these kinds of cases :-)

Regarding hardware, I was looking at going one of two ways:

a) Expensive - a Dell 1950 plus DAS (direct-attached storage) such as their PowerVault MD1000 with some fast SAS discs. This particular Dell has support for VMware ESXi at the firmware level, which effectively turns it into a "VMware appliance" with the resulting VMs having a smaller footprint.

b) Cheaper - several cheaper Dells such as the PowerEdge T105 with, say, 8GB RAM each and fast SATA drives acting as a cluster. That is, each node with its own VMs running, but the whole cluster with one central management console. Both XenSource and VMware have features to allow moving VMs between nodes in a cluster without interruption to the running VM.

I'm going to be at the meeting this Saturday, so if anyone there is willing to chat to me about their own virtualization infrastructure experiences then I am all ears :-)

Cheers,
Imran