Re: [CentOS-virt] (KVM) How can I migrate VM in a non shared storage environment?

2010-06-24 Thread Dennis J.
This can be useful in some cases:
http://www.bouncybouncy.net/ramblings/posts/xen_live_migration_without_shared_storage/

With the blocksync.py script on that page you can make a first copy of the 
block device while the VM is still running. Then shut down the VM and make 
another run; this time only the blocks that changed since the first sync 
have to be copied over. Depending on HD/CPU/net performance this can reduce 
the downtime a bit.
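
A rough sketch of the two-pass approach (argument order taken from that
page as far as I remember it -- check the script's usage line; the device
path and hostname are just examples):

# first pass while the guest is still running (slow, but no downtime):
python blocksync.py /dev/vg0/guest_disk desthost /dev/vg0/guest_disk

# shut the guest down, then a second, much shorter pass that only copies
# the changed blocks:
python blocksync.py /dev/vg0/guest_disk desthost /dev/vg0/guest_disk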

Regards,
   Dennis

On 06/24/2010 11:22 PM, C.J. Adams-Collier wrote:
 Note: the -x argument will keep the copy to a single filesystem (it won't cross mount points)

 On Thu, 2010-06-24 at 14:12 -0300, Lucas Timm LH wrote:
 Create a new virtual machine on your storage. After this, boot some
 Linux distribution in your new virtual machine (I like SysrescueCD).
 Enable the ssh server, set a root password, then go back to
 your old virtual server and type:


 # dd if=/dev/sda | ssh root@new_vm "dd of=/dev/sda"


 Type the root password, shut down the old VM and reboot your new VM.


 (PS: You don't need to shut down the old VM for this process.)


 I do this every time. I don't like copying the disk contents with cp, tar or
 rsync because they try to copy /proc, /dev and a lot of virtual
 filesystems. With dd you copy just the raw disk blocks, boot sector and all.

 2010/6/24 C.J. Adams-Collier c...@colliertech.org
  I often use rsync -a for remote systems or cp -a for local systems.
  I've also used dd.  You can have dd output to stdout, pipe it to ssh and
  have ssh output to dd on the other end.

  You can also connect to a SAN device on the source and dd from the local
  block device to the SAN device.
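
  For the SAN variant, a minimal sketch (the device paths are just
  examples; the target is whatever block device the SAN LUN shows up as
  on the source host):

  dd if=/dev/VolGroup00/guest_disk of=/dev/mapper/san_lun bs=1M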

  Lots of ways to do it ;)

  Cheers,

  C.J.


  On Thu, 2010-06-24 at 10:52 -0400, Kelvin Edmison wrote:
  
  
    On 24/06/10 7:17 AM, Poh Yong Hwang yong...@gmail.com wrote:
  
   I have a server running CentOS 5.5 with KVM capabilities. I need to
   migrate all the VMs to another server with the exact same hardware
   specs. The problem is that it is running on individual hard disks, not
   shared storage. What is the best way to migrate to minimise downtime?
  
  I've had good success using dd and nc (netcat) to copy the contents of
  a disk or disk image from one machine to another, and verifying the
  copy was successful with an md5sum or sha1sum of both the original and
  copied disk.
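  
  A minimal sketch of that approach (hostname, port and device paths are
  just examples; the netcat listen syntax varies between netcat flavours):
  
  # on the destination host:
  nc -l -p 7000 | dd of=/dev/VolGroup00/guest_disk bs=1M
  # on the source host:
  dd if=/dev/VolGroup00/guest_disk bs=1M | nc desthost 7000
  # afterwards, on both sides:
  md5sum /dev/VolGroup00/guest_disk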
  
Kelvin
  




 --
 Lucas Timm, Goiânia/GO.
 http://timmerman.wordpress.com

 (62) 9157-0789





___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] [fedora-virt] Thoughts on storage infrastructure for small scale HA virtual machine deployments

2010-03-07 Thread Dennis J.
On 03/02/2010 04:51 AM, Ask Bjørn Hansen wrote:

 On Mar 1, 2010, at 18:56, Dennis J. wrote:

 The question that bugs me is how I can get enough bandwidth between the
 hosts and the storage to provide the VMs with reasonable I/O performance.
 If all the 40 VMs start copying files at the same time that would mean that
 the bandwidth share for each VM would be tiny.

 It really depends on the specific workloads.  In my experience it's generally 
 the number of IOs per second rather than the bandwidth that's the limiting 
 factor.

 We have a bunch of 4-disk boxes with md raid10 and we generally run out of 
 disk IO before we run out of memory (~24-48GB) or CPU (dual quad core 2.26GHz 
 or some such).

That's very similar to what we are experiencing. The primary problem for me 
is how to deal with the bottleneck of a shared storage setup. The simplest 
setup is a 2-system criss-cross setup where the two hosts also serve as the 
two halves of a DRBD cluster. The advantages of this approach are that it's 
a cheap solution, that only part of the storage traffic has to go over the 
network between the machines, and that the network only has to handle the 
storage traffic of the VMs on those two machines.

The disadvantage of that approach is that you have to keep 50% of the 
potential server capacity free in case the twin node fails. That's quite a 
lot of wasted capacity.

To reduce that problem you can increase the number of hosts, say to four, 
which would reduce the spare capacity needed on each system to 25%, but 
then you really need to separate the storage from the hosts and now you 
have a bottleneck on the storage end. Increase the number of hosts to 8 and 
you waste even less capacity, but you also increase the pressure on the 
storage bottleneck a lot.
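
As a back-of-the-envelope rule: with N hosts sized to survive the failure 
of one node, each host has to keep roughly 1/N of its capacity free, so 
2 hosts -> 50% spare, 3 -> ~33%, 4 -> 25%, 8 -> 12.5%.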

Since I'm new to the whole SAN aspect I'm currently just looking at all the 
options that are out there, and basically wondering how the big boys who 
have hundreds if not thousands of VMs running handle this while still being 
able to deal with physical failures.

That is why I find the sheepdog project so interesting: it seems to address 
this particular problem in a way that would provide almost linear 
scalability without actually using a SAN at all (well, at least not in the 
traditional sense of the word).

Regards,
   Dennis
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


[CentOS-virt] Thoughts on storage infrastructure for small scale HA virtual machine deployments

2010-03-01 Thread Dennis J.
Hi,
up until now I've always deployed VMs with their storage located directly 
on the host system, but as the number of VMs grows and the hardware becomes 
powerful enough to handle more virtual machines, I'm getting concerned 
about a failure of the host taking down too many VMs in one go.
As a result I'm now looking at moving to an infrastructure that uses shared 
storage instead, so I can live-migrate VMs or restart them quickly on 
another host if the one they are running on dies.
The problem is that I'm not sure how to go about this bandwidth-wise.
What I'm aiming for as a starting point is a 3-4 host cluster with about 10 
VMs on each host and a 2 system DRBD based cluster as a redundant storage 
backend.
The question that bugs me is how I can get enough bandwidth between the 
hosts and the storage to provide the VMs with reasonable I/O performance.
If all the 40 VMs start copying files at the same time that would mean that 
the bandwidth share for each VM would be tiny.
Granted, this is a worst-case scenario, which is why I want to ask whether 
someone here has experience with such a setup and can give recommendations 
or comment on alternative setups. Would I maybe get away with 4 bonded Gbit 
ethernet ports? Would I require Fibre Channel or 10 Gbit infrastructure?
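
To put rough numbers on the worst case: 4 bonded Gbit ports give at best 
about 4 x ~120 MB/s = ~480 MB/s, which spread evenly over 40 VMs is roughly 
12 MB/s per VM; a single 10 Gbit link (~1.2 GB/s) would still only give 
about 30 MB/s per VM.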

Regards,
   Dennis

PS: The sheepdog project (http://www.osrg.net/sheepdog/) looks interesting 
in that regard but apparently still is far from production-ready.
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] VirtIO with CentOS 5.4

2010-01-21 Thread Dennis J.
On 01/21/2010 10:59 PM, Bill McGonigle wrote:
 On 01/21/2010 04:08 PM, Fabian Arrotin wrote:
 The standard kvm from 5.4 and not the*old*  one from extras

 Hrm, this is probably where I'm going wrong.  I have kvm -36 from
 -extras (which bails if you specify a virtio type device).  I'm not
 seeing kvm in the repos for 5.4 or updates/5.4 on mirror.centos.org, i.e.:

 http://mirror.centos.org/centos-5/5.4/os/i386/CentOS/

 Which version should I be seeing for 5.4?

AFAIK Red Hat only supports KVM for the x86_64 architecture so if you want 
to use it on i386 you have to build your own packages.

Regards,
   Dennis
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] KVM management tools.....

2010-01-12 Thread Dennis J.
On 01/12/2010 05:22 PM, Tom Bishop wrote:
 Looking at what my best options are for managing KVM via a GUI.  Running
 CentOS 5.4, I have several machines and want to migrate off of VMware
 Server 2.x.  So far it appears that the management tools haven't quite
 caught up to VMware, but they are gaining ground and closing the gap.  I
 have been looking at ConVirt and others.  I like what I see in oVirt but
 I'm not sure it is available for CentOS 5.4, or is it?  Is anyone running
 oVirt on CentOS?  Also, what are folks using as their management tools for
 KVM?  Thanks.

oVirt looks interesting but is too immature at this point for my taste. 
Once the project stabilizes and maybe does a 1.0 release it will be worth 
another look.

I'm mostly using virt-manager and shell tools combined with nagios/cacti 
for monitoring/graphing the systems, though I'm looking at Zabbix right 
now, which can do both and seems to be a very nice project all around.

Regards,
   Dennis
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] virt-manager issues?

2009-11-23 Thread Dennis J.
Found the cause of the problem. It seems virt-manager parses both its own 
config files and those under /etc/xen. The config files for the old VMs 
were still in /etc/xen, but after the virsh edit libvirt also created new 
ones for the renamed VMs. This apparently confused virt-manager. After 
removing the old config files in /etc/xen things look OK now.
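
Roughly, with hypothetical VM names (moving the old files aside is safer 
than deleting them outright):

virsh list --all                    # confirm only the new names show up
mkdir -p /root/xen-config-backup
mv /etc/xen/oldvmname /root/xen-config-backup/
service libvirtd restart            # then restart virt-manager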

Looks like the rename case is not handled properly by libvirt (the old 
files should be removed after the new ones are created).

Regards,
   Dennis

On 11/23/2009 06:56 PM, Andri Möll wrote:
 Maybe restarting libvirtd on the host helps.  Or
 virt-manager --debug --no-fork # might say something informative.


 Andri


 On Mon, 2009-11-23 at 17:26 +0100, Dennis J. wrote:
 Hi,
 A short while ago I renamed two VMs by shutting them down, lvrenaming the
 storage devices and adjusting the storage path and vm name using virsh 
 edit.
 This works fine so far and virsh list shows them correctly, however
 virt-manager has gone bonkers: it still shows them with the old names,
 alternates between the states Shutoff and Running with every display
 refresh, and shows the CPU usage alternating between 0% and 100%. All other
 VMs on the host are fine and are displayed correctly by virt-manager.

 Does anybody know what the problem could be and how to fix it? While this
 issue seems to be display-related rather than an actual problem with the
 VMs, it's pretty irritating to say the least.

 Regards,
 Dennis

___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] XEN and RH 6

2009-11-10 Thread Dennis J.
On 11/10/2009 04:13 PM, Pasi Kärkkäinen wrote:
 On Tue, Nov 10, 2009 at 05:12:50PM +0200, Pasi Kärkkäinen wrote:
 On Tue, Nov 10, 2009 at 03:49:59PM +0100, Dennis J. wrote:
 On 11/10/2009 03:35 PM, Grant McWilliams wrote:

  Both Novell and Oracle have been deeply involved in Xen lately; both
  are developing and supporting their own products based on Xen.

  -- Pasi




 I have no problem with a better solution than Xen because to be honest
 it's a pain sometimes but at this point virtually all enterprise VM
 deployments are either based on VMware ESX or Xen (Xenserver,
 VirtualIron, Amazon AWS, Oracle, Sun SVM, Redhat and Suse). This tide
 will change as KVM becomes more dominant in the VM space but I don't see
 that happening for a while. I'm also a bit skeptical as to how well a
 fully virtualized system (KVM) will run in comparison to a fully
 paravirtualized system (Xen PV). I have a system with 41 VMs on it and
 I'll be having 2 weeks of planned downtime in the near future. I'd like
 to see how these systems run under KVM.

 I've been wondering about the definition of PV in the context of KVM/Xen.
 In the Linux-on-Linux case, PV for Xen practically means that in the HVM
 case I have to access block devices via /dev/hda, while in the PV case I
 can use the faster /dev/xvda. With KVM, which apparently only supports
 HVM, I can still install a guest using the virtio drivers, which seem to do
 the same thing as the paravirtualized devices on Xen.

 So what is the KVM+virtio case, if not paravirtualization?


 KVM+virtio means you're using paravirtualized disk/net drivers in a
 fully virtualized guest, where Qemu emulates a full PC, BIOS and all.
 Only the disk/net virtio drivers bypass the Qemu emulation.
 (Those are the most important and most heavily used devices.)

 Xen paravirtualized guests run natively on Xen; there's no need for
 emulation since the guest kernels are aware that they're being
 virtualized. There's no Qemu emulating PC hardware with a BIOS for PV guests.


 Oh, and Xen also has PV-on-HVM drivers for HVM fully virtualized guests
 to bypass Qemu :)

Which, I guess, makes describing a guest as fully virtualized or 
paravirtualized rather pointless, given that there is now just a degree of 
paravirtualization depending on the drivers you use.
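
In libvirt terms the difference boils down to the disk's target bus, e.g. 
(a minimal sketch; device and path names are just examples):

    <disk type='block' device='disk'>
      <source dev='/dev/vg0/guest'/>
      <target dev='hda' bus='ide'/>      <!-- emulated by Qemu -->
    </disk>

    <disk type='block' device='disk'>
      <source dev='/dev/vg0/guest'/>
      <target dev='vda' bus='virtio'/>   <!-- paravirtualized virtio-blk -->
    </disk>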

Regards,
   Dennis
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] img partitioning swap

2009-11-01 Thread Dennis J.
On 11/01/2009 10:51 AM, Manuel Wolfshant wrote:
 On 11/01/2009 08:37 AM, Brett Worth wrote:
 Christopher G. Stach II wrote:

 I'd recommend not using LVM inside the images, because if you just have
 a raw disk image in there with regular partitions you can mount it on
 dom0 (with losetup) for maintenance.  I don't think that would be
 possible with LVM.

 But it is.


 I guess that's informative so why don't I feel informed? :-)

 OK.  I'll bite.  How?
 using the procedure described at
 http://www.centos.org/docs/5/html/5.2/Virtualization/sect-Virtualization-How_To_troubleshoot_Red_Hat_Virtualization-Accessing_data_on_guest_disk_image.html

It should be mentioned that it's important not to accept the default volume 
group name when using LVM, as that will lead to a collision in a case such 
as this, where the VG name of both host and guest might end up being 
VolGroup00. I hope RHEL/CentOS 6 chooses better defaults, based on the 
hostname for example.
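
From memory, the procedure in that document boils down to something like 
this (image path and VG names are just examples):

kpartx -av /var/lib/xen/images/guest.img   # map the partitions in the image
vgscan                                     # pick up the guest's VG
vgchange -ay GuestVG                       # gets messy if it's also named VolGroup00
mount /dev/GuestVG/LogVol00 /mnt/guest
# ...and the reverse when done:
umount /mnt/guest
vgchange -an GuestVG
kpartx -d /var/lib/xen/images/guest.img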

Regards,
   Dennis
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


[CentOS-virt] Xen to KVM migration

2009-10-12 Thread Dennis J.
Hi,
I'm thinking about how to go about migrating our Xen VMs to KVM. Migrating 
the configuration should be easy using the virsh dumpxml/define commands, 
but what is the best way to transfer the (logical volume based) images 
without too much downtime for the guest systems?

Can rsync operate on logical volumes? If so, I could use dd to transfer an 
initial copy of the image to the destination host, then shut down the 
guest, rsync the logical volumes (which shouldn't take too long, as not 
much data has to be transferred thanks to the initial dd), and then boot 
the guest on the new machine.
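
For the initial copy I was thinking of something along these lines (LV and 
host names are just examples):

dd if=/dev/vg0/guest_disk bs=1M | ssh root@newhost "dd of=/dev/vg0/guest_disk bs=1M"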

Is something like this possible or would you do something different?

Regards,
   Dennis
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] Xen to KVM migration

2009-10-12 Thread Dennis J.
On 10/12/2009 06:17 PM, Grant McWilliams wrote:

 On Mon, Oct 12, 2009 at 5:48 AM, Dennis J. denni...@conversis.de wrote:

  Hi,
  I'm thinking about how to go about migrating our Xen VMs to KVM.
  Migrating the configuration should be easy using the virsh dumpxml/define
  commands, but what is the best way to transfer the (logical volume based)
  images without too much downtime for the guest system?

  Can rsync operate on logical volumes? If so, I could use dd to transfer
  an initial copy of the image to the destination host, then shut down the
  guest, rsync the logical volumes (which shouldn't take too long, as not
  much data has to be transferred thanks to the initial dd), and then boot
  the guest on the new machine.

  Is something like this possible or would you do something different?

 Regards,
Dennis


 Can't you just use the LV in place with KVM?

I may be wrong about this, but isn't running KVM on top of the Xen 
hypervisor a problem? Maybe this has changed, but I thought that in order 
to use KVM you first have to disable the Xen hypervisor and boot into the 
regular (non-Xen) kernel.

Regards,
   Dennis
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] Resizing disks for VMs

2009-09-28 Thread Dennis J.
On 09/28/2009 06:37 PM, Fabian Arrotin wrote:
 Dennis J. wrote:
 Hi,
 Is there a way to make a PV xen guest aware of a size change of the host
 disk? In my case I'm talking about a Centos 5.3 host using logical volumes
 as storage for the guests and the guests running Centos 5.3 and LVM too.
 What I'm trying to accomplish is to resize the logical volume for the guest
 by adding a few gigs and then make the guest see this change without
 requiring a reboot. Is this possible maybe using some kind of bus rescan in
 the guest?


 No, it's not possible, unfortunately. On a traditional SCSI bus you can
 rescan the whole bus to see newly added devices, or rescan just one device
 to see its new size, but not on a Xen domU.
 At least that's what I found when I blogged about it. See this thread
 on the Xen list:
 http://lists.xensource.com/archives/html/xen-users/2008-04/msg00246.html

 So what I do since then is use LVM in the domU as well and add a new
 xvd block device to the domU (i.e. a new LV on the dom0), followed by the
 traditional pvcreate/vgextend/lvextend. Working correctly for all my
 domU's so far.

I just tested this and it works great, thanks!
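
For the record, the sequence is roughly (names and sizes are examples):

# on the dom0: create a new LV and attach it to the guest
lvcreate -L 10G -n guest1_disk2 vg_dom0
xm block-attach guest1 phy:/dev/vg_dom0/guest1_disk2 xvdb w

# in the domU: grow the guest's VG and filesystem onto the new device
pvcreate /dev/xvdb
vgextend VolGroup00 /dev/xvdb
lvextend -L +10G /dev/VolGroup00/LogVol00
resize2fs /dev/VolGroup00/LogVol00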

Regards,
   Dennis
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] High CPU usage when running a CentOS guestinVirtualBox

2009-09-14 Thread Dennis J.
On 09/14/2009 04:53 PM, Akemi Yagi wrote:
 On Mon, Sep 14, 2009 at 7:24 AM, Hildebrand, Nils, 232
 nils.hildebr...@bamf.bund.de  wrote:
 Hi Akemi,

 KVM uses a para-virtualized approach?

 Not at this moment according to this Red Hat virtualization guide:

 http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5.4/html/Virtualization_Guide/chap-Virtualization-Guest_operating_system_installation_procedures.html#sect-Virtualization-Installing_Red_Hat_Enterprise_Linux_5_as_a_para_virtualized_guest

Ugh, I guess that means my plans to switch from Xen to KVM have to wait 
until RHEL 6 is released.

I'm wondering why that is, though. Since 5.3 the kernel has shipped with 
the virtio drivers, and you can install a paravirtualized guest under e.g. 
Fedora 11, so I'm not sure what actually prevents PV from working with KVM 
in 5.4.

Regards,
   Dennis
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] The best network to use.

2009-07-27 Thread Dennis J.
On 07/25/2009 10:24 PM, Christopher G. Stach II wrote:
 - Richrhd...@gmail.com  wrote:

 When using a para-virtualized guest, which network type should be
 used?

 Are you asking whether NAT vs. bridged is better?  If so, it doesn't matter.  
 The guest is virtualized, remember?  It doesn't know or care about what's 
 underneath.  What matters is your physical host's network.


That might be true for desktop virtualization, but when virtualizing a 
server you probably want to go for the bridge, so that your system is 
reachable from the outside and is a proper member of your network 
infrastructure.
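
On CentOS one way to set up the host side of the bridge is two files in 
/etc/sysconfig/network-scripts (interface names and addresses are examples; 
under Xen the xend network-bridge script can also set this up for you):

# ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BRIDGE=br0

# ifcfg-br0
DEVICE=br0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0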

Regards,
   Dennis
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


[CentOS-virt] I/O load distribution

2009-07-27 Thread Dennis J.
Hi,
What is the best way to deal with I/O load when running several VMs on a 
physical machine with local or remote storage?

What I'm primarily worried about is the case where several VMs cause disk 
I/O at the same time. One example would be the updatedb cronjob of the 
mlocate package. If you have, say, 5 VMs running on a physical system with 
a local software RAID-1 as storage and they all run updatedb at the same 
time, all of them run really slowly because they starve each other 
fighting over the disk.

What is the best way to soften the impact of such a situation? Does it 
make sense to use a hardware RAID instead? How would the RAID type affect 
performance in this case? Would the fact that the I/O load gets distributed 
across multiple spindles in, say, a 4-disk hardware RAID-5 have a big 
impact on this?

I'm currently in the situation where I fear that random disk I/O from too 
many VMs on a physical system could cripple their performance even though I 
have plenty of CPU cores/RAM left to run them.

Has anyone experience with this problem and maybe some data to shed some 
light on this potential bottleneck for virtualization?

Regards,
   Dennis
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] I/O load distribution

2009-07-27 Thread Dennis J.
On 07/27/2009 04:53 PM, Karanbir Singh wrote:
 On 07/27/2009 02:15 PM, Dennis J. wrote:
 Hi,
 What is the best way to deal with I/O load when running several VMs on a
 physical machine with local or remote storage?


 have you looked at :

 http://sourceforge.net/apps/trac/ioband/


Yes, I've taken a look at that but before I get to the tuning on the 
software side I want to get a feel for the options and their impact on the 
hardware side.

Regards,
   Dennis
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] I/O load distribution

2009-07-27 Thread Dennis J.
On 07/27/2009 05:12 PM, Ben Montanelli wrote:
 I am certainly no expert on Xen. I have read through the docs and various
 threads a bit, considering the I/O demands, and have the impression that
 there are a couple of primary factors to work with (please correct me if
 I'm wrong). My comprehension is far from complete.

 1 - Select which scheduler, weight and cap to use. Some favor
 computation over I/O and vice versa. If your domUs are fighting over
 disk I/O this is the referee, and it enforces the rules YOU choose to keep
 things as fair and efficient as possible.

 Suggested read, document date apparently Jul-18-2009:
 http://cseweb.ucsd.edu/~dgupta/papers/per07-3sched-xen.pdf
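 
 With the credit scheduler those knobs are set per domain, e.g. (a sketch;
 the domain name and values are just examples):
 
 xm sched-credit -d mydomain -w 512 -c 50   # double the default weight, cap at 50% of one CPU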


Thanks for that. I'll probably move most of my Xen VMs over to KVM as soon 
as that becomes a viable option, but this should help me get my bearings 
with regard to the overall scheduling and I/O topics.

Regards,
   Dennis
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


[CentOS-virt] No cpu throttling for Xeon E5405?

2009-05-22 Thread Dennis J.
Hi,
we bought some machines with 2 x quad-core Xeon E5405 processors and 
installed CentOS 5.3 on them. My problem is that I can't get the cpuspeed 
service to work. No driver seems to claim responsibility for the throttling, 
and the fallback "modprobe acpi_cpufreq" in the cpuspeed init script just 
yields a "No such device" message. According to the ACPI information the 
CPUs should support this just fine:

cat /proc/acpi/processor/CPU0/info:
processor id:0
acpi id: 0
bus mastering control:   yes
power management:no
throttling control:  yes
limit interface: yes

cat /proc/acpi/processor/CPU0/throttling:
state count: 8
active state:T0
states:
*T0:  00%
 T1:  12%
 T2:  25%
 T3:  37%
 T4:  50%
 T5:  62%
 T6:  75%
 T7:  87%

At least half of the cores aren't really used at the moment under non-peak 
load, so we are wasting quite a bit of power with this. Any ideas on how to 
get this working?
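
One thing I still want to rule out (if I understand the driver right, 
acpi_cpufreq only binds when the CPU advertises Enhanced SpeedStep / ACPI 
P-states; the throttling states above are T-states, which are something 
else):

grep -cw est /proc/cpuinfo   # 0 would mean the cores don't report the EIST flag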

Regards,
   Dennis
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


[CentOS-virt] Centos 5.2, Xen and /lib[64]/tls

2009-02-13 Thread Dennis J.
Hi,

I'm setting up a few machines for virtualization using Xen on CentOS 5.2 
x86_64. A lot of how-tos out there tell me to do something like mv 
/lib/tls /lib/tls.disabled (or similar), or else Xen might not work 
correctly. Is this still relevant, or does it only apply to older versions 
of Xen/CentOS? If it is still necessary, what is the best way to disable 
this permanently so that the directory doesn't get recreated after an 
update?

Regards,
   Dennis
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt


[CentOS-virt] Mixed dom0/domU usage?

2009-02-04 Thread Dennis J.
Hi,
I'm wondering about the impact of using both dom0 and domUs on a server at 
the same time. I'm worried about the performance impact of running a MySQL 
server in a domU, so now I'm thinking about moving the MySQL part of a LAMP 
setup into dom0 and running a few Apache guests as domUs. Since the Apaches 
will serve mostly from an NFS share, they won't have much impact on disk 
I/O, so the database should be able to utilize the local storage without 
much interference from the guests. The plan is to limit dom0 to, let's say, 
4 GB of RAM and then use the rest for the VMs.
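
The dom0 memory cap itself would be the usual hypervisor boot parameter 
plus the matching xend setting, roughly (values are examples):

# /boot/grub/grub.conf -- append to the "kernel /xen.gz-..." line:
dom0_mem=4096M

# /etc/xen/xend-config.sxp
(dom0-min-mem 4096)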

Does anyone have experience with this kind of mixed (physical/virtual) 
setup? Are there any known problems with this approach?

Regards,
   Dennis
___
CentOS-virt mailing list
CentOS-virt@centos.org
http://lists.centos.org/mailman/listinfo/centos-virt