Getting started with virtualization
Using virtualization on Fedora

Fedora provides virtualization with both the KVM and Xen virtualization platforms. For information on other virtualization platforms, refer to http://virt.kernelnewbies.org/TechComparison. Xen supports para-virtualized guests as well as fully virtualized guests with para-virtualized drivers. Para-virtualization is faster than full virtualization, but it does not work with non-Linux operating systems or with Linux operating systems that lack the Xen kernel extensions. Xen fully virtualized guests are slower than KVM fully virtualized guests. KVM offers fast full virtualization, which requires virtualization instruction sets on your processor: KVM requires an x86 Intel or AMD processor with virtualization extensions enabled. Without these extensions, KVM falls back to QEMU software virtualization. Other virtualization products and packages are available but are not covered by this guide. For information on Xen, refer to http://wiki.xensource.com/xenwiki/ and the Fedora Xen pages. For information on KVM, refer to http://kvm.qumranet.com/kvmwiki. Fedora uses Xen version 3.0.x. Xen 3.0.0 was released in December 2005 and is incompatible with guests created using Xen 2.0.x versions.

Installing and configuring Fedora for virtualized guests

This section covers setting up Xen, KVM, or both on your system. After successfully completing this section you will be able to create virtualized guest operating systems.

System requirements

Virtualization on Fedora has a few common system requirements; in addition, the following subsections describe the requirements specific to para-virtualized and fully virtualized guests.
Additional requirements for para-virtualized guests
Para-virtualized guests require a CPU with the PAE (Physical Address Extension) flag. Verify that PAE is present:

$ grep pae /proc/cpuinfo
flags : fpu tsc msr pae mce cx8 apic mtrr mca cmov pat pse36 mmx fxsr sse syscall mmxext 3dnowext 3dnow up ts

The above output shows a CPU with the PAE extensions. If the command returns nothing, the CPU does not support para-virtualization.

Additional requirements for fully virtualized guests

Full virtualization with Xen or KVM requires a CPU with virtualization extensions, that is, the Intel VT or AMD-V extensions. Verify whether your Intel CPU has Intel VT support (the 'vmx' flag):

$ grep vmx /proc/cpuinfo
flags : fpu tsc msr pae mce cx8 apic mtrr mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm constant_tsc pni monitor ds_cpl vmx est tm2 cx16 xtpr lahf_lm

On some Intel-based systems (usually laptops) the Intel VT extensions are disabled in the BIOS. Enter the BIOS and enable Intel VT or Vanderpool Technology, which is usually located in the CPU options or Chipset menus.

Verify whether your AMD CPU has AMD-V support (the 'svm' flag):

$ grep svm /proc/cpuinfo
flags : fpu tsc msr pae mce cx8 apic mtrr mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt rdtscp lm 3dnowext 3dnow pni cx16 lahf_lm cmp_legacy svm cr8_legacy

VIA Nano processors use the 'vmx' instruction set. Without hardware extensions you can still use QEMU software emulation for full virtualization, but software virtualization is far slower than virtualization using the Intel VT or AMD-V extensions. QEMU can also virtualize other processor architectures such as ARM or PowerPC.

Installing the virtualization packages

When installing Fedora, the virtualization packages can be installed by selecting Virtualization in the Base Group in the installer. For existing Fedora installations, QEMU, KVM, and other virtualization tools can be installed by running the following command:

su -c "yum groupinstall 'Virtualization'"

This installs the packages in the Virtualization group.

Introduction to virtualization with Fedora

Fedora supports multiple virtualization platforms, and different platforms require slightly different methods. When using KVM, domains on the local system are listed with virsh using the qemu:///system connection URI; with Xen, use the xen:/// URI. To verify that virtualization is enabled on the system, run the following command, where <URI> is a valid connection URI for your hypervisor:

$ su -c "virsh -c <URI> list"
 Name       ID   Mem(MiB)  VCPUs  State    Time(s)
 Domain-0   0    610       1      r-----   12492.1

The above output indicates that there is an active hypervisor. If virtualization is not enabled, an error similar to the following appears:

$ su -c "virsh -c <URI> list"
libvir: error : operation failed: xenProxyOpen
error: failed to connect to the hypervisor
error: no valid connection

If the above error appears, make sure that the hypervisor is actually running and reachable: on a KVM host the libvirtd daemon must be running, and on a Xen host the system must have been booted into the Xen kernel with the xend service started.
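On a KVM host, a quick additional check is to confirm that the KVM kernel modules are loaded and that the /dev/kvm device exists. A minimal sketch (the module is kvm_intel on Intel CPUs and kvm_amd on AMD CPUs):

$ lsmod | grep kvm
$ ls -l /dev/kvm

If the modules are not listed, they can be loaded manually, for example with su -c "modprobe kvm_intel" (or kvm_amd).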
Creating a Fedora guest

The installation of Fedora guests using anaconda is supported. The installation can be started on the command line with the virt-install program, or in the GUI with virt-manager.

Creating a Fedora guest with virt-install
su -c "/usr/sbin/virt-install" The following questions for the new guest will be presented.
These options can be passed as command line options, execute
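As a non-interactive alternative, the whole guest definition can be given on one command line. The following is only an illustrative sketch: the guest name, sizes, and ISO path are made up, and option names vary between virt-install versions, so check virt-install --help on your system:

su -c "virt-install --name demoguest --ram 1024 --file /var/lib/libvirt/images/demoguest.img --file-size 8 --cdrom /path/to/Fedora.iso --vnc"

Here --file-size is the disk size in gigabytes and --vnc requests a graphical console; once the installation starts, the process continues as described below.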
If graphics were enabled, a VNC window will open and present the graphical installer. If graphics were not enabled, a text installer will appear. Proceed with the Fedora installation.

Creating a Fedora guest with virt-manager

Start the GUI Virtual Machine Manager by selecting it from the "Applications --> System Tools" menu, or by running the following command:

su -c "virt-manager"

Enter the root password when prompted, then press the New button to start the guest creation wizard.
Remote management

The following remote management options are available with libvirt: tunnelling the connection over SSH, or encrypted and authenticated remote connections using TLS with x509 certificates.
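The SSH transport needs no extra configuration beyond root SSH access to the remote host. A minimal sketch, assuming a remote KVM host named host2.example.com (hypothetical):

su -c "virsh -c qemu+ssh://root@host2.example.com/system list"

The same connection URI can also be used from virt-manager.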
Guest system administration

When the installation of the guest operating system is complete, it can be managed using the graphical virt-manager program or on the command line with virsh.

Managing guests with virt-manager

Start the Virtual Machine Manager. Virtual Machine Manager is in the "Applications --> System Tools" menu, or execute:

su -c "virt-manager"

If you are not root, you will be prompted to enter the root password. Choose the guest you want to manage from the list and open it to view its console and details.
For further information about virt-manager, refer to the virt-manager documentation. Bugs in the virt-manager tool should be reported in Bugzilla against the virt-manager component.

Managing guests with virsh

The virsh command-line tool can be used to manage guests from a terminal.

To start a virtual machine:

su -c "virsh -c <URI> create <name of virtual machine>"

To list the virtual machines currently running:

su -c "virsh -c <URI> list"

To gracefully power off a guest:

su -c "virsh -c <URI> shutdown <virtual machine (name | id | uuid)>"

To save a snapshot of the machine to a file:

su -c "virsh -c <URI> save <virtual machine (name | id | uuid)> <filename>"

To restore a previously saved snapshot:

su -c "virsh -c <URI> restore <filename>"

To export the configuration file of a virtual machine:

su -c "virsh -c <URI> dumpxml <virtual machine (name | id | uuid)>"
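Building on the dumpxml command above, the exported XML can later be re-registered on the same or another host. A short sketch, assuming a KVM guest named demo (hypothetical) and the qemu:///system URI:

su -c "virsh -c qemu:///system dumpxml demo > demo.xml"
su -c "virsh -c qemu:///system define demo.xml"

The define command registers the guest as a persistent configuration without starting it; it can then be started with virsh start demo.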
For a complete list of commands available for use with virsh, run:

su -c "virsh help"

Or consult the manual page: man 1 virsh. Bugs in the virsh tool should be reported in Bugzilla against the libvirt component.

Managing guests with qemu-kvm

KVM virtual machines can also be managed on the command line using the 'qemu-kvm' command directly, without libvirt. See the qemu-kvm manual page for the available options.
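A minimal sketch of booting an existing disk image directly with qemu-kvm (the image path is hypothetical, and guests started this way are not visible to virsh or virt-manager):

su -c "qemu-kvm -m 1024 -hda /var/lib/libvirt/images/demoguest.img -vnc :1"

Connect a VNC viewer to display :1 on the host to see the guest's console.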
Troubleshooting virtualization

SELinux

The SELinux policy in Fedora has the necessary rules to allow the use of virtualization. The main caveat to be aware of is that any file-backed disk images need to be in the /var/lib/libvirt/images directory, or in a directory labelled with the appropriate SELinux context. Beginning with Fedora 11, virtual machines under SELinux are additionally isolated from each other with sVirt.
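If images must live somewhere else, the directory can be labelled for virtualization use instead of moving the files. A hedged sketch, assuming a custom /vm-images directory (the path is an example; virt_image_t is the type the Fedora policy uses for libvirt disk images, and semanage comes from the policycoreutils-python package):

su -c "semanage fcontext -a -t virt_image_t '/vm-images(/.*)?'"
su -c "restorecon -R -v /vm-images"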
Log files

The graphical interface, virt-manager, writes a log file in the user's home directory, and the libvirtd daemon keeps its own logs on the host. All QEMU command lines executed by libvirt, together with each guest's QEMU output, are logged per guest under /var/log/libvirt/qemu/.

There are two log files stored on the host system to assist with debugging Xen-related problems. The file /var/log/xen/xend.log holds the same information reported by the 'xm log' command. The second file, /var/log/xen/xend-debug.log, contains more detailed debug output from xend and the management tools. When reporting errors, always include the output from both files. If starting fully-virtualized domains (i.e. an unmodified guest OS), there are also per-guest qemu-dm log files in /var/log/xen/. Xen hypervisor logs can be seen by running the 'xm dmesg' command.

Serial console access for troubleshooting and management

Serial console access is useful for debugging kernel crashes, and it can be very helpful for remote management. Accessing the serial consoles of Xen kernels or virtualized guests is slightly different from the normal procedure.

Host serial console access

If the Xen kernel itself has died and the hypervisor has generated an error, there is no way to record the error persistently on the local host. A serial console lets you capture it on a remote host. The Xen host must be set up for serial console output, and a remote host must exist to capture it. For the console output, set the appropriate options in /etc/grub.conf:

title Fedora
    root (hd0,1)
    kernel /xen.gz-current.running.version com1=38400,8n1 sync_console
    module /vmlinuz-current.running.version ro root=LABEL=/ rhgb quiet console=ttyS0 console=tty pnpacpi=off
    module /initrd-current.running.version

This configures a 38400-bps serial console on com1 (i.e. /dev/ttyS0 on Linux).
The "sync_console" works around a problem that can cause hangs with
asynchronous hypervisor console output, and the "pnpacpi=off" works
around a problem that breaks input on serial console. "console=ttyS0
console=tty" means that kernel errors get logged both on the normal VGA
console and on serial console. Once that is done, install and set up ttywatch on the remote host that is connected to the serial line:

su -c "ttywatch --name myhost --port /dev/ttyS0"

This will log output from /dev/ttyS0 into the file /var/log/ttywatch/myhost.log.

Para-virtualized guest serial console access

A para-virtualized guest OS will automatically have a serial console configured and plumbed through to the Domain-0 OS. It can be accessed from the command line using:

su -c "virsh console <domain name>"

Alternatively, the graphical virt-manager can display the serial console.

Fully virtualized guest serial console access

A fully virtualized guest OS will also automatically have a serial console configured, but the guest kernel will not be configured to use it out of the box. To enable the guest console in a Linux fully virtualized guest, edit /etc/grub.conf in the guest and add 'console=ttyS0 console=tty0'. This ensures that all kernel messages are sent to the serial console as well as to the regular graphical console. The serial console can then be accessed in the same way as for para-virtualized guests:

su -c "virsh console <domain name>"

Alternatively, the graphical virt-manager can display the serial console.

Accessing data on guest disk images

There are two tools which can help greatly in accessing data within a guest disk image: lomount and kpartx.
su -c "lomount -t ext3 -diskimage /xen/images/fc5-file.img -partition 1 /mnt/boot" lomount only works with small disk images and cannot deal with LVM volumes, so for more complex cases, kpartx (from the device-mapper-multipath RPM) is preferred:
su -c "yum install device-mapper-multipath" su -c "kpartx -av /dev/xen/guest1" add map guest1p1 : 0 208782 linear /dev/xen/guest1 63 add map guest1p2 : 0 16563015 linear /dev/xen/guest1 208845 Note that this only works for block devices, not for images installed on regular files. To use file images, set up a loopback device for the file first: su -c "losetup -f" /dev/loop0 su -c "losetup /dev/loop0 /xen/images/fc5-file.img" su -c "kpartx -av /dev/loop0" add map loop0p1 : 0 208782 linear /dev/loop0 63 add map loop0p2 : 0 12370050 linear /dev/loop0 208845 In this case we have added an image formatted as a default Fedora install, so it has two partitions: one /boot, and one LVM volume containing everything else. They are accessible under /dev/mapper: su -c "ls -l /dev/mapper/ | grep guest1" brw-rw---- 1 root disk 253, 6 Jun 6 10:32 xen-guest1 brw-rw---- 1 root disk 253, 14 Jun 6 11:13 guest1p1 brw-rw---- 1 root disk 253, 15 Jun 6 11:13 guest1p2 su -c "mount /dev/mapper/guest1p1 /mnt/boot/" To access LVM volumes on the second partition, rescan LVM with su -c "kpartx -a /dev/xen/guest1" su -c "vgscan" Reading all physical volumes. This may take a while... Found volume group "VolGroup00" using metadata type lvm2 su -c "vgchange -ay VolGroup00" 2 logical volume(s) in volume group "VolGroup00" now active su -c "lvs" LV VG Attr LSize Origin Snap% Move Log Copy% LogVol00 VolGroup00 -wi-a- 5.06G LogVol01 VolGroup00 -wi-a- 800.00M su -c "mount /dev/VolGroup00/LogVol00 /mnt/" ... su -c "umount /mnt" su -c "vgchange -an VolGroup00" su -c "kpartx -d /dev/xen/guest1" Getting helpIf the Troubleshooting section above does not help you to solve your problem, check the list of existing virtualization bugs, and search the archives of the mailing lists in the resources section. If you believe your problem is a previously undiscovered bug, please report it to Bugzilla. Resources
References
Previous Fedora Virtualization Guides: