[Bug 550409] Re: Slow disk IO for all guests with all disc modes and types.
People should be aware that cache=writeback should only be used if you do not care about the very real possibility of guest disk corruption should the guest or host fail unexpectedly. Given that there is no facility for shutting down guests cleanly on host shutdown, one should be very wary of risking guest data for anything important.

--
You received this bug notification because you are a member of Ubuntu Server Team, which is subscribed to libvirt in Ubuntu.
https://bugs.launchpad.net/bugs/550409
Title: Slow disk IO for all guests with all disc modes and types.
--
Ubuntu-server-bugs mailing list
Ubuntu-server-bugs@lists.ubuntu.com
Modify settings or unsubscribe at: https://lists.ubuntu.com/mailman/listinfo/ubuntu-server-bugs
Thanks müzso, this worked for me (kvm, ubuntu, xp 32 guest).
** Changed in: libvirt (Ubuntu)
   Status: Incomplete => Confirmed
I can confirm Fabian's tip about cache=writeback. I have one more tip for you: if you are using image files for virtual disks, then using ext3 as the filesystem for the host OS will greatly improve the disk I/O of your guest operating systems.

With a Lucid+ext4+KVM+WindowsXP guest combination I've measured a write throughput of 1-3 MB/s during a Windows XP installation. Awfully slow, regardless of the image format (qcow2 or raw) or the virtual I/O controller (IDE or virtio). However, using the same parameters with an ext3 host filesystem, I've measured 20-30 MB/s with raw images and an IDE controller (note that this was still during a standard Windows XP installation, not in a disk performance test). I've not yet tested qcow2 images and virtio, but I'm pretty certain that they will show a similar performance boost.

So either use cache=writeback, or switch back to ext3. Or, if you're familiar with ext4 tweaks, you might already know which option you have to disable to get back the ext3-like disk performance of qemu.

P.S.: one might argue whether this is a libvirt, a qemu or ... bug. Qemu will most probably not change its default cache policy, thus I see the best chance for fixing this problem in libvirt. It already handles tons of special cases where this or that option has to be applied if this or that environment is present. I guess libvirt could (silently) use cache=writeback for guests that have their booting hard disk in an image file that is on an ext4 filesystem.

P.S.2: virt-manager's (a GUI to libvirt) latest version (0.8.5-1) introduces the option to manually specify the cache mode for a disk in a guest. The current version also honors libvirt's cache mode for disks: if you manually edit libvirt's guest definition (e.g. using virsh's edit command) and specify a cache mode for a disk, virt-manager will keep it intact throughout further guest modifications.
The virt-manager change for this is here: http://hg.fedorahosted.org/hg/virt-manager/rev/0241648684d1
The virt-manager 0.8.5 changelog is here: http://hg.fedorahosted.org/hg/virt-manager/file/0b54eb67ce31/ChangeLog
** Tags added: cache ext3 ext4 libvirt qemu writeback
** Tags removed: amd64 apport-bug
I've tested this with Karmic+KVM+ext4 and the problem is present there as well. Using Karmic+KVM+ext3 resulted in the same performance boost as on Lucid. It'd be nice if others could comment on whether this problem is present with Lucid/Karmic+Xen+ext4 too. I guess it should be, because the problem seems to lie between the default cache=writethrough policy and an ext4 host OS filesystem, and changing either of them solves the issue. Btw. I was using qemu-kvm-0.12.3 on Lucid and qemu-kvm-0.11.0 on Karmic; this is probably not important for this bug.
I forgot to add: the cache policy is a QEMU thing and since Xen uses QEMU too, it should be affected as well.
With cache=writeback in a Lucid+KVM+libvirt+WindowsXP guest setup I've experienced near host-level I/O performance in the guest. Thus this makes use of a virtio driver unnecessary. Btw. for casual libvirt users, adding the cache=writeback option means:
1. stop the VM (in case it's running)
2. execute "virsh edit guestdomain" on the host OS (where guestdomain should be the name of your guest domain)
3. edit the domain XML by adding a <driver cache='writeback'/> element under the <disk ... device='disk'> element
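As a sketch of step 3, assuming a raw image attached over IDE (the file path and device names here are illustrative, not taken from this bug report), the disk section of the domain XML would end up looking something like this:

```xml
<disk type='file' device='disk'>
  <!-- cache='writeback' improves throughput, but risks guest data loss
       if the host crashes before dirty pages are flushed -->
  <driver name='qemu' type='raw' cache='writeback'/>
  <source file='/var/lib/libvirt/images/guest.img'/>
  <target dev='hda' bus='ide'/>
</disk>
```

After saving the edit, restart the guest for the new cache mode to take effect.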
If you want to read more about QEMU disk cache internals, I've found a short presentation on the topic here: The KVM/qemu storage stack (by Christoph Hellwig) www.linuxfoundation.jp/jp_uploads/JLS2009/jls09_hellwig.pdf
I want to start kvm without libvirt and with additional -drive options. Can someone please guide me on how I can avoid libvirt?
Successfully started kvm without libvirt and with the additional -drive options cache=writeback,aio=native. Copied 1.3GB to a Windows guest share over a gigabit network with a 41MB/s sustained transfer rate. Windows and Linux guests are running very smoothly again. Maybe libvirt should use those -drive options by default?!
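For reference, a minimal sketch of such a direct kvm invocation (the image path, memory size and network setup are illustrative assumptions, not the poster's actual command):

```shell
# Launch a guest directly with kvm, bypassing libvirt entirely.
# cache=writeback buffers guest writes in the host page cache;
# aio=native uses Linux native asynchronous I/O.
# WARNING: writeback risks guest data loss on an unclean host shutdown.
kvm -m 1024 \
  -drive file=/var/lib/libvirt/images/guest.img,if=virtio,cache=writeback,aio=native \
  -net nic,model=virtio -net user \
  -vnc 127.0.0.1:0
```

Guests started this way are invisible to libvirt and virt-manager, so management (shutdown, snapshots) has to be done by hand via the QEMU monitor.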
I second this: just incredibly slow I/O, resulting in nearly unusable KVM guests. Tested with Lucid guests and Windows 7 using virtio drivers. I am using a LUKS-encrypted LVM partition on top of a SATA md RAID. Changing I/O schedulers doesn't help.
I also just ran IOmeter on the Windows guest again, against both IDE and SCSI emulation, against guest discs hosted on the mdraid5 with LVM as well as on a single SATA2 7500RPM disc. Identical results against both targets: 4096B blocks = 0.75MBps, 16KB blocks = 1.0MBps, 32KB blocks = 4MBps, >=64KB blocks = 4MBps. All tests 50/50 read/write, and run again 50/50 sequential/random.
Could you describe which commands you've used to:
1. create the guests
2. start the guests
3. run the benchmarks

** Changed in: libvirt (Ubuntu)
   Status: New => Incomplete

** Changed in: libvirt (Ubuntu)
   Importance: Undecided => Low
1. create the guests

The first set of guests I created (both Windows and Linux) were created with virt-manager, using default settings for both. Windows guests defaulted to IDE emulation, and Linux guests defaulted to virtio emulation. I have since tried all possible emulation modes on both guest OSes. Without virt-manager, I have created Linux guests using vmbuilder:

vmbuilder kvm ubuntu --suite lucid --flavour virtual --arch i386 --libvirt qemu:///system --user=foo --pass=bar

2. start the guests

Guests are normally started with virt-manager, which results in the following command:

/usr/bin/kvm -S -M pc-0.12 -cpu qemu32 -enable-kvm -m 512 -smp 1 -name winxp -uuid eb0416e2-54d2-d4cb-936d-331edbe443c0 -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/winxp.monitor,server,nowait -monitor chardev:monitor -localtime -boot c -drive file=/datahauz/storage/software/Windows/EN_WINDOWS_XP_PRO_WITH_SP2.ISO,if=ide,media=cdrom,index=2 -drive file=/datahauz/storage/kvm/tinyxp.img,if=ide,index=0,boot=on,format=raw -net nic,macaddr=52:54:00:0d:11:23,vlan=0,name=nic.0 -net tap,fd=37,vlan=0,name=tap.0 -chardev pty,id=serial0 -serial chardev:serial0 -parallel none -usb -usbdevice tablet -vnc 127.0.0.1:0 -k en-us -vga cirrus

3. run the benchmarks

Several benchmarks were used. Under Windows I used IOmeter, and also timed file copies while observing both host and guest disc access. I did the following file copy tests:
1. Scp a 2GB file from the host's disc to the guest's disc.
2. Scp a 2GB file from the host's memory to the guest's disc.
3. Scp a 2GB file from the guest's disc to the host's disc.
4. Scp a 2GB file from the guest's disc to the host's memory.
Then I put the guest disc image in /dev/shm and tried the same tests again.

With the guest disc hosted on /dev/shm, I get slightly better disc speeds (15MBps Windows, 30MBps Linux). Another point to note: when scping to the Linux guest disc, I get about 20MBps, but to the guest memory (/dev/shm) I get 38MBps. Either way, Linux guests do better on both disc and memory throughput, but are still much slower (1/4) than the speed I would expect. My expectations are based on performance results under VMware, VirtualBox, and Xen, which are all quite similar (~40-70MBps disc throughput).
On Tue, Mar 30, 2010 at 03:42:56PM -, Ben Selinger wrote:

> 2. start the guests
> Guests are normally started with virt-manager, which results in the following command:
> /usr/bin/kvm -S -M pc-0.12 -cpu qemu32 -enable-kvm -m 512 -smp 1 -name winxp -uuid eb0416e2-54d2-d4cb-936d-331edbe443c0 -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/winxp.monitor,server,nowait -monitor chardev:monitor -localtime -boot c -drive file=/datahauz/storage/software/Windows/EN_WINDOWS_XP_PRO_WITH_SP2.ISO,if=ide,media=cdrom,index=2 -drive file=/datahauz/storage/kvm/tinyxp.img,if=ide,index=0,boot=on,format=raw -net nic,macaddr=52:54:00:0d:11:23,vlan=0,name=nic.0 -net tap,fd=37,vlan=0,name=tap.0 -chardev pty,id=serial0 -serial chardev:serial0 -parallel none -usb -usbdevice tablet -vnc 127.0.0.1:0 -k en-us -vga cirrus

Thanks. Could you also paste the kvm command used for the Linux guest?

> 3. run the benchmarks
> Several benchmarks were used. Under windows I used IOmeter, and also timed file copies while observing both host and guest disc access. I did the following file copy tests. 1. Scp a 2GB file from the hosts disc, to the guests disc. 2. Scp a 2GB file from the hosts memory to the guests disc. 3. Scp a 2GB file from the guests disc to the hosts disc. 4. Scp a 2GB file from the guests disc to the hosts memory. Then I put the guest disc image in /dev/shm and tried the same tests again. With the guest disc hosted on /dev/shm, I get slightly better disc speeds (15MBps windows, 30MBps Linux)

Using Scp means you're including the network stack in the benchmark. I'd recommend using a benchmark running in the guest only (like bonnie++) to test the IO layer. Could you also paste the command line used for benchmarking?

> Another point to note, is that when scping to the Linux guest disc, I get about 20MBps, but to the guest memory (/dev/shm) I get 38MBps. Either way, Linux guests do better on both disc and memory throughput, but are still much slower (1/4) the speed I would expect.
> My expectations are based on performance results under VMware, Virtualbox, and Xen, which are all quite similar (~40-70MBps disc throughput)

Have these benchmarks been run on the same hardware? With the same benchmarking tools?

--
Mathias Gug
Ubuntu Developer
http://www.ubuntu.com
This is the command used to start one of the affected Linux guests:

/usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 512 -smp 1 -name ubuntu -uuid 9de5914b-f448-cb8d-066f-ec51286c80c0 -chardev socket,id=monitor,path=/var/lib/libvirt/qemu/ubuntu.monitor,server,nowait -monitor chardev:monitor -boot c -drive file=/datahauz/kvm/ubuntu.qemu,if=virtio,index=0,boot=on,format=qcow2 -drive if=ide,media=cdrom,index=2 -net nic,macaddr=52:54:00:12:34:56,vlan=0,name=nic.0 -net tap,fd=39,vlan=0,name=tap.0 -chardev pty,id=serial0 -serial chardev:serial0 -parallel none -usb -vnc 127.0.0.1:0 -k en-us -vga cirrus

The SCP file transfers (Linux guest) were tested against both the guest disk and guest /dev/shm. Pushing a file through the network stack to /dev/shm was faster than pushing a file from the guest disc to shm. This at least tells us that the guest machine is capable of dealing with greater throughput than the disc; I just wanted to make sure the bottleneck was confined to the disc. FYI, the disc throughput on the Linux guests is poor, but it is at least usable (though not in a production environment). The throughput on Windows guests is 1/10th that speed.

For all tests, I verified results by at least two methods. Under Windows guests, I performed file copies to/from the guest disc, FTP transfers of large files to the guest disc, and used IOmeter. All three tests provided nearly identical results: ~2MBps R and ~2MBps W, +/-500KBps. Under Linux guests, I performed file copies to/from the guest disc, scp transfers of large files to/from the guest disc, and ran Bonnie++ (results follow). I also created 2GB files using 'dd' from both /dev/zero and /dev/mem.
Bonnie++ Linux Guest:

b...@webtest:~$ time dd if=/dev/zero of=1g bs=4096 count=250000
250000+0 records in
250000+0 records out
1024000000 bytes (1.0 GB) copied, 67.6536 s, 15.1 MB/s

real    1m7.663s
user    0m0.080s
sys     0m1.510s

And again to a raw device without a file system:

r...@webtest:~# time dd if=/dev/zero of=/dev/vdb bs=4096 count=400000
400000+0 records in
400000+0 records out
1638400000 bytes (1.6 GB) copied, 66.2002 s, 24.7 MB/s

real    1m6.208s
user    0m0.070s
sys     0m1.750s

b...@webtest:~$ bonnie++
Writing with putc()...done
Writing intelligently...done
Rewriting...done
Reading with getc()...done
Reading intelligently...done
start 'em...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.03c       ------Sequential Output------ --Sequential Input- --Random-
                    -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
webtest          1G 12754  17 11505   2 18583   3 65015  93 500392  53 +++++ +++
                    ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
webtest,1G,12754,17,11505,2,18583,3,65015,93,500392,53,+++++,+++,16,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++

Linux Host:

b...@octillion:~$ time dd if=/dev/zero of=1g bs=4096 count=250000
250000+0 records in
250000+0 records out
1024000000 bytes (1.0 GB) copied, 5.2139 s, 196 MB/s

real    0m5.412s
user    0m0.080s
sys     0m2.800s

b...@octillion:~$ bonnie++
Writing a byte at a time...done
Writing intelligently...done
Rewriting...done
Reading a byte at a time...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
octillion       10G   161  97 76547  18 42493  12  1976  96 114159  10 262.1   3
Latency               147ms    4926ms   16159ms   21546us     176ms     198ms
Version  1.96       ------Sequential Create------ --------Random Create--------
octillion           -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 15145  28 +++++ +++ 22692  29 18729  30 +++++ +++ 25217  30
Latency               695us     749us     577us     728us      60us     131us
** Attachment added: Dependencies.txt
   http://launchpadlibrarian.net/42298604/Dependencies.txt
** Description changed:

- Very slow disk throughput for all guest OS's. Windows seems to be the slowest at ~3MBps. Linux guests usually get ~18MBps. All tests performed on two physical hosts.

+ Very slow disk throughput for all guest OS's. Windows seems to be the slowest at ~3MBps. Linux guests usually get ~18MBps. All tests performed on two physical hosts.
+
+ Host1: Intel Core2 - 2.8GHz, 4GB RAM, 2 SATA2 7500RPM discs in an mdraid0.
+ Host2: Intel Core2 Quad - 2.4GHz, 5GB RAM, 6 SATA2 7500RPM discs. Mix of hw raid0/5, mdraid0/5, LVM, and LVM on mdraid5.
+
+ Tested on both Ubuntu 9.10 and Ubuntu Lucid Beta1. Tested disc IO by copying a large file (100MB). Tested against the following guests with the following average results:
+ -Windows XP x86 - 4MBps - Installation takes 8 hours.
+ -Ubuntu 9.10 server - <=20MBps
+ -Ubuntu 9.10 server built for virtualization (vmbuilder virt optimization) - <=20MBps
+
+ Tested on multiple hosts with multiple configurations. Same results on all physical hosts.
+ Guest disk types: raw/qcow2, preallocated/thin provisioning, IDE/SCSI/VirtIO emulation. (Virtio emul only on Linux guests)
+ Host disk types: ext3/4 on disc, ext3/4 on LVM on disc, ext3/4 on LVM on mdraid5 (3 7500RPM SATA2).
+ Average host disk throughput: ext3/4 on disc - 80MBps, ext3/4 on LVM on disc - 80MBps, ext3/4 on LVM on mdraid5 - 75MBps
+ Average guest disk throughput (WindowsXP): 3MBps on all hosts.
+ Average guest disk throughput (Linux): 18MBps on all hosts.
+
+ I also did a few non-standard tests to rule some things out.
+ 1. Hosted the WindowsXP guest image on /dev/shm. This yielded much better, but still very poor results of ~20MBps.
+ 2. Tested all guests on all hosts using both KVM and QEMU with the same results.
+ 3. Tested all possible guest disc emulation modes on all guests, with no variation in results.
+ 4. Tested the WindowsXP guest image on an mdraid0 of 2 7500RPM Sata2 discs (160MBps on host). Same result: 4MBps.
+
+ Possibly related:
+ -When creating guest images using virt-manager, on a FS under LVM, thin provisioning is ALWAYS the norm. Specifying a disc allocation the same size as the guest image always results in a thin-provisioned guest image. Creating images on a FS without LVM works correctly. As stated though, I've seen the same performance regardless of thin provisioning.
+
+ I'm not sure what package is causing the problem. I tested against both
+ kvm and qemu with the same result, so I suspect libvirt, but I don't
+ know enough yet about how the two (or three) interact. I believe my testing has
+ ruled out the possibility of any problems with the host FS/discs.

ProblemType: Bug
DistroRelease: Ubuntu 10.04
Package: libvirt-bin 0.7.5-5ubuntu15
ProcVersionSignature: Ubuntu 2.6.32-16.25-generic
Uname: Linux 2.6.32-16-generic x86_64
NonfreeKernelModules: nvidia
Architecture: amd64
Date: Sun Mar 28 12:13:13 2010
InstallationMedia: Ubuntu 10.04 Lucid Lynx - Beta amd64 (20100317.1)
ProcEnviron:
- LANGUAGE=
- PATH=(custom, no user)
- LANG=en_US.utf8
- SHELL=/bin/bash
+ LANGUAGE=
+ PATH=(custom, no user)
+ LANG=en_US.utf8
+ SHELL=/bin/bash
SourcePackage: libvirt