Re: Nic Bonding - throughput for guest

2014-10-24 Thread Brian Jackson
On Friday, October 24, 2014 02:37:28 PM Stefan Bauer wrote:
> Hi,
> 
>  
> please CC me - I'm not subscribed to this list.
> 
>  
> I'm looking to bond the two NICs on my KVM host (2 x 1GbE) to
> increase throughput for my guest.
> 
>  
> Currently my guest has an e1000 NIC attached.


That probably won't cut it. You'll want to go with virtio-net and vhost.
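A minimal sketch of that setup on the qemu command line (the tap device name and MAC are assumed examples, not from the thread):

```shell
# virtio-net guest NIC with the vhost-net in-kernel backend.
# tap0 is assumed to already be attached to the host bridge.
qemu-system-x86_64 -enable-kvm -m 2048 \
  -netdev tap,id=net0,ifname=tap0,script=no,downscript=no,vhost=on \
  -device virtio-net-pci,netdev=net0,mac=52:54:00:12:34:56 \
  guest-disk.img
```

Note that vhost=on requires the vhost_net kernel module on the host.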


> 
>  
> Do I have to tune anything else so my KVM guest can operate at 2 x 1 Gbit
> (to different clients)?


Nope. Just use a standard bridge. The reported link speed in the guest has 
nothing to do with the actual achievable speed. I've seen close to 20 Gbit/s 
guest-to-host with virtio+vhost.


> 
>  
> I'm using 802.3ad with hash_policy layer3+4 for bonding (this is tested and
> working with our switches).


Most bonding setups I've seen require pretty specific traffic patterns to 
actually get >1 Gbit; i.e., you're unlikely to get >1 Gbit between two hosts. 
You're going to need multiple hosts in the mix to get it to work.
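For reference, the mode the poster describes can be sketched with iproute2 (the interface names are assumed examples):

```shell
# LACP (802.3ad) bond hashing on layer3+4 headers, matching the
# poster's hash_policy; eth0/eth1/bond0 are assumed names.
ip link add bond0 type bond mode 802.3ad xmit_hash_policy layer3+4
ip link set eth0 down && ip link set eth0 master bond0
ip link set eth1 down && ip link set eth1 master bond0
ip link set bond0 up
# A bridge on top of bond0 then carries the guest's tap device.
```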


> 
>  
> Kind Regards.
> 
>  
> Stefan
> --
> To unsubscribe from this list: send the line "unsubscribe kvm" in
> the body of a message to majord...@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html



Re: New improvements on KVM

2014-06-24 Thread Brian Jackson



On 6/23/2014 10:09 PM, Oscar Garcia wrote:

Hello,

I am planning to start a PhD next year, and I would like to
spend part of my time studying the KVM code in order to suggest new
improvements.
I know that this mailing list is used by KVM experts, so I would
like to ask you: what improvements are needed in KVM in order to
advance in that direction?



For starters I'd check out the KVM TODO page on the wiki and the GSOC 
pages on the Qemu wiki. Aside from that, just hang out and try to pick 
up what you can. Realistically though (unless you just love hypervisor 
code), all the interesting work is going on in Qemu and 
libvirt/openstack/other management tools.


http://www.linux-kvm.org/page/TODO
http://wiki.qemu.org/Google_Summer_of_Code_2014
http://wiki.qemu.org/Google_Summer_of_Code_2013
http://wiki.qemu.org/Google_Summer_of_Code_201



Thank you

Oscar


Re: FreeBSD filesystem sharing

2014-04-09 Thread Brian Jackson
On Thu, 3 Apr 2014 16:04:44 +
Andre Goree  wrote:

> Hello list.  I wanted to ask if anyone has been able to make filesystem 
> mounting work under a FreeBSD guest?  For example, I've added the following 
> to the guest's xml config using 'virsh edit':
> 
> [libvirt <filesystem> XML snippet stripped by the list archive]
> 
> However, I don't know how to mount the above from within the FreeBSD guest.  
> What works on Linux guests does not work on the 
> FreeBSD guest:
> 
> root@freebsd9-test:~ # mount -t 9p -o trans=virtio,version=9p2000.L tag 
> /mnt/shared/
> mount: tag: Operation not supported by device


9p/virtio filesystem passthrough is only supported by Linux guests.
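For a Linux guest, by contrast, the whole path looks roughly like this (the share path and mount tag are assumed examples, since the original XML was lost):

```shell
# Host side: export a directory over virtio-9p; "hostshare" is the
# mount tag the guest refers to, /srv/share an assumed path.
qemu-system-x86_64 -enable-kvm -m 1024 \
  -fsdev local,id=fs0,path=/srv/share,security_model=mapped \
  -device virtio-9p-pci,fsdev=fs0,mount_tag=hostshare \
  guest-disk.img

# Linux guest side (this is the step that fails on FreeBSD):
mount -t 9p -o trans=virtio,version=9p2000.L hostshare /mnt/shared
```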


> 
> The host OS is Ubuntu 12.04 LTS.  Thanks in advance for any answers on
> this.


-- 
Brian Jackson 


Re: Qemu v1.7.1 & CentOS 6.4

2014-03-28 Thread Brian Jackson
On 03/28/2014 03:55 PM, Lane Eckley wrote:
> Hi Everyone,
>
> I am running into performance issues with Windows guest VMs in
> conjunction with the rather old version of qemu-kvm that is currently
> being shipped with RHEL 6.4, and as such I am looking to upgrade to the
> latest stable release of qemu (v1.7.1 if I am not mistaken).
>
> As it stands now I have been unsuccessful in locating a good
> guide/tutorial on how to correctly update (or remove & install) v1.7.1.
> Does anyone have a good tutorial on how to properly get it installed
> under CentOS 6 (or RHEL 6 in general)?
>
> The reason for the upgrade is a high context-switch rate while the
> Windows VMs are idle, causing the hypervisor machine to
> use a lot more CPU than it should. Based on my Google skills I
> have located several threads noting that upgrading to at least 0.12.4
> should help resolve the issue, but as it stands right now I really do
> not have any confirmation of this.


Red Hat has very likely backported bug fixes and probably most
performance improvements from 0.12 through 1.7. I'd suggest upgrading to the
latest RHEL release and trying to enable the hv-* options (hv-spinlocks,
hv-relaxed, hv-vapic, hv-time, or whatever is supported by RHEL's kvm).
IIRC, there is also a document/page in the RHEL docs that talks
specifically about Windows guest performance.
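As a sketch, the enlightenments are enabled on the qemu command line roughly like this (flag spelling varies by version; older builds use underscores such as hv_relaxed, and the spinlock retry count is an example value):

```shell
# Hyper-V enlightenments for a Windows guest; these noticeably
# reduce idle wakeups and context switches on Windows.
qemu-system-x86_64 -enable-kvm -m 4096 \
  -cpu host,hv-relaxed,hv-vapic,hv-time,hv-spinlocks=0x1fff \
  windows.img
```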


>
> As to CentOS, unfortunately the control panel system we are using
> currently limits us to rhel 6 based distros and as such swapping to a
> debian based distro or otherwise is not currently an option.
>
> Any advice & feedback would be very much appreciated.
>
> Thanks!
>
> -Lane


Re: lightscribe support

2014-02-15 Thread Brian Jackson
On 02/14/2014 03:29 PM, Nerijus Baliunas wrote:
> Hello,
>
> is it possible to support LightScribe in KVM? Currently the Windows VM sees a
> QEMU DVD-ROM ATA Device and the labeling software shows "No LightScribe Drives Found".


You could try virtio-scsi and SCSI passthrough. I believe there's
been a post on this in the past (more than likely on the qemu-devel
mailing list).
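A rough sketch of that approach (/dev/sg0 is an assumed example; find the drive's generic node with lsscsi -g on the host):

```shell
# Pass the host's optical drive through as a raw SCSI device so the
# guest sees the real hardware instead of QEMU's emulated DVD-ROM.
qemu-system-x86_64 -enable-kvm -m 2048 \
  -device virtio-scsi-pci,id=scsi0 \
  -drive if=none,id=cd0,file=/dev/sg0 \
  -device scsi-generic,bus=scsi0.0,drive=cd0 \
  windows.img
```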


>
> Regards,
> Nerijus
>


Re: [Qemu-devel] Speed disk virtio

2014-02-07 Thread Brian Jackson
On 02/04/2014 08:21 AM, XliN wrote:
> Good day. Virtio disk speed is very low. The guest system ("Windows
> Server 2008") has the latest drivers. The host system is CentOS 6.5.


What latest? There are a few different places to get drivers (the Fedora
site, a RHEL subscription, building them yourself, etc.). From the graphs, it
looks like the speed isn't too bad at times. But it's hard to tell with the
information you've given about your particular config. I mean, 50MB/s
isn't bad for a single rotating disk on raw storage. But we don't know
what kind of setup you have, since you didn't tell us.


>
> I have tried everything I could, but failed to increase the speed. And I
> have a database running there.


All? You should try being specific about what you've tried. How you are
running the guest. The underlying hardware. Command line options. Too
much detail is better than none (which is about what you've given us).


>
> Screenshots test speed drives
>
> http://itmages.ru/image/view/1471772/feec35c3
> http://itmages.ru/image/view/1471774/2b0baeae
> http://itmages.ru/image/view/1471785/9fffb8f5
>
> Thanks in advance. I have nowhere else to ask.


I'm not really sure what this test that you've run is. For all I know,
your results look spectacular.
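For results others can interpret, a common approach is an fio run against the disk in question (all parameters here are assumed examples):

```shell
# Sequential-write and random-read passes against a scratch file on
# the virtio disk; adjust --size and the path for the setup under test.
fio --name=seqwrite --rw=write --bs=1M --size=1G \
    --filename=testfile --direct=1
fio --name=randread --rw=randread --bs=4k --size=1G \
    --filename=testfile --direct=1 --runtime=60 --time_based
```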


Re: How to share filesystem

2013-09-25 Thread Brian Jackson

On Tuesday, September 24, 2013 3:38:39 AM CDT, Ross Boylan wrote:

I would like to have access to the same file system from the host and
the guest.  Can anyone recommend the best way to do this, considering
ease of use, safety (concurrent access from guest and host does not
corrupt) and performance?

For example, I would like to restore files from backup using the host,
but write to filesystems used by the guest.

I have previously used kvm mostly with disks that are based on LVM
logical volumes, e.g. -hda /dev/turtle/Squeeze00.  Since the LVs are
virtual disks, I can't just mount them in the host AFAIK.

Among the alternatives I can think of are using NFS and using NBD.
Maybe there's some kind of loopback device I could use on the disk image
to access it from the host.



I would suggest NFS/CIFS, generally speaking. 9p/virtio is an option, but 
NFS and CIFS are going to be far more stable, better tested, and faster.
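A minimal sketch of the NFS route (paths and addresses are assumed examples; 192.168.122.0/24 is libvirt's default NAT subnet):

```shell
# Host side: export a directory to the guest network, then reload.
echo '/srv/shared 192.168.122.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra

# Guest side: mount from the host (192.168.122.1 is the host's
# address on libvirt's default network).
mount -t nfs 192.168.122.1:/srv/shared /mnt/shared
```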





Host: Debian GNU/Linux wheezy, amd64 architecture, qemu-kvm 1.1.2
Guest: Debian GNU/Linux lenny i386.
Host processor is a recent i5 with good virtualization (flags: fpu vme
de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush
dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx rdtscp lm
constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc
aperfmperf pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3
cx16 xtpr pdcm pcid sse4_1 sse4_2 x2apic popcnt tsc_deadline_timer aes
xsave avx f16c rdrand lahf_lm ida arat epb xsaveopt pln pts dtherm
tpr_shadow vnmi flexpriority ept vpid fsgsbase smep erms)

Thanks.
Ross Boylan



Re: kernel panic

2013-09-17 Thread Brian Jackson

On Monday, September 16, 2013 9:57:26 PM CDT, zhang xintao wrote:

2.6.32-279.el6.x86_64 #1 SMP Thu Sep 12 12:59:17 CST 2013 x86_64
x86_64 x86_64 GNU/Linux



You should probably report this to your distribution.



=
PID: 2621   TASK: 881fe10d2aa0  CPU: 14  COMMAND: "qemu-kvm"
 #0 [880028307b00] machine_kexec at 8103281b
 #1 [880028307b60] crash_kexec at 810ba322
 #2 [880028307c30] panic at 814fce74
 #3 [880028307cb0] watchdog_overflow_callback at 810daf7d
 #4 [880028307cd0] __perf_event_overflow at 8110dbbd
 #5 [880028307d70] perf_event_overflow at 8110e174
 #6 [880028307d80] intel_pmu_handle_irq at 8101e976
 #7 [880028307e90] perf_event_nmi_handler at 81501519
 #8 [880028307ea0] notifier_call_chain at 81503065
 #9 [880028307ee0] atomic_notifier_call_chain at 815030ca
#10 [880028307ef0] notify_die at 81097d6e
#11 [880028307f20] do_nmi at 81500ce3
#12 [880028307f50] nmi at 815005f0
[exception RIP: _spin_lock+33]
RIP: 814ffe61  RSP: 8800283033d0  RFLAGS: 0097
RAX: 8dbb  RBX: 00016680  RCX: 00c1
RDX: 8db4  RSI: 880028303438  RDI: 880028336680
RBP: 8800283033d0   R8: 00c1   R9: 0005
R10:   R11: 0001  R12: 880bf2fd0080
R13: 880028303438  R14: 880028336680  R15: 0001
ORIG_RAX:   CS: 0010  SS: 0018
---  ---
#13 [8800283033d0] _spin_lock at 814ffe61
#14 [8800283033d8] task_rq_lock at 8105350d
#15 [880028303408] try_to_wake_up at 8105bfbc
#16 [880028303478] default_wake_function at 8105c372
#17 [880028303488] pollwake at 8118fbd6
#18 [8800283034c8] __wake_up_common at 8104e189
#19 [880028303518] __wake_up at 81053268
#20 [880028303558] tun_net_xmit at a0156495 [tun]
#21 [880028303588] dev_hard_start_xmit at 8143ad9c
#22 [8800283035e8] sch_direct_xmit at 814588ea
#23 [880028303638] __qdisc_run at 814589bb
#24 [880028303668] dev_queue_xmit at 8143f4c3
#25 [8800283036b8] br_dev_queue_push_xmit at a03796bc [bridge]
#26 [8800283036d8] br_nf_dev_queue_xmit at a037f378 [bridge]
#27 [8800283036e8] br_nf_post_routing at a037fe10 [bridge]
#28 [880028303738] nf_iterate at 814662b9
#29 [880028303788] nf_hook_slow at 81466474
#30 [880028303808] br_forward_finish at a0379733 [bridge]
#31 [880028303838] br_nf_forward_finish at a037f9b8 [bridge]
#32 [880028303878] br_nf_forward_ip at a0380ea8 [bridge]
#33 [8800283038d8] nf_iterate at 814662b9
#34 [880028303928] nf_hook_slow at 81466474
#35 [8800283039a8] __br_forward at a03797c2 [bridge]
#36 [8800283039d8] deliver_clone at a037939e [bridge]
#37 [880028303a08] br_flood at a03795b9 [bridge]
#38 [880028303a48] br_flood_forward at a0379625 [bridge]
#39 [880028303a58] br_handle_frame_finish at a037a7ae [bridge]
#40 [880028303aa8] br_nf_pre_routing_finish at a0380318 [bridge]
#41 [880028303b48] br_nf_pre_routing at a038088f [bridge]
#42 [880028303b98] nf_iterate at 814662b9
#43 [880028303be8] nf_hook_slow at 81466474
#44 [880028303c68] br_handle_frame at a037a95c [bridge]
#45 [880028303ca8] __netif_receive_skb at 8143a509
#46 [880028303d08] netif_receive_skb at 8143c708
#47 [880028303d48] napi_skb_finish at 8143c810
#48 [880028303d68] napi_gro_receive at 8143ed49
#49 [880028303d88] ixgbe_receive_skb at a00df9df [ixgbe]
#50 [880028303d98] ixgbe_poll at a00e0dda [ixgbe]
#51 [880028303e68] net_rx_action at 8143ee63
#52 [880028303ec8] __do_softirq at 81073b81
#53 [880028303f38] call_softirq at 8100c24c
#54 [880028303f50] do_softirq at 8100de85
#55 [880028303f70] irq_exit at 81073965
#56 [880028303f80] do_IRQ at 81505835
---  ---
#57 [881e47ce3a48] ret_from_intr at 8100ba53
[exception RIP: x86_emulate_insn+9030]
RIP: a01c69f6  RSP: 881e47ce3af8  RFLAGS: 0202
RAX:   RBX: 881e47ce3b88  RCX: 0010
RDX: 881c26cbc5c0  RSI: 0003  RDI: 881c26cbc5c0
RBP: 8100ba4e   R8: 0001   R9: a01b3bf0
R10:   R11:   R12: a01ae2c6
R13: 881e47ce3a98  R14:   R15: 881c26cbc5c0
ORIG_RAX: ff86  CS: 0010  SS: 0018
#58 [881e47ce3b90] x86_emulate_instruction at a01b47ab 

Re: KVM in HA active/active + fault-tolerant configuration

2013-08-21 Thread Brian Jackson

On Wednesday, August 21, 2013 3:49:09 PM CDT, g.da...@assyoma.it wrote:

On 2013-08-21 21:40, Brian Jackson wrote:
On Wednesday, August 21, 2013 6:02:31 AM CDT, g.da...@assyoma.it wrote: ...


Hi Brian,
thank you for your reply.

As I googled extensively without finding anything, I was 
prepared for a similar response.


Anyway, from what I understand, Qemu already uses a similar 
approach (tracking dirty memory pages) when live migrating 
virtual machines to another host.


So what is missing is the "glue code" between Qemu and 
KVM/libvirt stack, right?


Live migration isn't what you asked about (at least not from what I 
understood). Live migration is just moving a VM from one host to another. That 
is definitely supported by libvirt. Having a constantly running lock-step sync 
of guest state is what Qemu/KVM does not support. So with Qemu's current live 
migration abilities, if HostA dies, all its guests will have downtime while 
they are restarted on other hosts.




Thanks again.


 ...



Re: KVM in HA active/active + fault-tolerant configuration

2013-08-21 Thread Brian Jackson

On Wednesday, August 21, 2013 6:02:31 AM CDT, g.da...@assyoma.it wrote:

Hi all,
I have a question about Linux KVM HA cluster.

I understand that in an HA setup I can live migrate virtual 
machines between hosts that share the same storage (via various 
methods, e.g. DRBD). This enables us to migrate the VMs based on 
host load and performance.


My current understanding is that, with this setup, a host 
crash will cause the VMs to be restarted on another host.


However, I wonder if there is a method to have a fully 
fault-tolerant HA configuration, where by "fully 
fault-tolerant" I mean that a host crash (e.g. a power failure) 
will cause the VMs to be migrated to another host with no state 
change. In other words: is it possible to have an 
always-synchronized (both disk & memory) VM instance on another 
host, so that the migrated VM does not need to be restarted but 
only restored/unpaused? For disk data synchronization we can use 
shared storage (bypassing the problem) or something similar to 
DRBD, but what about memory?



You're looking for something that doesn't exist for KVM. There was a project 
for it once, called Kemari, but AFAIK it's been abandoned for a while.




Thank you,
regards.


Re: Network performance data

2013-06-27 Thread Brian Jackson

On Thursday, June 27, 2013 1:09:37 AM CDT, Bill Rich wrote:

Hello All,

I've run into a problem with getting network performance data on
Windows VMs running on KVM. When I check the network data in the
Windows task manager on the VM, it remains at zero, even if large
amounts of data are being transferred. This has been tested on Windows
Server 2008r2 using the standard Windows driver and the e1000 nic. I
searched the web and the bug reports specifically, but I didn't find
this issue mentioned. Is this expected behavior, or is there something
I can do to fix it?


Personally, I'd try a newer version of Qemu. There have been lots of fixes 
since 0.12 was released (almost 4 years ago). Barring that, you might want to 
seek support from your distribution.





Below is the info on the hypervisor the VM is running on:

OS: CentOS release 6.4
Kernel: 2.6.32-358.11.1.el6.x86_64
qemu-kvm: 0.12.1.2-3.209.el6.4.x86_64



P.S. In the future, it's sufficient to send the command line options used 
instead of the XML config file from libvirt.


Re: Migration route from Parallels on Mac for Windows images?

2013-06-27 Thread Brian Jackson

On Wednesday, June 26, 2013 8:25:54 PM CDT, Ken Roberts wrote:
Sorry for the user query, but I'm not finding expertise on the 
Linux mailing lists I belong to.  The web site says one-off user 
questions are OK.


I have a few VM images on Parallels 8 for Mac. I want them to 
be on KVM/Linux.


Some of the images are Linux, but the critical ones are a few 
types of Windows.  I don't want to trash my licenses.


I noticed that kvm-img has a parallels format option, and it 
seems to work while the conversion is going on.  I've tried 
kvm-img to convert to qcow2 and to raw; in both cases the image 
converts, but the disk is not bootable.  The only file 
kvm-img doesn't immediately fail on is the one that contains the 
data.



More details on "not bootable" would be nice. Do you get a blue screen? A 
SeaBIOS screen? You may need to prep the image before you convert it (google mergeide).
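The conversion step itself can be sketched like this (file names are assumed; kvm-img on RHEL-family systems is the same tool as qemu-img):

```shell
# Convert the Parallels data disk to qcow2 and sanity-check the
# result; mergeide prep inside Windows is typically needed first so
# it can boot from the new controller.
qemu-img convert -f parallels -O qcow2 harddisk.hds windows.qcow2
qemu-img info windows.qcow2
```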





The best answer to my problem is to find out how to make the disk bootable.

The next best answer is to find out if there is a reliable 
migration path, even if it means going to VMware first.


Also, if VMware is a necessary intermediate point, it would 
help to know which VMware format to use for best results.


I'm not a KVM expert; I've made some VMs on LVM and installed 
Linux on them with bridged networking, and that's about the extent 
of it.  For the record, that was insanely simple.


Thanks.





Re: KVM VM's facing public network

2013-01-29 Thread Brian Jackson
On Tue, 29 Jan 2013 17:15:54 -0500
"Hugo R. Hernandez-Mora"  wrote:

> Brian,
> thanks for taking the time to look into my problem.   I have set up my
> VMs using virt-manager, but here is the qemu/kvm process that is
> running for my client:
> 
> [root@kvm1 ~]# ps -efl | grep qemu
> 6 S qemu  3532 1  1  80   0 - 2834530 poll_s 11:38 ?
> 00:03:20 /usr/libexec/qemu-kvm -S -M rhel6.3.0 -enable-kvm -m 8192
> -smp 2,sockets=2,cores=1,threads=1 -name jacobi -uuid
> 740569a2-613f-ee1b-14fd-02772e28b211 -nodefconfig -nodefaults -chardev
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/jacobi.monitor,server,nowait
> -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc
> -no-shutdown -boot order=cd,menu=on -device
> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
> if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw -device
> ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 -drive
> file=/ifs/virt/vm3/jacobi.img,if=none,id=drive-virtio-disk0,format=raw,cache=none
> -device
> virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0
> -netdev tap,fd=24,id=hostnet0,vhost=on,vhostfd=25 -device
> virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:ea:44:67,bus=pci.0,addr=0x3
> -chardev pty,id=charserial0 -device
> isa-serial,chardev=charserial0,id=serial0 -device
> usb-tablet,id=input0 -vnc 127.0.0.1:0 -vga cirrus -device
> intel-hda,id=sound0,bus=pci.0,addr=0x4 -device
> hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0 -incoming fd:22
> -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6


Unfortunately, from that it's hard to tell what's actually connected to
what. Curse libvirt for that.


> 
> I'm setting up networking the standard way, assigning a
> static IP to iface eth0 (52:54:00:ea:44:67).  I have changed my
> firewall rules to use only the rule below, following the documentation
> and keeping in mind what you said about having the VM on the same
> network as the KVM host:
> 
> iptables -I FORWARD -m physdev --physdev-is-bridged -j ACCEPT
> 
> I'm not sure if the problem is port blocking on the network
> switch or a misconfiguration on my side.   Anyway, I have tried
> routing the VM via the same default gateway used by the KVM host, and
> using the KVM host itself as the gateway, but neither of these two
> options works in my case.


A "normal" bridge setup wouldn't require any iptables rules to work, so
why don't you try disabling all your iptables rules on the host and
guest, and setting the guest to use the same router as the host? See
what that gets you. Try pinging and tcpdumping at different points to
see where exactly things are failing.
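That step-by-step check can be sketched as (interface names follow the bridge layout described earlier in the thread; the tap name is assumed):

```shell
# While pinging a public address from the guest, watch where the
# packets stop on the host:
tcpdump -ni vnet0 icmp   # tap device feeding the guest (name assumed)
tcpdump -ni br1 icmp     # the bridge itself
tcpdump -ni eth1 icmp    # the physical uplink enslaved to br1
```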


> 
> Thoughts?
> 
> Regards,
> -Hugo



Re: KVM VM's facing public network

2013-01-29 Thread Brian Jackson
On Tue, 29 Jan 2013 12:53:21 -0500
Hugo R Hernández-Mora  wrote:

> Hello there,
> we are experiencing a problem configuring KVM bridged networking
> to share a public network interface between the KVM host and the VMs.
> Currently, our KVM server has three network interfaces, set up as follows:
> 
> * eth0: 192.168.10.101/23 (main interface for public network - no
> bridge)
> * eth1 <--> br1: 192.168.10.201/23 (KVM VMs connected to public
> network)
> * eth3 <--> br3: 10.7.10.201/23 (KVM VMs connected to LAN)
> 
> We have followed instructions from Red Hat as well as from
> different web sites, and we are not able to get the VMs access
> to/from the public network. Here is a more detailed configuration
> of the KVM host:
> 
> ifcfg-eth0
> DEVICE=eth0
> ONBOOT=yes
> HWADDR=AC:80:B2:14:C5:EE
> BOOTPROTO=none
> IPADDR=192.168.10.101
> NETMASK=255.255.254.0
> 
> ifcfg-eth1
> DEVICE=eth1
> ONBOOT=yes
> HWADDR=AC:80:B2:4E:D3:28
> BRIDGE=br1
> 
> ifcfg-br1
> DEVICE=br1
> ONBOOT=yes
> TYPE=Bridge
> BOOTPROTO=none
> IPADDR=192.168.10.201
> NETMASK=255.255.254.0
> STP=off
> DELAY=0
> 
> ifcfg-eth3
> DEVICE=eth3
> ONBOOT=yes
> HWADDR=AC:80:B2:4E:D3:2A
> BRIDGE=br3
> 
> ifcfg-br3
> DEVICE=br3
> ONBOOT=yes
> TYPE=Bridge
> BOOTPROTO=static
> IPADDR=10.7.10.201
> NETMASK=255.255.254.0
> STP=off
> DELAY=0
> 
> network
> NETWORKING=yes
> HOSTNAME=kvm1.public-lan.net
> GATEWAY=192.168.10.1
> 
> For iptables/routing, we have followed instructions as explained on 
> http://www.linux-kvm.org/page/Networking#public_bridge
> *nat
> :POSTROUTING ACCEPT [0:0]
> -A POSTROUTING --out-interface br1 -j MASQUERADE
> COMMIT
> :FORWARD ACCEPT [0:0]
> -A FORWARD --in-interface br1 -j ACCEPT
> 
> Hostside:
> Allow IPv4 forwarding and add route to client (could be put in a
> script 
> - route has to be added after the client has started):
> sysctl -w net.ipv4.ip_forward=1 # allow forwarding of IPv4
> route add -host  dev  # add route to the
> client
> 
> Clientside:
> Default GW of the client is of course then the host ( has
> to be in same subnet as  ...):
> route add default gw 


What do the client configs look like? What network options are you
passing to qemu/kvm (or just the whole command line)? If your guests
and host are in the same subnet, why are you masquerading/routing? Why
not just use standard bridging?


> 
> But it doesn't seem to work. My assumption is that the problem is
> related to a wrong firewall setting in iptables. Could you
> please advise? Your help will be greatly appreciated!
> 
> We are running Scientific Linux 6.2 on the KVM server as well as on
> the VMs. There is no issue accessing the LAN between VMs;
> only the public network is unreachable.
> 
> Thanks in advance,
> -Hugo


Re: Status of Fault Tolerance feature?

2013-01-29 Thread Brian Jackson
On Tue, 29 Jan 2013 15:16:13 +0200
Andres Toomsalu  wrote:

> But is there any other projects in (planned) development with the
> same goal(s)?


I haven't heard of any. But then again, a lot of things get developed
in secret and then dumped on the community.


> I'm just really puzzled that, while QEMU/KVM is a fairly
> mature solution already, no true fault-tolerance/HA solutions exist
> (I'm aware of stateless HA solutions with RHCS and similar stacks, but
> that's hardly "true" HA), and, if I get it correctly, there are no
> real plans/development in that direction near-term either?


Most people that I know that have tried similar solutions on other
products give up on it because the performance is abysmal. It's
generally faster and better tested to do this stuff at the application
layer.


> 
> Kind regards,



Re: Status of Fault Tolerance feature?

2013-01-28 Thread Brian Jackson
On Mon, 21 Jan 2013 14:24:12 +0200
Andres Toomsalu  wrote:

> Hi,
> 
> Could anyone shed a light what happened to Kemari project and are
> there any upcoming development planned in order to provide continous
> non-blocking VM checkpointing and VM HA with state replication?
> 
> Kind regards,

The project hasn't been actively developed in years and there has been
no public information about it in probably longer. So state is
"unknown".


Re: I/O errors in guest OS after repeated migration

2012-10-17 Thread Brian Jackson
On Wednesday, October 17, 2012 10:45:14 AM Guido Winkelmann wrote:
> Am Dienstag, 16. Oktober 2012, 12:44:27 schrieb Brian Jackson:
> > On Tuesday, October 16, 2012 11:33:44 AM Guido Winkelmann wrote:
> > > The commandline, as generated by libvirtd, looks like this:
> > > 
> > > LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
> > > QEMU_AUDIO_DRV=none /usr/bin/qemu-kvm -S -M pc-0.15 -enable-kvm -m 1024
> > > -smp 1,sockets=1,cores=1,threads=1 -name migratetest2 -uuid
> > > ddbf11e9-387e-902b-4849-8c3067dc42a2 -nodefconfig -nodefaults -chardev
> > > socket,id=charmonitor,path=/var/lib/libvirt/qemu/migratetest2.monitor,server,nowait
> > > -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc
> > > -no-reboot -no-shutdown -device
> > > piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
> > > file=/data/migratetest2_system,if=none,id=drive-virtio-disk0,format=qcow2,cache=none
> > > -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1
> > > -drive file=/data/migratetest2_data-1,if=none,id=drive-virtio-disk1,format=qcow2,cache=none
> > > -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk1,id=virtio-disk1
> > > -netdev tap,fd=27,id=hostnet0,vhost=on,vhostfd=28
> > > -device virtio-net-pci,netdev=hostnet0,id=net0,mac=02:00:00:00:00:0c,bus=pci.0,addr=0x3
> > > -vnc 127.0.0.1:2,password -k de -vga cirrus -incoming tcp:0.0.0.0:49153
> > > -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6
> > 
> > I see qcow2 in there. Live migration of qcow2 was a new feature in 1.0.
> > Have you tried other formats or different qemu/kvm versions?
> 
> I tried the same thing with a raw image file instead of qcow2, and the
> problem still happens. From the /var/log/messages of the guest:
> 
> Oct 17 17:10:34 localhost sshd[2368]: nss_ldap: could not search LDAP
> server - Server is unavailable
> Oct 17 17:10:39 localhost kernel: [  126.800075] eth0: no IPv6 routers
> present Oct 17 17:10:52 localhost kernel: [  140.335783] Clocksource tsc
> unstable (delta = -70265501 ns)
> Oct 17 17:12:04 localhost kernel: Buffer I/O error on device vda1, logical block 1858765
> Oct 17 17:12:04 localhost kernel: [  212.070584] Buffer I/O error on device
> vda1, logical block 1858766
> Oct 17 17:12:04 localhost kernel: [  212.070587] Buffer I/O error on device
> vda1, logical block 1858767
> Oct 17 17:12:04 localhost kernel: [  212.070589] Buffer I/O error on device
> vda1, logical block 1858768
> Oct 17 17:12:04 localhost kernel: [  212.070592] Buffer I/O error on device
> vda1, logical block 1858769
> Oct 17 17:12:04 localhost kernel: [  212.070595] Buffer I/O error on device
> vda1, logical block 1858770
> Oct 17 17:12:04 localhost kernel: [  212.070597] Buffer I/O error on device
> vda1, logical block 1858771
> Oct 17 17:12:04 localhost kernel: [  212.070600] Buffer I/O error on device
> vda1, logical block 1858772
> Oct 17 17:12:04 localhost kernel: [  212.070602] Buffer I/O error on device
> vda1, logical block 1858773
> Oct 17 17:12:04 localhost kernel: [  212.070605] Buffer I/O error on device
> vda1, logical block 1858774
> Oct 17 17:12:04 localhost kernel: [  212.070607] Buffer I/O error on device
> vda1, logical block 1858775
> Oct 17 17:12:04 localhost kernel: [  212.070610] Buffer I/O error on device
> vda1, logical block 1858776
> Oct 17 17:12:04 localhost kernel: [  212.070612] Buffer I/O error on device
> vda1, logical block 1858777
> Oct 17 17:12:04 localhost kernel: [  212.070615] Buffer I/O error on device
> vda1, logical block 1858778
> Oct 17 17:12:04 localhost kernel: [  212.070617] Buffer I/O error on device
> vda1, logical block 1858779
> 
> (I was writing a large file at the time, to make sure I actually catch I/O
> errors as they happen)


What about newer versions of qemu/kvm? If a newer version works, your next 
task is going to be to git bisect the fix, or to file a bug with your distro, 
which is shipping an ancient version of qemu/kvm.


> 
>   Guido


Re: I/O errors in guest OS after repeated migration

2012-10-17 Thread Brian Jackson
On Wednesday, October 17, 2012 06:54:00 AM Guido Winkelmann wrote:
> Am Dienstag, 16. Oktober 2012, 12:44:27 schrieb Brian Jackson:
> > On Tuesday, October 16, 2012 11:33:44 AM Guido Winkelmann wrote:
> [...]
> 
> > > The commandline, as generated by libvirtd, looks like this:
> > > 
> > > LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
> > > QEMU_AUDIO_DRV=none /usr/bin/qemu-kvm -S -M pc-0.15 -enable-kvm -m 1024
> > > -smp 1,sockets=1,cores=1,threads=1 -name migratetest2 -uuid
> > > ddbf11e9-387e-902b-4849-8c3067dc42a2 -nodefconfig -nodefaults -chardev
> > > socket,id=charmonitor,path=/var/lib/libvirt/qemu/migratetest2.monitor,s
> > > erv e
> > > r,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc
> > > -no-reboot -no- shutdown -device
> > > piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
> > > file=/data/migratetest2_system,if=none,id=drive-virtio-
> > > disk0,format=qcow2,cache=none -device virtio-blk-
> > > pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-
> > > disk0,bootindex=1 -drive
> > > file=/data/migratetest2_data-1,if=none,id=drive-
> > > virtio-disk1,format=qcow2,cache=none -device virtio-blk-
> > > pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk1,id=virtio-disk
> > > 1 - netdev tap,fd=27,id=hostnet0,vhost=on,vhostfd=28 -device
> > > virtio-net-
> > > pci,netdev=hostnet0,id=net0,mac=02:00:00:00:00:0c,bus=pci.0,addr=0x3
> > > -vnc 127.0.0.1:2,password -k de -vga cirrus -incoming
> > > tcp:0.0.0.0:49153 -device
> > > virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6
> > 
> > I see qcow2 in there. Live migration of qcow2 was a new feature in 1.0.
> > Have you tried other formats or different qemu/kvm versions?
> 
> Are you sure about that? Because I'm fairly certain I have been using live
> migration since at least 0.14, if not 0.13, and I have always been using
> qcow2 as the image format for the disks...
> 
> I can still try with other image formats, though.


Yes, see the release notes for 1.0. It may have worked by chance before that, 
but it wasn't guaranteed to work; there was no blacklisting feature back then, 
like there is now, to stop it.


> 
>   Guido


Re: I/O errors in guest OS after repeated migration

2012-10-16 Thread Brian Jackson
On Tuesday, October 16, 2012 11:33:44 AM Guido Winkelmann wrote:
> Hi,
> 
> I'm experiencing I/O errors in a guest machine after migrating it from one
> host to another, and then back to the original host. After doing this, I
> find the following in the dmesg output of the guest machine:
> 
> [  345.390543] end_request: I/O error, dev vda, sector 273871
> [  345.391125] end_request: I/O error, dev vda, sector 273871
> [  345.391705] end_request: I/O error, dev vda, sector 273871
> [  345.394796] end_request: I/O error, dev vda, sector 1745983
> [  345.396005] end_request: I/O error, dev vda, sector 1745983
> [  346.083160] end_request: I/O error, dev vdb, sector 54528008
> [  346.083179] Buffer I/O error on device dm-0, logical block 6815745
> [  346.083181] lost page write due to I/O error on dm-0
> [  346.083193] end_request: I/O error, dev vdb, sector 54528264
> [  346.083195] Buffer I/O error on device dm-0, logical block 6815777
> [  346.083197] lost page write due to I/O error on dm-0
> [  346.083201] end_request: I/O error, dev vdb, sector 2056
> [  346.083204] Buffer I/O error on device dm-0, logical block 1
> [  346.083206] lost page write due to I/O error on dm-0
> [  346.083209] Buffer I/O error on device dm-0, logical block 2
> [  346.083211] lost page write due to I/O error on dm-0
> [  346.083215] end_request: I/O error, dev vdb, sector 10248
> [  346.083217] Buffer I/O error on device dm-0, logical block 1025
> [  346.083219] lost page write due to I/O error on dm-0
> [  346.091499] end_request: I/O error, dev vdb, sector 76240
> [  346.091506] Buffer I/O error on device dm-0, logical block 9274
> [  346.091508] lost page write due to I/O error on dm-0
> [  346.091572] JBD2: Detected IO errors while flushing file data on dm-0-8
> [  346.091915] end_request: I/O error, dev vdb, sector 38017360
> [  346.091956] Aborting journal on device dm-0-8.
> [  346.092557] end_request: I/O error, dev vdb, sector 38012928
> [  346.092566] Buffer I/O error on device dm-0, logical block 4751360
> [  346.092569] lost page write due to I/O error on dm-0
> [  346.092624] JBD2: I/O error detected when updating journal superblock
> for dm-0-8.
> [  346.100940] end_request: I/O error, dev vdb, sector 2048
> [  346.100948] Buffer I/O error on device dm-0, logical block 0
> [  346.100952] lost page write due to I/O error on dm-0
> [  346.101027] EXT4-fs error (device dm-0): ext4_journal_start_sb:327:
> Detected aborted journal
> [  346.101038] EXT4-fs (dm-0): Remounting filesystem read-only
> [  346.101051] EXT4-fs (dm-0): previous I/O error to superblock detected
> [  346.101836] end_request: I/O error, dev vdb, sector 2048
> [  346.101845] Buffer I/O error on device dm-0, logical block 0
> [  346.101849] lost page write due to I/O error on dm-0
> [  373.006680] end_request: I/O error, dev vda, sector 624319
> [  373.007543] end_request: I/O error, dev vda, sector 624319
> [  373.008327] end_request: I/O error, dev vda, sector 624319
> [  374.886674] end_request: I/O error, dev vda, sector 624319
> [  374.887563] end_request: I/O error, dev vda, sector 624319
> 
> The hosts are both running Fedora 17 with qemu-kvm-1.0.1-1.fc17.x86_64. The
> guest machine has been started and migrated using libvirt (0.9.11). Kernel
> version is 3.5.6-1.fc17.x86_64 on the first host and 3.5.5-2.fc17.x86_64 on
> the second.
> The guest machine is on Kernel 3.3.8 and uses ext4 on its disks.
> 
> The commandline, as generated by libvirtd, looks like this:
> 
> LC_ALL=C PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin
> QEMU_AUDIO_DRV=none /usr/bin/qemu-kvm -S -M pc-0.15 -enable-kvm -m 1024
> -smp 1,sockets=1,cores=1,threads=1 -name migratetest2 -uuid
> ddbf11e9-387e-902b-4849-8c3067dc42a2 -nodefconfig -nodefaults -chardev
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/migratetest2.monitor,serve
> r,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc
> -no-reboot -no- shutdown -device
> piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive
> file=/data/migratetest2_system,if=none,id=drive-virtio-
> disk0,format=qcow2,cache=none -device virtio-blk-
> pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-
> disk0,bootindex=1 -drive file=/data/migratetest2_data-1,if=none,id=drive-
> virtio-disk1,format=qcow2,cache=none -device virtio-blk-
> pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk1,id=virtio-disk1 -
> netdev tap,fd=27,id=hostnet0,vhost=on,vhostfd=28 -device virtio-net-
> pci,netdev=hostnet0,id=net0,mac=02:00:00:00:00:0c,bus=pci.0,addr=0x3 -vnc
> 127.0.0.1:2,password -k de -vga cirrus -incoming tcp:0.0.0.0:49153 -device
> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x6


I see qcow2 in there. Live migration of qcow2 was a new feature in 1.0. Have 
you tried other formats or different qemu/kvm versions?


> 
> The second host has an ext4 filesystem mounted under /data, which it
> exports using NFSv3 over TCP to the first host, which also mounts it under
> /data.
> 
> So far, the problem

Re: Error: Not supported image type "twoGbMaxExtentFlat". - problems using virt-convert

2012-09-04 Thread Brian Jackson
On Tuesday, September 04, 2012 11:26:49 AM Lentes, Bernd wrote:
> Hi,
> 
> i want to convert a sles 11 sp2 64bit system (running on VMWare Server
> 1.09) to libvirt format. Host OS is SLES 11 SP2 64bit. I tried
> "virt-convert --os-variant=sles11 sles_11_vmx/ sles_11_kvm/" .
> 
> This is what i got:
> Generating output in 'virt-image' format to sles_11_kvm//
> Converting disk 'tomcat_6.vmdk' to type raw...
> ERRORCouldn't convert disks: Disk conversion failed with exit status 1:
> VMDK: Not supported image type "twoGbMaxExtentFlat". qemu-img: Could not
> open '/var/lib/kvm/images/sles_11_vmx/tomcat_6.vmdk': Operation not
> supported qemu-img: Could not open
> '/var/lib/kvm/images/sles_11_vmx/tomcat_6.vmdk'
> 
> It seems that virt-convert does not like the 2GB files form VMWare Server.
> 
> How can i convert my system from VMWare Server 1.09 to libvirt format ?

I don't know what "libvirt format" is, but to get a raw file, if it's Linux:
vmware-vdiskmanager -r <source.vmdk> -t 2 <target.vmdk>

At least I think that should work... The result might still be a vmdk file, but 
it should then work with qemu-img. You might try -t 0 if you are short on space.
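Assuming the usual two-step pipeline (the file names below are placeholders taken from this thread, and the exact flags should be double-checked against your vmware-vdiskmanager and qemu-img versions), here is a sketch that just prints the commands to run:

```shell
# Print the two-step conversion: merge the 2GB extents into one vmdk,
# then convert the monolithic vmdk to raw with qemu-img.
src="tomcat_6.vmdk"           # split twoGbMaxExtentFlat descriptor
merged="tomcat_6-merged.vmdk" # -t 2: preallocated single-file vmdk
raw="tomcat_6.img"
cmds=$(cat <<EOF
vmware-vdiskmanager -r $src -t 2 $merged
qemu-img convert -f vmdk -O raw $merged $raw
EOF
)
printf '%s\n' "$cmds"
```

The raw image can then be pointed at directly from a qemu command line or a libvirt disk definition.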


> 
> 
> Thanks for any hints.
> 
> 
> Bernd
> 
> --
> Bernd Lentes
> 
> Systemadministration
> Institut für Entwicklungsgenetik
> Gebäude 35.34 - Raum 208
> HelmholtzZentrum münchen
> bernd.len...@helmholtz-muenchen.de
> phone: +49 89 3187 1241
> fax:   +49 89 3187 3826
> http://www.helmholtz-muenchen.de/idg
> 
> Wir sollten nicht den Tod fürchten, sondern
> das schlechte Leben
> 
> Helmholtz Zentrum München
> Deutsches Forschungszentrum für Gesundheit und Umwelt (GmbH)
> Ingolstädter Landstr. 1
> 85764 Neuherberg
> www.helmholtz-muenchen.de
> Aufsichtsratsvorsitzende: MinDir´in Bärbel Brumme-Bothe
> Geschäftsführer: Prof. Dr. Günther Wess und Dr. Nikolaus Blum
> Registergericht: Amtsgericht München HRB 6466
> USt-IdNr: DE 129521671


Re: Question about host CPU usage/allocation by KVM

2012-04-20 Thread Brian Jackson
On Thu, 19 Apr 2012 13:01:54 -0500, Alexander Lyakas wrote:



Hi Stuart,
I have been doing some experiments, and I noticed that there are
additional QEMU threads, besides the ones reported by "info cpus"
command. In particular, the main QEMU thread (the one whose LWP is the
same as its PID), also consumes significant CPU time. Is this
expected?


The extra threads are for various things: the VNC server if you are using 
it, mimicking AIO in certain situations, etc. The main thread also does a 
lot of the device emulation work (console, network, serial, block, etc.), 
so seeing it consume significant CPU time is expected.
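Since every one of these threads is an ordinary Linux thread, the standard affinity tools apply to them directly. A small sketch, pinning the current shell as a stand-in for a vcpu thread id taken from "info cpus" (the choice of CPU 0 is arbitrary):

```shell
# Pin a thread to host CPU 0, then read the affinity back. With a real
# guest you would substitute the vcpu thread id from the QEMU monitor.
tid=$$
taskset -pc 0 "$tid"   # set: prints the old and new affinity lists
taskset -pc "$tid"     # get: prints "pid ...'s current affinity list: 0"
```

The same applies to the main thread (whose TID equals the PID), so its device-emulation work can be pinned away from the vcpu threads if they compete.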





Alex.


On Wed, Apr 18, 2012 at 8:24 PM, Stuart Yoder  wrote:

On Tue, Apr 17, 2012 at 4:54 PM, Alexander Lyakas
 wrote:

Greetings everybody,

Can anybody please point me to code/documentation regarding the
following questions I have:

- What does it actually mean using "-smp N" option, in terms of CPU
sharing between the host and the guest?
- How are guest CPUs mapped to host CPUs (if at all)?


Each guest CPU (vcpu) corresponds to a QEMU thread.
You can see the thread ids in QEMU with "info cpus" in the
QEMU monitor.

Since a vcpu is a thread you can apply standard Linux
mechanisms to managing those threads-- CPU affinity, etc.

Stuart



Re: AESNI and guest hosts

2012-02-14 Thread Brian Jackson
On Tuesday, February 14, 2012 03:31:10 AM Ryan Brown wrote:
> Sorry for being a noob here, Any clues with this?, anyone ...
> 
> On Mon, Feb 13, 2012 at 2:05 AM, Ryan Brown  wrote:
> > Host/KVM server is running linux 3.2.4 (Debian wheezy), and guest
> > kernel is running 3.2.5. The cpu is an E3-1230, but for some reason
> > its not able to supply the guest with aesni. Is there a config option
> > or is there something we're missing?



I don't think it's supported to pass that functionality to the guest.



> > 
> >
> > [host CPU XML stripped by the list archive; only "x86_64",
> > "Westmere", and "Intel" survive of the full definition]
> > 
> > Guest:
> > [root@fanboy:~]# cat /proc/cpuinfo
> > processor   : 0
> > vendor_id   : GenuineIntel
> > cpu family  : 6
> > model   : 2
> > model name  : QEMU Virtual CPU version 1.0
> > stepping: 3
> > microcode   : 0x1
> > cpu MHz : 3192.748
> > cache size  : 4096 KB
> > fdiv_bug: no
> > hlt_bug : no
> > f00f_bug: no
> > coma_bug: no
> > fpu : yes
> > fpu_exception   : yes
> > cpuid level : 4
> > wp  : yes
> > flags   : fpu de pse tsc msr pae mce cx8 apic mtrr pge mca
> > cmov pse36 clflush mmx fxsr sse sse2 syscall nx lm pni cx16 popcnt
> > hypervisor lahf_lm
> > bogomips: 6385.49
> > clflush size: 64
> > cache_alignment : 64
> > address sizes   : 40 bits physical, 48 bits virtual
> > power management:
> > 
> > processor   : 1
> > vendor_id   : GenuineIntel
> > cpu family  : 6
> > model   : 2
> > model name  : QEMU Virtual CPU version 1.0
> > stepping: 3
> > microcode   : 0x1
> > cpu MHz : 3192.748
> > cache size  : 4096 KB
> > fdiv_bug: no
> > hlt_bug : no
> > f00f_bug: no
> > coma_bug: no
> > fpu : yes
> > fpu_exception   : yes
> > cpuid level : 4
> > wp  : yes
> > flags   : fpu de pse tsc msr pae mce cx8 apic mtrr pge mca
> > cmov pse36 clflush mmx fxsr sse sse2 syscall nx lm pni cx16 popcnt
> > hypervisor lahf_lm
> > bogomips: 6385.49
> > clflush size: 64
> > cache_alignment : 64
> > address sizes   : 40 bits physical, 48 bits virtual
> 
> > power management:


Re: performance trouble

2012-01-30 Thread Brian Jackson
On Monday, January 30, 2012 09:36:55 AM David Cure wrote:
> Le Mon, Jan 23, 2012 at 09:28:37AM +0100, David Cure ecrivait :
> > I use several kvm box, and no problem at all except for 1
> > 
> > application that have bad response time.
> > 
> > The VM runs Windows 2008R2 and the application is an
> > 
> > client-server app developed with Progress software that talks to an Oracle
> > database (on another server), and we access this app with RDS/TSE.
> > 
> > The physical server runs Debian testing to have qemu-kvm 1.0 and
> > 
> > linux kernel 3.1 and libvirt 0.9.8. We use virtio for disk and network
> > and use the last driver for Windows (from RH).
> > 
> > We have 2 references servers : one physical and one running
> > 
> > Vmware.
> > 
> > Response time :
> > o physical = 7s
> > o VM under vmware = 8s
> > o VM under KVM = 12s (to complete with qemu-kvm 0.12.5 and
> > 
> > kernel 2.6.32 we have 22s ...).
> > 
> > I attach the libvirt xml of my vm.
> > 
> > How can I see what is happening? Do you have any ideas for increasing
> > performance?
> 
>   Is no one interested in trying to track down this issue where kvm is so
> slow?
> 
>   David.


Without more info or some way to reproduce the problem, it would be pointless 
for one of the devs to spend much time on it.





Re: Can I configure cores instead of CPU's

2011-12-02 Thread Brian Jackson

On 12/2/2011 4:27 PM, Todd And Margo Chester wrote:

Hi All,

Scientific Linux 6.1 x64
qemu-kvm-0.12.1.2-2.160.el6_1.2.x86_64

My XP-Pro guest will only let me use two CPUs.

Is there a way I can tell Virt-Manager to use
one CPU with four cores instead of four separate
CPUs?



Don't know about how to do it with virt-manager (I believe they have 
their own mailing list), but for standard qemu/kvm you can use "-smp 
4,cores=4"





Many thanks,
-T


Re: Is it possible to have SDL without X?

2011-11-30 Thread Brian Jackson

On 11/29/2011 9:29 PM, Matt Graham wrote:

Hello,

Can a guest with SDL graphics run on a host without X? I get an error:
"init kbd.
Could not initialize SDL - exiting"

The above happens on a host with X after running "/etc/init.d/xdm stop" and "chmod 
-R 777 /dev".
If I don't do the chmod, SDL complains about not being able to open the 
framebuffer and exits.

The host is Debian Squeeze with the standard qemu-kvm package in their 
repository (version 0.12.5).
The guest xml does not specify a keyboard device.
The same guest runs fine under X.

If there is any other information that could be useful, I will be very happy to 
provide it.
If this is not the right place for such questions, apologies, please let me 
know what the right place is.



The qemu list might be better, since that code all originated there. What 
exactly are you trying to achieve? It sounds like you are trying to get 
the guest to display on a Linux console via SDL. From what I understand, 
that's going to be severely limited, functionality-wise.





Thanks!
Richard



Re: user: time jump in slave machine leads to freeze

2011-01-25 Thread Brian Jackson

On 1/25/2011 9:21 AM, gnafou wrote:

Hello

We have had several cases where a slave machine freezes, eating all available
CPU (this happens randomly, say, after 3 months of correct functioning).

After a reboot, looking at the syslog from when the freeze occurred, the few (~5)
last lines written show a date which has jumped into the future (roughly 15 days
ahead), but nothing related to the crash is logged.

Our slave machines are ntp-synchronized.


we run under debian/lenny
we launch the machines with the kvm command

kernel of host :  2.6.26-2-amd64
kernel of host :  2.6.26-2-686



I'm pretty sure this issue has been fixed in newer versions of qemu-kvm 
and the kernel. You might want to try using the versions from backports. 
You are unlikely to get much support on a kernel that's 10 releases old, 
and IIRC the kvm that came with lenny was only a development snapshot.




command launched :

kvm -name svn -drive file=xxx -net tap -m 256 -net nic,macaddr=xxx  -pidfile
/var/run/kvm/xx.pid -daemonize -k fr -vnc :63006 -monitor
unix:xx_monitor,server,nowait -vnc unix:xx_vnc


If you ever have an idea  how to solve or to debug the problem ...

Thanks


Fred





Re: network performance between host and guest...

2010-12-17 Thread Brian Jackson

On 12/17/2010 4:29 PM, Erik Brakkee wrote:

Hi,


For a backup of data from a VM to a USB mounted disk I want to 
circumvent the USB 1.1 limitations on the guest and instead copy the 
data over to the host using scp/ssh. I have setup a network using 
virtio and NAT like this:








[libvirt network XML stripped by the list archive; only the fragment "function='0x0'/>" survived]





What does that equate to in command line options? Check libvirt logs 
maybe. What version of qemu-kvm? Guest details? Host details?





When I now create a 1GB file using dd and copy it over from the guest 
to the host, I am seeing a performance between 25-30 MB/s.



Is it to and from the same disk? If so, maybe you could try a tmpfs in 
the guest or host so you aren't constantly seeking back and forth on the 
same disk.


Also have you tried something like rsyncd instead of scp? Maybe you are 
hitting some sort of encryption limitation.
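One way to separate cipher overhead from network overhead is to benchmark the raw path first. The sketch below only prints a hypothetical triage plan (host.example and the rsync module name "backup" are placeholders), since actually running it needs a second machine:

```shell
# Print a throughput triage plan: raw TCP ceiling first, then an
# unencrypted transfer, then scp. Hostname and module are placeholders.
host=host.example
plan=$(cat <<EOF
# 1. raw TCP ceiling (run "iperf -s" on $host first):
iperf -c $host
# 2. same file via the rsync daemon (no ssh, so no cipher overhead):
rsync big.img rsync://$host/backup/
# 3. same file via scp; a large gap vs step 2 points at encryption:
scp big.img $host:/backup/
EOF
)
printf '%s\n' "$plan"
```

If step 1 already tops out near 25-30 MB/s, the bottleneck is the virtual network path rather than scp.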





My question is if this is normal because I have seen others on the 
internet achieve far greater speeds.



Depends on a lot of factors. Certainly raw bandwidth wise, virtio-net is 
capable of a lot more than that. With vhost-net here, I can get over 
5gbps guest to host. And that's on crappy old first gen cpus (no ept/etc.).





In any case the speeds are comparable to current USB 2.0 speeds but I 
intend on using USB 3.0 so would like to get a little bit more out of it.


What would I use to speed this up a bit futher?

Cheers
  Erik



Re: kvm hangs on mkfs

2010-12-08 Thread Brian Jackson
On Wednesday, December 08, 2010 01:09:25 pm Hanno Böck wrote:
> Hi,
> 
> I tried a relatively simple task with qemu-kvm. I have two qcow hd images
> and try to create filesystems on them using a gentoo installation disk.


qcow2 (I hope you are using that rather than plain qcow) is known to be a tad 
slow on metadata-heavy operations (i.e. mkfs, installing lots of files, 
etc.). One trick some of us use is to use the -drive syntax (vs. -hda) and set 
the cache option to unsafe or writeback for the install process. The other 
alternative is to use preallocated raw images (i.e. made with dd vs. qemu-img). 
I've been informed that in 0.12.5 the writeback trick won't do any good due to 
some extra fsync()s, so your best bet is to upgrade to 0.13 and use 
cache=unsafe.
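Put together, the suggested install workflow might look like the following sketch; the image name, size, and cache modes shown are assumptions for illustration, and the commands are printed rather than executed:

```shell
# Print an install workflow: throwaway cache mode for the install,
# safe cache mode for day-to-day use. Names and sizes are placeholders.
img=gentoo.qcow2
iso=install-x86-minimal-20101116.iso
cmds=$(cat <<EOF
qemu-img create -f qcow2 $img 10G
# install with cache=unsafe (0.13+); fine if a crash only costs the install:
qemu -m 512 -cdrom $iso -drive file=$img,cache=unsafe -boot d
# afterwards, go back to a safe cache mode:
qemu -m 512 -drive file=$img,cache=writethrough
EOF
)
printf '%s\n' "$cmds"
```

The point of cache=unsafe is that losing the image on a host crash is acceptable mid-install, which is exactly when qcow2 metadata writes hurt the most.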


> 
> Starting qemu with:
> qemu -m 512 -cdrom install-x86-minimal-20101116.iso -hda hda.img -hdb
> hdb.img
> 
> 
> However, mkfs always hangs indefinitely. Doesn't really matter if ext2/3/4,
> it always hangs at
> "Writing superblocks and filesystem accounting information:"


Have you tried strace'ing to see if it's actually doing something (just very 
slowly)?


> 
> Any idea where to start looking for the problem? (please cc me as I'm not
> subscribed to this list)


Re: [libvirt-users] Problems with libvirt / qemu

2010-10-11 Thread Brian Jackson
On Monday, October 11, 2010 11:57:27 am Dan Johansson wrote:
> Yes, the image contains an OS (it works if I start the guest manually).
> 
> On Monday 11 October 2010 03.41:29 jbuy0710 wrote:
> > The image contains an OS or not?
> > If not, you can't choose to boot from disk like ""
> > 
> > 2010/10/10 Dan Johansson :
> > > HI,
> > > 
> > > I have a small problem with libvirt / qemu. I have created a guest and
> > > when I start it from the command-line the guests starts OK, but when I
> > > start the guest through libvirt with "virsh start" I get "Booting from
> > > Hard Disk... Boot failed: not a bootable disk
> > > No bootable device"
> > > 
> > > This is the command-line I use to start the guest (which works)
> > > "cd /var/lib/kvm/Wilmer;  /usr/bin/qemu-system-x86_64 --enable-kvm \
> > > 
> > >-net nic,vlan=1,model=rtl8139,macaddr=DE:ED:BE:EF:01:03 -net
> > >tap,vlan=1,ifname=qtap13,script=no,downscript=no \ -net
> > >nic,vlan=3,model=rtl8139,macaddr=DE:ED:BE:EF:03:03 -net
> > >tap,vlan=3,ifname=qtap33,script=no,downscript=no \ -m 2048 -k
> > >de-ch -vnc :3 -daemonize \
> > >Wilmer.qcow2"


My guess would be that libvirt is using the -drive syntax with if=ide and 
boot=on. Those 2 options together are known to be broken in certain versions 
of qemu/kvm. You can find out by checking in the libvirt logs to see what kvm 
command it's running to start the guest.


> > > 
> > > The libvirt XML-file was created using "virsh domxml-from-native
> > > qemu-argv". [The resulting <domain type='kvm'> definition was largely
> > > stripped by the list archive; the surviving fragments show the name
> > > "wilmer", uuid a421968d-0573-1356-8cb7-32caff525a03, memory 2097152,
> > > 2 vcpus, an hvm os type, destroy/restart/destroy lifecycle actions,
> > > the emulator /usr/bin/qemu-system-x86_64, and devices at PCI
> > > functions 0x0 and 0x1.]
> > > 
> > > Anyone seeing something obvious that I have missed?
> > > 
> > >  Regards,
> > > 
> > > --
> > > Dan Johansson, 
> > > ***
> > > This message is printed on 100% recycled electrons!
> > > ***
> > > 
> > > ___
> > > libvirt-users mailing list
> > > libvirt-us...@redhat.com
> > > https://www.redhat.com/mailman/listinfo/libvirt-users


Re: 8 NIC limit

2010-10-05 Thread Brian Jackson

 On 10/5/2010 9:48 AM, linux_...@proinbox.com wrote:

Hello list:

I'm working on a project that calls for the creation of a firewall in
KVM.
While adding a 20-interface trunk of virtio adapters to bring in a dual
10GB bond, I've discovered an 8 NIC limit in QEMU.

I found the following thread in the list archives detailing a similar
problem:
http://kerneltrap.org/mailarchive/linux-kvm/2009/1/29/4848304

It includes a patch for the file qemu/net.h to allow 24 NICs:
https://bugs.launchpad.net/ubuntu/+source/qemu-kvm/+bug/595873/+attachment/1429544/+files/max_nics.patch

In my case I want to attach 29, and have simply changed line 8 to 30
from 24.



I'd guess you'll bump into the PCI device-number limit (I believe it is 32 
devices per bus at the moment).
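As for the patching itself, it is the standard diff/patch workflow. Below is a self-contained toy demo on a fake one-line net.h; the real max_nics.patch is applied the same way from the qemu source root, typically with "patch -p0" or "patch -p1" depending on how the diff was generated:

```shell
# Create a tiny "net.h", generate a unified diff raising MAX_NICS, apply it.
set -e
dir=$(mktemp -d); cd "$dir"
printf '#define MAX_NICS 8\n' > net.h
printf '#define MAX_NICS 30\n' > net.h.new
diff -u net.h net.h.new > max_nics.patch || true  # diff exits 1 on changes
patch net.h max_nics.patch                        # apply in place
grep MAX_NICS net.h                               # the limit is now 30
```

After patching the real tree, rebuild qemu-kvm as usual for the change to take effect.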




This will be the first patch I've ever had to do, and so far my internet
search yields results that don't seem to apply.

Would someone like to recommend a pertinent tutorial?

Many thanks


Re: vhost not working with version 12.5 with kernel 2.6.35.4

2010-09-09 Thread Brian Jackson
On Wednesday 08 September 2010 21:14:39 matthew.r.roh...@l-3com.com wrote:
> > >> When trying to use vhost I get the error "vhost-net requested but could
> > >> not be initialized".  The only thing I have been able to find about this
> > >> problem relates to SElinux being turned off, which mine is disabled and
> > >> permissive.  Just wondering if there were any other thoughts on this
> > >> error? Am I correct that it should work with the .35.4 kernel and
> > >> version 12.5 KVM?
> >
> > If you mean 0.12.5, no. If you mean 0.12.50 (i.e. a git checkout from some
> > point after 0.12.0 was released), then it depends on when the checkout is
> > from.
>
> I do mean 0.12.50 checked out from qemu-kvm via git a couple of weeks
> ago.  If I can ask, is 0.12.5 just regular qemu and 0.12.50 qemu-kvm?



0.12.5 is a release, 0.12.50 is a git checkout. I don't remember exactly when 
vhost support was added in the qemu repository, but you might try a more 
recent checkout or the 0.13 rc should work too.
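With a new enough kernel and qemu, a quick preflight on the host before adding vhost=on is to check for the vhost-net device node. A minimal sketch (the modprobe hint assumes CONFIG_VHOST_NET was built as a module):

```shell
# Report whether vhost-net looks usable on this host; with the module
# built, "modprobe vhost_net" creates /dev/vhost-net.
if [ -c /dev/vhost-net ]; then
    msg="vhost-net available"
else
    msg="vhost-net missing: try 'modprobe vhost_net' (needs kernel >= 2.6.34)"
fi
echo "$msg"
```

If the device node is present but qemu still fails, permissions on /dev/vhost-net for the user running qemu are the next thing to check.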



> 
> > >>KVM Host OS: Fedora 12 x86_64
> > >>KVM Guest OS Tiny Core Linux 2.6.33.3 kernel
> > >>
> > >> Host kernel 2.6.35.4 and qemu-system-x86_64 12.5 compiled from from
> > >> 
> > >> qemu-kvm repo.


Re: GPU passthrough again ...

2010-09-05 Thread Brian Jackson

 On 9/5/2010 3:22 PM, Christian Voß wrote:

Hi,

a short question: Is GPU passthrough into KVM guests now available?



No. I'm sure it'll be announced fairly loudly when/if it does work.




Thanks a lot.


Re: vhost not working with version 12.5 with kernel 2.6.35.4

2010-09-02 Thread Brian Jackson
On Thursday, September 02, 2010 06:27:26 pm matthew.r.roh...@l-3com.com wrote:
> When trying to use vhost I get the error "vhost-net requested but could
> not be initialized".  The only thing I have been able to find about this
> problem relates to SElinux being turned off which mine is disabled and
> permissive.  Just wondering if there were any other thoughts on this
> error? Am I correct that it should work with the .35.4 kernel and
> version 12.5 KVM?


If you mean 0.12.5, no. If you mean 0.12.50 (i.e. a git checkout from some 
point after 0.12.0 was released), then it depends on when the checkout is 
from.



> 
> 
> KVM Host OS: Fedora 12 x86_64
> KVM Guest OS Tiny Core Linux 2.6.33.3 kernel
> 
> Host kernel 2.6.35.4 and qemu-system-x86_64 12.5 compiled from from
> qemu-kvm repo.
> 
> Starting with:
>   modprobe kvm
>   modprobe kvm_intel
>   modprobe tun
>   echo -e "Setting up bridge device br0" "\r"
>   brctl addbr br0
>   ifconfig br0 192.168.100.254 netmask 255.255.255.0 up
>   brctl addif br0 eth7
>   ifconfig eth7 down
>   ifconfig eth7 0.0.0.0
>   for ((i=0; i < NUM_OF_DEVICES ; i++)); do
>echo -e "Setting up " "\r"
>tunctl -b -g ${KVMNET_GID} -t kvmnet$i
>brctl addif br0 kvmnet$i
>ifconfig kvmnet$i up 0.0.0.0 promisc
>done
>   echo "1" > /proc/sys/net/ipv4/ip_forward
>   iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -o eth7 -j
> MASQUERADE
>   for ((i=0; i < NUM_OF_DEVICES ; i++)); do
> 
>   qemu-img create -f qcow2 vdisk_$i.img $VDISK_SIZE
> 
>   /root/bin/qemu-system-x86_64 -cpu host -drive
> file=./vdisk_$i.img,if=virtio,boot=on -cdrom ./$2 -boot d \
>   -netdev
> type=tap,id=tap.0,script=no,ifname=kvmnet$i,vhost=on \
>   -device
> virtio-net-pci,netdev=tap.0,mac=52:54:00:12:34:3$i \
>   -m 1024 \
>   -smp 2 \
>   -usb \
>   -usbdevice tablet \
>   -localtime \
>   -daemonize \
> -vga std
> 
> 
> Thanks in advance for the input! -Matt


Re: Degrading Network performance as KVM/kernel version increases

2010-08-31 Thread Brian Jackson

 On 8/31/2010 6:00 PM, matthew.r.roh...@l-3com.com wrote:

I have been getting degrading network performance with newer versions of
KVM and was wondering if this was expected?  It seems like a bug, but I
am new to this and maybe I am doing something wrong so I thought I would
ask.

KVM Host OS: Fedora 12 x86_64
KVM Guest OS Tiny Core Linux 2.6.33.3 kernel

I have tried multiple host kernels 2.6.31.5, 2.6.31.6, 2.6.32.19 and
2.6.35.4 along with versions qemu-kvm 11.0 and qemu-system-x86_64 12.5
> compiled from the qemu-kvm repo.



I can't say anything about the kernel version making things worse. At 
least for the qemu-kvm version, you should be using -device and -netdev 
instead of -net nic -net tap (see 
http://git.qemu.org/qemu.git/tree/docs/qdev-device-use.txt since it's 
not in the 0.12 tree).
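As a hedged sketch of what that conversion looks like (the tap name, id, and MAC address below are illustrative, not taken from the original post):

```sh
# Legacy syntax (deprecated in favor of -netdev/-device):
#   -net nic,model=virtio,macaddr=52:54:00:12:34:56 -net tap,ifname=tap0,script=no
# Newer qdev syntax, pairing a host backend with a guest device by id:
qemu-system-x86_64 [other options...] \
    -netdev tap,id=net0,ifname=tap0,script=no \
    -device virtio-net-pci,netdev=net0,mac=52:54:00:12:34:56
```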




Setup is: 2 hosts with 1 guest on each connected by 10 Gb nic.

I am using virtio and have checked that hardware acceleration is
working.

Processor usage is less than 50% on host and guests.

Here is what I am seeing, I will just include guest to guest statistics,
I do have more (host to guest, etc.) if interested:






My goal is to get as much bandwidth as I can between the 2 guests
running on separate hosts.  The most I have been able to get is ~4 Gb/s
running 4 threads on iperf from guest A to guest B.  I cannot seem to
get much over 1.5Gb/s from guest to guest with a single iperf thread.
Is there some sort of known send limit per thread?  Is it expected that
the latest version of the kernel and modules perform worse than earlier
versions in the area of network performance ( I am guessing not, am I
doing something wrong?)?  I am using virtio and have checked that
hardware acceleration is working.  4 iperf threads host to host yields
~9.5 Gb/s.  Any ideas on how I can get better performance with newer
versions?  I have tried using vhost in 2.6.35 but I get the vhost could
not be initialized error.  The only thing I could find on the vhost
error is that selinux should be off which it is.

I am looking for ideas on increasing the bandwidth between guests and
thoughts on the degrading performance.



Vhost-net is probably your best bet for maximizing throughput. You might 
try a separate post just for the vhost error if nobody chimes in about 
it here.




Thanks for your help! --Matt




Re: Is it possible to live migrate guest OS'es between different versions of kvm/qemu-kvm?

2010-08-30 Thread Brian Jackson
On Monday, August 30, 2010 07:04:51 am Nils Cant wrote:
> Hey guys,
> 
> next try is without libvirt, but still no joy.
> 
> After issuing 'migrate -d ' on my sending host (qemu-kvm 0.11.0), I
> get the following output on the receiving host (qemu-kvm 0.12.4):


A quick search of this and/or the qemu mailing lists would have told you this 
is unsupported.


> 
> (qemu) Unknown savevm section or instance 'slirp' 0
> load of migration failed
> 
> ... leaving my vm broken.
> 
> Am I doing something wrong, or is the format of the savevm file just
> different and should I abandon all hope of ever doing a live migration
> to a newer version of qemu-kvm?
> 
> Thanks in advance,
> 
> 
> Nils
> 
> ---
> 
> Here are my upstart options on the 'sender':
> 
> /usr/bin/kvm -S \
>  -M pc-0.11 \
>  -enable-kvm \
>  -m 512 \
>  -smp 2,sockets=2,cores=1,threads=1 \
>  -name testserver \
>  -uuid 890a0156-0542-32d8-66d7-b36a711084cc \
>  -monitor
> unix:/var/lib/libvirt/qemu/testserver.monitor,server,nowait \
>  -boot c \
>  -drive
> file=/dev/disk/by-path/ip-192.168.3.100:3260-iscsi-iqn.2003-10.com.lefthand
> networks:lefthand0:64:testserver-lun-0,if=virtio,boot=on \
>  -drive media=cdrom \
>  -usb \
>  -vnc 0.0.0.0:10 \
>  -k en-us \
>  -vga cirrus &
> 
> And the 'receiver':
> 
> /usr/bin/kvm -S \
>  -M pc-0.11 \
>  -enable-kvm \
>  -m 512 \
>  -smp 2,sockets=2,cores=1,threads=1 \
>  -name testserver \
>  -uuid 890a0156-0542-32d8-66d7-b36a711084cc \
>  -nodefaults \
>  -chardev
> socket,id=monitor,path=/var/lib/libvirt/qemu/testserver.monitor,server,nowa
> it \
>  -mon chardev=monitor,mode=readline \
>  -boot c \
>  -drive
> file=/dev/disk/by-path/ip-192.168.3.100:3260-iscsi-iqn.2003-10.com.lefthand
> networks:lefthand0:64:testserver-lun-0,if=none,id=drive-virtio-disk0,boot=o
> n \
>  -drive if=none,media=cdrom,id=drive-ide0-1-0 \
>  -device
> ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 \
>  -device
> virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0
> \ -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x3 \ -chardev
> pty,id=serial0 \
>  -device isa-serial,chardev=serial0 \
>  -usb \
>  -vnc 0.0.0.0:0 \
>  -k en-us \
>  -vga cirrus \
>  -incoming tcp:0: &


Re: KVM Bridged Networking Issue in Ubuntu Lucid

2010-08-24 Thread Brian Jackson
On Tuesday, August 24, 2010 09:08:30 am Rus Hughes wrote:
> Hi guys,
> 
> I'm trying to sort out networking for a VM I've created on my Ubuntu
> Lucid box but the VM cannot access the Internet.
> I can connect to the VM (crisps) from the host (holly) and to the host
> from the VM.
> But the VM cannot connect to the Internet and the Internet cannot
> connect to the VM.
> 
> I followed the guide here: https://help.ubuntu.com/community/KVM
> 
> I created it using ubuntu-vm-builder using :
> 
> ubuntu-vm-builder kvm lucid \
>   --domain crisps \
>   --dest crisps \
>   --arch amd64 \
>   --hostname crisps \
>   --mem 2048 \
>   --rootsize 4096 \
>   --swapsize 1024 \
>   --user magicalusernameofdoom \
>   --pass somesecretawesomepassword \
>   --ip 178.63.60.159 \
>   --mask 255.255.255.192 \
>   --bcast 178.63.60.191 \
>   --gw 178.63.60.129 \
>   --dns 213.133.99.99 \
>   --mirror http://de.archive.ubuntu.com/ubuntu \
>   --components main,universe,restricted,multiverse \
>   --addpkg openssh-server \
>   --libvirt qemu:///system \
> --bridge br0;
> 
> "virsh start crisps" starts the vm up :
> 
> root 29297  0.5  3.3 2280396 271220 ?  Sl   14:12   0:13
> /usr/bin/kvm -S -M pc-0.12 -enable-kvm -m 2048 -smp 1 -name crisps
> -uuid e2cda480-29e7-3740-d6c6-816e2f78223e -chardev
> socket,id=monitor,path=/var/lib/libvirt/qemu/crisps.monitor,server,nowait
> -monitor chardev:monitor -boot c -drive
> file=/home/idimmu/vms/crisps/tmplXQBtt.qcow2,if=ide,index=0,boot=on
> -net nic,macaddr=52:54:00:a6:28:b9,vlan=0,model=virtio,name=virtio.0
> -net tap,fd=41,vlan=0,name=tap.0 -serial none -parallel none -usb -vnc
> 127.0.0.1:0 -vga cirrus
> 
> Networking on the host is like so:
> 
> br0   Link encap:Ethernet  HWaddr 26:5d:c9:d2:75:2e
>   inet addr:178.63.60.138  Bcast:178.63.60.191 
> Mask:255.255.255.192 inet6 addr: fe80::4261:86ff:fee9:d69a/64 Scope:Link
>   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>   RX packets:49080900 errors:0 dropped:0 overruns:0 frame:0
>   TX packets:54086580 errors:0 dropped:0 overruns:0 carrier:0
>   collisions:0 txqueuelen:0
>   RX bytes:35266886777 (35.2 GB)  TX bytes:60292047383 (60.2 GB)
> 
> eth0  Link encap:Ethernet  HWaddr 40:61:86:e9:d6:9a
>   inet6 addr: fe80::4261:86ff:fee9:d69a/64 Scope:Link
>   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>   RX packets:49418744 errors:0 dropped:0 overruns:0 frame:0
>   TX packets:54348661 errors:0 dropped:0 overruns:0 carrier:0
>   collisions:0 txqueuelen:1000
>   RX bytes:36337218359 (36.3 GB)  TX bytes:60378047518 (60.3 GB)
>   Interrupt:29 Base address:0x4000
> 
> eth0:0Link encap:Ethernet  HWaddr 40:61:86:e9:d6:9a
>   inet addr:178.63.60.178  Bcast:178.63.60.191 
> Mask:255.255.255.192 UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>   Interrupt:29 Base address:0x4000
> 


Maybe look here: should this maybe be br0:0? (Or better yet, just use ip addr 
add to add the second address to the bridge instead of using aliases.)
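For example (a sketch using the addresses from the original post; /26 corresponds to netmask 255.255.255.192):

```sh
# Drop the eth0:0 alias and put the second address on the bridge itself:
ip addr add 178.63.60.178/26 dev br0
```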



> loLink encap:Local Loopback
>   inet addr:127.0.0.1  Mask:255.0.0.0
>   inet6 addr: ::1/128 Scope:Host
>   UP LOOPBACK RUNNING  MTU:16436  Metric:1
>   RX packets:1063249 errors:0 dropped:0 overruns:0 frame:0
>   TX packets:1063249 errors:0 dropped:0 overruns:0 carrier:0
>   collisions:0 txqueuelen:0
>   RX bytes:325111540 (325.1 MB)  TX bytes:325111540 (325.1 MB)
> 
> vnet0 Link encap:Ethernet  HWaddr 26:5d:c9:d2:75:2e
>   inet6 addr: fe80::245d:c9ff:fed2:752e/64 Scope:Link
>   UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>   RX packets:72 errors:0 dropped:0 overruns:0 frame:0
>   TX packets:33 errors:0 dropped:0 overruns:0 carrier:0
>   collisions:0 txqueuelen:500
>   RX bytes:3412 (3.4 KB)  TX bytes:2698 (2.6 KB)
> 
> the hosts /etc/network/interfaces is like so :
> 
> auto lo
> iface lo inet loopback
> 
> # device: eth0
> auto  eth0
> iface eth0 inet manual
> 
> auto br0
> iface br0 inet static
>   address   178.63.60.138
>   broadcast 178.63.60.191
>   netmask   255.255.255.192
>   gateway   178.63.60.129
>   network 178.63.60.128
>   bridge_ports eth0
>   bridge_stp off
>   bridge_fd 0
>   bridge_maxwait 0
> 
> # subli.me.uk
> auto eth0:0
> iface eth0:0 inet static
>   address 178.63.60.178
>   netmask 255.255.255.192
> 
> 
> If I ping the VM I see the packets on the host br0 but not vnet0, this
> is the output of brctl show on the host :
> 
> bridge name bridge id   STP enabled interfaces
> br0 8000.265dc9d2752e

Re: virtio driver

2010-08-13 Thread Brian Jackson
On Friday, August 13, 2010 02:08:07 am Nirmal Guhan wrote:
> Hi,
> 
> My guest (2.6.32 kernel with some patches unrelated to kvm) does not
> seem to work with virtio driver (model=virtio in qemu-kvm). My rootfs
> is over nfs. If I change model=pcnet, guest comes up fine. With
> virtio, I get error as "No network devices available". I assume I have
> to add virtio driver to the kernel i.e
> 
> CONFIG_VIRTIO_NET=y (Device Drivers -> Network device support ->
> Virtio network driver)


You'll also need to enable Virtio PCI (at least).
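A minimal sketch of the guest kernel options involved (exact option names may vary by kernel version); building them in with =y rather than as modules avoids needing an initramfs when the rootfs is on NFS:

```
CONFIG_VIRTIO=y
CONFIG_VIRTIO_PCI=y      # transport layer: required for virtio-net-pci devices
CONFIG_VIRTIO_NET=y      # the network driver itself
```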


> 
> Is there anything else I should do?
> 
> Thanks,
> Nirmal


Re: [Qemu-devel] Re: KVM Call agenda for July 13th

2010-07-13 Thread Brian Jackson
On Tuesday, July 13, 2010 12:01:22 pm Avi Kivity wrote:
> On 07/13/2010 07:57 PM, Anthony Liguori wrote:
> >> I'd like to see more frequent stable releases, at least if the stable
> >> branch contains fixes to user-reported bugs (or of course security or
> >> data integrity fixes).
> > 
> > Would you like to see more frequent stable releases or more frequent
> > master releases?
> 
> Yes.  But in this context I'm interested in stable releases.  We have
> bugs reported, fixed, and the fix applied, yet the fixes are unreachable
> to users.

Especially so since qemu-kvm 0.12-stable hasn't been merged with qemu 
basically since 0.12.4 came out. I was trying to help one of the Gentoo 
maintainers find post 0.12.4 patches the other day and had to point them to 
the upstream qemu stable tree.


Re: [PATCH] increase MAX_NICS value

2010-07-09 Thread Brian Jackson
On Friday, July 09, 2010 10:39:17 am Alessandro Bono wrote:
> Hi all
> 
> The max number of allowed NICs per VM is hardcoded in net.h to 8. This value
> is too low in situations with big networks; increase it to 24.
> This should be safe, as mentioned in this thread:
> http://kerneltrap.org/mailarchive/linux-kvm/2009/1/29/4848304/thread


A better long term solution might be to convert it to QLIST or something 
instead of hardcoding the array size.


> 
> Signed-off-by: Alessandro Bono 
> 
> --- net.h.old   2010-07-09 17:30:39.542170103 +0200
> +++ net.h   2010-07-09 17:30:48.842166029 +0200
> @@ -121,7 +121,7 @@
> 
>  /* NIC info */
> 
> -#define MAX_NICS 8
> +#define MAX_NICS 24
> 
>  struct NICInfo {
>  uint8_t macaddr[6];


Re: qemu-kvm 0.12.4 hanging forever

2010-07-01 Thread Brian Jackson
On Thursday, July 01, 2010 05:37:29 pm Zach Carter wrote:
> Hi:
> 
> Under certain 100% reproducible circumstances, when I try to run qemu-kvm,
> it completely hangs without appearing to do anything
> 
> I have tried this with qemu-kvm 0.12.4 and with the latest code from the
> git repository.
> 
> Here are some more interesting details.  The 64 bit host, kernel, and kvm
> kernel module are all vanilla CentOS 5.4.  If I use the
> kmod-kvm-83-105.el5_4.9 version of the kernel module, it works fine.  Yum update
> to anything past that and the hang occurs 100% of the time.  Update all
> the way to the very latest CentOS 5.5 packages, and I still get the hang. 
> Downgrade just the kvm kernel modules and it starts working again.


That's pretty well known. Use a newer kernel with a newer qemu-kvm, or use the 
packages that come with CentOS.


> 
> I'm sure I could use the qemu-kvm that ships from CentOS with the
> corresponding kernel module, however that lacks certain essential features,
> including support for scsi disk drive emulation.


There's a reason Red Hat disables SCSI support in their KVM... it's not really 
suggested to use it.


> 
> After messing around with gdb and some printf statements, it looks to me
> like it just loops forever and ever in the "while (1)" loop on about line
> 1710 of qemu-kvm.c.   Killing it and grabbing a backtrace shows that it is
> spending its time mostly sitting in the "select" system call on about line
> 1288 in vt.c.  Maybe its normal to sit in that loop and poll for events,
> but something else should happen eventually.  What is it waiting for?
> 
> If I add the -no-kvm option to the command line, it works fine.
> 
> Any ideas on how to further troubleshoot?
> 
> thanks!
> 
> -Zach
> 
> My command line:
> 
> /opt/qemu-git/bin/qemu-system-x86_64 -drive
> file=my.vmdk,if=scsi,cache=writeback -cdrom my.iso -boot d -nographic -m
> 1024 -no-kvm-irqchip -kernel my.vmlinuz -initrd my.initrd.img -append
> 'initrd=initrd.img root=/dev/ram0 console=tty0 console=ttyS0,19200 quiet
> action=install mode=silent ramdisk_size=147169 mkvm_cpu_lm
> mkvm_device=/dev/sda mkvm'


Re: random crash in post_kvm_run()

2010-06-28 Thread Brian Jackson
On Monday, June 28, 2010 12:28:52 pm BuraphaLinux Server wrote:
> Hello,
> 
> I have tried qemu_kvm 0.12.4 release and also git from about 1/2
> an hour ago.  In both cases, I crash in the post_kvm_run() function on
> the line about:
> 
> pthread_mutex_lock(&qemu_mutex);
> 
> The command I use to run qemu worked great with
> glibc-2.11.1, linux-2.6.32.14, and gcc-4.4.3,
> but I have upgraded to glibc-2.11.2, linux-2.6.34, and gcc-4.4.4 and get
> this:
> 
> (gdb) bt
> #0  post_kvm_run (kvm=0x84cde04, env=0x84e7168)
> at /tmp/qemu-kvm-201006282359/qemu-kvm.c:566
> #1  0x08086ccf in kvm_run (env=0x84e7168)
> at /tmp/qemu-kvm-201006282359/qemu-kvm.c:619
> #2  0x080882d0 in kvm_cpu_exec (env=0x84e7168)
> at /tmp/qemu-kvm-201006282359/qemu-kvm.c:1238
> #3  0x08088cf6 in kvm_main_loop_cpu (env=0x84e7168)
> at /tmp/qemu-kvm-201006282359/qemu-kvm.c:1495
> #4  0x08088e72 in ap_main_loop (_env=0x84e7168)
> at /tmp/qemu-kvm-201006282359/qemu-kvm.c:1541
> #5  0x55598690 in start_thread () from /lib/libpthread.so.0
> #6  0x55a8ca7e in clone () from /lib/libc.so.6
> (gdb) list
> 561 in /tmp/qemu-kvm-201006282359/qemu-kvm.c
> (gdb) print qemu_mutex
> $1 = {__data = {__lock = 0, __count = 0, __owner = 0, __kind = 0,
> __nusers = 0, {__spins = 0, __list = {__next = 0x0}}},
>   __size = '\000' , __align = 0}
> (gdb)
> 
> I rebuilt the kernel, then glibc, then the entire graphics stack, then
> qemu_kvm to try and be sure I have no problems about headers.  All my
> other software works, but qemu_kvm does not.  About 1 time in 10 it
> will actually run fine, but the other times it will crash as shown.  I
> use a dedicated LV for this.  I have a 32bit userland with a 64bit
> kernel.  Here is the script I use:
> 
> #! /sbin/bash
> INSTANCE=0
> NAME=VM${INSTANCE}
> FAKEDISK=/dev/mapper/vmland-vmdisk${INSTANCE}
> ((MACNO=22+INSTANCE))
> ulimit -S -c unlimited
> echo qemu-system-x86_64 \
>   -cpu core2duo -smp 2 -m 512 \
>   -vga std \
>   -vnc :${INSTANCE} -monitor stdio \
>   -localtime -usb -usbdevice mouse \
>   -net nic,vlan=0,model=rtl8139,macaddr=DE:AD:BE:EF:25:${MACNO} \
>   -net
> tap,ifname=tap${INSTANCE},script=/etc/qemu-ifup,downscript=/etc/qemu-ifdow
> n \
>   -name ${NAME} \
>   -hda ${FAKEDISK} \
>   -boot c
> qemu-system-x86_64 \
>   -cpu core2duo -smp 2 -m 512 \


try without -cpu core2duo



>   -vga std \
>   -vnc :${INSTANCE} -monitor stdio \
>   -localtime -usb -usbdevice mouse \
>   -net nic,vlan=0,model=rtl8139,macaddr=DE:AD:BE:EF:25:${MACNO} \
>   -net
> tap,ifname=tap${INSTANCE},script=/etc/qemu-ifup,downscript=/etc/qemu-ifdow
> n \
>   -name ${NAME} \
>   -hda ${FAKEDISK} \
>   -boot c
> # just in case
> /usr/sbin/brctl delif br0 tap${INSTANCE}
> 
> The bridging and taps all worked before.   The CPU is a core i7 950,
> I've got 12GB of RAM, and I'm going nuts trying to debug this.  Since
> it sometimes works, I wonder if there is some uninitialized variable
> that sometimes is set so I get lucky but usually is set where things
> crash.
> 
> I don't want to place blame, I just want to get it working.  Any
> hints?  I'm not subscribed, but the page at
> http://www.linux-kvm.org/page/Lists,_IRC said it's ok to send a
> message anyway.  Please cc: me so I get a copy, or if I need to join
> the list please tell me.
> 
> I compile it all from source (similar to linux from scratch) so there
> is no upstream distro to go ask for help.  Since everything else
> works, I suspect something strange in qemu_kvm.  I did google a lot
> but found nothing helpful.
> 
> The ISO image used works on real hardware, and uses the same kernel
> and userland.  The isolinux shows the menu and works great, but when
> it is time to boot the kernel I get the crash.
> 
> The kernel modules kvm and kvm_intel are loaded when I try to start
> qemu_kvm.
> 
> The /var/log/messages just shows this:
> 
> Jun 29 00:05:47 banpuk kernel: [20299.236926] qemu-system-x86[31375]:
> segfault at 14 ip 08086a64 sp 5601e180 error 4 in
> qemu-system-x86_64[8048000+256000]
> 
> The /var/log/syslog show this:
> 
> Jun 29 00:06:00 banpuk kernel: [20312.302498] kvm: 31383: cpu0
> unhandled wrmsr: 0x198 data 0
> Jun 29 00:06:00 banpuk kernel: [20312.302606] kvm: 31383: cpu1
> unhandled wrmsr: 0x198 data 0
> 
> JGH


Re: Loss of network connectivity with high load

2010-06-15 Thread Brian Jackson
On Tuesday 15 June 2010 16:00:38 Daniel Bareiro wrote:
> Hi all!
> 
> I'm using Linux 2.6.31.13 compiled with the kernel.org source code on
> KVM host with Debian GNU/Linux Lenny amd64. Also I'm using Debian
> GNU/Linux Lenny amd64 virtual machine with kernel 2.6.26-2 from Debian
> repositories. I'm using qemu-kvm 0.12.3.
> 
> These are the parameters I'm using to start the virtual machine:
> 
> 
> escher:~# ps ax|grep kvm
>  6299 ?Rl   113:40 /usr/local/qemu-kvm/bin/qemu-system-x86_64
> -drive file=/dev/vm/belvedere-raiz,cache=none,if=virtio,boot=on -drive
> file=/dev/vm/belvedere-u01,cache=none,if=virtio -drive
> file=/dev/vm/belvedere-u02,cache=none,if=virtio -drive
> file=/dev/vm/belvedere-u03,cache=none,if=virtio -drive
> file=/dev/vm/belvedere-u04,cache=none,if=virtio -drive
> file=/dev/vm/belvedere-u05,cache=none,if=virtio -drive
> file=/dev/vm/belvedere-u06,cache=none,if=virtio -drive
> file=/dev/vm/belvedere-u07,cache=none,if=virtio -drive
> file=/dev/vm/belvedere-u08,cache=none,if=virtio -drive
> file=/dev/vm/belvedere-u09,cache=none,if=virtio -m 3072 -smp 2 -net
> nic,model=virtio,macaddr=00:16:3e:00:00:56 -net tap -daemonize -vnc :2
> -k es -localtime -monitor telnet:localhost:4002,server,nowait -serial
> telnet:localhost:4042,server,nowait
> 
> 
> When I try to copy directories of several gigabytes in this VM using
> rsync to other hosts, this virtual machine loses connectivity with the
> rest of the network. Checking, using a serial console connection,
> /var/log/syslog and /var/log/messages in search of the problem, I don't
> see information that might give a clue to the problem.
> 
> This problem is not with the network driver by default (rtl8139), so I
> think that should be something related to Virtio.


There have been a few bugs similar to this reported that should be pretty easy 
to find. Basically, I'd say try 0.12.4, it should have a few fixes in this 
area over 0.12.3.


> 
> Thanks in advance for your reply.
> 
> Regards,
> Daniel


Re: Opteron AMD-V support

2010-06-03 Thread Brian Jackson
On Thursday 03 June 2010 21:33:24 Govender, Sashan wrote:
> Hi
> 
> We bumped into this issue with VMWare ESX 4 where it doesn't support
> hardware virtualization if the processor is an AMD Athlon/Opteron
> (http://communities.vmware.com/docs/DOC-9150). Does linux-kvm have a
> similar issue? More specifically will the the module kvm_amd.ko support
> AMD-V on an Opteron 2218?


Yes, KVM doesn't try to be too smart. If you have svm/vt, it runs. If you 
don't, it falls back to tcg (qemu's normal/slow mode). The kvm-amd module will 
load as long as the bios and the CPU both support and enable svm.
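A quick, hedged way to check this from the host (an empty result usually means no hardware virtualization support, or that it has been disabled in the BIOS):

```sh
# svm = AMD-V, vmx = Intel VT-x
grep -E -o 'svm|vmx' /proc/cpuinfo | sort -u
```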


> 
> Thanks


Re: Gentoo guest with smp: emerge freeze while recompile world

2010-05-21 Thread Brian Jackson
On Friday, May 21, 2010 10:46:10 am Riccardo wrote:
> -- Original Message ---
>  From: Avi Kivity 
>  To: Riccardo 
>  Cc: kvm@vger.kernel.org
>  Sent: Fri, 21 May 2010 18:21:20 +0300
>  Subject: Re: Gentoo guest with smp: emerge freeze while recompile world
> 
>  > On 05/21/2010 04:16 PM, Riccardo wrote:
>  > > ...
>  > > 
>  > >> There are almost impossible to debug.
>  > >> 
>  > >> Try copying vmlinux out of your guest and attach with gdb when it
>  > >> hangs.  Then issue the command
>  > >> 
>  > >>(gdb) thread apply all backtrace
>  > >> 
>  > >> to see what the guest is doing.
>  > > 
>  > > panic.
>  > > --- End of Original Message ---
>  > > 
>  > > Hi,
>  > > I compile gentoo-sources-2.6.31-r10 and with this kernel "emerge -e
>  > > world" complete without errors!
>  > 
>  > Interesing.  Can you so a git bisect to see where it stops working?
> 
>  Ehm sorry I don't understand the request have you a link?
> 
>  > > I always use the same .config
>  > > 
>  > > After I try gentoo-sources-2.6.34 and vanilla-sources-2.6.34 but the
>  > > problem remain, the compile freeze and I see this in ps -elf:
>  > > 
>  > > 5 S root  1013 1  0  76  -4 -  3125 poll_s 13:00 ?   
>  > > 00:00:00 /sbin/udevd --daemon
>  > > 1 S root  2669 1  0  80   0 -  7523 wait   13:00 ?   
>  > > 00:00:00 supervising syslog-ng
>  > > 5 S root  2670  2669  0  80   0 -   7556 poll_s 13:00 ?   
>  > > 00:00:00 /usr/sbin/syslog-ng
>  > > 1 S root  3258 1  0  80   0 -  9505 poll_s 13:00 ?   
>  > > 00:00:00 /usr/sbin/sshd
>  > > 1 S root  3378 1  0  80   0 -  4115 hrtime 13:00 ?   
>  > > 00:00:00 /usr/sbin/cron
>  > > 0 S root  3446 1  0  80   0 -  1493 n_tty_ 13:00 tty2
>  > > 00:00:00 /sbin/agetty 38400 tty2 linux
>  > > 0 S root  3447 1  0  80   0 -  1493 n_tty_ 13:00 tty3
>  > > 00:00:00 /sbin/agetty 38400 tty3 linux
>  > > 0 S root  3448 1  0  80   0 -  1493 n_tty_ 13:00 tty4
>  > > 00:00:00 /sbin/agetty 38400 tty4 linux
>  > > 0 S root  3449 1  0  80   0 -  1493 n_tty_ 13:00 tty5
>  > > 00:00:00 /sbin/agetty 38400 tty5 linux
>  > > 0 S root  3450 1  0  80   0 -  1493 n_tty_ 13:00 tty6
>  > > 00:00:00 /sbin/agetty 38400 tty6 linux
>  > > 5 S root  3457 1  0  80   0 -  5959 poll_s 13:00 ?   
>  > > 00:00:00 SCREEN -S sb1
>  > > 4 S root  3458  3457  0  80   0 -   4454 wait   13:00 pts/0   
>  > > 00:00:00 -/bin/bash
>  > > 4 S root  3462  3458  0  75  -5 - 45171 poll_s 13:00 pts/0   
>  > > 00:00:34 /usr/bin/python2.6 /usr/bin/emerge -e world
>  > > 4 S root  3613 1  0  80   0 - 14014 wait   13:01 tty1
>  > > 00:00:00 /bin/login --
>  > > 4 S root  3953  3613  0  80   0 -   4429 n_tty_ 13:01 tty1
> 
>  00:00:00 -bash
> 
>  > > 0 S root  6614  3462  0  75  -5 -   972 wait   14:26 pts/0   
>  > > 00:00:00 [dev-util/pkgconfig-0.23] sandbox
>  > > "/usr/lib64/portage/bin/ebuild.sh" compile 4 S root  6615  6614 
>  > > 0  75  -5 -   6362 wait   14:26 pts/000:00:00 /bin/bash
>  > > /usr/lib64/portage/bin/ebuild.sh compile
>  > > 5 S root  6646  6615  0  75  -5 -   6745 wait   14:26 pts/0   
>  > > 00:00:00 /bin/bash /usr/lib64/portage/bin/ebuild.sh compile
>  > > 4 S root 13235  6646  0  75  -5 -   3651 wait   14:27 pts/0   
>  > > 00:00:00 make -j8
>  > > 4 S root 13238 13235  0  75  -5 -  3652 wait   14:27 pts/0   
>  > > 00:00:00 make all-recursive
>  > > 4 S root 13239 13238  0  75  -5 -  5956 wait   14:27 pts/0   
>  > > 00:00:00 /bin/sh -c set fnord $MAKEFLAGS; amf=$2; \?dot_seen=no;
>  > > \?target=`echo all-recursive | sed s/-recursive//`; \?list=
>  > > 5 S root 13243 13239  0  75  -5 -  5956 wait   14:27 pts/0   
>  > > 00:00:00 /bin/sh -c set fnord $MAKEFLAGS; amf=$2; \?dot_seen=no;
>  > > \?target=`echo all-recursive | sed s/-recursive//`; \?list=
>  > > 4 S root 13244 13243  0  75  -5 -  3686 wait   14:27 pts/0   
>  > > 00:00:00 make all
>  > > 4 S root 13358 13244  0  75  -5 -  3684 wait   14:27 pts/0   
>  > > 00:00:00 make all-recursive
>  > > 4 S root 13359 13358  0  75  -5 -  5956 wait   14:27 pts/0   
>  > > 00:00:00 /bin/sh -c set fnord $MAKEFLAGS; amf=$2; \?dot_seen=no;
>  > > \?target=`echo all-recursive | sed s/-recursive//`; \?list=
>  > > 5 S root 16546 13359  0  75  -5 -  5956 wait   14:28 pts/0   
>  > > 00:00:00 /bin/sh -c set fnord $MAKEFLAGS; amf=$2; \?dot_seen=no;
>  > > \?target=`echo all-recursive | sed s/-recursive//`; \?list=
>  > > 4 S root 16547 16546  0  75  -5 -  3652 wait   14:28 pts/0   
>  > > 00:00:00 make all
>  > > 4 S root 16548 16547  0  75  -5 -  3652 n_tty_ 14:28 pts/0   
>  > > 00:00:00 make all-am
>  > > 4 S root 16599  3258  0  80   0 - 17937 poll_s 15:07 ?   
>  > > 00:00:00 sshd: r...@pts/2
>  > > 4 S root 16602 16599  0  80   0 -  4429 wait   15:07 pts/2   
>  > > 00:00:00
> 
>  -bash
> 
>  > > 4 R root 16611 16602  0  80   0 -  36

Re: computer frozen

2010-05-20 Thread Brian Jackson
On Thursday, May 20, 2010 02:03:31 am magicboiz wrote:
> Hello
> 
> since kernel 2.6.28 or 2.6.29, I don't remember exactly, whenever I try to
> run KVM in my laptop, I get my computer totally frozen.
> 
> I'd try:
>  - "-no-kvm" flag: works, but very slow
>  - "-cpu qemu32,-nx": frozen
>  - "-no-acpi" flag: frozen
> 
> I'd try with several kernels (ubuntu and openssuse kernels), also with
> custom kernels compiled by me (with the minimal options enabled)but
> always the same result: computer frozen


It's been a long time since KVM caused host lockups. That is almost always 
something hardware/bios/local config related.


> 
> An interesting point: with Sun VirtualBox 3.1, the same frozen result.
> 
> My laptop is a TOSHIBA TECRA S4 (europe model only).
> 
> magicb...@linux-ue9l:~/> cat /proc/cpuinfo



> 
> Anyone can help me?
> 
> Thx in advance.
> 


Re: KVM call minutes for May 18

2010-05-18 Thread Brian Jackson
On Tuesday 18 May 2010 09:29:25 Chris Wright wrote:
> 0.13 release
> - push out to July 1st
> - esp. important to solidify/crispen QMP
> - 1 rc to shake out brown paper bag bugs, then final release
> 
> block i/o performance (high CPU consumption)
> - memset can be removed (patch posted, queued by kwolf)
> - cpu_physical_memory_rw known, should gather data
> 
> block 1TB limit
> - sounds like integer issue
> - bug report needs to be updated (verify it's still happening in 0.12.4?)
> - should be able to easily reproduce (can use lvm to create sparse volume)
> 
> sourceforge bug tracker...
> - sucks


Agreed.


> - unclear if there's active triage


I do. I go in every couple of weeks and go through the last two pages of bugs 
or so.


> - anthony prefers the launchpad instance


I prefer one tracker that more than just I look at. If that's launchpad, I'm 
fine with that.

Avi/KVM devs, what are your feelings?


> - alex likes the sf email to list, wuld be good to keep that feature


It looks (at first glance) like we can still have this functionality. It's 
certainly available to individuals.


> - had to migrate existing bugs (be nice if we could stop sf from growing)


A lot of the existing bugs are irrelevant and/or woefully out of date. I've 
been hesitant to go back and mess with too many old bugs for fear of making 
too much noise that I know isn't going to do anything useful (i.e. marking the 
100 oldest bugs as Closed - Out Of Date)


> - need more people involved w/ bug work


And we need a better way for those of us who do to get hold of devs to look at 
things that are actually important to users.


> - possible bug-day before next release
>   - suggested June 1st


Personally (and in general for volunteer projects), weekends are better for 
bug days. That said, I realize that most of the developers for qemu/kvm do 
this for their day job.


> 
> 0.12.4 bugs
> - migration regression...follow-up on email, open a bug ;-)


Re: KVM call agenda for May 18

2010-05-18 Thread Brian Jackson
On Tuesday 18 May 2010 08:52:36 Anthony Liguori wrote:
> On 05/18/2010 01:59 AM, Brian Jackson wrote:
> > On Monday 17 May 2010 22:23:46 Chris Wright wrote:
> >> Please send in any agenda items you are interested in covering.
> >> 
> >> If we have a lack of agenda items I'll cancel the week's call.
> > 
> > Perceived long standing bugs that nobody seems to care about. There are a
> > few, one of which is the>  1TB [1] bug that has existed for 4+ months.
> 
> s/care about/know about/g
> 
> This should be filed in launchpad as a qemu bug and it should be tested
> against the latest git.  This bug sounds like we're using an int to
> represent sector offset somewhere but there's not enough info in the bug
> report to figure out for sure.  I just audited the virtio-blk -> raw ->
> aio=threads path and I don't see an obvious place that we're getting it
> wrong.
> 
> > And others.
> 
> Bugs that affect qemu should be filed in launchpad.  Launchpad has nice
> features like the able to mark bugs as affecting many users which help
> raise visibility.  I can't speak for the source forge tracker, but I do
> regular triage on launchpad for qemu bugs.


I wonder how everyone would feel about closing the kvm tracker to new 
submissions and moving everything over to launchpad?


> 
> Regards,
> 
> Anthony Liguori
> 
> > This can wait for a later call if necessary... not worth a call on its
> > own.
> > 
> > 
> > Etc:
> > [1]
> > http://sourceforge.net/tracker/?func=detail&aid=2933400&group_id=180599&;
> > atid=893831
> > 
> >> thanks,
> >> -chris


Re: KVM call agenda for May 18

2010-05-17 Thread Brian Jackson
On Monday 17 May 2010 22:23:46 Chris Wright wrote:
> Please send in any agenda items you are interested in covering.
> 
> If we have a lack of agenda items I'll cancel the week's call.


Perceived long-standing bugs that nobody seems to care about. There are a few, 
one of which is the >1TB [1] bug that has existed for 4+ months.

And others.

This can wait for a later call if necessary... not worth a call on its own.


Etc:
[1] 
http://sourceforge.net/tracker/?func=detail&aid=2933400&group_id=180599&atid=893831


> 
> thanks,
> -chris


Re: [PATCH RFC] virtio_blk: Use blk-iopoll for host->guest notify

2010-05-14 Thread Brian Jackson
On Friday, May 14, 2010 03:47:37 pm Stefan Hajnoczi wrote:
> This patch adds blk-iopoll interrupt mitigation to virtio-blk.  Instead
> of processing completed requests inside the virtqueue interrupt handler,
> a softirq is scheduled to process up to a maximum number of completed
> requests in one go.
> 
> If the number of complete requests exceeds the maximum number, then another
> softirq is scheduled to continue polling.  Otherwise the virtqueue
> interrupt is enabled again and we return to interrupt-driven mode.
> 
> The patch sets the maximum number of completed requests (aka budget, aka
> weight) to 4.  This is a low number but reflects the expensive context
> switch between guest and host virtio-blk emulation.
> 
> The blk-iopoll infrastructure is enabled system-wide by default:
> 
> kernel.blk_iopoll = 1
> 
> It can be disabled to always use interrupt-driven mode (useful for
> comparison):
> 
> kernel.blk_iopoll = 0


Any preliminary numbers? Latency, throughput, CPU use? What about comparing 
different "weights"?


> 
> Signed-off-by: Stefan Hajnoczi 
> ---
> No performance figures yet.
> 
>  drivers/block/virtio_blk.c |   71
> ++- 1 files changed, 62
> insertions(+), 9 deletions(-)
> 
> diff --git a/drivers/block/virtio_blk.c b/drivers/block/virtio_blk.c
> index 2138a7a..1523895 100644
> --- a/drivers/block/virtio_blk.c
> +++ b/drivers/block/virtio_blk.c
> @@ -6,6 +6,7 @@
>  #include 
>  #include 
>  #include 
> +#include 
> 
>  #define PART_BITS 4
> 
> @@ -26,6 +27,9 @@ struct virtio_blk
> 
>   mempool_t *pool;
> 
> + /* Host->guest notify mitigation */
> + struct blk_iopoll iopoll;
> +
>   /* What host tells us, plus 2 for header & tailer. */
>   unsigned int sg_elems;
> 
> @@ -42,16 +46,18 @@ struct virtblk_req
>   u8 status;
>  };
> 
> -static void blk_done(struct virtqueue *vq)
> +/* Assumes vblk->lock held */
> +static int __virtblk_end_requests(struct virtio_blk *vblk, int weight)
>  {
> - struct virtio_blk *vblk = vq->vdev->priv;
>   struct virtblk_req *vbr;
>   unsigned int len;
> - unsigned long flags;
> + int error;
> + int work = 0;
> 
> - spin_lock_irqsave(&vblk->lock, flags);
> - while ((vbr = vblk->vq->vq_ops->get_buf(vblk->vq, &len)) != NULL) {
> - int error;
> + while (!weight || work < weight) {
> + vbr = vblk->vq->vq_ops->get_buf(vblk->vq, &len);
> + if (!vbr)
> + break;
> 
>   switch (vbr->status) {
>   case VIRTIO_BLK_S_OK:
> @@ -74,10 +80,53 @@ static void blk_done(struct virtqueue *vq)
>   __blk_end_request_all(vbr->req, error);
>   list_del(&vbr->list);
>   mempool_free(vbr, vblk->pool);
> + work++;
>   }
> +
>   /* In case queue is stopped waiting for more buffers. */
>   blk_start_queue(vblk->disk->queue);
> + return work;
> +}
> +
> +static int virtblk_iopoll(struct blk_iopoll *iopoll, int weight)
> +{
> + struct virtio_blk *vblk =
> + container_of(iopoll, struct virtio_blk, iopoll);
> + unsigned long flags;
> + int work;
> +
> + spin_lock_irqsave(&vblk->lock, flags);
> +
> + work = __virtblk_end_requests(vblk, weight);
> + if (work < weight) {
> + /* Keep polling if there are pending requests. */
> + if (vblk->vq->vq_ops->enable_cb(vblk->vq))
> + __blk_iopoll_complete(&vblk->iopoll);
> + else
> + vblk->vq->vq_ops->disable_cb(vblk->vq);
> + }
> +
>   spin_unlock_irqrestore(&vblk->lock, flags);
> + return work;
> +}
> +
> +static void blk_done(struct virtqueue *vq)
> +{
> + struct virtio_blk *vblk = vq->vdev->priv;
> + unsigned long flags;
> +
> + if (blk_iopoll_enabled) {
> + if (!blk_iopoll_sched_prep(&vblk->iopoll)) {
> + spin_lock_irqsave(&vblk->lock, flags);
> + vblk->vq->vq_ops->disable_cb(vblk->vq);
> + spin_unlock_irqrestore(&vblk->lock, flags);
> + blk_iopoll_sched(&vblk->iopoll);
> + }
> + } else {
> + spin_lock_irqsave(&vblk->lock, flags);
> + __virtblk_end_requests(vblk, 0);
> + spin_unlock_irqrestore(&vblk->lock, flags);
> + }
>  }
> 
>  static bool do_req(struct request_queue *q, struct virtio_blk *vblk,
> @@ -289,11 +338,14 @@ static int __devinit virtblk_probe(struct
> virtio_device *vdev) goto out_free_vq;
>   }
> 
> + blk_iopoll_init(&vblk->iopoll, 4 /* budget */, virtblk_iopoll);
> + blk_iopoll_enable(&vblk->iopoll);
> +
>   /* FIXME: How many partitions?  How long is a piece of string? */
>   vblk->disk = alloc_disk(1 << PART_BITS);
>   if (!vblk->disk) {
>   err = -ENOMEM;
> - goto out_mempool;
> + goto out_iopoll;
>   }
> 
>   q = vblk->disk->queue = blk_init_queue(

Re: virtio-win problem

2010-05-06 Thread Brian Jackson
On Thursday, May 06, 2010 04:05:17 pm Jernej Simončič wrote:
> On Thursday, May 6, 2010, 22:36:02, Brian Jackson wrote:
> > What about the XP32 drivers from:
> > http://theiggy.com/tmp/virtio-20091208.zip
> 
> This is what I currently use on XP, and it works fine (I think I
> mentioned this on IRC - my nickname's ender` there).

Ahh, didn't make the connection. I'm still working on new drivers that will 
hopefully fix this.


Re: virtio-win problem

2010-05-06 Thread Brian Jackson
On Thursday, May 06, 2010 03:11:00 pm Jernej Simončič wrote:
> On Thursday, May 6, 2010, 21:59:21, Brian Jackson wrote:
> > http://theiggy.com/tmp/virtio-20100228.zip
> > These are not guaranteed to work and they will probably kill kittens.
> > That said, I've had luck with them and had only a few reports of things
> > not working (mostly with the balloon drivers).
> 
> XP32 drivers from this pack didn't work for me, but Vista64 work fine.

What about the XP32 drivers from:

http://theiggy.com/tmp/virtio-20091208.zip


Re: virtio-win problem

2010-05-06 Thread Brian Jackson
On Thursday, May 06, 2010 07:10:07 am Riccardo Veraldi wrote:
> Hello,
> if I install virtio-win drivers on windows 2008 Server R2, I have the
> problem of signed device drivers.
> I Can install the drivers but Windows 2008 server refuses to use them
> unless I start
> the machine pressing F8 every time at each reboot bypassing the checking
> of signed certified drivers, and this is annoying,
> since I cannot reboot the virtual machine automatically.


I have some compiled and signed here:

http://theiggy.com/tmp/virtio-20100228.zip

These are not guaranteed to work and they will probably kill kittens. That 
said, I've had luck with them and had only a few reports of things not working 
(mostly with the balloon drivers).



> 
> Anyone solved this issue ?
> thanks
> 
> Rick
> 


Re: Can I simulate a virtual Dual-Head Graphiccard?

2010-04-28 Thread Brian Jackson
On Wednesday, April 28, 2010 03:08:24 pm Axel Kittenberger wrote:
> Hello,
> 
> This is a question I was not able to answer with a search. I've been
> using kvm now quite successfully as server side solution. Now I want to
> use it on a particular desktop to have a Windows 7 Guest on a native
> Linux system. Well this desktop has two Screens, and I'm sure its
> expected to have the Guest also on both screens.
> 
> Supposevly I could just simulate a very wide Screen and have the host
> split it (either SDL or VNC). However, this is not quite the same, as
> the guest will think exactly that, 1 wide screen. Meaning it will put
> all the messageboxes exactly in the middle between the two screens, have
> the startbar spawn on both screens and what not. So two screens are
> handled a tad differently than one wide.
> 
> Is this possible with kvm?
> Either simulate a dual head mapping to one wide SDL/VNC display. Or
> having two SDL/VNC displays?


No. It was brought up before on the qemu list I believe. I think the gist was 
that qemu didn't support more than one vga card.


> 
> Kind regards,
> Axel Kittenberger
> 
> 


Re: Jumbo frames with virtio net

2010-04-27 Thread Brian Jackson
On Monday, April 26, 2010 09:23:49 am carlopmart wrote:
> Hi all
> 
>   Is it possible to configure jumbo frames (mtu=9000) on a kvm guest
> using virtio net drivers?


Yes. The same rules apply as to physical computers: every step of the path has 
to have the same MTU (i.e. the bridge, tap, host interface, switches, etc.)


> 
> Thanks.


Re: RHEL5.5, 32-bit VM repeatedly locks up due to kvmclock

2010-04-23 Thread Brian Jackson
On Friday 23 April 2010 12:08:22 David S. Ahern wrote:
> After a few days of debugging I think kvmclock is the source of lockups
> for a RHEL5.5-based VM. The VM works fine on one host, but repeatedly
> locks up on another.
> 
> Server 1 - VM locks up repeatedly
> -- DL580 G5
> -- 4 quad-core X7350 processors at 2.93GHz
> -- 48GB RAM
> 
> Server 2 - VM works just fine
> -- DL380 G6
> -- 2 quad-core E5540 processors at 2.53GHz
> -- 24GB RAM
> 
> Both host servers are running Fedora Core 12, 2.6.32.11-99.fc12.x86_64
> kernel. I have tried various versions of qemu-kvm -- the version in
> FC-12 and the version for FC-12 in virt-preview. In both cases the
> qemu-kvm command line is identical.
> 
> VM
> - RHEL5.5, PAE kernel (also tried standard 32-bit)
> - 2 vcpus
> - 3GB RAM
> - virtio network and disk
> 
> When the VM locks up both vcpu threads are spinning at 100%. Changing
> the clocksource to jiffies appears to have addressed the problem.


Does changing the guest to -smp 1 help?


> 
> David


Re: [PATCH] block: Free iovec arrays allocated by multiwrite_merge()

2010-04-21 Thread Brian Jackson
On Wednesday 21 April 2010 13:35:36 Ryan Harper wrote:
> * Stefan Hajnoczi  [2010-04-21 13:27]:
> > A new iovec array is allocated when creating a merged write request.
> > This patch ensures that the iovec array is deleted in addition to its
> > qiov owner.
> 
> Nice catch.  Send this to qemu-devel and Avi and merge into qemu-kvm
> once it's commited there.


And tag for the stable release that should be coming soon?


> 
> > Signed-off-by: Stefan Hajnoczi 
> > ---
> >  block.c |3 +++
> >  1 files changed, 3 insertions(+), 0 deletions(-)
> >
> > diff --git a/block.c b/block.c
> > index e891544..2d31474 100644
> > --- a/block.c
> > +++ b/block.c
> > @@ -1731,6 +1731,9 @@ static void multiwrite_user_cb(MultiwriteCB *mcb)
> >
> >  for (i = 0; i < mcb->num_callbacks; i++) {
> >  mcb->callbacks[i].cb(mcb->callbacks[i].opaque, mcb->error);
> > +if (mcb->callbacks[i].free_qiov) {
> > +qemu_iovec_destroy(mcb->callbacks[i].free_qiov);
> > +}
> >  qemu_free(mcb->callbacks[i].free_qiov);
> >  qemu_vfree(mcb->callbacks[i].free_buf);
> >  }
> 


Re: [Qemu-devel] KVM call agenda for Apr 20

2010-04-19 Thread Brian Jackson
On Monday 19 April 2010 18:30:44 Chris Wright wrote:
> Please send in any agenda items you are interested in covering.


0.12.4?


> 
> thanks,
> -chris
> 


Re: KVM warning about uncertified CPU for SMP for AMD model 2, stepping3

2010-03-30 Thread Brian Jackson
On Tuesday 30 March 2010 06:03:02 pm Jiri Kosina wrote:
> Hi,
> 
> booting 32bit guest on 32bit host on AMD system gives me the following
> warning when KVM is instructed to boot as SMP:


This has been discussed before (fairly recently). Subject was "tainted Linux 
kernel in default SMP QEMU/KVM guests". A few solutions were mentioned, so I 
won't bother boring everyone with repeating them.



> 
> 
> 
> CPU0: AMD QEMU Virtual CPU version 0.9.1 stepping 03
> Booting Node   0, Processors  #1
> Initializing CPU#1
> Leaving ESR disabled.
> Mapping cpu 1 to node 0
> [ cut here ]
> WARNING: at linux-2.6.32/arch/x86/kernel/cpu/amd.c:187
> init_amd_k7+0x178/0x187() Hardware name:
> WARNING: This combination of AMD processors is not suitable for SMP.
> Modules linked in:
> Pid: 0, comm: swapper Not tainted 2.6.32.9-0.5-pae #1
> Call Trace:
>  [] try_stack_unwind+0x1b1/0x1f0
>  [] dump_trace+0x3f/0xe0
>  [] show_trace_log_lvl+0x4b/0x60
>  [] show_trace+0x18/0x20
>  [] dump_stack+0x6d/0x74
>  [] warn_slowpath_common+0x6f/0xd0
>  [] warn_slowpath_fmt+0x2b/0x30
>  [] init_amd_k7+0x178/0x187
>  [] init_amd+0x138/0x279
>  [] identify_cpu+0xc2/0x223
>  [] identify_secondary_cpu+0xc/0x1a
>  [] smp_callin+0xd4/0x1a1
>  [] start_secondary+0xa/0xe7
> 
> 
> 
> The virtual CPU identifies itself as cpu family 6, model 2, stepping 3 in
> /proc/cpuinfo.
> 
> Model 2 is indeed not handled by amd_k7_smp_check() and thus this warning
> is spit out.
> 
> Is that correct? Model 2 refers to Pluto/Orion (K75) if I remember
> correctly, right? That one is not oficially certified for SMP by AMD?
> 
> If it is not, maybe KVM should better emulate different CPU for
> SMP-enabled configurations, right?
> On the other hand, if it is certified (I have no idea), amd_k7_smp_check()
> should handle this model properly.
> 
> Thanks,


Re: usb_linux_update_endp_table: No such file or directory

2010-03-28 Thread Brian Jackson
On Sunday 28 March 2010 16:23:24 scar wrote:
> Brian Jackson @ 10/18/2008 10:23 AM:
> > On Oct 18, 2008, at 9:57 AM, Xavier Gnata  wrote:
> >> Hi,
> >>
> >> I'm trying to plug an Ipod on a winXP guest.
> >> The host is a 2.6.27 and I'm using kvm-77.
> >>
> >> I get this (as root to avoid stupid +w problems):
> >>
> >> husb: open device 7.7
> >> husb: config #1 need -1
> >> husb: 1 interfaces claimed for configuration 1
> >> husb: grabbed usb device 7.7
> >> usb_linux_update_endp_table: No such file or directory
> >> Warning: could not add USB device host:05ac:1262
> >>
> >> kvm is supposed to work with every usb device, isn' it?
> >> This one is nothing else but a usb_mass_storage device so I cannot see
> >> where the problem is.
> >
> > A lot of newer iPods and iPhones require usb2 which qemu/KVM does not
> > emulate. You also want to make sure nothing in the host is claiming it
> > before the guest does.
> 
> apparently this is still the case?  or is there a way to turn on usb2?
> ;)  i am on ubuntu 9.04/linux 2.6.28-18-generic and kvm 84 and getting
> the same error.
> 
> thanks
> 

USB2/EHCI is still not supported (certainly not in something as old as 
kvm-84). There was some work going on recently to support it, but I don't know 
how far they have gotten.



Re: PCI passthrough resource remapping

2010-03-25 Thread Brian Jackson
It's only in qemu-kvm.git. Maybe it should go into qemu-kvm-0.12.4, if there 
is one.


Sent from my iPhone

On Mar 25, 2010, at 9:37 PM, Kenni Lund  wrote:


2010/1/9 Alexander Graf :


On 09.01.2010, at 03:45, Ryan C. Underwood wrote:



I have a multifunction PCI device that I'd like to pass through to KVM.
In order to do that, I'm reading that the PCI memory region must be
4K-page aligned and the PCI memory resources themselves must also be
exact multiples of 4K pages.

I have added the following on my kernel command line:
reassign_resources reassigndev=08:09.0,08:09.1,08:09.2,08:09.3,08:09.4

But I don't know if it has any effect.  The resources are still not
sized in 4K pages.  Also, this seems to screw up the last device.


I submitted a patch to qemu-kvm recently that got rid of that
limitation. Please try out if the current git head works for you.


Alex
--


I just upgraded to kernel 2.6.32.10 with qemu-kvm  0.12.3 and I still
get the following error when trying to pass through a dedicated PCI
USB card:

"Unable to assign device: PCI region 0 at address 0xe9403000 has size
0x100,  which is not a multiple of 4K
Error initializing device pci-assign"

Didn't the above patch make it into qemu-kvm? I don't know why, but I
was under the impression that this was fixed when I upgraded to
qemu-kvm 0.12.3.

Thanks

Best Regards
Kenni


Re: linux-aio usable?

2010-03-08 Thread Brian Jackson
On Monday 08 March 2010 03:27:36 pm Nikola Ciprich wrote:
> > It's faster.
> 
> Hi Avi,
> Could You give some rough estimate on how much faster?
> I'm stuck with glibc-2.5 now, but I'm always eager to improve performance,
> so I wonder if it would make sense to either port eventfd + aio stuff, or
> switch to glibc-2.8 for me...


I saw approx. 10% improvement in sequential i/o. Random i/o was only 
marginally faster in our setup. We generally have problems with random i/o 
here... Something to do with our setup.


Re: About creating VM

2010-03-02 Thread Brian Jackson



Sent from my iPhone

On Mar 2, 2010, at 7:19 PM, sati...@pacific.net.hk wrote:


Hi folks,


Host - Ubuntu 9.10 64bit
Virtualizer - KVM


I followed;
Virtualization With KVM On Ubuntu 9.10
http://www.howtoforge.com/virtualiza...on-ubuntu-9.10


to install this Virtual Machine. The steps worked without problem, but I have  
the following points that can't be resolved:



1)
I can't find the option on vmbuilder for selecting the packages to  
install. (no GUI)



Vmbuilder support should be sought elsewhere.




2)
If the OS to be installed is an iso image what command shall I run,  
especially Windows?



I usually change -m and -vga as a bare minimum




3)
I tried to run "rdesktop" to connect the VM without success. I have  
no idea what shall I replace for "abcd", e.g


$ rdesktop -a 16 -N 192.168.0.200:abcd

192.168.0.200 is the host ip address.



Usually you shouldn't have to change the port, so leave off the : and  
everything after





4)
Finally I install virt-manager. It works to connect the VM



virt-manager support should also be sought elsewhere.





Please help. TIA


B.R.
Stephen L



Re: Windows guest freezes with black screen

2010-03-02 Thread Brian Jackson
On Tuesday 02 March 2010 10:33:19 am Harald Braumann wrote:
> Hi,
> 
> quite often my Windows guest freezes. The window is just black
> and it uses 100% CPU. I don't think it's a guest problem, because
> I have kernel debugging enabled and a debugger running in another
> VM is connected through a serial line. It doesn't show any problems
> and when the guest freezes the debugger connection is dead as well.
> 
> Host
> 
> OS: Debian Unstable
> CPU: AMD Athlon 64 X2
> Kernel: Linux 2.6.33
> Arch: amd64
> KVM version: 0.12.3
> Command line:
> kvm \
>   -monitor unix:/tmp/.kvm-1000/kvm.LuHAkMFOVV/monitor,server,nowait \
>   -pidfile /tmp/.kvm-1000/kvm.LuHAkMFOVV/pid \
>   -daemonize \
>   -serial unix:/tmp/.kvm-1000/kvm.LuHAkMFOVV/console,server,nowait \
>   -drive file=hdd.vdi,if=virtio,boot=on \
>   -boot c \
>   -m 1024M \
>   -sdl \
>   -net nic,model=virtio,macaddr=52:54:00:12:34:57 \
>   -net user \
>   -usbdevice tablet \
>   -vga vmware \
>   -serial unix:/tmp/kvm-ttyS
> 
> Guest:
> --
> OS: Windows XP SP3
> Arch: 32bit
> Video: SVGA-II (vmware video driver)


Have you tried to reproduce without vmware vga? That support was developed 
against the linux drivers (and possibly some loose specifications) and has had 
known issues in the past. It would at least be a data point of where to look.


> 
> Cheers,
> harry


Re: windows server 2008 hyper-v on kvm

2010-02-15 Thread Brian Jackson
On Monday 15 February 2010 12:59:54 Evan Ingram wrote:
> On 15/02/2010 18:39, Gleb Natapov wrote:
> > I guess original poster needs to clarify what he actually means :)
> 
> maybe i'm getting the words wrong.
> 
> ive got an ubuntu 9.10 server with kvm installed on it. ive then
> installed windows server 2008 into kvm. i now want to install hyper-v in
> windows 2008.

So you want to run Hyper-V guests inside of a KVM guest, i.e. two levels of 
virtualization? That's called nested virtualization. It currently only works 
with AMD CPUs, and even then it's pretty slow unless you've got NPT. It's also 
not very well tested (last I heard the only thing that worked reliably was KVM 
in KVM).


> 


Re: KVM RAM limitation

2010-02-04 Thread Brian Jackson
On Thursday 04 February 2010 11:47:13 am Daniel Bareiro wrote:
> Hi, Brian.
> 
> On Wednesday, 03 February 2010 16:44:28 -0600,
> 
> Brian Jackson wrote:
> > > Anthony Liguori wrote:
> > > >>> Are you sure you enabled KVM? Are you sure you are using the KVM
> > > >>> binary and not some QEMU binary that's sitting around. This is one
> > > >>> of those situations where the KVM command you are running might
> > > >>> help.  Also the same binary you are running's version ($QEMU_BIN -h
> > > >>> 
> > > >>> | head -n1)
> > > >> 
> > > >> wilson:/usr/local/qemu-kvm/bin# ./qemu-system-x86_64 -h | head -n1
> > > >> QEMU PC emulator version 0.12.2 (qemu-kvm-0.12.2), Copyright (c)
> > > >> 2003-2008 Fabrice Bellard
> > > >> 
> > > >> 
> > > >> The procedure that I used to compile qemu-kvm is the same of always:
> > > >> to download qemu-kvm-0.12.2, to install the packages (Debian)
> > > >> zlib1g-dev and libpci-dev, and to compile of the following way:
> > > >> 
> > > >> # cd qemu-kvm-0.12.2
> > > >> # ./configure --prefix=/usr/local/qemu-kvm
> > > >> # make
> > > >> # make install
> > > >> 
> > > >> Until the moment I never got to use qemu-kvm with VMs of more than
> > > >> 2048 MB. In an installation that I have with KVM-88 and kernel
> > > >> x86_64 I don't have this problem.
> > > > 
> > > > QEMU and KVM only support 2GB of memory on a 32-bit host.
> > > > 
> > > > Both need to create a userspace mapping of the guests memory.  In a
> > > > 32-bit environment, you only have enough usable address space in a
> > > > process to create a 2GB region.
> > > 
> > > But, according to what I read in the link [1] that commented, just by
> > > to have a x86_64 kernel would have to be sufficient to serve more than
> > > 2047 MB of RAM.
> > 
> > The kvm userspace would also have to be compiled as a 64bit binary.
> > Possibly statically compiled somewhere else (if that's even possible)
> > or with a 64bit chroot.
> 
> Hmmm... and is there some way to compile qemu-kvm as a 64-bit binary on a
> 32-bit operating system userspace?

I covered two options for doing that in my last email. You either build a 
static 64bit build on a 64bit host, or you install a 64bit chroot and 
compile/run from there.



> 
> I tried with ARCH=x86_64 with make but when using this I obtain several
> messages of the type "cast to/from pointer from/to integer of different
> size".
> 
> Thanks for your reply.
> 
> Regards,
> Daniel


Re: KVM RAM limitation

2010-02-03 Thread Brian Jackson
On Wednesday 03 February 2010 02:06:53 pm Daniel Bareiro wrote:
> Hi, Anthony.
> 
> On Wednesday, 03 February 2010 13:20:12 -0600,
> 
> Anthony Liguori wrote:
> >>> Are you sure you enabled KVM? Are you sure you are using the KVM
> >>> binary and not some QEMU binary that's sitting around. This is one
> >>> of those situations where the KVM command you are running might
> >>> help.  Also the same binary you are running's version ($QEMU_BIN -h
> >>> 
> >>> | head -n1)
> >> 
> >> wilson:/usr/local/qemu-kvm/bin# ./qemu-system-x86_64 -h | head -n1
> >> QEMU PC emulator version 0.12.2 (qemu-kvm-0.12.2), Copyright (c)
> >> 2003-2008 Fabrice Bellard
> >> 
> >> 
> >> The procedure that I used to compile qemu-kvm is the same of always:
> >> to download qemu-kvm-0.12.2, to install the packages (Debian)
> >> zlib1g-dev and libpci-dev, and to compile of the following way:
> >> 
> >> # cd qemu-kvm-0.12.2
> >> # ./configure --prefix=/usr/local/qemu-kvm
> >> # make
> >> # make install
> >> 
> >> Until the moment I never got to use qemu-kvm with VMs of more than
> >> 2048 MB. In an installation that I have with KVM-88 and kernel x86_64
> >> I don't have this problem.
> > 
> > QEMU and KVM only support 2GB of memory on a 32-bit host.
> > 
> > Both need to create a userspace mapping of the guests memory.  In a
> > 32-bit environment, you only have enough usable address space in a
> > process to create a 2GB region.
> 
> But, according to what I read in the link [1] that commented, just by to
> have a x86_64 kernel would have to be sufficient to serve more than 2047
> MB of RAM.
> 

The KVM userspace would also have to be compiled as a 64-bit binary, possibly 
statically compiled somewhere else (if that's even possible) or in a 64-bit 
chroot.
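
As a rough illustration of why a 32-bit userspace caps guest RAM (a sketch; the `file` check and the qemu path in the comment are assumptions, adjust for your install):

```shell
# A 32-bit process has 2^32 bytes of virtual address space, and on Linux
# userspace typically gets only ~3 GB of it; QEMU must map all guest RAM
# into that space alongside its own code and heap, hence the ~2 GB cap.
total_mb=$(( (1 << 32) / 1024 / 1024 ))
echo "raw 32-bit address space: ${total_mb} MB"   # 4096 MB

# To confirm a qemu binary is built 64-bit, something like this works
# (path is an example):
#   file /usr/local/qemu-kvm/bin/qemu-system-x86_64 | grep 'ELF 64-bit'
```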



> Regards,
> Daniel
> 
> [1]
> https://help.ubuntu.com/community/KVM/Installation#Use%20a%2064%20bit%20ke
> rnel%20if%20possible


Re: KVM RAM limitation

2010-02-03 Thread Brian Jackson
On Wednesday 03 February 2010 09:55:41 am Daniel Bareiro wrote:
> Hi all!
> 
> I'm trying to boot a VM with 2048 MB in a VMHost with Linux 2.6.32.6 and
> qemu-kvm-0.12.2, but when doing it, I obtain it the following message:
> 
> qemu: at most 2047 MB RAM can be simulated.

Are you sure you enabled KVM? Are you sure you are using the KVM binary and 
not some QEMU binary that's sitting around? This is one of those situations 
where posting the KVM command you are running might help, along with the 
version of the binary you are running ($QEMU_BIN -h | head -n1).
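
A couple of quick sanity checks along those lines (a sketch; the device and module names are the standard ones, not taken from this thread):

```shell
# KVM acceleration requires /dev/kvm; if it's missing, the kvm_intel or
# kvm_amd module probably isn't loaded.
if [ -e /dev/kvm ]; then
    echo "KVM available"
else
    echo "no /dev/kvm: plain QEMU emulation only"
fi

# And verify which binary you're actually launching (example path):
#   /usr/local/qemu-kvm/bin/qemu-system-x86_64 -h | head -n1
```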



> 
> This happened to me previously if in the VMHost I used kernel that is
> not x86_64 [1], but it is not the case:
> 
> # uname -a
> Linux wilson 2.6.32.6-dgb #1 SMP Mon Feb 1 17:10:30 ART 2010 x86_64
> GNU/Linux
> 
> 
> Which can be the problem?
> 
> Thanks in advance for your replies.
> 
> Regards,
> Daniel
> 
> [1]
> https://help.ubuntu.com/community/KVM/Installation#Use%20a%2064%20bit%20ke
> rnel%20if%20possible


Re: No sound

2010-02-02 Thread Brian Jackson
This and any other libvirt/virt-manager/etc. questions should be addressed to 
the proper channels.

http://libvirt.org/contact.html

Maybe that helps.




On Tuesday 02 February 2010 02:10:09 am sati...@pacific.net.hk wrote:
> Hi folks,
> 
> Host - Debian 5.0
> KVM
> VM - Ubuntu 9.10
> 
> No sound on playing youtube.  Volume on host and VM has been turned to max,
> 
> Virtual Machine Manager
> Edit -> Preferences
> Install Audio Device [check] Local VM
> 
> Please help.  TIA
> 
> B.R.
> Stephen L
> 
> 


Re: KVM problems with Xeon L5530

2010-01-27 Thread Brian Jackson
On Wednesday 27 January 2010 05:37:14 pm Matteo Ghezzi wrote:
> Hi!
> 
> I'm a long time KVM user, but I've encountered a problem that I couldn't
> solve. I've switched my good old Core2 Quad with gentoo (2.6.27 kernel)
> for a Dual Xeon L5530 with Arch (2.6.32 kernel).
> I've tried starting the old vmachines on the new hardware but if I
> enable kvm acceleration in qemu I got a black screen via vnc, and the
> logs are filled with this message:

What version of qemu-kvm?

There was a similar thread a while back, maybe you could try some of the 
suggestions and/or info gathering tips from it.

http://www.mail-archive.com/kvm@vger.kernel.org/msg24304.html

> 
> handle_exception: unexpected, vectoring info 0x800d intr info
> 0x8b0d
> 
> I've tried creating new virtual machines directly on the new hardware
> but with the same result.
> I've tried any combination of virtualization options in the bios,
> tested any bios revision for my motherboard (Asus Z8DNA-D6), checked
> the ram on another system... all in vain.
> 
> If I've to provide more info please let me know.
> Thanks for your help.


Re: Can KVM PassThrough specifically my PCI cards to fully-virt'd KVM Guests with my CPU? Yet?

2010-01-25 Thread Brian Jackson
On Tuesday 26 January 2010 00:22:25 Ben DJ wrote:
> On Mon, Jan 25, 2010 at 9:44 PM, Brian Jackson  wrote:
> > You do need iommu support in your system. Unfortunately there are very
> > few AMD motherboards that have an iommu. Only 1 server level board I know
> > of has one and is close to hitting the markets. So chances are you don't
> > have one.
> 
> Well, rats.  Thx for a clear answer, though!
> 
> And, there's no iommu emulation in software for KVM that'd do it?

Nope. When support was being developed there was, but it was never merged, 
and I highly doubt the patches would still apply at this point with all the 
code churn qemu has had.


> 
> BenDJ


Re: Can KVM PassThrough specifically my PCI cards to fully-virt'd KVM Guests with my CPU? Yet?

2010-01-25 Thread Brian Jackson
On Monday 25 January 2010 21:11:12 Ben DJ wrote:
> Hi,
> 
> I have a box with an AMD Phenom II X4 920 CPU
> 
> Reading http://www.linux-kvm.org/page/FAQ#What_do_I_need_to_use_KVM.3F,
> I've verified with 'cat /proc/cpuinfo' that the CPU has the AMD-V
> "svm" extension.
> 
> I'm specifically interested in whether or not this CPU's capabilities
> will allow PCI PassThrough of hardware from the Host to the Guest.
> I've read the KVM 'ToDo' & 'FAQ', and tbh am unclear if everything I
> need is actually in KVM already.  It's just that I don't get all the
> terminology yet.
> 
> I've read that VideoCards are still a no-go. NP for me, atm.
> 
> The cards I'm interested in are:
> 
>  -- a SiliconImage 3124 (sil24 module) based SATA card -- RAID
> capable, but I'm only using it to attach drives, and doing the RAID
> with Linux's 'md', and,
>  -- A number of Gigabit NICs, including an Intel e1000 card.
> 
> I'm thoroughly confused as to whether or not I can PassThrough these
> cards using this CPU, &/or if I need AMD-Vi/IOMMU in hardware.  I
> can't figure out if I do or don't ... AMD's "product sheets" have me
> baffled -- and I haven't figured out if /proc/cpuinfo etc 'shows' me
> definitively.

You do need IOMMU support in your system. Unfortunately, very few AMD 
motherboards have an IOMMU. Only one server-level board I know of has one, 
and it is close to hitting the market. So chances are you don't have one.

> 
> So, let's just ask this:  *can* I do hardware PassThrough of these PCI
> cards from KVM Host to a fully virtualized KVM guest?  and, if 'yes',
> is there a specific/minimum kernel version I need?
> 
> Thanks!
> 
> BenDJ


Re: PCIe device pass-through - No IOMMU, Failed to deassign device error

2010-01-23 Thread Brian Jackson
On Saturday 23 January 2010 05:20:49 Yigal Korman wrote:
> Hi,
> I'm trying to pass a second video card to a Windows 7 virtual machine
> with KVM, and I get the following error:


KVM doesn't support assigning graphics cards to VMs yet. There are people 
working on it afaik, but I don't know the progress.


> "
> r...@ubuntu-desktop:~# kvm -cpu qemu64 -hda /dev/sdb -cdrom /dev/cdrom
> -boot order=dc -m 2000 -usb -name Win7x64 -enable-kvm -device
> pci-assign,host=80:00.0
> No IOMMU found.  Unable to assign device "(null)"
> Failed to deassign device "(null)" : Invalid argument
> Error initializing device pci-assign
> "
> Now it look like I don't have VT-d, but I do, here is my cpuinfo:
> "
> processor : 0
> vendor_id : GenuineIntel
> cpu family : 6
> model : 23
> model name : Intel(R) Xeon(R) CPU   E5440  @ 2.83GHz
> stepping : 10
> cpu MHz : 1998.000
> cache size : 6144 KB
> physical id : 0
> siblings : 4
> core id : 0
> cpu cores : 4
> apicid : 0
> initial apicid : 0
> fpu : yes
> fpu_exception : yes
> cpuid level : 13
> wp : yes
> flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov
> pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx
> lm constant_tsc arch_perfmon pebs bts rep_good aperfmperf pni dtes64
> monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr pdcm dca sse4_1 xsave
> lahf_lm tpr_shadow vnmi flexpriority
> bogomips : 5667.49
> clflush size : 64
> cache_alignment : 64
> address sizes : 38 bits physical, 48 bits virtual
> power management:
> " ... this continues until processor reaches 7 (dual Xeon quad core)
> I've enabled vt-d in the BIOS, and added this parameter to the kernel:
> "intel_iommu=on"
> I've ran these to unbind the card from the host OS:
> "
> modprobe pci_stub
> echo "10de 040f" > /sys/bus/pci/drivers/pci-stub/new_id
> echo :80:00.0 > /sys/bus/pci/devices/\:80\:00.0/driver/unbind
> echo :80:00.0 > /sys/bus/pci/drivers/pci-stub/bind
> "
> this is the relevant part from lspci -vv (I have two video cards, one
> I'd like for the host and one for the guest):
> "
> 60:00.0 VGA compatible controller: nVidia Corporation G84 [Quadro FX
> 1700] (rev a1)
> Subsystem: nVidia Corporation Device 049a
> Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr-
> Stepping- SERR- FastB2B- DisINTx-
> Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort-
> SERR-  Latency: 0
> Interrupt: pin A routed to IRQ 28
> Region 0: Memory at f600 (32-bit, non-prefetchable) [size=16M]
> Region 1: Memory at a000 (64-bit, prefetchable) [size=512M]
> Region 3: Memory at f400 (64-bit, non-prefetchable) [size=32M]
> Region 5: I/O ports at 2000 [size=128]
> Capabilities: [60] Power Management version 2
> Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
> Status: D0 PME-Enable- DSel=0 DScale=0 PME-
> Capabilities: [68] Message Signalled Interrupts: Mask- 64bit+ Queue=0/0
>  Enable- Address:   Data: 
> Capabilities: [78] Express (v2) Endpoint, MSI 00
> DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s <512ns, L1 <4us
> ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset-
> DevCtl: Report errors: Correctable- Non-Fatal- Fatal+ Unsupported-
> RlxdOrd+ ExtTag+ PhantFunc- AuxPwr- NoSnoop+
> MaxPayload 128 bytes, MaxReadReq 512 bytes
> DevSta: CorrErr- UncorrErr- FatalErr- UnsuppReq- AuxPwr- TransPend-
> LnkCap: Port #8, Speed 5GT/s, Width x16, ASPM L0s L1, Latency L0 <512ns, L1
>  <4us ClockPM- Suprise- LLActRep- BwNot-
> LnkCtl: ASPM Disabled; RCB 128 bytes Disabled- Retrain- CommClk+
> ExtSynch- ClockPM- AutWidDis- BWInt- AutBWInt-
> LnkSta: Speed 5GT/s, Width x16, TrErr- Train- SlotClk+ DLActive-
> BWMgmt- ABWMgmt-
> Capabilities: [100] Virtual Channel 
> Capabilities: [128] Power Budgeting 
> Capabilities: [600] Vendor Specific Information 
> Kernel driver in use: nvidia
> Kernel modules: nvidia, nvidiafb
> 80:00.0 VGA compatible controller: nVidia Corporation G84 [Quadro FX
> 1700] (rev a1)
> Subsystem: nVidia Corporation Device 049a
> Control: I/O+ Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr-
> Stepping- SERR- FastB2B- DisINTx-
> Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort-
> SERR-  Latency: 0
> Interrupt: pin A routed to IRQ 24
> Region 0: Memory at f200 (32-bit, non-prefetchable) [size=16M]
> Region 1: Memory at c000 (64-bit, prefetchable) [size=512M]
> Region 3: Memory at f000 (64-bit, non-prefetchable) [size=32M]
> Region 5: I/O ports at 1000 [size=128]
> Capabilities: [60] Power Management version 2
> Flags: PMEClk- DSI- D1- D2- AuxCurrent=0mA PME(D0-,D1-,D2-,D3hot-,D3cold-)
> Status: D0 PME-Enable- DSel=0 DScale=0 PME-
> Capabilities: [68] Message Signalled Interrupts: Mask- 64bit+ Queue=0/0
>  Enable- Address:   Data: 
> Capabilities: [78] Express (v2) Endpoint, MSI 00
> DevCap: MaxPayload 128 bytes, PhantFunc 0, Latency L0s <512ns, L1 <4us
> ExtTag+ AttnBtn- AttnInd- PwrInd- RBE+ FLReset-
> DevCtl: Report errors: Correctable- 

Re: virtio bonding bandwidth problem

2010-01-22 Thread Brian Jackson
On Friday 22 January 2010 07:52:49 am Didier Moens wrote:
> (initially posted to libvirt-us...@redhat.com, but by request of Daniel
> P. Berrange cross-posted to this list)
> 
> 
> Dear all,
> 
> 
> I have been wrestling with this issue for the past few days ; googling
> around doesn't seem to yield anything useful, hence this cry for help.
> 
> 
> 
> Setup (RHEL5.4) :
> 
> * kernel-2.6.18-164.10.1.el5
> * kvm-83-105.el5
> * libvirt-0.6.3-20.el5
> * net.bridge.bridge-nf-call-{arp,ip,ip6}tables = 0
> * tested with/without jumbo frames
> 
> 
> - I am running several RHEL5.4 KVM virtio guest instances on a Dell PE
> R805 RHEL5.4 host. Host and guests are fully updated ; I am using iperf
> to test available bandwidth from 3 different locations (clients) in the
> network to both the host and the guests .
> 
> - To increase both bandwidth and fail-over, 3 1Gb network interfaces
> (BCM5708, bnx2 driver) on the host are bonded (802.3ad) to a 3 Gb/s
> bond0, which is bridged. As all guest interfaces are connected to the
> bridge, I would expect total available bandwidth to all guests to be in
> the range of 2-2.5 Gb/s.
> 
> - Testing with one external client connection to the bare metal host
> yields approx. 940 Mb/s ;
> 
> - Testing with 3 simultaneous connections to the host yields 2.5 Gb/s,
> which confirms a successful bonding setup.
> 
> 
> Problem :
> 
> Unfortunately, available bandwidth to the guests proves to be problematic :
> 
> 1. One client to one guest : 250-600 Mb/s ;
> 2a. One client to 3 guests : 300-350 Mb/s to each guest, total not
> exceeding 980 Mb/s;
> 2b. Three clients to 3 guests : 300-350 Mb/s to each guest ;
> 2c. Three clients to host and 2 guests : 940 Mb/s (host) + 500 Mb/s to
> each guest.
> 
> 
> Conclusions :
> 
> 1. I am experiencing a 40% performance hit (600 Mb/s) on each individual
> virtio guest connection ;

I don't know what all features RHEL5.4 enables for KVM, but that doesn't seem 
outside the realm of possibility, especially depending on what OS is running 
in the guest. I think RHEL5.4 has an older version of virtio, but I won't 
swear to it. FWIW, I get ~1.5 Gb/s guest to host with an Ubuntu 9.10 guest and 
~850 Mb/s guest to host with a Windows 7 guest. To get those speeds, I have to 
increase the window sizes a good bit (the default is 8K; those numbers are at 
1M). At the default, Windows 7 gets ~250 Mb/s.
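
The window-size effect comes down to the bandwidth-delay product: the TCP window must cover bandwidth times round-trip time to keep the pipe full. A back-of-the-envelope sketch (the 10 Gbit/s and 1 ms figures are illustrative assumptions, not measurements from this thread):

```shell
# Bandwidth-delay product for a fast local virtio link.
bw_bits_per_s=10000000000   # assume ~10 Gbit/s guest<->host path
rtt_ms=1                    # assume ~1 ms round trip
bdp_bytes=$(( bw_bits_per_s / 8 * rtt_ms / 1000 ))
echo "window needed: $(( bdp_bytes / 1024 )) KiB"   # 1220 KiB, i.e. ~1.2 MiB

# An 8 KiB window on the same path caps throughput at roughly
# 8 KiB per RTT = ~65 Mbit/s, which is why the defaults look so slow.
```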


> 2. Total simultaneous bandwidth to all guests seems to be capped at 1
> Gb/s ; quite problematic, as this renders my server consolidation almost
> useless.

I don't know about 802.3ad bonding, but I know the other Linux bonding 
techniques are very hard to benchmark due to the way the MAC addresses are 
handled. I would start by examining and describing your testing a little 
more; at the very least, the tools you're using to test would be helpful to 
know.
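
The reason a single flow can't exceed one link is that the bond's transmit hash maps each flow to exactly one slave. A simplified model of a layer3+4-style hash (illustrative only, not the exact kernel computation):

```shell
slaves=3   # e.g. three bonded 1 GbE ports

# Hash a flow (last IP octets and ports here, for brevity) onto one slave.
flow_to_slave() {
    src_ip=$1; dst_ip=$2; src_port=$3; dst_port=$4
    echo $(( ( (src_ip ^ dst_ip) ^ (src_port ^ dst_port) ) % slaves ))
}

# Every packet of one TCP connection hashes identically, so a single
# iperf stream is pinned to one 1 Gb/s slave:
flow_to_slave 10 20 40000 5001
# Different clients/ports may (or may not) land on other slaves:
flow_to_slave 11 20 40001 5001
```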


> 
> 
> I could bridge each host network interface separately and assign guest
> interfaces by hand, but that would defy the whole idea of load balancing
> and failover which is provided by the host bonding.
> 
> 
> 
> Any ideas anyone, or am I peeking in the wrong direction (clueless
> setup, flawed testing methodology, ...) ?
> 
> 
> 
> I monitor the list; a CC: would be appreciated.
> 
> Thanks in advance for any help,
> Didier


Issues with qemu-kvm.git from today

2009-12-15 Thread Brian Jackson
With qemu-kvm.git from this morning (about an hour ago), I see the following 
message. Qemu continues to run after this, but the guest is unresponsive and 
the qemu process is chewing up 100% cpu.


rom: out of memory (rom pxe-virtio.bin, addr 0x000de800, size 0xdc00, 
max 0x000e)



I also rolled back 14 commits and built that. It runs, but has about half the 
network performance that my previous checkout (from sometime in October, 
IIRC) had.

--Iggy


Re: Memory under KVM?

2009-12-11 Thread Brian Jackson
On Friday 11 December 2009 15:43:01 rek2 wrote:
> Hi everyone, I'm new to the list and I have a couple questions that we
> are wondering about here at work...
> we have noticed that the KVM processes on the host take much more memory
> than the memory we have told the VM to use. A rough example:
> if we tell KVM to use 2 gigs for one VM, it will end up showing on the
> host process list for that VM like 3 gigs or more...
> Why do I ask this? well we need to figure out how much memory to add to
> our host server so we can calculate the number of VM's we can run there
> etc etc..
> 
> Thanks for the help
> 

My guests' RES sizes are all either right at or below what the -m parameter is 
set to.

So I think you have some sort of issue.
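
One way to eyeball this on the host (a sketch; the process name matches how qemu-kvm is usually launched and may differ on your system):

```shell
# Print each qemu process's resident set size in MB; compare against the
# -m value each guest was started with.
ps -C qemu-system-x86_64 -o pid=,rss= | while read -r pid rss; do
    echo "qemu pid ${pid}: RSS $(( rss / 1024 )) MB"
done
```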


Windows XP Viostor driver not building

2009-12-09 Thread Brian Jackson
When I try to build it recently, I get the following:

Compiling - virtio_stor_hw_helper.c
1>errors in directory c:\src\kvm-guest-drivers-windows\viostor
1>c:\src\kvm-guest-drivers-windows\viostor\virtio_stor_hw_helper.c(99) : error 
C2039: 'requests' : is not a member of '_ADAPTER_EXTENSION'
Compiling - virtio_pci.c
Compiling - virtio_ring.c
Compiling - generating code...
Linking Executable - objfre_wxp_x86\i386\viostor.sys
1>link : error LNK1181: cannot open input file 'c:\src\kvm-guest-drivers-
windows\viostor\objfre_wxp_x86\i386\virtio_stor_hw_helper.obj'


Anybody seen this and know if there's a fix?


Re: libvirt bug #532480

2009-11-03 Thread Brian Jackson
On Tuesday 03 November 2009 06:02:42 am roma1390 wrote:
> Libvirt thinks that bug #532480 must be addressed to the qemu/kvm team.
> 
>https://bugzilla.redhat.com/show_bug.cgi?id=532480


For future reference, adding some overview to your email, instead of making 
all the devs with arguably limited time go read through a bug report, is 
probably a good idea.


> 
> Any ideas how to fix this issue?


IIRC, it's being worked on. And yes, it is the responsibility of the 
developers of said drivers to do the signing. Keep watching the URL from the 
bug for updated drivers. Until then, there are workarounds to this issue also 
mentioned at that URL.


> 


Re: CPU change causes hanging of .NET apps

2009-10-28 Thread Brian Jackson
On Wednesday 28 October 2009 16:13:37 Erik Rull wrote:
> Hi all,
> 
> when changing the CPU from the default QEMU32 one to e.g. the n270 or the
> core2duo no .NET apps will work under Windows XP as guest. Switching back
> and everything is fine. The Pentium Emulation on the other side works fine!


Have you tried with -cpu core2duo,-nx ?

--Iggy


> 
> The Application loads but it hangs with 99% CPU usage and ca. 3-4 MB Memory
>   Consumption.
> 
> Normally, .NET is capable to run on all x86 Processors >= Pentium. XP and
> non-.NET Apps work fine.
> 
> Any Ideas what happens here? I also started applications that were NOT
> started with the QEMU32 CPU to prevent a caching - same problem.
> 
> Best regards,
> 
> Erik


Re: Is AMD rev F the same thing as socket F?

2009-10-19 Thread Brian Jackson
On Monday 19 October 2009 09:21:48 am Neil Aggarwal wrote:
> Chris:
> > > cpu family  : 15
> >
> > ^^ means that you have a Rev F (cpu_family 15 == 0xf in hex).
> 
> That is good to know.  Thanks for the info.
> 
> I am actually quite surprised this processor does not have
> the constant time stamp counter because the RHEL virtualization
> guide states it is a feature of modern CPUs.  The Opteron is a
> modern CPU to me.
> 
> Is there a listing of CPUs that have it?  I tried
> searching in Google to no avail.

I believe that feature started with the Phenoms for AMD. IIRC, Intel always 
had it.

> 
> Thanks,
>   Neil
> 
> --
> Neil Aggarwal, (281)846-8957, www.JAMMConsulting.com
> Will your e-commerce site go offline if you have
> a DB server failure, fiber cut, flood, fire, or other disaster?
> If so, ask about our geographically redundant database system.
> 


Re: Modifying RAM during runtime on guest

2009-09-08 Thread Brian Jackson
On Tuesday 08 September 2009 03:52:07 pm Daniel Bareiro wrote:
> Hi all!
> 
> I'm trying to modify the amount of RAM that some of my guests have. The
> host has a 2.6.30 kernel with KVM-88.
> 
> In one of the guests I had no problems decreasing the amount of memory
> from 3584 MiB to 1024 MiB. This guest has the 2.6.26-2-686 stock kernel. I
> was also trying to decrease the RAM of another guest from 3584 MiB to
> 2048 MiB, but it didn't work. This other guest has the
> 2.6.24-etchnhalf.1-686-bigmem stock kernel. Does ballooning in the guest
> require 2.6.25 or later?


I don't know; if that kernel has a virtio-balloon driver, I'd think that's 
all you need to balloon memory.


> 
> Thinking that it could be an impediment related to the guest kernel
> version, I tried to increase the memory of another guest with 2.6.26-2-686
> from 512 MiB to 1024 MiB, but this didn't work either.


You can only grow memory up to the amount you specified on the command line if 
you've already ballooned down. So if you specify "-m 1024M" on the command 
line, then shrink it to 512, you could then balloon it back up to a max of 
1024.
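
In other words, the value given to -m is a hard ceiling for the balloon. A tiny model of that rule (values in MB, purely illustrative):

```shell
max_mb=1024   # the -m value the guest was started with

balloon_target() {
    target=$1
    if [ "$target" -gt "$max_mb" ]; then
        target=$max_mb   # requests above the startup -m are clamped
    fi
    echo "$target"
}

balloon_target 512    # shrink: prints 512
balloon_target 2048   # grow past -m: prints 1024, the startup maximum
```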


> 
> These are the statistics of memory usage on the host:
> 
> # free
>              total       used       free     shared    buffers     cached
> Mem:      16469828   14763460    1706368          0    7800712     202044
> -/+ buffers/cache:    6760704    9709124
> Swap:      8319948      19240    8300708
> 
> 
> 
> Which can be the cause?
> 
> Thanks in advance for your reply.
> 
> Regards,
> Daniel
> 


Re: [PATCH] KVM: Use thread debug register storage instead of kvm specific data

2009-09-04 Thread Brian Jackson
On Friday 04 September 2009 11:08:51 am Andrew Theurer wrote:
> Brian Jackson wrote:
> > On Friday 04 September 2009 09:48:17 am Andrew Theurer wrote:
> > 
> >
> >>> Still not idle=poll, it may shave off 0.2%.
> >>
> >> Won't this affect SMT in a negative way?  (OK, I am not running SMT now,
> >> but eventually we will be) A long time ago, we tested P4's with HT, and
> >> a polling idle in one thread always negatively impacted performance in
> >> the sibling thread.
> >>
> >> FWIW, I did try idle=halt, and it was slightly worse.
> >>
> >> I did get a chance to try the latest qemu (master and next heads).  I
> >> have been running into a problem with virtIO stor driver for windows on
> >> anything much newer than kvm-87.  I compiled the driver from the new git
> >> tree, installed OK, but still had the same error.  Finally, I removed
> >> the serial number feature in the virtio-blk in qemu, and I can now get
> >> the driver to work in Windows.
> >
> > What were the symptoms you were seeing (i.e. define "a problem").
> 
> Device manager reports "a problem code 10" occurred, and the driver
> cannot initialize.


Yes! I was getting this after I moved from 0.10.6 to 0.11.0-rc1. Now I know 
how to fix it. Thank you. Thank you.


> 
> Vadim Rozenfeld informed me:
> > There is a sanity check in the code, which checks the I/O range and fails
> > if is not equal to 40h. Resent virtio-blk devices have I/O range equal to
> > 0x400 (serial number feature). So, out signed  viostor driver will fail
> > on the latest KVMs. This problem was fixed
> 
> and committed to SVN some time ago.
> 
> I assumed the fix was to the virtio windows driver, but I could not get
> the driver I compiled from latest git to work either (only on
> qemu-kvm-87).  So, I just backed out the serial number feature in qemu,
> and it worked.  FWIW, the linux virtio-blk driver never had a problem.


There have been very few changes to the viostor Windows git repo since it was 
opened, unless they were made before the drivers were open sourced. In any 
case, it doesn't seem to be working with what's publicly available, so I 
think something may be missing between the internal and external trees.


> 
> >> So, not really any good news on performance with latest qemu builds.
> >> Performance is slightly worse:
> >>
> >> qemu-kvm-87
> >> user  nice  system   irq  softirq guest   idle  iowait
> >> 5.79  0.00    9.28  0.08     1.00  20.81  58.78    4.26
> >> total busy: 36.97
> >>
> >> qemu-kvm-88-905-g6025b2d (master)
> >> user  nice  system   irq  softirq guest   idle  iowait
> >> 6.57  0.00   10.86  0.08     1.02  21.35  55.90    4.21
> >> total busy: 39.89
> >>
> >> qemu-kvm-88-910-gbf8a05b (next)
> >> user  nice  system   irq  softirq guest   idle  iowait
> >> 6.60  0.00   10.91  0.09     1.03  21.35  55.71    4.31
> >> total busy: 39.98
> >>
> >> diff of profiles, p1=qemu-kvm-87, p2=qemu-master
> >
> > 
> >
> >> 18x more samples for gfn_to_memslot_unali*, 37x for
> >> emulator_read_emula*, and more CPU time in guest mode.
> >>
> >> One other thing I decided to try was some cpu binding.  I know this is
> >> not practical for production, but I wanted to see if there's any benefit
> >> at all.  One reason was that a coworker here tried binding the qemu
> >> thread for the vcpu and the qemu IO thread to the same cpu.  On a
> >> networking test, guest->local-host, throughput was up about 2x.
> >> Obviously there was a nice effect of being on the same cache.  I
> >> wondered, even without full bore throughput tests, could we see any
> >> benefit here.  So, I bound each pair of VMs to a dedicated core.  What I
> >> saw was about a 6% improvement in performance.  For a system which has
> >> pretty incredible memory performance and is not that busy, I was
> >> surprised that I got 6%.  I am not advocating binding, but what I do
> >> wonder:  on 1-way VMs, if we keep all the qemu threads together on the
> >> same CPU, but still allowing the scheduler to move them (all of them at
> >> once) to different cpus over time, would we see the same benefit?
> >>
> >> One other thing:  So far I have not been using preadv/pwritev.  I assume
> >> I need a more recent glibc (on 2.5 now) for qemu to take advantage of
> >> this?
> >
> > Getting p(read|write)v working almost doubled my virtio-net throughput in
> > a Linux guest. Not quite as much in Windows guests. Yes you need
> > glibc-2.10. I think some distros might have backported it to 2.9. You
> > will also need some support for it in your system includes.
> 
> Thanks, I will try a newer glibc, or maybe just move to a newer Linux
> installation which happens to have a newer glic.


FWIW, in Debian I had to get glibc from the experimental tree, so some 
distros might not even have it.


> 
> -Andrew
> 



Re: [PATCH] KVM: Use thread debug register storage instead of kvm specific data

2009-09-04 Thread Brian Jackson
On Friday 04 September 2009 09:48:17 am Andrew Theurer wrote:

> >
> > Still not idle=poll, it may shave off 0.2%.
> 
> Won't this affect SMT in a negative way?  (OK, I am not running SMT now,
> but eventually we will be) A long time ago, we tested P4's with HT, and
> a polling idle in one thread always negatively impacted performance in
> the sibling thread.
> 
> FWIW, I did try idle=halt, and it was slightly worse.
> 
> I did get a chance to try the latest qemu (master and next heads).  I
> have been running into a problem with virtIO stor driver for windows on
> anything much newer than kvm-87.  I compiled the driver from the new git
> tree, installed OK, but still had the same error.  Finally, I removed
> the serial number feature in the virtio-blk in qemu, and I can now get
> the driver to work in Windows.

What were the symptoms you were seeing (i.e. define "a problem").

> 
> So, not really any good news on performance with latest qemu builds.
> Performance is slightly worse:
> 
> qemu-kvm-87
> user  nice  system   irq  softirq guest   idle  iowait
> 5.79  0.00    9.28  0.08     1.00  20.81  58.78    4.26
> total busy: 36.97
> 
> qemu-kvm-88-905-g6025b2d (master)
> user  nice  system   irq  softirq guest   idle  iowait
> 6.57  0.00   10.86  0.08     1.02  21.35  55.90    4.21
> total busy: 39.89
> 
> qemu-kvm-88-910-gbf8a05b (next)
> user  nice  system   irq  softirq guest   idle  iowait
> 6.60  0.00   10.91  0.09     1.03  21.35  55.71    4.31
> total busy: 39.98
> 
> diff of profiles, p1=qemu-kvm-87, p2=qemu-master
> 

> 
> 18x more samples for gfn_to_memslot_unali*, 37x for
> emulator_read_emula*, and more CPU time in guest mode.
> 
> One other thing I decided to try was some cpu binding.  I know this is
> not practical for production, but I wanted to see if there's any benefit
> at all.  One reason was that a coworker here tried binding the qemu
> thread for the vcpu and the qemu IO thread to the same cpu.  On a
> networking test, guest->local-host, throughput was up about 2x.
> Obviously there was a nice effect of being on the same cache.  I
> wondered, even without full bore throughput tests, could we see any
> benefit here.  So, I bound each pair of VMs to a dedicated core.  What I
> saw was about a 6% improvement in performance.  For a system which has
> pretty incredible memory performance and is not that busy, I was
> surprised that I got 6%.  I am not advocating binding, but what I do
> wonder:  on 1-way VMs, if we keep all the qemu threads together on the
> same CPU, but still allowing the scheduler to move them (all of them at
> once) to different cpus over time, would we see the same benefit?
> 
> One other thing:  So far I have not been using preadv/pwritev.  I assume
> I need a more recent glibc (on 2.5 now) for qemu to take advantage of
> this?

Getting p(read|write)v working almost doubled my virtio-net throughput in a 
Linux guest, and not quite as much in Windows guests. Yes, you need glibc 
2.10. I think some distros might have backported it to 2.9. You will also 
need some support for it in your system includes.

--Iggy

> 
> Thanks!
> 
> -Andrew
> 

--
To unsubscribe from this list: send the line "unsubscribe kvm" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html


Re: vhost_net_init returned -7

2009-08-29 Thread Brian Jackson
I'm guessing that's not something the windows virtio drivers support  
yet. Do you plan on adding support for guests without msi or am I  
stuck waiting for the windows drivers to add support for msi?




On Aug 29, 2009, at 4:01 PM, "Michael S. Tsirkin"   
wrote:



On Fri, Aug 28, 2009 at 03:10:43PM -0500, Brian Jackson wrote:
I'm trying to tinker with vhost_net. I have a 2.6.31-rc4 host  
kernel patched
to support vhost_net (v5) (along with ksm if it matters). The guest  
is a
debian-5.0.2 (2.6.26) install CD for now. When the guest tries to  
load the
virtio-net drivers, kvm closes and prints "vhost_net_init returned  
-7". KVM is

patched with your patches from 20090817.

I tried looking through the code but without adding some other  
debug code,
it's not obvious where exactly it's failing. Was going to see if  
anyone (I

know, not a lot of users yet) had any ideas before I dig too deep.

--Iggy


Something I forgot to mention is that the userspace I posted currently relies on
guest MSI support, which requires guest v2.6.31 (any rc will do). You don't need
kvm.git in the guest though: kernel.org kernels should work ok.

--
MST


vhost_net_init returned -7

2009-08-28 Thread Brian Jackson
I'm trying to tinker with vhost_net. I have a 2.6.31-rc4 host kernel patched 
to support vhost_net (v5) (along with ksm if it matters). The guest is a 
debian-5.0.2 (2.6.26) install CD for now. When the guest tries to load the 
virtio-net drivers, kvm closes and prints "vhost_net_init returned -7". KVM is 
patched with your patches from 20090817.

I tried looking through the code but without adding some other debug code, 
it's not obvious where exactly it's failing. Was going to see if anyone (I 
know, not a lot of users yet) had any ideas before I dig too deep.

--Iggy


Re: Poor network performance with cable modem assigned to guest

2009-08-28 Thread Brian Jackson
On Friday 28 August 2009 01:14:42 pm Jon Fairbairn wrote:
> I'm experimenting with a virtual router. I did this a few years ago with
> Xen and it worked well enough, but then fedora changed and it stopped
> working, so I gave up for a while. Now I have a machine that supports
> hardware virtualisation, I thought I'd try again.
>
> The setup was done through virt-manager. The network between the host
> and guest is a virtual bridge. What I've been trying to do is to assign
> a USB cable modem to the guest,


This is probably your problem here. KVM only emulates a usb1.1 controller, and 
from all reports, it doesn't really do that very well. There have been 
numerous reports of poor performance even for a usb1.1 device. You should 
check the archives to see if there was ever any kind of tips or resolution to 
some of those problems.


> and connect to the internet through
> that. I'd expect some degradation in performance, especially since
> there's a firewall on both the virtual router and on the host. Here's
> some figures wgetting a 12802500 byte file thrice from a nearby web
> server:
>
> Via hardware router: 1009K/s 1008K/s 1010K/s (12 or 13s)
> Cable modem on host:   1.00M/s 1.00M/s 1.00M/s (ditto)
>
> (wait for it)
>
> Via virtual router, assigned usb: 21.1K/s   (9m 58s!)
>
> Now, as I said, I expected some performance hit doing it this way, but a
> factor of fifty takes the biscuit.
>
> What can be wrong?
>
>  * * *
>
> Details:
>
> "Cable modem on host" above just means that I attached the cable modem
> to the host and configured it as a network device in the usual way.
>
> From the host to the guest I get about 12MB/s using scp, from the guest
> to the host (initiated from the host) I get 7MB/s.
>
> The host is AMD Athlon(tm) 64 X2 Dual Core Processor 4400+ with 6G of
> RAM of which 256M is assigned to the guest (the hardware version only
> has 188M) neither virtual nor hardware router has any swap.
>
> The hardware router is the same kernel and nearly (modulo IP addresses
> etc) the same configuration as the virtual router, running on an old ibm
> pc (500MHz pentium III).
>
> kernel on host: 2.6.29.6-217.2.16.fc11.x86_64
> kernel on routers: 2.6.29.6-217.2.8.fc11.i586
>
> All running fedora 11 (though the routers are very much cut down
> installations).
>
> qemu-kvm-0.10.6
> libvirt-0.6.2-15.1.fc11.x86_64
>
> libvirt uses this command to start the virtual machine:
>
> LC_ALL=C PATH=/sbin:/usr/sbin:/bin:/usr/bin /usr/bin/qemu-kvm -S -M pc \
>  -m 256 -smp 1 -name monogramme-virtual \
>  -uuid [redacted] -monitor pty \
>  -pidfile /var/run/libvirt/qemu//monogramme-virtual.pid \
>  -boot d \
>  -drive file=/[wherever]/livecd-fedora-monogramme.iso,\
> if=ide,media=cdrom,index=2 \
>  -net nic,macaddr=54:52:00:14:4f:18,vlan=0 \
>  -net tap,fd=11,vlan=0 -serial pty -parallel none -usb -vnc 127.0.0.1:0 \
>  -k en-gb \
>  -usbdevice host:0bb2:6098
>
> 0bb2:6098 = Ambit Microsystems Corp. USB Cable Modem (a usb 1.1 device)
>
> Selinux is on on both machines.
>
> I can't think of anything else relevant at the moment.


Re: Performace data when running Windows VMs

2009-08-26 Thread Brian Jackson
On Wednesday 26 August 2009 11:14:57 am Andrew Theurer wrote:

> >
> > > I/O on the host was not what I would call very high:  outbound network
> > > averaged at 163 Mbit/s inbound was 8 Mbit/s, while disk read ops was
> > > 243/sec and write ops was 561/sec
> >
> > What was the disk bandwidth used?  Presumably, direct access to the
> > volume with cache=off?
>
> 2.4 MB/sec write, 0.6MB/sec read, cache=none
> The VMs' boot disks are IDE, but apps use their second disk which is
> virtio.


In my testing, I got better performance from IDE than the new virtio block 
driver for windows. There appears to be some optimization left to do on them.




Re: Windows guest CPU socket/core recognition

2009-08-18 Thread Brian Jackson
On Monday 17 August 2009 22:28:35 Zdenek Kaspar wrote:
> Hello everyone,
> 
> I guess I'm not the first one who hit the problem with Microsoft's
> licensing model..
> 
> Nowadays the common single or dual quad-core workstation can't be fully
> used because it's limited by example: license up to 2 physical
> processors. Such VM acts like 4-way or 8-way machine.


Nine times out of ten, a single-CPU guest is going to be a better option than 
an SMP/multi-core guest. I've seen idle Windows guests go from using nearly 
200% CPU with -smp 2 to ~5-10% with -smp 1. Unless your guest is actually using 
all that CPU all the time, you're going to be wasting a decent amount of 
cycles.



> 
> Is there any way howto expose CPUs differently for this kind of problem?


There have been patches (from Andre Przywara and maybe others) to support 
multi-core vs. multi-socket SMP.


> 
> TIA, Z.


Re: qemu/kvm exchangeable with same windowsXP diskimage ?

2009-08-14 Thread Brian Jackson
On Friday 14 August 2009 01:54:26 pm Daniel Schwager wrote:
> Hi,
>
> i installed a MS windows xp running on kvm-86. Now,
> I tried to run this image directly on qemu-0.10.5 - but windows
> told me about problems while booting and reset the vm.
>
> Do I have to install some drivers first on the v...@kvm so,
> the vm will also run in v...@qemu-0.10.5 ?

Upstream qemu's kvm support is not so great. Best to stick with qemu-kvm or 
kvm.

>
> regards
> Danny
>
>


Re: kvm userspace: ksm support

2009-08-05 Thread Brian Jackson
On Monday 03 August 2009 02:04:15 pm Izik Eidus wrote:
> Brian Jackson wrote:
> > Look okay?
>
> Yes.


Okay I got it working after I figured out there were 2 
kvm_setup_guest_memory()'s in qemu-kvm

I have debian-5 packages of linux-2.6.31-rc4 with ksm patches and qemu-
kvm-0.10.6 with ksm patches if anyone is interested.

--Iggy


Re: kvm userspace: ksm support

2009-08-03 Thread Brian Jackson
On Monday 03 August 2009 01:09:38 pm Izik Eidus wrote:
> Brian Jackson wrote:
> > If someone wanted to play around with ksm in qemu-kvm-0.x.x would it be
> > as simple as adding the below additions to kvm_setup_guest_memory in
> > kvm-all.c
>
> qemu-kvm-0.x.x doesn't tell me much, but if it is the function that
> registers the memory, then yes...
>
> (I just remember that qemu used to have something called phys_ram_base;
> in that case it would be just a matter of calling madvise on phys_ram_base
> with phys_ram_size as the size)

Sorry, I'm using qemu-kvm-0.10.6


This is what qemu_ram_alloc looks like:



/* XXX: better than nothing */
ram_addr_t qemu_ram_alloc(ram_addr_t size)
{
    ram_addr_t addr;

    if ((phys_ram_alloc_offset + size) > phys_ram_size) {
        fprintf(stderr, "Not enough memory (requested_size = %" PRIu64
                ", max memory = %" PRIu64 ")\n",
                (uint64_t)size, (uint64_t)phys_ram_size);
        abort();
    }
    addr = phys_ram_alloc_offset;
    phys_ram_alloc_offset = TARGET_PAGE_ALIGN(phys_ram_alloc_offset + size);

    if (kvm_enabled())
        kvm_setup_guest_memory(phys_ram_base + addr, size);

    return addr;
}


And this is what my new kvm_setup_guest_memory looks like:


void kvm_setup_guest_memory(void *start, size_t size)
{
    if (!kvm_has_sync_mmu()) {
#ifdef MADV_DONTFORK
        int ret = madvise(start, size, MADV_DONTFORK);

        if (ret) {
            perror("madvise");
            exit(1);
        }
#else
        fprintf(stderr,
                "Need MADV_DONTFORK in absence of synchronous KVM MMU\n");
        exit(1);
#endif
    }
#ifdef MADV_MERGEABLE
    madvise(start, size, MADV_MERGEABLE);
#endif
}



Look okay?


>
> > (and adding the necessary kernel changes of course)?
> >
> > On Tuesday 28 July 2009 11:39:59 am Izik Eidus wrote:
> >> This patch is not for inclusion just rfc.
> >>
> >> Thanks.
> >>
> >>
> >> From 1297b86aa257100b3d819df9f9f0932bf4f7f49d Mon Sep 17 00:00:00 2001
> >> From: Izik Eidus 
> >> Date: Tue, 28 Jul 2009 19:14:26 +0300
> >> Subject: [PATCH] kvm userspace: ksm support
> >>
> >> rfc for ksm support to kvm userpsace.
> >>
> >> thanks
> >>
> >> Signed-off-by: Izik Eidus 
> >> ---
> >>  exec.c |3 +++
> >>  1 files changed, 3 insertions(+), 0 deletions(-)
> >>
> >> diff --git a/exec.c b/exec.c
> >> index f6d9ec9..375cc18 100644
> >> --- a/exec.c
> >> +++ b/exec.c
> >> @@ -2595,6 +2595,9 @@ ram_addr_t qemu_ram_alloc(ram_addr_t size)
> >>  new_block->host = file_ram_alloc(size, mem_path);
> >>  if (!new_block->host) {
> >>  new_block->host = qemu_vmalloc(size);
> >> +#ifdef MADV_MERGEABLE
> >> +madvise(new_block->host, size, MADV_MERGEABLE);
> >> +#endif
> >>  }
> >>  new_block->offset = last_ram_offset;
> >>  new_block->length = size;
>


Re: kvm userspace: ksm support

2009-08-03 Thread Brian Jackson
If someone wanted to play around with ksm in qemu-kvm-0.x.x would it be as 
simple as adding the below additions to kvm_setup_guest_memory in kvm-all.c 
(and adding the necessary kernel changes of course)?


On Tuesday 28 July 2009 11:39:59 am Izik Eidus wrote:
> This patch is not for inclusion just rfc.
>
> Thanks.
>
>
> From 1297b86aa257100b3d819df9f9f0932bf4f7f49d Mon Sep 17 00:00:00 2001
> From: Izik Eidus 
> Date: Tue, 28 Jul 2009 19:14:26 +0300
> Subject: [PATCH] kvm userspace: ksm support
>
> rfc for ksm support to kvm userpsace.
>
> thanks
>
> Signed-off-by: Izik Eidus 
> ---
>  exec.c |3 +++
>  1 files changed, 3 insertions(+), 0 deletions(-)
>
> diff --git a/exec.c b/exec.c
> index f6d9ec9..375cc18 100644
> --- a/exec.c
> +++ b/exec.c
> @@ -2595,6 +2595,9 @@ ram_addr_t qemu_ram_alloc(ram_addr_t size)
>  new_block->host = file_ram_alloc(size, mem_path);
>  if (!new_block->host) {
>  new_block->host = qemu_vmalloc(size);
> +#ifdef MADV_MERGEABLE
> +madvise(new_block->host, size, MADV_MERGEABLE);
> +#endif
>  }
>  new_block->offset = last_ram_offset;
>  new_block->length = size;


Re: Any efforts going on concerning virtio Host Drivers for Windows?

2009-07-22 Thread Brian Jackson
On Wednesday 22 July 2009 01:23:56 pm Yaniv Kaul wrote:
> On 7/22/2009 8:26 PM, Wilken Haase wrote:
> > Hi List,
> > we are already using kvm successfully a while here for our internal
> > Infrastructure. Up to now we're having good results with Linux Guests
> > and are quite happy with kvm.
> > Now it's time to replace some of our current Windows Infrastructure
> > and I'm evaluating Windows 2k3 and 2k8 on KVM vhosts. While virtio
> > does currently successfully boost our Linux Guests I only found
> > network virtio drivers for Windows. These help quite a bit in speeding
> > things up, but we're well behind our hopes currently. Block device drivers
> > seem to be missing entirely.
> >
> > Since I found no evidence of anything evolving here: Are any efforts
> > going on creating or optimizing such drivers ? Can anyone expect
> > seeing something anytime soon ?
>
> Yes, efforts are under way to both certify them (get them signed via the
> MS WHQL process) and optimize them. Soon is the best time estimation I
> can give, perhaps very soon.
> Y.


What about open sourcing?


>
> > If anyone can shed some light in this i would be pleased to read your
> > answers.
> >
> > Greetings !
> > Wilken Haase
>


Re: USB passthrough does not work

2009-07-21 Thread Brian Jackson
I don't know if this might be affecting you, but KVM does not support USB 2.0. 
More and more devices these days are USB 2.0 only. It's at least worth checking 
out. In any case, copying files over USB 1.1 is going to be terribly 
painful.

--Iggy

On Tuesday 21 July 2009 19:44:33 Andreas Kinzler wrote:
> I am using kvm-88 and trying to passthrough an USB
> storage device (usb memory stick) via
>
> -usb -usbdevice host::
>
> In Vista x64 the device appears but has error code
> 10 (this device cannot start).
>
> Any ideas?
>
>   Andreas


Re: Problem with Grub and KVM 88

2009-07-15 Thread Brian Jackson
On Wednesday 15 July 2009 06:06:54 am Erik Wartusch wrote:
> Hi all,
>
> Following problem.
> I recently upgraded kvm from 7.2 (Debian Lenny repository version) to
> the newest 88 KVM.


How did you install kvm-88? Did you do a proper install, including the BIOS 
files, extboot, etc.?

--Iggy


>
> Since then when I first started and stopped a Debian Lenny virtual
> instance (guest) at the next (second) start I get the following error:
> "Grub loading, please wait. Error 2". So the first time its booting
> the second not, resulting with this error.
>
> KVM Version 88
> Kernel Host system: 2.6.26-2-amd64
> Kernel guest system: 2.6.26-2-amd64
> CPU: Intel(R) Xeon(R) CPU E5420 @ 2.50GHz
> OS for host and guest: Debian Lenny
>
> Qemu command line from a script:
> #!/bin/sh
>
> /usr/local/kvm/bin/qemu-system-x86_64 -m 256  \
>   -name test3 \
>   -boot c \
>   -hda /var/kvm/test3.img \
>   -net nic,macaddr=00:30:1c:45:35:04,model=virtio,vlan=0 \
>   -net tap,script=/etc/kvm/kvm-ifup,vlan=0 \
>   -k de \
>   -vnc 192.168.125.21:4 \
>   -monitor tcp:127.0.0.1:2024,server,nowait \
>   -serial none \
>   -parallel none \
>   -daemonize > /dev/null 2>&1
>
> A CentOS or Windows XP are working fine.
>
> Can somebody confirm this or any solution?
> Kind Regards,
> Erik
>


[WIKI] email confirmation not working

2009-07-08 Thread Brian Jackson
I tried doing the email confirmation on the wiki so I could be emailed on page 
changes, etc. Every time I hit the "Mail a confirmation code" button it says:

"Could not send confirmation mail. Check address for invalid characters. "

I've re-entered my email address and tried again. Same results.

-Iggy


Re: [RFC] allow multi-core guests: introduce cores= option to -cpu

2009-07-03 Thread Brian Jackson



Andre Przywara wrote:

Hi,

currently SMP guests happen to see their vCPUs as different sockets.
Some guests (Windows comes to mind) have license restrictions and refuse
to run on multi-socket machines.
So lets introduce a "cores=" parameter to the -cpu option to let the user
specify the number of _cores_ the guest should see.

This patch has not been tested with all corner cases, so I just want to
hear your comments whether
a) we need such an option  and
b) you like this particular approach.

Applying this qemu.git patch to qemu-kvm.git fixes Windows SMP boot on
some versions, I successfully tried up to -smp 16 -cpu host,cores=8 with
WindowsXP Pro.  



Personally, I'd like to see it as an extra arg to the -smp option. We've 
seen too many people use -cpu incorrectly in #kvm, so we've gotten into 
the habit of telling people not to touch that option unless they know 
exactly what they are doing. Plus it seems odd to have to use -cpu foo 
when you just want more cpus, not a specific cpu.


--Iggy




Regards,
Andre.

Signed-off-by: Andre Przywara 
---
 target-i386/cpu.h|1 +
 target-i386/helper.c |   26 --
 2 files changed, 25 insertions(+), 2 deletions(-)

diff --git a/target-i386/cpu.h b/target-i386/cpu.h
index 4a8608e..96fa471 100644
--- a/target-i386/cpu.h
+++ b/target-i386/cpu.h
@@ -657,6 +657,7 @@ typedef struct CPUX86State {
 uint32_t cpuid_ext3_features;
 uint32_t cpuid_apic_id;
 int cpuid_vendor_override;
+int cpuid_cores;
 
 /* MTRRs */

 uint64_t mtrr_fixed[11];
diff --git a/target-i386/helper.c b/target-i386/helper.c
index 82e1ff1..9c54fb9 100644
--- a/target-i386/helper.c
+++ b/target-i386/helper.c
@@ -103,6 +103,7 @@ typedef struct x86_def_t {
 uint32_t xlevel;
 char model_id[48];
 int vendor_override;
+int cores;
 } x86_def_t;
 
 #define I486_FEATURES (CPUID_FP87 | CPUID_VME | CPUID_PSE)

@@ -351,7 +352,7 @@ static int cpu_x86_find_by_name(x86_def_t *x86_cpu_def, const char *cpu_model)
 char *featurestr, *name = strtok(s, ",");
 uint32_t plus_features = 0, plus_ext_features = 0, plus_ext2_features = 0, plus_ext3_features = 0;
 uint32_t minus_features = 0, minus_ext_features = 0, minus_ext2_features = 0, minus_ext3_features = 0;
-int family = -1, model = -1, stepping = -1;
+int family = -1, model = -1, stepping = -1, cores = 1;
 
 def = NULL;

 for (i = 0; i < ARRAY_SIZE(x86_defs); i++) {
@@ -406,6 +407,14 @@ static int cpu_x86_find_by_name(x86_def_t *x86_cpu_def, const char *cpu_model)
 goto error;
 }
 x86_cpu_def->stepping = stepping;
+} else if (!strcmp(featurestr, "cores")) {
+char *err;
+cores = strtol(val, &err, 10);
+if (!*val || *err || cores < 1 || cores > 0xff) {
+fprintf(stderr, "bad numerical value %s\n", val);
+goto error;
+}
+x86_cpu_def->cores = cores;
 } else if (!strcmp(featurestr, "vendor")) {
 if (strlen(val) != 12) {
 fprintf(stderr, "vendor string must be 12 chars long\n");
@@ -473,6 +482,7 @@ static int cpu_x86_register (CPUX86State *env, const char *cpu_model)
 env->cpuid_vendor3 = CPUID_VENDOR_INTEL_3;
 }
 env->cpuid_vendor_override = def->vendor_override;
+env->cpuid_cores = def->cores;
 env->cpuid_level = def->level;
 if (def->family > 0x0f)
 env->cpuid_version = 0xf00 | ((def->family - 0x0f) << 20);
@@ -1562,9 +1572,14 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, uint32_t count,
 break;
 case 1:
 *eax = env->cpuid_version;
-*ebx = (env->cpuid_apic_id << 24) | 8 << 8; /* CLFLUSH size in quad words, Linux wants it. */
+/* CLFLUSH size in quad words, Linux wants it. */
+*ebx = (env->cpuid_apic_id << 24) | 8 << 8;
 *ecx = env->cpuid_ext_features;
 *edx = env->cpuid_features;
+if (env->cpuid_cores > 1) {
+*ebx |= env->cpuid_cores << 16;   /* LogicalProcessorCount */
+*edx |= 1 << 28;/* HTT bit */
+}
 break;
 case 2:
 /* cache info: needed for Pentium Pro compatibility */
@@ -1642,6 +1657,10 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index,
 *ecx = env->cpuid_ext3_features;
 *edx = env->cpuid_ext2_features;
 
+if (env->cpuid_cores > 1) {
+*ecx |= 1 << 1;/* CmpLegacy bit */
+}
+
 if (kvm_enabled()) {
 /* Nested SVM not yet supported in KVM */
 *ecx &= ~CPUID_EXT3_SVM;
@@ -1696,6 +1715,9 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, uint32_t count,
 *ebx = 0;
 *ecx = 0;
 *edx = 0;
+if (env->cpuid_cores > 1) {
+*ecx |= env->cpuid_cores - 1;/* NC: Number of CPU cores */
+}
 break;
 

Wiki updates

2009-06-12 Thread Brian Jackson
I've made some changes to the wiki to hopefully improve its  
usefulness, especially for new kvm users.


I table-ized the management tools page info

I added some links to the main page for commonly used pages (inside  
and outside the wiki)
Some of those links duplicate some of the links in the navbar on the  
left of the pages. My thinking was that they may not need to be in the  
main site navigation anymore. (i.e. TODO and Documents)


I made some changes to the Code page. Added a note about the stable  
tags in qemu-kvm




I also noticed that there are some dead links in the FAQ. Some I fixed  
myself. But there are two missing pages. The Intel Real Mode Emulation  
Problems page linked from the Exception 13 FAQ is non-existent.  
There's also a missing page called Windows PAE Workaround.


If anybody sees anything wrong or that could be done better, let me  
know.


--Iggy


extboot and qemu-kvm-0.10.x

2009-06-12 Thread Brian Jackson
Is it expected that qemu-kvm-0.10.x doesn't build/install extboot.bin?  
This came up on the IRC channel last night. I would expect it to, but  
the person in the IRC channel didn't get extboot.bin from the tarball  
and none of the tags of 0.10.x I tried built it either.


--Iggy


Re: XP smp using a lot of CPU [SOLVED]

2009-05-15 Thread Brian Jackson


On May 15, 2009, at 3:24 PM, Ross Boylan wrote:


Using ACPI fixes the problem; CPU useage is now quite low.  Start line
was
sudo vdeq kvm -net nic,vlan=1,macaddr=52:54:a0:12:01:00 \
   -net vde,vlan=1,sock=/var/run/vde2/tap0.ctl \
   -boot d -cdrom /usr/local/backup/XPProSP3.iso \
   -std-vga -hda /dev/turtle/XP00 \
   -soundhw es1370 -localtime -m 1G -smp 2
I switched to -boot c later.

I ended up doing a fresh install; my repair got mucked up and I got  
the
message "The requested lookup key was not found in any active  
activation
context" when I entered a location into MSIE, including when I tried  
to
run Windows Update.  Googling showed this might indicate some  
permission

or file corruption issues.  They may have happened during my earlier
(virtual) system hang.

My experience suggests a theory: if you use SMP with XP (i.e., more than
1 virtual processor) you should enable ACPI, i.e., not say -no-acpi.  If
this is true, the advice to run Windows with -no-acpi should probably be
updated.  It's possible single-CPU systems are affected as well.



I removed the note about -no-acpi from the howto on the wiki. I don't  
think that's been true for a long time.


--Iggy





Ross





Re: kvm-85: virtio-blk not working

2009-04-24 Thread Brian Jackson
On Friday 24 April 2009 09:35:52 Gerd v. Egidy wrote:
> Hi Bernhard,
>
> On Friday 24 April 2009 14:56:15 Bernhard Held wrote:
> > > does not boot, BIOS complains "Boot failed: could not read the boot
> > > disk":
> > >
> > > -drive file=/dev/VolGroup00/testpart,if=virtio,index=0 \
> >
> > Please try with:
> > -drive file=/dev/VolGroup00/testpart,if=virtio,index=0,boot=on \
>
> That's it! With boot=on it works.
>
> Thanks for pointing this out.
>
> Was this change intentional? I didn't see it mentioned in the changelog and
> could not even find the "boot"-parameter in the qemu-kvm manpage.


The boot=on parameter has been required since virtio_blk existed (or very 
close to it). There is no official qemu/kvm manpage. That's something some 
distros pulled out of thin air. So bugs with it should be reported to your 
distro.


>
> I usually start kvm via libvirt and libvirt doesn't know anything about
> boot=on, at least not in 0.6.2. I did not have time to try 0.6.3 as it was
> released just yet.


I don't really use libvirt and friends, but I'd imagine that I'd have dealt 
with a lot more issues in the IRC channel if it didn't support booting from 
virtio devices in some way. Maybe someone else will speak up and tell you how 
to do it within the confines of libvirt.

--Brian Jackson


>
> Is there some way in a running qemu to find out if a virtio blockdevice is
> activated this way? When running "info block" I always get this result if
> the device has boot=on or not:
>
> virtio0: type=hd removable=0 file=/dev/VolGroup00/testpart ro=0
> drv=host_device encrypted=0
>
> Kind regards,
>
> Gerd
>


  1   2   >