[Qemu-devel] high-level view of packet processing for virtio NIC?

2019-07-23 Thread Chris Friesen
Hi, I'm looking for information on what the qemu architecture looks like for processing virtio network packets in a two-vCPU guest. It looks like there's an IO thread doing a decent fraction of the work, separate from the vCPU threads--is that correct? There's no disk involved in this

Re: [Qemu-devel] strange situation, guest cpu thread spinning at ~100%, but display not yet initialized

2018-11-21 Thread Chris Friesen
On 11/2/2018 2:45 PM, Chris Friesen wrote: On 11/2/2018 11:51 AM, Dr. David Alan Gilbert wrote: so the fix is Fam's 'aio: Do aio_notify_accept only during blocking aio_poll'. I see you're running the qemu-kvm-ev from centos, if I read the version tea-leaves right, then I think that patch

Re: [Qemu-devel] strange situation, guest cpu thread spinning at ~100%, but display not yet initialized

2018-11-02 Thread Chris Friesen
On 11/2/2018 11:51 AM, Dr. David Alan Gilbert wrote: This is ringing a bell; if it's actually stuck in the BIOS, then please: a) Really make sure all your vCPUs are actually pinned/free on real CPUs b) I suspect it is https://lists.gnu.org/archive/html/qemu-devel/2018-08/msg00470.html

Re: [Qemu-devel] strange situation, guest cpu thread spinning at ~100%, but display not yet initialized

2018-11-02 Thread Chris Friesen
On 11/2/2018 1:51 AM, Alex Bennée wrote: Chris Friesen writes: Hi all, I have an odd situation which occurs very infrequently and I'm hoping to get some advice on how to debug. Apologies for the length of this message, I tried to include as much potentially useful information as possible

Re: [Qemu-devel] strange situation, guest cpu thread spinning at ~100%, but display not yet initialized

2018-11-02 Thread Chris Friesen
On 11/2/2018 10:55 AM, Alex Bennée wrote: Chris Friesen writes: Given the "not initialized" message on the console, I wasn't sure whether the kernel had even started yet. There will be a lot that happens between the kernel decompressing and some sort of video hardware output bei

[Qemu-devel] strange situation, guest cpu thread spinning at ~100%, but display not yet initialized

2018-11-01 Thread Chris Friesen
Hi all, I have an odd situation which occurs very infrequently and I'm hoping to get some advice on how to debug. Apologies for the length of this message, I tried to include as much potentially useful information as possible. In the context of an OpenStack compute node I have a qemu guest

Re: [Qemu-devel] The side effect of changing Processor Brand string?

2018-06-05 Thread Chris Friesen
On 06/04/2018 10:09 PM, You, Lizhen wrote: Hi All, I'd like to change the Processor Brand String (CPUID[0x80000002|0x80000003|0x80000004]) of my guest OS's cpu model to a string that isn't a standard Intel- or AMD-related processor brand string. Would this change have any side effect on the

[Qemu-devel] anyone seen or heard of large delays/stalls running qemu with kvm support?

2017-08-23 Thread Chris Friesen
Hi all, I need to apologize up front, this is going to be a long email. Basically the executive summary is that we're seeing issues where a VM is apparently not making forward progress in the guest, while at the same time spinning in a busy loop on the host. I'm looking for any

Re: [Qemu-devel] [dpdk-dev] Will huge page have negative effect on guest vm in qemu enviroment?

2017-06-21 Thread Chris Friesen
On 06/21/2017 01:16 PM, Dr. David Alan Gilbert wrote: * Sam (batmanu...@gmail.com) wrote: Thank you~ 1. We ran a comparison test on a qemu-kvm environment with huge pages and without huge pages. The qemu start process is much longer in the huge page environment. And I wrote an email titled '[DPDK-memory]

Re: [Qemu-devel] [Bug 1248959] Re: pdpe1gb flag is missing in guest running on Intel h/w

2017-06-21 Thread Chris Friesen
On 06/21/2017 11:38 AM, Anatol Pomozov wrote: I observe the same situation. My host CPU (Intel Xeon CPU E5-2690) supports 1GB pages but qemu keeps it disabled by default. I have to use either '-cpu phenom' or '-cpu host' with KVM. It makes me wonder what the default CPU for QEMU is. Is it
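
For reference, two command-line forms that should expose the flag when the host CPU supports it (the model names here are only examples of the general syntax):

    qemu-system-x86_64 -enable-kvm -cpu host ...             # pass through host features, including pdpe1gb
    qemu-system-x86_64 -enable-kvm -cpu qemu64,+pdpe1gb ...  # keep the default model, explicitly enable the flag

Inside the guest, "grep pdpe1gb /proc/cpuinfo" shows whether the flag actually made it through.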

Re: [Qemu-devel] call flow to hit get_pci_config_device() during live migration

2017-06-09 Thread Chris Friesen
On 06/09/2017 02:00 PM, Chris Friesen wrote: I think what I end up with is that byte 0x20 (ie 32) of the PCI config for the virtio-blk device is 0 in the data coming over the wire, and 0xC in the local copy. Since cmask is 0xff we need to check all the bits in the byte, and both wmask

Re: [Qemu-devel] call flow to hit get_pci_config_device() during live migration

2017-06-09 Thread Chris Friesen
On 06/09/2017 09:42 AM, Chris Friesen wrote: Hi, I'm investigating an issue seen over a live migration from a modified qemu-kvm-ev-2.3.0-31.el7_2.7.1 to a modified qemu-kvm-ev-2.6.0-28.el7_3.9.1. We hit an issue down in get_pci_config_device() that caused the migration to fail. The qemu logs

[Qemu-devel] call flow to hit get_pci_config_device() during live migration

2017-06-09 Thread Chris Friesen
Hi, I'm investigating an issue seen over a live migration from a modified qemu-kvm-ev-2.3.0-31.el7_2.7.1 to a modified qemu-kvm-ev-2.6.0-28.el7_3.9.1. We hit an issue down in get_pci_config_device() that caused the migration to fail. The qemu logs on the destination are included below. I'm

Re: [Qemu-devel] question about block size and virtual disks

2017-04-21 Thread Chris Friesen
On 04/20/2017 03:21 PM, Eric Blake wrote: On 04/20/2017 04:03 PM, Chris Friesen wrote: Also, does the 4KB block size get "passed-through" to the guest somehow so that the guest knows it needs to use 4KB blocks, or does that need to be explicitly specified via virtio-blk-pci.logical_

[Qemu-devel] question about block size and virtual disks

2017-04-20 Thread Chris Friesen
Hi, Suppose the host has a physical disk that only supports 4KB access, with no 512B fallback. If we boot a guest with "cache=none", does the guest need to be able to handle disks with 4KB blocks or is qemu able to handle guests trying to access the disk with 512B granularity? Also, does
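
For what it's worth, the block size can also be stated explicitly on the device; a minimal sketch using virtio-blk-pci's block-size properties (the drive id and backing path are illustrative):

    -drive file=/dev/sdb,format=raw,if=none,cache=none,id=drive0 \
    -device virtio-blk-pci,drive=drive0,logical_block_size=4096,physical_block_size=4096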

Re: [Qemu-devel] hitting intermittent issue with live migration from qemu-kvm-ev 2.3.0 to qemu-kvm-ev 2.6.0

2017-04-04 Thread Chris Friesen
On 04/04/2017 07:56 AM, Ladi Prosek wrote: On Mon, Apr 3, 2017 at 9:11 PM, Stefan Hajnoczi <stefa...@gmail.com> wrote: On Fri, Mar 31, 2017 at 02:12:36PM -0600, Chris Friesen wrote: Initially we have a bunch of guests running on compute-2 (which is running qemu-kvm-ev 2.3.0

[Qemu-devel] hitting intermittent issue with live migration from qemu-kvm-ev 2.3.0 to qemu-kvm-ev 2.6.0

2017-03-31 Thread Chris Friesen
Hi, I'm running into an issue with live-migrating a guest from a host running qemu-kvm-ev 2.3.0-31 to a host running qemu-kvm-ev 2.6.0-27.1. This is a libvirt-tunnelled migration, in the context of upgrading an OpenStack install to newer software. The source host is running CentOS 7.2.1511,

Re: [Qemu-devel] kvm bug in __rmap_clear_dirty during live migration

2017-02-24 Thread Chris Friesen
On 02/23/2017 08:23 PM, Herongguang (Stephen) wrote: On 2017/2/22 22:43, Paolo Bonzini wrote: Hopefully Gaohuai and Rongguang can help with this too. Paolo . Yes, we are looking into and testing this. I think this can result in any memory corruption, if VM1 writes its PML buffer into

Re: [Qemu-devel] kvm bug in __rmap_clear_dirty during live migration

2017-02-22 Thread Chris Friesen
On 02/22/2017 05:15 AM, Paolo Bonzini wrote: On 22/02/2017 04:08, Chris Friesen wrote: On 02/19/2017 10:38 PM, Han, Huaitong wrote: Hi, Gaohuai I tried to debug the problem, and I found the indirect cause may be that the rmap value is not cleared when KVM mmu page is freed. I have read code

Re: [Qemu-devel] kvm bug in __rmap_clear_dirty during live migration

2017-02-21 Thread Chris Friesen
On 02/19/2017 10:38 PM, Han, Huaitong wrote: Hi Gaohuai, I tried to debug the problem, and I found the indirect cause may be that the rmap value is not cleared when a KVM mmu page is freed. I have read the code but haven't found the root cause. Can you reliably reproduce the issue? Many guesses need to be

Re: [Qemu-devel] kvm bug in __rmap_clear_dirty during live migration

2017-02-10 Thread Chris Friesen
Herongguang (Stephen) wrote: Hi, Chris Friesen, did you solve the problem? On 2017/2/9 22:37, Herongguang (Stephen) wrote: Hi. I had a problem when I repeatedly live migrated a vm between two compute nodes. The phenomenon was that the KVM module crashed and then the host rebooted. Howeve

Re: [Qemu-devel] [libvirt] inconsistent handling of "qemu64" CPU model

2016-05-26 Thread Chris Friesen
On 05/26/2016 04:41 AM, Jiri Denemark wrote: The qemu64 CPU model contains svm and thus libvirt will always consider it incompatible with any Intel CPUs (which have vmx instead of svm). On the other hand, QEMU by default ignores features that are missing in the host CPU and has no problem using

[Qemu-devel] inconsistent handling of "qemu64" CPU model

2016-05-25 Thread Chris Friesen
Hi, I'm not sure where the problem lies, hence the CC to both lists. Please copy me on the reply. I'm playing with OpenStack's devstack environment on an Ubuntu 14.04 host with a Celeron 2961Y CPU. (libvirt detects it as a Nehalem with a bunch of extra features.) Qemu gives version 2.2.0

[Qemu-devel] block IO thread creation question

2016-03-24 Thread Chris Friesen
Hi, Could someone point me at the code for creating threads to handle block IO? I'm seeing up to 30 threads per virtual disk, which seems high. In case it's related, the block devices are iSCSI with the host acting as the initiator and exposing block devices to qemu. I'm particularly

[Qemu-devel] high outage times for qemu virtio network links during live migration, trying to debug

2016-01-26 Thread Chris Friesen
Hi, I'm using libvirt (1.2.12) with qemu (2.2.0) in the context of OpenStack. If I live-migrate a guest with virtio network interfaces, I see a ~1200msec delay in processing the network packets, and several hundred of them get dropped. I get the dropped packets, but I'm not sure why the

Re: [Qemu-devel] high outage times for qemu virtio network links during live migration, trying to debug

2016-01-26 Thread Chris Friesen
On 01/26/2016 10:50 AM, Paolo Bonzini wrote: On 26/01/2016 17:41, Chris Friesen wrote: I'm using libvirt (1.2.12) with qemu (2.2.0) in the context of OpenStack. If I live-migrate a guest with virtio network interfaces, I see a ~1200msec delay in processing the network packets, and several

Re: [Qemu-devel] high outage times for qemu virtio network links during live migration, trying to debug

2016-01-26 Thread Chris Friesen
On 01/26/2016 11:31 AM, Paolo Bonzini wrote: On 26/01/2016 18:21, Chris Friesen wrote: My question is, why doesn't qemu continue processing virtio packets while the dirty page scanning and memory transfer over the network is proceeding? QEMU (or vhost) _are_ processing virtio traffic

Re: [Qemu-devel] high outage times for qemu virtio network links during live migration, trying to debug

2016-01-26 Thread Chris Friesen
On 01/26/2016 10:45 AM, Daniel P. Berrange wrote: On Tue, Jan 26, 2016 at 10:41:12AM -0600, Chris Friesen wrote: My question is, why doesn't qemu continue processing virtio packets while the dirty page scanning and memory transfer over the network is proceeding

[Qemu-devel] [Bug 1441781] Re: qemuProcessSetEmulatorAffinity() called before emulator process actually running

2015-04-08 Thread Chris Friesen
** Description changed: - In qemuProcessStart() the code looks like this: + When running on a 24-CPU host and using vCPU and emulator pinning I've + seen cases where the specified emulator pinning isn't applied as + expected. - VIR_DEBUG("Setting cgroup for emulator (if required)"); - if

[Qemu-devel] [Bug 1441781] Re: qemuProcessSetEmulatorAffinity() not behaving as expected

2015-04-08 Thread Chris Friesen
** Summary changed: - qemuProcessSetEmulatorAffinity() called before emulator process actually running + qemuProcessSetEmulatorAffinity() not behaving as expected ** Description changed: When running on a 24-CPU host and using vCPU and emulator pinning I've seen cases where the specified

[Qemu-devel] [Bug 1441775] [NEW] possible null pointer dereference in qemuDomainPinEmulator()

2015-04-08 Thread Chris Friesen
Public bug reported: In src/qemu/qemu_driver.c the qemuDomainPinEmulator() routine basically does this: virDomainObjPtr vm; if (!(vm = qemuDomObjFromDomain(dom))) goto cleanup; cleanup: qemuDomObjEndAPI(vm); If vm is NULL, then this will crash. The bug seems to have
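
A self-contained sketch of the failure pattern being described; the names below are stand-ins rather than the actual libvirt code, with the NULL guard on the cleanup path that the report implies is missing:

    #include <stdio.h>
    #include <stdlib.h>

    struct vm_obj { int id; };

    /* Stand-in for the domain lookup: returns NULL on failure. */
    static struct vm_obj *lookup_vm(int id)
    {
        if (id < 0)
            return NULL;
        struct vm_obj *vm = malloc(sizeof(*vm));
        if (vm)
            vm->id = id;
        return vm;
    }

    /* Stand-in for the end-of-API call: dereferences vm, so reaching it
     * with vm == NULL is exactly the crash described in the report. */
    static void end_api(struct vm_obj *vm)
    {
        printf("releasing vm %d\n", vm->id);
        free(vm);
    }

    static int pin_emulator(int id)
    {
        struct vm_obj *vm;
        int ret = -1;

        if (!(vm = lookup_vm(id)))
            goto cleanup;          /* vm is NULL on this path */

        /* ... pinning work would go here ... */
        ret = 0;

     cleanup:
        if (vm)                    /* guard that avoids the NULL dereference */
            end_api(vm);
        return ret;
    }

    int main(void)
    {
        return pin_emulator(-1) == -1 ? 0 : 1;
    }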

[Qemu-devel] [Bug 1441781] [NEW] qemuProcessSetEmulatorAffinity() called before emulator process actually running

2015-04-08 Thread Chris Friesen
Public bug reported: In qemuProcessStart() the code looks like this: VIR_DEBUG("Setting cgroup for emulator (if required)"); if (qemuSetupCgroupForEmulator(vm) < 0) goto cleanup; VIR_DEBUG("Setting affinity of emulator threads"); if (qemuProcessSetEmulatorAffinity(vm) < 0)

Re: [Qemu-devel] is there a limit on the number of in-flight I/O operations?

2014-08-26 Thread Chris Friesen
On 08/25/2014 03:50 PM, Chris Friesen wrote: I think I might have a glimmering of what's going on. Someone please correct me if I get something wrong. I think that VIRTIO_PCI_QUEUE_MAX doesn't really mean anything with respect to max inflight operations, and neither does virtio-blk calling

Re: [Qemu-devel] is there a limit on the number of in-flight I/O operations?

2014-08-25 Thread Chris Friesen
On 08/23/2014 01:56 AM, Benoît Canet wrote: The Friday 22 Aug 2014 à 18:59:38 (-0600), Chris Friesen wrote : On 07/21/2014 10:10 AM, Benoît Canet wrote: The Monday 21 Jul 2014 à 09:35:29 (-0600), Chris Friesen wrote : On 07/21/2014 09:15 AM, Benoît Canet wrote: The Monday 21 Jul 2014 à 08:59

Re: [Qemu-devel] is there a limit on the number of in-flight I/O operations?

2014-08-25 Thread Chris Friesen
On 08/25/2014 09:12 AM, Chris Friesen wrote: I set up another test, checking the inflight value every second. Running just dd if=/dev/zero of=testfile2 bs=1M count=700 oflag=nocache gave a bit over 100 inflight requests. If I simultaneously run dd if=testfile of=/dev/null bs=1M count=700
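
For reference, one way to sample in-flight requests on the host once a second is the sysfs counter (the device name here is only an example):

    # prints "<reads in flight> <writes in flight>" once per second
    while true; do cat /sys/block/sdb/inflight; sleep 1; done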

Re: [Qemu-devel] is there a limit on the number of in-flight I/O operations?

2014-08-25 Thread Chris Friesen
On 08/23/2014 01:56 AM, Benoît Canet wrote: The Friday 22 Aug 2014 à 18:59:38 (-0600), Chris Friesen wrote : On 07/21/2014 10:10 AM, Benoît Canet wrote: The Monday 21 Jul 2014 à 09:35:29 (-0600), Chris Friesen wrote : On 07/21/2014 09:15 AM, Benoît Canet wrote: The Monday 21 Jul 2014 à 08:59

Re: [Qemu-devel] is there a limit on the number of in-flight I/O operations?

2014-08-22 Thread Chris Friesen
On 07/21/2014 10:10 AM, Benoît Canet wrote: The Monday 21 Jul 2014 à 09:35:29 (-0600), Chris Friesen wrote : On 07/21/2014 09:15 AM, Benoît Canet wrote: The Monday 21 Jul 2014 à 08:59:45 (-0600), Chris Friesen wrote : On 07/19/2014 02:45 AM, Benoît Canet wrote: I think in the throttling

[Qemu-devel] [bug?] getting EAGAIN on connect() to virtio-serial unix socket on host

2014-08-05 Thread Chris Friesen
Hi, I'm running qemu 1.4.2 (soon planning on moving to 1.7). I'm running two instances of qemu with a virtio-serial channel each, exposed on the host via unix stream sockets. I've got an app that tries to connect() to both of them in turn. The connect() to the first socket fails with
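
If the EAGAIN turns out to be transient (e.g. the listen backlog is momentarily full), a client-side retry is one possible workaround; a sketch under that assumption, with an illustrative path and retry policy:

    #include <errno.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <sys/un.h>
    #include <unistd.h>

    /* Retry connect() a few times when a unix stream socket reports EAGAIN,
     * which can happen while the server side is not yet accept()ing. */
    static int connect_with_retry(const char *path, int attempts)
    {
        struct sockaddr_un addr;
        memset(&addr, 0, sizeof(addr));
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

        for (int i = 0; i < attempts; i++) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd < 0)
                return -1;
            if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
                return fd;                 /* connected */
            int err = errno;
            close(fd);
            if (err != EAGAIN && err != EINTR)
                return -1;                 /* hard failure, give up */
            usleep(100 * 1000);            /* back off briefly and retry */
        }
        errno = EAGAIN;
        return -1;
    }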

Re: [Qemu-devel] questions about host side of virtio-serial

2014-07-31 Thread Chris Friesen
On 07/31/2014 03:32 AM, Richard W.M. Jones wrote: On Wed, Jul 30, 2014 at 12:52:41PM -0600, Chris Friesen wrote: In particular, assuming that the host side is using a chardev mapped to a unix socket: 1) Is there any way for the host app to get information about whether or not the guest

[Qemu-devel] questions about host side of virtio-serial

2014-07-30 Thread Chris Friesen
Hi, I'm working on a native user of virtio-serial (ie, not going via the qemu guest agent). The information at http://www.linux-kvm.org/page/Virtio-serial_API does a good job of describing the guest side of things, but has very little information about the host side of things. In

Re: [Qemu-devel] is there a limit on the number of in-flight I/O operations?

2014-07-21 Thread Chris Friesen
On 07/19/2014 02:45 AM, Benoît Canet wrote: I think in the throttling case the number of in-flight operations is limited by the emulated hardware queue. Else requests would pile up and throttling would be ineffective. So this number should be around: #define VIRTIO_PCI_QUEUE_MAX 64 or something

Re: [Qemu-devel] is there a limit on the number of in-flight I/O operations?

2014-07-21 Thread Chris Friesen
On 07/21/2014 09:15 AM, Benoît Canet wrote: The Monday 21 Jul 2014 à 08:59:45 (-0600), Chris Friesen wrote : On 07/19/2014 02:45 AM, Benoît Canet wrote: I think in the throttling case the number of in flight operation is limited by the emulated hardware queue. Else request would pile up

Re: [Qemu-devel] is there a limit on the number of in-flight I/O operations?

2014-07-21 Thread Chris Friesen
On 07/21/2014 01:47 PM, Benoît Canet wrote: The Monday 21 Jul 2014 à 09:35:29 (-0600), Chris Friesen wrote : On 07/21/2014 09:15 AM, Benoît Canet wrote: The Monday 21 Jul 2014 à 08:59:45 (-0600), Chris Friesen wrote : On 07/19/2014 02:45 AM, Benoît Canet wrote: I think in the throttling

Re: [Qemu-devel] is there a limit on the number of in-flight I/O operations?

2014-07-19 Thread Chris Friesen
On 07/18/2014 11:49 PM, Paolo Bonzini wrote: Il 19/07/2014 00:48, Chris Friesen ha scritto: I forgot about -drive ...,iops_max=NNN. :) I'm not sure it's actually useful though, since it specifies the max IO operations per second, not the maximum number of in-flight operations. No, that's

Re: [Qemu-devel] is there a limit on the number of in-flight I/O operations?

2014-07-18 Thread Chris Friesen
On 07/18/2014 09:54 AM, Andrey Korolyov wrote: On Fri, Jul 18, 2014 at 6:58 PM, Chris Friesen chris.frie...@windriver.com wrote: Hi, I've recently run up against an interesting issue where I had a number of guests running and when I started doing heavy disk I/O on a virtio disk (backed via

Re: [Qemu-devel] is there a limit on the number of in-flight I/O operations?

2014-07-18 Thread Chris Friesen
On 07/18/2014 10:30 AM, Andrey Korolyov wrote: On Fri, Jul 18, 2014 at 8:26 PM, Chris Friesen chris.frie...@windriver.com wrote: On 07/18/2014 09:54 AM, Andrey Korolyov wrote: On Fri, Jul 18, 2014 at 6:58 PM, Chris Friesen chris.frie...@windriver.com wrote: Hi, I've recently run up against

Re: [Qemu-devel] is there a limit on the number of in-flight I/O operations?

2014-07-18 Thread Chris Friesen
On 07/18/2014 02:13 PM, Paolo Bonzini wrote: Il 18/07/2014 18:22, Chris Friesen ha scritto: On 07/18/2014 09:24 AM, Paolo Bonzini wrote: Il 18/07/2014 16:58, Chris Friesen ha scritto: I've recently run up against an interesting issue where I had a number of guests running and when I started

Re: [Qemu-devel] virtio-serial-pci very expensive during live migration

2014-06-19 Thread Chris Friesen
On 05/09/2014 02:19 AM, Paolo Bonzini wrote: Il 09/05/2014 02:53, Chris Friesen ha scritto: Turns out I spoke too soon. With the patch applied, it boots, but if I try to do a live migration both the source and destination crash. This happens for both the master branch as well as the stable

Re: [Qemu-devel] [bug] busy-loop in send_all()

2014-05-27 Thread Chris Friesen
On 05/26/2014 10:41 PM, Amit Shah wrote: On (Fri) 23 May 2014 [13:55:40], Stefan Hajnoczi wrote: On Thu, May 15, 2014 at 11:23:54AM -0600, Chris Friesen wrote: Looking at the implementation of send_all(), the core loop looks like: while (len > 0) { ret = write(fd, buf, len
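
For illustration only (not the actual qemu fix), here is one way such a loop can avoid spinning on a non-blocking descriptor: sleep in poll() until the fd is writable instead of retrying write() immediately:

    #include <errno.h>
    #include <poll.h>
    #include <sys/types.h>
    #include <unistd.h>

    /* Write the whole buffer; on EAGAIN, wait for POLLOUT rather than
     * busy-looping. Returns bytes written, or -1 on a real error. */
    static ssize_t send_all_polled(int fd, const void *buf, size_t len)
    {
        const char *p = buf;
        size_t remaining = len;

        while (remaining > 0) {
            ssize_t ret = write(fd, p, remaining);
            if (ret < 0) {
                if (errno == EINTR)
                    continue;
                if (errno == EAGAIN || errno == EWOULDBLOCK) {
                    struct pollfd pfd = { .fd = fd, .events = POLLOUT };
                    poll(&pfd, 1, -1);     /* block until writable */
                    continue;
                }
                return -1;
            }
            p += ret;
            remaining -= ret;
        }
        return len - remaining;
    }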

[Qemu-devel] [bug] busy-loop in send_all()

2014-05-15 Thread Chris Friesen
Hi, I've run into a situation that seems like a bug. I'm using qemu 1.4.2 (with additional patches) from within openstack. I'm using virtio-serial-pci to provide a channel between the guest and host. On occasion when doing suspend/resume I run into a case where the main qemu thread ends up

Re: [Qemu-devel] virtio-serial-pci very expensive during live migration

2014-05-08 Thread Chris Friesen
On 05/08/2014 07:30 AM, Amit Shah wrote: On (Thu) 08 May 2014 [15:14:26], Paolo Bonzini wrote: Il 08/05/2014 15:02, Amit Shah ha scritto: I tried the patch below. Unfortunately it seems to cause qemu to crash. This doesn't remove the memory_region_transaction_begin() and _commit() from

Re: [Qemu-devel] virtio-serial-pci very expensive during live migration

2014-05-08 Thread Chris Friesen
On 05/08/2014 07:47 AM, Amit Shah wrote: Chris, I just tried a simple test this way: ./x86_64-softmmu/qemu-system-x86_64 -device virtio-serial-pci -device virtserialport -S -monitor stdio -nographic and it didn't crash for me. This was with qemu.git. Perhaps you can try in a similar way.

Re: [Qemu-devel] virtio-serial-pci very expensive during live migration

2014-05-08 Thread Chris Friesen
On 05/08/2014 08:34 AM, Paolo Bonzini wrote: Il 08/05/2014 16:31, Chris Friesen ha scritto: The fact remains that qemu crashes when I apply the patch. I also tried patching it as below in virtio_pci_vmstate_change(). That would allow the VM to boot, but it would crash when I tried to do

Re: [Qemu-devel] virtio-serial-pci very expensive during live migration

2014-05-08 Thread Chris Friesen
On 05/08/2014 10:02 AM, Paolo Bonzini wrote: Il 08/05/2014 17:57, Chris Friesen ha scritto: The fact remains that qemu crashes when I apply the patch. I also tried patching it as below in virtio_pci_vmstate_change(). That would allow the VM to boot, but it would crash when I tried to do

Re: [Qemu-devel] virtio-serial-pci very expensive during live migration

2014-05-08 Thread Chris Friesen
On 05/08/2014 09:40 AM, Chris Friesen wrote: On 05/08/2014 07:47 AM, Amit Shah wrote: Chris, I just tried a simple test this way: ./x86_64-softmmu/qemu-system-x86_64 -device virtio-serial-pci -device virtserialport -S -monitor stdio -nographic and it didn't crash for me

Re: [Qemu-devel] virtio-serial-pci very expensive during live migration

2014-05-08 Thread Chris Friesen
On 05/08/2014 07:44 PM, ChenLiang wrote: Hi, I have tested the patch on qemu.git; qemu crashed while the vm was booting. The backtrace is: Program received signal SIGABRT, Aborted. [Switching to Thread 0x7f6bf67f9700 (LWP 9740)] 0x7f6bfacb2b55 in raise () from /lib64/libc.so.6 (gdb) bt

Re: [Qemu-devel] virtio-serial-pci very expensive during live migration

2014-05-07 Thread Chris Friesen
On 05/07/2014 12:39 AM, Paolo Bonzini wrote: Il 06/05/2014 22:01, Chris Friesen ha scritto: It seems like the main problem is that we loop over all the queues, calling virtio_pci_set_host_notifier_internal() on each of them. That in turn calls memory_region_add_eventfd(), which calls

Re: [Qemu-devel] qemu leaving unix sockets behind after VM is shut down

2014-05-06 Thread Chris Friesen
On 05/06/2014 07:39 AM, Stefan Hajnoczi wrote: On Tue, Apr 01, 2014 at 02:34:58PM -0600, Chris Friesen wrote: When running qemu with something like this -device virtio-serial \ -chardev socket,path=/tmp/foo,server,nowait,id=foo \ -device virtserialport,chardev=foo,name=host.port.0 the VM

[Qemu-devel] virtio-serial-pci very expensive during live migration

2014-05-06 Thread Chris Friesen
Hi, I recently made the unfortunate discovery that virtio-serial-pci is quite expensive to stop/start during live migration. By default we support 32 ports, each of which uses 2 queues. In my case it takes 2-3ms per queue to disconnect on the source host, and another 2-3ms per queue to
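(For scale: 32 ports x 2 queues is 64 queues, so at 2-3 ms per queue that is about 128-192 ms just to disconnect on the source, and on the order of 250-400 ms for the stop/start pair if the destination side costs about the same.)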

[Qemu-devel] status of cpu hotplug work for x86_64?

2014-04-28 Thread Chris Friesen
Hi, I'm trying to figure out what the current status is for cpu hotplug and hot-remove on x86_64. As far as I can tell, it seems like currently there is a QMP cpu-add command but no matching remove...is that correct? At http://wiki.qemu.org/Features/CPUHotplug I found mention of CPU

[Qemu-devel] qemu leaving unix sockets behind after VM is shut down

2014-04-01 Thread Chris Friesen
When running qemu with something like this -device virtio-serial \ -chardev socket,path=/tmp/foo,server,nowait,id=foo \ -device virtserialport,chardev=foo,name=host.port.0 the VM starts up as expected and creates a socket at /tmp/foo as expected. However, when I shut down the VM the socket at

[Qemu-devel] [BUG] need to export variables in config.mak

2011-09-30 Thread Chris Friesen
We've been playing a bit with kvm-kmod-3.0b. We use a cross compile environment, and one of my coworkers noticed that the variables in config.mak weren't actually exported and so didn't actually have any effect. I think something like the following patch is required. Thanks, Chris Friesen

[Qemu-devel] [PATCH] need to export variables in config.mak

2011-09-30 Thread Chris Friesen
The variables being written to config.mak by configure need to be exported in order to take effect when building the package. The following patch fixes this in our environment. Signed-off-by: Chris Friesen chris.frie...@genband.com Index: kvm-kmod-3.0b/configure
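
As a rough illustration of the difference (the variable name is an example, not necessarily one kvm-kmod's configure writes): a plain assignment in config.mak is only visible to make itself, while an exported one also reaches sub-makes and the shell commands they spawn via the environment.

    # before: set for this make only
    CROSS_COMPILE=arm-linux-gnueabi-
    # after: exported, so recursive makes and build commands see it too
    export CROSS_COMPILE=arm-linux-gnueabi-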

Re: [Qemu-devel] [BUG] need to export variables in config.mak

2011-09-30 Thread Chris Friesen
On 09/30/2011 11:52 AM, Richard Henderson wrote: On 09/30/2011 10:07 AM, Chris Friesen wrote: We've been playing a bit with kvm-kmod-3.0b. We use a cross compile environment, and one of my coworkers noticed that the variables in config.mak weren't actually exported and so didn't actually have