On 21.02.2012 12:00, Gleb Natapov wrote:
On Tue, Feb 21, 2012 at 11:59:23AM +0100, Peter Lieven wrote:
On 21.02.2012 11:56, Gleb Natapov wrote:
On Tue, Feb 21, 2012 at 11:50:47AM +0100, Peter Lieven wrote:
I hope it will make Windows use TSC instead, but you can't be sure
about anything
On 21.02.2012 12:46, Gleb Natapov wrote:
On Tue, Feb 21, 2012 at 12:16:16PM +0100, Peter Lieven wrote:
On 21.02.2012 12:00, Gleb Natapov wrote:
On Tue, Feb 21, 2012 at 11:59:23AM +0100, Peter Lieven wrote:
On 21.02.2012 11:56, Gleb Natapov wrote:
On Tue, Feb 21, 2012 at 11:50:47AM +0100
insn_emulation and exists
On 21.02.2012 12:46, Gleb Natapov wrote:
On Tue, Feb 21, 2012 at 12:16:16PM +0100, Peter Lieven wrote:
On 21.02.2012 12:00, Gleb Natapov wrote:
On Tue, Feb 21, 2012 at 11:59:23AM +0100, Peter Lieven wrote:
On 21.02.2012 11:56, Gleb Natapov wrote:
On Tue, Feb 21, 2012
On 15.12.2010 12:53, Ulrich Obergfell wrote:
- Anthony Liguori anth...@codemonkey.ws wrote:
On 12/14/2010 06:09 AM, Ulrich Obergfell wrote:
[...]
Parts 1 through 4 of this RFC contain experimental source code which
I recently used to investigate the performance benefit. In a Linux
guest, I
Hi,
I came across an issue with a Windows 7 (32-bit) as well as with a
Windows 2008 R2 (64-bit) guest.
If I transfer a file from the VM via CIFS or FTP to a remote machine,
I get very poor read performance (around 13MB/s). The VM peaks at 100%
CPU and I see a lot of insn_emulations and all
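For what it's worth, emulation activity like this can be watched from the host via KVM's debugfs counters (a sketch, assuming debugfs is mounted at /sys/kernel/debug and you have root; the helper below just computes the delta between two readings):

```shell
# Helper: delta between two readings of a KVM debugfs counter.
# The real readings would come from, e.g.:
#   cat /sys/kernel/debug/kvm/insn_emulation   (root, debugfs mounted)
counter_delta() {
    echo $(( $2 - $1 ))
}

# Example with made-up readings taken some seconds apart:
counter_delta 123000 987000
```

A large delta while the transfer runs would confirm that the guest is hammering an emulated device (e.g. the ACPI PM timer) rather than doing useful work.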
On 20.02.2012 19:40, Gleb Natapov wrote:
On Mon, Feb 20, 2012 at 07:17:55PM +0100, Peter Lieven wrote:
Hi,
I came across an issue with a Windows 7 (32-bit) as well as with a
Windows 2008 R2 (64-bit) guest.
If I transfer a file from the VM via CIFS or FTP to a remote machine,
I get very poor
On 20.02.2012 20:04, Gleb Natapov wrote:
On Mon, Feb 20, 2012 at 08:40:08PM +0200, Gleb Natapov wrote:
On Mon, Feb 20, 2012 at 07:17:55PM +0100, Peter Lieven wrote:
Hi,
I came across an issue with a Windows 7 (32-bit) as well as with a
Windows 2008 R2 (64-bit) guest.
If I transfer a file
On 20.02.2012 20:04, Gleb Natapov wrote:
On Mon, Feb 20, 2012 at 08:40:08PM +0200, Gleb Natapov wrote:
On Mon, Feb 20, 2012 at 07:17:55PM +0100, Peter Lieven wrote:
Hi,
I came across an issue with a Windows 7 (32-bit) as well as with a
Windows 2008 R2 (64-bit) guest.
If I transfer a file
Anyone?
Peter Lieven wrote:
Hi,
i recently started updating our VMs to qemu-kvm 1.0. Since then I see
that the USB tablet device (used as pointer device for accurate
mouse positioning) becomes unavailable after live migration.
If I migrate a Windows 7 VM a few times, it reliably stops using
On 11.02.2012 at 09:55, Corentin Chary wrote:
On Thu, Feb 9, 2012 at 7:08 PM, Peter Lieven p...@dlh.net wrote:
Hi,
is anyone aware if there are still problems when enabling the threaded vnc
server?
I saw some VMs crashing when using a qemu-kvm build with
--enable-vnc-thread.
qemu-kvm
Hi,
is anyone aware if there are still problems when enabling the threaded
vnc server?
I saw some VMs crashing when using a qemu-kvm build with
--enable-vnc-thread.
qemu-kvm-1.0[22646]: segfault at 0 ip 7fec1ca7ea0b sp
7fec19d056d0 error 6 in libz.so.1.2.3.3[7fec1ca75000+16000]
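As an aside, the fault location inside libz can be derived from that kernel log line (a sketch; the values are copied from the segfault message above, and addr2line only helps if debug symbols are installed):

```shell
# ip and base are the instruction pointer and the mapping base from the
# "segfault at ... in libz.so..." line above (hex, no 0x prefix in the log).
ip=7fec1ca7ea0b
base=7fec1ca75000
offset=$(printf '0x%x' $(( 0x$ip - 0x$base )))
echo "$offset"
# then e.g.: addr2line -e /lib/libz.so.1.2.3.3 -f "$offset"
```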
Hi,
i recently started updating our VMs to qemu-kvm 1.0. Since then I see
that the USB tablet device (used as pointer device for accurate
mouse positioning) becomes unavailable after live migration.
If I migrate a Windows 7 VM a few times, it reliably stops using
the USB tablet and fails back
On 04.01.2012 02:38, Shu Ming wrote:
On 2012-1-4 2:04, Peter Lieven wrote:
Hi all,
is there any known issue when migrating VMs with a lot of memory
(e.g. 32GB).
It seems that there is some portion in the migration code which takes
too much time when the number
of memory pages is large
On 04.01.2012 12:05, Paolo Bonzini wrote:
On 01/04/2012 11:53 AM, Peter Lieven wrote:
On 04.01.2012 02:38, Shu Ming wrote:
On 2012-1-4 2:04, Peter Lieven wrote:
Hi all,
is there any known issue when migrating VMs with a lot of memory
(e.g. 32GB).
It seems that there is some portion
On 04.01.2012 12:28, Paolo Bonzini wrote:
On 01/04/2012 12:22 PM, Peter Lieven wrote:
There were patches to move RAM migration to a separate thread. The
problem is that they broke block migration.
However, asynchronous NBD is in and streaming will follow suit soon.
As soon as we have those two
On 04.01.2012 13:28, Paolo Bonzini wrote:
On 01/04/2012 12:42 PM, Peter Lieven wrote:
ok, then i misunderstood the ram blocks thing. i thought the guest ram
would consist of a collection of ram blocks.
then let me describe it differently. would it make sense to process
bigger portions
On 04.01.2012 15:14, Paolo Bonzini wrote:
don't hold your breath
On 04.01.2012 15:14, Paolo Bonzini wrote:
On 01/04/2012 02:08 PM, Peter Lieven wrote:
thus my only option at the moment is to limit the runtime of the while
loop in stage 2 or
are there any post 1.0 patches in git that might already help?
No; even though (as I said) people are aware
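The idea of limiting the runtime of the stage-2 while loop can be sketched generically as a deadline check per pass (illustrative shell only, not qemu code; all names here are made up):

```shell
# Bound the wall-clock time spent in one pass of a transfer loop, then
# yield back to the caller, the way a bounded stage-2 pass would keep
# the monitor and VNC responsive during RAM migration.
max_ms=500
start=$(date +%s%N)
sent=0
while [ "$sent" -lt 1000 ]; do
    sent=$((sent + 100))                       # stand-in for "send one dirty page"
    now=$(date +%s%N)
    if [ $(( (now - start) / 1000000 )) -ge "$max_ms" ]; then
        break                                  # deadline hit: yield, resume next pass
    fi
done
echo "sent $sent pages this pass"
```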
Hi all,
is there any known issue when migrating VMs with a lot of memory
(e.g. 32GB).
It seems that there is some portion in the migration code which takes
too much time when the number
of memory pages is large.
Symptoms are: unresponsive VNC connection, VM stalls, and also
unresponsive
Hi,
when I specify something like
qemu -boot order=dc -cdrom image.iso -drive file=img.raw,if=virtio,boot=yes
or
qemu -boot order=n -cdrom image.iso -drive file=img.raw,if=virtio,boot=yes
with qemu-kvm 0.15.0
it will always directly boot from the hard drive and not from cdrom or
network.
is
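A workaround worth trying (my assumption: bootindex, available since qemu 0.13, is honored where the legacy boot=yes drive flag is not) is to give each device an explicit boot priority instead of relying on -boot order:

```shell
# Sketch: explicit per-device boot priority (paths reused from above).
qemu-kvm -m 1024 \
  -drive file=image.iso,media=cdrom,if=none,id=cd0 \
  -device ide-cd,drive=cd0,bootindex=1 \
  -drive file=img.raw,if=none,id=hd0 \
  -device virtio-blk-pci,drive=hd0,bootindex=2
```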
On 09.03.2011 08:26, Stefan Weil wrote:
On 08.03.2011 23:53, Peter Lieven wrote:
Hi,
during testing of qemu-kvm-0.14.0 I can reproduce the following
segfault. I have seen a similar crash already in 0.13.0, but had no
time to debug.
My guess is that this segfault is related to the threaded
Hi,
i'm currently testing qemu-kvm 0.14.0 in conjunction with Linux 2.6.38
on the host system.
As there are some old kernels out there that support kvm_clock but not
reliably, we used to run some
of them with clocksource=acpi_pm.
However, on this new combination of qemu-kvm and linux kernel I see
On 14.03.2011 10:19, Corentin Chary wrote:
On Thu, Mar 10, 2011 at 3:13 PM, Corentin Chary
corentin.ch...@gmail.com wrote:
The threaded VNC servers messed up with QEMU fd handlers without
any kind of locking, and that can cause some nasty race conditions.
Using qemu_mutex_lock_iothread()
On 15.03.2011 17:55, Peter Lieven wrote:
On 14.03.2011 10:19, Corentin Chary wrote:
On Thu, Mar 10, 2011 at 3:13 PM, Corentin Chary
corentin.ch...@gmail.com wrote:
The threaded VNC servers messed up with QEMU fd handlers without
any kind of locking, and that can cause some nasty race
On 14.03.2011 at 10:19, Corentin Chary wrote:
On Thu, Mar 10, 2011 at 3:13 PM, Corentin Chary
corentin.ch...@gmail.com wrote:
The threaded VNC servers messed up with QEMU fd handlers without
any kind of locking, and that can cause some nasty race conditions.
Using
On 10.03.2011 13:59, Corentin Chary wrote:
The threaded VNC servers messed up with QEMU fd handlers without
any kind of locking, and that can cause some nasty race conditions.
Using qemu_mutex_lock_iothread() won't work because vnc_dpy_cpy(),
which will wait for the current job queue to finish,
On 09.03.2011 at 08:26, Stefan Weil wrote:
On 08.03.2011 23:53, Peter Lieven wrote:
Hi,
during testing of qemu-kvm-0.14.0 I can reproduce the following segfault. I
have seen a similar crash already in 0.13.0, but had no time to debug.
My guess is that this segfault is related
On 09.03.2011 at 08:37, Jan Kiszka wrote:
On 2011-03-08 23:53, Peter Lieven wrote:
Hi,
during testing of qemu-kvm-0.14.0 I can reproduce the following segfault. I
have seen a similar crash already in 0.13.0, but had no time to debug.
My guess is that this segfault is related
On 09.03.2011 at 08:37, Jan Kiszka wrote:
On 2011-03-08 23:53, Peter Lieven wrote:
Hi,
during testing of qemu-kvm-0.14.0 I can reproduce the following segfault. I
have seen a similar crash already in 0.13.0, but had no time to debug.
My guess is that this segfault is related
On 09.03.2011 at 11:20, Jan Kiszka wrote:
On 2011-03-09 11:16, Peter Lieven wrote:
On 09.03.2011 at 08:37, Jan Kiszka wrote:
On 2011-03-08 23:53, Peter Lieven wrote:
Hi,
during testing of qemu-kvm-0.14.0 I can reproduce the following segfault.
I have seen a similar crash already
On 09.03.2011 at 11:41, Corentin Chary wrote:
The threaded VNC servers messed up with QEMU fd handlers without
any kind of locking, and that can cause some nasty race conditions.
The IO-Thread provides appropriate locking primitives to avoid that.
This patch makes CONFIG_VNC_THREAD
On 09.03.2011 at 12:25, Jan Kiszka wrote:
On 2011-03-09 12:05, Stefan Hajnoczi wrote:
On Wed, Mar 9, 2011 at 10:57 AM, Corentin Chary
corentin.ch...@gmail.com wrote:
The threaded VNC servers messed up with QEMU fd handlers without
any kind of locking, and that can cause some nasty race
On 09.03.2011 at 12:42, Jan Kiszka wrote:
On 2011-03-09 11:41, Corentin Chary wrote:
The threaded VNC servers messed up with QEMU fd handlers without
any kind of locking, and that can cause some nasty race conditions.
The IO-Thread provides appropriate locking primitives to avoid that.
Hi,
during testing of qemu-kvm-0.14.0 I can reproduce the following segfault. I
have seen a similar crash already in 0.13.0, but had no time to debug.
My guess is that this segfault is related to the threaded vnc server which was
introduced in qemu 0.13.0. The bug is only triggerable if a vnc
Hi,
is there any known issue when migrating a WinXP SP3 guest with qemu-kvm 0.13.0
or qemu-kvm-0.12.5?
If I migrate such a guest with a Realtek rtl8139 network device and a USB
mouse tablet,
after migration the USB tablet doesn't work any more and the network stalls.
I have seen the mouse
moving
Hi,
as reported several times, guests with virtio-net crash under heavy network load.
There have been several patches submitted to the upstream kernel e.g:
virtio_net: do not reschedule rx refill forever
virtio_net: fix oom handling on tx
udp/tcp/..: use limited socket backlog
but that did not
On 27.12.2010 04:51, Stefan Hajnoczi wrote:
On Sun, Dec 26, 2010 at 9:21 PM, Peter Lieven p...@dlh.net wrote:
On 25.12.2010 at 20:02, Peter Lieven wrote:
On 23.12.2010 at 03:42, Stefan Hajnoczi wrote:
On Wed, Dec 22, 2010 at 10:02 AM, Peter Lieven p...@dlh.net wrote:
If I start a VM
On 25.12.2010 at 20:02, Peter Lieven wrote:
On 23.12.2010 at 03:42, Stefan Hajnoczi wrote:
On Wed, Dec 22, 2010 at 10:02 AM, Peter Lieven p...@dlh.net wrote:
If I start a VM with the following parameters
qemu-kvm-0.13.0 -m 2048 -smp 2 -monitor tcp:0:4014,server,nowait -vnc :14
-name
On 27.12.2010 at 04:51, Stefan Hajnoczi wrote:
On Sun, Dec 26, 2010 at 9:21 PM, Peter Lieven p...@dlh.net wrote:
On 25.12.2010 at 20:02, Peter Lieven wrote:
On 23.12.2010 at 03:42, Stefan Hajnoczi wrote:
On Wed, Dec 22, 2010 at 10:02 AM, Peter Lieven p...@dlh.net wrote:
If I
On 23.12.2010 at 03:42, Stefan Hajnoczi wrote:
On Wed, Dec 22, 2010 at 10:02 AM, Peter Lieven p...@dlh.net wrote:
If I start a VM with the following parameters
qemu-kvm-0.13.0 -m 2048 -smp 2 -monitor tcp:0:4014,server,nowait -vnc :14
-name 'ubuntu.test' -boot order=dc,menu=off -cdrom
Hi,
I came across a strange issue when updating from qemu-kvm 0.12.5 to
qemu-kvm-0.13.0
If I start a VM with the following parameters
qemu-kvm-0.13.0 -m 2048 -smp 2 -monitor tcp:0:4014,server,nowait -vnc :14 -name
'ubuntu.test' -boot order=dc,menu=off -cdrom ubuntu-10.04.1-desktop-amd64.iso
On 04.06.2010 at 02:02, Bruce Rogers wrote:
On 6/3/2010 at 04:51 PM, Greg KH g...@kroah.com wrote:
On Thu, Jun 03, 2010 at 04:17:34PM -0600, Bruce Rogers wrote:
On 6/3/2010 at 03:03 PM, Greg KH g...@kroah.com wrote:
On Thu, Jun 03, 2010 at 01:38:31PM -0600, Bruce Rogers wrote:
Peter Lieven wrote:
On 03.10.2010 at 01:48, Peter Lieven wrote:
Hi,
running 0.12.5 with a Ubuntu LTS 10.04.1 64-bit kernel I see the following
error after a few days with severe load (constant 100-200mbps input).
# uname -a
Linux ubuntu-newsfeed 2.6.32-24-server #43-Ubuntu SMP Thu Sep
On 03.10.2010 at 01:48, Peter Lieven wrote:
Hi,
running 0.12.5 with a Ubuntu LTS 10.04.1 64-bit kernel I see the following
error after a few days with severe load (constant 100-200mbps input).
# uname -a
Linux ubuntu-newsfeed 2.6.32-24-server #43-Ubuntu SMP Thu Sep 16 16:05:42 UTC
Hi,
running 0.12.5 with a Ubuntu LTS 10.04.1 64-bit kernel I see the following
error after a few days with severe load (constant 100-200mbps input).
# uname -a
Linux ubuntu-newsfeed 2.6.32-24-server #43-Ubuntu SMP Thu Sep 16 16:05:42 UTC
2010 x86_64 GNU/Linux
Oct 2 10:09:35 ubuntu-newsfeed
On 25.09.2010 at 20:11, Peter Lieven wrote:
On 25.09.2010 at 17:58, Christoph Hellwig wrote:
On Sat, Sep 25, 2010 at 05:40:34PM +0200, Peter Lieven wrote:
On 25.09.2010 at 17:37, Christoph Hellwig wrote:
FYI, qemu 0.12.2 is missing:
you mean 0.12.4 not 0.12.2, don't you?
Yes
Hi all,
we experience filesystem corruption using virtio-blk on some guest systems
together with XFS. We still use qemu-kvm 0.12.4.
Does someone remember if there has been a fix submitted meanwhile?
It seems that 64-bit Ubuntu LTS 10.04.1 is affected as well as an older
openSuse 11.1 system
On 25.09.2010 at 16:44, Stefan Hajnoczi wrote:
On Sat, Sep 25, 2010 at 2:43 PM, Peter Lieven p...@dlh.net wrote:
we experience filesystem corruption using virtio-blk on some guest systems
together with XFS. We still use qemu-kvm 0.12.4.
[...]
It seems that 64-bit Ubuntu LTS 10.04.1
On 25.09.2010 at 17:37, Christoph Hellwig wrote:
FYI, qemu 0.12.2 is missing:
you mean 0.12.4 not 0.12.2, don't you?
block: fix sector comparism in multiwrite_req_compare
which in the past was very good at triggering XFS guest corruption.
Please try with the patch applied or
On 25.09.2010 at 17:58, Christoph Hellwig wrote:
On Sat, Sep 25, 2010 at 05:40:34PM +0200, Peter Lieven wrote:
On 25.09.2010 at 17:37, Christoph Hellwig wrote:
FYI, qemu 0.12.2 is missing:
you mean 0.12.4 not 0.12.2, don't you?
Yes, sorry. (but 0.12.2 is of course missing it, too
On 25.09.2010 at 17:58, Christoph Hellwig wrote:
On Sat, Sep 25, 2010 at 05:40:34PM +0200, Peter Lieven wrote:
On 25.09.2010 at 17:37, Christoph Hellwig wrote:
FYI, qemu 0.12.2 is missing:
you mean 0.12.4 not 0.12.2, don't you?
Yes, sorry. (but 0.12.2 is of course missing it, too
On 17.09.2010 at 13:36, Bernhard Kohl wrote:
On 16.09.2010 15:57, ext Peter Lieven wrote:
Hi,
I found the following assertion in my log files after a system reset was
executed
kvm: lsi_scsi: error: ORDERED queue not implemented
last message repeated 5 times
kvm: lsi_scsi: error
On 17.09.2010 at 14:30, Jan Kiszka wrote:
On 17.09.2010 14:26, Peter Lieven wrote:
On 17.09.2010 at 13:36, Bernhard Kohl wrote:
On 16.09.2010 15:57, ext Peter Lieven wrote:
Hi,
I found the following assertion in my log files after a system reset was
executed
kvm: lsi_scsi
On 16.09.2010 at 19:13, Michael Tokarev wrote:
On 16.09.2010 18:01, Peter Lieven wrote:
Hi,
with Ubuntu 10.04 Desktop guest in Qemu-KVM 0.12.4 I sometimes see a
condition where the CTRL key
is locked and cannot be released anymore. The only way I have found to
solve this is to restart
Hi,
Today we saw a Win2k8 VM with the first device IDE and the second device
SCSI crash reproducibly during boot.
Win2k8 32-bit Server was installed with IDE only. The second SCSI device was
added later.
Here is the output and commandline:
Aug 5 20:42:55 172.21.59.142 exec: /usr/bin/qemu-kvm-0.12.4
Avi Kivity wrote:
On 06/08/2010 04:44 PM, Peter Lieven wrote:
-cpu host is good if you have identical machines and don't plan to
add new ones.
i will likely add new ones, but my plan would be to use qemu64 and
then add all flags manually that
are common to all cpus in the pool.
would
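The "flags common to all cpus in the pool" can be computed mechanically; here is a POSIX-shell sketch with two hypothetical hosts' flag lists inlined (in practice each list would come from the `flags` line of /proc/cpuinfo on the respective host):

```shell
# Hypothetical flag lists from two hosts in the pool:
flags_a="fpu vme de pse tsc msr nx sse sse2"
flags_b="fpu vme de pse tsc msr sse sse2 sse3"

# Keep only the flags present on both hosts, preserving flags_a order.
common=""
for f in $flags_a; do
    case " $flags_b " in
        *" $f "*) common="$common$f " ;;
    esac
done
common=${common% }   # trim trailing space
echo "$common"
```

The resulting set could then be turned into a `-cpu qemu64,+flag,...` line by hand; flags present on only one host (nx, sse3 above) drop out.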
sorry, the subject should read 2.6.35-rc2
Peter Lieven wrote:
Hi,
I freshly installed kernel 2.6.35-rc2 using userspace qemu-kvm 0.12.4.
When I live migrate a 32-bit opensuse-11.2 VM, the incoming VM shows
the following error after the mem transfer has finished:
kvm: unhandled exit 8022
Jan Kiszka wrote:
Juan Quintela wrote:
Jan Kiszka jan.kis...@web.de wrote:
Juan Quintela wrote:
Lack of proper subsections. IDE is something like:
const VMStateDescription vmstate_ide_drive = {
    .version_id = 4,
};
static const VMStateDescription vmstate_bmdma = {
Avi Kivity wrote:
On 06/08/2010 12:13 PM, Peter Lieven wrote:
sorry, the subject should read 2.6.35-rc2
Peter Lieven wrote:
Hi,
I freshly installed kernel 2.6.35-rc2 using userspace qemu-kvm 0.12.4.
When I live migrate a 32-bit opensuse-11.2 VM, the incoming VM shows
the following error
Avi Kivity wrote:
On 06/08/2010 02:31 PM, Avi Kivity wrote:
On 06/08/2010 02:29 PM, Avi Kivity wrote:
On 06/08/2010 12:13 PM, Peter Lieven wrote:
sorry, the subject should read 2.6.35-rc2
Peter Lieven wrote:
Hi,
I freshly installed kernel 2.6.35-rc2 using userspace qemu-kvm
0.12.4.
When
Avi Kivity wrote:
On 06/08/2010 03:49 PM, Peter Lieven wrote:
And finally, perhaps you have NX disabled in the bios of one of the
machines?
What does 'dmesg | grep NX' show on both hosts?
nx was disabled on one of the nodes.
That explains the problem.
i will retry the case later
Avi Kivity wrote:
On 06/08/2010 03:49 PM, Peter Lieven wrote:
And finally, perhaps you have NX disabled in the bios of one of the
machines?
What does 'dmesg | grep NX' show on both hosts?
nx was disabled on one of the nodes.
That explains the problem.
i will retry the case later
Avi Kivity wrote:
On 06/08/2010 04:28 PM, Peter Lieven wrote:
i will retry the case later today and send info register output.
what is the recommended value for nx (and why)?
Enabled (so you get no-execute memory protection).
do you have a guideline which flags should be identical
Avi Kivity wrote:
On 06/08/2010 04:38 PM, Peter Lieven wrote:
Avi Kivity wrote:
On 06/08/2010 04:28 PM, Peter Lieven wrote:
i will retry the case later today and send info register output.
what is the recommended value for nx (and why)?
Enabled (so you get no-execute memory protection
On 06.06.2010 at 16:53, Faidon Liambotis wrote:
Peter Lieven wrote:
i was looking for a generic way to disable it in the hypervisor
without the need to touch every guest's kernel command line.
this would be easily revertible once it's all working as expected.
This is a huge hack
On 05.06.2010 at 03:34, Faidon Liambotis wrote:
Hi,
Peter Lieven wrote:
i was looking for a generic way to disable it in the hypervisor
without the need to touch every guest's kernel command line.
this would be easily revertible once it's all working as expected.
This is a huge hack
On 04.06.2010 at 17:31, Marcelo Tosatti wrote:
On Wed, Jun 02, 2010 at 05:13:30PM +0200, Peter Lieven wrote:
Hi,
I would like to get the latest stable qemu-kvm (0.12.4) to a usable state
regarding live migration.
Problems are fixed in git, but there is so much new stuff that has
On 04.06.2010 at 17:31, Marcelo Tosatti wrote:
On Wed, Jun 02, 2010 at 05:13:30PM +0200, Peter Lieven wrote:
Hi,
I would like to get the latest stable qemu-kvm (0.12.4) to a usable state
regarding live migration.
Problems are fixed in git, but there is so much new stuff that has
Avi Kivity wrote:
On 06/01/2010 04:57 PM, Peter Lieven wrote:
Avi Kivity wrote:
On 06/01/2010 04:12 PM, Peter Lieven wrote:
hi,
is it possible to avoid detection of clocksource=kvm_clock in a
linux guest by
patching the qemu-kvm binary?
i would like to be able to avoid a guest detecting
works well
BTW the guest is a tiny core linux image
Hi,
I would like to get the latest stable qemu-kvm (0.12.4) to a usable state
regarding live migration.
Problems are fixed in git, but there is so much new stuff that has not
been extensively tested and therefore I would like to stay at 0.12.4 at the
moment.
Therefore I would appreciate your help
Hi,
I just compiled the latest git to work on Bug #585113.
Unfortunately, I can't start the VMs with the device mappings
generated by our multipath
setup.
cmdline:
/usr/bin/qemu-kvm-devel -net none -drive
Kevin Wolf wrote:
On 01.06.2010 12:59, Peter Lieven wrote:
Hi,
I just compiled the latest git to work on Bug #585113.
Unfortunately, I can't start the VMs with the device mappings
generated by our multipath
setup.
cmdline:
/usr/bin/qemu-kvm-devel -net none -drive
file=/dev/mapper
Kevin Wolf wrote:
On 01.06.2010 12:59, Peter Lieven wrote:
Hi,
I just compiled the latest git to work on Bug #585113.
Unfortunately, I can't start the VMs with the device mappings
generated by our multipath
setup.
cmdline:
/usr/bin/qemu-kvm-devel -net none -drive
file=/dev/mapper
Kevin Wolf wrote:
On 01.06.2010 12:59, Peter Lieven wrote:
Hi,
I just compiled the latest git to work on Bug #585113.
Unfortunately, I can't start the VMs with the device mappings
generated by our multipath
setup.
cmdline:
/usr/bin/qemu-kvm-devel -net none -drive
file=/dev/mapper
Avi Kivity wrote:
On 06/01/2010 04:12 PM, Peter Lieven wrote:
hi,
is it possible to avoid detection of clocksource=kvm_clock in a linux
guest by
patching the qemu-kvm binary?
i would like to be able to avoid a guest detecting kvm-clock until
bug #584516
is fixed without modifying all guest
Avi Kivity wrote:
On 06/01/2010 04:57 PM, Peter Lieven wrote:
Avi Kivity wrote:
On 06/01/2010 04:12 PM, Peter Lieven wrote:
hi,
is it possible to avoid detection of clocksource=kvm_clock in a
linux guest by
patching the qemu-kvm binary?
i would like to be able to avoid a guest detecting
Avi Kivity wrote:
On 06/01/2010 05:06 PM, Peter Lieven wrote:
avi, i do not know what's going on. but if i supply -cpu xxx,-kvmclock
the guest
still uses kvm-clock, but it seems bug #584516 is not triggered...
that's weird...
I guess that bug was resolved in qemu-kvm.git. Likely 1a03675db1
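Whether the guest really picked kvm-clock can be verified inside the guest via sysfs (the path is the standard clocksource interface on recent 2.6 kernels; the fallback message is my own):

```shell
# Print the clocksource the guest kernel actually selected
# (kvm-clock, tsc, acpi_pm, ...).
f=/sys/devices/system/clocksource/clocksource0/current_clocksource
if [ -r "$f" ]; then
    cs=$(cat "$f")
else
    cs="clocksource sysfs not available"
fi
echo "$cs"
```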
Michael Tokarev wrote:
25.05.2010 15:03, Peter Lieven wrote:
Michael Tokarev wrote:
23.05.2010 13:55, Peter Lieven wrote:
[]
[64442.298521] irq 10: nobody cared (try booting with the irqpoll
option)
[]
[64442.299433] handlers:
[64442.299840] [ab80] (e1000_intr+0x0/0x190 [e1000
Michael Tokarev wrote:
23.05.2010 13:55, Peter Lieven wrote:
Hi,
after live migrating ubuntu 9.10 server (2.6.31-14-server) and suse
linux 10.1 (2.6.16.13-4-smp)
it happens sometimes that the guest runs into IRQ problems. i mention
these 2 guest OSes
since i have seen the error
On 18.05.2010 at 15:51, Alexander Graf wrote:
Peter Lieven wrote:
Alexander Graf wrote:
Peter Lieven wrote:
we are running on intel xeons here:
That might be the reason. Does it break when passing -no-kvm?
processor: 0
vendor_id: GenuineIntel
cpu family: 6
model
Hi,
after live migrating ubuntu 9.10 server (2.6.31-14-server) and suse linux 10.1
(2.6.16.13-4-smp)
it happens sometimes that the guest runs into IRQ problems. i mention these 2
guest OSes
since i have seen the error there. there are likely others around with the same
problem.
on the host i
On 19.05.2010 at 10:18, Peter Lieven wrote:
Kevin Wolf wrote:
On 19.05.2010 09:29, Christoph Hellwig wrote:
On Tue, May 18, 2010 at 03:22:36PM +0200, Kevin Wolf wrote:
I think it's stuck here in an endless loop:
while (laiocb->ret == -EINPROGRESS
On 23.05.2010 at 12:38, Michael Tokarev wrote:
23.05.2010 13:55, Peter Lieven wrote:
Hi,
after live migrating ubuntu 9.10 server (2.6.31-14-server) and suse linux
10.1 (2.6.16.13-4-smp)
it happens sometimes that the guest runs into IRQ problems. i mention these
2 guest OSes
since i
Kevin Wolf wrote:
On 19.05.2010 09:29, Christoph Hellwig wrote:
On Tue, May 18, 2010 at 03:22:36PM +0200, Kevin Wolf wrote:
I think it's stuck here in an endless loop:
while (laiocb->ret == -EINPROGRESS)
    qemu_laio_completion_cb(laiocb->ctx);
Can you verify this by
Hi,
we try to migrate some Suse Linux Enterprise 10 64-bit guests from abandoned
Virtual Iron by Iron Port to qemu-kvm 0.12.4. Unfortunately the guests
are not
very stable so far. With ACPI they end up in a kernel panic at boot time,
and without it
they occasionally hang during boot or shortly
Alexander Graf wrote:
On 18.05.2010, at 11:14, Peter Lieven wrote:
Hi,
we try to migrate some Suse Linux Enterprise 10 64-bit guests from abandoned
Virtual Iron by Iron Port to qemu-kvm 0.12.4. Unfortunately the guests are not
very stable so far. With ACPI they end up in a kernel panic
hi alex,
what 64-bit -cpu types do you suggest?
when i boot the kernel with nohpet it simply hangs shortly after
powersaved...
br,
peter
Alexander Graf wrote:
On 18.05.2010, at 11:57, Peter Lieven wrote:
Alexander Graf wrote:
On 18.05.2010, at 11:14, Peter Lieven wrote
wrote:
On 18.05.2010, at 11:57, Peter Lieven wrote:
Alexander Graf wrote:
On 18.05.2010, at 11:14, Peter Lieven wrote:
Hi,
we try to migrate some Suse Linux Enterprise 10 64-bit guests from abandoned
Virtual Iron by Iron Port to qemu-kvm 0.12.4. Unfortunately the guests
Alexander Graf wrote:
On 18.05.2010, at 12:12, Peter Lieven wrote:
hi alex,
what 64-bit -cpu types do you suggest?
For starters I'd say try -cpu host.
Alex
Alexander Graf wrote:
On 18.05.2010, at 13:01, Peter Lieven wrote:
hi alex,
unfortunately -cpu host, -cpu qemu64, -cpu core2duo, -cpu kvm64 (which should
be the default) don't help.
all other cpus available are 32-bit afaik.
as i said, if i boot with the kernel parameter nohpet, but acpi
hi kevin,
here is the backtrace of (hopefully) all threads:
^C
Program received signal SIGINT, Interrupt.
[Switching to Thread 0x7f39b72656f0 (LWP 10695)]
0x7f39b6c3ea94 in __lll_lock_wait () from /lib/libpthread.so.0
(gdb) thread apply all bt
Thread 2 (Thread 0x7f39b57b8950 (LWP 10698)):
) at /usr/src/qemu-kvm-0.12.4/vl.c:6252
Alexander Graf wrote:
On 18.05.2010, at 13:08, Peter Lieven wrote:
Alexander Graf wrote:
On 18.05.2010, at 13:01, Peter Lieven wrote:
hi alex,
unfortunately -cpu host, -cpu qemu64, -cpu core2duo, -cpu kvm64 (which should
be the default
Alexander Graf wrote:
Peter,
Peter Lieven wrote:
hi alex,
here is a backtrace of the sles 10 sp1 vm running on qemu-kvm 0.12.4.
hanging at boot.
Please do not top post! Seriously. One more time and I'll stop responding.
I tried to reproduce this locally on an openSUSE 11.1 system
Alexander Graf wrote:
Peter Lieven wrote:
we are running on intel xeons here:
That might be the reason. Does it break when passing -no-kvm?
processor: 0
vendor_id: GenuineIntel
cpu family: 6
model: 26
model name: Intel(R) Xeon(R) CPU L5530
Alexander Graf wrote:
Peter Lieven wrote:
Alexander Graf wrote:
Peter Lieven wrote:
we are running on intel xeons here:
That might be the reason. Does it break when passing -no-kvm?
processor: 0
vendor_id: GenuineIntel
cpu family: 6
model
Hi Qemu/KVM Devel Team,
if I create a VM with more than 2 hard disks and a CDROM image and want
to boot from CDROM, this does not work.
From my understanding at least 3 IDE drives + 1 IDE CDROM should work.
cmdline:
/usr/bin/qemu-kvm-0.12.4 -net none -drive
Hi,
I can confirm that reverting this patch makes live migration from 0.12.2
to 0.12.4
possible again.
Br,
Peter
Juan Quintela wrote:
Peter Lieven p...@dlh.net wrote:
Hi Qemu/KVM Devel Team,
Live Migration from a 0.12.2 qemu-kvm to a 0.12.3 (and 0.12.4)
does not work: load of migration
Hi Qemu/KVM Devel Team,
Live Migration from a 0.12.2 qemu-kvm to a 0.12.3 (and 0.12.4)
does not work: load of migration failed
Is there any way to find out, why exactly it fails? I have
a lot of VMs running on 0.12.2 and would like to migrate
them to 0.12.4
cmdline:
-net