On 10/04/2009 09:07 PM, Jan Kiszka wrote:
btw, instead of adding a new ioctl, perhaps it makes sense to define a
new KVM_VCPU_STATE structure that holds all current and future state
(with generous reserved space), instead of separating state over a dozen
ioctls.
OK, makes sense. With our
On Sun, Oct 04, 2009 at 04:01:02PM +0400, Michael Tokarev wrote:
> Marcelo Tosatti wrote:
>> Michael,
>>
>> Can you please give the patch below a try please? (without acpi_pm
>> timer or priority adjustments for the guest).
>
> Sure. I'll try it out in an hour or two, while I can experiment freely
The Buildbot has detected a new failure of disable_kvm_x86_64_out_of_tree on
qemu-kvm.
Full details are available at:
http://buildbot.b1-systems.de/qemu-kvm/builders/disable_kvm_x86_64_out_of_tree/builds/34
Buildbot URL: http://buildbot.b1-systems.de/qemu-kvm/
Buildslave for this Build: b1_qem
One possible long term goal is to stop adding
#define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
to source files to prefix modulename to logging output.
It might be useful to eventually have kernel.h
use a standard #define pr_fmt which includes KBUILD_MODNAME
instead of a blank or empty define.
Perh
Add pr_fmt(fmt) "pit: " fmt
Strip pit: prefixes from pr_debug
Signed-off-by: Joe Perches
---
arch/x86/kvm/i8254.c | 12 +++-
1 files changed, 7 insertions(+), 5 deletions(-)
diff --git a/arch/x86/kvm/i8254.c b/arch/x86/kvm/i8254.c
index 82ad523..fa83a15 100644
--- a/arch/x86/kvm/i8254
Richard Wurman wrote:
> So far I've been using files and/or LVM partitions for my VMs --
> basically by using virt-manager and modifying existing XML configs and
> just copying my VM files to be reused.
>
> I'm wondering how KVM storage pools work -- at first I thought it was
> something like KVM'
Avi Kivity wrote:
> On 10/04/2009 12:50 PM, Jan Kiszka wrote:
>> Avi Kivity wrote:
>>
>>> On 10/04/2009 10:59 AM, Jan Kiszka wrote:
>>>
Hi,
while preparing new IOCTLs to let user space query & set the yet
inaccessible NMI states (pending and masked) I also came across t
Avi Kivity wrote:
> On 10/03/2009 12:31 AM, Jan Kiszka wrote:
>> Give user space more flexibility /wrt its IOCTL order. So far updating
>> the rflags via KVM_SET_REGS ignored potentially set single-step flags.
>> Now they will be kept.
>>
>
>>
>> kvm_rip_write(vcpu, regs->rip);
>> -k
On 09/30/2009 08:58 AM, Jan Lübbe wrote:
Hi!
On Wed, 2009-08-26 at 13:29 +0300, Avi Kivity wrote:
From: Jan Kiszka
So far unprivileged guest callers running in ring 3 can issue, e.g., MMU
hypercalls. Normally, such callers cannot provide any hand-crafted MMU
command structure as it has to
On 10/04/2009 05:21 PM, Daniel Schwager wrote:
How long is this after the 'stop'?
30 seconds or 2 days ... the process takes CPU all the time
Can you take an oprofile run to see where it's spending its time?
--
error compiling committee.c: too many arguments to function
--
To uns
> > After 'stop'ing, the vm's are still using CPU load, as "top" will tell:
> >
> >   PID USER  PR NI  VIRT  RES  SHR S %CPU %MEM     TIME+ COMMAND
> > 25983 root  20  0  495m 407m 1876 R  8.9  2.5 228:09.15 qemu-system-x86
> It shouldn't do that.
ok.
> How long is this after the 's
On 10/02/2009 12:28 AM, Marcelo Tosatti wrote:
Disable paravirt MMU capability reporting, so that new (or rebooted)
guests switch to native operation.
Paravirt MMU is a burden to maintain and does not bring significant
advantages compared to shadow anymore.
Applied, thanks.
--
error compi
On 10/01/2009 01:47 PM, Daniel Schwager wrote:
If i send a signal STOP/CONT (kill -STOP or kill -CONT)
to the KVM-process, it looks like the kvm does not (sure ;-) use
any host CPU usage.
- Are there some side effects using this approach ?
(e.g. with networking, ...)
The monitor, vnc,
On 10/01/2009 12:32 PM, Daniel Schwager wrote:
Hi,
we are running some stopped (sending "stop" via kvm-monitor socket)
vm's on our system. My intention was to pause (stop) the vm's and
unpause (cont) them on demand (very fast, without time delay, within 2
seconds ..).
After 'stop'ing, the vm's
On 10/03/2009 12:31 AM, Jan Kiszka wrote:
Give user space more flexibility /wrt its IOCTL order. So far updating
the rflags via KVM_SET_REGS ignored potentially set single-step flags.
Now they will be kept.
kvm_rip_write(vcpu, regs->rip);
- kvm_x86_ops->set_rflags(vcpu, regs
On 10/03/2009 12:31 AM, Jan Kiszka wrote:
Much of so far vendor-specific code for setting up guest debug can
actually be handled by the generic code. This also fixes a minor deficit
in the SVM part /wrt processing KVM_GUESTDBG_ENABLE.
Applied both, thanks.
--
error compiling committee.c:
The Buildbot has detected a new failure of default_i386_out_of_tree on qemu-kvm.
Full details are available at:
http://buildbot.b1-systems.de/qemu-kvm/builders/default_i386_out_of_tree/builds/33
Buildbot URL: http://buildbot.b1-systems.de/qemu-kvm/
Buildslave for this Build: b1_qemu_kvm_2
Buil
The Buildbot has detected a new failure of default_x86_64_debian_5_0 on
qemu-kvm.
Full details are available at:
http://buildbot.b1-systems.de/qemu-kvm/builders/default_x86_64_debian_5_0/builds/94
Buildbot URL: http://buildbot.b1-systems.de/qemu-kvm/
Buildslave for this Build: b1_qemu_kvm_1
B
On 10/04/2009 02:16 PM, Izik Eidus wrote:
From a8ca226de8efb4f0447e4ef87bf034cf18996745 Mon Sep 17 00:00:00 2001
From: Izik Eidus
Date: Sun, 4 Oct 2009 14:01:31 +0200
Subject: [PATCH] kvm-userspace: add ksm support
Calling to madvise(MADV_MERGEABLE) on the memory allocations.
Applied, tha
From a8ca226de8efb4f0447e4ef87bf034cf18996745 Mon Sep 17 00:00:00 2001
From: Izik Eidus
Date: Sun, 4 Oct 2009 14:01:31 +0200
Subject: [PATCH] kvm-userspace: add ksm support
Calling to madvise(MADV_MERGEABLE) on the memory allocations.
Signed-off-by: Izik Eidus
---
exec.c | 3 +++
1 files c
Marcelo Tosatti wrote:
Michael,
Can you please give the patch below a try please? (without acpi_pm timer
or priority adjustments for the guest).
Sure. I'll try it out in an hour or two, while I can experiment freely because
it's weekend.
But I wonder...
[]
hrtimer: interrupt too slow, forci
On 10/04/2009 12:50 PM, Jan Kiszka wrote:
Avi Kivity wrote:
On 10/04/2009 10:59 AM, Jan Kiszka wrote:
Hi,
while preparing new IOCTLs to let user space query & set the yet
inaccessible NMI states (pending and masked) I also came across the
interrupt shadow masks. Unless I missed some
Avi Kivity wrote:
> On 10/04/2009 10:59 AM, Jan Kiszka wrote:
>> Hi,
>>
>> while preparing new IOCTLs to let user space query & set the yet
>> inaccessible NMI states (pending and masked) I also came across the
>> interrupt shadow masks. Unless I missed something I would say that we so
>> far break
On 10/02/2009 10:19 PM, Gregory Haskins wrote:
This allows a scatter-gather approach to IO, which will be useful for
building high performance interfaces, like zero-copy and low-latency
copy (avoiding multiple calls to copy_to/from).
The interface is based on the existing scatterlist infrastruct
On 10/02/2009 10:19 PM, Gregory Haskins wrote:
What: xinterface is a mechanism that allows kernel modules external to
the kvm.ko proper to interface with a running guest. It accomplishes
this by creating an abstracted interface which does not expose any
private details of the guest or its relate
On 10/02/2009 10:19 PM, Gregory Haskins wrote:
We want to add a more efficient way to get PIO signals out of the guest,
so we add an "xioevent" interface. This allows a client to register
for notifications when a specific MMIO/PIO address is touched by
the guest. This is an alternative interfac
On 10/04/2009 10:59 AM, Jan Kiszka wrote:
Hi,
while preparing new IOCTLs to let user space query & set the yet
inaccessible NMI states (pending and masked) I also came across the
interrupt shadow masks. Unless I missed something I would say that we so
far break them in the rare case that a migra
Hi,
while preparing new IOCTLs to let user space query & set the yet
inaccessible NMI states (pending and masked) I also came across the
interrupt shadow masks. Unless I missed something I would say that we so
far break them in the rare case that a migration happens right while any
of them is asse
Hello virtualists,
Avi asked me to bring this issue to the ML, so here it is:
Searching for solutions to a persistent problem with our KVM hosts, I
stumbled across Avi's post
[REGRESSION] High, likely incorrect process cpu usage counters with
kvm and 2.6.2[67]
dated "Sun, 31 Aug 2008 08:43:41 -