A minor patch to avoid the heavy is_empty_shadow_page check in release builds.
diff --git a/drivers/kvm/mmu.c b/drivers/kvm/mmu.c
index e85b4c7..58fdd7b 100644
--- a/drivers/kvm/mmu.c
+++ b/drivers/kvm/mmu.c
@@ -52,11 +52,15 @@ static void kvm_mmu_audit(struct kvm_vcpu *vcpu, const char *msg) {}
Casey Jeffery wrote:
I just wanted to make the comment that the vmx_msr_index[] array
should probably have a note indicating that MSR_K6_STAR is always
assumed to be the last entry in the array by setup_msrs(). This just
caused me some trouble when I updated to the latest code.
I added a
Dong, Eddie wrote:
A minor patch to avoid heavy is_empty_shadow_page for release version.
Please resend as an attachment; your email client mangled the patch.
--
error compiling committee.c: too many arguments to function
This patch removes the redundant MSR save/restore at lightweight VM
exit time. With this patch, VMX guest KB performance can increase by 17-37%.
(I only tested it on KVM-18 since the git tree still has problems on my
side, but I'd like to post this early to collect more feedback.)
Thanks, Eddie
Sorry for that, attached now.
Eddie
-Original Message-
From: Avi Kivity [mailto:[EMAIL PROTECTED]
Sent: 25 April 2007 16:03
To: Dong, Eddie
Cc: kvm-devel
Subject: Re: [kvm-devel] [PATCH] Remove heavy is_empty_shadow_page call.
Dong, Eddie wrote:
A minor patch to avoid heavy
Dong, Eddie wrote:
Sorry for that, attached now.
Applied. Please provide a Signed-off-by: line in the future.
Dong, Eddie wrote:
A minor patch to avoid heavy is_empty_shadow_page for release version.
Oh yes, a typo introduced when porting between different trees.
Michael Riepe wrote:
Dong, Eddie wrote:
A minor patch to avoid heavy is_empty_shadow_page for release version.
Dong, Eddie wrote:
Avi:
Thanks!
Another finding is that the HOST_FS/GS_BASE and MSR_FS/GS_BASE
save/restore is quite expensive. I am not sure why it needs to be
saved/restored, given that the VMCS register will be loaded into FS/GS at
VM exit time automatically by hardware and will not be changed
on Wed Apr 25 2007, Avi Kivity avi-AT-qumranet.com wrote:
David Abrahams wrote:
on Tue Apr 24 2007, Avi Kivity avi-AT-qumranet.com wrote:
Can you try the following experiments (independently):
- run kvm with the '-no-rtc' option
Makes no noticeable difference that I can see.
A minor change to reduce vcpu_put/vcpu_load frequency (still based on
KVM-18). Not sure if you would like to see this?
Signed-off-by: Yaozu Dong [EMAIL PROTECTED]
--- vmx.old 2007-04-25 20:28:19.0 +0800
+++ vmx.new 2007-04-25 20:28:10.0 +0800
@@ -1945,7 +1945,8 @@
Avi Kivity wrote:
Dong, Eddie wrote:
A minor change to reduce vcpu_put/vcpu_load frequency (still based on
KVM-18). Not sure if you would like to see this?
Dong, Eddie wrote:
Here it is.
Applied, thanks.
Google couldn't help me find what I want (unless I'm searching wrong).
I would like to set up a disaster recovery box offsite. I would like to have
incremental P2V snapshots sent every so often to a KVM-based Windows virtual
machine. It would be powered off most of the time. I would only
--- Avi Kivity [EMAIL PROTECTED] wrote:
Chris de Vidal wrote:
Google couldn't help me find what I want (unless I'm searching wrong).
Anthony Liguori wrote:
This should get moved to kvm_resched() since both VT/SVM would benefit
from this.
I would suggest we just add similar code on the SVM side after we
optimize the MSR/VMCS register save/restore to skip it for
those lightweight VM exits (handled by KVM). Giving up preemption
Dong, Eddie wrote:
In this case, the ioctl return to QEMU will trigger scheduling at least.
I think a scheduling change won't happen until the next timer tick.
AFAICT, there's nothing explicit in the ioctl return path that will
result in rescheduling.
I'm not entirely confident in how the
Dong, Eddie wrote:
Actually I am thinking of giving up kvm_resched entirely and just letting
control return to QEMU, which is much cleaner and gives QEMU
more chances to handle hardware events such as network
packet arrival. Today QEMU depends entirely on heavyweight VM
exits to
Avi, hi.
I just tried the latest git tree, and I'm getting this (from
arch/x86_64/boot/setup.S):
Your CPU does not support long mode. Use a 32bit distribution.
I don't see the problem with kvm-20. I tried it quickly, so I could be
wrong. But can you please check that?
Thanks,
Jun
---
Intel