cpuinfo and HVM features (was: Host latency peaks due to kvm-intel)

2009-07-27 Thread Jan Kiszka
[ carrying this to LKML ]

Yang, Sheng wrote:
> On Monday 27 July 2009 03:16:27 H. Peter Anvin wrote:
>> Jan Kiszka wrote:
>>> Avi Kivity wrote:
>>>> On 07/24/2009 12:41 PM, Jan Kiszka wrote:
>>>>> I vaguely recall that someone promised to add a feature reporting
>>>>> facility for all those nice things modern VM-extensions may or may not
>>>>> support (something like or even an extension of /proc/cpuinfo). What is
>>>>> the state of this plan? It would be especially interesting for Intel
>>>>> CPUs as there seem to be many of them out there with restrictions for
>>>>> special use cases - like real-time.
>>>> Newer kernels do report some vmx features (like flexpriority) in
>>>> /proc/cpuinfo but not all.
>>> Ah, nice. Then we just need this?
>> Fine with me.
>>
>> Acked-by: H. Peter Anvin 
>>
>> However, I guess the real question is whether we shouldn't export ALL
>> VMX features in a consistent way instead?
>>
> When I added feature reporting to cpuinfo, I only put the highlight features 
> there; otherwise the VMX feature list would be at least as long as the CPU one.

That could become true. But the question is always what the highlights
are. Often this depends on the hypervisor as it may implement
workarounds for missing features differently (or not at all). So I'm
also for exposing feature information consistently.

> 
> I have also suggested a separate field for virtualization features, but 
> some concerns regarding userspace tools were raised.
> 
> Since we indeed have quite a lot of features already, and will get more, 
> would it be better to export part of the struct vmcs_config entries (that 
> is, pin_based_exec_ctrl, cpu_based_exec_ctrl, and cpu_based_2nd_exec_ctrl) 
> through /sys/module/kvm_intel/? Putting every feature into cpuinfo seems 
> unnecessary for such a big list.

I don't think this information should only come from KVM. Consider the case
where you didn't build KVM into some kernel but still want to find out what
your system is able to provide.

What about adding some dedicated /proc entry for CPU virtualization
features, say /proc/hvminfo?
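
To make the idea concrete, here is a minimal sketch of what such an entry
could look like - purely hypothetical, not a posted patch: it just dumps the
raw VMX capability MSRs through the seq_file API, and the file name and
output layout are assumptions.

#include <linux/module.h>
#include <linux/proc_fs.h>
#include <linux/seq_file.h>
#include <asm/msr.h>
#include <asm/processor.h>

/* Print the raw VMX capability MSRs; decoding them into human-readable
 * feature names is left out for brevity. */
static int hvminfo_show(struct seq_file *m, void *v)
{
	u32 lo, hi;

	if (!cpu_has(&boot_cpu_data, X86_FEATURE_VMX))
		return 0;

	rdmsr(MSR_IA32_VMX_PINBASED_CTLS, lo, hi);
	seq_printf(m, "pin_based_ctls: %08x %08x\n", hi, lo);
	rdmsr(MSR_IA32_VMX_PROCBASED_CTLS, lo, hi);
	seq_printf(m, "cpu_based_ctls: %08x %08x\n", hi, lo);
	return 0;
}

static int hvminfo_open(struct inode *inode, struct file *file)
{
	return single_open(file, hvminfo_show, NULL);
}

static const struct file_operations hvminfo_fops = {
	.open    = hvminfo_open,
	.read    = seq_read,
	.llseek  = seq_lseek,
	.release = single_release,
};

static int __init hvminfo_init(void)
{
	proc_create("hvminfo", 0444, NULL, &hvminfo_fops);
	return 0;
}
module_init(hvminfo_init);
MODULE_LICENSE("GPL");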

Jan

-- 
Siemens AG, Corporate Technology, CT SE 2
Corporate Competence Center Embedded Linux


Re: cpuinfo and HVM features (was: Host latency peaks due to kvm-intel)

2009-07-27 Thread Yang, Sheng
On Monday 27 July 2009 17:08:42 Jan Kiszka wrote:
> [ carrying this to LKML ]
>
> Yang, Sheng wrote:
> > On Monday 27 July 2009 03:16:27 H. Peter Anvin wrote:
> >> Jan Kiszka wrote:
> >>> Avi Kivity wrote:
> >>>> On 07/24/2009 12:41 PM, Jan Kiszka wrote:
> >>>>> I vaguely recall that someone promised to add a feature reporting
> >>>>> facility for all those nice things modern VM-extensions may or may
> >>>>> not support (something like or even an extension of /proc/cpuinfo).
> >>>>> What is the state of this plan? It would be especially interesting
> >>>>> for Intel CPUs as there seem to be many of them out there with
> >>>>> restrictions for special use cases - like real-time.
> >>>>
> >>>> Newer kernels do report some vmx features (like flexpriority) in
> >>>> /proc/cpuinfo but not all.
> >>>
> >>> Ah, nice. Then we just need this?
> >>
> >> Fine with me.
> >>
> >> Acked-by: H. Peter Anvin 
> >>
> >> However, I guess the real question is whether we shouldn't export ALL
> >> VMX features in a consistent way instead?
> >
> > When I added feature reporting to cpuinfo, I only put the highlight
> > features there; otherwise the VMX feature list would be at least as long
> > as the CPU one.
>
> That could become true. But the question is always what the highlights
> are. Often this depends on the hypervisor as it may implement
> workarounds for missing features differently (or not at all). So I'm
> also for exposing feature information consistently.

(CC Andi and Ingo)

By highlights I mean the features that gain us a lot, like FlexPriority, EPT, 
and VPID. They can be vendor specific. And I am talking about hardware 
capability here, so whatever workarounds the hypervisor implements are out of 
scope.
>
> > I have also suggested a separate field for virtualization features,
> > but some concerns regarding userspace tools were raised.
> >
> > Since we indeed have quite a lot of features already, and will get more,
> > would it be better to export part of the struct vmcs_config entries
> > (that is, pin_based_exec_ctrl, cpu_based_exec_ctrl, and
> > cpu_based_2nd_exec_ctrl) through /sys/module/kvm_intel/? Putting every
> > feature into cpuinfo seems unnecessary for such a big list.
>
> I don't think this information should only come from KVM. Consider the case
> where you didn't build KVM into some kernel but still want to find out what
> your system is able to provide.

Yes, agreed.
>
> What about adding some dedicated /proc entry for CPU virtualization
> features, say /proc/hvminfo?

Well, compared to that, I would still prefer a new item in /proc/cpuinfo, 
since these are still CPU features, similar to what Andi did for power 
management (IIRC).

Is there a more preferred location?

-- 
regards
Yang, Sheng


Re: Host latency peaks due to kvm-intel

2009-07-26 Thread Yang, Sheng
On Monday 27 July 2009 03:16:27 H. Peter Anvin wrote:
> Jan Kiszka wrote:
> > Avi Kivity wrote:
> >> On 07/24/2009 12:41 PM, Jan Kiszka wrote:
> >>> I vaguely recall that someone promised to add a feature reporting
> >>> facility for all those nice things modern VM-extensions may or may not
> >>> support (something like or even an extension of /proc/cpuinfo). What is
> >>> the state of this plan? It would be especially interesting for Intel
> >>> CPUs as there seem to be many of them out there with restrictions for
> >>> special use cases - like real-time.
> >>
> >> Newer kernels do report some vmx features (like flexpriority) in
> >> /proc/cpuinfo but not all.
> >
> > Ah, nice. Then we just need this?
>
> Fine with me.
>
> Acked-by: H. Peter Anvin 
>
> However, I guess the real question is whether we shouldn't export ALL VMX
> features in a consistent way instead?
>
When I added feature reporting to cpuinfo, I only put the highlight features 
there; otherwise the VMX feature list would be at least as long as the CPU one.

I have also suggested a separate field for virtualization features, but some 
concerns regarding userspace tools were raised.

Since we indeed have quite a lot of features already, and will get more, would 
it be better to export part of the struct vmcs_config entries (that is, 
pin_based_exec_ctrl, cpu_based_exec_ctrl, and cpu_based_2nd_exec_ctrl) through 
/sys/module/kvm_intel/? Putting every feature into cpuinfo seems unnecessary 
for such a big list.
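
For illustration, a hedged sketch of the kind of export described above,
assuming read-only module parameters that mirror struct vmcs_config - this is
not existing KVM code, and wiring the values in from setup_vmcs_config() in
arch/x86/kvm/vmx.c is an assumption:

#include <linux/module.h>
#include <linux/stat.h>

/* Shadow copies of the probed vmcs_config control words; a real patch
 * would fill these in from setup_vmcs_config(). */
static unsigned int pin_based_exec_ctrl;
static unsigned int cpu_based_exec_ctrl;
static unsigned int cpu_based_2nd_exec_ctrl;

/* Read-only: shows up as /sys/module/kvm_intel/parameters/<name> */
module_param(pin_based_exec_ctrl, uint, S_IRUGO);
module_param(cpu_based_exec_ctrl, uint, S_IRUGO);
module_param(cpu_based_2nd_exec_ctrl, uint, S_IRUGO);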

-- 
regards
Yang, Sheng


Re: Host latency peaks due to kvm-intel

2009-07-26 Thread H. Peter Anvin
Jan Kiszka wrote:
> Avi Kivity wrote:
>> On 07/24/2009 12:41 PM, Jan Kiszka wrote:
>>> I vaguely recall that someone promised to add a feature reporting
>>> facility for all those nice things modern VM-extensions may or may not
>>> support (something like or even an extension of /proc/cpuinfo). What is
>>> the state of this plan? It would be especially interesting for Intel CPUs
>>> as there seem to be many of them out there with restrictions for special
>>> use cases - like real-time.
>>>
>> Newer kernels do report some vmx features (like flexpriority) in
>> /proc/cpuinfo but not all.
>>
> 
> Ah, nice. Then we just need this?
> 

Fine with me.

Acked-by: H. Peter Anvin 

However, I guess the real question is whether we shouldn't export ALL VMX
features in a consistent way instead?

-hpa


Re: Host latency peaks due to kvm-intel

2009-07-26 Thread Jan Kiszka
Avi Kivity wrote:
> On 07/26/2009 05:34 PM, Jan Kiszka wrote:
>> Avi Kivity wrote:
>>> On 07/24/2009 12:41 PM, Jan Kiszka wrote:
>>>> Jan (who is now patching his guest to avoid wbinvd where possible)
>>>
>>> Is there ever a case where it is required?  What about under a
>>> hypervisor (i.e. check the hypervisor enabled bit)?
>>
>> Reminds me of the discussion in '07 when I first stumbled over this :) :
>> Yes, the bochs bios could safely skip the wbinvd in qemu mode. But that
>> won't save us from Linux and - far more problematic - Windows or any
>> binary-only guest that thinks it has to issue it.
>>
>> One may then close one's eyes, fire up the guest, and start the
>> time-critical host application in the hope that the guest remains calm
>> as long as it's up and running. But, well...
>
> Given that it's now '09, how critical is the problem?  Don't most cpus
> have vwbinvd now?

Sadly, in the (embedded) industry you have to live with "old" hardware for
quite a long time. And I would have to throw my only 2-year-old notebook off
the table to get a more decent portable test environment.

> 
> If so, the real-time management application can simply refuse to run on
> such an old processor.
> 

At least one could go and collect the cpuinfo from some box that suffers
from high latencies. Normally, you go through extensive testing anyway,
also checking for issues like crazy SMI BIOS code that runs for an eternity.

Jan





Re: Host latency peaks due to kvm-intel

2009-07-26 Thread Avi Kivity

On 07/26/2009 05:34 PM, Jan Kiszka wrote:
> Avi Kivity wrote:
>> On 07/24/2009 12:41 PM, Jan Kiszka wrote:
>>> Jan (who is now patching his guest to avoid wbinvd where possible)
>>
>> Is there ever a case where it is required?  What about under a
>> hypervisor (i.e. check the hypervisor enabled bit)?
>
> Reminds me of the discussion in '07 when I first stumbled over this :) :
> Yes, the bochs bios could safely skip the wbinvd in qemu mode. But that
> won't save us from Linux and - far more problematic - Windows or any
> binary-only guest that thinks it has to issue it.
>
> One may then close one's eyes, fire up the guest, and start the
> time-critical host application in the hope that the guest remains calm
> as long as it's up and running. But, well...

Given that it's now '09, how critical is the problem?  Don't most cpus
have vwbinvd now?

If so, the real-time management application can simply refuse to run on
such an old processor.


--
error compiling committee.c: too many arguments to function



Re: Host latency peaks due to kvm-intel

2009-07-26 Thread Jan Kiszka
Avi Kivity wrote:
> On 07/24/2009 12:41 PM, Jan Kiszka wrote:
>> Jan (who is now patching his guest to avoid wbinvd where possible)
>>
>>
> 
> Is there ever a case where it is required?  What about under a
> hypervisor (i.e. check the hypervisor enabled bit)?
> 

Reminds me of the discussion in '07 when I first stumbled over this :) :
Yes, the bochs bios could safely skip the wbinvd in qemu mode. But that
won't save us from Linux and - far more problematic - Windows or any
binary-only guest that thinks it has to issue it.

One may then close one's eyes, fire up the guest, and start the
time-critical host application in the hope that the guest remains calm
as long as it's up and running. But, well...

Jan





Re: Host latency peaks due to kvm-intel

2009-07-26 Thread Jan Kiszka
Avi Kivity wrote:
> On 07/25/2009 12:55 PM, Jan Kiszka wrote:
>> Avi Kivity wrote:
>>> On 07/24/2009 12:41 PM, Jan Kiszka wrote:
>>>> I vaguely recall that someone promised to add a feature reporting
>>>> facility for all those nice things modern VM-extensions may or may not
>>>> support (something like or even an extension of /proc/cpuinfo). What is
>>>> the state of this plan? It would be especially interesting for Intel
>>>> CPUs as there seem to be many of them out there with restrictions for
>>>> special use cases - like real-time.
>>>
>>> Newer kernels do report some vmx features (like flexpriority) in
>>> /proc/cpuinfo but not all.
>>
>> Ah, nice. Then we just need this?
>>
>> From: Jan Kiszka
>> Subject: [PATCH] x86: Report VMX feature vwbinvd
>>
>> Not all VMX-capable CPUs support guest exits on wbinvd execution. If
>> this is not supported, the instruction will run natively on behalf of
>> the guest. This can cause multi-millisecond latencies on the host, which
>> is very problematic in real-time scenarios.
>>
>> Report the wbinvd trapping feature along with other VMX feature flags,
>> calling it 'vwbinvd' ('virtual wbinvd').
>
> What about AMD cpus that can always trap wbinvd?  Do we set the bit, or
> do we trust the user to know that it isn't needed on AMD (I suppose the
> latter)?

I also think that the feature flags should remain vendor-specific.

> 
> This should go in via tip.git, it isn't really kvm related (except that
> kvm should start reading these caps one day instead of querying the
> hardware directly).
> 

OK, will go that way. I will probably also add flags for AMD's NPT, Intel's
EPT, and the new unrestricted guest mode while I'm at it.

Jan





Re: Host latency peaks due to kvm-intel

2009-07-26 Thread Sujit Karataparambil
> Do not meddle in the internals of kernels, for they are subtle and quick to
> panic.
Also check the kvm code. Are you sure that the processor supports the KVM
extensions? I know of a lot of Intel architectures where KVM is not
supported, especially the HW_CHECK_SUM. I might not be sure, but this sure
seems to be a problem. Also, there is no dependency check for KVM on Linux;
what I mean by this is that KVM installs without problems on an architecture
that does not support the extensions. So compiling KVM alone does not mean
it works on that architecture.


-- 
-- Sujit K M

blog(http://kmsujit.blogspot.com/)


Re: Host latency peaks due to kvm-intel

2009-07-25 Thread Avi Kivity

On 07/24/2009 12:41 PM, Jan Kiszka wrote:
> Jan (who is now patching his guest to avoid wbinvd where possible)

Is there ever a case where it is required?  What about under a
hypervisor (i.e. check the hypervisor enabled bit)?
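
A minimal sketch of that check, for reference: CPUID leaf 1, ECX bit 31 is
the hypervisor-present bit that KVM/QEMU set for their guests. The helper
name below is made up:

#include <stdint.h>

/* Returns nonzero when running under a hypervisor that sets the
 * CPUID.1:ECX[31] "hypervisor present" bit, as KVM/QEMU do. */
static inline int running_under_hypervisor(void)
{
	uint32_t eax, ebx, ecx, edx;

	__asm__ volatile("cpuid"
			 : "=a" (eax), "=b" (ebx), "=c" (ecx), "=d" (edx)
			 : "a" (1));
	return (ecx >> 31) & 1;
}

/* A guest BIOS or driver could then do:
 *	if (!running_under_hypervisor())
 *		wbinvd();
 */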


--
Do not meddle in the internals of kernels, for they are subtle and quick to 
panic.




Re: Host latency peaks due to kvm-intel

2009-07-25 Thread Avi Kivity

On 07/25/2009 12:55 PM, Jan Kiszka wrote:
> Avi Kivity wrote:
>> On 07/24/2009 12:41 PM, Jan Kiszka wrote:
>>> I vaguely recall that someone promised to add a feature reporting
>>> facility for all those nice things modern VM-extensions may or may not
>>> support (something like or even an extension of /proc/cpuinfo). What is
>>> the state of this plan? It would be especially interesting for Intel CPUs
>>> as there seem to be many of them out there with restrictions for special
>>> use cases - like real-time.
>>
>> Newer kernels do report some vmx features (like flexpriority) in
>> /proc/cpuinfo but not all.
>
> Ah, nice. Then we just need this?
>
> From: Jan Kiszka
> Subject: [PATCH] x86: Report VMX feature vwbinvd
>
> Not all VMX-capable CPUs support guest exits on wbinvd execution. If
> this is not supported, the instruction will run natively on behalf of
> the guest. This can cause multi-millisecond latencies on the host, which
> is very problematic in real-time scenarios.
>
> Report the wbinvd trapping feature along with other VMX feature flags,
> calling it 'vwbinvd' ('virtual wbinvd').

What about AMD cpus that can always trap wbinvd?  Do we set the bit, or
do we trust the user to know that it isn't needed on AMD (I suppose the
latter)?

This should go in via tip.git; it isn't really kvm related (except that
kvm should start reading these caps one day instead of querying the
hardware directly).


--
Do not meddle in the internals of kernels, for they are subtle and quick to 
panic.



Re: Host latency peaks due to kvm-intel

2009-07-25 Thread Jan Kiszka
Avi Kivity wrote:
> On 07/24/2009 12:41 PM, Jan Kiszka wrote:
>> I vaguely recall that someone promised to add a feature reporting
>> facility for all those nice things modern VM-extensions may or may not
>> support (something like or even an extension of /proc/cpuinfo). What is
>> the state of this plan? It would be especially interesting for Intel CPUs
>> as there seem to be many of them out there with restrictions for special
>> use cases - like real-time.
>>
> 
> Newer kernels do report some vmx features (like flexpriority) in
> /proc/cpuinfo but not all.
> 

Ah, nice. Then we just need this?

From: Jan Kiszka 
Subject: [PATCH] x86: Report VMX feature vwbinvd

Not all VMX-capable CPUs support guest exits on wbinvd execution. If
this is not supported, the instruction will run natively on behalf of
the guest. This can cause multi-millisecond latencies on the host, which
is very problematic in real-time scenarios.

Report the wbinvd trapping feature along with other VMX feature flags,
calling it 'vwbinvd' ('virtual wbinvd').

Signed-off-by: Jan Kiszka 
---

 arch/x86/include/asm/cpufeature.h |    1 +
 arch/x86/kernel/cpu/intel.c       |    4 ++++
 2 files changed, 5 insertions(+), 0 deletions(-)

diff --git a/arch/x86/include/asm/cpufeature.h b/arch/x86/include/asm/cpufeature.h
index 4a28d22..8647524 100644
--- a/arch/x86/include/asm/cpufeature.h
+++ b/arch/x86/include/asm/cpufeature.h
@@ -165,6 +165,7 @@
 #define X86_FEATURE_FLEXPRIORITY (8*32+ 2) /* Intel FlexPriority */
 #define X86_FEATURE_EPT         (8*32+ 3) /* Intel Extended Page Table */
 #define X86_FEATURE_VPID        (8*32+ 4) /* Intel Virtual Processor ID */
+#define X86_FEATURE_VWBINVD     (8*32+ 5) /* Guest Exiting on WBINVD */
 
 #if defined(__KERNEL__) && !defined(__ASSEMBLY__)
 
diff --git a/arch/x86/kernel/cpu/intel.c b/arch/x86/kernel/cpu/intel.c
index 3260ab0..2d921b0 100644
--- a/arch/x86/kernel/cpu/intel.c
+++ b/arch/x86/kernel/cpu/intel.c
@@ -297,6 +297,7 @@ static void __cpuinit detect_vmx_virtcap(struct cpuinfo_x86 *c)
 #define X86_VMX_FEATURE_PROC_CTLS2_VIRT_APIC	0x00000001
 #define X86_VMX_FEATURE_PROC_CTLS2_EPT		0x00000002
 #define X86_VMX_FEATURE_PROC_CTLS2_VPID		0x00000020
+#define X86_VMX_FEATURE_PROC_CTLS2_VWBINVD	0x00000040
 
 	u32 vmx_msr_low, vmx_msr_high, msr_ctl, msr_ctl2;
 
@@ -305,6 +306,7 @@ static void __cpuinit detect_vmx_virtcap(struct cpuinfo_x86 *c)
 	clear_cpu_cap(c, X86_FEATURE_FLEXPRIORITY);
 	clear_cpu_cap(c, X86_FEATURE_EPT);
 	clear_cpu_cap(c, X86_FEATURE_VPID);
+	clear_cpu_cap(c, X86_FEATURE_VWBINVD);
 
 	rdmsr(MSR_IA32_VMX_PROCBASED_CTLS, vmx_msr_low, vmx_msr_high);
 	msr_ctl = vmx_msr_high | vmx_msr_low;
@@ -323,6 +325,8 @@ static void __cpuinit detect_vmx_virtcap(struct cpuinfo_x86 *c)
 		set_cpu_cap(c, X86_FEATURE_EPT);
 	if (msr_ctl2 & X86_VMX_FEATURE_PROC_CTLS2_VPID)
 		set_cpu_cap(c, X86_FEATURE_VPID);
+	if (msr_ctl2 & X86_VMX_FEATURE_PROC_CTLS2_VWBINVD)
+		set_cpu_cap(c, X86_FEATURE_VWBINVD);
 	}
 }

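With a kernel carrying this patch, userspace could test for the flag along
these lines - a hedged sketch; the flag name is just the one proposed above:

#include <stdio.h>
#include <string.h>

/* Scan the "flags" line of /proc/cpuinfo for the proposed vwbinvd flag. */
static int has_vwbinvd(void)
{
	char line[4096];
	FILE *f = fopen("/proc/cpuinfo", "r");
	int found = 0;

	if (!f)
		return 0;
	while (fgets(line, sizeof(line), f))
		if (!strncmp(line, "flags", 5) && strstr(line, " vwbinvd")) {
			found = 1;
			break;
		}
	fclose(f);
	return found;
}

int main(void)
{
	printf("vwbinvd: %s\n", has_vwbinvd() ? "yes" : "no");
	return 0;
}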




Re: Host latency peaks due to kvm-intel

2009-07-25 Thread Avi Kivity

On 07/24/2009 12:41 PM, Jan Kiszka wrote:
> I vaguely recall that someone promised to add a feature reporting
> facility for all those nice things modern VM-extensions may or may not
> support (something like or even an extension of /proc/cpuinfo). What is
> the state of this plan? It would be especially interesting for Intel CPUs
> as there seem to be many of them out there with restrictions for special
> use cases - like real-time.

Newer kernels do report some vmx features (like flexpriority) in
/proc/cpuinfo but not all.


--
Do not meddle in the internals of kernels, for they are subtle and quick to 
panic.



Re: Host latency peaks due to kvm-intel

2009-07-24 Thread Gregory Haskins
Jan Kiszka wrote:
> Gregory Haskins wrote:
>   
>> Jan Kiszka wrote:
>> 
>>> Hi,
>>>
>>> did anyone recently try current KVM for Intel over some real-time
>>> Linux? I'm seeing more than 500 us latency peaks on the host,
>>> specifically during VM startup. This applies to both 2.6.29.6-rt23 and
>>> Xenomai/I-pipe. For -rt, I tried both the included (patched) KVM modules
>>> and kvm.git head with some additionally required -rt fixes. Xenomai ran
>>> over a 2.6.30 kernel with my own KVM-enabler patch.
>>>
>>> Early instrumentation actually points to the guest exit itself: I added
>>> markers right before and after the assembly part of vmx_vcpu_run, and
>>> further instrumentation reports that the next host APIC tick should go
>>> off right inside guest mode. But KVM leaves the switching part 500 us
>>> too late in that case - as if guest exit on external IRQs was disabled.
>>>
>>> Will debug this further, but I'm also curious to hear other user
>>> experiences.
>>>
>>> Jan
>>>
>>>   
>>>   
>> Hi Jan,
>>   Did you try to run with latency-tracer enabled?  If not, this may
>> pinpoint the source for you.
>> 
>
> I did, see above.
>   

Ah, sorry.  It wasn't clear what the "instrumentation" was or whether you
felt it had definitively pinpointed the source.  :P

Regards,
-Greg






Re: Host latency peaks due to kvm-intel

2009-07-24 Thread Jan Kiszka
Gregory Haskins wrote:
> Jan Kiszka wrote:
>> Hi,
>>
>> did anyone recently try current KVM for Intel over some real-time
>> Linux? I'm seeing more than 500 us latency peaks on the host,
>> specifically during VM startup. This applies to both 2.6.29.6-rt23 and
>> Xenomai/I-pipe. For -rt, I tried both the included (patched) KVM modules
>> and kvm.git head with some additionally required -rt fixes. Xenomai ran
>> over a 2.6.30 kernel with my own KVM-enabler patch.
>>
>> Early instrumentation actually points to the guest exit itself: I added
>> markers right before and after the assembly part of vmx_vcpu_run, and
>> further instrumentation reports that the next host APIC tick should go
>> off right inside guest mode. But KVM leaves the switching part 500 us
>> too late in that case - as if guest exit on external IRQs was disabled.
>>
>> Will debug this further, but I'm also curious to hear other user
>> experiences.
>>
>> Jan
>>
>>   
> Hi Jan,
>   Did you try to run with latency-tracer enabled?  If not, this may
> pinpoint the source for you.

I did, see above.

It finally turned out that I got burned once again by wbinvd: my test
CPUs (and likely also some of my customers') are too "old" to support
SECONDARY_VM_EXEC_CONTROL, which includes trapping the guest's wbinvd
invocations. Too bad.
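
For reference, a hedged userspace sketch of that check - whether the CPU can
enable the secondary processor-based controls at all (bit 31 of the high
"allowed-1" dword of IA32_VMX_PROCBASED_CTLS, MSR 0x482 per the Intel SDM).
It assumes the msr driver is loaded and needs root:

#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	uint64_t val;
	int fd = open("/dev/cpu/0/msr", O_RDONLY);

	/* MSR 0x482 = IA32_VMX_PROCBASED_CTLS; the high dword holds the
	 * "allowed-1" bits, bit 31 of which activates secondary controls. */
	if (fd < 0 || pread(fd, &val, sizeof(val), 0x482) != sizeof(val)) {
		perror("rdmsr");
		return 1;
	}
	printf("secondary exec controls %savailable\n",
	       (val >> 63) & 1 ? "" : "NOT ");
	close(fd);
	return 0;
}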

I vaguely recall that someone promised to add a feature reporting
facility for all those nice things modern VM-extensions may or may not
support (something like or even an extension of /proc/cpuinfo). What is
the state of this plan? It would be especially interesting for Intel CPUs
as there seem to be many of them out there with restrictions for special
use cases - like real-time.

Jan (who is now patching his guest to avoid wbinvd where possible)

-- 
Siemens AG, Corporate Technology, CT SE 2
Corporate Competence Center Embedded Linux


Re: Host latency peaks due to kvm-intel

2009-07-23 Thread Gregory Haskins
Jan Kiszka wrote:
> Hi,
>
> did anyone recently try current KVM for Intel over some real-time
> Linux? I'm seeing more than 500 us latency peaks on the host,
> specifically during VM startup. This applies to both 2.6.29.6-rt23 and
> Xenomai/I-pipe. For -rt, I tried both the included (patched) KVM modules
> and kvm.git head with some additionally required -rt fixes. Xenomai ran
> over a 2.6.30 kernel with my own KVM-enabler patch.
>
> Early instrumentation actually points to the guest exit itself: I added
> markers right before and after the assembly part of vmx_vcpu_run, and
> further instrumentation reports that the next host APIC tick should go
> off right inside guest mode. But KVM leaves the switching part 500 us
> too late in that case - as if guest exit on external IRQs was disabled.
>
> Will debug this further, but I'm also curious to hear other user
> experiences.
>
> Jan
>
>   
Hi Jan,
  Did you try to run with latency-tracer enabled?  If not, this may
pinpoint the source for you.
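
In case it helps others reproduce this, a small hedged sketch of arming the
latency tracer from a test program - paths per the ftrace documentation; it
assumes the preemptirqsoff tracer is compiled in and debugfs is mounted at
/sys/kernel/debug:

#include <stdio.h>

#define TRACING "/sys/kernel/debug/tracing/"

/* Write a value into one of the ftrace control files. */
static int tracing_write(const char *file, const char *val)
{
	char path[256];
	FILE *f;

	snprintf(path, sizeof(path), TRACING "%s", file);
	f = fopen(path, "w");
	if (!f)
		return -1;
	fputs(val, f);
	fclose(f);
	return 0;
}

int main(void)
{
	/* Trace the longest irqs+preemption-off section and reset the max. */
	tracing_write("current_tracer", "preemptirqsoff");
	tracing_write("tracing_max_latency", "0");
	puts("now start the VM; read " TRACING "trace afterwards");
	return 0;
}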

Regards,
-Greg





Host latency peaks due to kvm-intel

2009-07-23 Thread Jan Kiszka
Hi,

did anyone recently try current KVM for Intel over some real-time
Linux? I'm seeing more than 500 us latency peaks on the host,
specifically during VM startup. This applies to both 2.6.29.6-rt23 and
Xenomai/I-pipe. For -rt, I tried both the included (patched) KVM modules
and kvm.git head with some additionally required -rt fixes. Xenomai ran
over a 2.6.30 kernel with my own KVM-enabler patch.

Early instrumentation actually points to the guest exit itself: I added
markers right before and after the assembly part of vmx_vcpu_run, and
further instrumentation reports that the next host APIC tick should go
off right inside guest mode. But KVM leaves the switching part 500 us
too late in that case - as if guest exit on external IRQs was disabled.

Will debug this further, but I'm also curious to hear other user
experiences.

Jan

-- 
Siemens AG, Corporate Technology, CT SE 2
Corporate Competence Center Embedded Linux