Re: Some Code for Performance Profiling

2010-04-07 Thread Jiaqing Du
2010/4/5 Avi Kivity a...@redhat.com:
 On 03/31/2010 07:53 PM, Jiaqing Du wrote:

 Hi,

 We have some code for performance profiling in KVM. It is the output
 of a school project. Previous discussions on the KVM, Perfmon2, and
 Xen mailing lists helped us a lot. The code is NOT in good shape and
 is only meant to demonstrate the feasibility of performance profiling
 in KVM. Feel free to use it if you want.


 Performance monitoring is an important feature for kvm.  Is there any chance
 you can work at getting it into good shape?

I have been following the discussions about PMU virtualization on the
list for a while. The major problem is exporting a proper interface,
i.e., guest-visible MSRs and supported events, to the guest across a
large number of physical CPUs from different vendors, families, and
models. On top of that, KVM currently supports almost a dozen
different virtual CPU models. I will think about it and try to come up
with something more general.
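
As a starting point, on Intel CPUs the architectural perfmon CPUID
leaf (0xA) already describes what a vPMU could safely advertise to a
guest independent of the exact family/model. A minimal sketch of
reading it on the host (the vpmu_caps structure and
vpmu_query_host_caps() are invented here for illustration; they are
not part of my patches):

#include <linux/types.h>
#include <asm/processor.h>	/* cpuid() */

struct vpmu_caps {
	u8 version;		/* architectural perfmon version (0 = none)  */
	u8 num_gp_counters;	/* general-purpose counters per logical CPU  */
	u8 counter_width;	/* bit width of the counters                 */
};

/* Read CPUID leaf 0xA on the host and record what a guest PMU could see. */
static void vpmu_query_host_caps(struct vpmu_caps *caps)
{
	unsigned int eax, ebx, ecx, edx;

	cpuid(0x0a, &eax, &ebx, &ecx, &edx);
	caps->version         = eax & 0xff;
	caps->num_gp_counters = (eax >> 8) & 0xff;
	caps->counter_width   = (eax >> 16) & 0xff;
}

Something along these lines could serve as a vendor-neutral lower
bound on what the vPMU exposes, but the non-architectural events still
differ per model.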


 We categorize performance profiling in a virtualized environment into
 two types: *guest-wide profiling* and *system-wide profiling*. For
 guest-wide profiling, only the guest is profiled. KVM virtualizes the
 PMU, and the user runs a profiler directly in the guest. It requires
 no modifications to the guest OS or to the profiler running in the
 guest. For system-wide profiling, both KVM and the guest OS are
 profiled. The results are similar to what XenOprof outputs. In this
 case, one profiler runs in the host and one profiler runs in the
 guest. It still requires no modifications to the guest or to the
 profiler running in it.


 Can your implementation support both simultaneously?

What do you mean by simultaneously? With my implementation, you do
either guest-wide profiling or system-wide profiling. They are achieved
through different patches. Actually, the result of guest-wide profiling
is a subset of the result of system-wide profiling.


 For guest-wide profiling, there are two possible places to save and
 restore the related MSRs. One is where the CPU switches between guest
 mode and host mode. We call this *CPU-switch*. Profiling with this
 enabled reflects how the guest behaves on the physical CPU, plus the
 other virtualized (not emulated) devices. The other place is where the
 CPU switches between the KVM context and others. Here, KVM context
 means the CPU is executing guest code or KVM code, in both kernel
 space and user space. We call this *domain-switch*. Profiling with
 this enabled discloses how the guest behaves on both the physical CPU
 and KVM. (Some emulated operations are really expensive in a
 virtualized environment.)


 Which method do you use?  Or do you support both?

I posted two patches in my previous email. One is for CPU-switch, and
the other is for domain-switch.
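
To illustrate the difference, here is a rough sketch of the PMU MSR
save/restore helpers, assuming two Intel general-purpose counters. The
names (pmu_msrs, pmu_switch_to_guest, ...) are invented for this email
and are not taken from the patches; the patches handle the details
differently. For CPU-switch the pair is called right around guest
entry/exit; for domain-switch it would instead be called where the CPU
leaves or re-enters the KVM context (e.g. from the preempt notifiers):

#include <linux/types.h>
#include <asm/msr.h>		/* rdmsrl(), wrmsrl() */
#include <asm/msr-index.h>	/* MSR_P6_EVNTSEL0, MSR_P6_PERFCTR0 */

#define NUM_GP_COUNTERS 2	/* assumption: two general-purpose counters */

struct pmu_msrs {
	u64 evtsel[NUM_GP_COUNTERS];	/* event-select MSRs */
	u64 counter[NUM_GP_COUNTERS];	/* counter MSRs      */
};

static void pmu_msrs_save(struct pmu_msrs *p)
{
	int i;

	for (i = 0; i < NUM_GP_COUNTERS; i++) {
		rdmsrl(MSR_P6_EVNTSEL0 + i, p->evtsel[i]);
		rdmsrl(MSR_P6_PERFCTR0 + i, p->counter[i]);
	}
}

static void pmu_msrs_load(const struct pmu_msrs *p)
{
	int i;

	for (i = 0; i < NUM_GP_COUNTERS; i++) {
		/* load the counter before the event select so a non-zero
		   event select never counts against a stale counter value */
		wrmsrl(MSR_P6_PERFCTR0 + i, p->counter[i]);
		wrmsrl(MSR_P6_EVNTSEL0 + i, p->evtsel[i]);
	}
}

/* CPU-switch: called just before guest entry ... */
static void pmu_switch_to_guest(struct pmu_msrs *host,
				const struct pmu_msrs *guest)
{
	pmu_msrs_save(host);
	pmu_msrs_load(guest);
}

/* ... and just after guest exit.  Domain-switch calls the same pair from
   the points where the CPU leaves/re-enters the KVM context instead. */
static void pmu_switch_to_host(struct pmu_msrs *guest,
			       const struct pmu_msrs *host)
{
	pmu_msrs_save(guest);
	pmu_msrs_load(host);
}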


 Note disclosing host pmu data to the guest is sometimes a security issue.


For instance?

 --
 Do not meddle in the internals of kernels, for they are subtle and quick to
 panic.




Re: Some Code for Performance Profiling

2010-04-07 Thread Avi Kivity

On 04/07/2010 10:23 PM, Jiaqing Du wrote:

 Can your implementation support both simultaneously?

 What do you mean by simultaneously? With my implementation, you do
 either guest-wide profiling or system-wide profiling. They are achieved
 through different patches. Actually, the result of guest-wide profiling
 is a subset of the result of system-wide profiling.

A guest admin monitors the performance of their guest via a vpmu.  
Meanwhile the host admin monitors the performance of the host (including 
all guests) using the host pmu.  Given that the host pmu and the vpmu 
may select different counters, it is difficult to support both 
simultaneously.



 For guest-wide profiling, there are two possible places to save and
 restore the related MSRs. One is where the CPU switches between guest
 mode and host mode. We call this *CPU-switch*. Profiling with this
 enabled reflects how the guest behaves on the physical CPU, plus the
 other virtualized (not emulated) devices. The other place is where the
 CPU switches between the KVM context and others. Here, KVM context
 means the CPU is executing guest code or KVM code, in both kernel
 space and user space. We call this *domain-switch*. Profiling with
 this enabled discloses how the guest behaves on both the physical CPU
 and KVM. (Some emulated operations are really expensive in a
 virtualized environment.)

 Which method do you use?  Or do you support both?

 I posted two patches in my previous email. One is for CPU-switch, and
 the other is for domain-switch.


I see.  I'm not sure I know which one is better!


 Note disclosing host pmu data to the guest is sometimes a security issue.

 For instance?

The standard example is hyperthreading where the memory bus unit is 
shared among two logical processors.  A guest sampling a vcpu on one 
thread can gain information about what is happening on the other - the 
number of bus transactions the other thread has issued.  This can be 
used to establish a communication channel between two guests that 
shouldn't be communicating, or to eavesdrop on another guest.  A similar 
problem happens with multicores.


--
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.



Re: Some Code for Performance Profiling

2010-04-05 Thread Avi Kivity

On 03/31/2010 07:53 PM, Jiaqing Du wrote:

 Hi,

 We have some code for performance profiling in KVM. It is the output
 of a school project. Previous discussions on the KVM, Perfmon2, and
 Xen mailing lists helped us a lot. The code is NOT in good shape and
 is only meant to demonstrate the feasibility of performance profiling
 in KVM. Feel free to use it if you want.


Performance monitoring is an important feature for kvm.  Is there any 
chance you can work at getting it into good shape?



 We categorize performance profiling in a virtualized environment into
 two types: *guest-wide profiling* and *system-wide profiling*. For
 guest-wide profiling, only the guest is profiled. KVM virtualizes the
 PMU, and the user runs a profiler directly in the guest. It requires
 no modifications to the guest OS or to the profiler running in the
 guest. For system-wide profiling, both KVM and the guest OS are
 profiled. The results are similar to what XenOprof outputs. In this
 case, one profiler runs in the host and one profiler runs in the
 guest. It still requires no modifications to the guest or to the
 profiler running in it.


Can your implementation support both simultaneously?


 For guest-wide profiling, there are two possible places to save and
 restore the related MSRs. One is where the CPU switches between guest
 mode and host mode. We call this *CPU-switch*. Profiling with this
 enabled reflects how the guest behaves on the physical CPU, plus the
 other virtualized (not emulated) devices. The other place is where the
 CPU switches between the KVM context and others. Here, KVM context
 means the CPU is executing guest code or KVM code, in both kernel
 space and user space. We call this *domain-switch*. Profiling with
 this enabled discloses how the guest behaves on both the physical CPU
 and KVM. (Some emulated operations are really expensive in a
 virtualized environment.)


Which method do you use?  Or do you support both?

Note disclosing host pmu data to the guest is sometimes a security issue.

--
Do not meddle in the internals of kernels, for they are subtle and quick to 
panic.
