Huang Ying wrote:
MCE registers are saved to/loaded from CPUState in
kvm_arch_save/load_regs. Because all MCE registers except for
MCG_STATUS should be preserved, MCE registers are saved before
kvm_arch_load_regs in kvm_arch_cpu_reset. To simulate the MCG_STATUS
clearing upon reset,
* Zhang, Yanmin yanmin_zh...@linux.intel.com wrote:
On Thu, 2010-02-25 at 17:26 +0100, Ingo Molnar wrote:
* Jan Kiszka jan.kis...@siemens.com wrote:
Jes Sorensen wrote:
Hi,
It looks like several of us have been looking at how to use the PMU
for virtualization. Rather
On Fri, Feb 26, 2010 at 10:55:17AM +0800, Zhang, Yanmin wrote:
On Thu, 2010-02-25 at 18:34 +0100, Joerg Roedel wrote:
On Thu, Feb 25, 2010 at 04:04:28PM +0100, Jes Sorensen wrote:
1) Add support to perf to allow it to monitor a KVM guest from the
host.
This shouldn't be a big
* Zhang, Yanmin yanmin_zh...@linux.intel.com wrote:
2) We couldn't get guest os kernel/user stack data in an easy way, so we
might not support the callchain feature of the perf tool. A workaround is KVM
copies kernel stack data out, so we could at least support guest os kernel
callchain.
If the
* Joerg Roedel j...@8bytes.org wrote:
On Fri, Feb 26, 2010 at 10:55:17AM +0800, Zhang, Yanmin wrote:
On Thu, 2010-02-25 at 18:34 +0100, Joerg Roedel wrote:
On Thu, Feb 25, 2010 at 04:04:28PM +0100, Jes Sorensen wrote:
1) Add support to perf to allow it to monitor a KVM guest from
On 02/26/2010 10:42 AM, Ingo Molnar wrote:
* Joerg Roedel j...@8bytes.org wrote:
I personally don't like a self-defined event-set as the only solution
because that would probably only work with linux and perf. [...]
The 'soft-PMU' i suggested is transparent on the guest side - if
On 02/25/2010 07:15 PM, Joerg Roedel wrote:
The algorithm to find the offset in the msrpm for a given
msr is needed at other places too. Move that logic to its
own function.
#define MAX_INST_SIZE 15
@@ -417,23 +439,22 @@ err_1:
static void set_msr_interception(u32 *msrpm, unsigned msr,
On Fri, Feb 26, 2010 at 12:20:10PM +0200, Avi Kivity wrote:
On 02/25/2010 07:15 PM, Joerg Roedel wrote:
The algorithm to find the offset in the msrpm for a given
msr is needed at other places too. Move that logic to its
own function.
#define MAX_INST_SIZE 15
@@ -417,23 +439,22 @@
On 02/25/2010 07:15 PM, Joerg Roedel wrote:
This patch optimizes the way the msrpm of the host and the
guest are merged. The old code merged the 2 msrpm pages
completely. This code needed to touch 24kb of memory for that
operation. The optimized variant this patch introduces
merges only the parts
On 02/25/2010 07:15 PM, Joerg Roedel wrote:
There is a generic function now to calculate msrpm offsets.
Use that function in nested_svm_exit_handled_msr() and remove
the duplicate logic.
Hm, if the function also calculated the mask, then it would be
useful for set_msr_interception() as
On 02/25/2010 07:15 PM, Joerg Roedel wrote:
This patch adds the correct handling of the nested io
permission bitmap. Old behavior was to not look up the port
in the iopm but only reinject an io intercept to the guest.
Signed-off-by: Joerg Roedel joerg.roe...@amd.com
---
arch/x86/kvm/svm.c |
* Avi Kivity a...@redhat.com wrote:
On 02/26/2010 11:01 AM, Ingo Molnar wrote:
* Zhang, Yanmin yanmin_zh...@linux.intel.com wrote:
2) We couldn't get guest os kernel/user stack data in an easy way, so we
might not support the callchain feature of the perf tool. A workaround is KVM
copies kernel
On Fri, Feb 26, 2010 at 11:46:34AM +0200, Avi Kivity wrote:
On 02/26/2010 10:42 AM, Ingo Molnar wrote:
Note that the 'soft PMU' still sucks from a design POV as there's no generic
hw interface to the PMU. So there would have to be a 'soft AMD' and a 'soft
Intel' PMU driver at minimum.
On Fri, Feb 26, 2010 at 10:17:32AM +0100, Ingo Molnar wrote:
My suggestion, as always, would be to start very simple and very minimal:
Enable 'perf kvm top' to show guest overhead. Use the exact same kernel image
both as a host and as guest (for testing), to not have to deal with the
* Avi Kivity a...@redhat.com wrote:
Right, this will severely limit migration domains to hosts of the same
vendor and processor generation. There is a middle ground, though, Intel
has recently moved to define an architectural pmu which is not model
specific. I don't know if AMD adopted
* Joerg Roedel j...@8bytes.org wrote:
On Fri, Feb 26, 2010 at 11:46:34AM +0200, Avi Kivity wrote:
On 02/26/2010 10:42 AM, Ingo Molnar wrote:
Note that the 'soft PMU' still sucks from a design POV as there's no
generic
hw interface to the PMU. So there would have to be a 'soft AMD' and
On 02/26/2010 12:35 PM, Ingo Molnar wrote:
One additional step needed is to get symbol information from the guest, and to
integrate it into the symbol cache on the host side in ~/.debug. We already
support cross-arch symbols and 'perf archive', so the basic facilities are
there for that. So
On 02/26/2010 12:46 PM, Ingo Molnar wrote:
Right, this will severely limit migration domains to hosts of the same
vendor and processor generation. There is a middle ground, though,
Intel has recently moved to define an architectural pmu which is not
model specific. I don't know if AMD
* Joerg Roedel j...@8bytes.org wrote:
On Fri, Feb 26, 2010 at 10:17:32AM +0100, Ingo Molnar wrote:
My suggestion, as always, would be to start very simple and very minimal:
Enable 'perf kvm top' to show guest overhead. Use the exact same kernel
image
both as a host and as guest
On 02/25/10 17:26, Ingo Molnar wrote:
Given that perf can apply the PMU to individual host tasks, I don't see
fundamental problems multiplexing it between individual guests (which can
then internally multiplex it again).
In terms of how to expose it to guests, a 'soft PMU' might be a usable
On Fri, Feb 26, 2010 at 11:46:59AM +0100, Ingo Molnar wrote:
* Joerg Roedel j...@8bytes.org wrote:
On Fri, Feb 26, 2010 at 11:46:34AM +0200, Avi Kivity wrote:
On 02/26/2010 10:42 AM, Ingo Molnar wrote:
Note that the 'soft PMU' still sucks from a design POV as there's no
generic
On 02/26/2010 12:44 PM, Ingo Molnar wrote:
Far cleaner would be to expose it via hypercalls to guest OSs that are
interested in instrumentation.
It's also slower - you can give the guest direct access to the various
counters so no exits are taken when reading the counters (though
* Avi Kivity a...@redhat.com wrote:
Do you have (or plan) any turn-key 'access to all files of the guest' kind
of guest-transparent facility that could be used for such purposes?
Not really. The guest and host admins are usually different people, who
may, being admins, even actively
On 02/26/2010 06:57 AM, Zachary Amsden wrote:
Anyone seeing list_add corruption running qemu-kvm with -smp 2 on
Intel hardware?
Debugging some local changes, which don't appear related. Running
module from latest git on F12.
Can you post a trace? Which list appears to be involved?
--
On 02/26/10 12:06, Joerg Roedel wrote:
Isn't there a cpuid bit indicating the availability of architectural
perfmon?
Nope, the perfmon flag is a fake Linux flag, set based on the contents
of cpuid 0x0a
Jes
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of a
* Joerg Roedel j...@8bytes.org wrote:
On Fri, Feb 26, 2010 at 11:46:59AM +0100, Ingo Molnar wrote:
* Joerg Roedel j...@8bytes.org wrote:
On Fri, Feb 26, 2010 at 11:46:34AM +0200, Avi Kivity wrote:
On 02/26/2010 10:42 AM, Ingo Molnar wrote:
Note that the 'soft PMU' still sucks
* Jes Sorensen jes.soren...@redhat.com wrote:
On 02/26/10 12:06, Joerg Roedel wrote:
Isn't there a cpuid bit indicating the availability of architectural
perfmon?
Nope, the perfmon flag is a fake Linux flag, set based on the contents of
cpuid 0x0a
There is a way to query the CPU for
On 02/26/10 11:44, Ingo Molnar wrote:
Direct access to counters is not something that is a big issue. [ Given that i
sometimes can see KVM redraw the screen of a guest OS real-time i doubt this
is the biggest of performance challenges right now ;-) ]
By far the biggest instrumentation issue is:
On 02/26/10 12:24, Ingo Molnar wrote:
There is a way to query the CPU for 'architectural perfmon' though, via CPUID
alone - that is how we set the X86_FEATURE_ARCH_PERFMON shortcut. The logic
is:
if (c->cpuid_level > 9) {
	unsigned eax = cpuid_eax(10);
	/*
* Avi Kivity a...@redhat.com wrote:
On 02/26/2010 12:44 PM, Ingo Molnar wrote:
Far cleaner would be to expose it via hypercalls to guest OSs that are
interested in instrumentation.
It's also slower - you can give the guest direct access to the various
counters so no exits are taken when
* Jes Sorensen jes.soren...@redhat.com wrote:
On 02/26/10 11:44, Ingo Molnar wrote:
Direct access to counters is not something that is a big issue. [ Given that
i
sometimes can see KVM redraw the screen of a guest OS real-time i doubt this
is the biggest of performance challenges right now
On 02/26/2010 01:17 PM, Ingo Molnar wrote:
* Avi Kivity a...@redhat.com wrote:
Do you have (or plan) any turn-key 'access to all files of the guest' kind
of guest-transparent facility that could be used for such purposes?
Not really. The guest and host admins are usually
On Fri, 2010-02-26 at 12:47 +0200, Avi Kivity wrote:
Not really. The guest and host admins are usually different people, who
may, being admins, even actively hate each other. The guest admin would
probably regard it as a security hole. It's probably useful for the
single-host scenario,
On 02/26/2010 01:26 PM, Ingo Molnar wrote:
By far the biggest instrumentation issue is:
- availability
- usability
- flexibility
Exposing the raw hw is a step backwards in many regards.
In a way, virtualization as a whole is a step backwards. We take the nice
On 02/26/2010 01:42 PM, Ingo Molnar wrote:
* Jes Sorensen jes.soren...@redhat.com wrote:
On 02/26/10 11:44, Ingo Molnar wrote:
Direct access to counters is not something that is a big issue. [ Given that i
sometimes can see KVM redraw the screen of a guest OS real-time i doubt this
On 02/26/2010 01:48 PM, Peter Zijlstra wrote:
On Fri, 2010-02-26 at 12:47 +0200, Avi Kivity wrote:
Not really. The guest and host admins are usually different people, who
may, being admins, even actively hate each other. The guest admin would
probably regard it as a security hole. It's
* Avi Kivity a...@redhat.com wrote:
A native API to the host will lock out 100% of the install base now, and a
large section of any future install base.
... which is why i suggested the soft-PMU approach.
And note that _any_ solution we offer locks out 100% of the installed base
right now,
On 02/26/2010 02:07 PM, Ingo Molnar wrote:
* Avi Kivity a...@redhat.com wrote:
A native API to the host will lock out 100% of the install base now, and a
large section of any future install base.
... which is why i suggested the soft-PMU approach.
Not sure I understand it
On 26.02.2010, at 13:25, Joerg Roedel wrote:
On Fri, Feb 26, 2010 at 12:28:24PM +0200, Avi Kivity wrote:
+static void add_msr_offset(u32 offset)
+{
+ u32 old;
+ int i;
+
+again:
+ for (i = 0; i < MSRPM_OFFSETS; ++i) {
+ old = msrpm_offsets[i];
+
+ if (old ==
When this was merged in qemu-kvm/master (commit
6249f61a891b6b003531ca4e459c3a553faa82bc) it removed Avi's compile fix when
!CONFIG_EVENTFD (db311e8619d310bd7729637b702581d3d8565049).
So current master fails to build:
  CC    osdep.o
cc1: warnings being treated as errors
osdep.c: In function
* Avi Kivity a...@redhat.com wrote:
On 02/26/2010 02:07 PM, Ingo Molnar wrote:
* Avi Kivity a...@redhat.com wrote:
A native API to the host will lock out 100% of the install base now, and a
large section of any future install base.
... which is why i suggested the soft-PMU approach.
On Fri, Feb 26, 2010 at 12:28:24PM +0200, Avi Kivity wrote:
+static void add_msr_offset(u32 offset)
+{
+	u32 old;
+	int i;
+
+again:
+	for (i = 0; i < MSRPM_OFFSETS; ++i) {
+		old = msrpm_offsets[i];
+
+		if (old == offset)
+			return;
+
+
* Avi Kivity a...@redhat.com wrote:
You basically have given up control over the quality of KVM by pushing so
many aspects of it to user-space and letting it rot there.
That's wrong on so many levels. First, nothing is rotting in userspace,
qemu is evolving faster than kvm is. If I
On 02/26/10 12:42, Ingo Molnar wrote:
* Jes Sorensen jes.soren...@redhat.com wrote:
I have to say I disagree on that. When you run perfmon on a system, it is
normally to measure a specific application. You want to see accurate numbers
for cache misses, mul instructions or whatever else is
On 02/26/2010 02:46 PM, Ingo Molnar wrote:
* Avi Kivity a...@redhat.com wrote:
You basically have given up control over the quality of KVM by pushing so
many aspects of it to user-space and letting it rot there.
That's wrong on so many levels. First, nothing is rotting in
On 02/26/10 13:20, Avi Kivity wrote:
On 02/26/2010 02:07 PM, Ingo Molnar wrote:
... which is why i suggested the soft-PMU approach.
Not sure I understand it completely.
Do you mean to take the model specific host pmu events, and expose them
to the guest via trap'n'emulate? In that case we
On Fri, Feb 26, 2010 at 01:28:29PM +0100, Alexander Graf wrote:
On 26.02.2010, at 13:25, Joerg Roedel wrote:
On Fri, Feb 26, 2010 at 12:28:24PM +0200, Avi Kivity wrote:
+static void add_msr_offset(u32 offset)
+{
+ u32 old;
+ int i;
+
+again:
+ for (i = 0; i < MSRPM_OFFSETS;
On 02/26/2010 02:38 PM, Ingo Molnar wrote:
* Avi Kivity a...@redhat.com wrote:
On 02/26/2010 02:07 PM, Ingo Molnar wrote:
* Avi Kivity a...@redhat.com wrote:
A native API to the host will lock out 100% of the install base now, and a
large section of any future install
* Jes Sorensen jes.soren...@redhat.com wrote:
On 02/26/10 12:42, Ingo Molnar wrote:
* Jes Sorensen jes.soren...@redhat.com wrote:
I have to say I disagree on that. When you run perfmon on a system, it is
normally to measure a specific application. You want to see accurate
numbers
On 26.02.2010, at 14:04, Joerg Roedel wrote:
On Fri, Feb 26, 2010 at 01:28:29PM +0100, Alexander Graf wrote:
On 26.02.2010, at 13:25, Joerg Roedel wrote:
On Fri, Feb 26, 2010 at 12:28:24PM +0200, Avi Kivity wrote:
+static void add_msr_offset(u32 offset)
+{
+ u32 old;
+ int i;
+
On 02/26/2010 03:04 PM, Joerg Roedel wrote:
I'm still not convinced on this way of doing things. If it's static,
make it static. If it's dynamic, make it dynamic. Dynamically
generating a static list just sounds plain wrong to me.
Stop. I had a static list in the first version of the
On Fri, Feb 26, 2010 at 02:26:32PM +0100, Alexander Graf wrote:
On 26.02.2010, at 14:21, Joerg Roedel wrote:
On Fri, Feb 26, 2010 at 03:10:13PM +0200, Avi Kivity wrote:
On 02/26/2010 03:04 PM, Joerg Roedel wrote:
I'm still not convinced on this way of doing things. If it's static,
On 02/26/2010 03:06 PM, Ingo Molnar wrote:
Firstly, an emulated PMU was only the second-tier option i suggested. By far
the best approach is native API to the host regarding performance events and
good guest side integration.
Secondly, the PMU cannot be 'given' to the guest in the general
On 02/26/10 14:06, Ingo Molnar wrote:
* Jes Sorensen jes.soren...@redhat.com wrote:
Well you cannot steal the PMU without collaborating with perf_event.c, but
that's quite feasible. Sharing the PMU between the guest and the host is very
costly and guarantees incorrect results in the host.
* Avi Kivity a...@redhat.com wrote:
Or do you mean to define a new, kvm-specific pmu model and feed it off the
host pmu? In this case all the guests will need to be taught about it,
which raises the compatibility problem.
You are missing two big things wrt. compatibility here:
1) The
On 02/26/2010 03:27 PM, Ingo Molnar wrote:
For Linux-Linux the sanest, tier-1 approach would be to map sys_perf_open()
on the guest side over to the host, transparently, via a paravirt driver.
Let us for the purpose of this discussion assume that we are also
interested in supporting
On 02/26/10 14:30, Avi Kivity wrote:
On 02/26/2010 03:06 PM, Ingo Molnar wrote:
That's precisely my point: the guest should obviously not get raw
access to
the PMU. (except where it might matter to performance, such as RDPMC)
That's doable if all counters are steerable. IIRC some counters are
On Fri, 2010-02-26 at 13:51 +0200, Avi Kivity wrote:
It would be the other way round - the host would steal the pmu from the
guest. Later we can try to time-slice and extrapolate, though that's
not going to be easy.
Right, so perf already does the time slicing and interpolating thing, so
On 02/26/10 14:18, Ingo Molnar wrote:
* Avi Kivity a...@redhat.com wrote:
Can you emulate the Core 2 pmu on, say, a P4? [...]
How about the Pentium? Or the i486?
As long as there's perf events support, the CPU can be supported in a soft
PMU. You can even cross-map exotic hw events if need
On 02/26/10 14:31, Ingo Molnar wrote:
You are missing two big things wrt. compatibility here:
1) The first upgrade overhead is a one-time overhead only.
2) Once a Linux guest has upgraded, it will work in the future, with _any_
future CPU - _without_ having to upgrade the guest!
Dont
On 02/26/2010 03:31 PM, Ingo Molnar wrote:
* Avi Kivity a...@redhat.com wrote:
Or do you mean to define a new, kvm-specific pmu model and feed it off the
host pmu? In this case all the guests will need to be taught about it,
which raises the compatibility problem.
You are missing
* Avi Kivity a...@redhat.com wrote:
On 02/26/2010 03:06 PM, Ingo Molnar wrote:
Firstly, an emulated PMU was only the second-tier option i suggested. By
far
the best approach is native API to the host regarding performance events
and
good guest side integration.
Secondly, the PMU
On 02/26/2010 03:28 PM, Peter Zijlstra wrote:
On Fri, 2010-02-26 at 13:51 +0200, Avi Kivity wrote:
It would be the other way round - the host would steal the pmu from the
guest. Later we can try to time-slice and extrapolate, though that's
not going to be easy.
Right, so perf
On 02/26/10 14:28, Peter Zijlstra wrote:
On Fri, 2010-02-26 at 13:51 +0200, Avi Kivity wrote:
It would be the other way round - the host would steal the pmu from the
guest. Later we can try to time-slice and extrapolate, though that's
not going to be easy.
Right, so perf already does the
On 02/26/2010 03:44 PM, Ingo Molnar wrote:
* Avi Kivity a...@redhat.com wrote:
On 02/26/2010 03:06 PM, Ingo Molnar wrote:
Firstly, an emulated PMU was only the second-tier option i suggested. By far
the best approach is native API to the host regarding performance events and
On 02/26/2010 03:37 PM, Jes Sorensen wrote:
On 02/26/10 14:31, Ingo Molnar wrote:
You are missing two big things wrt. compatibility here:
1) The first upgrade overhead is a one-time overhead only.
2) Once a Linux guest has upgraded, it will work in the future,
with _any_
future CPU -
On 02/26/10 14:16, Ingo Molnar wrote:
* Avi Kivity a...@redhat.com wrote:
That was not what i suggested tho. tools/kvm/ would work plenty fine.
I'll wait until we have tools/libc and tools/X. After all, they affect a
lot more people and are concerned with a lot more kernel/user interfaces
On 02/26/2010 03:30 PM, Joerg Roedel wrote:
So the msrpm bitmap changes dynamically for each vcpu? Great, make it
fully dynamic then, changing the vcpu->arch.msrpm only from within its
vcpu context. No need for atomic ops.
The msrpm_offsets table is global. But I think I will follow Avi's
* Avi Kivity a...@redhat.com wrote:
On 02/26/2010 03:31 PM, Ingo Molnar wrote:
* Avi Kivity a...@redhat.com wrote:
Or do you mean to define a new, kvm-specific pmu model and feed it off the
host pmu? In this case all the guests will need to be taught about it,
which raises the
On 02/26/2010 03:16 PM, Ingo Molnar wrote:
* Avi Kivity a...@redhat.com wrote:
That was not what i suggested tho. tools/kvm/ would work plenty fine.
I'll wait until we have tools/libc and tools/X. After all, they affect a
lot more people and are concerned with a lot more
On 02/26/10 14:27, Ingo Molnar wrote:
* Jes Sorensen jes.soren...@redhat.com wrote:
You certainly cannot emulate the Core2 on a P4. The Core2 is Perfmon v2,
whereas Nehalem and Atom are v3 if I remember correctly. [...]
Of course you can emulate a good portion of it, as long as there's perf
Hi list,
While trying to upgrade some internal infrastructure to qemu-kvm-0.12 I
stumbled across this really weird problem that I see with current qemu-kvm git
too:
I start qemu-kvm using:
./qemu-system-x86_64 -L ../pc-bios/ -m 512 -net nic,model=virtio -net
tap,ifname=tap0,script=/bin/true
On 02/26/2010 04:07 PM, Jes Sorensen wrote:
On 02/26/10 14:27, Ingo Molnar wrote:
* Jes Sorensen jes.soren...@redhat.com wrote:
You certainly cannot emulate the Core2 on a P4. The Core2 is Perfmon
v2,
whereas Nehalem and Atom are v3 if I remember correctly. [...]
Of course you can emulate
* Avi Kivity a...@redhat.com wrote:
On 02/26/2010 03:44 PM, Ingo Molnar wrote:
* Avi Kivity a...@redhat.com wrote:
On 02/26/2010 03:06 PM, Ingo Molnar wrote:
Firstly, an emulated PMU was only the second-tier option i suggested. By
far
the best approach is native API to the host
On 02/26/2010 04:01 PM, Ingo Molnar wrote:
* Avi Kivity a...@redhat.com wrote:
On 02/26/2010 03:31 PM, Ingo Molnar wrote:
* Avi Kivity a...@redhat.com wrote:
Or do you mean to define a new, kvm-specific pmu model and feed it off the
host pmu? In this case all the guests
* Avi Kivity a...@redhat.com wrote:
On 02/26/2010 03:16 PM, Ingo Molnar wrote:
* Avi Kivity a...@redhat.com wrote:
That was not what i suggested tho. tools/kvm/ would work plenty fine.
I'll wait until we have tools/libc and tools/X. After all, they affect a
lot more people and are
On Fri, 2010-02-26 at 15:55 +0200, Avi Kivity wrote:
That actually works on the Intel-only architectural pmu. I'm beginning
to like it more and more.
Only for the arch defined events, all _7_ of them.
* Avi Kivity a...@redhat.com wrote:
Certainly guests that we don't port won't be able to use this. I doubt
we'll be able to make Windows work with this - the only performance tool
I'm
familiar with on Windows is Intel's VTune, and that's proprietary.
Don't you see the extreme irony
On Fri, 2010-02-26 at 14:51 +0100, Jes Sorensen wrote:
Furthermore, when KVM doesn't virtualize the physical system topology,
some PMU features cannot even be sanely used from a vcpu.
That is definitely an issue, and there is nothing we can really do about
that. Having two guests running
On Fri, 2010-02-26 at 15:30 +0200, Avi Kivity wrote:
Even if there were no security considerations, if the guest can observe
host data in the pmu, it means the pmu is inaccurate. We should expose
guest data only in the guest pmu. That's not difficult to do, you stop
the pmu on exit and
On Fri, 2010-02-26 at 15:30 +0200, Avi Kivity wrote:
Scheduling at event granularity would be a good thing. However we need
to be able to handle the guest using the full pmu.
Does the full PMU include things like LBR, PEBS and uncore? In that
case, there is no way you're going to get that
On 26.02.2010, at 15:12, Alexander Graf wrote:
Hi list,
While trying to upgrade some internal infrastructure to qemu-kvm-0.12 I
stumbled across this really weird problem that I see with current qemu-kvm
git too:
I start qemu-kvm using:
./qemu-system-x86_64 -L ../pc-bios/ -m 512
On 02/26/2010 04:12 PM, Ingo Molnar wrote:
Again you are making an incorrect assumption: that information leakage via the
PMU only occurs while the host is running on that CPU. It does not - the PMU
can leak general system details _while the guest is running_.
You mean like bus
On 02/26/2010 04:27 PM, Peter Zijlstra wrote:
On Fri, 2010-02-26 at 15:55 +0200, Avi Kivity wrote:
That actually works on the Intel-only architectural pmu. I'm beginning
to like it more and more.
Only for the arch defined events, all _7_ of them.
That's 7 more than what we
On 02/26/2010 04:23 PM, Ingo Molnar wrote:
* Avi Kivity a...@redhat.com wrote:
On 02/26/2010 03:16 PM, Ingo Molnar wrote:
* Avi Kivity a...@redhat.com wrote:
That was not what i suggested tho. tools/kvm/ would work plenty fine.
I'll wait until we have
On Fri, 2010-02-26 at 16:54 +0200, Avi Kivity wrote:
On 02/26/2010 04:27 PM, Peter Zijlstra wrote:
On Fri, 2010-02-26 at 15:55 +0200, Avi Kivity wrote:
That actually works on the Intel-only architectural pmu. I'm beginning
to like it more and more.
Only for the arch defined
On 02/26/2010 05:08 PM, Peter Zijlstra wrote:
That's 7 more than what we support now, and 7 more than what we can
guarantee without it.
Again, what windows software uses only those 7? Does it pay to only have
access to those 7 or does it limit the usability to exactly the same
subset a
On Fri, 2010-02-26 at 16:53 +0200, Avi Kivity wrote:
If you give a full PMU to a guest it's a whole different dimension and
quality
of information. Literally hundreds of different events about all sorts of
aspects of the CPU and the hardware in general.
Well, we filter out the
On Fri, 2010-02-26 at 17:11 +0200, Avi Kivity wrote:
On 02/26/2010 05:08 PM, Peter Zijlstra wrote:
That's 7 more than what we support now, and 7 more than what we can
guarantee without it.
Again, what windows software uses only those 7? Does it pay to only have
access to those 7
Hello Jan,
I can compile kvm-kmod-2.6.32.9 under Ubuntu 9.1 64-Bit, but 'make
install' fails with
ing...@nexoc:~/KVM/kvm-kmod-2.6.32.9$ sudo make install
[sudo] password for ingmar:
mkdir -p ///usr/local/include/kvm-kmod/asm/
install -m 644 usr/include/asm-x86/{kvm,kvm_para}.h
Ingmar Schraub wrote:
Hello Jan,
I can compile kvm-kmod-2.6.32.9 under Ubuntu 9.1 64-Bit, but 'make
install' fails with
ing...@nexoc:~/KVM/kvm-kmod-2.6.32.9$ sudo make install
[sudo] password for ingmar:
mkdir -p ///usr/local/include/kvm-kmod/asm/
install -m 644
I will be on vacation and offline, pmu threads included, for a week.
Marcelo will handle all kvm issues as usual.
--
Do not meddle in the internals of kernels, for they are subtle and quick
to panic.
On Fri, 2010-02-26 at 17:11 +0200, Avi Kivity wrote:
On 02/26/2010 05:08 PM, Peter Zijlstra wrote:
That's 7 more than what we support now, and 7 more than what we can
guarantee without it.
Again, what windows software uses only those 7? Does it pay to only have
access to those 7
On 02/26/2010 04:37 PM, Ingo Molnar wrote:
* Avi Kivity a...@redhat.com wrote:
Certainly guests that we don't port won't be able to use this. I doubt
we'll be able to make Windows work with this - the only performance tool I'm
familiar with on Windows is Intel's VTune, and that's
On 02/26/2010 05:55 PM, Peter Zijlstra wrote:
BTW, just wondering, why would a developer be running VTune in a guest
anyway? I'd think that a developer that windows oriented would simply
run windows on his desktop and VTune there.
Cloud.
You have an app running somewhere on a cloud,
On 02/26/2010 06:03 PM, Avi Kivity wrote:
Note, I'll be away for a week, so will not be responsive for a while
--
Do not meddle in the internals of kernels, for they are subtle and quick to
panic.
1 0 0 98 0 1| 0 0 | 66B 354B| 0 0 | 3011
1 1 0 98 0 0| 0 0 | 66B 354B| 0 0 | 2911
From that point onwards, nothing will happen.
The host has disk IO to spare... So what is it waiting for??
Moved to an AMD64 host. No effect.
Disabled
On 02/26/2010 01:17 PM, Ingo Molnar wrote:
Nobody is really 'in charge' of how KVM gets delivered to the user. You
isolated the fun kernel part for you and pushed out the boring bits to
user-space. So if mundane things like mouse integration sucks 'hey that's a
user-space tooling problem', if
On Fri, 2010-02-26 at 10:51 +0800, David V. Cloud wrote:
Hi,
I read some kernel source. My basic understanding is that, in
net/8021q/vlan_dev.c, vlan_dev_init, the dev->features of vconfig
created interface is defined to be
dev->features |= real_dev->features & real_dev->vlan_features;
On Fri, 2010-02-26 at 10:51 +0800, David V. Cloud wrote:
Hi,
I read some kernel source. My basic understanding is that, in
net/8021q/vlan_dev.c, vlan_dev_init, the dev->features of vconfig
created interface is defined to be
dev->features |= real_dev->features & real_dev->vlan_features;