Paolo Bonzini pbonz...@redhat.com writes:
On 19/05/2015 16:25, Zhang, Yang Z wrote:
Paolo Bonzini wrote on 2015-04-30:
This patch series introduces system management mode support.
Just curious what's the motivation to add vSMM support? Is there any
use case inside the guest that requires SMM?
If there's active LBR users out there, we should refuse to enable PT and
vice versa.
This doesn't work, e.g. hardware debuggers can take over at any time.
-Andi
--
a...@linux.intel.com -- Speaking for myself only.
--
To unsubscribe from this list: send the line unsubscribe kvm in
the body of
Signed-off-by: Andi Kleen a...@linux.intel.com
I did not contribute to this patch, so please remove that SOB.
Signed-off-by: Kan Liang kan.li...@intel.com
struct extra_reg *extra_regs;
unsigned int er_flags;
+	bool		extra_msr_access;	/* EXTRA REG MSR can
On Wed, Jul 02, 2014 at 11:14:14AM -0700, kan.li...@intel.com wrote:
From: Kan Liang kan.li...@intel.com
If RTIT_CTL.TraceEn=1, any attempt to read or write the LBR or LER MSRs,
including LBR_TOS, will result in a #GP.
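A toy model of that rule (the #GP is simulated as an error return; the helper and the simulated RTIT_CTL variable are made up for illustration, not real MSR access):

```c
#include <assert.h>

/* Toy model of the quoted hardware rule: while RTIT_CTL.TraceEn is set,
 * any read of an LBR/LER MSR, including LBR_TOS, faults with #GP.
 * msr_read_lbr() and the rtit_ctl variable are illustrative stand-ins. */
#define RTIT_CTL_TRACEEN  (1u << 0)
#define MSR_LBR_TOS       0x01c9   /* used here only as a label */

static unsigned int rtit_ctl;      /* simulated RTIT_CTL register */

/* Returns 0 on success, -1 to stand in for the #GP fault. */
static int msr_read_lbr(unsigned int msr, unsigned long long *val)
{
    (void)msr;
    if (rtit_ctl & RTIT_CTL_TRACEEN)
        return -1;                 /* hardware would raise #GP here */
    *val = 0;                      /* pretend the MSR reads as zero */
    return 0;
}
```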
Since Intel PT can be enabled/disabled at runtime, LBR MSRs have to be
So if we take from it that the translation should be present, the same goes
for accessed and dirty. If Andi can clarify this within Intel it would be great.
Andi?
There were some problems on really old CPUs with non dirty/accessed pages
(P4 generation) with PEBS. But PEBS virtualization is
Peter Zijlstra pet...@infradead.org writes:
So I really hate this patch, it makes the code hideous. Also, it's a
death by a thousand cuts adding endless branches in this code.
FWIW compared to the cost of a RDMSR (which is a very complex operation
for the CPU) the cost of a predicted branch is
First, it's not sufficient to pin the debug store area, you also
have to pin the guest page tables that are used to map the debug
store. But even if you do that, as soon as the guest fork()s, it
will create a new pgd which the host will be free to swap out. The
processor can then attempt a
+ * Failure to instantiate pages will abort guest entry.
+ *
+ * Page frames should be pinned with get_page in advance.
+ *
+ * Pinning is not guaranteed while executing as L2 guest.
Does this undermine security?
It should not. In the worst case it'll randomly lose PEBS records.
-Andi
Userspace then can read/write these MSRs, and add them to the migration
stream. QEMU has code for that.
Thanks. The PEBS setup always redoes its state; it can be redone arbitrarily often.
So the only change needed would be to add the MSRs to some list in qemu?
-Andi
Andi Kleen a...@firstfloor.org writes:
Signed-off-by: Kan Liang kan.li...@intel.com
And here I thought that Andi was of the opinion that if you set CPUID to
indicate a particular CPU you had better also handle all its MSRs.
Yes, philosophically that would be the right way,
but we
Peter Zijlstra pet...@infradead.org writes:
This order indicates Andi is the author, but there's no corresponding
From.
I wrote an early version of the patch, but Kan took it over and extended
it. So both are authors.
BTW Kan you may want to use git send-email to get standard format.
On Wed, Jun 18, 2014 at 08:12:03PM -0300, mtosa...@redhat.com wrote:
Required by PEBS support as discussed at
Subject: [patch 0/5] Implement PEBS virtualization for Silvermont
Message-Id: 1401412327-14810-1-git-send-email-a...@firstfloor.org
Thanks Marcelo. I'll give it a stress test here.
On Tue, Jun 10, 2014 at 03:04:48PM -0300, Marcelo Tosatti wrote:
On Thu, May 29, 2014 at 06:12:07PM -0700, Andi Kleen wrote:
{
struct kvm_pmu *pmu = vcpu->arch.pmu;
@@ -407,6 +551,20 @@ int kvm_pmu_set_msr(struct kvm_vcpu *vcpu, struct msr_data *msr_info)
	return
BTW I found some more problems in the v1 version.
With EPT it is less likely to happen (but still possible IIRC, depending on
memory pressure and how much memory the shadow paging code is allowed to use);
without EPT it will happen for sure.
Don't care about the non EPT case,
It seems to me that with this patch, there is no way to expose a
PMU-without-PEBS to the guest if the host has PEBS.
If you clear the CPUIDs then no one would likely access it.
But fair enough, I'll add extra checks for CPUID.
It would be a bigger concern if we expected virtual PMU migration
On Fri, May 30, 2014 at 09:31:53AM +0200, Peter Zijlstra wrote:
On Thu, May 29, 2014 at 06:12:05PM -0700, Andi Kleen wrote:
From: Andi Kleen a...@linux.intel.com
Currently perf unconditionally disables PEBS for guest.
Now that we have the infrastructure in place to handle
it we can
From: Andi Kleen a...@linux.intel.com
To avoid various problems (like leaking counters) the PEBS
virtualization needs white listing per CPU model. Add state to the
x86_pmu for this and enable it for Silvermont.
Silvermont is currently the only CPU where it is safe
to virtualize PEBS
PEBS is very useful (e.g. enabling the more precise cycles:pp event or
memory profiling). Unfortunately it didn't work in virtualization,
which is becoming more and more common.
This patch kit implements simple PEBS virtualization for KVM on Silvermont
CPUs. Silvermont does not have the leak problems that
From: Andi Kleen a...@linux.intel.com
PEBS (Precise Event Based Sampling) profiling is very powerful,
allowing improved sampling precision and much additional information,
like address or TSX abort profiling. cycles:p and :pp use PEBS.
This patch enables PEBS profiling in KVM guests.
PEBS
From: Andi Kleen a...@linux.intel.com
Currently perf unconditionally disables PEBS for guest.
Now that we have the infrastructure in place to handle
it we can allow it for KVM owned guest events. For this,
perf needs to know that an event is owned by
a guest. Add a new state bit in the perf_event
From: Andi Kleen a...@linux.intel.com
With PEBS virtualization the PEBS record gets delivered to the guest,
but the host sees the PMI. This would normally result in a spurious
PEBS PMI that is ignored. But we need to inject the PMI into the guest,
so that the guest PMI handler can handle the PEBS
kvm_rebooting is referenced from assembler code, thus
needs to be visible.
Cc: g...@redhat.com
Cc: pbonz...@redhat.com
Cc: kvm@vger.kernel.org
Signed-off-by: Andi Kleen a...@linux.intel.com
---
virt/kvm/kvm_main.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/virt/kvm
Should this go into 3.11 too? Or it never worked?
It's ok to keep it for .12. It was broken since it was merged,
but normal builds don't trigger the problem.
-andi
From: Andi Kleen a...@linux.intel.com
kvm_rebooting is referenced from assembler code, thus
needs to be visible.
Signed-off-by: Andi Kleen a...@linux.intel.com
---
virt/kvm/kvm_main.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
static void kvm_io_bus_destroy(struct kvm_io_bus *bus);
-bool kvm_rebooting;
+__visible bool kvm_rebooting;
EXPORT_SYMBOL_GPL(kvm_rebooting);
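For illustration, a userspace sketch of what the annotation buys; the symbol name and the volatile stand-in for the assembler reference are assumptions, not the kernel code:

```c
#include <assert.h>

/* Sketch of why the annotation is needed: with LTO the compiler sees every C
 * reference, but a symbol referenced only from a .S file looks unused, so it
 * can be localized or dropped.  GCC's externally_visible attribute (wrapped
 * as __visible in the kernel) tells the optimizer the symbol has users it
 * cannot see.  kvm_rebooting_demo is a made-up stand-in. */
#define __visible __attribute__((externally_visible))

__visible int kvm_rebooting_demo;

/* In the kernel the reader lives in assembler; here a volatile access
 * stands in for that reference the compiler cannot analyze. */
static int read_from_outside(void)
{
    return *(volatile int *)&kvm_rebooting_demo;
}
```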
How many of these are there kernel wide?
Not very many (at least on x86 allyes) ~10.
Also most users are not exported.
Probably not worth an
From: Andi Kleen a...@linux.intel.com
[KVM maintainers:
The underlying support for this is in perf/core now. So please merge
this patch into the KVM tree.]
This is not arch perfmon, but older CPUs will just ignore it. This makes
it possible to do at least some TSX measurements from a KVM guest
FWIW I use the paravirt spinlock ops for adding lock elision
to the spinlocks.
This needs to be done at the top level (so the level you're removing)
However I don't like the pv mechanism very much and would
be fine with using a static key hook in the main path
like I do for all the other lock
On Sat, Jun 01, 2013 at 01:28:00PM -0700, Jeremy Fitzhardinge wrote:
On 06/01/2013 01:14 PM, Andi Kleen wrote:
FWIW I use the paravirt spinlock ops for adding lock elision
to the spinlocks.
Does lock elision still use the ticketlock algorithm/structure, or are
they different? If they're
Rik van Riel r...@redhat.com writes:
If we always incremented the ticket number by 2 (instead of 1), then
we could use the lower bit of the ticket number as the spinlock.
Spinning on a single bit is very inefficient, as you need to do
a try-lock in a loop, which is very unfriendly to the MESI
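A minimal sketch of the increment-by-2 layout under discussion, assuming a "serving" word whose bit 0 doubles as the lock bit; the names and exact layout are illustrative, not Rik's actual proposal:

```c
#include <assert.h>
#include <stdatomic.h>

/* Sketch: tickets advance by 2 so bit 0 of the "now serving" word is free
 * to act as the spinlock bit.  Field names and layout are assumptions. */
#define TICKET_INC 2u
#define LOCK_BIT   1u

struct tlock {
    atomic_uint serving;   /* ticket being served; bit 0 = locked */
    atomic_uint next;      /* next ticket to hand out */
};

static void tlock_lock(struct tlock *l)
{
    unsigned int my = atomic_fetch_add(&l->next, TICKET_INC);

    for (;;) {
        unsigned int s = atomic_load(&l->serving);
        /* our turn and lock bit clear: try to claim bit 0 */
        if ((s & ~LOCK_BIT) == my && !(s & LOCK_BIT) &&
            atomic_compare_exchange_weak(&l->serving, &s, s | LOCK_BIT))
            return;
    }
}

static void tlock_unlock(struct tlock *l)
{
    /* drop the lock bit and advance to the next ticket in one store */
    unsigned int s = atomic_load(&l->serving);
    atomic_store(&l->serving, (s & ~LOCK_BIT) + TICKET_INC);
}
```

Fairness still comes from the ticket compare; only the final claim touches bit 0, and an unfair paravirtual fallback could ignore the ticket and test-and-set bit 0 directly.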
On Thu, Sep 13, 2012 at 11:27:43AM +0300, Avi Kivity wrote:
On 09/12/2012 10:17 PM, Andi Kleen wrote:
On Wed, Sep 12, 2012 at 05:50:41PM +0300, Avi Kivity wrote:
vmx.c has an lto-unfriendly bit, fix it up.
While there, clean up our asm code.
Avi Kivity (3):
KVM: VMX: Make lto
I cannot guarantee I always hit the unit splitting case, but it looks
good so far.
I replaced my patches with yours.
Acked-by: Andi Kleen a...@linux.intel.com
-Andi
The reason we use a local label is so that the function isn't split
into two from the profiler's point of view. See cd2276a795b013d1.
Hmm that commit message is not very enlightening.
The goal was to force a compiler error?
With LTO there is no way to force two functions to be in the same
On Sun, Aug 19, 2012 at 06:12:57PM +0300, Avi Kivity wrote:
On 08/19/2012 06:09 PM, Andi Kleen wrote:
The reason we use a local label is so that the function isn't split
into two from the profiler's point of view. See cd2276a795b013d1.
Hmm that commit message is not very enlightening
So if a guest exits due to an external event it's easy to inspect the
state of that guest and avoid to schedule away when it was interrupted
in a spinlock held section. That guest/host shared state needs to be
On a large system under high contention sleeping can perform surprisingly
well.
On Wed, Sep 14, 2011 at 10:00:07AM +0300, Avi Kivity wrote:
On 09/13/2011 10:21 PM, Don Zickus wrote:
Or are you saying an NMI in an idle system will have the same %rip thus
falsely detecting a back-to-back NMI?
That's easy to avoid - insert an instruction zeroing the last nmi_rip
If an NMI hits in an interrupt handler, or in the after hlt section
before the write-to-last-nmi-rip, then we'll see that %rip has changed.
If it hits after the write-to-last-nmi-rip instruction (or in the hlt
itself), then we'll also see that %rip has changed, due to the effect of
that
On Wed, Sep 14, 2011 at 10:26:21PM +0300, Avi Kivity wrote:
On 09/14/2011 08:28 PM, Andi Kleen wrote:
If an NMI hits in an interrupt handler, or in the after hlt section
before the write-to-last-nmi-rip, then we'll see that %rip has changed.
If it hits after the write-to-last-nmi-rip
So I got around to implementing this and it seems to work great. The back
to back NMIs are detected properly using the %rip and that info is passed to
the NMI notifier. That info is used to determine if only the first
handler to report 'handled' is executed or _all_ the handlers are
Or are you saying an NMI in an idle system will have the same %rip thus
falsely detecting a back-to-back NMI?
Yup.
Another problem is very long running instructions, like WBINVD and some others.
If there's a high frequency NMI it may well hit multiple times in a single
instance.
-Andi
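The %rip-based detection discussed in this thread can be sketched as follows; this is a simplified model, and the idle/hlt same-%rip case is exactly the false positive described above:

```c
#include <assert.h>

/* Sketch of the back-to-back NMI heuristic: remember the %rip the last NMI
 * interrupted; if the next NMI sees the same %rip, no instruction retired
 * in between, so treat it as back-to-back.  A CPU idling on hlt (or stuck
 * in a very long instruction like WBINVD) defeats this simple version. */
static unsigned long last_nmi_rip;

static int nmi_is_back_to_back(unsigned long rip)
{
    int b2b = (rip == last_nmi_rip);

    last_nmi_rip = rip;
    return b2b;
}
```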
On Tue, Sep 13, 2011 at 04:53:18PM -0400, Don Zickus wrote:
On Tue, Sep 13, 2011 at 09:58:38PM +0200, Andi Kleen wrote:
Or are you saying an NMI in an idle system will have the same %rip thus
falsely detecting a back-to-back NMI?
Yup.
Hmm. That sucks. Is there another register
But, erm, does that even make sense? I'm assuming the NMI reason port
tells the CPU why it got an NMI. If multiple CPUs can get NMIs and
there's only a single reason port, then doesn't that mean that either 1)
they all got the NMI for the same reason, or 2) having a single port is
Ok it looks like the 32bit kernel only handles 1/2/4. Maybe that
was the problem if you ran on 32bit.
I'm happy with a slower copy_from_user() for that particular case.
It wouldn't be hard to fix.
-Andi
The only reason I can guess for that is the reduction of some function calls
by inlining some functions.
Yes, at one time cfu was inline too and just checked for the right
sizes and then used g*u, but it got lost in the "icache over everything else"
mania which is unfortunately en vogue for quite
Do you think the following case would not differ so much
from (1' 2') ?
walk_addr_generic() ---1''
copy_from_user() ---2''
Yes it should be the same and is cleaner.
If you do a make .../foo.i and look at the code coming out of the
preprocessor you'll see it
Avi Kivity a...@redhat.com writes:
Good optimization. copy_from_user() really isn't optimized for short
buffers, I expect much of the improvement comes from that.
Actually it is equivalent to get_user for the lengths supported by
get_user, assuming you pass in a constant length. You probably
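A hedged userspace sketch of that point: with a compile-time-constant length of 1/2/4/8 the helper collapses to a single move, and everything else takes the generic path. copy_small() is a made-up name, not the kernel API:

```c
#include <assert.h>
#include <string.h>

/* Sketch: when the length is a compile-time constant of 1, 2, 4 or 8 bytes,
 * a copy helper can collapse to the same single-move fast path as get_user();
 * other sizes fall back to the generic (slower) copy. */
static inline int copy_small(void *dst, const void *src, unsigned long n)
{
    if (__builtin_constant_p(n)) {
        switch (n) {
        case 1: memcpy(dst, src, 1); return 0;   /* compiles to one move */
        case 2: memcpy(dst, src, 2); return 0;
        case 4: memcpy(dst, src, 4); return 0;
        case 8: memcpy(dst, src, 8); return 0;
        }
    }
    memcpy(dst, src, n);                         /* generic path */
    return 0;
}
```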
Avi Kivity a...@redhat.com writes:
With EPT or NPT you cannot detect if a page is read only.
Why not? You can always walk the page tables manually again.
Furthermore, at least Linux (without highmem) maps all of memory with
a read/write mapping in addition to the per-process mapping, so no
On Fri, Feb 11, 2011 at 02:29:40PM +0800, Lai Jiangshan wrote:
The changelog of 104f226 said it "adds the __noclone attribute",
but it was missing in the patch itself. I think it is still needed.
Looks good. Thanks.
Acked-by: Andi Kleen a...@linux.intel.com
-Andi
I personally would consider it cleaner to have clearly
defined wrappers instead of complicated flags in the caller.
The number of args to these functions is getting nutty - you'll
probably find that it is beneficial to inline these wrapper functions, if
the number of callsites is small.
Really
Also longer term we'll get compilers that can do cross-file inlining
for optimized builds.
Which we'll probably need to turn off all over the place :(
Why?
So please better avoid these kinds of micro optimizations unless
it's a really really extremely speed critical path.
It's
Can't you free and reallocate all guest memory instead, on reboot, if
there's a hwpoisoned page? Then you don't need this interface.
I think that would be more efficient. You can potentially save a lot
of memory if the new guest doesn't need as much as the old one.
-Andi
On 11/18/2010 1:17 PM, Avi Kivity wrote:
cea15c2 (KVM: Move KVM context switch into own function) split vmx_vcpu_run()
to prevent multiple copies of the context switch from being generated (causing
problems due to a label). This patch folds them back together again and adds
the __noclone
On 11/18/2010 3:32 PM, Avi Kivity wrote:
On 11/18/2010 03:48 PM, Andi Kleen wrote:
On 11/18/2010 1:17 PM, Avi Kivity wrote:
cea15c2 (KVM: Move KVM context switch into own function) split
vmx_vcpu_run()
to prevent multiple copies of the context switch from being
generated (causing
problems due
The issue of d) is that there are multiple ways to inject MCE. Now one
software based, one APEI based, and maybe some others in the future.
They all use different interfaces. And as debug interfaces, they are not
considered kernel ABI either (some are in debugfs). So I think it is better
to use
We need host kernel to break down the 2M huge page into 4k pages. Then
send SIGBUS to QEMU with the poisoned 4k page. Because host kernel will
poison the whole 2M virtual address space otherwise, and other 4k pages
inside the 2M page cannot be accessed in guest (will trigger SIGBUS
and
Doing it in userspace is easier, since we can replace the vma for
that section (and avoid mixed 4k/2M pages in hugetlbfs).
You can't do that today, there's no way currently to access the non corrupted
portion of the 2MB page. Once it's poisoned it's all gone.
-Andi
On Wed, Nov 10, 2010 at 07:47:11PM +0200, Avi Kivity wrote:
On 11/10/2010 07:44 PM, Andi Kleen wrote:
Doing it in userspace is easier, since we can replace the vma for
that section (and avoid mixed 4k/2M pages in hugetlbfs).
You can't do that today, there's no way currently to access
We have said 3.4 minimum for x86 for a long time now, and have an RFC
Ok makes sense. I thought it was still at 3.3. I should retire
this 3.3 fossil anyways, it's really only for old compat testing.
I don't remember seeing a warning -- aren't there supposed to be warnings
for unsupported
Not unless they are actively known to break. People get huffy about it
because even if it is known to have problems it doesn't break *their*
particular configuration. I'm getting to be of the opinion that people
who compile modern kernels with ancient
Well they do -- I just found out.
That is an issue too, as 3.x does a lot fewer optimizations than 4.x.
Well to be fair the default -Os build disables most of the fancy stuff
(and the resulting code is often terrible)
I guess it doesn't matter too much, at least not with the
CONFIG_CC_OPTIMIZE_SIZE default.
-Andi
From: Andi Kleen a...@linux.intel.com
gcc 4.5 with some special options is able to duplicate the VMX
context switch asm in vmx_vcpu_run(). This results in a compile error
because the inline asm sequence uses a non-local label. The non-local
label is needed because other code wants to set up
On Wed, Oct 20, 2010 at 06:12:11PM +0200, Avi Kivity wrote:
On 10/20/2010 05:56 PM, Andi Kleen wrote:
From: Andi Kleen a...@linux.intel.com
gcc 4.5 with some special options is able to duplicate the VMX
context switch asm in vmx_vcpu_run(). This results in a compile error
because the inline
Anthony Liguori anth...@codemonkey.ws writes:
If we extended integrated -mem-path with -numa such that a different
path could be used with each numa node (and we let an explicit file be
specified instead of just a directory), then if I understand
correctly, we could use numactl without any
The point is about hotplug CPUs. Any hotplugged CPU will not have a
perfectly synchronized TSC, ever, even on a single socket, single crystal
board.
hotplug was in the next section, not in this.
Besides most systems do not support hotplug CPUs.
-Andi
Zachary Amsden zams...@redhat.com writes:
I think listing all the obscure bits in the PIT was an attempt to
weed out the weak and weary readers early, right?
+this as well. Several hardware limitations make the problem worse - if it is
+not possible to write the full 32-bits of the TSC, it
On Tue, Jun 15, 2010 at 02:22:06PM +0300, Avi Kivity wrote:
Too much duplication. How about putting the tail end of the function in a
common helper (with an inatomic flag)?
btw, is_hwpoison_address() is racy. While it looks up the address, some
other task can unmap the page tables under
The page is fine, the page tables are not. Another task can munmap() the
thing while is_hwpoison_address() is running.
Ok that boils down to me not seeing that source.
If it accesses the page tables yes then it's racy. But whoever
looked up the page tables in the first place should have
No real bugs in this one, the real bug I found is in a separate
patch.
Cc: a...@redhat.com
Cc: kvm@vger.kernel.org
Signed-off-by: Andi Kleen a...@linux.intel.com
---
arch/x86/kvm/paging_tmpl.h |    1 +
arch/x86/kvm/vmx.c         |    3 +--
virt/kvm/assigned-dev.c    |    2 --
3 files
Real bug fix.
When the user passes in a NULL mask, pass this on from the ioctl
handler.
Found by gcc 4.6's new warnings.
Cc: a...@redhat.com
Cc: kvm@vger.kernel.org
Signed-off-by: Andi Kleen a...@linux.intel.com
---
virt/kvm/kvm_main.c |    2 +-
1 file changed, 1 insertion(+), 1 deletion
Stephen Hemminger shemmin...@vyatta.com writes:
Still not sure this is a good idea for a couple of reasons:
1. We already have lots of special cases with skb's (frags and fraglist),
and skb's travel through a lot of different parts of the kernel. So any
new change like this creates
On Thu, Jun 03, 2010 at 09:50:51AM +0530, Srivatsa Vaddagiri wrote:
On Wed, Jun 02, 2010 at 12:00:27PM +0300, Avi Kivity wrote:
There are two separate problems: the more general problem is that
the hypervisor can put a vcpu to sleep while holding a lock, causing
other vcpus to spin until
On Thu, Jun 03, 2010 at 12:06:39PM +0100, David Woodhouse wrote:
On Tue, 2010-06-01 at 21:36 +0200, Andi Kleen wrote:
Collecting the contention/usage statistics on a per spinlock
basis seems complex. I believe a practical approximation
to this are adaptive mutexes where upon hitting
On Thu, Jun 03, 2010 at 10:38:32PM +1000, Nick Piggin wrote:
And they aren't even using ticket spinlocks!!
I suppose they simply don't have unfair memory. Makes things easier.
-Andi
That would certainly be a part of it, I'm sure they have stronger
fairness and guarantees at the expense of some performance. We saw the
spinlock starvation first on 8-16 core Opterons I think, whereas Altix
had been over 1024 cores and POWER7 1024 threads now apparently without
reported
On Wed, Jun 02, 2010 at 05:51:14AM +0300, Avi Kivity wrote:
On 06/01/2010 08:27 PM, Andi Kleen wrote:
On Tue, Jun 01, 2010 at 07:52:28PM +0300, Avi Kivity wrote:
We are running everything on NUMA (since all modern machines are now NUMA).
At what scale do the issues become observable
Gleb Natapov g...@redhat.com writes:
The patch below allows to patch ticket spinlock code to behave similar to
old unfair spinlock when hypervisor is detected. After patching unlocked
The question is what happens when you have a system with unfair
memory and you run the hypervisor on that.
On Tue, Jun 01, 2010 at 07:24:14PM +0300, Gleb Natapov wrote:
On Tue, Jun 01, 2010 at 05:53:09PM +0200, Andi Kleen wrote:
Gleb Natapov g...@redhat.com writes:
The patch below allows to patch ticket spinlock code to behave similar to
old unfair spinlock when hypervisor is detected
On Tue, Jun 01, 2010 at 07:52:28PM +0300, Avi Kivity wrote:
We are running everything on NUMA (since all modern machines are now NUMA).
At what scale do the issues become observable?
On Intel platforms it's visible starting with 4 sockets.
I understand that reason and do not propose to get
Collecting the contention/usage statistics on a per spinlock
basis seems complex. I believe a practical approximation
to this are adaptive mutexes where upon hitting a spin
time threshold, punt and let the scheduler reconcile fairness.
That would probably work, except: how do you get the
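The adaptive idea sketched in portable userspace code, under the assumption that an arbitrary spin threshold and sched_yield() stand in for the real punt-to-scheduler policy:

```c
#include <assert.h>
#include <sched.h>
#include <stdatomic.h>

/* Hedged sketch of the quoted adaptive approach: spin briefly, and past a
 * threshold stop burning cycles and yield so the scheduler can run the
 * (possibly preempted) lock holder.  The threshold value is arbitrary. */
#define SPIN_LIMIT 1000

typedef atomic_flag adaptive_lock_t;

static void adaptive_lock(adaptive_lock_t *l)
{
    int spins = 0;

    while (atomic_flag_test_and_set_explicit(l, memory_order_acquire)) {
        if (++spins > SPIN_LIMIT) {
            sched_yield();      /* punt to the scheduler */
            spins = 0;
        }
    }
}

static void adaptive_unlock(adaptive_lock_t *l)
{
    atomic_flag_clear_explicit(l, memory_order_release);
}
```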
Peter Lieven p...@dlh.net writes:
Starting mail service (Postfix)
NMI Watchdog detected LOCKUP on CPU 0
You could simply turn off the NMI watchdog (nmi_watchdog=0 at the kernel
command line)
Perhaps the PMU emulation is not complete and nmi watchdog
needs PMU. It's not really needed
I guess these warnings could be just disabled. With nearly everyone
using multi-core these days they are kind of obsolete anyways.
Well, the warning refers to an old single-core only CPU model. Most of
those were able to run in SMP boards, but only a subset of them was
officially certified
On Wed, Mar 31, 2010 at 10:15:28AM +0200, Jiri Kosina wrote:
On Wed, 31 Mar 2010, Andi Kleen wrote:
booting 32bit guest on 32bit host on AMD system gives me the following
warning when KVM is instructed to boot as SMP:
I guess these warnings could be just disabled. With nearly
On Wed, Mar 31, 2010 at 01:03:02AM +0200, Jiri Kosina wrote:
Hi,
booting 32bit guest on 32bit host on AMD system gives me the following
warning when KVM is instructed to boot as SMP:
I guess these warnings could be just disabled. With nearly everyone
using multi-core these days they are
If you're profiling a single guest it makes more sense to do this from
inside the guest - you can profile userspace as well as the kernel.
I'm interested in debugging the guest without guest cooperation.
In many cases qemu's new gdb stub works for that, but in some cases
I would prefer
Avi Kivity a...@redhat.com writes:
On 03/24/2010 09:38 AM, Andi Kleen wrote:
If you're profiling a single guest it makes more sense to do this from
inside the guest - you can profile userspace as well as the kernel.
I'm interested in debugging the guest without guest cooperation
Soeren Sandmann sandm...@daimi.au.dk writes:
To fix that problem, it seems like we need some way to have python
export what is going on. Maybe the same mechanism could be used to
both access what is going on in qemu and python.
oprofile already has an interface to let JITs export
information
Soeren Sandmann sandm...@daimi.au.dk writes:
Examples:
- What is going on inside QEMU?
That's something the JIT interface could answer.
- Which client is the X server servicing?
- What parts of a python/shell/scheme/javascript program is
taking the most
Peter Zijlstra pet...@infradead.org writes:
Whatever are we doing to end up in do_page_fault() as it stands? Surely
we can tell the CPU to go elsewhere to handle faults?
Isn't that as simple as calling set_intr_gate(14, my_page_fault)
somewhere on the cpuinit instead of the regular
--- a/qemu-kvm-x86.c
+++ b/qemu-kvm-x86.c
@@ -1015,6 +1015,7 @@ void kvm_arch_load_regs(CPUState *env)
#endif
set_msr_entry(msrs[n++], MSR_KVM_SYSTEM_TIME, env->system_time_msr);
set_msr_entry(msrs[n++], MSR_KVM_WALL_CLOCK, env->wall_clock_msr);
+set_msr_entry(msrs[n++],
i.e. it has all the makings of a stupid, avoidable, permanent fork. The thing
Nearly. There was no equivalent of a kernel based virtual driver host
before.
- Are a pure software concept and any compatibility mismatch is
self-inflicted. The patches
http://www.redhat.com/f/pdf/summit/cwright_11_open_source_virt.pdf
See slide 32. This is without vhost-net.
Thanks. Do you also have latency numbers?
It seems like there's definitely still potential for improvement
with messages <4K. But for the large messages they indeed
look rather good.
And its moot, anyway, as I have already retracted my one outstanding
pull request based on Linus' observation. So at this time, I am not
advocating _anything_ for upstream inclusion. And I am contemplating
_never_ doing so again. It's not worth _this_.
That certainly sounds like the wrong
It seems like there's definitely still potential for improvement
with messages <4K. But for the large messages they indeed
look rather good.
You are misreading the graph. At 4K it is tracking bare metal (the
green and yellow lines are bare metal, the red and blue bars are virtio).
At 4k
Ingo Molnar mi...@elte.hu writes:
Yes, there's (obviously) compatibility requirements and artifacts and past
mistakes (as with any software interface), but you need to admit it to
yourself that your virtualization is sloppy just like the hardware claim is
Yes that's exactly what I meant.
Ira W. Snyder i...@ovro.caltech.edu writes:
(You'll quickly find that you must use DMA to transfer data across PCI.
AFAIK, CPUs cannot do burst accesses to the PCI bus. I get a 10+ times
AFAIK that's what write-combining on x86 does. DMA has other
advantages of course.
-Andi
Gleb Natapov g...@redhat.com writes:
+int nested = 1;
+EXPORT_SYMBOL_GPL(nested);
Unless this is a lot better tested and audited wouldn't it make more sense
to default it to off?
I don't think it's a big burden to let users set a special knob for this,
but it would be a big problem if there
Zachary Amsden zams...@redhat.com writes:
Damn, this is complicated crap. The analogous task in real life would
be keeping a band of howler monkeys, each in their own tree, singing in
unison while the lead vocalist jumps from tree to tree, and meanwhile,
an unseen conductor keeps changing
Thomas Fjellstrom tfjellst...@shaw.ca writes:
Hardware context switches aren't free either.
FWIW, SMT has no hardware context switches, the 'S' stands for
simultaneous: the operations from the different threads are travelling
simultaneously through the CPU's pipeline.
You seem to confuse it
Fine?
I cannot say -- are there paths that could drop the device beforehand?
(as in do you hold a reference to it?)
-Andi
On Wed, Nov 04, 2009 at 03:08:28PM +0200, Michael S. Tsirkin wrote:
On Wed, Nov 04, 2009 at 01:59:57PM +0100, Andi Kleen wrote:
Fine?
I cannot say -- are there paths that could drop the device beforehand?
Do you mean drop the mm reference?
No the reference to the device, which owns