Thanks Paolo! That makes sense, KVM has to inject the #UD in order for
the guest to deliver the exception in the guest user space.
On Wed, May 7, 2014 at 1:55 AM, Paolo Bonzini wrote:
> Il 06/05/2014 22:11, Alexandru Duţu ha scritto:
>
>> What is puzzling, though, is the fact that even if there is
Hi,
I launched a KVM guest from OpenStack, with only one NIC. (OpenStack
Folsom on RHEL 6.3)
The network setup is like this:
virtual NIC -> vnet0 -> br100 -> eth0 -> physical network
The vNIC has several IPs: 172.17.11.11/24 is the primary one,
172.17.11.65-66/24 are secondary ones.
I expect t
On Mon, May 05, 2014 at 03:51:22PM +0200, Alexander Graf wrote:
> When we migrate we ask the kernel about its current belief on what the guest
> time would be. However, I've seen cases where the kvmclock guest structure
> indicates a time more recent than the kvm returned time.
>
> To make sure we
This patch adds support for handling 2nd stage page faults during migration:
it disables faulting in huge pages and splits up existing huge pages.
Signed-off-by: Mario Smarduch
---
arch/arm/kvm/mmu.c | 30 --
1 file changed, 28 insertions(+), 2 deletions(-)
diff -
This patch adds support for keeping track of VM dirty pages, by updating
per memslot dirty bitmap and write protecting the page again.
Signed-off-by: Mario Smarduch
---
arch/arm/include/asm/kvm_host.h |3 ++
arch/arm/kvm/arm.c |5 --
arch/arm/kvm/mmu.c | 99 +
Patch adds support for live migration: initial split-up of huge pages
in a memory slot and write protection of all pages in the memory slot.
Signed-off-by: Mario Smarduch
---
arch/arm/include/asm/kvm_host.h |7 ++
arch/arm/include/asm/kvm_mmu.h | 16 +++-
arch/arm/kvm/arm.c |3 +
Hi,
This is the v5 patchset of live migration support for ARMv7.
- Tested on two 4-way A15 hardware platforms, QEMU 2-way/4-way SMP guest up to 2GB
- Various dirty data rates tested - 2GB/1s ... 2048 pgs/5ms
- validated source/destination memory image integrity
- No issues, v4 time skip due to test harness.
Chan
Patch adds a HYP interface for global VM TLB invalidation without an
address parameter.
- Added ARM version of kvm_flush_remote_tlbs()
Signed-off-by: Mario Smarduch
---
arch/arm/include/asm/kvm_asm.h |1 +
arch/arm/include/asm/kvm_host.h |2 ++
arch/arm/kvm/interrupts.S |5 +
On 08.05.14 01:21, Marcelo Tosatti wrote:
On Tue, May 06, 2014 at 09:18:27AM +0200, Alexander Graf wrote:
On 06.05.14 01:31, Marcelo Tosatti wrote:
On Mon, May 05, 2014 at 08:23:43PM -0300, Marcelo Tosatti wrote:
Hi Alexander,
On Mon, May 05, 2014 at 03:51:22PM +0200, Alexander Graf wrote:
On Tue, May 06, 2014 at 09:54:35PM +0200, Marcin Gibuła wrote:
> >Yes, and it isn't. Any ideas why it's not? This patch really just uses
> >the guest visible kvmclock time rather than the host view of it on
> >migration.
> >
> >There is definitely something very broken on the host's side since it
>
On Tue, May 06, 2014 at 09:18:27AM +0200, Alexander Graf wrote:
>
> On 06.05.14 01:31, Marcelo Tosatti wrote:
> >On Mon, May 05, 2014 at 08:23:43PM -0300, Marcelo Tosatti wrote:
> >>Hi Alexander,
> >>
> >>On Mon, May 05, 2014 at 03:51:22PM +0200, Alexander Graf wrote:
> >>>When we migrate we ask t
Treat monitor and mwait instructions as nop, which is architecturally
correct (but inefficient) behavior. We do this to prevent misbehaving
guests (e.g. OS X <= 10.7) from crashing after they fail to check for
monitor/mwait availability via cpuid.
Since mwait-based idle loops relying on these nop-
Il 07/05/2014 20:10, Gabriel L. Somlo ha scritto:
1. I can't test svm.c (on AMD). As such, I'm not sure the
skip_emulated_instruction() call in my own version of nop_interception()
is necessary. If not, I could probably just call the already existing
nop_on_interception() (line 1
> Raghavendra KT had done some performance testing on this patch with
> the following results:
>
> Overall we are seeing good improvement for pv-unfair version.
>
> System: 32 cpu sandybridge with HT on (4 node with 32 GB each)
> Guest : 8GB with 16 vcpu/VM.
> Average was taken over 8-10 data poi
On Wed, May 07, 2014 at 11:01:28AM -0400, Waiman Long wrote:
> v9->v10:
> - Make some minor changes to qspinlock.c to accommodate review feedback.
> - Change author to PeterZ for 2 of the patches.
> - Include Raghavendra KT's test results in patch 18.
Any chance you can post these on a git t
On 05/07/2014 08:15 PM, Michael S. Tsirkin wrote:
On Wed, May 07, 2014 at 02:10:59PM -0400, Gabriel L. Somlo wrote:
Treat monitor and mwait instructions as nop, which is architecturally
correct (but inefficient) behavior. We do this to prevent misbehaving
guests (e.g. OS X <= 10.7) from receivin
On Wed, May 07, 2014 at 02:10:59PM -0400, Gabriel L. Somlo wrote:
> Treat monitor and mwait instructions as nop, which is architecturally
> correct (but inefficient) behavior. We do this to prevent misbehaving
> guests (e.g. OS X <= 10.7) from receiving invalid opcode faults after
> failing to chec
Treat monitor and mwait instructions as nop, which is architecturally
correct (but inefficient) behavior. We do this to prevent misbehaving
guests (e.g. OS X <= 10.7) from receiving invalid opcode faults after
failing to check for monitor/mwait availability via cpuid.
Since mwait-based idle loops
On Wed, May 07, 2014 at 04:20:47PM +0100, Marc Zyngier wrote:
> In order to be able to use the DBG_MDSCR_* macros from the KVM code,
> move the relevant definitions to the obvious include file.
>
> Also move the debug_el enum to a portion of the file that is guarded
> by #ifndef __ASSEMBLY__ in or
Il 07/05/2014 15:19, Gabriel L. Somlo ha scritto:
> On Wed, May 07, 2014 at 08:29:19AM +0200, Jan Kiszka wrote:
>> On 2014-05-06 20:35, gso...@gmail.com wrote:
>>> Signed-off-by: Gabriel Somlo
>>> ---
>>>
>>> Jan,
>>>
>>> After today's pull from kvm, I also need this to build against my
>>> Fedora
https://bugzilla.kernel.org/show_bug.cgi?id=73721
Paolo Bonzini changed:
What      | Removed | Added
Status    | NEW     | RESOLVED
CC        |
On 5/7/14, 5:50 PM, Bandan Das wrote:
Nadav Amit writes:
32-bit operations are zero extended in 64-bit mode. Currently, the code does
not handle them correctly and keeps the high bits. In 16-bit mode, the high
32-bits are kept intact.
In addition, although it is not well-documented, when addr
Il 07/05/2014 15:29, Michael S. Tsirkin ha scritto:
It seems that it's easy to implement the EOI assist
on top of the PV EOI feature: simply convert the
page address to the format expected by PV EOI.
Notes:
-"No EOI required" is set only if interrupt injected
is edge triggered; this is true bec
Il 07/05/2014 14:32, Nadav Amit ha scritto:
This series of patches fixes various scenarios in which KVM does not follow x86
specifications. Patches #4 and #5 are related; they reflect a new revision of
the previously submitted patch that dealt with the wrong masking of registers
in long-mode. Pa
Hello.
On 06-05-2014 19:51, Andreas Herrmann wrote:
From: David Daney
It is a performance enhancement. When running in a simulator, each
system call to write a character takes a lot of time. Batching them
up decreases the overhead (in the root kernel) of each virtio console
write.
Signe
On 5/7/14, 5:43 PM, Bandan Das wrote:
Nadav Amit writes:
Relative jumps and calls do the masking according to the operand size, and not
according to the address size as the KVM emulator does today. In 64-bit mode,
the resulting RIP is always 64-bit. Otherwise it is masked according to the
ins
On 07/05/14 16:42, Peter Maydell wrote:
> On 7 May 2014 16:20, Marc Zyngier wrote:
>> This patch series adds debug support, a key feature missing from the
>> KVM/arm64 port.
>>
>> The main idea is to keep track of whether the debug registers are
>> "dirty" (changed by the guest) or not. In this ca
Il 07/05/2014 14:32, Nadav Amit ha scritto:
Relative jumps and calls do the masking according to the operand size, and not
according to the address size as the KVM emulator does today. In 64-bit mode,
the resulting RIP is always 64-bit. Otherwise it is masked according to the
instruction operand
Il 07/05/2014 14:32, Nadav Amit ha scritto:
32-bit operations are zero extended in 64-bit mode. Currently, the code does
not handle them correctly and keeps the high bits. In 16-bit mode, the high
32-bits are kept intact.
In addition, although it is not well-documented, when address override pre
On 07/05/14 16:34, Peter Maydell wrote:
> On 7 May 2014 16:20, Marc Zyngier wrote:
>> pm_fake doesn't quite describe what the handler does (ignoring writes
>> and returning 0 for reads).
>>
>> As we're about to use it (a lot) in a different context, rename it
>> with an (admittedly cryptic) name tha
On 7 May 2014 16:20, Marc Zyngier wrote:
> This patch series adds debug support, a key feature missing from the
> KVM/arm64 port.
>
> The main idea is to keep track of whether the debug registers are
> "dirty" (changed by the guest) or not. In this case, perform the usual
> save/restore dance, for
Il 07/05/2014 17:30, Abel Gordon ha scritto:
> > ... which we already do. The only secondary execution controls we allow are
> > APIC page, unrestricted guest, WBINVD exits, and of course EPT.
>
> But we don't verify if L1 tries to enable the feature for L1 (even if
> it's not exposed)... Or do
On 7 May 2014 16:20, Marc Zyngier wrote:
> pm_fake doesn't quite describe what the handler does (ignoring writes
> and returning 0 for reads).
>
> As we're about to use it (a lot) in a different context, rename it
> with an (admittedly cryptic) name that makes sense for all users.
> -/*
> - * We cou
On Wed, May 7, 2014 at 2:40 PM, Paolo Bonzini wrote:
> Il 07/05/2014 13:37, Paolo Bonzini ha scritto:
>
>> Il 07/05/2014 13:16, Abel Gordon ha scritto:
> PLE should be left enabled, I think.
>>>
>>> Well... the PLE settings L0 uses to run L1 (vmcs01) may be different
>>> than the PLE se
Add handlers for all the AArch32 debug registers that are accessible
from EL0 or EL1. The code follows the same strategy as the AArch64
counterpart with regards to tracking the dirty state of the debug
registers.
Signed-off-by: Marc Zyngier
---
arch/arm64/include/asm/kvm_asm.h | 9 +++
arch/arm
Il 07/05/2014 16:50, Bandan Das ha scritto:
> +static void assign_masked(ulong *dest, ulong src, int bytes)
> {
> - *dest = (*dest & ~mask) | (src & mask);
> + switch (bytes) {
> + case 2:
> + *dest = (u16)src | (*dest & ~0xfffful);
> + break;
> + case 4:
> + *dest
Add handlers for all the AArch64 debug registers that are accessible
from EL0 or EL1. The trapping code keeps track of the state of the
debug registers, allowing for the switch code to implement a lazy
switching strategy.
Signed-off-by: Marc Zyngier
---
arch/arm64/include/asm/kvm_asm.h | 28 ++
This patch series adds debug support, a key feature missing from the
KVM/arm64 port.
The main idea is to keep track of whether the debug registers are
"dirty" (changed by the guest) or not. In this case, perform the usual
save/restore dance, for one run only. It means we only have a penalty
if a g
On 5/7/14, 4:57 PM, Paolo Bonzini wrote:
Il 07/05/2014 14:32, Nadav Amit ha scritto:
In long-mode, when the address size is 4 bytes, the linear address is not
truncated as the emulator mistakenly does. Instead, the offset within the
segment (the ea field) should be truncated according to the ad
We now have multiple tables for the various system registers
we trap. Make sure we check the order of all of them, as it is
critical that we get the order right (been there, done that...).
Signed-off-by: Marc Zyngier
---
arch/arm64/kvm/sys_regs.c | 22 --
1 file changed, 20 i
An interesting "feature" of the CP14 encoding is that there is
an overlap between 32 and 64bit registers, meaning they cannot
live in the same table as we did for CP15.
Create separate tables for 64bit CP14 and CP15 registers, and
let the top level handler use the right one.
Signed-off-by: Marc Z
As we're about to trap a bunch of CP14 registers, let's rework
the CP15 handling so it can be generalized and work with multiple
tables.
Signed-off-by: Marc Zyngier
---
arch/arm64/include/asm/kvm_asm.h| 2 +-
arch/arm64/include/asm/kvm_coproc.h | 3 +-
arch/arm64/include/asm/kvm_host.h
Implement switching of the debug registers. While the number
of registers is massive, CPUs usually don't implement them all
(A57 has 6 breakpoints and 4 watchpoints, which gives us a total
of 22 registers "only").
Also, we only save/restore them when MDSCR_EL1 has debug enabled,
or when we've flag
pm_fake doesn't quite describe what the handler does (ignoring writes
and returning 0 for reads).
As we're about to use it (a lot) in a different context, rename it
with an (admittedly cryptic) name that makes sense for all users.
Signed-off-by: Marc Zyngier
---
arch/arm64/kvm/sys_regs.c | 83
Enable trapping of the debug registers, preventing the guest from
messing with the host state (and allowing guests to use the debug
infrastructure as well).
Signed-off-by: Marc Zyngier
---
arch/arm64/kvm/hyp.S | 8
1 file changed, 8 insertions(+)
diff --git a/arch/arm64/kvm/hyp.S b/arch/ar
In order to be able to use the DBG_MDSCR_* macros from the KVM code,
move the relevant definitions to the obvious include file.
Also move the debug_el enum to a portion of the file that is guarded
by #ifndef __ASSEMBLY__ in order to use that file from assembly code.
Signed-off-by: Marc Zyngier
-
v9->v10:
- Make some minor changes to qspinlock.c to accommodate review feedback.
- Change author to PeterZ for 2 of the patches.
- Include Raghavendra KT's test results in patch 18.
v8->v9:
- Integrate PeterZ's version of the queue spinlock patch with some
modification:
http://lkm
There is a problem in the current trylock_pending() function. When the
lock is free, but the pending bit holder hasn't grabbed the lock &
cleared the pending bit yet, the trylock_pending() function will fail.
As a result, the regular queuing code path will be used most of
the time even when there
Currently, atomic_cmpxchg() is used to get the lock. However, this is
not really necessary if there is more than one task in the queue and
the queue head doesn't need to reset the queue code word. For that case,
a simple write to set the lock bit is enough as the queue head will
be the only one eligi
In order to support additional virtualization features like unfair lock
and para-virtualized spinlock, it is necessary to store additional
CPU specific data into the queue node structure. As a result, a new
qnode structure is created and the mcs_spinlock structure is now part
of the new structure.
This patch extracts the logic for the exchange of new and previous tail
code words into a new xchg_tail() function which can be optimized in a
later patch.
Signed-off-by: Waiman Long
---
include/asm-generic/qspinlock_types.h |2 +
kernel/locking/qspinlock.c| 61
This patch introduces a new generic queue spinlock implementation that
can serve as an alternative to the default ticket spinlock. Compared
with the ticket spinlock, this queue spinlock should be almost as fair
as the ticket spinlock. It has about the same speed in single-thread
and it can be much
From: Peter Zijlstra
Because the qspinlock needs to touch a second cacheline; add a pending
bit and allow a single in-word spinner before we punt to the second
cacheline.
Signed-off-by: Peter Zijlstra
Signed-off-by: Waiman Long
---
include/asm-generic/qspinlock_types.h | 12 +++-
kernel/loc
This patch adds base para-virtualization support to the queue
spinlock in the same way as was done in the PV ticket lock code. In
essence, the lock waiters will spin for a specified number of times
(QSPIN_THRESHOLD = 2^14) and then halt themselves. The queue head waiter,
unlike the other waiter, will
With the pending addition of more codes to support unfair lock and
PV spinlock, the complexity of the slowpath function increases to
the point that the number of scratch-pad registers in the x86-64
architecture is not enough and so those additional non-scratch-pad
registers will need to be used. Th
From: Peter Zijlstra
When we allow for a max NR_CPUS < 2^14 we can optimize the pending
wait-acquire and the xchg_tail() operations.
By growing the pending bit to a byte, we reduce the tail to 16bit.
This means we can use xchg16 for the tail part and do away with all
the repeated cmpxchg() oper
This patch enables the coexistence of both the PV qspinlock and
unfair lock. When both are enabled, however, only the lock fastpath
will perform lock stealing whereas the slowpath will have that disabled
to get the best of both features.
We also need to transition a CPU spinning too long in the p
Locking is always an issue in a virtualized environment because of 2
different types of problems:
1) Lock holder preemption
2) Lock waiter preemption
One solution to the lock waiter preemption problem is to allow unfair
lock in a virtualized environment. In this case, a new lock acquirer
can com
In order to fully resolve the lock waiter preemption problem in virtual
guests, it is necessary to enable lock stealing in the lock waiters.
A simple test-and-set lock, however, has 2 main problems:
1) The constant spinning on the lock word puts a lot of cacheline
contention traffic on the aff
The simple unfair queue lock cannot completely solve the lock waiter
preemption problem as a preempted CPU at the front of the queue will
block forward progress in all the other CPUs behind it in the queue.
To allow those CPUs to move forward, it is necessary to enable lock
stealing for those lock
If unfair lock is supported, the lock acquisition loop at the end of
the queue_spin_lock_slowpath() function may need to detect the fact
the lock can be stolen. Code is added for the stolen lock detection.
A new qhead macro is also defined as a shorthand for mcs.locked.
Signed-off-by: Waiman Lon
This patch renames the paravirt_ticketlocks_enabled static key to a
more generic paravirt_spinlocks_enabled name.
Signed-off-by: Waiman Long
---
arch/x86/include/asm/spinlock.h |4 ++--
arch/x86/kernel/kvm.c|2 +-
arch/x86/kernel/paravirt-spinlocks.c |4 ++--
arc
This patch modifies the para-virtualization (PV) infrastructure code
of the x86-64 architecture to support the PV queue spinlock. Three
new virtual methods are added to support PV qspinlock:
1) kick_cpu - schedule in a virtual CPU
2) halt_cpu - schedule out a virtual CPU
3) lockstat - update st
This patch adds the necessary KVM specific code to allow KVM to
support the CPU halting and kicking operations needed by the queue
spinlock PV code.
Two KVM guests of 20 CPU cores (2 nodes) were created for performance
testing in one of the following three configurations:
1) Only 1 VM is active
This patch adds the necessary XEN specific code to allow XEN to
support the CPU halting and kicking operations needed by the queue
spinlock PV code.
Signed-off-by: Waiman Long
---
arch/x86/xen/spinlock.c | 147 +--
kernel/Kconfig.locks|2 +-
2
This patch makes the necessary changes at the x86 architecture
specific layer to enable the use of queue spinlock for x86-64. As
x86-32 machines are typically not multi-socket. The benefit of queue
spinlock may not be apparent. So queue spinlock is not enabled.
Currently, there is some incompatibi
On 04/27/2014 02:09 PM, Raghavendra K T wrote:
For kvm part feel free to add:
Tested-by: Raghavendra K T
V9 testing has shown no hangs.
I was able to do some performance testing. here are the results:
Overall we are seeing good improvement for pv-unfair version.
System : 32 cpu sandybridge w
On Wed, May 07, 2014 at 10:00:21AM +0100, Marc Zyngier wrote:
> Kim, Christoffer,
>
> On Tue, May 06 2014 at 7:04:48 pm BST, Christoffer Dall
> wrote:
> > On Tue, Mar 25, 2014 at 05:08:14PM -0500, Kim Phillips wrote:
> >> Use the correct memory type for device MMIO mappings: PAGE_S2_DEVICE.
> >
Nadav Amit writes:
> 32-bit operations are zero extended in 64-bit mode. Currently, the code does
> not handle them correctly and keeps the high bits. In 16-bit mode, the high
> 32-bits are kept intact.
>
> In addition, although it is not well-documented, when address override prefix
It would be
Nadav Amit writes:
> Relative jumps and calls do the masking according to the operand size, and not
> according to the address size as the KVM emulator does today. In 64-bit mode,
> the resulting RIP is always 64-bit. Otherwise it is masked according to the
> instruction operand-size. Note that
It seems that it's easy to implement the EOI assist
on top of the PV EOI feature: simply convert the
page address to the format expected by PV EOI.
Notes:
-"No EOI required" is set only if interrupt injected
is edge triggered; this is true because level interrupts are going
through IOAPIC which
Il 07/05/2014 14:32, Nadav Amit ha scritto:
In long-mode, when the address size is 4 bytes, the linear address is not
truncated as the emulator mistakenly does. Instead, the offset within the
segment (the ea field) should be truncated according to the address size.
As Intel SDM says: "In 64-bit
On Wed, May 07, 2014 at 08:29:19AM +0200, Jan Kiszka wrote:
> On 2014-05-06 20:35, gso...@gmail.com wrote:
> > Signed-off-by: Gabriel Somlo
> > ---
> >
> > Jan,
> >
> > After today's pull from kvm, I also need this to build against my
> > Fedora 20 kernel (3.13.10-200.fc20.x86_64).
>
> Which ve
In long-mode, bit 7 in the PDPTE is not reserved only if 1GB pages are
supported by the CPU. Currently the bit is considered by KVM as always
reserved.
Signed-off-by: Nadav Amit
---
arch/x86/kvm/cpuid.h | 7 +++
arch/x86/kvm/mmu.c | 8 ++--
2 files changed, 13 insertions(+), 2 deletion
This series of patches fixes various scenarios in which KVM does not follow x86
specifications. Patches #4 and #5 are related; they reflect a new revision of
the previously submitted patch that dealt with the wrong masking of registers
in long-mode. Patch #3 is a follow-up to the previously sumbit
32-bit operations are zero extended in 64-bit mode. Currently, the code does
not handle them correctly and keeps the high bits. In 16-bit mode, the high
32-bits are kept intact.
In addition, although it is not well-documented, when address override prefix
is used with REP-string instruction, RCX h
The RSP register is not automatically cached, causing mov DR instruction with
RSP to fail. Instead the regular register accessing interface should be used.
Signed-off-by: Nadav Amit
---
arch/x86/kvm/vmx.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/arch/x86/kvm/vmx.c b/a
Relative jumps and calls do the masking according to the operand size, and not
according to the address size as the KVM emulator does today. In 64-bit mode,
the resulting RIP is always 64-bit. Otherwise it is masked according to the
instruction operand-size. Note that when 16-bit address size is u
In long-mode, when the address size is 4 bytes, the linear address is not
truncated as the emulator mistakenly does. Instead, the offset within the
segment (the ea field) should be truncated according to the address size.
As Intel SDM says: "In 64-bit mode, the effective address components are ad
On Wed, 7 May 2014 13:17:51 +0100
Peter Maydell wrote:
> On 7 May 2014 12:04, Marc Zyngier wrote:
> > On Wed, May 07 2014 at 11:40:54 am BST, Greg Kurz
> > wrote:
> >> All the fuzz is not really about enforcing kernel access... PPC also
> >> has a current endianness selector (MSR_LE) but it on
On 07/05/14 13:17, Peter Maydell wrote:
> On 7 May 2014 12:04, Marc Zyngier wrote:
>> On Wed, May 07 2014 at 11:40:54 am BST, Greg Kurz
>> wrote:
>>> All the fuzz is not really about enforcing kernel access... PPC also
>>> has a current endianness selector (MSR_LE) but it only makes sense
>>> if
On 7 May 2014 13:16, Marc Zyngier wrote:
> That being said, I'm going to stop replying to this thread, and instead
> go back writing code, posting it, and getting on with my life in
> virtio-legacy land.
Some of us are trying to have a conversation in this thread
about virtio-legacy behaviour :-)
On 7 May 2014 12:04, Marc Zyngier wrote:
> On Wed, May 07 2014 at 11:40:54 am BST, Greg Kurz
> wrote:
>> All the fuzz is not really about enforcing kernel access... PPC also
>> has a current endianness selector (MSR_LE) but it only makes sense
>> if you are in the cpu context. Initial versions o
On 07/05/14 12:49, Alexander Graf wrote:
> On 05/07/2014 12:46 PM, Marc Zyngier wrote:
>> On Wed, May 07 2014 at 11:10:56 am BST, Peter Maydell
>> wrote:
>>> On 7 May 2014 10:52, Marc Zyngier wrote:
On Wed, May 07 2014 at 10:34:30 am BST, Peter Maydell
wrote:
> Current opinion on
On 05/07/2014 12:46 PM, Marc Zyngier wrote:
On Wed, May 07 2014 at 11:10:56 am BST, Peter Maydell
wrote:
On 7 May 2014 10:52, Marc Zyngier wrote:
On Wed, May 07 2014 at 10:34:30 am BST, Peter Maydell
wrote:
Current opinion on the qemu-devel thread seems to be that we
should just define tha
Il 07/05/2014 13:37, Paolo Bonzini ha scritto:
Il 07/05/2014 13:16, Abel Gordon ha scritto:
> PLE should be left enabled, I think.
Well... the PLE settings L0 uses to run L1 (vmcs01) may be different
than the PLE settings L1 configured to run L2 (vmcs12).
For example, L0 can use a ple_gap to
On 05/07/2014 07:56 AM, Paul Mackerras wrote:
On Sun, May 04, 2014 at 10:56:08PM +0530, Aneesh Kumar K.V wrote:
With debug option "sleep inside atomic section checking" enabled we get
the below WARN_ON during a PR KVM boot. This is because upstream now
have PREEMPT_COUNT enabled even if we have
Il 07/05/2014 13:16, Abel Gordon ha scritto:
> PLE should be left enabled, I think.
Well... the PLE settings L0 uses to run L1 (vmcs01) may be different
than the PLE settings L1 configured to run L2 (vmcs12).
For example, L0 can use a ple_gap to run L1 that is bigger than the
ple_gap L1 confi
On Wed, May 7, 2014 at 11:58 AM, Paolo Bonzini wrote:
> Il 04/05/2014 18:33, Hu Yaohui ha scritto:
>
>>> I experienced a similar problem that was related to nested code
>>> having some bugs related to apicv and other new vmx features.
>>>
>>> For example, the code enabled posted interrupts to run
On Wed, May 07 2014 at 11:40:54 am BST, Greg Kurz
wrote:
> On Wed, 07 May 2014 10:52:01 +0100
> Marc Zyngier wrote:
>
>> On Wed, May 07 2014 at 10:34:30 am BST, Peter Maydell
>> wrote:
>> > On 6 May 2014 19:38, Peter Maydell wrote:
>> >> On 6 May 2014 18:25, Marc Zyngier wrote:
>> >>> On Tue,
On Wed, May 07 2014 at 11:10:56 am BST, Peter Maydell
wrote:
> On 7 May 2014 10:52, Marc Zyngier wrote:
>> On Wed, May 07 2014 at 10:34:30 am BST, Peter Maydell
>> wrote:
>>> Current opinion on the qemu-devel thread seems to be that we
>>> should just define that the endianness of the virtio de
On Wed, 07 May 2014 10:52:01 +0100
Marc Zyngier wrote:
> On Wed, May 07 2014 at 10:34:30 am BST, Peter Maydell
> wrote:
> > On 6 May 2014 19:38, Peter Maydell wrote:
> >> On 6 May 2014 18:25, Marc Zyngier wrote:
> >>> On Tue, May 06 2014 at 3:28:07 pm BST, Will Deacon
> >>> wrote:
> O
On Wed, May 07 2014 at 11:11:13 am BST, Alexander Graf wrote:
> On 05/07/2014 11:57 AM, Marc Zyngier wrote:
>> Huh? What if my guest has userspace using an idmap, with Stage-1 MMU for
>> isolation only (much like an MPU)? R-class guests anyone?
>>
>> Agreed, this is not the general use case, but t
Hi all,
On 06/05/14 08:16, Alexander Graf wrote:
>
> On 06.05.14 01:23, Marcelo Tosatti wrote:
>
>> 1) By what algorithm you retrieve
>> and compare time in kvmclock guest structure and KVM_GET_CLOCK.
>> What are the results of the comparison.
>> And whether any backwards time was visible in the
On Wed, May 07, 2014 at 12:11:13PM +0200, Alexander Graf wrote:
> On 05/07/2014 11:57 AM, Marc Zyngier wrote:
> >On Wed, May 07 2014 at 10:42:54 am BST, Alexander Graf wrote:
> >>>Am 07.05.2014 um 11:34 schrieb Peter Maydell :
> >>>
> On 6 May 2014 19:38, Peter Maydell wrote:
> >On 6 May
On Wed, May 07 2014 at 10:55:45 am BST, Alexander Graf wrote:
>> Am 07.05.2014 um 11:52 schrieb Marc Zyngier :
>>
>>> On Wed, May 07 2014 at 10:34:30 am BST, Peter Maydell
>>> wrote:
On 6 May 2014 19:38, Peter Maydell wrote:
> On 6 May 2014 18:25, Marc Zyngier wrote:
>> On Tue, Ma
On 05/07/2014 11:57 AM, Marc Zyngier wrote:
On Wed, May 07 2014 at 10:42:54 am BST, Alexander Graf wrote:
Am 07.05.2014 um 11:34 schrieb Peter Maydell :
On 6 May 2014 19:38, Peter Maydell wrote:
On 6 May 2014 18:25, Marc Zyngier wrote:
On Tue, May 06 2014 at 3:28:07 pm BST, Will Deacon
On 7 May 2014 10:52, Marc Zyngier wrote:
> On Wed, May 07 2014 at 10:34:30 am BST, Peter Maydell
> wrote:
>> Current opinion on the qemu-devel thread seems to be that we
>> should just define that the endianness of the virtio device is
>> the endianness of the guest kernel at the point where the
On Wed, May 07 2014 at 10:42:54 am BST, Alexander Graf wrote:
>> Am 07.05.2014 um 11:34 schrieb Peter Maydell :
>>
>>> On 6 May 2014 19:38, Peter Maydell wrote:
On 6 May 2014 18:25, Marc Zyngier wrote:
> On Tue, May 06 2014 at 3:28:07 pm BST, Will Deacon
> wrote:
>> On Thu,
> Am 07.05.2014 um 11:52 schrieb Marc Zyngier :
>
>> On Wed, May 07 2014 at 10:34:30 am BST, Peter Maydell
>> wrote:
>>> On 6 May 2014 19:38, Peter Maydell wrote:
On 6 May 2014 18:25, Marc Zyngier wrote:
> On Tue, May 06 2014 at 3:28:07 pm BST, Will Deacon
> wrote:
>> On