On Mon, 28 Jan 2019 17:31:01 +0100
Andrew Jones wrote:
> On Mon, Jan 28, 2019 at 02:24:29PM +, Alexandru Elisei wrote:
> > On 1/25/19 4:47 PM, Andrew Jones wrote:
> > > On Fri, Jan 25, 2019 at 04:36:13PM +, Alexandru Elisei
> > > wrote:
> > >> On 1/24/19 12:37 PM, Andrew Jones wrote:
Hi Eric,
On 25/01/2019 16:49, Auger Eric wrote:
[...]
>>> diff --git a/include/uapi/linux/iommu.h b/include/uapi/linux/iommu.h
>>> index 7a7cf7a3de7c..4605f5cfac84 100644
>>> --- a/include/uapi/linux/iommu.h
>>> +++ b/include/uapi/linux/iommu.h
>>> @@ -47,4 +47,99 @@ struct
On Tue, Jan 22, 2019 at 02:59:48PM +, Julien Thierry wrote:
> Hi Andrew
>
> On 22/01/2019 10:49, Andrew Murray wrote:
> > Emulate chained PMU counters by creating a single 64-bit event counter
> > for a pair of chained KVM counters.
> >
> > Signed-off-by: Andrew Murray
> > ---
> >
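The chaining idea in the patch above can be sketched as simple arithmetic: a pair of adjacent 32-bit guest counters is backed by one 64-bit value, with the low counter supplying bits [31:0] and its chained partner bits [63:32]. This is an illustrative model, not KVM's actual code; the function names are made up.

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical sketch of chained-counter emulation: combine a pair of
 * 32-bit counters into the single 64-bit value backing the perf event.
 */
static uint64_t chained_value(uint32_t low, uint32_t high)
{
    return ((uint64_t)high << 32) | low;
}

/* Split the 64-bit event count back into the two guest-visible counters. */
static void chained_split(uint64_t value, uint32_t *low, uint32_t *high)
{
    *low = (uint32_t)value;
    *high = (uint32_t)(value >> 32);
}
```

An overflow of the low half then shows up naturally as an increment of the high half, which is exactly the behavior a chained counter pair must present to the guest.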
On Tue, Jan 22, 2019 at 01:41:49PM +, Julien Thierry wrote:
> Hi Andrew,
>
> On 22/01/2019 10:49, Andrew Murray wrote:
> > To prevent re-creating perf events every time the counter registers
> > are changed, let's instead lazily create the event when the event
> > is first enabled and destroy
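The lazy-creation pattern being discussed can be sketched as follows; the structure and function names here are illustrative stand-ins, not KVM's actual types, and the perf event is modeled as an opaque handle.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/*
 * Sketch of lazy event creation: register writes no longer touch the
 * backing event; it is created only on first enable and torn down on
 * disable.
 */
struct pmc_state {
    bool enabled;
    void *backing_event;   /* stands in for a struct perf_event */
    int create_calls;      /* how many times we (re)created the event */
};

static void *create_event(struct pmc_state *pmc)
{
    pmc->create_calls++;
    return &pmc->create_calls;   /* dummy non-NULL handle */
}

static void pmc_write_type(struct pmc_state *pmc)
{
    /* A type-register write alone does not re-create the event. */
    (void)pmc;
}

static void pmc_enable(struct pmc_state *pmc)
{
    if (!pmc->enabled) {
        pmc->backing_event = create_event(pmc);
        pmc->enabled = true;
    }
}

static void pmc_disable(struct pmc_state *pmc)
{
    pmc->backing_event = NULL;
    pmc->enabled = false;
}
```

However many times the guest reprograms the counter registers, at most one event creation happens per enable.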
On Mon, Jan 28, 2019 at 02:24:29PM +, Alexandru Elisei wrote:
> On 1/25/19 4:47 PM, Andrew Jones wrote:
> > On Fri, Jan 25, 2019 at 04:36:13PM +, Alexandru Elisei wrote:
> >> On 1/24/19 12:37 PM, Andrew Jones wrote:
> >>> On Thu, Jan 24, 2019 at 11:59:43AM +, Andre Przywara wrote:
>
From: Mark Rutland
[ Upstream commit 0d640732dbebed0f10f18526de21652931f0b2f2 ]
When we emulate an MMIO instruction, we advance the CPU state within
decode_hsr(), before emulating the instruction effects.
Having this logic in decode_hsr() is opaque, and advancing the state
before emulation is
Hi Amit,
On 28/01/2019 06:58, Amit Daniel Kachhap wrote:
> This is a runtime feature and can be enabled by --ptrauth option.
>
> Signed-off-by: Amit Daniel Kachhap
> Cc: Mark Rutland
> Cc: Christoffer Dall
> Cc: Marc Zyngier
> Cc: Kristina Martsenko
> Cc: kvmarm@lists.cs.columbia.edu
> Cc:
Now that we have a common infrastructure for doing MMU cache
allocations, use this for mips as well.
Signed-off-by: Christoffer Dall
---
arch/mips/include/asm/kvm_host.h | 15 ++---
arch/mips/include/asm/kvm_types.h | 5 +++
arch/mips/kvm/mips.c | 2 +-
arch/mips/kvm/mmu.c
Now that we have a common mmu memcache implementation, we can reuse
this for arm and arm64.
The common implementation has a slightly different behavior when
allocating objects under high memory pressure; whereas the current
arm/arm64 implementation will give up and return -ENOMEM if the full
We are currently duplicating the mmu memory cache functionality quite
heavily between the architectures that support KVM. As a first step,
move the x86 implementation (which seems to have the most recently
maintained version of the mmu memory cache) to common code.
We rename the functions and
We currently have duplicated functionality for the mmu_memory_cache used
to pre-allocate memory for the page table manipulation code which cannot
allocate memory while holding spinlocks. This functionality is
duplicated across x86, arm/arm64, and mips.
There was recently a debate about modifying
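The core mmu_memory_cache idea described in the series above can be sketched like this: objects are pre-allocated ("topped up") from a context that is allowed to sleep, then handed out later by page-table code running under a spinlock, where calling the allocator is not permitted. Capacity, object size, and names are made up for the example.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

#define CACHE_CAPACITY 8   /* illustrative, not the kernel's value */

struct mmu_memory_cache {
    int nobjs;
    void *objects[CACHE_CAPACITY];
};

/* Fill the cache up to @min objects; may allocate, so may sleep. */
static int mmu_topup_cache(struct mmu_memory_cache *mc, int min)
{
    while (mc->nobjs < min) {
        void *obj = malloc(64);
        if (!obj)
            return -1;   /* the kernel would return -ENOMEM */
        mc->objects[mc->nobjs++] = obj;
    }
    return 0;
}

/* Take one pre-allocated object; safe under a spinlock (no allocation). */
static void *mmu_cache_alloc(struct mmu_memory_cache *mc)
{
    assert(mc->nobjs > 0);
    return mc->objects[--mc->nobjs];
}
```

The point of unifying this across x86, arm/arm64, and mips is that only the top-up policy (how many objects, what to do under memory pressure) differs per architecture, not the mechanism.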
Hi Amit,
On 28/01/2019 06:58, Amit Daniel Kachhap wrote:
> This feature will allow the KVM guest to handle pointer
> authentication instructions, or to treat them as undefined
> if not set. It uses the existing vcpu API KVM_ARM_VCPU_INIT to
> supply this parameter instead of
On Tue, Jan 22, 2019 at 10:12:22PM +, Suzuki K Poulose wrote:
> Hi Andrew,
>
> On 01/22/2019 10:49 AM, Andrew Murray wrote:
> > To prevent re-creating perf events every time the counter registers
> > are changed, let's instead lazily create the event when the event
> > is first enabled and
Hi Amit,
On 28/01/2019 06:58, Amit Daniel Kachhap wrote:
> When pointer authentication is supported, a guest may wish to use it.
> This patch adds the necessary KVM infrastructure for this to work, with
> a semi-lazy context switch of the pointer auth state.
>
> Pointer authentication feature is
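The "semi-lazy" context switch mentioned above can be modeled roughly as: the pointer-auth key registers are not switched at all until the guest first uses the feature (which traps), after which they are switched eagerly on every entry. All names here are illustrative, not the actual KVM implementation.

```c
#include <assert.h>
#include <stdbool.h>

struct vcpu_model {
    bool ptrauth_loaded;   /* has the guest used pointer auth yet? */
    int key_switches;      /* times the key registers were switched */
};

/* First guest use of a ptrauth instruction traps here. */
static void vcpu_trap_ptrauth(struct vcpu_model *v)
{
    /* Enable eager key switching from now on. */
    v->ptrauth_loaded = true;
}

/* Guest entry: switch keys only once the feature is in use. */
static void vcpu_enter(struct vcpu_model *v)
{
    if (v->ptrauth_loaded)
        v->key_switches++;   /* load guest keys, save host keys */
}
```

Guests that never touch pointer authentication thus pay no per-entry cost, which is the motivation for the laziness.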
On 1/25/19 4:47 PM, Andrew Jones wrote:
> On Fri, Jan 25, 2019 at 04:36:13PM +, Alexandru Elisei wrote:
>> On 1/24/19 12:37 PM, Andrew Jones wrote:
>>> On Thu, Jan 24, 2019 at 11:59:43AM +, Andre Przywara wrote:
On Thu, 24 Jan 2019 11:16:29 +
Alexandru Elisei wrote:
On 1/24/19 1:07 PM, Andrew Jones wrote:
> On Thu, Jan 24, 2019 at 11:16:34AM +, Alexandru Elisei wrote:
>> Instead of aborting the test when an unexpected parameter is found, use
>> argv_find() to search for the desired parameter. On arm and arm64, this
>> allows kvm-unit-tests to be used with
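The argv_find() approach described above amounts to scanning the whole argument vector for the wanted parameter instead of requiring it at a fixed position, so unexpected extra parameters are simply skipped. This is a hypothetical stand-in for the helper, not kvm-unit-tests' actual code.

```c
#include <string.h>

/* Return the index of @param in argv, or -1 if it is not present. */
static int argv_find(int argc, const char *argv[], const char *param)
{
    for (int i = 0; i < argc; i++)
        if (strcmp(argv[i], param) == 0)
            return i;
    return -1;
}
```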
On Tue, Jan 22, 2019 at 02:18:17PM +, Suzuki K Poulose wrote:
> Hi Andrew
>
> On 01/22/2019 10:49 AM, Andrew Murray wrote:
> > The perf event sample_period is currently set based upon the current
> > counter value, when PMXEVTYPER is written to and the perf event is created.
> > However the
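The relationship under discussion can be sketched as arithmetic: for a 32-bit counter, the perf sample_period is the distance from the current counter value to the overflow point, so it must track the counter's value rather than only the value seen when PMXEVTYPER was written. Purely illustrative, not the patch's code.

```c
#include <stdint.h>

/* Distance from @counter to the 32-bit overflow point. */
static uint64_t sample_period_for(uint32_t counter)
{
    return (1ULL << 32) - counter;
}
```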
On Mon, 21 Jan 2019 15:33:29 +,
Julien Thierry wrote:
>
> Interrupts masked by ICC_PMR_EL1 will not be signaled to the CPU. This
> means that hypervisor will not receive masked interrupts while running a
> guest.
>
> Avoid this by making sure ICC_PMR_EL1 is unmasked when we enter a guest.
>
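The fix described above follows a save/unmask/restore shape: ICC_PMR_EL1 acts as a priority filter, so a masked host value would hide interrupts from a running guest. The register is modeled as a plain variable here and the priority value is a hypothetical example, not the kernel's constant.

```c
#include <stdint.h>

static uint64_t icc_pmr_el1;        /* stand-in for the system register */
#define GIC_PRIO_UNMASKED 0xf0      /* example "allow all interrupts" value */

/* On guest entry: save the host's mask and open the filter fully. */
static uint64_t pmr_enter_guest(void)
{
    uint64_t host_pmr = icc_pmr_el1;
    icc_pmr_el1 = GIC_PRIO_UNMASKED;
    return host_pmr;
}

/* On guest exit: restore whatever the host had masked. */
static void pmr_exit_guest(uint64_t host_pmr)
{
    icc_pmr_el1 = host_pmr;
}
```

While the guest runs, all interrupts can reach the CPU and cause an exit; the host's masking policy is reinstated only after the exit.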