On 03.08.20 13:37, Vitaly Kuznetsov wrote:
Alexander Graf writes:
It's not desirable to have all MSRs always handled by KVM kernel space. Some
MSRs would be useful to handle in user space to either emulate behavior (like
uCode updates) or differentiate whether they are valid based
On 03.08.20 13:27, Vitaly Kuznetsov wrote:
Alexander Graf writes:
MSRs are weird. Some of them are normal control registers, such as EFER.
Some however are registers that really are model specific, not very
interesting to virtualization workloads, and not performance critical.
Others again
On 01.08.20 01:36, Jim Mattson wrote:
On Fri, Jul 31, 2020 at 2:50 PM Alexander Graf wrote:
MSRs are weird. Some of them are normal control registers, such as EFER.
Some however are registers that really are model specific, not very
interesting to virtualization workloads
is populated, MSR handling stays identical to before.
Signed-off-by: KarimAllah Ahmed
Signed-off-by: Alexander Graf
---
v2 -> v3:
- document flags for KVM_X86_ADD_MSR_ALLOWLIST
- generalize exit path, always unlock when returning
- s/KVM_CAP_ADD_MSR_ALLOWLIST/KVM_CAP_X86_MSR_ALLOWLIS
Now that we have the ability to handle MSRs from user space and also to
select which ones we do want to prevent in-kernel KVM code from handling,
let's add a selftest to showcase and verify the API.
Signed-off-by: Alexander Graf
---
v2 -> v3:
- s/KVM_CAP_ADD_MSR_ALLOWL
the existing "ignore_msrs" logic with
something that applies per-VM rather than on the full system. That way you
can run productive VMs in parallel to experimental ones where you don't care
about proper MSR handling.
Signed-off-by: Alexander Graf
---
v1 -> v2:
- s/ETRAP_TO_USER_S
- Add test to clear whitelist
- Adjust to reply-less API
- Fix asserts
- Actually trap on MSR_IA32_POWER_CTL writes
Alexander Graf (3):
KVM: x86: Deflect unknown MSR accesses to user space
KVM: x86: Introduce allow list for MSR emulation
KVM: selftests: Add test for user space MSR ha
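As a rough illustration of how a VMM opts in to this deflection, here is a
sketch against the capability as it eventually landed upstream
(KVM_CAP_X86_USER_SPACE_MSR); the exact names in this revision of the
series may differ.

#include <linux/kvm.h>
#include <sys/ioctl.h>

/* Sketch, not code from the series: opt a VM in to user-space MSR exits.
 * KVM_CAP_X86_USER_SPACE_MSR and KVM_MSR_EXIT_REASON_UNKNOWN are the
 * names as eventually merged upstream. */
static int enable_user_msr_exits(int vm_fd)
{
        struct kvm_enable_cap cap = {
                .cap = KVM_CAP_X86_USER_SPACE_MSR,
                .args = { KVM_MSR_EXIT_REASON_UNKNOWN },
        };

        return ioctl(vm_fd, KVM_ENABLE_CAP, &cap);
}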
On 30.07.20 10:59, Vitaly Kuznetsov wrote:
Alexander Graf writes:
It's not desirable to have all MSRs always handled by KVM kernel space. Some
MSRs would be useful to handle in user space to either emulate behavior (like
uCode updates) or differentiate whether they are valid based
PM Alexander Graf wrote:
Do you have a particular situation in mind where that would not be the
case and where we would still want to actually complete an MSR operation
after the environment changed?
As far as userspace is concerned, if it has replied with error=0, the
instruction has completed
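A minimal sketch of that completion contract from the VMM side, using the
msr exit fields as they were eventually merged (field names in this
revision may differ):

#include <linux/kvm.h>

/* Sketch: complete a deflected RDMSR in user space. With error = 0 the
 * instruction retires with the given data; a non-zero error makes KVM
 * inject #GP instead. Field names per the eventually merged uAPI. */
static void handle_rdmsr_exit(struct kvm_run *run)
{
        if (run->msr.index == 0x8b) {           /* MSR_IA32_UCODE_REV */
                run->msr.data = 0x100000000ULL; /* made-up revision */
                run->msr.error = 0;             /* instruction completes */
        } else {
                run->msr.error = 1;             /* KVM injects #GP */
        }
        /* The reply takes effect on the next KVM_RUN. */
}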
On 31.07.20 00:42, Jim Mattson wrote:
On Wed, Jul 29, 2020 at 4:59 PM Alexander Graf wrote:
MSRs are weird. Some of them are normal control registers, such as EFER.
Some however are registers that really are model specific, not very
interesting to virtualization workloads
n readable
names to these exit reasons inside the trace log.
Let's fix that up after the fact, so that trace logs are pretty even when
we get user space MMIO traps on ARM.
Fixes: c726200dd106d ("KVM: arm/arm64: Allow reporting non-ISV data aborts to userspace")
Signed-off-by: Alexande
is populated, MSR handling stays identical to before.
Signed-off-by: KarimAllah Ahmed
Signed-off-by: Alexander Graf
---
Documentation/virt/kvm/api.rst | 53 ++
arch/x86/include/asm/kvm_host.h | 10 +++
arch/x86/include/uapi/asm/kvm.h | 15
arch/x86/kvm/x86.c | 123
the existing "ignore_msrs" logic with
something that applies per-VM rather than on the full system. That way you
can run productive VMs in parallel to experimental ones where you don't care
about proper MSR handling.
Signed-off-by: Alexander Graf
---
v1 -> v2:
- s/ETRAP_TO_USER_S
Now that we have the ability to handle MSRs from user space and also to
select which ones we do want to prevent in-kernel KVM code from handling,
let's add a selftest to showcase and verify the API.
Signed-off-by: Alexander Graf
---
tools/testing/selftests/kvm/Makefile | 1
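A hedged sketch of the user-space setup such a test exercises, written
against the interface shape that eventually landed as
KVM_X86_SET_MSR_FILTER (this revision still used KVM_X86_ADD_MSR_ALLOWLIST);
MSR_IA32_POWER_CTL is the register the test traps on:

#include <linux/kvm.h>
#include <stdint.h>
#include <sys/ioctl.h>

/* Deny in-kernel handling of MSR_IA32_POWER_CTL (0x1fc) so accesses
 * bounce to user space. A clear bit in the range bitmap means the MSR
 * is denied; names per the eventually merged uAPI. */
static int deny_power_ctl(int vm_fd)
{
        static uint8_t bitmap[1] = { 0 };       /* bit clear = denied */
        struct kvm_msr_filter filter = {
                .flags = KVM_MSR_FILTER_DEFAULT_ALLOW,
                .ranges[0] = {
                        .flags  = KVM_MSR_FILTER_READ | KVM_MSR_FILTER_WRITE,
                        .base   = 0x1fc,        /* MSR_IA32_POWER_CTL */
                        .nmsrs  = 1,
                        .bitmap = bitmap,
                },
        };

        return ioctl(vm_fd, KVM_X86_SET_MSR_FILTER, &filter);
}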
space trapping allows us
to emulate arbitrary MSRs in user space, paving the way for target CPU
specific MSR implementations from user space.
Alexander Graf (3):
KVM: x86: Deflect unknown MSR accesses to user space
KVM: x86: Introduce allow list for MSR emulation
KVM: selftests: Add test
On 29.07.20 22:37, Jim Mattson wrote:
On Wed, Jul 29, 2020 at 1:29 PM Alexander Graf wrote:
Meanwhile, I have cleaned up Karim's old patch to add allow listing to
KVM and would post it if Aaron doesn't beat me to it :).
Ideally, this becomes a collaboration rather than a race
On 29.07.20 20:27, Jim Mattson wrote:
On Wed, Jul 29, 2020 at 2:06 AM Alexander Graf wrote:
On 28.07.20 19:13, Jim
On 29.07.20 11:22, Vitaly Kuznetsov wrote:
Alexander Graf writes:
On 29.07.20 10:23, Vitaly Kuznetsov wrote:
Jim
On 29.07.20 10:23, Vitaly Kuznetsov wrote:
Jim Mattson writes:
On Tue, Jul 28, 2020 at 5:41 AM Alexander Graf wrote:
...
While it does feel a bit overengineered, it would solve the problem that
we're turning in-KVM handled MSRs into an ABI.
It seems unlikely that userspace
On 28.07.20 19:13, Jim Mattson wrote:
On Tue, Jul 28, 2020 at 5:41 AM Alexander Graf wrote:
On 28.07.20 10:15
On 28.07.20 10:15, Vitaly Kuznetsov wrote:
Alexander Graf writes:
MSRs are weird. Some of them are normal control registers, such as EFER.
Some however are registers that really are model specific, not very
interesting to virtualization workloads, and not performance critical.
Others
the existing "ignore_msrs" logic with
something that applies per-VM rather than on the full system. That way you
can run productive VMs in parallel to experimental ones where you don't care
about proper MSR handling.
Signed-off-by: Alexander Graf
---
As a quick example to show what th
On 23.07.20 20:21, Paraschiv, Andra-Irina wrote:
On 23/07/2020 13:54, Greg KH wrote:
On Thu, Jul 23, 2020 at 12:23:56PM +0300, Paraschiv, Andra-Irina wrote:
On 22/07/2020 12:57, Greg KH wrote:
On Wed, Jul 22, 2020 at 11:27:29AM +0300, Paraschiv, Andra-Irina wrote:
+#ifndef
...@intel.com/
Reported-by: kbuild test robot
This means that the overall patch is a fix that was reported by the test
robot. I doubt that's what you mean. Just remove the line.
Signed-off-by: Andra Paraschiv
Reviewed-by: Alexander Graf
Alex
support for MSI-X interrupts.
Signed-off-by: Alexandru-Catalin Vasile
Signed-off-by: Alexandru Ciobotaru
Signed-off-by: Andra Paraschiv
Reviewed-by: Alexander Graf
Alex
On 09.07.20 09:36, Paraschiv, Andra-Irina wrote:
On 06/07/2020 13:46, Alexander Graf wrote:
On 22.06.20 22:03, Andra Paraschiv wrote:
Another resource that is being set for an enclave is memory. User space
memory regions, which need to be backed by contiguous memory regions
On 22.06.20 22:03, Andra Paraschiv wrote:
Signed-off-by: Alexandru Vasile
Signed-off-by: Andra Paraschiv
---
Changelog
v3 -> v4
* Update usage details to match the updates in v4.
* Update NE ioctl interface usage.
v2 -> v3
* Remove the include directory to use the uapi from the kernel.
On 22.06.20 22:03, Andra Paraschiv wrote:
Signed-off-by: Andra Paraschiv
Reviewed-by: Alexander Graf
Alex
On 22.06.20 22:03, Andra Paraschiv wrote:
Signed-off-by: Andra Paraschiv
---
Changelog
v3 -> v4
* Add PCI and SMP dependencies.
v2 -> v3
* Remove the GPL additional wording as SPDX-License-Identifier is
already in place.
v1 -> v2
* Update path to Kconfig to match the
termination, which is mapped to the enclave fd
release callback. Free the internal enclave info used for bookkeeping.
Signed-off-by: Alexandru Vasile
Signed-off-by: Andra Paraschiv
Reviewed-by: Alexander Graf
Alex
On 22.06.20 22:03, Andra Paraschiv wrote:
After all the enclave resources are set, the enclave is ready to begin
running.
Add ioctl command logic for starting an enclave after all its resources,
memory regions and CPUs, have been set.
The enclave start information includes the local
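A hedged user-space sketch of the start call, assuming the struct and
ioctl names as they were eventually merged (this revision may differ):

#include <linux/nitro_enclaves.h>
#include <sys/ioctl.h>

/* Sketch: start the enclave once memory regions and vCPUs are set.
 * Per the merged uAPI, enclave_cid may be 0 to let the driver assign
 * the vsock CID; flags select e.g. debug mode. */
static int start_enclave(int enclave_fd, __u64 *cid)
{
        struct ne_enclave_start_info info = {
                .flags       = 0,       /* production mode */
                .enclave_cid = 0,       /* 0 = driver picks a CID */
        };

        if (ioctl(enclave_fd, NE_START_ENCLAVE, &info) < 0)
                return -1;

        *cid = info.enclave_cid;        /* CID of the enclave vsock */
        return 0;
}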
On 22.06.20 22:03, Andra Paraschiv wrote:
Another resource that is being set for an enclave is memory. User space
memory regions, which need to be backed by contiguous memory regions,
are associated with the enclave.
One solution for allocating / reserving contiguous memory regions, that
is
On 22.06.20 22:03, Andra Paraschiv wrote:
Before setting the memory regions for the enclave, the enclave image
needs to be placed in memory. After the memory regions are set, this
memory cannot be used anymore by the VM, being carved out.
Add ioctl command logic to get the offset in enclave
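A hedged user-space sketch of that query, assuming the uAPI names as they
were eventually merged:

#include <linux/nitro_enclaves.h>
#include <sys/ioctl.h>

/* Ask the driver where inside enclave memory the enclave image (EIF)
 * must be placed, before the memory regions are set. Names per the
 * merged uAPI; this revision of the series may differ. */
static int query_image_offset(int enclave_fd, __u64 *offset)
{
        struct ne_image_load_info info = { .flags = NE_EIF_IMAGE };

        if (ioctl(enclave_fd, NE_GET_IMAGE_LOAD_INFO, &info) < 0)
                return -1;

        *offset = info.memory_offset;   /* e.g. 8 MiB into the region */
        return 0;
}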
On 22.06.20 22:03, Andra Paraschiv wrote:
An enclave, before being started, has its resources set. One of its
resources is CPU.
The NE CPU pool is set for choosing CPUs for enclaves from it. Offline
the CPUs from the NE CPU pool during the pool setup and online them back
during the NE CPU
On 06.07.20 09:49, Paraschiv, Andra-Irina wrote:
On 06/07/2020 10:13, Alexander Graf wrote:
On 22.06.20 22:03, Andra Paraschiv wrote:
The Nitro Enclaves driver provides an ioctl interface to the user space
for enclave lifetime management e.g. enclave creation / termination and
setting
.
Signed-off-by: Alexandru Vasile
Signed-off-by: Andra Paraschiv
Reviewed-by: Alexander Graf
Alex
On 22.06.20 22:03, Andra Paraschiv wrote:
The Nitro Enclaves driver provides an ioctl interface to the user space
for enclave lifetime management e.g. enclave creation / termination and
setting enclave resources such as memory and CPU.
This ioctl interface is mapped to a Nitro Enclaves misc
by this device. Add an internal data structure used as private
data for the PCI device driver and the functions for the PCI device init
/ uninit and command request handling.
Signed-off-by: Alexandru-Catalin Vasile
Signed-off-by: Alexandru Ciobotaru
Signed-off-by: Andra Paraschiv
Reviewed-by: Alexander
running in the VM that launched it. The process interacts with
the NE driver, that exposes an ioctl interface for creating an enclave
and setting up its resources.
Signed-off-by: Alexandru Vasile
Signed-off-by: Andra Paraschiv
Reviewed-by: Alexander Graf
Alex
On 22.06.20 22:03, Andra Paraschiv wrote:
In addition to the replies sent by the Nitro Enclaves PCI device in
response to command requests, out-of-band enclave events can happen e.g.
an enclave crashes. In this case, the Nitro Enclaves driver needs to be
aware of the event and notify the
.
Signed-off-by: Alexandru-Catalin Vasile
Signed-off-by: Andra Paraschiv
Reviewed-by: Alexander Graf
Alex
On 22.06.20 22:03, Andra Paraschiv wrote:
The Nitro Enclaves PCI device exposes an MMIO space that this driver
uses to submit command requests and to receive command replies e.g. for
enclave creation / termination or setting enclave resources.
Add logic for handling PCI device command
On 22.06.20 22:03, Andra Paraschiv wrote:
The Nitro Enclaves PCI device is used by the kernel driver as a means of
communication with the hypervisor on the host where the primary VM and
the enclaves run. It handles requests with regard to enclave lifetime.
Setup the PCI device driver and add
the outer mode to be set to SameAsInner
explicitly, so the easy fix is to default to that instead of nC for
situations when an OS asks for an unfulfillable cacheability request.
This fixes booting Windows in KVM with vgicv3 and ITS enabled for me.
Signed-off-by: Alexander Graf
---
arch/arm64/kvm/vgic
On 01.06.20 05:04, Benjamin Herrenschmidt wrote:
On Thu, 2020-05-28 at 15:12 +0200, Greg KH wrote:
So at runtime, after all is booted and up and going, you just ripped
cores out from under someone's feet? :)
And the code really handles writing to that value while the module is
already
On 27.05.20 00:24, Greg KH wrote:
On Tue, May 26, 2020 at 03:44:30PM +0200, Alexander Graf wrote:
On 26.05.20 15:17, Greg KH wrote:
On Tue, May 26, 2020 at 02:44:18PM +0200, Alexander Graf wrote:
On 26.05.20 14:33, Greg KH wrote:
On Tue, May 26, 2020 at 01:42:41PM +0200, Alexander
On 26.05.20 15:17, Greg KH wrote:
On Tue, May 26, 2020 at 02:44:18PM +0200, Alexander Graf wrote:
On 26.05.20 14:33, Greg KH wrote:
On Tue, May 26, 2020 at 01:42:41PM +0200, Alexander Graf wrote:
On 26.05.20 08:51, Greg KH wrote:
On Tue, May 26, 2020 at 01:13:23AM +0300, Andra
On 26.05.20 14:33, Greg KH wrote:
On Tue, May 26, 2020 at 01:42:41PM +0200, Alexander Graf wrote:
On 26.05.20 08:51, Greg KH wrote:
On Tue, May 26, 2020 at 01:13:23AM +0300, Andra Paraschiv wrote:
+#define NE "nitro_enclaves: "
Again, no need for this.
+#define N
On 26.05.20 08:51, Greg KH wrote:
On Tue, May 26, 2020 at 01:13:23AM +0300, Andra Paraschiv wrote:
+#define NE "nitro_enclaves: "
Again, no need for this.
+#define NE_DEV_NAME "nitro_enclaves"
KBUILD_MODNAME?
+#define NE_IMAGE_LOAD_OFFSET (8 * 1024UL * 1024UL)
+
+static char
Hey Greg,
On 22.05.20 09:04, Greg KH wrote:
On Fri, May 22, 2020 at 09:29:32AM +0300, Andra Paraschiv wrote:
+/**
+ * ne_setup_msix - Setup MSI-X vectors for the PCI device.
+ *
+ * @pdev: PCI device to setup the MSI-X for.
+ *
+ * @returns: 0 on success, negative return value on failure.
+
on current systems.
Reported-by: Alexander Graf
Cc: KarimAllah Raslan
Cc: sta...@vger.kernel.org
Fixes: 15d45071523d ("KVM/x86: Add IBPB support")
Signed-off-by: Sean Christopherson
---
v2: Pass a boolean to indicate a nested VMCS switch and instead WARN if
the buddy VMCS is n
on current systems.
Reported-by: Alexander Graf
Cc: KarimAllah Raslan
Cc: sta...@vger.kernel.org
Fixes: 15d45071523d ("KVM/x86: Add IBPB support")
Signed-off-by: Sean Christopherson
I can confirm that with kvm-unit-test's vmcall benchmark, the patch does
make a big difference:
BEFO
On 30.04.20 13:58, Paolo Bonzini wrote:
On 30/04/20 13:47, Alexander Graf wrote:
So the issue would be that a firmware image provided by the parent could
be tampered with by something malicious running in the parent enclave?
You have to have a root of trust somewhere. That root
On 30.04.20 13:38, Paolo Bonzini wrote:
On 30/04/20 13:21, Alexander Graf wrote:
Also, would you consider a mode where ne_load_image is not invoked and
the enclave starts in real mode at 0xff0?
Consider, sure. But I don't quite see any big benefit just yet. The
current abstraction
On 30.04.20 12:34, Paolo Bonzini wrote:
On 28/04/20 17:07, Alexander Graf wrote:
Why don't we build something like the following instead?
vm = ne_create(vcpus = 4)
ne_set_memory(vm, hva, len)
ne_load_image(vm, addr, len)
ne_start(vm)
That way we would get the EIF loading
On 27.04.20 13:44, Liran Alon wrote:
On 27/04/2020 10:56, Paraschiv, Andra-Irina wrote:
On 25/04/2020 18:25, Liran Alon wrote:
On 23/04/2020 16:19, Paraschiv, Andra-Irina wrote:
The memory and CPUs are carved out of the primary VM, they are
dedicated for the enclave. The Nitro
On 25.04.20 18:05, Paolo Bonzini wrote:
On 24/04/20 21:11, Alexander Graf wrote:
What I was saying above is that maybe
On 04.09.19 18:16, Anup Patel wrote:
From: Atish Patra
The KVM host kernel running in HS-mode needs to handle SBI calls coming
from guest kernel running in VS-mode.
This patch adds SBI v0.1 support in KVM RISC-V. All the SBI calls are
implemented correctly except remote tlb flushes. For
On 04.09.19 18:15, Anup Patel wrote:
We get illegal instruction trap whenever Guest/VM executes WFI
instruction.
This patch handles WFI trap by blocking the trapped VCPU using
kvm_vcpu_block() API. The blocked VCPU will be automatically
resumed whenever a VCPU interrupt is injected from
.
Reviewed-by: Alexander Graf
Alex
On 04.09.19 18:14, Anup Patel wrote:
This patch implements VCPU create, init and destroy functions
required by generic KVM module. We don't have much dynamic
resources in struct kvm_vcpu_arch so thest functions are quite
Since you're respinning for v8 anyway, please s/thest/these/ :)
Alex
On 06.09.19 15:50, Peter Maydell wrote:
On Fri, 6 Sep 2019 at 14:41, Alexander Graf wrote:
On 06.09.19 15:31, Peter Maydell wrote:
(b) we try to reuse the code we already have that does TCG exception
injection, which might or might not be a design mistake, and
That's probably a design
On 06.09.19 15:31, Peter Maydell wrote:
On Fri, 6 Sep 2019 at 14:13, Christoffer Dall wrote:
I'd prefer leaving it to userspace to worry about, but I thought Peter
said that had been problematic historically, which I took at face value,
but I could have misunderstood.
If QEMU, kvmtool, and
This fixes a bug I have with code which configures real hardware to
inject virtual SMIs into my guest.
Signed-off-by: Alexander Graf
---
v1 -> v2:
- Make error message more unique
- Update commit message to point to __apic_accept_irq()
v2 -> v3:
- Use if() rather than switch()
On 04.09.19 17:51, Sean Christopherson wrote:
On Wed, Sep 04, 2019 at 05:36:39PM +0200, Alexander Graf wrote:
On 04.09.19 16:40, Sean Christopherson wrote:
On Wed, Sep 04, 2019 at 03:35:10PM +0200, Alexander Graf wrote:
We can easily route hardware interrupts directly into VM context when
On 04.09.19 16:40, Sean Christopherson wrote:
On Wed, Sep 04, 2019 at 03:35:10PM +0200, Alexander Graf wrote:
We can easily route hardware interrupts directly into VM context when
they target the "Fixed" or "LowPriority" delivery modes.
However, on modes such as
This fixes a bug I have with code which configures real hardware to
inject virtual SMIs into my guest.
Signed-off-by: Alexander Graf
Reviewed-by: Liran Alon
---
v1 -> v2:
- Make error message more unique
- Update commit message to point to __apic_accept_irq()
---
arch/x86/kvm/vmx/vmx.c | 22
to fix
the situation for x86 systems. If anyone has a great idea how to generalize
the filtering though, I'm all ears.
Alex
---
v1 -> v2:
- Make error message more unique
- Update commit message to point to __apic_accept_irq()
Alexander Graf (2):
KVM: VMX: Disable posted interrupts
This fixes a bug I have with code which configures real hardware to
inject virtual SMIs into my guest.
Signed-off-by: Alexander Graf
Reviewed-by: Liran Alon
---
v1 -> v2:
- Make error message more unique
- Update commit message to point to __apic_accept_irq()
---
arch/x86/kvm/svm.c | 16
On 04.09.19 01:20, Liran Alon wrote:
On 3 Sep 2019, at 17:29, Alexander Graf wrote:
We can easily route hardware interrupts directly into VM context when
they target the "Fixed" or "LowPriority" delivery modes.
However, on modes such as "SMI" or
n, so we can
not post the interrupt
Add code in the SVM PI logic to explicitly refuse to establish posted
mappings for advanced IRQ delivery modes.
This fixes a bug I have with code which configures real hardware to
inject virtual SMIs into my guest.
Signed-off-by: Alexander Graf
---
arch/x86
to fix
the situation for x86 systems. If anyone has a great idea how to generalize
the filtering though, I'm all ears.
Alex
Alexander Graf (2):
KVM: VMX: Disable posted interrupts for odd IRQs
KVM: SVM: Disable posted interrupts for odd IRQs
arch/x86/kvm/svm.c | 16
arch/x8
n, so we can
not post the interrupt
Add code in the VMX PI logic to explicitly refuse to establish posted
mappings for advanced IRQ delivery modes.
This fixes a bug I have with code which configures real hardware to
inject virtual SMIs into my guest.
Signed-off-by: Alexander Graf
---
arch/x86/kvm/
On 26.08.19 22:46, Suthikulpanit, Suravee wrote:
Alex,
On 8/19/2019 5:42 AM, Alexander Graf wrote:
On 15.08.19 18:25, Suthikulpanit, Suravee wrote:
ACK notifiers don't work with AMD SVM w/ AVIC when the PIT interrupt
is delivered as edge-triggered fixed interrupt since AMD processors
On 26.08.19 21:41, Suthikulpanit, Suravee wrote:
Alex,
On 8/19/2019 4:57 AM, Alexander Graf wrote:
On 15.08.19 18:25, Suthikulpanit, Suravee wrote:
Currently, there is no way to tell whether APICv is active
on a particular VM. This often causes confusion since APICv
can be deactivated
On 27.08.19 09:29, Vitaly Kuznetsov wrote:
"Suthikulpanit, Suravee" writes:
Certain runtime conditions require APICv to be temporary deactivated.
However, current implementation only support permanently deactivate
APICv at runtime (mainly used when running Hyper-V guest).
In addition, for
On 23.08.19 14:19, Anup Patel wrote:
On Fri, Aug 23, 2019 at 5:40 PM Paolo Bonzini wrote:
On 23/08/19 13:44, Graf (AWS), Alexander wrote:
Overall, I'm quite happy with the code. It's a very clean implementation
of a KVM target.
Yup, I said the same even for v1 (I prefer recursive
On 23.08.19 14:11, Anup Patel wrote:
On Fri, Aug 23, 2019 at 5:19 PM Alexander Graf wrote:
On 23.08.19 13:46, Anup Patel wrote:
On Fri, Aug 23, 2019 at 5:03 PM Graf (AWS), Alexander wrote:
Am 23.08.2019 um 13:05 schrieb Anup Patel :
On Fri, Aug 23, 2019 at 1:23 PM Alexander Graf
On 23.08.19 14:00, Anup Patel wrote:
On Fri, Aug 23, 2019 at 5:09 PM Graf (AWS), Alexander wrote:
Am 23.08.2019 um 13:18 schrieb Anup Patel :
On Fri, Aug 23, 2019 at 1:34 PM Alexander Graf wrote:
On 22.08.19 10:46, Anup Patel wrote:
From: Atish Patra
The KVM host kernel running
On 23.08.19 13:46, Anup Patel wrote:
On Fri, Aug 23, 2019 at 5:03 PM Graf (AWS), Alexander wrote:
Am 23.08.2019 um 13:05 schrieb Anup Patel :
On Fri, Aug 23, 2019 at 1:23 PM Alexander Graf wrote:
On 22.08.19 10:46, Anup Patel wrote:
From: Atish Patra
The RISC-V hypervisor
stage2 page table programming
4. SBI v0.2 emulation in-kernel
5. SBI v0.2 hart hotplug emulation in-kernel
6. In-kernel PLIC emulation
7. ... and more.
Please consider patches I did not comment on as
Reviewed-by: Alexander Graf
Overall, I'm quite happy with the code. It's a very clean
On 22.08.19 10:46, Anup Patel wrote:
From: Atish Patra
The KVM host kernel running in HS-mode needs to handle SBI calls coming
from guest kernel running in VS-mode.
This patch adds SBI v0.1 support in KVM RISC-V. All the SBI calls are
implemented correctly except remote tlb flushes. For
On 22.08.19 10:46, Anup Patel wrote:
From: Atish Patra
The RISC-V hypervisor specification doesn't have any virtual timer
feature.
Due to this, the guest VCPU timer will be programmed via SBI calls.
The host will use a separate hrtimer event for each guest VCPU to
provide timer functionality.
On 22.08.19 16:00, Anup Patel wrote:
On Thu, Aug 22, 2019 at 5:31 PM Alexander Graf wrote:
On 22.08.19 10:44, Anup Patel wrote:
For KVM RISC-V, we use KVM_GET_ONE_REG/KVM_SET_ONE_REG ioctls to access
VCPU config and registers from user-space.
We have three types of VCPU registers:
1
On 22.08.19 15:58, Anup Patel wrote:
On Thu, Aug 22, 2019 at 6:57 PM Alexander Graf wrote:
On 22.08.19 14:38, Anup Patel wrote:
On Thu, Aug 22, 2019 at 5:58 PM Alexander Graf wrote:
On 22.08.19 10:45, Anup Patel wrote:
This patch implements all required functions for programming
On 22.08.19 14:38, Anup Patel wrote:
On Thu, Aug 22, 2019 at 5:58 PM Alexander Graf wrote:
On 22.08.19 10:45, Anup Patel wrote:
This patch implements all required functions for programming
the stage2 page table for each Guest/VM.
At high-level, the flow of stage2 related functions
On 22.08.19 14:33, Anup Patel wrote:
On Thu, Aug 22, 2019 at 5:44 PM Alexander Graf wrote:
On 22.08.19 10:44, Anup Patel wrote:
We will get stage2 page faults whenever Guest/VM access SW emulated
MMIO device or unmapped Guest RAM.
This patch implements MMIO read/write emulation
On 22.08.19 10:45, Anup Patel wrote:
This patch implements all required functions for programming
the stage2 page table for each Guest/VM.
At high-level, the flow of stage2 related functions is similar
to the KVM ARM/ARM64 implementation, but the stage2 page table
format is quite different for KVM
On 22.08.19 10:45, Anup Patel wrote:
We get illegal instruction trap whenever Guest/VM executes WFI
instruction.
This patch handles WFI trap by blocking the trapped VCPU using
kvm_vcpu_block() API. The blocked VCPU will be automatically
resumed whenever a VCPU interrupt is injected from
On 22.08.19 10:44, Anup Patel wrote:
We will get stage2 page faults whenever Guest/VM access SW emulated
MMIO device or unmapped Guest RAM.
This patch implements MMIO read/write emulation by extracting MMIO
details from the trapped load/store instruction and forwarding the
MMIO read/write to
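For illustration, a standalone sketch (not the patch's code) of the width
decoding such MMIO emulation needs, based on the RISC-V base ISA load
encodings:

#include <stdbool.h>
#include <stdint.h>

/* Derive the access width of a trapped load from its funct3 field, as
 * an MMIO emulation path must do before forwarding the access. */
static bool mmio_load_width(uint32_t insn, unsigned int *bytes)
{
        if ((insn & 0x7f) != 0x03)              /* opcode LOAD */
                return false;
        switch ((insn >> 12) & 0x7) {           /* funct3 selects width */
        case 0x0: case 0x4: *bytes = 1; break;  /* LB / LBU */
        case 0x1: case 0x5: *bytes = 2; break;  /* LH / LHU */
        case 0x2: case 0x6: *bytes = 4; break;  /* LW / LWU */
        case 0x3:           *bytes = 8; break;  /* LD (RV64) */
        default: return false;
        }
        return true;
}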
On 22.08.19 10:44, Anup Patel wrote:
For KVM RISC-V, we use KVM_GET_ONE_REG/KVM_SET_ONE_REG ioctls to access
VCPU config and registers from user-space.
We have three types of VCPU registers:
1. CONFIG - these are VCPU config and capabilities
2. CORE - these are VCPU general purpose registers
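The ONE_REG plumbing itself is generic KVM uAPI; a minimal sketch of a
register read follows (the RISC-V register-ID encodings were still under
discussion in this thread, so reg_id is left opaque):

#include <linux/kvm.h>
#include <stdint.h>
#include <sys/ioctl.h>

/* Generic ONE_REG read, usable for all the register types listed
 * above. 'reg_id' encodes arch | size | type | index. */
static int get_one_reg(int vcpu_fd, uint64_t reg_id, uint64_t *val)
{
        struct kvm_one_reg reg = {
                .id   = reg_id,
                .addr = (uint64_t)(uintptr_t)val, /* kernel writes here */
        };

        return ioctl(vcpu_fd, KVM_GET_ONE_REG, &reg);
}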
On 15.08.19 18:25, Suthikulpanit, Suravee wrote:
In-kernel IOAPIC does not update RTC pending EOI info with AMD SVM w/ AVIC
when interrupt is delivered as edge-triggered since AMD processors
cannot exit on EOI for these interrupts.
Add code to also check LAPIC pending EOI before injecting
On 15.08.19 18:25, Suthikulpanit, Suravee wrote:
ACK notifiers don't work with AMD SVM w/ AVIC when the PIT interrupt
is delivered as edge-triggered fixed interrupt since AMD processors
cannot exit on EOI for these interrupts.
Add code to check LAPIC pending EOI before injecting any pending
On 15.08.19 18:25, Suthikulpanit, Suravee wrote:
AMD AVIC does not support ExtINT. Therefore, AVIC must be temporary
deactivated and fall back to using legacy interrupt injection via vINTR
and interrupt window.
Signed-off-by: Suravee Suthikulpanit
---
arch/x86/kvm/svm.c | 49
On 15.08.19 18:25, Suthikulpanit, Suravee wrote:
Since disabling APICv has to be done for all vcpus on AMD-based system,
adopt the newly introduced kvm_make_apicv_deactivate_request() intereface.
typo
Signed-off-by: Suravee Suthikulpanit
---
arch/x86/kvm/hyperv.c | 12 ++--
1
On 15.08.19 18:25, Suthikulpanit, Suravee wrote:
Add the necessary logic to support activating/deactivating AVIC at runtime.
Signed-off-by: Suravee Suthikulpanit
---
arch/x86/kvm/svm.c | 27 +--
1 file changed, 25 insertions(+), 2 deletions(-)
diff --git
On 15.08.19 18:25, Suthikulpanit, Suravee wrote:
Currently, there is no way to tell whether APICv is active
on a particular VM. This often causes confusion since APICv
can be deactivated at runtime.
Introduce a debugfs entry to report APICv state of a VM.
This creates a read-only file:
On 15.08.19 18:25, Suthikulpanit, Suravee wrote:
Currently, after a VM boots with APICv enabled, it could go into
the following states:
* activated = VM is running w/ APICv
* deactivated = VM deactivates APICv (temporary)
* disabled = VM deactivates APICv (permanent)
Introduce
On 12.07.19 16:36, Andy Lutomirski wrote:
On Fri, Jul 12, 2019 at 6:45 AM Alexandre Chartre
wrote:
On 7/12/19 2:50 PM, Peter Zijlstra wrote:
On Fri, Jul 12, 2019 at 01:56:44PM +0200, Alexandre Chartre wrote:
I think that's precisely what makes ASI and PTI different and independent.
PTI