On 29/07/19 13:56, Anup Patel wrote:
> + csr_write(CSR_VSIP, vcpu->arch.irqs_pending);
> + vcpu->arch.guest_csr.vsip = vcpu->arch.irqs_pending;
> + }

First, something that is not clear to me: how do you deal with a guest
writing 1 to VSIP.SSIP? I think that could lead to lost interrupts if
you have the following sequence (sketched in code below):
1) guest writes 1 to VSIP.SSIP
2) guest leaves VS-mode
3) host syncs VSIP
4) user mode triggers interrupt
5) host reenters guest
6) host moves irqs_pending to VSIP and clears VSIP.SSIP in the process
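Spelling that out against the flush quoted at the top (just an
annotated sketch, not meant as real code):

/* step 1: the guest, in VS-mode, writes 1 to VSIP.SSIP (a self-IPI) */

/* steps 2-3: the guest exits, the host syncs VSIP (presumably into
 * vcpu->arch.guest_csr.vsip) */

/* step 4: a producer injects some other interrupt from another thread */
set_bit(irq, &vcpu->arch.irqs_pending);

/* step 6: the vCPU thread flushes before re-entering the guest */
csr_write(CSR_VSIP, vcpu->arch.irqs_pending);
vcpu->arch.guest_csr.vsip = vcpu->arch.irqs_pending;
/*
 * Nothing has propagated the guest's own SSIP write into irqs_pending,
 * so this unconditional write-back clears VSIP.SSIP and the guest's
 * pending software interrupt is lost.
 */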
Perhaps irqs_pending needs to be split into two fields, irqs_pending and
irqs_pending_mask, and then you can do this:
/*
 * irqs_pending and irqs_pending_mask have multiple-producer/single-
 * consumer semantics; therefore bits can be set in the mask without
 * a lock, but clearing the bits requires vcpu_lock. Furthermore,
 * consumers should never write to irqs_pending, and should not
 * use bits of irqs_pending that weren't 1 in the mask.
 */

int kvm_riscv_vcpu_set_interrupt(struct kvm_vcpu *vcpu, unsigned int irq)
{
	...
	set_bit(irq, &vcpu->arch.irqs_pending);
	smp_mb__before_atomic();
	set_bit(irq, &vcpu->arch.irqs_pending_mask);
	kvm_vcpu_kick(vcpu);
}

int kvm_riscv_vcpu_unset_interrupt(struct kvm_vcpu *vcpu, unsigned int irq)
{
	...
	clear_bit(irq, &vcpu->arch.irqs_pending);
	smp_mb__before_atomic();
	set_bit(irq, &vcpu->arch.irqs_pending_mask);
}

static void kvm_riscv_reset_vcpu(struct kvm_vcpu *vcpu)
{
	...
	WRITE_ONCE(vcpu->arch.irqs_pending_mask, 0);
}
and kvm_riscv_vcpu_flush_interrupts can leave untouched the VSIP bits
that aren't set in vcpu->arch.irqs_pending_mask:
	if (READ_ONCE(vcpu->arch.irqs_pending_mask)) {
		unsigned long mask, val;

		mask = xchg_acquire(&vcpu->arch.irqs_pending_mask, 0);
		val = READ_ONCE(vcpu->arch.irqs_pending) & mask;

		vcpu->arch.guest_csr.vsip &= ~mask;
		vcpu->arch.guest_csr.vsip |= val;
		csr_write(CSR_VSIP, vcpu->arch.guest_csr.vsip);
	}
Also, the getter of CSR_VSIP should call
kvm_riscv_vcpu_flush_interrupts, while the setter should clear
irqs_pending_mask.
> + kvm_make_request(KVM_REQ_IRQ_PENDING, vcpu);
> + kvm_vcpu_kick(vcpu);
The request is not needed as long as kvm_riscv_vcpu_flush_interrupts is
called *after* smp_store_mb(vcpu->mode, IN_GUEST_MODE) in
kvm_arch_vcpu_ioctl_run. This is the "request-less vCPU kick" pattern
in Documentation/virtual/kvm/vcpu-requests.rst. The smp_store_mb then
orders the write of IN_GUEST_MODE before the read of irqs_pending (or
irqs_pending_mask in my proposal above); in the producers, there is a
dual memory barrier in kvm_vcpu_exiting_guest_mode(), ordering the write
of irqs_pending(_mask) before the read of vcpu->mode.
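In code, the required ordering is roughly this (a sketch, not the exact
kvm_arch_vcpu_ioctl_run code):

	/* vCPU thread, in kvm_arch_vcpu_ioctl_run(): */
	smp_store_mb(vcpu->mode, IN_GUEST_MODE);   /* write mode, full barrier */
	kvm_riscv_vcpu_flush_interrupts(vcpu);     /* ...then read the mask */
	/* ... enter the guest ... */

	/* producer, e.g. in kvm_riscv_vcpu_set_interrupt(): */
	set_bit(irq, &vcpu->arch.irqs_pending_mask);
	/*
	 * kvm_vcpu_kick() -> kvm_vcpu_exiting_guest_mode() provides the
	 * paired barrier, ordering the set_bit above before the read of
	 * vcpu->mode.
	 */
	kvm_vcpu_kick(vcpu);

Either the flush sees the new bit, or the producer sees IN_GUEST_MODE
and the kick sends the IPI that forces the vCPU out of the guest.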
Similar to other VS* CSRs, I'd rather have a ONE_REG interface for VSIE
and VSIP from the beginning as well. Note that the VSIP setter would
clear irqs_pending_mask, while the getter would call
kvm_riscv_vcpu_flush_interrupts before reading. It's up to userspace to
ensure that no interrupt injections happen between the calls to the
getter and the setter.
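Concretely, something along these lines in the ONE_REG accessors (a
rough sketch; the helper names here are made up, the actual plumbing
would follow the rest of the series):

static int kvm_riscv_vcpu_one_reg_get_vsip(struct kvm_vcpu *vcpu,
					   unsigned long *val)
{
	/* fold irqs_pending/irqs_pending_mask into guest_csr.vsip first */
	kvm_riscv_vcpu_flush_interrupts(vcpu);
	*val = vcpu->arch.guest_csr.vsip;
	return 0;
}

static int kvm_riscv_vcpu_one_reg_set_vsip(struct kvm_vcpu *vcpu,
					   unsigned long val)
{
	vcpu->arch.guest_csr.vsip = val;
	/* userspace's value wins over anything injected in between */
	WRITE_ONCE(vcpu->arch.irqs_pending_mask, 0);
	return 0;
}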
Paolo