Re: [PATCH v1] kvm: Make vcpu->requests as 64 bit bitmap

2015-12-24 Thread James Hogan
Hi Andrey,

On Thu, Dec 24, 2015 at 12:30:26PM +0300, Andrey Smetanin wrote:
> Currently the x86 arch already has 32 requests defined, so no new
> request bits fit inside vcpu->requests (an unsigned long) on 32-bit
> x86 systems. But we are going to add a new request on x86 for
> Hyper-V tsc page support.
> 
> To solve the problem, the patch replaces vcpu->requests with a
> 64-bit bitmap and uses the bitmap API.
> 
> The patch consists of:
> * introduce kvm_vcpu_has_requests() to check whether the vcpu has
>   pending requests
> * introduce kvm_vcpu_requests() to get a pointer to the vcpu's requests
> * introduce kvm_clear_request() to clear a particular vcpu request
> * replace if (vcpu->requests) with if (kvm_vcpu_has_requests(vcpu))
> * replace clear_bit(req, vcpu->requests) with
>   kvm_clear_request(req, vcpu)
> 
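For readers skimming the thread, here is a rough sketch of what the new
helpers could look like on top of a DECLARE_BITMAP()-based vcpu->requests.
The KVM_REQ_MAX name/value and the exact definitions are illustrative, not
necessarily the patch's final form:

	#include <linux/bitmap.h>	/* DECLARE_BITMAP(), bitmap_empty() */
	#include <linux/bitops.h>	/* clear_bit() */

	/* In struct kvm_vcpu, vcpu->requests becomes a 64-bit bitmap:
	 *	DECLARE_BITMAP(requests, KVM_REQ_MAX);
	 */
	#define KVM_REQ_MAX	64

	/* Pointer to the request bitmap, for use with the bitmap API. */
	static inline unsigned long *kvm_vcpu_requests(struct kvm_vcpu *vcpu)
	{
		return vcpu->requests;
	}

	/* True if any request bit is set. */
	static inline bool kvm_vcpu_has_requests(struct kvm_vcpu *vcpu)
	{
		return !bitmap_empty(vcpu->requests, KVM_REQ_MAX);
	}

	/* Clear one particular request bit. */
	static inline void kvm_clear_request(int req, struct kvm_vcpu *vcpu)
	{
		clear_bit(req, vcpu->requests);
	}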
> Signed-off-by: Andrey Smetanin <asmeta...@virtuozzo.com>
> CC: Paolo Bonzini <pbonz...@redhat.com>
> CC: Gleb Natapov <g...@kernel.org>
> CC: James Hogan <james.ho...@imgtec.com>
> CC: Paolo Bonzini <pbonz...@redhat.com>
> CC: Paul Burton <paul.bur...@imgtec.com>
> CC: Ralf Baechle <r...@linux-mips.org>
> CC: Alexander Graf <ag...@suse.com>
> CC: Christian Borntraeger <borntrae...@de.ibm.com>
> CC: Cornelia Huck <cornelia.h...@de.ibm.com>
> CC: linux-m...@linux-mips.org
> CC: kvm-ppc@vger.kernel.org
> CC: linux-s...@vger.kernel.org
> CC: Roman Kagan <rka...@virtuozzo.com>
> CC: Denis V. Lunev <d...@openvz.org>
> CC: qemu-de...@nongnu.org

For the MIPS KVM bits:
Acked-by: James Hogan <james.ho...@imgtec.com>

Thanks
James

> 
> ---
>  arch/mips/kvm/emulate.c   |  4 +---
>  arch/powerpc/kvm/book3s_pr.c  |  2 +-
>  arch/powerpc/kvm/book3s_pr_papr.c |  2 +-
>  arch/powerpc/kvm/booke.c  |  6 +++---
>  arch/powerpc/kvm/powerpc.c|  6 +++---
>  arch/powerpc/kvm/trace.h  |  2 +-
>  arch/s390/kvm/kvm-s390.c  |  4 ++--
>  arch/x86/kvm/vmx.c|  2 +-
>  arch/x86/kvm/x86.c| 14 +++---
>  include/linux/kvm_host.h  | 27 ++-
>  10 files changed, 42 insertions(+), 27 deletions(-)
> 
> diff --git a/arch/mips/kvm/emulate.c b/arch/mips/kvm/emulate.c
> index 41b1b09..14aebe8 100644
> --- a/arch/mips/kvm/emulate.c
> +++ b/arch/mips/kvm/emulate.c
> @@ -774,10 +774,8 @@ enum emulation_result kvm_mips_emul_wait(struct kvm_vcpu *vcpu)
>* We we are runnable, then definitely go off to user space to
>* check if any I/O interrupts are pending.
>*/
> - if (kvm_check_request(KVM_REQ_UNHALT, vcpu)) {
> - clear_bit(KVM_REQ_UNHALT, &vcpu->requests);
> + if (kvm_check_request(KVM_REQ_UNHALT, vcpu))
>   vcpu->run->exit_reason = KVM_EXIT_IRQ_WINDOW_OPEN;
> - }
>   }
>  
>   return EMULATE_DONE;
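(The explicit clear_bit() removed above was redundant anyway: the generic
kvm_check_request() already tests and clears the request bit, roughly as in
this sketch of the long-standing helper from include/linux/kvm_host.h;
exact details may differ:

	static inline bool kvm_check_request(int req, struct kvm_vcpu *vcpu)
	{
		if (test_bit(req, &vcpu->requests)) {
			/* test-and-clear: callers need no separate clear_bit() */
			clear_bit(req, &vcpu->requests);
			return true;
		} else {
			return false;
		}
	}

so the hunk above only needs to drop the now-unnecessary braces.)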
> diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
> index 64891b0..e975279 100644
> --- a/arch/powerpc/kvm/book3s_pr.c
> +++ b/arch/powerpc/kvm/book3s_pr.c
> @@ -349,7 +349,7 @@ static void kvmppc_set_msr_pr(struct kvm_vcpu *vcpu, u64 msr)
>   if (msr & MSR_POW) {
>   if (!vcpu->arch.pending_exceptions) {
>   kvm_vcpu_block(vcpu);
> - clear_bit(KVM_REQ_UNHALT, &vcpu->requests);
> + kvm_clear_request(KVM_REQ_UNHALT, vcpu);
>   vcpu->stat.halt_wakeup++;
>  
>   /* Unset POW bit after we woke up */
> diff --git a/arch/powerpc/kvm/book3s_pr_papr.c b/arch/powerpc/kvm/book3s_pr_papr.c
> index f2c75a1..60cf393 100644
> --- a/arch/powerpc/kvm/book3s_pr_papr.c
> +++ b/arch/powerpc/kvm/book3s_pr_papr.c
> @@ -309,7 +309,7 @@ int kvmppc_h_pr(struct kvm_vcpu *vcpu, unsigned long cmd)
>   case H_CEDE:
>   kvmppc_set_msr_fast(vcpu, kvmppc_get_msr(vcpu) | MSR_EE);
>   kvm_vcpu_block(vcpu);
> - clear_bit(KVM_REQ_UNHALT, &vcpu->requests);
> + kvm_clear_request(KVM_REQ_UNHALT, vcpu);
>   vcpu->stat.halt_wakeup++;
>   return EMULATE_DONE;
>   case H_LOGICAL_CI_LOAD:
> diff --git a/arch/powerpc/kvm/booke.c b/arch/powerpc/kvm/booke.c
> index fd58751..6bed382 100644
> --- a/arch/powerpc/kvm/booke.c
> +++ b/arch/powerpc/kvm/booke.c
> @@ -574,7 +574,7 @@ static void arm_next_watchdog(struct kvm_vcpu *vcpu)
>* userspace, so clear the KVM_REQ_WATCHDOG request.
>*/
>   if ((vcpu->arch.tsr & (TSR_ENW | TSR_WIS)) != (TSR_ENW | TSR_WIS))
> - clear_bit(KVM_REQ_WATCHDOG, &vcpu->requests);

Re: [PATCH v12 2/6] KVM: Add generic support for dirty page logging

2014-11-01 Thread James Hogan
Hi Mario,

On Wed, Oct 22, 2014 at 03:34:07PM -0700, Mario Smarduch wrote:
 +/**
 + * kvm_vm_ioctl_get_dirty_log - get and clear the log of dirty pages in a slot
 + * @kvm: kvm instance
 + * @log: slot id and address to which we copy the log
 + *
 + * We need to keep it in mind that VCPU threads can write to the bitmap
 + * concurrently.  So, to avoid losing data, we keep the following order for
 + * each bit:
 + *
 + *   1. Take a snapshot of the bit and clear it if needed.
 + *   2. Write protect the corresponding page.
 + *   3. Flush TLB's if needed.
 + *   4. Copy the snapshot to the userspace.
 + *
 + * Between 2 and 3, the guest may write to the page using the remaining TLB
 + * entry.  This is not a problem because the page will be reported dirty at
 + * step 4 using the snapshot taken before and step 3 ensures that successive
 + * writes will be logged for the next call.
 + */
 +int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
 +{
 + int r;
 + struct kvm_memory_slot *memslot;
 + unsigned long n, i;
 + unsigned long *dirty_bitmap;
 + unsigned long *dirty_bitmap_buffer;
 + bool is_dirty = false;
 +
 + mutex_lock(&kvm->slots_lock);
 +
 + r = -EINVAL;
 + if (log->slot >= KVM_USER_MEM_SLOTS)
 + goto out;
 +
 + memslot = id_to_memslot(kvm->memslots, log->slot);
 +
 + dirty_bitmap = memslot->dirty_bitmap;
 + r = -ENOENT;
 + if (!dirty_bitmap)
 + goto out;
 +
 + n = kvm_dirty_bitmap_bytes(memslot);
 +
 + dirty_bitmap_buffer = dirty_bitmap + n / sizeof(long);
 + memset(dirty_bitmap_buffer, 0, n);
 +
 + spin_lock(&kvm->mmu_lock);
 +
 + for (i = 0; i < n / sizeof(long); i++) {
 + unsigned long mask;
 + gfn_t offset;
 +
 + if (!dirty_bitmap[i])
 + continue;
 +
 + is_dirty = true;
 +
 + mask = xchg(&dirty_bitmap[i], 0);
 + dirty_bitmap_buffer[i] = mask;
 +
 + offset = i * BITS_PER_LONG;
 + kvm_mmu_write_protect_pt_masked(kvm, memslot, offset, mask);
 + }
 +
 + spin_unlock(&kvm->mmu_lock);
 +
 + /* See the comments in kvm_mmu_slot_remove_write_access(). */
 + lockdep_assert_held(&kvm->slots_lock);
 +
 + /*
 +  * All the TLBs can be flushed out of mmu lock, see the comments in
 +  * kvm_mmu_slot_remove_write_access().
 +  */
 + if (is_dirty)
 + kvm_flush_remote_tlbs(kvm);
 +
 + r = -EFAULT;
 + if (copy_to_user(log->dirty_bitmap, dirty_bitmap_buffer, n))
 + goto out;

AFAICT all of the arch implementations of kvm_vm_ioctl_get_dirty_log()
except x86 and ppc hv (i.e. ia64, mips, ppc pv, s390) already make use
of the existing generic function kvm_get_dirty_log() to help implement
their kvm_vm_ioctl_get_dirty_log() functions, which all look pretty
similar now except for the TLB flushing.

Would they not be a better base for a generic
kvm_vm_ioctl_get_dirty_log()?

It feels a bit wrong to add a generic higher level function which
doesn't make use of the existing generic lower level abstraction.
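
For illustration, the pattern those implementations follow is roughly the
sketch below (simplified, not any particular arch's actual code):
kvm_get_dirty_log() validates the slot, copies the dirty bitmap out to
userspace and reports via is_dirty whether anything was set, leaving the
arch wrapper to do the write protection / TLB flushing and to clear the
bitmap.

	int kvm_vm_ioctl_get_dirty_log(struct kvm *kvm, struct kvm_dirty_log *log)
	{
		struct kvm_memory_slot *memslot;
		int is_dirty = 0;
		int r;

		mutex_lock(&kvm->slots_lock);

		/* Generic helper: validate the slot, copy the bitmap to userspace. */
		r = kvm_get_dirty_log(kvm, log, &is_dirty);
		if (r)
			goto out;

		if (is_dirty) {
			memslot = id_to_memslot(kvm->memslots, log->slot);

			/* Arch-specific: write-protect pages / flush TLBs here. */

			/* Start the next round with a clean bitmap. */
			memset(memslot->dirty_bitmap, 0,
			       kvm_dirty_bitmap_bytes(memslot));
		}

		r = 0;
	out:
		mutex_unlock(&kvm->slots_lock);
		return r;
	}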

(Apologies if this has already been brought up in previous versions of
the patchset; I haven't been tracking them.)

Cheers
James

 +
 + r = 0;
 +out:
 + mutex_unlock(&kvm->slots_lock);
 + return r;
 +}
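
For completeness, userspace consumes this via the KVM_GET_DIRTY_LOG ioctl
roughly as in the minimal sketch below (error handling and the mapping from
bits back to guest pages are elided; vm_fd, slot and memory_size are assumed
to come from earlier KVM_SET_USER_MEMORY_REGION setup, and a 4 KiB page size
is assumed):

	#include <linux/kvm.h>
	#include <stdint.h>
	#include <stdlib.h>
	#include <string.h>
	#include <sys/ioctl.h>

	/* Fetch the dirty-page bitmap for one memslot: one bit per page,
	 * rounded up to a multiple of 64 bits; the caller frees the buffer. */
	int get_dirty_bitmap(int vm_fd, uint32_t slot, uint64_t memory_size,
			     unsigned long **bitmap_out)
	{
		uint64_t npages = memory_size / 4096;
		size_t bytes = ((npages + 63) / 64) * 8;
		unsigned long *bitmap = calloc(1, bytes);
		struct kvm_dirty_log log;

		if (!bitmap)
			return -1;

		memset(&log, 0, sizeof(log));
		log.slot = slot;
		log.dirty_bitmap = bitmap;

		/* Kernel side runs kvm_vm_ioctl_get_dirty_log() as quoted above. */
		if (ioctl(vm_fd, KVM_GET_DIRTY_LOG, &log) < 0) {
			free(bitmap);
			return -1;
		}

		*bitmap_out = bitmap;	/* set bits correspond to dirty guest pages */
		return 0;
	}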