Re: [PATCH v12 00/84] KVM: Stop grabbing references to PFNMAP'd pages

2024-08-27 Thread Alex Bennée
Sean Christopherson  writes:

> arm64 folks, the first two patches are bug fixes, but I have very low
> confidence that they are correct and/or desirable.  If they are more or
> less correct, I can post them separately if that'd make life easier.  I
> included them here to avoid conflicts, and because I'm pretty sure how
> KVM deals with MTE tags vs. dirty logging will impact what APIs KVM needs
> to provide to arch code.
>
> On to the series...  The TL;DR is that I would like to get input on two
> things:
>
>  1. Marking folios dirty/accessed only on the initial stage-2 page fault
>  2. The new APIs for faulting, prefetching, and doing "lookups" on
>  pfns

I've finally managed to get virtio-vulkan working on my Arm64 devbox
with an AMD graphics card plugged into the PCI slot. I'm confident that
the graphics path is using the discrete card's memory (as it has been
mapped as device memory, with alignment handlers to deal with the broken
Altra PCI). However, aside from running graphics workloads in KVM guests,
is there anything else I can check to see that things are behaving as
expected?

The predecessor series did break launching some KVM guests on my x86
system, but with this series launching guests works fine and I haven't
noticed any weirdness.

So, with those caveats, you can certainly have a:

Tested-by: Alex Bennée 

However, if there is anything else I can do to further stress test this
code, do let me know.

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro



Re: [PATCH v12 13/84] KVM: Annotate that all paths in hva_to_pfn() might sleep

2024-08-08 Thread Alex Bennée
Sean Christopherson  writes:

> On Thu, Aug 08, 2024, Alex Bennée wrote:
>> Sean Christopherson  writes:
>> 
>> > On Thu, Aug 08, 2024, Alex Bennée wrote:
>> >> Sean Christopherson  writes:
>> >> 
>> >> > Now that hva_to_pfn() no longer supports being called in atomic context,
>> >> > move the might_sleep() annotation from hva_to_pfn_slow() to
>> >> > hva_to_pfn().
>> >> 
>> >> The commentary for hva_to_pfn_fast disagrees.
>> >> 
>> >>   /*
>> >>* The fast path to get the writable pfn which will be stored in @pfn,
>> >>* true indicates success, otherwise false is returned.  It's also the
>> >>* only part that runs if we can in atomic context.
>> >>*/
>> >>   static bool hva_to_pfn_fast(struct kvm_follow_pfn *kfp, kvm_pfn_t *pfn)
>> >> 
>> >> At which point did it lose the ability to run in atomic context? I
>> >> couldn't work it out from the commits.
>> >
>> > It didn't lose the ability per se (calling hva_to_pfn_fast() in atomic 
>> > context
>> > would still be functionally ok), rather the previous patch
>> >
>> >   KVM: Drop @atomic param from gfn=>pfn and hva=>pfn APIs
>> >
>> > removed support for doing so in order to simplify hva_to_pfn() as a whole.
>> 
>> It still sticks out given the only caller no longer enforces this. 
>
> Oh, sorry, I should have been more explicit.  I'll fix the comment, I simply
> missed it.

No worries, with the fixed comment:

Reviewed-by: Alex Bennée 

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro


Re: [PATCH v12 13/84] KVM: Annotate that all paths in hva_to_pfn() might sleep

2024-08-08 Thread Alex Bennée
Sean Christopherson  writes:

> On Thu, Aug 08, 2024, Alex Bennée wrote:
>> Sean Christopherson  writes:
>> 
>> > Now that hva_to_pfn() no longer supports being called in atomic context,
>> > move the might_sleep() annotation from hva_to_pfn_slow() to
>> > hva_to_pfn().
>> 
>> The commentary for hva_to_pfn_fast disagrees.
>> 
>>   /*
>>* The fast path to get the writable pfn which will be stored in @pfn,
>>* true indicates success, otherwise false is returned.  It's also the
>>* only part that runs if we can in atomic context.
>>*/
>>   static bool hva_to_pfn_fast(struct kvm_follow_pfn *kfp, kvm_pfn_t *pfn)
>> 
>> At which point did it lose the ability to run in atomic context? I
>> couldn't work it out from the commits.
>
> It didn't lose the ability per se (calling hva_to_pfn_fast() in atomic context
> would still be functionally ok), rather the previous patch
>
>   KVM: Drop @atomic param from gfn=>pfn and hva=>pfn APIs
>
> removed support for doing so in order to simplify hva_to_pfn() as a whole.

It still sticks out given the only caller no longer enforces this. 

How about:

* true indicates success, otherwise false is returned.  It's also the
* only part that could run in an atomic context if we wanted to
* (although no callers expect it to).

?

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro


Re: [PATCH v12 13/84] KVM: Annotate that all paths in hva_to_pfn() might sleep

2024-08-08 Thread Alex Bennée
Sean Christopherson  writes:

> Now that hva_to_pfn() no longer supports being called in atomic context,
> move the might_sleep() annotation from hva_to_pfn_slow() to
> hva_to_pfn().

The commentary for hva_to_pfn_fast disagrees.

  /*
   * The fast path to get the writable pfn which will be stored in @pfn,
   * true indicates success, otherwise false is returned.  It's also the
   * only part that runs if we can in atomic context.
   */
  static bool hva_to_pfn_fast(struct kvm_follow_pfn *kfp, kvm_pfn_t *pfn)

At which point did it lose the ability to run in atomic context? I
couldn't work it out from the commits.

>
> Signed-off-by: Sean Christopherson 
> ---
>  virt/kvm/kvm_main.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index 84c73b4fc804..03af1a0090b1 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -2807,8 +2807,6 @@ static int hva_to_pfn_slow(unsigned long addr, bool *async, bool write_fault,
>   struct page *page;
>   int npages;
>  
> - might_sleep();
> -
>   if (writable)
>   *writable = write_fault;
>  
> @@ -2947,6 +2945,8 @@ kvm_pfn_t hva_to_pfn(unsigned long addr, bool interruptible, bool *async,
>   kvm_pfn_t pfn;
>   int npages, r;
>  
> + might_sleep();
> +
>   if (hva_to_pfn_fast(addr, write_fault, writable, &pfn))
>   return pfn;
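
If I'm reading the two hunks right, the net result is roughly the
following (a sketch pieced together from the hunks above, not a verbatim
copy of kvm_main.c):

  kvm_pfn_t hva_to_pfn(unsigned long addr, bool interruptible, bool *async,
                       bool write_fault, bool *writable)
  {
          kvm_pfn_t pfn;
          int npages, r;

          /*
           * Purely a debug assertion (it fires under
           * CONFIG_DEBUG_ATOMIC_SLEEP if we are in atomic context), but
           * hoisting it here documents that every path, fast or slow,
           * now expects a sleepable caller.
           */
          might_sleep();

          if (hva_to_pfn_fast(addr, write_fault, writable, &pfn))
                  return pfn;

          /* ... falls through to hva_to_pfn_slow(), which can block ... */
  }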

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro


Re: [PATCH v12 11/84] KVM: Rename gfn_to_page_many_atomic() to kvm_prefetch_pages()

2024-08-02 Thread Alex Bennée
Sean Christopherson  writes:

> Rename gfn_to_page_many_atomic() to kvm_prefetch_pages() to try and
> communicate its true purpose, as the "atomic" aspect is essentially a
> side effect of the fact that x86 uses the API while holding mmu_lock.

It's never too late to start adding some kdoc annotations to a function,
and renaming a kvm_host API call seems like a good time to do it.

> E.g. even if mmu_lock weren't held, KVM wouldn't want to fault-in pages,
> as the goal is to opportunistically grab surrounding pages that have
> already been accessed and/or dirtied by the host, and to do so quickly.
>
> Signed-off-by: Sean Christopherson 
> ---


/**
 * kvm_prefetch_pages() - opportunistically grab previously accessed pages
 * @slot: which @kvm_memory_slot the pages are in
 * @gfn: guest frame
 * @pages: array to receive page pointers
 * @nr_pages: number of pages
 *
 * Returns the number of pages actually mapped.
 */

?

>  
> -int gfn_to_page_many_atomic(struct kvm_memory_slot *slot, gfn_t gfn,
> - struct page **pages, int nr_pages)
> +int kvm_prefetch_pages(struct kvm_memory_slot *slot, gfn_t gfn,
> +struct page **pages, int nr_pages)
>  {
>   unsigned long addr;
>   gfn_t entry = 0;
> @@ -3075,7 +3075,7 @@ int gfn_to_page_many_atomic(struct kvm_memory_slot *slot, gfn_t gfn,
>  
>   return get_user_pages_fast_only(addr, nr_pages, FOLL_WRITE, pages);
>  }
> -EXPORT_SYMBOL_GPL(gfn_to_page_many_atomic);
> +EXPORT_SYMBOL_GPL(kvm_prefetch_pages);
>  
>  /*
>   * Do not use this helper unless you are absolutely certain the gfn _must_ be
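
Also, to check my understanding of the intended use, something like this
hypothetical caller (the function name and batch size are made up; only
kvm_prefetch_pages() and kvm_release_page_clean() are real):

  static void prefetch_around(struct kvm_memory_slot *slot, gfn_t gfn)
  {
          struct page *pages[8];
          int i, nr;

          /*
           * Never faults anything in: it only picks up pages the host
           * already has mapped (get_user_pages_fast_only() under the
           * hood), which is what makes it safe under mmu_lock.
           */
          nr = kvm_prefetch_pages(slot, gfn, pages, ARRAY_SIZE(pages));
          for (i = 0; i < nr; i++) {
                  /* ... install the mapping for pages[i] ... */
                  kvm_release_page_clean(pages[i]);
          }
  }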

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro


Re: [PATCH v12 83/84] KVM: Drop APIs that manipulate "struct page" via pfns

2024-08-02 Thread Alex Bennée
Sean Christopherson  writes:

> Remove all kvm_{release,set}_pfn_*() APIs not that all users are gone.

now?

Otherwise:

Reviewed-by: Alex Bennée 

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro


Re: [PATCH v12 26/84] KVM: Move kvm_{set,release}_page_{clean,dirty}() helpers up in kvm_main.c

2024-08-01 Thread Alex Bennée
Sean Christopherson  writes:

> Hoist the kvm_{set,release}_page_{clean,dirty}() APIs further up in
> kvm_main.c so that they can be used by the kvm_follow_pfn family of APIs.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson 

Reviewed-by: Alex Bennée 

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro


Re: [PATCH v12 24/84] KVM: Use plain "struct page" pointer instead of single-entry array

2024-08-01 Thread Alex Bennée
Sean Christopherson  writes:

> Use a single pointer instead of a single-entry array for the struct page
> pointer in hva_to_pfn_fast().  Using an array makes the code unnecessarily
> annoying to read and update.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson 

Reviewed-by: Alex Bennée 

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro


Re: [PATCH v12 12/84] KVM: Drop @atomic param from gfn=>pfn and hva=>pfn APIs

2024-08-01 Thread Alex Bennée
Sean Christopherson  writes:

> Drop @atomic from the myriad "to_pfn" APIs now that all callers pass
> "false".
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson 

Reviewed-by: Alex Bennée 

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro


Re: [PATCH v12 05/84] KVM: Add kvm_release_page_unused() API to put pages that KVM never consumes

2024-08-01 Thread Alex Bennée
Sean Christopherson  writes:

> Add an API to release an unused page, i.e. to put a page without marking
> it accessed or dirty.  The API will be used when KVM faults-in a page but
> bails before installing the guest mapping (and other similar flows).
>
> Signed-off-by: Sean Christopherson 
> ---
>  include/linux/kvm_host.h | 9 +
>  1 file changed, 9 insertions(+)
>
> diff --git a/include/linux/kvm_host.h b/include/linux/kvm_host.h
> index 3d9617d1de41..c5d39a337aa3 100644
> --- a/include/linux/kvm_host.h
> +++ b/include/linux/kvm_host.h
> @@ -1201,6 +1201,15 @@ unsigned long gfn_to_hva_prot(struct kvm *kvm, gfn_t gfn, bool *writable);
>  unsigned long gfn_to_hva_memslot(struct kvm_memory_slot *slot, gfn_t gfn);
>  unsigned long gfn_to_hva_memslot_prot(struct kvm_memory_slot *slot, gfn_t gfn,
> 				      bool *writable);
> +
> +static inline void kvm_release_page_unused(struct page *page)
> +{
> + if (!page)
> + return;
> +
> + put_page(page);
> +}

I guess it's my unfamiliarity with the mm layout, but I was trying to find
where the get_page()s come from so I could see the full pattern of acquire
and release. Somewhere in the depths of hva_to_pfn(), called from
hva_to_pfn_retry(), I guess? I think the indirection of the page walking
confuses me ;-)
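
For my own notes, the pattern I think this is aimed at, very roughly (the
fault-path helper below is invented; only the kvm_release_page_*() calls
are real):

  struct page *page;

  /* the reference is taken for us somewhere down in hva_to_pfn() */
  page = faultin_page_somehow(vcpu, gfn);          /* hypothetical */
  if (!page)
          return -EFAULT;

  if (cannot_install_mapping) {                    /* e.g. an mmu notifier raced */
          /*
           * The guest never saw the page, so don't touch accessed or
           * dirty state -- just put the reference.
           */
          kvm_release_page_unused(page);
          return -EAGAIN;
  }

  /* ... install the mapping, then release as usual ... */
  kvm_release_page_dirty(page);                    /* or _clean() if read-only */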

Anyway the API seems reasonable enough given the other kvm_release_
functions.

Reviewed-by: Alex Bennée 

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro


Re: [PATCH v12 04/84] KVM: Allow calling kvm_release_page_{clean,dirty}() on a NULL page pointer

2024-08-01 Thread Alex Bennée
Sean Christopherson  writes:

> Allow passing a NULL @page to kvm_release_page_{clean,dirty}(), there's no
> tangible benefit to forcing the callers to pre-check @page, and it ends up
> generating a lot of duplicate boilerplate code.
>
> Signed-off-by: Sean Christopherson 

Reviewed-by: Alex Bennée 

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro


Re: [PATCH v12 03/84] KVM: Drop KVM_ERR_PTR_BAD_PAGE and instead return NULL to indicate an error

2024-08-01 Thread Alex Bennée
Sean Christopherson  writes:

> Remove KVM_ERR_PTR_BAD_PAGE and instead return NULL, as "bad page" is just
> a leftover bit of weirdness from days of old when KVM stuffed a "bad" page
> into the guest instead of actually handling missing pages.  See commit
> cea7bb21280e ("KVM: MMU: Make gfn_to_page() always safe").
>
> Signed-off-by: Sean Christopherson 

Reviewed-by: Alex Bennée 

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro


Re: [PATCH v12 01/84] KVM: arm64: Release pfn, i.e. put page, if copying MTE tags hits ZONE_DEVICE

2024-07-31 Thread Alex Bennée
Sean Christopherson  writes:

> Put the page reference acquired by gfn_to_pfn_prot() if
> kvm_vm_ioctl_mte_copy_tags() runs into ZONE_DEVICE memory.  KVM's less-
> than-stellar heuristics for dealing with pfn-mapped memory means that KVM
> can get a page reference to ZONE_DEVICE memory.
>
> Fixes: f0376edb1ddc ("KVM: arm64: Add ioctl to fetch/store tags in a guest")
> Signed-off-by: Sean Christopherson 
> ---
>  arch/arm64/kvm/guest.c | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/arch/arm64/kvm/guest.c b/arch/arm64/kvm/guest.c
> index 11098eb7eb44..e1f0ff08836a 100644
> --- a/arch/arm64/kvm/guest.c
> +++ b/arch/arm64/kvm/guest.c
> @@ -1059,6 +1059,7 @@ int kvm_vm_ioctl_mte_copy_tags(struct kvm *kvm,
>   page = pfn_to_online_page(pfn);
>   if (!page) {
>   /* Reject ZONE_DEVICE memory */
> + kvm_release_pfn_clean(pfn);

I guess this gets renamed later in the series.

However, my main comment is: does the lack of a page always mean
ZONE_DEVICE? Looking at pfn_to_online_page() I see a bunch of other checks
first. Why isn't it that function's responsibility to clean up after itself
if it's returning NULL?

>   ret = -EFAULT;
>   goto out;
>   }
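
To spell out where I think the reference ownership lies (this just
annotates the surrounding code as I read it, it isn't proposing anything
new):

  pfn = gfn_to_pfn_prot(kvm, gfn, write, NULL);    /* takes the page ref */
  if (is_error_noslot_pfn(pfn)) {
          ret = -EFAULT;
          goto out;
  }

  /*
   * pfn_to_online_page() is only a lookup -- it takes no reference of
   * its own -- so whatever it returns we still own the reference from
   * gfn_to_pfn_prot() and have to drop it on every exit path.
   */
  page = pfn_to_online_page(pfn);
  if (!page) {
          /* Reject ZONE_DEVICE memory */
          kvm_release_pfn_clean(pfn);              /* the fix in this patch */
          ret = -EFAULT;
          goto out;
  }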

-- 
Alex Bennée
Virtualisation Tech Lead @ Linaro


Re: [EXTERNAL] [PATCH] target/ppc: Fix mtmsr(d) L=1 variant that loses interrupts

2020-04-16 Thread Alex Bennée
 {
>> +/* L=1 form only updates EE and RI */
>>  TCGv t0 = tcg_temp_new();
>> +TCGv t1 = tcg_temp_new();
>>  tcg_gen_andi_tl(t0, cpu_gpr[rS(ctx->opcode)],
>>  (1 << MSR_RI) | (1 << MSR_EE));
>> -tcg_gen_andi_tl(cpu_msr, cpu_msr,
>> +tcg_gen_andi_tl(t1, cpu_msr,
>>  ~(target_ulong)((1 << MSR_RI) | (1 << MSR_EE)));
>> -tcg_gen_or_tl(cpu_msr, cpu_msr, t0);
>> +tcg_gen_or_tl(t1, t1, t0);
>> +
>> +gen_helper_store_msr(cpu_env, t1);
>>  tcg_temp_free(t0);
>> +tcg_temp_free(t1);
>> +
>>  } else {
>>  TCGv msr = tcg_temp_new();
>>  
>> @@ -4411,9 +4423,6 @@ static void gen_mtmsr(DisasContext *ctx)
>>   *  power saving mode, we will exit the loop directly from
>>       *  ppc_store_msr
>>   */
>> -if (tb_cflags(ctx->base.tb) & CF_USE_ICOUNT) {
>> -gen_io_start();
>> -}
>>  gen_update_nip(ctx, ctx->base.pc_next);
>>  #if defined(TARGET_PPC64)
>>  tcg_gen_deposit_tl(msr, cpu_msr, cpu_gpr[rS(ctx->opcode)], 0, 32);
>> @@ -4422,10 +4431,9 @@ static void gen_mtmsr(DisasContext *ctx)
>>  #endif
>>  gen_helper_store_msr(cpu_env, msr);
>>  tcg_temp_free(msr);
>> -/* Must stop the translation as machine state (may have) changed */
>> -/* Note that mtmsr is not always defined as context-synchronizing */
>> -gen_stop_exception(ctx);
>>  }
>> +/* Must stop the translation as machine state (may have) changed */
>> +gen_stop_exception(ctx);
>>  #endif
>>  }
>>  
>> 


-- 
Alex Bennée


[PATCH v4 02/12] KVM: define common KVM_GUESTDBG_USE_SW/HW_BP bits

2015-05-15 Thread Alex Bennée
Currently x86, powerpc and soon arm64 use the same two architecture
specific bits for guest debug support for software and hardware
breakpoints. This makes the shared values explicit.

Signed-off-by: Alex Bennée 
Reviewed-by: Andrew Jones 

-
v4
  - claim more bits for the common functionality
v5
  - don't use the __ mechanism to move the values to the common header
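
Not part of the patch, but for reviewers, a sketch of how userspace is
expected to combine the generic and the now-common bits (the wrapper
function is purely illustrative; only the ioctl and the defines are real):

  #include <string.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  /* Enable single-stepping plus software breakpoints on one vcpu. */
  static int enable_guest_debug(int vcpu_fd)
  {
          struct kvm_guest_debug dbg;

          memset(&dbg, 0, sizeof(dbg));
          dbg.control = KVM_GUESTDBG_ENABLE |        /* generic, bit 0 */
                        KVM_GUESTDBG_SINGLESTEP |    /* generic, bit 1 */
                        KVM_GUESTDBG_USE_SW_BP;      /* common, bit 16 */

          return ioctl(vcpu_fd, KVM_SET_GUEST_DEBUG, &dbg);
  }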

diff --git a/arch/powerpc/include/uapi/asm/kvm.h b/arch/powerpc/include/uapi/asm/kvm.h
index ab4d473..6ea24a5 100644
--- a/arch/powerpc/include/uapi/asm/kvm.h
+++ b/arch/powerpc/include/uapi/asm/kvm.h
@@ -307,11 +307,9 @@ struct kvm_guest_debug_arch {
 /* Debug related defines */
 /*
  * kvm_guest_debug->control is a 32 bit field. The lower 16 bits are generic
- * and upper 16 bits are architecture specific. Architecture specific defines
+ * and upper 14 bits are architecture specific. Architecture specific defines
  * that ioctl is for setting hardware breakpoint or software breakpoint.
  */
-#define KVM_GUESTDBG_USE_SW_BP 0x00010000
-#define KVM_GUESTDBG_USE_HW_BP 0x00020000
 
 /* definition of registers in kvm_run */
 struct kvm_sync_regs {
diff --git a/arch/s390/include/uapi/asm/kvm.h b/arch/s390/include/uapi/asm/kvm.h
index ef1a5fc..aca4f86 100644
--- a/arch/s390/include/uapi/asm/kvm.h
+++ b/arch/s390/include/uapi/asm/kvm.h
@@ -114,8 +114,6 @@ struct kvm_fpu {
__u64 fprs[16];
 };
 
-#define KVM_GUESTDBG_USE_HW_BP 0x00010000
-
 #define KVM_HW_BP  1
#define KVM_HW_WP_WRITE  2
 #define KVM_SINGLESTEP 4
diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
index d7dcef5..ca51d4c 100644
--- a/arch/x86/include/uapi/asm/kvm.h
+++ b/arch/x86/include/uapi/asm/kvm.h
@@ -250,8 +250,6 @@ struct kvm_debug_exit_arch {
__u64 dr7;
 };
 
-#define KVM_GUESTDBG_USE_SW_BP 0x00010000
-#define KVM_GUESTDBG_USE_HW_BP 0x00020000
#define KVM_GUESTDBG_INJECT_DB 0x00040000
#define KVM_GUESTDBG_INJECT_BP 0x00080000
 
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 70ac641..7c5dd11 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -570,8 +570,10 @@ struct kvm_s390_irq_state {
 
 /* for KVM_SET_GUEST_DEBUG */
 
-#define KVM_GUESTDBG_ENABLE	0x00000001
-#define KVM_GUESTDBG_SINGLESTEP	0x00000002
+#define KVM_GUESTDBG_ENABLE	(1 << 0)
+#define KVM_GUESTDBG_SINGLESTEP	(1 << 1)
+#define KVM_GUESTDBG_USE_SW_BP	(1 << 16)
+#define KVM_GUESTDBG_USE_HW_BP	(1 << 17)
 
 struct kvm_guest_debug {
__u32 control;
-- 
2.3.5

___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev

[PATCH v3 02/12] KVM: define common __KVM_GUESTDBG_USE_SW/HW_BP values

2015-05-06 Thread Alex Bennée
Currently x86, powerpc and soon arm64 use the same two architecture
specific bits for guest debug support for software and hardware
breakpoints. This makes the shared values explicit while leaving the
gate open for another architecture to use some other value if they
really really want to.

Signed-off-by: Alex Bennée 
Reviewed-by: Andrew Jones 

diff --git a/arch/powerpc/include/uapi/asm/kvm.h b/arch/powerpc/include/uapi/asm/kvm.h
index ab4d473..1731569 100644
--- a/arch/powerpc/include/uapi/asm/kvm.h
+++ b/arch/powerpc/include/uapi/asm/kvm.h
@@ -310,8 +310,8 @@ struct kvm_guest_debug_arch {
  * and upper 16 bits are architecture specific. Architecture specific defines
  * that ioctl is for setting hardware breakpoint or software breakpoint.
  */
-#define KVM_GUESTDBG_USE_SW_BP 0x00010000
-#define KVM_GUESTDBG_USE_HW_BP 0x00020000
+#define KVM_GUESTDBG_USE_SW_BP __KVM_GUESTDBG_USE_SW_BP
+#define KVM_GUESTDBG_USE_HW_BP __KVM_GUESTDBG_USE_HW_BP
 
 /* definition of registers in kvm_run */
 struct kvm_sync_regs {
diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
index d7dcef5..1438202 100644
--- a/arch/x86/include/uapi/asm/kvm.h
+++ b/arch/x86/include/uapi/asm/kvm.h
@@ -250,8 +250,8 @@ struct kvm_debug_exit_arch {
__u64 dr7;
 };
 
-#define KVM_GUESTDBG_USE_SW_BP 0x00010000
-#define KVM_GUESTDBG_USE_HW_BP 0x00020000
+#define KVM_GUESTDBG_USE_SW_BP __KVM_GUESTDBG_USE_SW_BP
+#define KVM_GUESTDBG_USE_HW_BP __KVM_GUESTDBG_USE_HW_BP
#define KVM_GUESTDBG_INJECT_DB 0x00040000
#define KVM_GUESTDBG_INJECT_BP 0x00080000
 
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 70ac641..3b6252e 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -570,8 +570,16 @@ struct kvm_s390_irq_state {
 
 /* for KVM_SET_GUEST_DEBUG */
 
-#define KVM_GUESTDBG_ENABLE	0x00000001
-#define KVM_GUESTDBG_SINGLESTEP	0x00000002
+#define KVM_GUESTDBG_ENABLE	(1 << 0)
+#define KVM_GUESTDBG_SINGLESTEP	(1 << 1)
+
+/*
+ * Architecture specific stuff uses the top 16 bits of the field,
+ * however there is some shared commonality for the common cases
+ */
+#define __KVM_GUESTDBG_USE_SW_BP   (1 << 16)
+#define __KVM_GUESTDBG_USE_HW_BP   (1 << 17)
+
 
 struct kvm_guest_debug {
__u32 control;
-- 
2.3.5

___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev

Re: [PATCH v2 02/10] KVM: define common __KVM_GUESTDBG_USE_SW/HW_BP values

2015-04-13 Thread Alex Bennée

Christoffer Dall  writes:

> On Tue, Mar 31, 2015 at 04:08:00PM +0100, Alex Bennée wrote:
>> Currently x86, powerpc and soon arm64 use the same two architecture
>> specific bits for guest debug support for software and hardware
>> breakpoints. This makes the shared values explicit while leaving the
>> gate open for another architecture to use some other value if they
>> really really want to.
>> 
>> Signed-off-by: Alex Bennée 
>> 
>> diff --git a/arch/powerpc/include/uapi/asm/kvm.h b/arch/powerpc/include/uapi/asm/kvm.h
>> index ab4d473..1731569 100644
>> --- a/arch/powerpc/include/uapi/asm/kvm.h
>> +++ b/arch/powerpc/include/uapi/asm/kvm.h
>> @@ -310,8 +310,8 @@ struct kvm_guest_debug_arch {
>>   * and upper 16 bits are architecture specific. Architecture specific defines
>>   * that ioctl is for setting hardware breakpoint or software breakpoint.
>>   */
>> -#define KVM_GUESTDBG_USE_SW_BP  0x00010000
>> -#define KVM_GUESTDBG_USE_HW_BP  0x00020000
>> +#define KVM_GUESTDBG_USE_SW_BP  __KVM_GUESTDBG_USE_SW_BP
>> +#define KVM_GUESTDBG_USE_HW_BP  __KVM_GUESTDBG_USE_HW_BP
>>  
>>  /* definition of registers in kvm_run */
>>  struct kvm_sync_regs {
>> diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
>> index d7dcef5..1438202 100644
>> --- a/arch/x86/include/uapi/asm/kvm.h
>> +++ b/arch/x86/include/uapi/asm/kvm.h
>> @@ -250,8 +250,8 @@ struct kvm_debug_exit_arch {
>>  __u64 dr7;
>>  };
>>  
>> -#define KVM_GUESTDBG_USE_SW_BP  0x00010000
>> -#define KVM_GUESTDBG_USE_HW_BP  0x00020000
>> +#define KVM_GUESTDBG_USE_SW_BP  __KVM_GUESTDBG_USE_SW_BP
>> +#define KVM_GUESTDBG_USE_HW_BP  __KVM_GUESTDBG_USE_HW_BP
>>  #define KVM_GUESTDBG_INJECT_DB  0x00040000
>>  #define KVM_GUESTDBG_INJECT_BP  0x00080000
>>  
>> diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
>> index 5eedf84..ce2db14 100644
>> --- a/include/uapi/linux/kvm.h
>> +++ b/include/uapi/linux/kvm.h
>> @@ -525,8 +525,16 @@ struct kvm_s390_irq {
>>  
>>  /* for KVM_SET_GUEST_DEBUG */
>>  
>> -#define KVM_GUESTDBG_ENABLE 0x00000001
>> -#define KVM_GUESTDBG_SINGLESTEP 0x00000002
>> +#define KVM_GUESTDBG_ENABLE (1 << 0)
>> +#define KVM_GUESTDBG_SINGLESTEP (1 << 1)
>> +
>> +/*
>> + * Architecture specific stuff uses the top 16 bits of the field,
>
> can you be more specific than 'stuff' here?  features?
>
>> + * however there is some shared commonality for the common cases
>
> I don't like this sentence; shared commonality is a pleonasm and the use
> of however makes it sound like there's some caveat here.

OK I can see that - after I looked it up ;-)

> If the top 16 bits are indeed arhictecture specific, then I think they
> should just be defined in their architecture specific headers.  Unless
> the idea here is that there's a fixed set of of flags that architectures
> can choose to support, in which case it should simply be defined in the
> common header.

Well, an architecture might not support some features and want to use
those bits for something else? I didn't want to force the bottom two
of the architecture-specific bits to be wasted if the features don't exist.

>
>
>> + */
>> +#define __KVM_GUESTDBG_USE_SW_BP	(1 << 16)
>> +#define __KVM_GUESTDBG_USE_HW_BP	(1 << 17)
>> +
>>  
>>  struct kvm_guest_debug {
>>  __u32 control;
>> -- 
>> 2.3.4
>> 

-- 
Alex Bennée
___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev

[PATCH v2 02/10] KVM: define common __KVM_GUESTDBG_USE_SW/HW_BP values

2015-03-31 Thread Alex Bennée
Currently x86, powerpc and soon arm64 use the same two architecture
specific bits for guest debug support for software and hardware
breakpoints. This makes the shared values explicit while leaving the
gate open for another architecture to use some other value if they
really really want to.

Signed-off-by: Alex Bennée 

diff --git a/arch/powerpc/include/uapi/asm/kvm.h b/arch/powerpc/include/uapi/asm/kvm.h
index ab4d473..1731569 100644
--- a/arch/powerpc/include/uapi/asm/kvm.h
+++ b/arch/powerpc/include/uapi/asm/kvm.h
@@ -310,8 +310,8 @@ struct kvm_guest_debug_arch {
  * and upper 16 bits are architecture specific. Architecture specific defines
  * that ioctl is for setting hardware breakpoint or software breakpoint.
  */
-#define KVM_GUESTDBG_USE_SW_BP 0x00010000
-#define KVM_GUESTDBG_USE_HW_BP 0x00020000
+#define KVM_GUESTDBG_USE_SW_BP __KVM_GUESTDBG_USE_SW_BP
+#define KVM_GUESTDBG_USE_HW_BP __KVM_GUESTDBG_USE_HW_BP
 
 /* definition of registers in kvm_run */
 struct kvm_sync_regs {
diff --git a/arch/x86/include/uapi/asm/kvm.h b/arch/x86/include/uapi/asm/kvm.h
index d7dcef5..1438202 100644
--- a/arch/x86/include/uapi/asm/kvm.h
+++ b/arch/x86/include/uapi/asm/kvm.h
@@ -250,8 +250,8 @@ struct kvm_debug_exit_arch {
__u64 dr7;
 };
 
-#define KVM_GUESTDBG_USE_SW_BP 0x00010000
-#define KVM_GUESTDBG_USE_HW_BP 0x00020000
+#define KVM_GUESTDBG_USE_SW_BP __KVM_GUESTDBG_USE_SW_BP
+#define KVM_GUESTDBG_USE_HW_BP __KVM_GUESTDBG_USE_HW_BP
#define KVM_GUESTDBG_INJECT_DB 0x00040000
#define KVM_GUESTDBG_INJECT_BP 0x00080000
 
diff --git a/include/uapi/linux/kvm.h b/include/uapi/linux/kvm.h
index 5eedf84..ce2db14 100644
--- a/include/uapi/linux/kvm.h
+++ b/include/uapi/linux/kvm.h
@@ -525,8 +525,16 @@ struct kvm_s390_irq {
 
 /* for KVM_SET_GUEST_DEBUG */
 
-#define KVM_GUESTDBG_ENABLE	0x00000001
-#define KVM_GUESTDBG_SINGLESTEP	0x00000002
+#define KVM_GUESTDBG_ENABLE	(1 << 0)
+#define KVM_GUESTDBG_SINGLESTEP	(1 << 1)
+
+/*
+ * Architecture specific stuff uses the top 16 bits of the field,
+ * however there is some shared commonality for the common cases
+ */
+#define __KVM_GUESTDBG_USE_SW_BP   (1 << 16)
+#define __KVM_GUESTDBG_USE_HW_BP   (1 << 17)
+
 
 struct kvm_guest_debug {
__u32 control;
-- 
2.3.4

___
Linuxppc-dev mailing list
Linuxppc-dev@lists.ozlabs.org
https://lists.ozlabs.org/listinfo/linuxppc-dev