ching some KVM guests on my x86
system but with this series launching guests works fine and I haven't
noticed any weirdness.
So with those caveats you can certainly have a:
Tested-by: Alex Bennée
However if there is anything else I can do to further stress test this
code do let me know.
--
Alex Bennée
Virtualisation Tech Lead @ Linaro
Sean Christopherson writes:
> On Thu, Aug 08, 2024, Alex Bennée wrote:
>> Sean Christopherson writes:
>>
>> > Now that hva_to_pfn() no longer supports being called in atomic context,
>> > move the might_sleep() annotation from hva_to_pfn_slow() to
>> > hva_to_pfn().
> 		*writable = write_fault;
>
> @@ -2947,6 +2945,8 @@ kvm_pfn_t hva_to_pfn(unsigned long addr, bool interruptible, bool *async,
> 	kvm_pfn_t pfn;
> 	int npages, r;
>
> +	might_sleep();
> +
> 	if (hva_to_pfn_fast(addr, write_fault, writable, &pfn))
> 		return pfn;
--
Alex Bennée
Virtualisation Tech Lead @ Linaro
ly(addr, nr_pages, FOLL_WRITE, pages);
> }
> -EXPORT_SYMBOL_GPL(gfn_to_page_many_atomic);
> +EXPORT_SYMBOL_GPL(kvm_prefetch_pages);
>
> /*
> * Do not use this helper unless you are absolutely certain the gfn _must_ be
--
Alex Bennée
Virtualisation Tech Lead @ Linaro
Sean Christopherson writes:
> Remove all kvm_{release,set}_pfn_*() APIs not that all users are gone.
s/not/now/ ?
Otherwise:
Reviewed-by: Alex Bennée
--
Alex Bennée
Virtualisation Tech Lead @ Linaro
Sean Christopherson writes:
> Hoist the kvm_{set,release}_page_{clean,dirty}() APIs further up in
> kvm_main.c so that they can be used by the kvm_follow_pfn family of APIs.
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson
Reviewed-by: Alex Bennée
> Signed-off-by: Sean Christopherson
Reviewed-by: Alex Bennée
--
Alex Bennée
Virtualisation Tech Lead @ Linaro
Sean Christopherson writes:
> Drop @atomic from the myriad "to_pfn" APIs now that all callers pass
> "false".
>
> No functional change intended.
>
> Signed-off-by: Sean Christopherson
Reviewed-by: Alex Bennée
--
Alex Bennée
Virtualisation Tech Lead @ Linaro
pages come from to see the full pattern of allocate and
return. I guess somewhere in the depths of hva_to_pfn() from
hva_to_pfn_retry()? I think the indirection of the page walking confuses
me ;-)
Anyway the API seems reasonable enough given the other kvm_release_
functions.
Reviewed-by: Alex Bennée
--
Alex Bennée
Virtualisation Tech Lead @ Linaro
Reviewed-by: Alex Bennée
--
Alex Bennée
Virtualisation Tech Lead @ Linaro
> cea7bb21280e ("KVM: MMU: Make gfn_to_page() always safe").
>
> Signed-off-by: Sean Christopherson
Reviewed-by: Alex Bennée
--
Alex Bennée
Virtualisation Tech Lead @ Linaro
it that function's responsibility to clean up after itself if it's
returning NULL?
> ret = -EFAULT;
> goto out;
> }
--
Alex Bennée
Virtualisation Tech Lead @ Linaro
tcg_temp_free(t1);
>> +
>>      } else {
>>          TCGv msr = tcg_temp_new();
>>
>> @@ -4411,9 +4423,6 @@ static void gen_mtmsr(DisasContext *ctx)
>>           * power saving mode, we will exit the loop directly from
>>           * ppc_store_msr
>>           */
>> -        if (tb_cflags(ctx->base.tb) & CF_USE_ICOUNT) {
>> -            gen_io_start();
>> -        }
>>          gen_update_nip(ctx, ctx->base.pc_next);
>>  #if defined(TARGET_PPC64)
>>          tcg_gen_deposit_tl(msr, cpu_msr, cpu_gpr[rS(ctx->opcode)], 0, 32);
>> @@ -4422,10 +4431,9 @@ static void gen_mtmsr(DisasContext *ctx)
>>  #endif
>>          gen_helper_store_msr(cpu_env, msr);
>>          tcg_temp_free(msr);
>> -        /* Must stop the translation as machine state (may have) changed */
>> -        /* Note that mtmsr is not always defined as context-synchronizing */
>> -        gen_stop_exception(ctx);
>>      }
>> +    /* Must stop the translation as machine state (may have) changed */
>> +    gen_stop_exception(ctx);
>>  #endif
>>  }
>>
>>
--
Alex Bennée
Currently x86, powerpc and soon arm64 use the same two architecture-specific
bits for guest debug support for software and hardware
breakpoints. This makes the shared values explicit.
Signed-off-by: Alex Bennée
Reviewed-by: Andrew Jones
-
v4
- claim more bits for the common functionality
v5
Signed-off-by: Alex Bennée
Reviewed-by: Andrew Jones
diff --git a/arch/powerpc/include/uapi/asm/kvm.h
b/arch/powerpc/include/uapi/asm/kvm.h
index ab4d473..1731569 100644
--- a/arch/powerpc/include/uapi/asm/kvm.h
+++ b/arch/powerpc/include/uapi/asm/kvm.h
@@ -310,8 +310,8 @@ struct kvm_guest_debug_arch {
Christoffer Dall writes:
> On Tue, Mar 31, 2015 at 04:08:00PM +0100, Alex Bennée wrote:
>> Currently x86, powerpc and soon arm64 use the same two architecture
>> specific bits for guest debug support for software and hardware
>> breakpoints. This makes the shared values explicit.
Signed-off-by: Alex Bennée
diff --git a/arch/powerpc/include/uapi/asm/kvm.h
b/arch/powerpc/include/uapi/asm/kvm.h
index ab4d473..1731569 100644
--- a/arch/powerpc/include/uapi/asm/kvm.h
+++ b/arch/powerpc/include/uapi/asm/kvm.h
@@ -310,8 +310,8 @@ struct kvm_guest_debug_arch {
* and upper 16 bits are