Re: [RFC PATCH 16/27] KVM: arm64: Prepare Hyp memory protection

2020-12-08 Thread Will Deacon
On Mon, Dec 07, 2020 at 02:11:20PM +, Quentin Perret wrote:
> On Monday 07 Dec 2020 at 13:40:52 (+), Will Deacon wrote:
> > Why not use the RESERVEDMEM_OF_DECLARE() interface for the hypervisor
> > memory? That way, the hypervisor memory can either be statically partitioned
> > as a carveout or allocated dynamically for us -- we wouldn't need to care.
> 
> Yup, I did consider that, but the actual amount of memory we need to
> reserve for the hypervisor depends on things such as the size of struct
> hyp_page, which depends on the kernel you're running (that is, it might
> change over time). So, that really felt like something the kernel should
> be doing, to keep the DT backward compatible, ... Or did you have
> something more elaborate in mind?

No, that's fair. Just wanted to make sure we had a good reason not to use
the existing memory reservation code.

Will


Re: [RFC PATCH 16/27] KVM: arm64: Prepare Hyp memory protection

2020-12-07 Thread Quentin Perret
On Monday 07 Dec 2020 at 13:54:48 (+), Marc Zyngier wrote:
> None. Whichever name you pick, someone will ask you to change it.
> Just call it Bob.

:-)

> What I really *don't* want is to see a blanket rename of existing symbols
> or concepts.

Understood. I'll go with pkvm_create_mappings() and friends for all the
new functions unless someone comes up with a better name in the meantime.

Thanks,
Quentin


Re: [RFC PATCH 16/27] KVM: arm64: Prepare Hyp memory protection

2020-12-07 Thread Quentin Perret
On Monday 07 Dec 2020 at 13:40:52 (+), Will Deacon wrote:
> Why not use the RESERVEDMEM_OF_DECLARE() interface for the hypervisor
> memory? That way, the hypervisor memory can either be statically partitioned
> as a carveout or allocated dynamically for us -- we wouldn't need to care.

Yup, I did consider that, but the actual amount of memory we need to
reserve for the hypervisor depends on things such as the size of struct
hyp_page, which depends on the kernel you're running (that is, it might
change over time). So, that really felt like something the kernel should
be doing, to keep the DT backward compatible, ... Or did you have
something more elaborate in mind?
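
To make the dependency concrete, here is a minimal, standalone sketch of the
kind of fixed-point computation involved (illustrative only: it assumes 4 KiB
pages and a 16-byte struct hyp_page, and omits the page-table term that the
real reserve_kvm_hyp() loop, quoted later in the thread, also adds):

  #include <stdio.h>

  #define PAGE_SHIFT          12
  #define PAGE_SIZE           (1UL << PAGE_SHIFT)
  #define DIV_ROUND_UP(n, d)  (((n) + (d) - 1) / (d))

  int main(void)
  {
          unsigned long hyp_mem_size = 64UL << 20; /* 64 MiB donated pool */
          unsigned long sz_hyp_page = 16;          /* sizeof(struct hyp_page) */
          unsigned long nr_pages = 0, prev;

          /*
           * The vmemmap pages backing the pool must themselves be covered
           * by the vmemmap, so iterate until the count stops growing.
           */
          do {
                  prev = nr_pages;
                  nr_pages = (hyp_mem_size >> PAGE_SHIFT) + prev;
                  nr_pages = DIV_ROUND_UP(nr_pages * sz_hyp_page, PAGE_SIZE);
          } while (nr_pages != prev);

          printf("%lu vmemmap pages\n", nr_pages); /* prints 65 here */
          return 0;
  }

If sizeof(struct hyp_page) grows in a later kernel, the converged count grows
with it, which is why a carveout size baked into the DT can't be computed up
front.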

Thanks,
Quentin


Re: [RFC PATCH 16/27] KVM: arm64: Prepare Hyp memory protection

2020-12-07 Thread Quentin Perret
On Monday 07 Dec 2020 at 11:16:05 (+), Fuad Tabba wrote:
> On Fri, Dec 4, 2020 at 6:01 PM Quentin Perret  wrote:
> >
> > On Thursday 03 Dec 2020 at 12:57:33 (+), Fuad Tabba wrote:
> > 
> > > > +int hyp_create_idmap(void);
> > > > +int hyp_map_vectors(void);
> > > > +int hyp_back_vmemmap(phys_addr_t phys, unsigned long size, phys_addr_t back);
> > > > +int hyp_cpu_set_vector(enum arm64_hyp_spectre_vector slot);
> > > > +int hyp_create_mappings(void *from, void *to, enum kvm_pgtable_prot prot);
> > > > +int __hyp_create_mappings(unsigned long start, unsigned long size,
> > > > + unsigned long phys, unsigned long prot);
> > > > +unsigned long __hyp_create_private_mapping(phys_addr_t phys, size_t size,
> > > > +  unsigned long prot);
> > > > +
> > >
> > > nit: I also thought that the hyp_create_mappings function names are a
> > > bit confusing, since there's the create_hyp_mappings functions which
> > > use the aforementioned *hyp_pgtable.
> >
> > Sure, happy to re-name those (and hyp_pgtable above). Any suggestions?
> 
> Perhaps something to indicate that these are temporary, tmp_ or
> bootstrap_ maybe?

Hmm, the thing is these are temporary only in protected mode, they're
permanent otherwise :/

Perhaps I could prefix the protected pgtable (and associated functions)
with 'pkvm_' or so? Marc, any preferences?

Thanks,
Quentin


Re: [RFC PATCH 16/27] KVM: arm64: Prepare Hyp memory protection

2020-12-07 Thread Fuad Tabba
On Fri, Dec 4, 2020 at 6:01 PM Quentin Perret  wrote:
>
> On Thursday 03 Dec 2020 at 12:57:33 (+), Fuad Tabba wrote:
> 
> > > +int hyp_create_idmap(void);
> > > +int hyp_map_vectors(void);
> > > +int hyp_back_vmemmap(phys_addr_t phys, unsigned long size, phys_addr_t back);
> > > +int hyp_cpu_set_vector(enum arm64_hyp_spectre_vector slot);
> > > +int hyp_create_mappings(void *from, void *to, enum kvm_pgtable_prot prot);
> > > +int __hyp_create_mappings(unsigned long start, unsigned long size,
> > > + unsigned long phys, unsigned long prot);
> > > +unsigned long __hyp_create_private_mapping(phys_addr_t phys, size_t size,
> > > +  unsigned long prot);
> > > +
> >
> > nit: I also thought that the hyp_create_mappings function names are a
> > bit confusing, since there's the create_hyp_mappings functions which
> > use the aforementioned *hyp_pgtable.
>
> Sure, happy to re-name those (and hyp_pgtable above). Any suggestions?

Perhaps something to indicate that these are temporary, tmp_ or
bootstrap_ maybe?

Cheers,
/fuad


Re: [RFC PATCH 16/27] KVM: arm64: Prepare Hyp memory protection

2020-12-07 Thread Fuad Tabba
On Mon, Dec 7, 2020 at 11:05 AM Mark Rutland  wrote:
>
> On Mon, Dec 07, 2020 at 10:20:03AM +, Will Deacon wrote:
> > On Fri, Dec 04, 2020 at 06:01:52PM +, Quentin Perret wrote:
> > > On Thursday 03 Dec 2020 at 12:57:33 (+), Fuad Tabba wrote:
> > > 
> > > > > +SYM_FUNC_START(__kvm_init_switch_pgd)
> > > > > +   /* Turn the MMU off */
> > > > > +   pre_disable_mmu_workaround
> > > > > +   mrs x2, sctlr_el2
> > > > > +   bic x3, x2, #SCTLR_ELx_M
> > > > > +   msr sctlr_el2, x3
> > > > > +   isb
> > > > > +
> > > > > +   tlbi alle2
> > > > > +
> > > > > +   /* Install the new pgtables */
> > > > > +   ldr x3, [x0, #NVHE_INIT_PGD_PA]
> > > > > +   phys_to_ttbr x4, x3
> > > > > +alternative_if ARM64_HAS_CNP
> > > > > +   orr x4, x4, #TTBR_CNP_BIT
> > > > > +alternative_else_nop_endif
> > > > > +   msr ttbr0_el2, x4
> > > > > +
> > > > > +   /* Set the new stack pointer */
> > > > > +   ldr x0, [x0, #NVHE_INIT_STACK_HYP_VA]
> > > > > +   mov sp, x0
> > > > > +
> > > > > +   /* And turn the MMU back on! */
> > > > > +   dsb nsh
> > > > > +   isb
> > > > > +   msr sctlr_el2, x2
> > > > > +   isb
> > > > > +   ret x1
> > > > > +SYM_FUNC_END(__kvm_init_switch_pgd)
> > > > > +
> > > >
> > > > Should the instruction cache be flushed here (ic iallu), to discard
> > > > speculatively fetched instructions?
> > >
> > > Hmm, Will? Thoughts?
> >
> > The I-cache is physically tagged, so not sure what invalidation would
> > achieve here. Fuad -- what do you think could go wrong specifically?
>
> While the MMU is off, instruction fetches can be made from the PoC
> rather than the PoU, so where instructions have been modified/copied and
> not cleaned to the PoC, it's possible to fetch stale copies into the
> I-caches. The physical tag doesn't prevent that.
>
> In the regular CPU boot paths, __enable_mmu() has an IC IALLU after
> enabling the MMU to ensure that we get rid of anything stale (e.g. so
> secondaries don't miss ftrace patching, which is only cleaned to the
> PoU).
>
> That might not be a problem here, if things are suitably padded and
> never dynamically patched, but if so it's probably worth a comment.
>
> Fuad, is that the sort of thing you were considering, or did you have
> additional concerns?

No other concerns. Thanks Mark.
/fuad


Re: [RFC PATCH 16/27] KVM: arm64: Prepare Hyp memory protection

2020-12-07 Thread Marc Zyngier

On 2020-12-07 11:58, Quentin Perret wrote:

> On Monday 07 Dec 2020 at 11:16:05 (+), Fuad Tabba wrote:
> > On Fri, Dec 4, 2020 at 6:01 PM Quentin Perret  wrote:
> > >
> > > On Thursday 03 Dec 2020 at 12:57:33 (+), Fuad Tabba wrote:
> > > 
> > > > > +int hyp_create_idmap(void);
> > > > > +int hyp_map_vectors(void);
> > > > > +int hyp_back_vmemmap(phys_addr_t phys, unsigned long size, phys_addr_t back);
> > > > > +int hyp_cpu_set_vector(enum arm64_hyp_spectre_vector slot);
> > > > > +int hyp_create_mappings(void *from, void *to, enum kvm_pgtable_prot prot);
> > > > > +int __hyp_create_mappings(unsigned long start, unsigned long size,
> > > > > + unsigned long phys, unsigned long prot);
> > > > > +unsigned long __hyp_create_private_mapping(phys_addr_t phys, size_t size,
> > > > > +  unsigned long prot);
> > > > > +
> > > >
> > > > nit: I also thought that the hyp_create_mappings function names are a
> > > > bit confusing, since there's the create_hyp_mappings functions which
> > > > use the aforementioned *hyp_pgtable.
> > >
> > > Sure, happy to re-name those (and hyp_pgtable above). Any suggestions?
> > 
> > Perhaps something to indicate that these are temporary, tmp_ or
> > bootstrap_ maybe?
> 
> Hmm, the thing is these are temporary only in protected mode, they're
> permanent otherwise :/
> 
> Perhaps I could prefix the protected pgtable (and associated functions)
> with 'pkvm_' or so? Marc, any preferences?


None. Whichever name you pick, someone will ask you to change it.
Just call it Bob.

What I really *don't* want is to see a blanket rename of existing symbols
or concepts.

Thanks,

M.
--
Jazz is not dead. It just smells funny...


Re: [RFC PATCH 16/27] KVM: arm64: Prepare Hyp memory protection

2020-12-07 Thread Will Deacon
Hi Quentin,

On Tue, Nov 17, 2020 at 06:15:56PM +, Quentin Perret wrote:
> When memory protection is enabled, the Hyp code needs the ability to
> create and manage its own page-table. To do so, introduce a new set of
> hypercalls to initialize Hyp memory protection.
> 
> During the init hcall, the hypervisor runs with the host-provided
> page-table and uses the trivial early page allocator to create its own
> set of page-tables, using a memory pool that was donated by the host.
> Specifically, the hypervisor creates its own mappings for __hyp_text,
> the Hyp memory pool, the __hyp_bss, the portion of hyp_vmemmap
> corresponding to the Hyp pool, among other things. It then jumps back into
> the idmap page, switches to use the newly-created pgd (instead of the
> temporary one provided by the host) and then installs the full-fledged
> buddy allocator which will then be the only one in use from then on.
> 
> Note that for the sake of simplifying the review, this only introduces
> the code doing this operation, without actually being called by anything
> yet. This will be done in a subsequent patch, which will introduce the
> necessary host kernel changes.

[...]

> diff --git a/arch/arm64/kvm/hyp/reserved_mem.c b/arch/arm64/kvm/hyp/reserved_mem.c
> new file mode 100644
> index ..02b0b18006f5
> --- /dev/null
> +++ b/arch/arm64/kvm/hyp/reserved_mem.c

[...]

> +extern bool enable_protected_kvm;
> +void __init reserve_kvm_hyp(void)
> +{
> + u64 nr_pages, prev;
> +
> + if (!enable_protected_kvm)
> + return;
> +
> + if (!is_hyp_mode_available() || is_kernel_in_hyp_mode())
> + return;
> +
> + if (kvm_nvhe_sym(hyp_memblock_nr) <= 0)
> + return;
> +
> + hyp_mem_size += num_possible_cpus() << PAGE_SHIFT;
> + hyp_mem_size += hyp_s1_pgtable_size();
> +
> + /*
> +  * The hyp_vmemmap needs to be backed by pages, but these pages
> +  * themselves need to be present in the vmemmap, so compute the number
> +  * of pages needed by looking for a fixed point.
> +  */
> + nr_pages = 0;
> + do {
> + prev = nr_pages;
> + nr_pages = (hyp_mem_size >> PAGE_SHIFT) + prev;
> + nr_pages = DIV_ROUND_UP(nr_pages * sizeof(struct hyp_page), PAGE_SIZE);
> + nr_pages += __hyp_pgtable_max_pages(nr_pages);
> + } while (nr_pages != prev);
> + hyp_mem_size += nr_pages << PAGE_SHIFT;
> +
> + hyp_mem_base = memblock_find_in_range(0, memblock_end_of_DRAM(),
> +   hyp_mem_size, SZ_2M);
> + if (!hyp_mem_base) {
> + kvm_err("Failed to reserve hyp memory\n");
> + return;
> + }
> + memblock_reserve(hyp_mem_base, hyp_mem_size);

Why not use the RESERVEDMEM_OF_DECLARE() interface for the hypervisor
memory? That way, the hypervisor memory can either be statically partitioned
as a carveout or allocated dynamically for us -- we wouldn't need to care.
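
For illustration, a hook along those lines might look roughly like the sketch
below; the compatible string and callback name are made up, and hyp_mem_base /
hyp_mem_size are the variables introduced by this patch:

  #include <linux/of_reserved_mem.h>

  static int __init kvm_hyp_rmem_init(struct reserved_mem *rmem)
  {
          /* Adopt whatever the DT reserved, whether static or dynamic. */
          hyp_mem_base = rmem->base;
          hyp_mem_size = rmem->size;
          return 0;
  }
  RESERVEDMEM_OF_DECLARE(kvm_hyp, "kvm,hyp-memory", kvm_hyp_rmem_init);

The corresponding /reserved-memory node would carry either a fixed "reg"
(carveout) or just "size"/"alignment" properties for dynamic placement.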

Will


Re: [RFC PATCH 16/27] KVM: arm64: Prepare Hyp memory protection

2020-12-07 Thread Will Deacon
On Mon, Dec 07, 2020 at 11:05:45AM +, Mark Rutland wrote:
> On Mon, Dec 07, 2020 at 10:20:03AM +, Will Deacon wrote:
> > On Fri, Dec 04, 2020 at 06:01:52PM +, Quentin Perret wrote:
> > > On Thursday 03 Dec 2020 at 12:57:33 (+), Fuad Tabba wrote:
> > > 
> > > > > +SYM_FUNC_START(__kvm_init_switch_pgd)
> > > > > +   /* Turn the MMU off */
> > > > > +   pre_disable_mmu_workaround
> > > > > +   mrs x2, sctlr_el2
> > > > > +   bic x3, x2, #SCTLR_ELx_M
> > > > > +   msr sctlr_el2, x3
> > > > > +   isb
> > > > > +
> > > > > +   tlbi alle2
> > > > > +
> > > > > +   /* Install the new pgtables */
> > > > > +   ldr x3, [x0, #NVHE_INIT_PGD_PA]
> > > > > +   phys_to_ttbr x4, x3
> > > > > +alternative_if ARM64_HAS_CNP
> > > > > +   orr x4, x4, #TTBR_CNP_BIT
> > > > > +alternative_else_nop_endif
> > > > > +   msr ttbr0_el2, x4
> > > > > +
> > > > > +   /* Set the new stack pointer */
> > > > > +   ldr x0, [x0, #NVHE_INIT_STACK_HYP_VA]
> > > > > +   mov sp, x0
> > > > > +
> > > > > +   /* And turn the MMU back on! */
> > > > > +   dsb nsh
> > > > > +   isb
> > > > > +   msr sctlr_el2, x2
> > > > > +   isb
> > > > > +   ret x1
> > > > > +SYM_FUNC_END(__kvm_init_switch_pgd)
> > > > > +
> > > > 
> > > > Should the instruction cache be flushed here (ic iallu), to discard
> > > > speculatively fetched instructions?
> > > 
> > > Hmm, Will? Thoughts?
> > 
> > The I-cache is physically tagged, so not sure what invalidation would
> > achieve here. Fuad -- what do you think could go wrong specifically?
> 
> While the MMU is off, instruction fetches can be made from the PoC
> rather than the PoU, so where instructions have been modified/copied and
> not cleaned to the PoC, it's possible to fetch stale copies into the
> I-caches. The physical tag doesn't prevent that.

Oh yeah, we even have a comment about that in
idmap_kpti_install_ng_mappings(). Maybe we should wrap disable_mmu and
enable_mmu in some macros so we don't have to trip over this every time (and
this would mean we could get rid of pre_disable_mmu_workaround too).

> In the regular CPU boot paths, __enable_mmu() has an IC IALLU after
> enabling the MMU to ensure that we get rid of anything stale (e.g. so
> secondaries don't miss ftrace patching, which is only cleaned to the
> PoU).
> 
> That might not be a problem here, if things are suitably padded and
> never dynamically patched, but if so it's probably worth a comment.

It's fragile enough that we should just do the invalidation.
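
Something like the following, perhaps (a completely untested sketch; the macro
names are invented, and sctlr_el2 is hardcoded to match the EL2 code above):

	.macro	mmu_off_el2, sctlr_old, tmp
	pre_disable_mmu_workaround
	mrs	\sctlr_old, sctlr_el2
	bic	\tmp, \sctlr_old, #SCTLR_ELx_M
	msr	sctlr_el2, \tmp
	isb
	.endm

	.macro	mmu_on_el2, sctlr_old
	dsb	nsh
	isb
	msr	sctlr_el2, \sctlr_old
	isb
	ic	iallu			// discard stale PoC instruction fetches
	dsb	nsh
	isb
	.endm

That would keep the workaround and the invalidation in one place instead of
relying on each caller to remember them.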

Will


Re: [RFC PATCH 16/27] KVM: arm64: Prepare Hyp memory protection

2020-12-07 Thread Mark Rutland
On Mon, Dec 07, 2020 at 10:20:03AM +, Will Deacon wrote:
> On Fri, Dec 04, 2020 at 06:01:52PM +, Quentin Perret wrote:
> > On Thursday 03 Dec 2020 at 12:57:33 (+), Fuad Tabba wrote:
> > 
> > > > +SYM_FUNC_START(__kvm_init_switch_pgd)
> > > > +   /* Turn the MMU off */
> > > > +   pre_disable_mmu_workaround
> > > > +   mrs x2, sctlr_el2
> > > > +   bic x3, x2, #SCTLR_ELx_M
> > > > +   msr sctlr_el2, x3
> > > > +   isb
> > > > +
> > > > +   tlbi alle2
> > > > +
> > > > +   /* Install the new pgtables */
> > > > +   ldr x3, [x0, #NVHE_INIT_PGD_PA]
> > > > +   phys_to_ttbr x4, x3
> > > > +alternative_if ARM64_HAS_CNP
> > > > +   orr x4, x4, #TTBR_CNP_BIT
> > > > +alternative_else_nop_endif
> > > > +   msr ttbr0_el2, x4
> > > > +
> > > > +   /* Set the new stack pointer */
> > > > +   ldr x0, [x0, #NVHE_INIT_STACK_HYP_VA]
> > > > +   mov sp, x0
> > > > +
> > > > +   /* And turn the MMU back on! */
> > > > +   dsb nsh
> > > > +   isb
> > > > +   msr sctlr_el2, x2
> > > > +   isb
> > > > +   ret x1
> > > > +SYM_FUNC_END(__kvm_init_switch_pgd)
> > > > +
> > > 
> > > Should the instruction cache be flushed here (ic iallu), to discard
> > > speculatively fetched instructions?
> > 
> > Hmm, Will? Thoughts?
> 
> The I-cache is physically tagged, so not sure what invalidation would
> achieve here. Fuad -- what do you think could go wrong specifically?

While the MMU is off, instruction fetches can be made from the PoC
rather than the PoU, so where instructions have been modified/copied and
not cleaned to the PoC, it's possible to fetch stale copies into the
I-caches. The physical tag doesn't prevent that.

In the regular CPU boot paths, __enable_mmu() has an IC IALLU after
enabling the MMU to ensure that we get rid of anything stale (e.g. so
secondaries don't miss ftrace patching, which is only cleaned to the
PoU).

That might not be a problem here, if things are suitably padded and
never dynamically patched, but if so it's probably worth a comment.
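
For reference, the tail of that path looks roughly like this (a sketch
modelled on __enable_mmu(), with the registers adapted to match the EL2 code
quoted above):

	msr	sctlr_el2, x2		// MMU back on
	isb
	ic	iallu			// discard anything fetched from the PoC
	dsb	nsh
	isb
	ret	x1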

Fuad, is that the sort of thing you were considering, or did you have
additional concerns?

Thanks,
Mark.


Re: [RFC PATCH 16/27] KVM: arm64: Prepare Hyp memory protection

2020-12-07 Thread Will Deacon
On Fri, Dec 04, 2020 at 06:01:52PM +, Quentin Perret wrote:
> On Thursday 03 Dec 2020 at 12:57:33 (+), Fuad Tabba wrote:
> 
> > > +int hyp_create_idmap(void);
> > > +int hyp_map_vectors(void);
> > > +int hyp_back_vmemmap(phys_addr_t phys, unsigned long size, phys_addr_t back);
> > > +int hyp_cpu_set_vector(enum arm64_hyp_spectre_vector slot);
> > > +int hyp_create_mappings(void *from, void *to, enum kvm_pgtable_prot prot);
> > > +int __hyp_create_mappings(unsigned long start, unsigned long size,
> > > + unsigned long phys, unsigned long prot);
> > > +unsigned long __hyp_create_private_mapping(phys_addr_t phys, size_t size,
> > > +  unsigned long prot);
> > > +
> > 
> > nit: I also thought that the hyp_create_mappings function names are a
> > bit confusing, since there's the create_hyp_mappings functions which
> > use the aforementioned *hyp_pgtable.
> 
> Sure, happy to re-name those (and hyp_pgtable above). Any suggestions?
> 
> 
> 
> > > +SYM_FUNC_START(__kvm_init_switch_pgd)
> > > +   /* Turn the MMU off */
> > > +   pre_disable_mmu_workaround
> > > +   mrs x2, sctlr_el2
> > > +   bic x3, x2, #SCTLR_ELx_M
> > > +   msr sctlr_el2, x3
> > > +   isb
> > > +
> > > +   tlbi alle2
> > > +
> > > +   /* Install the new pgtables */
> > > +   ldr x3, [x0, #NVHE_INIT_PGD_PA]
> > > +   phys_to_ttbr x4, x3
> > > +alternative_if ARM64_HAS_CNP
> > > +   orr x4, x4, #TTBR_CNP_BIT
> > > +alternative_else_nop_endif
> > > +   msr ttbr0_el2, x4
> > > +
> > > +   /* Set the new stack pointer */
> > > +   ldr x0, [x0, #NVHE_INIT_STACK_HYP_VA]
> > > +   mov sp, x0
> > > +
> > > +   /* And turn the MMU back on! */
> > > +   dsb nsh
> > > +   isb
> > > +   msr sctlr_el2, x2
> > > +   isb
> > > +   ret x1
> > > +SYM_FUNC_END(__kvm_init_switch_pgd)
> > > +
> > 
> > Should the instruction cache be flushed here (ic iallu), to discard
> > speculatively fetched instructions?
> 
> Hmm, Will? Thoughts?

The I-cache is physically tagged, so not sure what invalidation would
achieve here. Fuad -- what do you think could go wrong specifically?

Will


Re: [RFC PATCH 16/27] KVM: arm64: Prepare Hyp memory protection

2020-12-04 Thread Quentin Perret
On Thursday 03 Dec 2020 at 12:57:33 (+), Fuad Tabba wrote:

> > +int hyp_create_idmap(void);
> > +int hyp_map_vectors(void);
> > +int hyp_back_vmemmap(phys_addr_t phys, unsigned long size, phys_addr_t back);
> > +int hyp_cpu_set_vector(enum arm64_hyp_spectre_vector slot);
> > +int hyp_create_mappings(void *from, void *to, enum kvm_pgtable_prot prot);
> > +int __hyp_create_mappings(unsigned long start, unsigned long size,
> > + unsigned long phys, unsigned long prot);
> > +unsigned long __hyp_create_private_mapping(phys_addr_t phys, size_t size,
> > +  unsigned long prot);
> > +
> 
> nit: I also thought that the hyp_create_mappings function names are a
> bit confusing, since there's the create_hyp_mappings functions which
> use the aforementioned *hyp_pgtable.

Sure, happy to re-name those (and hyp_pgtable above). Any suggestions?



> > +SYM_FUNC_START(__kvm_init_switch_pgd)
> > +   /* Turn the MMU off */
> > +   pre_disable_mmu_workaround
> > +   mrs x2, sctlr_el2
> > +   bic x3, x2, #SCTLR_ELx_M
> > +   msr sctlr_el2, x3
> > +   isb
> > +
> > +   tlbi alle2
> > +
> > +   /* Install the new pgtables */
> > +   ldr x3, [x0, #NVHE_INIT_PGD_PA]
> > +   phys_to_ttbr x4, x3
> > +alternative_if ARM64_HAS_CNP
> > +   orr x4, x4, #TTBR_CNP_BIT
> > +alternative_else_nop_endif
> > +   msr ttbr0_el2, x4
> > +
> > +   /* Set the new stack pointer */
> > +   ldr x0, [x0, #NVHE_INIT_STACK_HYP_VA]
> > +   mov sp, x0
> > +
> > +   /* And turn the MMU back on! */
> > +   dsb nsh
> > +   isb
> > +   msr sctlr_el2, x2
> > +   isb
> > +   ret x1
> > +SYM_FUNC_END(__kvm_init_switch_pgd)
> > +
> 
> Should the instruction cache be flushed here (ic iallu), to discard
> speculatively fetched instructions?

Hmm, Will? Thoughts?

Thanks,
Quentin


Re: [RFC PATCH 16/27] KVM: arm64: Prepare Hyp memory protection

2020-12-03 Thread Fuad Tabba
Hi Quentin,

On Tue, Nov 17, 2020 at 6:17 PM 'Quentin Perret' via kernel-team
 wrote:
>
> When memory protection is enabled, the Hyp code needs the ability to
> create and manage its own page-table. To do so, introduce a new set of
> hypercalls to initialize Hyp memory protection.
>
> During the init hcall, the hypervisor runs with the host-provided
> page-table and uses the trivial early page allocator to create its own
> set of page-tables, using a memory pool that was donated by the host.
> Specifically, the hypervisor creates its own mappings for __hyp_text,
> the Hyp memory pool, the __hyp_bss, the portion of hyp_vmemmap
> corresponding to the Hyp pool, among other things. It then jumps back into
> the idmap page, switches to use the newly-created pgd (instead of the
> temporary one provided by the host) and then installs the full-fledged
> buddy allocator which will then be the only one in use from then on.
>
> Note that for the sake of simplifying the review, this only introduces
> the code doing this operation, without actually being called by anything
> yet. This will be done in a subsequent patch, which will introduce the
> necessary host kernel changes.
>
> Credits to Will for __kvm_init_switch_pgd.
>
> Co-authored-by: Will Deacon 
> Signed-off-by: Quentin Perret 
> ---
>  arch/arm64/include/asm/kvm_asm.h |   6 +-
>  arch/arm64/include/asm/kvm_host.h|   8 +
>  arch/arm64/include/asm/kvm_hyp.h |   8 +
>  arch/arm64/kernel/cpufeature.c   |   2 +-
>  arch/arm64/kernel/image-vars.h   |  19 +++
>  arch/arm64/kvm/hyp/Makefile  |   2 +-
>  arch/arm64/kvm/hyp/include/nvhe/memory.h |   6 +
>  arch/arm64/kvm/hyp/include/nvhe/mm.h |  79 +
>  arch/arm64/kvm/hyp/nvhe/Makefile |   4 +-
>  arch/arm64/kvm/hyp/nvhe/hyp-init.S   |  30 
>  arch/arm64/kvm/hyp/nvhe/hyp-main.c   |  44 +
>  arch/arm64/kvm/hyp/nvhe/mm.c | 175 
>  arch/arm64/kvm/hyp/nvhe/psci-relay.c |   2 -
>  arch/arm64/kvm/hyp/nvhe/setup.c  | 196 +++
>  arch/arm64/kvm/hyp/reserved_mem.c|  75 +
>  arch/arm64/kvm/mmu.c |   2 +-
>  arch/arm64/mm/init.c |   3 +
>  17 files changed, 653 insertions(+), 8 deletions(-)
>  create mode 100644 arch/arm64/kvm/hyp/include/nvhe/mm.h
>  create mode 100644 arch/arm64/kvm/hyp/nvhe/mm.c
>  create mode 100644 arch/arm64/kvm/hyp/nvhe/setup.c
>  create mode 100644 arch/arm64/kvm/hyp/reserved_mem.c
>
> diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
> index e4934f5e4234..9266b17f8ba9 100644
> --- a/arch/arm64/include/asm/kvm_asm.h
> +++ b/arch/arm64/include/asm/kvm_asm.h
> @@ -57,6 +57,10 @@
>  #define __KVM_HOST_SMCCC_FUNC___kvm_get_mdcr_el2   12
>  #define __KVM_HOST_SMCCC_FUNC___vgic_v3_save_aprs  13
>  #define __KVM_HOST_SMCCC_FUNC___vgic_v3_restore_aprs   14
> +#define __KVM_HOST_SMCCC_FUNC___kvm_hyp_protect15
> +#define __KVM_HOST_SMCCC_FUNC___hyp_create_mappings16
> +#define __KVM_HOST_SMCCC_FUNC___hyp_create_private_mapping 17
> +#define __KVM_HOST_SMCCC_FUNC___hyp_cpu_set_vector 18
>
>  #ifndef __ASSEMBLY__
>
> @@ -171,7 +175,7 @@ struct kvm_vcpu;
>  struct kvm_s2_mmu;
>
>  DECLARE_KVM_NVHE_SYM(__kvm_hyp_init);
> -DECLARE_KVM_NVHE_SYM(__kvm_hyp_host_vector);
> +DECLARE_KVM_HYP_SYM(__kvm_hyp_host_vector);
>  DECLARE_KVM_HYP_SYM(__kvm_hyp_vector);
>  #define __kvm_hyp_init CHOOSE_NVHE_SYM(__kvm_hyp_init)
>  #define __kvm_hyp_host_vector  CHOOSE_NVHE_SYM(__kvm_hyp_host_vector)
> diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
> index 7a5d5f4b3351..ee8bb8021637 100644
> --- a/arch/arm64/include/asm/kvm_host.h
> +++ b/arch/arm64/include/asm/kvm_host.h
> @@ -742,4 +742,12 @@ bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
>  #define kvm_vcpu_has_pmu(vcpu) \
> (test_bit(KVM_ARM_VCPU_PMU_V3, (vcpu)->arch.features))
>
> +#ifdef CONFIG_KVM
> +extern phys_addr_t hyp_mem_base;
> +extern phys_addr_t hyp_mem_size;
> +void __init reserve_kvm_hyp(void);
> +#else
> +static inline void reserve_kvm_hyp(void) { }
> +#endif
> +
>  #endif /* __ARM64_KVM_HOST_H__ */
> diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
> index 95a2bbbcc7e1..dbd2ef86afa9 100644
> --- a/arch/arm64/include/asm/kvm_hyp.h
> +++ b/arch/arm64/include/asm/kvm_hyp.h
> @@ -105,5 +105,13 @@ void __noreturn hyp_panic(void);
>  void __noreturn __hyp_do_panic(bool restore_host, u64 spsr, u64 elr, u64 
> par);
>  #endif
>
> +#ifdef __KVM_NVHE_HYPERVISOR__
> +void __kvm_init_switch_pgd(phys_addr_t phys, unsigned long size,
> +  phys_addr_t pgd, void *sp, void *cont_fn);
> +int __kvm_hyp_protect(phys_addr_t phys, unsigned long size,
> + unsigned long nr_cpus, unsigned long *per_cpu_base);

[RFC PATCH 16/27] KVM: arm64: Prepare Hyp memory protection

2020-11-17 Thread Quentin Perret
When memory protection is enabled, the Hyp code needs the ability to
create and manage its own page-table. To do so, introduce a new set of
hypercalls to initialize Hyp memory protection.

During the init hcall, the hypervisor runs with the host-provided
page-table and uses the trivial early page allocator to create its own
set of page-tables, using a memory pool that was donated by the host.
Specifically, the hypervisor creates its own mappings for __hyp_text,
the Hyp memory pool, the __hyp_bss, the portion of hyp_vmemmap
corresponding to the Hyp pool, among other things. It then jumps back into
the idmap page, switches to use the newly-created pgd (instead of the
temporary one provided by the host) and then installs the full-fledged
buddy allocator which will then be the only one in use from then on.

Note that for the sake of simplifying the review, this only introduces
the code doing this operation, without actually being called by anything
yet. This will be done in a subsequent patch, which will introduce the
necessary host kernel changes.

Credits to Will for __kvm_init_switch_pgd.

Co-authored-by: Will Deacon 
Signed-off-by: Quentin Perret 
---
 arch/arm64/include/asm/kvm_asm.h |   6 +-
 arch/arm64/include/asm/kvm_host.h|   8 +
 arch/arm64/include/asm/kvm_hyp.h |   8 +
 arch/arm64/kernel/cpufeature.c   |   2 +-
 arch/arm64/kernel/image-vars.h   |  19 +++
 arch/arm64/kvm/hyp/Makefile  |   2 +-
 arch/arm64/kvm/hyp/include/nvhe/memory.h |   6 +
 arch/arm64/kvm/hyp/include/nvhe/mm.h |  79 +
 arch/arm64/kvm/hyp/nvhe/Makefile |   4 +-
 arch/arm64/kvm/hyp/nvhe/hyp-init.S   |  30 
 arch/arm64/kvm/hyp/nvhe/hyp-main.c   |  44 +
 arch/arm64/kvm/hyp/nvhe/mm.c | 175 
 arch/arm64/kvm/hyp/nvhe/psci-relay.c |   2 -
 arch/arm64/kvm/hyp/nvhe/setup.c  | 196 +++
 arch/arm64/kvm/hyp/reserved_mem.c|  75 +
 arch/arm64/kvm/mmu.c |   2 +-
 arch/arm64/mm/init.c |   3 +
 17 files changed, 653 insertions(+), 8 deletions(-)
 create mode 100644 arch/arm64/kvm/hyp/include/nvhe/mm.h
 create mode 100644 arch/arm64/kvm/hyp/nvhe/mm.c
 create mode 100644 arch/arm64/kvm/hyp/nvhe/setup.c
 create mode 100644 arch/arm64/kvm/hyp/reserved_mem.c

diff --git a/arch/arm64/include/asm/kvm_asm.h b/arch/arm64/include/asm/kvm_asm.h
index e4934f5e4234..9266b17f8ba9 100644
--- a/arch/arm64/include/asm/kvm_asm.h
+++ b/arch/arm64/include/asm/kvm_asm.h
@@ -57,6 +57,10 @@
 #define __KVM_HOST_SMCCC_FUNC___kvm_get_mdcr_el2   12
 #define __KVM_HOST_SMCCC_FUNC___vgic_v3_save_aprs  13
 #define __KVM_HOST_SMCCC_FUNC___vgic_v3_restore_aprs   14
+#define __KVM_HOST_SMCCC_FUNC___kvm_hyp_protect15
+#define __KVM_HOST_SMCCC_FUNC___hyp_create_mappings16
+#define __KVM_HOST_SMCCC_FUNC___hyp_create_private_mapping 17
+#define __KVM_HOST_SMCCC_FUNC___hyp_cpu_set_vector 18
 
 #ifndef __ASSEMBLY__
 
@@ -171,7 +175,7 @@ struct kvm_vcpu;
 struct kvm_s2_mmu;
 
 DECLARE_KVM_NVHE_SYM(__kvm_hyp_init);
-DECLARE_KVM_NVHE_SYM(__kvm_hyp_host_vector);
+DECLARE_KVM_HYP_SYM(__kvm_hyp_host_vector);
 DECLARE_KVM_HYP_SYM(__kvm_hyp_vector);
 #define __kvm_hyp_init CHOOSE_NVHE_SYM(__kvm_hyp_init)
 #define __kvm_hyp_host_vector  CHOOSE_NVHE_SYM(__kvm_hyp_host_vector)
diff --git a/arch/arm64/include/asm/kvm_host.h b/arch/arm64/include/asm/kvm_host.h
index 7a5d5f4b3351..ee8bb8021637 100644
--- a/arch/arm64/include/asm/kvm_host.h
+++ b/arch/arm64/include/asm/kvm_host.h
@@ -742,4 +742,12 @@ bool kvm_arm_vcpu_is_finalized(struct kvm_vcpu *vcpu);
 #define kvm_vcpu_has_pmu(vcpu) \
(test_bit(KVM_ARM_VCPU_PMU_V3, (vcpu)->arch.features))
 
+#ifdef CONFIG_KVM
+extern phys_addr_t hyp_mem_base;
+extern phys_addr_t hyp_mem_size;
+void __init reserve_kvm_hyp(void);
+#else
+static inline void reserve_kvm_hyp(void) { }
+#endif
+
 #endif /* __ARM64_KVM_HOST_H__ */
diff --git a/arch/arm64/include/asm/kvm_hyp.h b/arch/arm64/include/asm/kvm_hyp.h
index 95a2bbbcc7e1..dbd2ef86afa9 100644
--- a/arch/arm64/include/asm/kvm_hyp.h
+++ b/arch/arm64/include/asm/kvm_hyp.h
@@ -105,5 +105,13 @@ void __noreturn hyp_panic(void);
 void __noreturn __hyp_do_panic(bool restore_host, u64 spsr, u64 elr, u64 par);
 #endif
 
+#ifdef __KVM_NVHE_HYPERVISOR__
+void __kvm_init_switch_pgd(phys_addr_t phys, unsigned long size,
+  phys_addr_t pgd, void *sp, void *cont_fn);
+int __kvm_hyp_protect(phys_addr_t phys, unsigned long size,
+ unsigned long nr_cpus, unsigned long *per_cpu_base);
+void __noreturn __host_enter(struct kvm_cpu_context *host_ctxt);
+#endif
+
 #endif /* __ARM64_KVM_HYP_H__ */
 
diff --git a/arch/arm64/kernel/cpufeature.c b/arch/arm64/kernel/cpufeature.c
index 3bc86d1423f8..010458f6d799 100644
--- a/arch/arm64/kernel/cpufeature.c