On Mon, Jan 25, 2016 at 10:37:42AM -0800, Andy Lutomirski wrote:
> This adds helpers for each of the four currently-specified INVPCID
> modes.
> 
> Signed-off-by: Andy Lutomirski <[email protected]>
> ---
>  arch/x86/include/asm/tlbflush.h | 41 +++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 41 insertions(+)
> 
> diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
> index 6df2029405a3..20fc38d8478a 100644
> --- a/arch/x86/include/asm/tlbflush.h
> +++ b/arch/x86/include/asm/tlbflush.h
> @@ -7,6 +7,47 @@
>  #include <asm/processor.h>
>  #include <asm/special_insns.h>
>  
> +static inline void __invpcid(unsigned long pcid, unsigned long addr,
> +                          unsigned long type)
> +{
> +     u64 desc[2] = { pcid, addr };
> +
> +     /*
> +      * The memory clobber is because the whole point is to invalidate
> +      * stale TLB entries and, especially if we're flushing global
> +      * mappings, we don't want the compiler to reorder any subsequent
> +      * memory accesses before the TLB flush.
> +      */
> +     asm volatile (

Yeah, no need for that linebreak here:

        asm volatile (".byte 0x66, 0x0f, 0x38, 0x82, 0x01"

reads fine too.
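
I.e., the whole statement would then read (just reflowing what is quoted
below, untested):

	asm volatile (".byte 0x66, 0x0f, 0x38, 0x82, 0x01"	/* invpcid (%cx), %ax */
		      : : "m" (desc), "a" (type), "c" (desc) : "memory");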

> +             ".byte 0x66, 0x0f, 0x38, 0x82, 0x01"    /* invpcid (%cx), %ax */
> +             : : "m" (desc), "a" (type), "c" (desc) : "memory");
> +}
> +

Please add defines for the invalidation types:

#define INVPCID_TYPE_INDIVIDUAL         0
#define INVPCID_TYPE_SINGLE_CTXT        1
#define INVPCID_TYPE_ALL                2
#define INVPCID_TYPE_ALL_NON_GLOBAL     3

and add macros:

#define invpcid_flush_one(pcid, addr)   __invpcid(pcid, addr, INVPCID_TYPE_INDIVIDUAL)
...

and so on.

Oh, and I'd call the "flush everything" macro invpcid_flush_all(), to match
flush_tlb_all().
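
Put together, the macros on top of those defines could look something like
this (a rough sketch, untested; the names other than invpcid_flush_one()
and invpcid_flush_all() are just a suggestion):

/* Flush one address in one PCID's context (global mappings are not flushed). */
#define invpcid_flush_one(pcid, addr)	\
	__invpcid(pcid, addr, INVPCID_TYPE_INDIVIDUAL)

/* Flush everything tagged with one PCID, except global mappings. */
#define invpcid_flush_single_context(pcid)	\
	__invpcid(pcid, 0, INVPCID_TYPE_SINGLE_CTXT)

/* Flush all mappings for all PCIDs, including global ones. */
#define invpcid_flush_all()	\
	__invpcid(0, 0, INVPCID_TYPE_ALL)

/* Flush all mappings for all PCIDs, except global ones. */
#define invpcid_flush_all_nonglobals()	\
	__invpcid(0, 0, INVPCID_TYPE_ALL_NON_GLOBAL)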

Thanks.

-- 
Regards/Gruss,
    Boris.

ECO tip #101: Trim your mails when you reply.
