On Fri, Jul 19, 2019 at 01:41:08PM -0700, Sean Christopherson wrote:
> @@ -68,8 +67,22 @@ static __always_inline unsigned long __vmcs_readl(unsigned 
> long field)
>  {
>       unsigned long value;
>  
> -     asm volatile (__ex_clear("vmread %1, %0", "%k0")
> -                   : "=r"(value) : "r"(field));
> +     asm volatile("1: vmread %2, %1\n\t"
> +                  ".byte 0x3e\n\t" /* branch taken hint */
> +                  "ja 3f\n\t"
> +                  "mov %2, %%" _ASM_ARG1 "\n\t"
> +                  "xor %%" _ASM_ARG2 ", %%" _ASM_ARG2 "\n\t"
> +                  "2: call vmread_error\n\t"
> +                  "xor %k1, %k1\n\t"
> +                  "3:\n\t"
> +
> +                  ".pushsection .fixup, \"ax\"\n\t"
> +                  "4: mov %2, %%" _ASM_ARG1 "\n\t"
> +                  "mov $1, %%" _ASM_ARG2 "\n\t"
> +                  "jmp 2b\n\t"
> +                  ".popsection\n\t"
> +                  _ASM_EXTABLE(1b, 4b)
> +                  : ASM_CALL_CONSTRAINT, "=r"(value) : "r"(field) : "cc");

Was there a reason you didn't do the asm goto thing here like you did
in the previous patch?  That seemed cleaner, and needs less asm.

I think the branch hints aren't needed -- they're ignored on modern
processors.  Ditto for the previous patch.

Also please use named asm operands wherever you can, like "%[field]"
instead of "%2".  It helps a lot with readability.

-- 
Josh
