Paolo Bonzini <pbonz...@redhat.com> writes:

> On 12/04/2018 17:25, Vitaly Kuznetsov wrote:
>> @@ -5335,6 +5353,9 @@ static void __always_inline vmx_disable_intercept_for_msr(unsigned long *msr_bit
>>      if (!cpu_has_vmx_msr_bitmap())
>>              return;
>>  
>> +    if (static_branch_unlikely(&enable_emsr_bitmap))
>> +            evmcs_touch_msr_bitmap();
>> +
>>      /*
>>       * See Intel PRM Vol. 3, 20.6.9 (MSR-Bitmap Address). Early manuals
>>       * have the write-low and read-high bitmap offsets the wrong way round.
>> @@ -5370,6 +5391,9 @@ static void __always_inline vmx_enable_intercept_for_msr(unsigned long *msr_bitm
>>      if (!cpu_has_vmx_msr_bitmap())
>>              return;
>>  
>> +    if (static_branch_unlikely(&enable_emsr_bitmap))
>> +            evmcs_touch_msr_bitmap();
>
> I'm not sure about the "unlikely".  Can you just check current_evmcs
> instead (dropping the static key completely)?

current_evmcs is just a cast:

 (struct hv_enlightened_vmcs *)this_cpu_read(current_vmcs)

so it is never NULL here :-) We need to check the enable_evmcs static
key first. Getting rid of the newly added enable_emsr_bitmap key is, of
course, possible.
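
For illustration, a minimal sketch of what the call sites could look
like in v2, reusing the existing enable_evmcs key and keeping the
evmcs_touch_msr_bitmap() helper from this series (exact placement to be
decided in the patch itself):

 	/* only touch the eVMCS MSR bitmap when eVMCS is actually in use */
 	if (static_branch_unlikely(&enable_evmcs))
 		evmcs_touch_msr_bitmap();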

(Actually, we only call vmx_{dis,en}able_intercept_for_msr at the very
beginning of a vCPU's life, so this is not a hot path and the
likelihood hint doesn't really matter.)

Will do v2 without the static key, thanks!

>
> The function, also, is small enough that inlining should be beneficial.
>
> Paolo

-- 
  Vitaly
