Sean Christopherson <sean.j.christopher...@intel.com> writes:

> On Tue, Jul 14, 2020 at 02:00:04PM +0200, Vitaly Kuznetsov wrote:
>> Sean Christopherson <sean.j.christopher...@intel.com> writes:
>> > diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
>> > index 95ef629228691..5f526d94c33f3 100644
>> > --- a/arch/x86/kvm/x86.c
>> > +++ b/arch/x86/kvm/x86.c
>> > @@ -819,22 +819,22 @@ int kvm_set_cr0(struct kvm_vcpu *vcpu, unsigned long cr0)
>> >    if ((cr0 & X86_CR0_PG) && !(cr0 & X86_CR0_PE))
>> >            return 1;
>> >  
>> > -  if (cr0 & X86_CR0_PG) {
>> >  #ifdef CONFIG_X86_64
>> > -          if (!is_paging(vcpu) && (vcpu->arch.efer & EFER_LME)) {
>> > -                  int cs_db, cs_l;
>> > +  if ((vcpu->arch.efer & EFER_LME) && !is_paging(vcpu) &&
>> > +      (cr0 & X86_CR0_PG)) {
>> 
>> It seems we have more than one occurrence of "if (vcpu->arch.efer &
>> EFER_LME)" under "#ifdef CONFIG_X86_64", and we already have
>> 
>> static inline int is_long_mode(struct kvm_vcpu *vcpu)
>> {
>> #ifdef CONFIG_X86_64
>>      return vcpu->arch.efer & EFER_LMA;
>> #else
>>      return 0;
>> #endif
>> }
>> 
>> so if we use this instead, the compiler will just throw away the
>> unreachable blocks when CONFIG_X86_64 is not set, right?
>
> EFER.LME vs. EFER.LMA.  The kvm_set_cr0() check is specifically looking at
> the case where EFER.LME=1, EFER.LMA=0, and CR0.PG is being toggled on, i.e.
> long mode is being enabled.  EFER_LMA won't be set until vmx_set_cr0() does
> enter_lmode().

Oops, shame on me :-(

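Just to write the sequence down for myself (illustration only, the
helper name below is made up, not KVM code): architecturally EFER.LMA
only becomes 1 once paging is actually enabled while EFER.LME=1 (with
CR4.PAE=1), so at this point LMA is still clear and is_long_mode()
returns false. The transition kvm_set_cr0() has to catch is

static inline bool long_mode_is_being_activated(struct kvm_vcpu *vcpu,
						unsigned long new_cr0)
{
	/* LME already enabled, paging currently off, PG being set. */
	return (vcpu->arch.efer & EFER_LME) && !is_paging(vcpu) &&
	       (new_cr0 & X86_CR0_PG);
}
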
Would it make sense to introduce a helper similar to is_long_mode()
(e.g. is_efer_lme()) for the EFER_LME check, so the #ifdef CONFIG_X86_64
can be hidden there? I see the same check in both vmx_set_cr0() and
svm_set_cr0().
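
Untested, but I was thinking of something along these lines, mirroring
the existing is_long_mode() (the name is just a suggestion):

static inline bool is_efer_lme(struct kvm_vcpu *vcpu)
{
#ifdef CONFIG_X86_64
	return vcpu->arch.efer & EFER_LME;
#else
	return false;
#endif
}

kvm_set_cr0() could then do

	if (is_efer_lme(vcpu) && !is_paging(vcpu) && (cr0 & X86_CR0_PG)) {
		...
	}

without the explicit #ifdef, assuming the compiler can throw away the
unreachable body for 32-bit builds (and that nothing inside it is
64-bit-only at compile time).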

-- 
Vitaly
