On 12/14/20 10:03 AM, Paolo Bonzini wrote:
> On 10/12/20 18:10, Tom Lendacky wrote:
>> From: Tom Lendacky <thomas.lenda...@amd.com>
>>
>> Typically under KVM, an AP is booted using the INIT-SIPI-SIPI sequence,
>> where the guest vCPU register state is updated and then the vCPU is run
>> (VMRUN) to begin execution of the AP. For an SEV-ES guest, this won't
>> work because the guest register state is encrypted.
>>
>> Following the GHCB specification, the hypervisor must not alter the guest
>> register state, so KVM must track an AP/vCPU boot. Should the guest want
>> to park the AP, it must use the AP Reset Hold exit event in place of, for
>> example, a HLT loop.
>>
>> First AP boot (first INIT-SIPI-SIPI sequence):
>>    Execute the AP (vCPU) as it was initialized and measured by the SEV-ES
>>    support. It is up to the guest to transfer control of the AP to the
>>    proper location.
>>
>> Subsequent AP boot:
>>    KVM will expect to receive an AP Reset Hold exit event indicating that
>>    the vCPU is being parked and will require an INIT-SIPI-SIPI sequence to
>>    awaken it. When the AP Reset Hold exit event is received, KVM will place
>>    the vCPU into a simulated HLT mode. Upon receiving the INIT-SIPI-SIPI
>>    sequence, KVM will make the vCPU runnable. It is again up to the guest
>>    to then transfer control of the AP to the proper location.
>>
>> The GHCB specification also requires the hypervisor to save the address of
>> an AP Jump Table so that, for example, vCPUs that have been parked by UEFI
>> can be started by the OS. Provide support for the AP Jump Table set/get
>> exit code.
>>
>> Signed-off-by: Tom Lendacky <thomas.lenda...@amd.com>
>> ---
>>   arch/x86/include/asm/kvm_host.h |  2 ++
>>   arch/x86/kvm/svm/sev.c          | 50 +++++++++++++++++++++++++++++++++
>>   arch/x86/kvm/svm/svm.c          |  7 +++++
>>   arch/x86/kvm/svm/svm.h          |  3 ++
>>   arch/x86/kvm/x86.c              |  9 ++++++
>>   5 files changed, 71 insertions(+)
>>
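A sketch of how the two boot cases above could map onto the new
vcpu_deliver_sipi_vector hook on the SVM side; this is illustrative only,
reusing the ap_hlt_loop flag added below, and the actual implementation in
the full patch may differ:

  static void sev_vcpu_deliver_sipi_vector(struct kvm_vcpu *vcpu, u8 vector)
  {
          struct vcpu_svm *svm = to_svm(vcpu);

          /*
           * First INIT-SIPI-SIPI: run the AP with the register state that
           * was set up and measured at launch.  The SIPI vector is ignored
           * because the encrypted register state cannot be modified.
           */
          if (!svm->ap_hlt_loop)
                  return;

          /*
           * Subsequent INIT-SIPI-SIPI: the vCPU was parked by an AP Reset
           * Hold VMGEXIT.  Complete that exit with a non-zero SW_EXITINFO2
           * so the guest knows the hold is over and can transfer control
           * of the AP to the proper location itself.
           */
          svm->ap_hlt_loop = false;
          ghcb_set_sw_exit_info_2(svm->ghcb, 1);
  }
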
>> diff --git a/arch/x86/include/asm/kvm_host.h b/arch/x86/include/asm/kvm_host.h
>> index 048b08437c33..60a3b9d33407 100644
>> --- a/arch/x86/include/asm/kvm_host.h
>> +++ b/arch/x86/include/asm/kvm_host.h
>> @@ -1286,6 +1286,8 @@ struct kvm_x86_ops {
>>
>>       void (*migrate_timers)(struct kvm_vcpu *vcpu);
>>       void (*msr_filter_changed)(struct kvm_vcpu *vcpu);
>> +
>> +    void (*vcpu_deliver_sipi_vector)(struct kvm_vcpu *vcpu, u8 vector);
>>   };
>>
>>   struct kvm_x86_nested_ops {
>> diff --git a/arch/x86/kvm/svm/sev.c b/arch/x86/kvm/svm/sev.c
>> index a7531de760b5..b47285384b1f 100644
>> --- a/arch/x86/kvm/svm/sev.c
>> +++ b/arch/x86/kvm/svm/sev.c
>> @@ -17,6 +17,8 @@
>>   #include <linux/processor.h>
>>   #include <linux/trace_events.h>
>>
>> +#include <asm/trapnr.h>
>> +
>>   #include "x86.h"
>>   #include "svm.h"
>>   #include "cpuid.h"
>> @@ -1449,6 +1451,8 @@ static int sev_es_validate_vmgexit(struct vcpu_svm *svm)
>>           if (!ghcb_sw_scratch_is_valid(ghcb))
>>               goto vmgexit_err;
>>           break;
>> +    case SVM_VMGEXIT_AP_HLT_LOOP:
>> +    case SVM_VMGEXIT_AP_JUMP_TABLE:
>>       case SVM_VMGEXIT_UNSUPPORTED_EVENT:
>>           break;
>>       default:
>> @@ -1770,6 +1774,35 @@ int sev_handle_vmgexit(struct vcpu_svm *svm)
>>                           control->exit_info_2,
>>                           svm->ghcb_sa);
>>           break;
>> +    case SVM_VMGEXIT_AP_HLT_LOOP:
>> +        svm->ap_hlt_loop = true;
> 
> This value needs to be communicated to userspace.  Let's get this right
> from the beginning and use a new KVM_MP_STATE_* value instead (perhaps
> reuse KVM_MP_STATE_STOPPED but for x86 #define it as
> KVM_MP_STATE_AP_HOLD_RECEIVED?).

Ok, let me look into this.
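
A rough sketch of that direction, where KVM_MP_STATE_AP_RESET_HOLD is just
an illustrative name for the new state and kvm_emulate_ap_reset_hold() is a
hypothetical helper that mirrors kvm_emulate_halt() but parks the vCPU in
that userspace-visible state:

  	case SVM_VMGEXIT_AP_HLT_LOOP:
  		/*
  		 * Record the reset hold in the vCPU's MP state rather than
  		 * in a private ap_hlt_loop flag, so that userspace can see,
  		 * save and restore it across migration.
  		 */
  		ret = kvm_emulate_ap_reset_hold(&svm->vcpu);
  		break;

The SIPI delivery path would then check for that MP state before completing
the AP Reset Hold exit back to the guest.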

> 
>> @@ -68,6 +68,7 @@ struct kvm_sev_info {
>>      int fd;            /* SEV device fd */
>>      unsigned long pages_locked; /* Number of pages locked */
>>      struct list_head regions_list;  /* List of registered regions */
>> +    u64 ap_jump_table;    /* SEV-ES AP Jump Table address */
> 
> Do you have any plans for migration of this value?  How does the guest
> ensure that the hypervisor does not screw with it?

I'll make sure this is covered as part of the SEV-ES live migration support.

For SEV-ES, we can't guarantee that the hypervisor doesn't screw with it.
This is something that SEV-SNP will be able to address.
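
For reference, the set/get handling of the jump table address in the
vmgexit handler is straightforward; a sketch following the GHCB spec, where
exit_info_1 selects set (0) or get (1) and exit_info_2 carries the address
(the X86_TRAP_UD error case is why <asm/trapnr.h> is now included):

  	case SVM_VMGEXIT_AP_JUMP_TABLE: {
  		struct kvm_sev_info *sev = &to_kvm_svm(svm->vcpu.kvm)->sev_info;

  		switch (control->exit_info_1) {
  		case 0:
  			/* Set AP jump table address */
  			sev->ap_jump_table = control->exit_info_2;
  			break;
  		case 1:
  			/* Get AP jump table address */
  			ghcb_set_sw_exit_info_2(ghcb, sev->ap_jump_table);
  			break;
  		default:
  			/* Unsupported request, report #UD back to the guest */
  			ghcb_set_sw_exit_info_1(ghcb, 1);
  			ghcb_set_sw_exit_info_2(ghcb, X86_TRAP_UD);
  		}

  		ret = 1;
  		break;
  	}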

Thanks,
Tom

> 
> Paolo
> 
