Re: [PATCH 7/9] i386/sev: Refactor setting of reset vector and initial CPU state

2024-03-18 Thread Roy Hopkins
On Tue, 2024-03-12 at 16:12 +, Daniel P. Berrangé wrote:
> On Tue, Mar 12, 2024 at 03:45:20PM +, Roy Hopkins wrote:
> > On Fri, 2024-03-01 at 17:01 +, Daniel P. Berrangé wrote:
> > > On Tue, Feb 27, 2024 at 02:50:13PM +, Roy Hopkins wrote:
> > > > +    /*
> > > > + * Ideally we would provide the VMSA directly to kvm which would
> > > > + * ensure that the resulting initial VMSA measurement which is
> > > > + * calculated during KVM_SEV_LAUNCH_UPDATE_VMSA is calculated from
> > > > + * exactly what we provide here. Currently this is not possible so
> > > > + * we need to copy the parts of the VMSA structure that we currently
> > > > + * support into the CPU state.
> > > > + */
> > > 
> > > This sounds like it is saying that the code is not honouring
> > > everything in the VMSA defined by the IGVM file?
> > > 
> > > If so, that is pretty awkward. The VMSA is effectively an external
> > > ABI between QEMU and the guest owner (or whatever is validating
> > > guest attestation reports for them), and thus predictability and
> > > stability of this over time is critical.
> > > 
> > > We don't want the attestation process to be dependent/variable on
> > > the particular QEMU/KVM version, because any upgrade to QEMU/KVM
> > > could then alter the effective VMSA that the guest owner sees.
> > > 
> > > We've already suffered pain in this respect not long ago when the
> > > kernel arbitrarily changed a default setting which altered the
> > > VMSA it exposed, breaking existing apps that validate attestation.
> > > 
> > > What will it take to provide the full VMSA to KVM, so that we can
> > > guarantee to the guest owner that the VMSA for the guest is going
> > > to perfectly match what their IGVM defined?
> > > 
> > 
> > Yes, the fact that we have to copy the individual fields from the VMSA to
> > "CPUX86State" is less than ideal - a problem made worse by the fact that the
> > kernel does not allow direct control over some of the fields from userspace,
> > "sev_features" being a good example here where "SVM_SEV_FEAT_DEBUG_SWAP" is
> > unconditionally added by the kernel.
> 
> Ah yes, the SVM_SEV_FEAT_DEBUG_SWAP feature is the one I couldn't remember
> the name of in my quoted text above, that broke our apps when the kernel
> suddenly set it by default (thankfully now reverted in Linux with
> 5abf6dceb066f2b02b225fd561440c98a8062681).
> 
> > The kernel VMSA is at least predictable. So, although we cannot yet allow
> > full flexibility in providing a complete VMSA from QEMU and guarantee it
> > will be honoured, we could check to see if any settings conflict with
> > those imposed by the kernel and exit with an error if this is the case. I
> > chose not to implement this for this first series but could easily add a
> > patch to support it. The problem here is that it ties the version of QEMU
> > to the VMSA handling functionality in the kernel. Any change to the VMSA
> > handling in the kernel would potentially invalidate the checks in QEMU.
> > The one upside here is that this will easily be detectable by the
> > attestation measurement not matching the expected measurement of the IGVM
> > file. But it will be difficult for the user to determine what the
> > discrepancy is.
> 
> Yes, the difficulty in diagnosis is the big thing I'm worried about from
> a distro supportability POV. The DEBUG_SWAP issue caused us a bunch of
> pain and that's before CVMs are even widely used.
> 
> I agree that hardcoding checks in QEMU is pretty unpleasant, and probably
> not something that I'd want us to do. I'd want QEMU to be able to live
> query the kernel's default initial VMSA, if it were to be reporting
> differences vs the IGVM provided VMSA. I don't think there's a way to
> do that nicely though - I only know of ftrace probes to dump it informally.
> 
> I guess if we know & document what subset of the VMSA QEMU /can/ directly
> control, that at least narrows down where to look if something does change
> or go wrong.
> 
Yes, it makes sense to document the subset that can be reliably set by QEMU,
along with any modifications made by the kernel. Perhaps I should go one step
further and check that the VMSA does not contain any entries beyond what is
copied in "sev_apply_cpu_context()"? If any field other than those explicitly
copied by this function contains a non-zero value then an error is generated.
As you suggest, this will limit the scope of any measurement differences to
the documented subset.

> > The ideal solution is to add or modify a KVM ioctl to allow the VMSA to
> > be set directly, overriding the state in "CPUX86State". The current
> > KVM_SEV_LAUNCH_UPDATE_VMSA ioctl triggers the synchronisation of the VMSA
> > but does not allow it to be specified directly. This could be modified
> > for what we need. The SEV-SNP kernel patc

Re: [PATCH 7/9] i386/sev: Refactor setting of reset vector and initial CPU state

2024-03-12 Thread Daniel P. Berrangé
On Tue, Mar 12, 2024 at 03:45:20PM +, Roy Hopkins wrote:
> On Fri, 2024-03-01 at 17:01 +, Daniel P. Berrangé wrote:
> > On Tue, Feb 27, 2024 at 02:50:13PM +, Roy Hopkins wrote:
> > > +    /*
> > > + * Ideally we would provide the VMSA directly to kvm which would
> > > + * ensure that the resulting initial VMSA measurement which is
> > > + * calculated during KVM_SEV_LAUNCH_UPDATE_VMSA is calculated from
> > > + * exactly what we provide here. Currently this is not possible so
> > > + * we need to copy the parts of the VMSA structure that we currently
> > > + * support into the CPU state.
> > > + */
> > 
> > This sounds like it is saying that the code is not honouring
> > everything in the VMSA defined by the IGVM file?
> > 
> > If so, that is pretty awkward. The VMSA is effectively an external
> > ABI between QEMU and the guest owner (or whatever is validating
> > guest attestation reports for them), and thus predictability and
> > stability of this over time is critical.
> > 
> > We don't want the attestation process to be dependent/variable on
> > the particular QEMU/KVM version, because any upgrade to QEMU/KVM
> > could then alter the effective VMSA that the guest owner sees.
> > 
> > We've already suffered pain in this respect not long ago when the
> > kernel arbitrarily changed a default setting which altered the
> > VMSA it exposed, breaking existing apps that validate attestation.
> > 
> > What will it take to provide the full VMSA to KVM, so that we can
> > guarantee to the guest owner that the VMSA for the guest is going
> > to perfectly match what their IGVM defined?
> > 
> 
> Yes, the fact that we have to copy the individual fields from the VMSA to
> "CPUX86State" is less than ideal - a problem made worse by the fact that the
> kernel does not allow direct control over some of the fields from userspace,
> "sev_features" being a good example here where "SVM_SEV_FEAT_DEBUG_SWAP" is
> unconditionally added by the kernel.

Ah yes, the SVM_SEV_FEAT_DEBUG_SWAP feature is the one I couldn't remember
the name of in my quoted text above, that broke our apps when the kernel
suddenly set it by default (thankfully now reverted in Linux with
5abf6dceb066f2b02b225fd561440c98a8062681).

> The kernel VMSA is at least predictable. So, although we cannot yet allow
> full flexibility in providing a complete VMSA from QEMU and guarantee it
> will be honoured, we could check to see if any settings conflict with those
> imposed by the kernel and exit with an error if this is the case. I chose
> not to implement this for this first series but could easily add a patch to
> support it. The problem here is that it ties the version of QEMU to the
> VMSA handling functionality in the kernel. Any change to the VMSA handling
> in the kernel would potentially invalidate the checks in QEMU. The one
> upside here is that this will easily be detectable by the attestation
> measurement not matching the expected measurement of the IGVM file. But it
> will be difficult for the user to determine what the discrepancy is.

Yes, the difficulty in diagnosis is the big thing I'm worried about from
a distro supportability POV. The DEBUG_SWAP issue caused us a bunch of
pain and that's before CVMs are even widely used.

I agree that hardcoding checks in QEMU is pretty unpleasant, and probably
not something that I'd want us to do. I'd want QEMU to be able to live
query the kernel's default initial VMSA, if it were to be reporting
differences vs the IGVM provided VMSA. I don't think there's a way to
do that nicely though - I only know of ftrace probes to dump it informally.

I guess if we know & document what subset of the VMSA QEMU /can/ directly
control, that at least narrows down where to look if something does change
or go wrong.

> The ideal solution is to add or modify a KVM ioctl to allow the VMSA to be
> set directly, overriding the state in "CPUX86State". The current
> KVM_SEV_LAUNCH_UPDATE_VMSA ioctl triggers the synchronisation of the VMSA
> but does not allow it to be specified directly. This could be modified for
> what we need. The SEV-SNP kernel patches add KVM_SEV_SNP_LAUNCH_UPDATE,
> which allows a page type of VMSA to be updated, although the current patch
> series does not support using this to set the initial state of the VMSA:
> https://lore.kernel.org/lkml/20231230172351.574091-19-michael.r...@amd.com/
> I have experimented with this myself and have successfully modified the
> SEV-SNP kernel patches to support directly setting the VMSA from QEMU.
> 
> On the other hand, I have also verified that I can indeed measure an IGVM file
> loaded using the VMSA synchronisation method currently employed and get a
> matching measurement from the SEV attestation report.
> 
> What would you suggest is the best way forward for this?

I'll delegate to Paolo for an opinion on the possibility of n

Re: [PATCH 7/9] i386/sev: Refactor setting of reset vector and initial CPU state

2024-03-12 Thread Roy Hopkins
On Fri, 2024-03-01 at 17:01 +, Daniel P. Berrangé wrote:
> On Tue, Feb 27, 2024 at 02:50:13PM +, Roy Hopkins wrote:
> > When an SEV guest is started, the reset vector and state are
> > extracted from metadata that is contained in the firmware volume.
> > 
> > In preparation for using IGVM to setup the initial CPU state,
> > the code has been refactored to populate vmcb_save_area for each
> > CPU which is then applied during guest startup and CPU reset.
> > 
> > Signed-off-by: Roy Hopkins 
> > ---
> >  target/i386/sev.c | 288 +-
> >  target/i386/sev.h | 110 ++
> >  2 files changed, 369 insertions(+), 29 deletions(-)
> > 
> > diff --git a/target/i386/sev.c b/target/i386/sev.c
> > index 173de91afe..d6902432fd 100644
> > --- a/target/i386/sev.c
> > +++ b/target/i386/sev.c
> > @@ -74,9 +74,7 @@ struct SevGuestState {
> >  SevState state;
> >  gchar *measurement;
> >  
> > -    uint32_t reset_cs;
> > -    uint32_t reset_ip;
> > -    bool reset_data_valid;
> > +    QTAILQ_HEAD(, SevLaunchVmsa) launch_vmsa;
> >  };
> >  
> >  #define DEFAULT_GUEST_POLICY    0x1 /* disable debug */
> > @@ -99,6 +97,12 @@ typedef struct QEMU_PACKED SevHashTableDescriptor {
> >  /* hard code sha256 digest size */
> >  #define HASH_SIZE 32
> >  
> > +/* Convert between SEV-ES VMSA and SegmentCache flags/attributes */
> > +#define FLAGS_VMSA_TO_SEGCACHE(flags) \
> > +    ((((flags) & 0xff00) << 12) | (((flags) & 0xff) << 8))
> > +#define FLAGS_SEGCACHE_TO_VMSA(flags) \
> > +    ((((flags) & 0xff00) >> 8) | (((flags) & 0xf00000) >> 12))
> > +
> >  typedef struct QEMU_PACKED SevHashTableEntry {
> >  QemuUUID guid;
> >  uint16_t len;
> > @@ -125,6 +129,15 @@ typedef struct QEMU_PACKED PaddedSevHashTable {
> >  QEMU_BUILD_BUG_ON(sizeof(PaddedSevHashTable) % 16 != 0);
> >  
> >  static SevGuestState *sev_guest;
> > +
> > +typedef struct SevLaunchVmsa {
> > +    QTAILQ_ENTRY(SevLaunchVmsa) next;
> > +
> > +    uint16_t cpu_index;
> > +    uint64_t gpa;
> > +    struct sev_es_save_area vmsa;
> > +} SevLaunchVmsa;
> > +
> >  static Error *sev_mig_blocker;
> >  
> >  static const char *const sev_fw_errlist[] = {
> > @@ -291,6 +304,149 @@ sev_guest_finalize(Object *obj)
> >  {
> >  }
> >  
> > +static void sev_apply_cpu_context(CPUState *cpu)
> > +{
> > +    SevGuestState *sev_guest = SEV_GUEST(MACHINE(qdev_get_machine())->cgs);
> > +    X86CPU *x86;
> > +    CPUX86State *env;
> > +    struct SevLaunchVmsa *launch_vmsa;
> > +
> > +    /* See if an initial VMSA has been provided for this CPU */
> > +    QTAILQ_FOREACH(launch_vmsa, &sev_guest->launch_vmsa, next)
> > +    {
> > +    if (cpu->cpu_index == launch_vmsa->cpu_index) {
> > +    x86 = X86_CPU(cpu);
> > +    env = &x86->env;
> > +
> > +    /*
> > + * Ideally we would provide the VMSA directly to kvm which would
> > + * ensure that the resulting initial VMSA measurement which is
> > + * calculated during KVM_SEV_LAUNCH_UPDATE_VMSA is calculated from
> > + * exactly what we provide here. Currently this is not possible so
> > + * we need to copy the parts of the VMSA structure that we currently
> > + * support into the CPU state.
> > + */
> 
> This sounds like it is saying that the code is not honouring
> everything in the VMSA defined by the IGVM file?
> 
> If so, that is pretty awkward. The VMSA is effectively an external
> ABI between QEMU and the guest owner (or whatever is validating
> guest attestation reports for them), and thus predictability and
> stability of this over time is critical.
> 
> We don't want the attestation process to be dependent/variable on
> the particular QEMU/KVM version, because any upgrade to QEMU/KVM
> could then alter the effective VMSA that the guest owner sees.
> 
> We've already suffered pain in this respect not long ago when the
> kernel arbitrarily changed a default setting which altered the
> VMSA it exposed, breaking existing apps that validate attestation.
> 
> What will it take to provide the full VMSA to KVM, so that we can
> guarantee to the guest owner that the VMSA for the guest is going
> to perfectly match what their IGVM defined?
> 

Yes, the fact that we have to copy the individual fields from the VMSA to
"CPUX86State" is less than ideal - a problem made worse by the fact that the
kernel does not allow direct control over some of the fields from userspace,
"sev_features" being a good example here where "SVM_SEV_FEAT_DEBUG_SWAP" is
unconditionally added by the kernel.

The kernel VMSA is at least predictable. So, although we cannot yet allow full
flexibility in providing a complete VMSA from QEMU and guarantee it will be
honoured, we could check to see if any settings conflict with those imposed by
the kernel and exit with an error if this is the case. I chose not to
implement this for this first series but could easily add a patch to supp

Re: [PATCH 7/9] i386/sev: Refactor setting of reset vector and initial CPU state

2024-03-01 Thread Daniel P. Berrangé
On Tue, Feb 27, 2024 at 02:50:13PM +, Roy Hopkins wrote:
> When an SEV guest is started, the reset vector and state are
> extracted from metadata that is contained in the firmware volume.
> 
> In preparation for using IGVM to setup the initial CPU state,
> the code has been refactored to populate vmcb_save_area for each
> CPU which is then applied during guest startup and CPU reset.
> 
> Signed-off-by: Roy Hopkins 
> ---
>  target/i386/sev.c | 288 +-
>  target/i386/sev.h | 110 ++
>  2 files changed, 369 insertions(+), 29 deletions(-)
> 
> diff --git a/target/i386/sev.c b/target/i386/sev.c
> index 173de91afe..d6902432fd 100644
> --- a/target/i386/sev.c
> +++ b/target/i386/sev.c
> @@ -74,9 +74,7 @@ struct SevGuestState {
>  SevState state;
>  gchar *measurement;
>  
> -uint32_t reset_cs;
> -uint32_t reset_ip;
> -bool reset_data_valid;
> +QTAILQ_HEAD(, SevLaunchVmsa) launch_vmsa;
>  };
>  
>  #define DEFAULT_GUEST_POLICY0x1 /* disable debug */
> @@ -99,6 +97,12 @@ typedef struct QEMU_PACKED SevHashTableDescriptor {
>  /* hard code sha256 digest size */
>  #define HASH_SIZE 32
>  
> +/* Convert between SEV-ES VMSA and SegmentCache flags/attributes */
> +#define FLAGS_VMSA_TO_SEGCACHE(flags) \
> +    ((((flags) & 0xff00) << 12) | (((flags) & 0xff) << 8))
> +#define FLAGS_SEGCACHE_TO_VMSA(flags) \
> +    ((((flags) & 0xff00) >> 8) | (((flags) & 0xf00000) >> 12))
> +
>  typedef struct QEMU_PACKED SevHashTableEntry {
>  QemuUUID guid;
>  uint16_t len;
> @@ -125,6 +129,15 @@ typedef struct QEMU_PACKED PaddedSevHashTable {
>  QEMU_BUILD_BUG_ON(sizeof(PaddedSevHashTable) % 16 != 0);
>  
>  static SevGuestState *sev_guest;
> +
> +typedef struct SevLaunchVmsa {
> +QTAILQ_ENTRY(SevLaunchVmsa) next;
> +
> +uint16_t cpu_index;
> +uint64_t gpa;
> +struct sev_es_save_area vmsa;
> +} SevLaunchVmsa;
> +
>  static Error *sev_mig_blocker;
>  
>  static const char *const sev_fw_errlist[] = {
> @@ -291,6 +304,149 @@ sev_guest_finalize(Object *obj)
>  {
>  }
>  
> +static void sev_apply_cpu_context(CPUState *cpu)
> +{
> +SevGuestState *sev_guest = SEV_GUEST(MACHINE(qdev_get_machine())->cgs);
> +X86CPU *x86;
> +CPUX86State *env;
> +struct SevLaunchVmsa *launch_vmsa;
> +
> +/* See if an initial VMSA has been provided for this CPU */
> +QTAILQ_FOREACH(launch_vmsa, &sev_guest->launch_vmsa, next)
> +{
> +if (cpu->cpu_index == launch_vmsa->cpu_index) {
> +x86 = X86_CPU(cpu);
> +env = &x86->env;
> +
> +/*
> + * Ideally we would provide the VMSA directly to kvm which would
> + * ensure that the resulting initial VMSA measurement which is
> + * calculated during KVM_SEV_LAUNCH_UPDATE_VMSA is calculated from
> + * exactly what we provide here. Currently this is not possible so
> + * we need to copy the parts of the VMSA structure that we currently
> + * support into the CPU state.
> + */

This sounds like it is saying that the code is not honouring
everything in the VMSA defined by the IGVM file?

If so, that is pretty awkward. The VMSA is effectively an external
ABI between QEMU and the guest owner (or whatever is validating
guest attestation reports for them), and thus predictability and
stability of this over time is critical.

We don't want the attestation process to be dependent/variable on
the particular QEMU/KVM version, because any upgrade to QEMU/KVM
could then alter the effective VMSA that the guest owner sees.

We've already suffered pain in this respect not long ago when the
kernel arbitrarily changed a default setting which altered the
VMSA it exposed, breaking existing apps that validate attestation.

What will it take to provide the full VMSA to KVM, so that we can
guarantee to the guest owner that the VMSA for the guest is going
to perfectly match what their IGVM defined?

> +cpu_load_efer(env, launch_vmsa->vmsa.efer);
> +cpu_x86_update_cr4(env, launch_vmsa->vmsa.cr4);
> +cpu_x86_update_cr0(env, launch_vmsa->vmsa.cr0);
> +cpu_x86_update_cr3(env, launch_vmsa->vmsa.cr3);
> +
> +cpu_x86_load_seg_cache(
> +env, R_CS, launch_vmsa->vmsa.cs.selector,
> +launch_vmsa->vmsa.cs.base, launch_vmsa->vmsa.cs.limit,
> +FLAGS_VMSA_TO_SEGCACHE(launch_vmsa->vmsa.cs.attrib));
> +cpu_x86_load_seg_cache(
> +env, R_DS, launch_vmsa->vmsa.ds.selector,
> +launch_vmsa->vmsa.ds.base, launch_vmsa->vmsa.ds.limit,
> +FLAGS_VMSA_TO_SEGCACHE(launch_vmsa->vmsa.ds.attrib));
> +cpu_x86_load_seg_cache(
> +env, R_ES, launch_vmsa->vmsa.es.selector,
> +launch_vmsa->vmsa.es.base, launch_vmsa->vmsa.es.limit,
> +FLAGS_VMSA_TO_SEGCACHE(launch_vmsa->vmsa.es.attrib));

[PATCH 7/9] i386/sev: Refactor setting of reset vector and initial CPU state

2024-02-27 Thread Roy Hopkins
When an SEV guest is started, the reset vector and state are
extracted from metadata that is contained in the firmware volume.

In preparation for using IGVM to setup the initial CPU state,
the code has been refactored to populate vmcb_save_area for each
CPU which is then applied during guest startup and CPU reset.

Signed-off-by: Roy Hopkins 
---
 target/i386/sev.c | 288 +-
 target/i386/sev.h | 110 ++
 2 files changed, 369 insertions(+), 29 deletions(-)

diff --git a/target/i386/sev.c b/target/i386/sev.c
index 173de91afe..d6902432fd 100644
--- a/target/i386/sev.c
+++ b/target/i386/sev.c
@@ -74,9 +74,7 @@ struct SevGuestState {
 SevState state;
 gchar *measurement;
 
-uint32_t reset_cs;
-uint32_t reset_ip;
-bool reset_data_valid;
+QTAILQ_HEAD(, SevLaunchVmsa) launch_vmsa;
 };
 
 #define DEFAULT_GUEST_POLICY0x1 /* disable debug */
@@ -99,6 +97,12 @@ typedef struct QEMU_PACKED SevHashTableDescriptor {
 /* hard code sha256 digest size */
 #define HASH_SIZE 32
 
+/* Convert between SEV-ES VMSA and SegmentCache flags/attributes */
+#define FLAGS_VMSA_TO_SEGCACHE(flags) \
+    ((((flags) & 0xff00) << 12) | (((flags) & 0xff) << 8))
+#define FLAGS_SEGCACHE_TO_VMSA(flags) \
+    ((((flags) & 0xff00) >> 8) | (((flags) & 0xf00000) >> 12))
+
 typedef struct QEMU_PACKED SevHashTableEntry {
 QemuUUID guid;
 uint16_t len;
@@ -125,6 +129,15 @@ typedef struct QEMU_PACKED PaddedSevHashTable {
 QEMU_BUILD_BUG_ON(sizeof(PaddedSevHashTable) % 16 != 0);
 
 static SevGuestState *sev_guest;
+
+typedef struct SevLaunchVmsa {
+QTAILQ_ENTRY(SevLaunchVmsa) next;
+
+uint16_t cpu_index;
+uint64_t gpa;
+struct sev_es_save_area vmsa;
+} SevLaunchVmsa;
+
 static Error *sev_mig_blocker;
 
 static const char *const sev_fw_errlist[] = {
@@ -291,6 +304,149 @@ sev_guest_finalize(Object *obj)
 {
 }
 
+static void sev_apply_cpu_context(CPUState *cpu)
+{
+SevGuestState *sev_guest = SEV_GUEST(MACHINE(qdev_get_machine())->cgs);
+X86CPU *x86;
+CPUX86State *env;
+struct SevLaunchVmsa *launch_vmsa;
+
+/* See if an initial VMSA has been provided for this CPU */
+QTAILQ_FOREACH(launch_vmsa, &sev_guest->launch_vmsa, next)
+{
+if (cpu->cpu_index == launch_vmsa->cpu_index) {
+x86 = X86_CPU(cpu);
+env = &x86->env;
+
+/*
+ * Ideally we would provide the VMSA directly to kvm which would
+ * ensure that the resulting initial VMSA measurement which is
+ * calculated during KVM_SEV_LAUNCH_UPDATE_VMSA is calculated from
+ * exactly what we provide here. Currently this is not possible so
+ * we need to copy the parts of the VMSA structure that we currently
+ * support into the CPU state.
+ */
+cpu_load_efer(env, launch_vmsa->vmsa.efer);
+cpu_x86_update_cr4(env, launch_vmsa->vmsa.cr4);
+cpu_x86_update_cr0(env, launch_vmsa->vmsa.cr0);
+cpu_x86_update_cr3(env, launch_vmsa->vmsa.cr3);
+
+cpu_x86_load_seg_cache(
+env, R_CS, launch_vmsa->vmsa.cs.selector,
+launch_vmsa->vmsa.cs.base, launch_vmsa->vmsa.cs.limit,
+FLAGS_VMSA_TO_SEGCACHE(launch_vmsa->vmsa.cs.attrib));
+cpu_x86_load_seg_cache(
+env, R_DS, launch_vmsa->vmsa.ds.selector,
+launch_vmsa->vmsa.ds.base, launch_vmsa->vmsa.ds.limit,
+FLAGS_VMSA_TO_SEGCACHE(launch_vmsa->vmsa.ds.attrib));
+cpu_x86_load_seg_cache(
+env, R_ES, launch_vmsa->vmsa.es.selector,
+launch_vmsa->vmsa.es.base, launch_vmsa->vmsa.es.limit,
+FLAGS_VMSA_TO_SEGCACHE(launch_vmsa->vmsa.es.attrib));
+cpu_x86_load_seg_cache(
+env, R_FS, launch_vmsa->vmsa.fs.selector,
+launch_vmsa->vmsa.fs.base, launch_vmsa->vmsa.fs.limit,
+FLAGS_VMSA_TO_SEGCACHE(launch_vmsa->vmsa.fs.attrib));
+cpu_x86_load_seg_cache(
+env, R_GS, launch_vmsa->vmsa.gs.selector,
+launch_vmsa->vmsa.gs.base, launch_vmsa->vmsa.gs.limit,
+FLAGS_VMSA_TO_SEGCACHE(launch_vmsa->vmsa.gs.attrib));
+cpu_x86_load_seg_cache(
+env, R_SS, launch_vmsa->vmsa.ss.selector,
+launch_vmsa->vmsa.ss.base, launch_vmsa->vmsa.ss.limit,
+FLAGS_VMSA_TO_SEGCACHE(launch_vmsa->vmsa.ss.attrib));
+
+env->gdt.base = launch_vmsa->vmsa.gdtr.base;
+env->gdt.limit = launch_vmsa->vmsa.gdtr.limit;
+env->idt.base = launch_vmsa->vmsa.idtr.base;
+env->idt.limit = launch_vmsa->vmsa.idtr.limit;
+
+env->regs[R_EAX] = launch_vmsa->vmsa.rax;
+env->regs[R_ECX] = launch_vmsa->vmsa.rcx;
+env->regs[R_EDX] = launch_vmsa->vmsa.rdx;
+env->regs[R_EBX] = launch_vmsa->vmsa.rbx;
+