On 18.07.2024 18:54, Alejandro Vallejo wrote:
> On Thu Jul 18, 2024 at 12:49 PM BST, Jan Beulich wrote:
>> On 09.07.2024 17:52, Alejandro Vallejo wrote:
>>> --- a/xen/arch/x86/include/asm/domain.h
>>> +++ b/xen/arch/x86/include/asm/domain.h
>>> @@ -591,12 +591,7 @@ struct pv_vcpu
>>>  
>>>  struct arch_vcpu
>>>  {
>>> -    /*
>>> -     * guest context (mirroring struct vcpu_guest_context) common
>>> -     * between pv and hvm guests
>>> -     */
>>> -
>>> -    void              *fpu_ctxt;
>>> +    /* Fixed point registers */
>>>      struct cpu_user_regs user_regs;
>>
>> Not exactly, no. Selector registers are there as well for example, which
>> I wouldn't consider "fixed point" ones. I wonder why the existing comment
>> cannot simply be kept, perhaps extended to mention that fpu_ctxt now lives
>> elsewhere.
> 
> Would you prefer "general purpose registers"? It's not quite that either, but
> it's arguably closer. I can part with the comment altogether but I'd rather
> leave a token amount of information to say "non-FPU register state" (but not
> that, because that would be a terrible description). 
> 
> I'd rather update it to something that better reflects reality, as I found it
> quite misleading when reading through. I initially thought it may have been
> related to struct layout (as in C-style single-level inheritance), but as it
> turns out it's merely establishing a vague relationship between arch_vcpu and
> vcpu_guest_context. I can believe once upon a time the relationship was closer
> than it is now, but with the guest context missing AVX state, MSR state and
> other bits and pieces I thought it better to avoid such confusion for future
> navigators down the line, so I limited its description to the line below.

As said, I'd prefer if you amended the existing comment. Properly describing
what's in cpu_user_regs isn't quite as easy in only very few words. Neither
"fixed point registers" nor "general purpose registers" really covers it. And
I'd really like to avoid having potentially confusing comments.

>>> --- a/xen/arch/x86/xstate.c
>>> +++ b/xen/arch/x86/xstate.c
>>> @@ -507,9 +507,16 @@ int xstate_alloc_save_area(struct vcpu *v)
>>>      unsigned int size;
>>>  
>>>      if ( !cpu_has_xsave )
>>> -        return 0;
>>> -
>>> -    if ( !is_idle_vcpu(v) || !cpu_has_xsavec )
>>> +    {
>>> +        /*
>>> +         * This is bigger than FXSAVE_SIZE by 64 bytes, but it helps treating
>>> +         * the FPU state uniformly as an XSAVE buffer even if XSAVE is not
>>> +         * available in the host. Note the alignment restrictions of the XSAVE
>>> +         * area are stricter than those of the FXSAVE area.
>>> +         */
>>> +        size = XSTATE_AREA_MIN_SIZE;
>>
>> What exactly would break if just (a little over) 512 bytes worth were
>> allocated when there's no XSAVE? If it was exactly 512, something like
>> xstate_all() would need to apply a little more care, I guess. Yet for that
>> having just always-zero xstate_bv and xcomp_bv there would already suffice
>> (e.g. using offsetof(..., xsave_hdr.reserved) here, to cover further fields
>> gaining meaning down the road). Remember that due to xmalloc() overhead and
>> the 64-byte-aligned requirement, you can only have 6 of them in a page the
>> way you do it, when the alternative way 7 would fit (if I got my math right).
> 
> I'm slightly confused.
> 
> XSTATE_AREA_MIN_SIZE is already 512 + 64 to account for the XSAVE header,
> including its reserved fields. Did you mean something else?

No, I didn't. I've in fact commented on it precisely because it is the value
you name. That's larger than necessary, and when suitably shrunk - as said -
one more of these structures could fit in a page (assuming they were all
allocated back-to-back, which isn't quite true right now, but other
intervening allocations may or may not take space from the same page, so
chances are still that the ones here all might come from one page as long as
there's space left).

>     #define XSAVE_HDR_SIZE            64
>     #define XSAVE_SSE_OFFSET          160
>     #define XSTATE_YMM_SIZE           256
>     #define FXSAVE_SIZE               512
>     #define XSAVE_HDR_OFFSET          FXSAVE_SIZE
>     #define XSTATE_AREA_MIN_SIZE      (FXSAVE_SIZE + XSAVE_HDR_SIZE)
> 
> Part of the rationale is to simplify other bits of code that are currently
> conditionalized on v->xsave_header being NULL. And for that the full xsave
> header must be present (even if unused because !cpu_xsave)

But that's my point: The reserved[] part doesn't need to be there; it's
not being accessed anywhere, I don't think.

Jan
