>>> On 17.01.16 at 22:58, <haozhong.zh...@intel.com> wrote:
> +u64 hvm_get_tsc_scaling_ratio(u32 gtsc_khz)
> +{
> +    u64 ratio;
> +
> +    if ( !hvm_tsc_scaling_supported )
> +        return 0;
> +
> +    /*
> +     * The multiplication of the first two terms may overflow a 64-bit
> +     * integer, so use mul_u64_u32_div() instead to keep precision.
> +     */
> +    ratio = mul_u64_u32_div(1ULL << hvm_funcs.tsc_scaling_ratio_frac_bits,
> +                            gtsc_khz, cpu_khz);

Is this the only use for this new math64 function? If so, I don't
see the point of adding that function, because (leaving the limited
number of significant bits aside) the above simply is

(gtsc_khz << hvm_funcs.tsc_scaling_ratio_frac_bits) / cpu_khz

which can be had without any multiplication. Personally, if this is
indeed the only use, I'd favor converting the above to inline
assembly here instead of adding that helper function (just like we
have a number of asm()-s in x86/time.c for similar reasons).

> +void hvm_setup_tsc_scaling(struct vcpu *v)
> +{
> +    v->arch.hvm_vcpu.tsc_scaling_ratio =
> +        hvm_get_tsc_scaling_ratio(v->domain->arch.tsc_khz);
> +}

So why again is this per-vCPU setup of per-vCPU state when it
only depends on a per-domain input? If this were per-domain, its
setup could live where it belongs - in arch_hvm_load().
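
I.e. roughly (field name and call site purely illustrative, not
taken from the patch):

    /* Once per domain, e.g. from arch_hvm_load(): */
    if ( hvm_tsc_scaling_supported && !d->arch.vtsc )
        d->arch.hvm_domain.tsc_scaling_ratio =
            hvm_get_tsc_scaling_ratio(d->arch.tsc_khz);

rather than once per vCPU from hvm_setup_tsc_scaling().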

> @@ -5504,6 +5536,9 @@ void hvm_vcpu_reset_state(struct vcpu *v, uint16_t cs, uint16_t ip)
>      hvm_set_segment_register(v, x86_seg_gdtr, &reg);
>      hvm_set_segment_register(v, x86_seg_idtr, &reg);
>  
> +    if ( hvm_tsc_scaling_supported && !d->arch.vtsc )
> +        hvm_setup_tsc_scaling(v);

Could you remind me why this is needed? What guest state would have
changed to make this necessary? Is this perhaps just because it's
per-vCPU instead of per-domain?

Jan

