On Wed, 18 Sep 2019, at 01:44, Peter Maydell wrote:
> On Thu, 12 Sep 2019 at 07:56, Andrew Jeffery <and...@aj.id.au> wrote:
> > diff --git a/target/arm/helper.c b/target/arm/helper.c
> > index 507026c9154b..09975704d47f 100644
> > --- a/target/arm/helper.c
> > +++ b/target/arm/helper.c
> > @@ -2409,7 +2409,21 @@ static CPAccessResult gt_stimer_access(CPUARMState *env,
> >
> >  static uint64_t gt_get_countervalue(CPUARMState *env)
> >  {
> > -    return qemu_clock_get_ns(QEMU_CLOCK_VIRTUAL) / GTIMER_SCALE;
> > +    uint64_t effective;
> > +
> > +    /*
> > +     * Deal with quantized clock scaling by calculating the effective frequency
> > +     * in terms of the timer scale.
> > +     */
> > +    if (env->cp15.c14_cntfrq <= NANOSECONDS_PER_SECOND) {
> > +        uint32_t scale = NANOSECONDS_PER_SECOND / env->cp15.c14_cntfrq;
> > +        effective = NANOSECONDS_PER_SECOND / scale;
> > +    } else {
> > +        effective = NANOSECONDS_PER_SECOND;
> > +    }
> 
> What is this doing, and why didn't we need to do it before?

I'll fix all of your other comments, but I think this question in
particular is best answered by turning the patch into a short series.
It's a bit of a complex story. I'll try to split what's going on into
smaller steps so what I've done above is better documented. The short
story is that there is an asymmetry between converting time to ticks
and ticks to time, which leads us to schedule timers in the past for
most CNTFRQ values if we don't do something like the above.
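
To make that concrete, here's a rough stand-alone sketch (not the QEMU
code; the 24MHz CNTFRQ is just an example value) of the round trip:
converting time to ticks with the nominal frequency but ticks back to
time with the truncated scale puts a "now" deadline ~16ms in the past,
while using the effective frequency on both sides bounds the error to
under one tick:

/*
 * Stand-alone illustration of the time->ticks / ticks->time asymmetry.
 * Assumes a 24MHz CNTFRQ and nanosecond host timers; not QEMU code.
 */
#include <stdint.h>
#include <stdio.h>

#define NANOSECONDS_PER_SECOND 1000000000ull

int main(void)
{
    uint64_t cntfrq = 24000000;                       /* nominal 24MHz CNTFRQ */
    uint64_t scale = NANOSECONDS_PER_SECOND / cntfrq; /* 41, truncated from 41.66 */
    uint64_t now_ns = NANOSECONDS_PER_SECOND;         /* pretend "now" is t = 1s */

    /* time -> ticks using the nominal frequency ... */
    uint64_t ticks = now_ns * cntfrq / NANOSECONDS_PER_SECOND;
    /* ... but ticks -> time using the truncated scale: */
    uint64_t sched_ns = ticks * scale;

    /* sched_ns == 984000000: a deadline of "now" lands 16ms in the past */
    printf("now=%llu ns, rescheduled=%llu ns, error=%llu ns\n",
           (unsigned long long)now_ns, (unsigned long long)sched_ns,
           (unsigned long long)(now_ns - sched_ns));

    /*
     * Deriving the effective frequency from the truncated scale and using
     * it for both directions (as the hunk above does) shrinks the
     * round-trip error to less than one tick instead of milliseconds.
     */
    uint64_t effective = NANOSECONDS_PER_SECOND / scale; /* ~24.39MHz */
    uint64_t ticks2 = now_ns * effective / NANOSECONDS_PER_SECOND;
    printf("effective=%llu Hz, round trip=%llu ns\n",
           (unsigned long long)effective,
           (unsigned long long)(ticks2 * scale));
    return 0;
}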

Andrew
