On 06/27 07:43:19, Savolainen, Petri (Nokia - FI/Espoo) wrote:
> 
> 
> > -----Original Message-----
> > From: lng-odp [mailto:lng-odp-boun...@lists.linaro.org] On Behalf Of Brian
> > Brooks
> > Sent: Monday, June 26, 2017 9:21 PM
> > To: lng-odp@lists.linaro.org
> > Subject: [lng-odp] [PATCH] linux-gen: time: use true hz
> > 
> > Use the true hz value instead of dividing by 10.
> > 
> > The architected timer in ARM SoCs generally has a lower
> > frequency than the IA TSC. Dividing the frequency by ten
> > is detrimental in that case, and it causes time validation
> > failures on ARM systems where the architected timer runs
> > at 50MHz.
> > 
> > Signed-off-by: Brian Brooks <brian.bro...@arm.com>
> > ---
> >  platform/linux-generic/odp_time.c | 4 +---
> >  1 file changed, 1 insertion(+), 3 deletions(-)
> > 
> > diff --git a/platform/linux-generic/odp_time.c b/platform/linux-
> > generic/odp_time.c
> > index 2bbe5666..a831cc51 100644
> > --- a/platform/linux-generic/odp_time.c
> > +++ b/platform/linux-generic/odp_time.c
> > @@ -102,9 +102,7 @@ static inline odp_time_t time_hw_cur(void)
> > 
> >  static inline uint64_t time_hw_res(void)
> >  {
> > -   /* Promise a bit lower resolution than average cycle counter
> > -    * frequency */
> > -   return global.hw_freq_hz / 10;
> > +   return global.hw_freq_hz;
> >  }
> > 
> >  static inline uint64_t time_hw_to_ns(odp_time_t time)
> > --
> > 2.13.1
> 
> 
> Did you test this change on x86? It should cause test failures there.

Yes, the tests passed on a 2.8GHz x86.

> You should fix the time test suite also. The suite converts Hz resolution to 
> ns resolution here:
> 
> static void time_test_res(time_res_cb time_res, uint64_t *res)
> {
>       uint64_t rate;
> 
>       rate = time_res();
>       CU_ASSERT(rate > MIN_TIME_RATE);
>       CU_ASSERT(rate < MAX_TIME_RATE);
> 
>       *res = ODP_TIME_SEC_IN_NS / rate;
>       if (ODP_TIME_SEC_IN_NS % rate)
>               (*res)++;
> }
> 
> When TSC resolution is >1GHz, the ns resolution is zero and there is no 
> margin for rounding errors. 

Actually, ns resolution is one, not zero: ODP_TIME_SEC_IN_NS / rate truncates
to zero for rates above 1GHz, and the nonzero remainder then bumps *res up
to one.
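
For the curious, a minimal standalone sketch of that rounding, assuming
ODP_TIME_SEC_IN_NS is one second in nanoseconds (10^9, as in the ODP API)
and plugging in the 2.8GHz rate from the x86 box I tested on:

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define ODP_TIME_SEC_IN_NS UINT64_C(1000000000)

int main(void)
{
	uint64_t rate = UINT64_C(2800000000); /* 2.8GHz TSC */
	uint64_t res = ODP_TIME_SEC_IN_NS / rate; /* truncates to 0 */

	/* remainder is nonzero (1e9 % 2.8e9 == 1e9), so round up */
	if (ODP_TIME_SEC_IN_NS % rate)
		res++;

	/* prints "ns resolution = 1" */
	printf("ns resolution = %" PRIu64 "\n", res);
	return 0;
}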

> static void time_test_conversion(time_from_ns_cb time_from_ns, uint64_t res)
> {
>       uint64_t ns1, ns2;
>       odp_time_t time;
>       uint64_t upper_limit, lower_limit;
> 
>       ns1 = 100;
>       time = time_from_ns(ns1);
> 
>       ns2 = odp_time_to_ns(time);
> 
>       /* need to check within arithmetic tolerance that the same
>        * value in ns is returned after conversions */
>       upper_limit = ns1 + res;
>       lower_limit = ns1 - res;
>       CU_ASSERT((ns2 <= upper_limit) && (ns2 >= lower_limit)); 
> 
> 
> The error margin needs to be changed from the ns resolution to something
> else. I think it should be just a percentage of the original value (using
> floating point math), e.g. +-20%, throughout the test suite.
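
If the suite does move to a relative margin, I'd expect it to look something
like this (a sketch only -- the 20% and the floating point math are per your
suggestion, reusing the names from time_test_conversion() above):

	/* replace the fixed +-res margin with a relative tolerance */
	double tolerance = 0.2; /* +-20% */
	uint64_t upper_limit = (uint64_t)((double)ns1 * (1.0 + tolerance));
	uint64_t lower_limit = (uint64_t)((double)ns1 * (1.0 - tolerance));

	CU_ASSERT((ns2 <= upper_limit) && (ns2 >= lower_limit));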

That said, as long as the tick values read from the CPU are converted to
nanoseconds using the true frequency of the tick counter (which the code in
odp_time.c does), I think reporting a resolution of 1ns for counters faster
than 1GHz might be fine.
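
To be clear, by "true frequency" conversion I mean something like the sketch
below (ticks_to_ns is a hypothetical stand-in for time_hw_to_ns() in
odp_time.c; splitting out whole seconds keeps the 10^9 multiplication from
overflowing 64 bits at realistic counter frequencies):

static inline uint64_t ticks_to_ns(uint64_t ticks, uint64_t hz)
{
	uint64_t sec = ticks / hz;  /* whole seconds */
	uint64_t rem = ticks % hz;  /* leftover ticks, < hz */

	/* rem * 10^9 stays below 2^64 for any hz under ~18.4GHz */
	return sec * ODP_TIME_SEC_IN_NS + rem * ODP_TIME_SEC_IN_NS / hz;
}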

> -Petri
> 
> 
