On Wed, Jun 24, 2015 at 01:34:07PM +0200, Ola Liljedahl wrote:
> On 10 June 2015 at 16:08, Jerin Jacob <jerin.ja...@caviumnetworks.com>
> wrote:
> 
> > This option specifies the minimum number of ticks
> > (the delta between the future tick and the current tick) required to
> > successfully reset/set the timer.
> >
> > Some hardware timer implementations need at least a two-tick gap between
> > the "current tick" and the "future tick" to cancel/set a timer; make the
> > timer test case aware of this behavior by introducing CONFIG_MIN_TICK.
> >
> > Signed-off-by: Jerin Jacob <jerin.ja...@caviumnetworks.com>
> > ---
> >  test/validation/odp_timer.c | 57 ++++++++++++++++++++++++++++-----------------
> >  1 file changed, 36 insertions(+), 21 deletions(-)
> >
> > diff --git a/test/validation/odp_timer.c b/test/validation/odp_timer.c
> > index 5dfc06a..d13c12e 100644
> > --- a/test/validation/odp_timer.c
> > +++ b/test/validation/odp_timer.c
> > @@ -242,10 +242,11 @@ static void handle_tmo(odp_event_t ev, bool stale, uint64_t prev_tick)
> >                 if (ttp != NULL && ttp->tick != TICK_INVALID)
> >                         CU_FAIL("Stale timeout for active timer");
> >         } else {
> > -               if (!odp_timeout_fresh(tmo))
> > +               if (!odp_timeout_fresh(tmo) && ttp->tick != TICK_INVALID)
> >
> Why are we checking ttp->tick as well?
> Checking that ttp->tick != TICK_INVALID for a fresh timeout should perhaps
> be a separate check, not combined with checking the return value of
> odp_timeout_fresh().
> 
> 
> 
> >                         CU_FAIL("Wrong status (stale) for fresh timeout");
> >                 /* Fresh timeout => local timer must have matching tick */
> > -               if (ttp != NULL && ttp->tick != tick) {
> > +               if (ttp != NULL && ttp->tick != TICK_INVALID &&
> > +                   ttp->tick != tick) {
> >
> Basically the same comment here.
> 
> >                         LOG_DBG("Wrong tick: expected %"PRIu64" actual %"PRIu64"\n",
> >                                 ttp->tick, tick);
> >                         CU_FAIL("odp_timeout_tick() wrong tick");
> > @@ -268,6 +269,17 @@ static void handle_tmo(odp_event_t ev, bool stale, uint64_t prev_tick)
> >         }
> >  }
> >
> > +#define NAME "timer_pool"
> > +#define RES (10 * ODP_TIME_MSEC / 3)
> > +#define MIN (10 * ODP_TIME_MSEC / 3)
> > +#define MAX (1000000 * ODP_TIME_MSEC)
> > +
> > +/*
> > + * Minimum tick(s) (delta between future tick and current tick)
> > + * required to successfully reset/set the timer
> > + */
> > +#define CONFIG_MIN_TICK 1
> >
> Is this a candidate for a proper ODP API?
> Something you could retrieve from
> odp_timer_pool_info()/odp_timer_pool_info_t?
> It seems applications should know this value in order not to request
> timeouts too close in time and either fail or have to repeat the timer set
> operation with increasing tick values, which causes unnecessary overhead.
> 
> > +

I agree. It is better to expose, as part of the API, the minimum number
of ticks from the current tick required to program the HW. I also agree
that the "min_tmo" member of odp_timer_pool_param_t is not useful as is;
it really overlaps with the resolution. I guess it could be made an
output parameter in the info structure, but I would like to return the
minimum value in ticks, not in ns, since the HW restriction is on ticks
only, not in the time domain.
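
A minimal sketch of what that could look like. Note that the min_tick
field below is hypothetical; it does not exist in the current
odp_timer_pool_info_t and only illustrates the proposal:

/* Hypothetical extension of odp_timer_pool_info_t */
typedef struct {
	odp_timer_pool_param_t param; /* Copy of creation parameters */
	uint32_t cur_timers;          /* Number of currently allocated timers */
	uint32_t hwm_timers;          /* High watermark of allocated timers */
	const char *name;             /* Name of timer pool */
	uint64_t min_tick;            /* Min delta (in ticks) from the current
	                               * tick accepted by timer set/reset */
} odp_timer_pool_info_t;

The test could then query the value instead of hardcoding CONFIG_MIN_TICK:

odp_timer_pool_info_t tpinfo;
if (odp_timer_pool_info(tp, &tpinfo) == 0)
	min_tick = tpinfo.min_tick;

This keeps the restriction in the tick domain, which matches the HW
limitation described above.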

> >  /* @private Worker thread entrypoint which performs timer alloc/set/cancel/free
> >   * tests */
> >  static void *worker_entrypoint(void *arg TEST_UNUSED)
> > @@ -305,7 +317,7 @@ static void *worker_entrypoint(void *arg TEST_UNUSED)
> >         /* Initial set all timers with a random expiration time */
> >         uint32_t nset = 0;
> >         for (i = 0; i < NTIMERS; i++) {
> > -               uint64_t tck = odp_timer_current_tick(tp) + 1 +
> > +               uint64_t tck = odp_timer_current_tick(tp) + CONFIG_MIN_TICK +
> >                                odp_timer_ns_to_tick(tp,
> >                                                     (rand_r(&seed) % RANGE_MS)
> >                                                     * 1000000ULL);
> > @@ -329,9 +341,9 @@ static void *worker_entrypoint(void *arg TEST_UNUSED)
> >         for (ms = 0; ms < 7 * RANGE_MS / 10; ms++) {
> >                 odp_event_t ev;
> >                 while ((ev = odp_queue_deq(queue)) != ODP_EVENT_INVALID) {
> > -                       /* Subtract one from prev_tick to allow for timeouts
> > -                        * to be delivered a tick late */
> > -                       handle_tmo(ev, false, prev_tick - 1);
> > +                       /* Subtract CONFIG_MIN_TICK from prev_tick to allow
> > +                        * for timeouts to be delivered a tick late */
> > +                       handle_tmo(ev, false, prev_tick - CONFIG_MIN_TICK);
> >
> I don't think the potential lateness of a timeout is related to the minimum
> relative tick length.
> The lateness is due to the timer (whether HW or SW) executing
> asynchronously from the clients, so (assuming that timers actually expire
> and timeouts are delivered exactly on time) the actual reception and
> processing of a timeout in a client depends on e.g. the OS scheduler.
> Timeout processing in clients could be delayed even longer if the OS is
> not real-time enough (client threads being swapped out); this does happen
> intermittently on a loaded machine.
> CONFIG_MIN_TICK just tells the clients that there are some restrictions on
> the requested tick when resetting a timer. I expect timers to expire on the
> requested tick (if the reset operation was successful) anyway.

What if the reset operation is not successful due to the TOO NEAR
case (I was hitting that case)? I.e. the following check in handle_tmo():

if (tick < prev_tick) {
	LOG_DBG("Too late tick: %"PRIu64" prev_tick %"PRIu64"\n",
		tick, prev_tick);
}

Since on my platform CONFIG_MIN_TICK is 2, I was getting an off-by-one
prev_tick value in the TOO NEAR cancel case.
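
To spell out my reading of that failure (the helper below is illustrative
only, assuming CONFIG_MIN_TICK == 2 as on my platform):

/* A cancel can fail while the timer is already within CONFIG_MIN_TICK
 * ticks of its expiration tick. The timeout is then delivered with a
 * tick that can be up to CONFIG_MIN_TICK behind the last observed
 * current tick, so a fixed slack of one tick gives a false "too late".
 */
static void check_lateness(uint64_t tick, uint64_t prev_tick)
{
	/* Allow CONFIG_MIN_TICK ticks of slack instead of one */
	if (tick + CONFIG_MIN_TICK < prev_tick)
		LOG_DBG("Too late tick: %"PRIu64" prev_tick %"PRIu64"\n",
			tick, prev_tick);
}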


> 
> This change should be reverted.
> 
> 
> 
> >                         nrcv++;
> >                 }
> >                 prev_tick = odp_timer_current_tick(tp);
> > @@ -340,10 +352,11 @@ static void *worker_entrypoint(void *arg TEST_UNUSED)
> >                     (rand_r(&seed) % 2 == 0)) {
> >                         /* Timer active, cancel it */
> >                         rc = odp_timer_cancel(tt[i].tim, &tt[i].ev);
> > -                       if (rc != 0)
> > +                       if (rc != 0) {
> > +                               tt[i].tick = TICK_INVALID;
> >                                 /* Cancel failed, timer already expired */
> >                                 ntoolate++;
> > -                       tt[i].tick = TICK_INVALID;
> > +                       }
> >
> Why did you move the assignment into the if branch?
> If we (successfully) cancel a timer, the expected tick (tt[i].tick) should
> be set to TICK_INVALID to reflect that we don't expect a timeout for this
> tick. If the timer implementation works OK, you will not receive the
> corresponding timeout, but just to be able to catch a timeout that was
> erroneously delivered even though the cancel claimed to succeed, we want
> to update tt[i].tick so that we can catch such a situation.

This change should be read together with the first two changes above in
the handle_tmo() function. The problem I have here is that the cancel
will fail if the future tick is TOO NEAR. In that case the event will be
dispatched anyway, and I have no control over it, so I am using
TICK_INVALID to mark the event as invalid.

Moving the assignment into the branch makes sure (combined with the
handle_tmo() changes) that erroneous event dispatches after a successful
cancel will show up as an assert in handle_tmo().
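
In other words, a commented restatement of the hunk above (no new logic,
just the intent of each branch spelled out):

rc = odp_timer_cancel(tt[i].tim, &tt[i].ev);
if (rc != 0) {
	/* Cancel failed: the timer was TOO NEAR expiry (or already
	 * expired) and the timeout will be dispatched anyway, so mark
	 * the expected tick invalid and let handle_tmo() skip the
	 * freshness/tick checks for this uncontrollable event. */
	tt[i].tick = TICK_INVALID;
	ntoolate++;
}
/* On a successful cancel, tt[i].tick keeps its value, so a timeout
 * that is erroneously delivered anyway will trip one of the CU_FAIL
 * checks in handle_tmo(). */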

> 
> >                         ncancel++;
> >                 } else {
> >                         if (tt[i].ev != ODP_EVENT_INVALID)
> > @@ -352,8 +365,10 @@ static void *worker_entrypoint(void *arg TEST_UNUSED)
> >                         else
> >                                 /* Timer active => reset */
> >                                 nreset++;
> > -                       uint64_t tck = 1 + odp_timer_ns_to_tick(tp,
> > -                                      (rand_r(&seed) % RANGE_MS) * 1000000ULL);
> > +                       uint64_t tck = CONFIG_MIN_TICK +
> > +                              odp_timer_ns_to_tick(tp,
> > +                                                   (rand_r(&seed) % RANGE_MS)
> > +                                                   * 1000000ULL);
> >
> Yes, this is the real usage of CONFIG_MIN_TICK.
> 
> 
> >                         odp_timer_set_t rc;
> >                         uint64_t cur_tick;
> >                         /* Loop until we manage to read cur_tick and set a
> > @@ -403,13 +418,17 @@ static void *worker_entrypoint(void *arg TEST_UNUSED)
> >         LOG_DBG("Thread %u: %"PRIu32" stale timeout(s) after odp_timer_free()\n",
> >                 thr, nstale);
> >
> > -       /* Delay some more to ensure timeouts for expired timers can be
> > -        * received */
> > -       struct timespec ts;
> > -       ts.tv_sec = 0;
> > -       ts.tv_nsec = 1000000; /* 1ms */
> > -       if (nanosleep(&ts, NULL) < 0)
> > -               CU_FAIL_FATAL("nanosleep failed");
> > +       for (i = 0; i < CONFIG_MIN_TICK; i++) {
> > +               /* Delay some more tick(s) to ensure
> > +                * timeouts for expired timers can be received */
> > +               struct timespec ts;
> > +
> > +               ts.tv_sec = 0;
> > +               ts.tv_nsec = RES; /* 1 tick */
> > +               if (nanosleep(&ts, NULL) < 0)
> > +                       CU_FAIL_FATAL("nanosleep failed");
> > +       }
> > +
> >
> You don't think waiting for 1 tick is enough for the timeouts of all
> expired timers to have been enqueued so that they can be received? Could
> there be an even longer delay between enqueue and reception of an event?
> If we need to wait longer (which might be the case), what does this have to
> do with CONFIG_MIN_TICK? Perhaps this is a different configuration, or we
> should just hardcode a larger value. The test is about to end, so it doesn't
> really matter if we delay for one tick or for ten ticks.
> 
> 

One tick is now 3.3 ms, which is greater than the existing 1 ms value.

IMO it should be CONFIG_MIN_TICK ticks: the current program cancels all
the timers and uses a polled queue, so IMO it is well deterministic, and
the reason (in this example) for a cancel failure should be the TOO NEAR
case. If it is the TOO NEAR case, waiting up to CONFIG_MIN_TICK ticks is
enough.
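
For reference, the arithmetic behind the 3.3 ms figure (taking
ODP_TIME_MSEC = 1000000 ns and, as an example, CONFIG_MIN_TICK = 2 from
my platform):

RES            = 10 * ODP_TIME_MSEC / 3 = 3333333 ns ~= 3.33 ms (one tick)
total wait     = CONFIG_MIN_TICK * RES ~= 2 * 3.33 ms ~= 6.7 ms
old fixed wait = 1 ms

so even a single iteration of the new loop already waits longer than the
old hardcoded 1 ms nanosleep.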

> >         while (nstale != 0) {
> >                 odp_event_t ev = odp_queue_deq(queue);
> >                 if (ev != ODP_EVENT_INVALID) {
> > @@ -458,10 +477,6 @@ static void test_odp_timer_all(void)
> >         if (tbp == ODP_POOL_INVALID)
> >                 CU_FAIL_FATAL("Timeout pool create failed");
> >
> > -#define NAME "timer_pool"
> > -#define RES (10 * ODP_TIME_MSEC / 3)
> > -#define MIN (10 * ODP_TIME_MSEC / 3)
> > -#define MAX (1000000 * ODP_TIME_MSEC)
> >         /* Create a timer pool */
> >         tparam.res_ns = RES;
> >         tparam.min_tmo = MIN;
> > --
> > 2.1.0
> >