On 15.01.16 11:14, Maxim Uvarov wrote:
Following the current test logic, the validation test should correctly
wait for all scheduled timer expirations. Setting that time to 1 ms is
not correct, because the code above schedules timers up to 1.1 ms ahead
and a context switch in Linux can take 10 ms.
The problem that odp_timer_pool_destroy() does not disarm all triggered
timers before freeing the pool's shared memory still exists. That should
be covered by a separate test case.
Signed-off-by: Maxim Uvarov <maxim.uva...@linaro.org>
---
 test/validation/timer/timer.c | 38 ++++++++++++++++++++++++--------------
 1 file changed, 24 insertions(+), 14 deletions(-)
diff --git a/test/validation/timer/timer.c b/test/validation/timer/timer.c
index dda5e4c..db0a5ca 100644
--- a/test/validation/timer/timer.c
+++ b/test/validation/timer/timer.c
@@ -266,6 +266,23 @@ static void handle_tmo(odp_event_t ev, bool stale, uint64_t prev_tick)
}
}
+/* Delay to ensure timeouts for expired timers can be
+ * received.
+ * Delay time = (max timer expiration time) +
+ * (timer handler execution time) +
+ * (system overhead for thread creation and schedule
+ * context switches)
+ */
+static void _wait_for_timers_expiration(void)
Naming nit: _wait_for* -> wait_for_* (identifiers with a leading
underscore are reserved in C).
+{
+ struct timespec ts;
+
+ ts.tv_sec = 0;
+ ts.tv_nsec = 50 * ODP_TIME_MSEC_IN_NS;
+ if (nanosleep(&ts, NULL) < 0)
+ CU_FAIL_FATAL("nanosleep failed");
+}
+
Use the ODP time API here instead of nanosleep():

	odp_time_wait_ns(50 * ODP_TIME_MSEC_IN_NS);

Maybe it's better to do that in a separate preparation patch.
/* @private Worker thread entrypoint which performs timer alloc/set/cancel/free
 * tests */
static void *worker_entrypoint(void *arg TEST_UNUSED)
@@ -305,7 +322,7 @@ static void *worker_entrypoint(void *arg TEST_UNUSED)
uint64_t tck = odp_timer_current_tick(tp) + 1 +
odp_timer_ns_to_tick(tp,
(rand_r(&seed) % RANGE_MS)
- * 1000000ULL);
+ * ODP_TIME_MSEC_IN_NS);
odp_timer_set_t rc;
rc = odp_timer_set_abs(tt[i].tim, tck, &tt[i].ev);
if (rc != ODP_TIMER_SUCCESS) {
@@ -351,7 +368,8 @@ static void *worker_entrypoint(void *arg TEST_UNUSED)
/* Timer active => reset */
nreset++;
uint64_t tck = 1 + odp_timer_ns_to_tick(tp,
- (rand_r(&seed) % RANGE_MS) * 1000000ULL);
+ (rand_r(&seed) % RANGE_MS) *
+ ODP_TIME_MSEC_IN_NS);
This is not related to the patch, but maybe a preparation patch should
replace the rand_r() usage with odp_random_data().
odp_timer_set_t rc;
uint64_t cur_tick;
/* Loop until we manage to read cur_tick and set a
@@ -372,11 +390,8 @@ static void *worker_entrypoint(void *arg TEST_UNUSED)
tt[i].tick = cur_tick + tck;
}
}
- struct timespec ts;
- ts.tv_sec = 0;
- ts.tv_nsec = 1000000; /* 1ms */
- if (nanosleep(&ts, NULL) < 0)
- CU_FAIL_FATAL("nanosleep failed");
+
+ _wait_for_timers_expiration();
You've changed the step here from 1 ms to 50 ms, but didn't update the
comment below the loop. It's not visible in this hunk, but that
description is no longer correct after the change.
...
Also, the delay is increased, but the loop still assumes a 1 ms step.
The line below should be changed as well:

	for (ms = 0; ms < 7 * RANGE_MS / 10; ms++) {

ms should now advance by 50 per iteration, but pay attention: maybe
there is a reason to tune (7 * RANGE_MS / 10) a little, too.
But really, I don't fully understand why we need to increase the step
here. IMHO, it's better to decrease it: if a timeout is delivered too
early, that should be caught, while a slightly late one is normal. The
test simply checks that timeouts are received in the right order and
not too early.
By simply increasing the delay 50-fold without tuning the iteration
count, you increased the time spent receiving events in the loop. The
events are spread over 2000 ms, and with this change you fit that range
only by luck. To extend the time for receiving events, it's better to
increase the iteration count in:

	for (ms = 0; ms < 7 * RANGE_MS / 10; ms++) {

so that it covers the 2000 ms interval. That's enough to receive all
events, and there's no need to change the polling timeout.