Re: [lng-odp] [PATCH] validation: add odp_schedule_pause and odp_schedule_resume tests

2015-01-22 Thread Maxim Uvarov

On 01/22/2015 12:06 PM, Savolainen, Petri (NSN - FI/Espoo) wrote:



-----Original Message-----
From: lng-odp-boun...@lists.linaro.org [mailto:lng-odp-
boun...@lists.linaro.org] On Behalf Of ext Ciprian Barbu
Sent: Wednesday, January 14, 2015 3:36 PM
To: Ola Liljedahl
Cc: lng-odp
Subject: Re: [lng-odp] [PATCH] validation: add odp_schedule_pause and
odp_schedule_resume tests

On Wed, Jan 14, 2015 at 3:28 PM, Ola Liljedahl ola.liljed...@linaro.org
wrote:

On 7 January 2015 at 20:41, Mike Holmes mike.hol...@linaro.org wrote:

I am unsure if I need to pay attention to this for 0.7.0

We need to have a decision (and implementation) for ODP 1.0 though.
Scheduling and its semantics are important aspects of ODP.

The odp_schedule_pause API is already documented and implemented, I
didn't exactly catch from Petri if we will keep the behavior for 1.0,
but what is the problem with covering this API in its current form for
at least 0.7 and 0.8?


There are no plans to change schedule pause/resume API.

-Petri

So we are adding this patch, right?

Maxim.





Re: [lng-odp] [PATCH] validation: add odp_schedule_pause and odp_schedule_resume tests

2015-01-21 Thread Ola Liljedahl
On 21 January 2015 at 06:23, Jerin Jacob jerin.ja...@caviumnetworks.com wrote:
 On Tue, Jan 20, 2015 at 04:25:41PM -0600, Bill Fischofer wrote:
 My questions were answered.  For now scheduling caches are non-transparent
 and applications wishing to pause scheduling must drain any cached events
 prior to exiting the scheduling loop.  We can revisit this post v1.0 when
 we discuss various recovery scenarios.

 +1
++



 On Tue, Jan 20, 2015 at 3:47 PM, Mike Holmes mike.hol...@linaro.org wrote:

  It is still not clear to me in writing that we want this, we did discuss
  it earlier but Jerin, Bill and Ola have questions on this thread and I am
  not sure they are all addressed.
 
  On 20 January 2015 at 16:34, Maxim Uvarov maxim.uva...@linaro.org wrote:
 
  Who review this patch please add review-by.
 
  Mike please add yours because it's validation patch.
 
  Maxim.
 
 
  On 01/20/2015 05:23 PM, Ciprian Barbu wrote:
 
  PING!
 
  On Wed, Jan 14, 2015 at 7:14 PM, Mike Holmes mike.hol...@linaro.org
  wrote:
 
  Without any clear change in sight,  lets test what we have, this has
  been on
  the list for a month
 
  On 14 January 2015 at 08:35, Ciprian Barbu ciprian.ba...@linaro.org
  wrote:
 
  On Wed, Jan 14, 2015 at 3:28 PM, Ola Liljedahl 
  ola.liljed...@linaro.org
  wrote:
 
  On 7 January 2015 at 20:41, Mike Holmes mike.hol...@linaro.org
  wrote:
 
  I am unsure if I need to pay attention to this for 0.7.0
 
  We need to have a decision (and implementation) for ODP 1.0 though.
  Scheduling and its semantics are important aspects of ODP.
 
  The odp_schedule_pause API is already documented and implemented, I
  didn't exactly catch from Petri if we will keep the behavior for 1.0,
  but what is the problem with covering this API in its current form for
  at least 0.7 and 0.8?
 
   On 7 January 2015 at 04:39, Ciprian Barbu ciprian.ba...@linaro.org
  wrote:
 
  On Tue, Jan 6, 2015 at 4:03 PM, Bill Fischofer
  bill.fischo...@linaro.org wrote:
 
  I think it's something we need to discuss during the sync call.
 
  On Tue, Jan 6, 2015 at 7:48 AM, Mike Holmes 
  mike.hol...@linaro.org
  wrote:
 
  Should a bug be made to track a needed change or is it important
  for
  1.0
  and needs to be in the delta doc ?
 
  On 6 January 2015 at 08:40, Bill Fischofer
  bill.fischo...@linaro.org
  wrote:
 
  Caches should be transparent.  While this may be needed here,
  it's
  a
  poor
  set of semantics to expose as part of the formal APIs.  This is
  definitely
  something we need to address.  My suggestion is that a
  odp_schedule_pause()
  should cause an implicit cache flush if the implementation is
  using a
  scheduling cache.  That way any cache being used is truly
  transparent
  and
  moreover there won't be unnecessary delays in event processing
  since
  who
  knows how long a pause may last?  Clearly it won't be brief since
  otherwise
  the application would not have bothered with a pause/resume in
  the
  first
  place.
 
  Sorry, I couldn't join you in the ODP call yesterday, mind if you
  give
  a brief update on what was decided?
 
   On Tue, Jan 6, 2015 at 7:17 AM, Ciprian Barbu
  ciprian.ba...@linaro.org
  wrote:
 
  On Wed, Dec 17, 2014 at 5:09 PM, Jerin Jacob
  jerin.ja...@caviumnetworks.com wrote:
 
  On Wed, Dec 17, 2014 at 03:10:11PM +0200, Ciprian Barbu wrote:
 
  Signed-off-by: Ciprian Barbu ciprian.ba...@linaro.org
  ---
test/validation/odp_schedule.c | 63
  ++
1 file changed, 58 insertions(+), 5 deletions(-)
 
  diff --git a/test/validation/odp_schedule.c
  b/test/validation/odp_schedule.c
  index 31be742..bdbcf77 100644
  --- a/test/validation/odp_schedule.c
  +++ b/test/validation/odp_schedule.c
  @@ -11,9 +11,11 @@
#define MSG_POOL_SIZE(4*1024*1024)
#define QUEUES_PER_PRIO  16
#define BUF_SIZE 64
  -#define TEST_NUM_BUFS100
  +#define NUM_BUFS 100
#define BURST_BUF_SIZE   4
  -#define TEST_NUM_BUFS_EXCL   1
  +#define NUM_BUFS_EXCL1
  +#define NUM_BUFS_PAUSE   1000
  +#define NUM_BUFS_BEFORE_PAUSE10
 
#define GLOBALS_SHM_NAME test_globals
#define MSG_POOL_NAMEmsg_pool
  @@ -229,7 +231,7 @@ static void
  schedule_common(odp_schedule_sync_t
  sync, int num_queues,
 args.sync = sync;
 args.num_queues = num_queues;
 args.num_prio = num_prio;
  - args.num_bufs = TEST_NUM_BUFS;
  + args.num_bufs = NUM_BUFS;
 args.num_cores = 1;
 args.enable_schd_multi = enable_schd_multi;
 args.enable_excl_atomic = 0;/* Not needed with a
  single
  core */
  @@ -261,9 +263,9 @@ static void
  parallel_execute(odp_schedule_sync_t
  sync, int num_queues,
 thr_args-num_queues = num_queues;
 thr_args-num_prio = num_prio;
 if (enable_excl_atomic)
  - thr_args-num_bufs = TEST_NUM_BUFS_EXCL;
  + 

Re: [lng-odp] [PATCH] validation: add odp_schedule_pause and odp_schedule_resume tests

2015-01-21 Thread Mike Holmes
Petri, you are the scheduling expert; can we get your review? Otherwise, as this
is a validation test, I can review it with less deep understanding.

On 21 January 2015 at 07:55, Ola Liljedahl ola.liljed...@linaro.org wrote:

 On 21 January 2015 at 06:23, Jerin Jacob jerin.ja...@caviumnetworks.com
 wrote:
  On Tue, Jan 20, 2015 at 04:25:41PM -0600, Bill Fischofer wrote:
  My questions were answered.  For now scheduling caches are
 non-transparent
  and applications wishing to pause scheduling must drain any cached
 events
  prior to exiting the scheduling loop.  We can revisit this post v1.0
 when
  we discuss various recovery scenarios.
 
  +1
 ++

 
 
  On Tue, Jan 20, 2015 at 3:47 PM, Mike Holmes mike.hol...@linaro.org
 wrote:
 
   It is still not clear to me in writing that we want this, we did
 discuss
   it earlier but Jerin, Bill and Ola have questions on this thread and
 I am
   not sure they are all addressed.
  
   On 20 January 2015 at 16:34, Maxim Uvarov maxim.uva...@linaro.org
 wrote:
  
   Who review this patch please add review-by.
  
   Mike please add yours because it's validation patch.
  
   Maxim.
  
  
   On 01/20/2015 05:23 PM, Ciprian Barbu wrote:
  
   PING!
  
   On Wed, Jan 14, 2015 at 7:14 PM, Mike Holmes 
 mike.hol...@linaro.org
   wrote:
  
   Without any clear change in sight,  lets test what we have, this
 has
   been on
   the list for a month
  
   On 14 January 2015 at 08:35, Ciprian Barbu 
 ciprian.ba...@linaro.org
   wrote:
  
   On Wed, Jan 14, 2015 at 3:28 PM, Ola Liljedahl 
   ola.liljed...@linaro.org
   wrote:
  
   On 7 January 2015 at 20:41, Mike Holmes mike.hol...@linaro.org
   wrote:
  
   I am unsure if I need to pay attention to this for 0.7.0
  
   We need to have a decision (and implementation) for ODP 1.0
 though.
   Scheduling and its semantics are important aspects of ODP.
  
   The odp_schedule_pause API is already documented and implemented,
 I
   didn't exactly catch from Petri if we will keep the behavior for
 1.0,
   but what is the problem with covering this API in its current
 form for
   at least 0.7 and 0.8?
  
On 7 January 2015 at 04:39, Ciprian Barbu 
 ciprian.ba...@linaro.org
   wrote:
  
   On Tue, Jan 6, 2015 at 4:03 PM, Bill Fischofer
   bill.fischo...@linaro.org wrote:
  
   I think it's something we need to discuss during the sync
 call.
  
   On Tue, Jan 6, 2015 at 7:48 AM, Mike Holmes 
   mike.hol...@linaro.org
   wrote:
  
   Should a bug be made to track a needed change or is it
 important
   for
   1.0
   and needs to be in the delta doc ?
  
   On 6 January 2015 at 08:40, Bill Fischofer
   bill.fischo...@linaro.org
   wrote:
  
   Caches should be transparent.  While this may be needed
 here,
   it's
   a
   poor
   set of semantics to expose as part of the formal APIs.
 This is
   definitely
   something we need to address.  My suggestion is that a
   odp_schedule_pause()
   should cause an implicit cache flush if the implementation
 is
   using a
   scheduling cache.  That way any cache being used is truly
   transparent
   and
   moreover there won't be unnecessary delays in event
 processing
   since
   who
   knows how long a pause may last?  Clearly it won't be brief
 since
   otherwise
   the application would not have bothered with a pause/resume
 in
   the
   first
   place.
  
   Sorry, I couldn't join you in the ODP call yesterday, mind
 if you
   give
   a brief update on what was decided?
  
On Tue, Jan 6, 2015 at 7:17 AM, Ciprian Barbu
   ciprian.ba...@linaro.org
   wrote:
  
   On Wed, Dec 17, 2014 at 5:09 PM, Jerin Jacob
   jerin.ja...@caviumnetworks.com wrote:
  
   On Wed, Dec 17, 2014 at 03:10:11PM +0200, Ciprian Barbu
 wrote:
  
   Signed-off-by: Ciprian Barbu ciprian.ba...@linaro.org
   ---
 test/validation/odp_schedule.c | 63
   ++
 1 file changed, 58 insertions(+), 5 deletions(-)
  
   diff --git a/test/validation/odp_schedule.c
   b/test/validation/odp_schedule.c
   index 31be742..bdbcf77 100644
   --- a/test/validation/odp_schedule.c
   +++ b/test/validation/odp_schedule.c
   @@ -11,9 +11,11 @@
 #define MSG_POOL_SIZE(4*1024*1024)
 #define QUEUES_PER_PRIO  16
 #define BUF_SIZE 64
   -#define TEST_NUM_BUFS100
   +#define NUM_BUFS 100
 #define BURST_BUF_SIZE   4
   -#define TEST_NUM_BUFS_EXCL   1
   +#define NUM_BUFS_EXCL1
   +#define NUM_BUFS_PAUSE   1000
   +#define NUM_BUFS_BEFORE_PAUSE10
  
 #define GLOBALS_SHM_NAME test_globals
 #define MSG_POOL_NAMEmsg_pool
   @@ -229,7 +231,7 @@ static void
   schedule_common(odp_schedule_sync_t
   sync, int num_queues,
  args.sync = sync;
  args.num_queues = num_queues;
  args.num_prio = num_prio;
   - args.num_bufs = TEST_NUM_BUFS;
   + args.num_bufs = NUM_BUFS;
  args.num_cores = 1;
  args.enable_schd_multi = 

Re: [lng-odp] [PATCH] validation: add odp_schedule_pause and odp_schedule_resume tests

2015-01-20 Thread Ciprian Barbu
PING!

On Wed, Jan 14, 2015 at 7:14 PM, Mike Holmes mike.hol...@linaro.org wrote:
 Without any clear change in sight,  lets test what we have, this has been on
 the list for a month

 On 14 January 2015 at 08:35, Ciprian Barbu ciprian.ba...@linaro.org wrote:

 On Wed, Jan 14, 2015 at 3:28 PM, Ola Liljedahl ola.liljed...@linaro.org
 wrote:
  On 7 January 2015 at 20:41, Mike Holmes mike.hol...@linaro.org wrote:
  I am unsure if I need to pay attention to this for 0.7.0
  We need to have a decision (and implementation) for ODP 1.0 though.
  Scheduling and its semantics are important aspects of ODP.

 The odp_schedule_pause API is already documented and implemented, I
 didn't exactly catch from Petri if we will keep the behavior for 1.0,
 but what is the problem with covering this API in its current form for
 at least 0.7 and 0.8?

 
 
  On 7 January 2015 at 04:39, Ciprian Barbu ciprian.ba...@linaro.org
  wrote:
 
  On Tue, Jan 6, 2015 at 4:03 PM, Bill Fischofer
  bill.fischo...@linaro.org wrote:
   I think it's something we need to discuss during the sync call.
  
   On Tue, Jan 6, 2015 at 7:48 AM, Mike Holmes mike.hol...@linaro.org
   wrote:
  
   Should a bug be made to track a needed change or is it important
   for
   1.0
   and needs to be in the delta doc ?
  
   On 6 January 2015 at 08:40, Bill Fischofer
   bill.fischo...@linaro.org
   wrote:
  
   Caches should be transparent.  While this may be needed here, it's
   a
   poor
   set of semantics to expose as part of the formal APIs.  This is
   definitely
   something we need to address.  My suggestion is that a
   odp_schedule_pause()
   should cause an implicit cache flush if the implementation is
   using a
   scheduling cache.  That way any cache being used is truly
   transparent
   and
   moreover there won't be unnecessary delays in event processing
   since
   who
   knows how long a pause may last?  Clearly it won't be brief since
   otherwise
   the application would not have bothered with a pause/resume in the
   first
   place.
 
  Sorry, I couldn't join you in the ODP call yesterday, mind if you give
  a brief update on what was decided?
 
  
   On Tue, Jan 6, 2015 at 7:17 AM, Ciprian Barbu
   ciprian.ba...@linaro.org
   wrote:
  
   On Wed, Dec 17, 2014 at 5:09 PM, Jerin Jacob
   jerin.ja...@caviumnetworks.com wrote:
On Wed, Dec 17, 2014 at 03:10:11PM +0200, Ciprian Barbu wrote:
Signed-off-by: Ciprian Barbu ciprian.ba...@linaro.org
---
 test/validation/odp_schedule.c | 63
++
 1 file changed, 58 insertions(+), 5 deletions(-)
   
diff --git a/test/validation/odp_schedule.c
b/test/validation/odp_schedule.c
index 31be742..bdbcf77 100644
--- a/test/validation/odp_schedule.c
+++ b/test/validation/odp_schedule.c
@@ -11,9 +11,11 @@
 #define MSG_POOL_SIZE(4*1024*1024)
 #define QUEUES_PER_PRIO  16
 #define BUF_SIZE 64
-#define TEST_NUM_BUFS100
+#define NUM_BUFS 100
 #define BURST_BUF_SIZE   4
-#define TEST_NUM_BUFS_EXCL   1
+#define NUM_BUFS_EXCL1
+#define NUM_BUFS_PAUSE   1000
+#define NUM_BUFS_BEFORE_PAUSE10
   
 #define GLOBALS_SHM_NAME test_globals
 #define MSG_POOL_NAMEmsg_pool
@@ -229,7 +231,7 @@ static void
schedule_common(odp_schedule_sync_t
sync, int num_queues,
  args.sync = sync;
  args.num_queues = num_queues;
  args.num_prio = num_prio;
- args.num_bufs = TEST_NUM_BUFS;
+ args.num_bufs = NUM_BUFS;
  args.num_cores = 1;
  args.enable_schd_multi = enable_schd_multi;
  args.enable_excl_atomic = 0;/* Not needed with a
single
core */
@@ -261,9 +263,9 @@ static void
parallel_execute(odp_schedule_sync_t
sync, int num_queues,
  thr_args-num_queues = num_queues;
  thr_args-num_prio = num_prio;
  if (enable_excl_atomic)
- thr_args-num_bufs = TEST_NUM_BUFS_EXCL;
+ thr_args-num_bufs = NUM_BUFS_EXCL;
  else
- thr_args-num_bufs = TEST_NUM_BUFS;
+ thr_args-num_bufs = NUM_BUFS;
  thr_args-num_cores = globals-core_count;
  thr_args-enable_schd_multi = enable_schd_multi;
  thr_args-enable_excl_atomic = enable_excl_atomic;
@@ -459,6 +461,56 @@ static void
test_schedule_multi_1q_mt_a_excl(void)
   ENABLE_EXCL_ATOMIC);
 }
   
+static void test_schedule_pause_resume(void)
+{
+ odp_queue_t queue;
+ odp_buffer_t buf;
+ odp_queue_t from;
+ int i;
+ int local_bufs = 0;
+
+ queue = odp_queue_lookup(sched_0_0_n);
+ CU_ASSERT(queue != ODP_QUEUE_INVALID);
+
+ pool = odp_buffer_pool_lookup(MSG_POOL_NAME);
+ CU_ASSERT_FATAL(pool != 

Re: [lng-odp] [PATCH] validation: add odp_schedule_pause and odp_schedule_resume tests

2015-01-20 Thread Mike Holmes
Petri, unless you think this doesn't fit, I suggest we apply it to test what
is in the repo now

On 20 January 2015 at 09:23, Ciprian Barbu ciprian.ba...@linaro.org wrote:

 PING!

 On Wed, Jan 14, 2015 at 7:14 PM, Mike Holmes mike.hol...@linaro.org
 wrote:
  Without any clear change in sight,  lets test what we have, this has
 been on
  the list for a month
 
  On 14 January 2015 at 08:35, Ciprian Barbu ciprian.ba...@linaro.org
 wrote:
 
  On Wed, Jan 14, 2015 at 3:28 PM, Ola Liljedahl 
 ola.liljed...@linaro.org
  wrote:
   On 7 January 2015 at 20:41, Mike Holmes mike.hol...@linaro.org
 wrote:
   I am unsure if I need to pay attention to this for 0.7.0
   We need to have a decision (and implementation) for ODP 1.0 though.
   Scheduling and its semantics are important aspects of ODP.
 
  The odp_schedule_pause API is already documented and implemented, I
  didn't exactly catch from Petri if we will keep the behavior for 1.0,
  but what is the problem with covering this API in its current form for
  at least 0.7 and 0.8?
 
  
  
   On 7 January 2015 at 04:39, Ciprian Barbu ciprian.ba...@linaro.org
   wrote:
  
   On Tue, Jan 6, 2015 at 4:03 PM, Bill Fischofer
   bill.fischo...@linaro.org wrote:
I think it's something we need to discuss during the sync call.
   
On Tue, Jan 6, 2015 at 7:48 AM, Mike Holmes 
 mike.hol...@linaro.org
wrote:
   
Should a bug be made to track a needed change or is it important
for
1.0
and needs to be in the delta doc ?
   
On 6 January 2015 at 08:40, Bill Fischofer
bill.fischo...@linaro.org
wrote:
   
Caches should be transparent.  While this may be needed here,
 it's
a
poor
set of semantics to expose as part of the formal APIs.  This is
definitely
something we need to address.  My suggestion is that a
odp_schedule_pause()
should cause an implicit cache flush if the implementation is
using a
scheduling cache.  That way any cache being used is truly
transparent
and
moreover there won't be unnecessary delays in event processing
since
who
knows how long a pause may last?  Clearly it won't be brief
 since
otherwise
the application would not have bothered with a pause/resume in
 the
first
place.
  
   Sorry, I couldn't join you in the ODP call yesterday, mind if you
 give
   a brief update on what was decided?
  
   
On Tue, Jan 6, 2015 at 7:17 AM, Ciprian Barbu
ciprian.ba...@linaro.org
wrote:
   
On Wed, Dec 17, 2014 at 5:09 PM, Jerin Jacob
jerin.ja...@caviumnetworks.com wrote:
 On Wed, Dec 17, 2014 at 03:10:11PM +0200, Ciprian Barbu
 wrote:
 Signed-off-by: Ciprian Barbu ciprian.ba...@linaro.org
 ---
  test/validation/odp_schedule.c | 63
 ++
  1 file changed, 58 insertions(+), 5 deletions(-)

 diff --git a/test/validation/odp_schedule.c
 b/test/validation/odp_schedule.c
 index 31be742..bdbcf77 100644
 --- a/test/validation/odp_schedule.c
 +++ b/test/validation/odp_schedule.c
 @@ -11,9 +11,11 @@
  #define MSG_POOL_SIZE(4*1024*1024)
  #define QUEUES_PER_PRIO  16
  #define BUF_SIZE 64
 -#define TEST_NUM_BUFS100
 +#define NUM_BUFS 100
  #define BURST_BUF_SIZE   4
 -#define TEST_NUM_BUFS_EXCL   1
 +#define NUM_BUFS_EXCL1
 +#define NUM_BUFS_PAUSE   1000
 +#define NUM_BUFS_BEFORE_PAUSE10

  #define GLOBALS_SHM_NAME test_globals
  #define MSG_POOL_NAMEmsg_pool
 @@ -229,7 +231,7 @@ static void
 schedule_common(odp_schedule_sync_t
 sync, int num_queues,
   args.sync = sync;
   args.num_queues = num_queues;
   args.num_prio = num_prio;
 - args.num_bufs = TEST_NUM_BUFS;
 + args.num_bufs = NUM_BUFS;
   args.num_cores = 1;
   args.enable_schd_multi = enable_schd_multi;
   args.enable_excl_atomic = 0;/* Not needed with a
 single
 core */
 @@ -261,9 +263,9 @@ static void
 parallel_execute(odp_schedule_sync_t
 sync, int num_queues,
   thr_args-num_queues = num_queues;
   thr_args-num_prio = num_prio;
   if (enable_excl_atomic)
 - thr_args-num_bufs = TEST_NUM_BUFS_EXCL;
 + thr_args-num_bufs = NUM_BUFS_EXCL;
   else
 - thr_args-num_bufs = TEST_NUM_BUFS;
 + thr_args-num_bufs = NUM_BUFS;
   thr_args-num_cores = globals-core_count;
   thr_args-enable_schd_multi = enable_schd_multi;
   thr_args-enable_excl_atomic = enable_excl_atomic;
 @@ -459,6 +461,56 @@ static void
 test_schedule_multi_1q_mt_a_excl(void)
ENABLE_EXCL_ATOMIC);
  }

 +static void test_schedule_pause_resume(void)
 +{
 + odp_queue_t queue;
 + 

Re: [lng-odp] [PATCH] validation: add odp_schedule_pause and odp_schedule_resume tests

2015-01-20 Thread Maxim Uvarov

Whoever has reviewed this patch, please add your Reviewed-by.

Mike, please add yours since it's a validation patch.

Maxim.

On 01/20/2015 05:23 PM, Ciprian Barbu wrote:

PING!

On Wed, Jan 14, 2015 at 7:14 PM, Mike Holmes mike.hol...@linaro.org wrote:

Without any clear change in sight,  lets test what we have, this has been on
the list for a month

On 14 January 2015 at 08:35, Ciprian Barbu ciprian.ba...@linaro.org wrote:

On Wed, Jan 14, 2015 at 3:28 PM, Ola Liljedahl ola.liljed...@linaro.org
wrote:

On 7 January 2015 at 20:41, Mike Holmes mike.hol...@linaro.org wrote:

I am unsure if I need to pay attention to this for 0.7.0

We need to have a decision (and implementation) for ODP 1.0 though.
Scheduling and its semantics are important aspects of ODP.

The odp_schedule_pause API is already documented and implemented, I
didn't exactly catch from Petri if we will keep the behavior for 1.0,
but what is the problem with covering this API in its current form for
at least 0.7 and 0.8?


On 7 January 2015 at 04:39, Ciprian Barbu ciprian.ba...@linaro.org
wrote:

On Tue, Jan 6, 2015 at 4:03 PM, Bill Fischofer
bill.fischo...@linaro.org wrote:

I think it's something we need to discuss during the sync call.

On Tue, Jan 6, 2015 at 7:48 AM, Mike Holmes mike.hol...@linaro.org
wrote:

Should a bug be made to track a needed change or is it important
for
1.0
and needs to be in the delta doc ?

On 6 January 2015 at 08:40, Bill Fischofer
bill.fischo...@linaro.org
wrote:

Caches should be transparent.  While this may be needed here, it's
a
poor
set of semantics to expose as part of the formal APIs.  This is
definitely
something we need to address.  My suggestion is that a
odp_schedule_pause()
should cause an implicit cache flush if the implementation is
using a
scheduling cache.  That way any cache being used is truly
transparent
and
moreover there won't be unnecessary delays in event processing
since
who
knows how long a pause may last?  Clearly it won't be brief since
otherwise
the application would not have bothered with a pause/resume in the
first
place.

Sorry, I couldn't join you in the ODP call yesterday, mind if you give
a brief update on what was decided?


On Tue, Jan 6, 2015 at 7:17 AM, Ciprian Barbu
ciprian.ba...@linaro.org
wrote:

On Wed, Dec 17, 2014 at 5:09 PM, Jerin Jacob
jerin.ja...@caviumnetworks.com wrote:

On Wed, Dec 17, 2014 at 03:10:11PM +0200, Ciprian Barbu wrote:

Signed-off-by: Ciprian Barbu ciprian.ba...@linaro.org
---
  test/validation/odp_schedule.c | 63
++
  1 file changed, 58 insertions(+), 5 deletions(-)

diff --git a/test/validation/odp_schedule.c
b/test/validation/odp_schedule.c
index 31be742..bdbcf77 100644
--- a/test/validation/odp_schedule.c
+++ b/test/validation/odp_schedule.c
@@ -11,9 +11,11 @@
  #define MSG_POOL_SIZE(4*1024*1024)
  #define QUEUES_PER_PRIO  16
  #define BUF_SIZE 64
-#define TEST_NUM_BUFS100
+#define NUM_BUFS 100
  #define BURST_BUF_SIZE   4
-#define TEST_NUM_BUFS_EXCL   1
+#define NUM_BUFS_EXCL1
+#define NUM_BUFS_PAUSE   1000
+#define NUM_BUFS_BEFORE_PAUSE10

  #define GLOBALS_SHM_NAME test_globals
  #define MSG_POOL_NAMEmsg_pool
@@ -229,7 +231,7 @@ static void
schedule_common(odp_schedule_sync_t
sync, int num_queues,
   args.sync = sync;
   args.num_queues = num_queues;
   args.num_prio = num_prio;
- args.num_bufs = TEST_NUM_BUFS;
+ args.num_bufs = NUM_BUFS;
   args.num_cores = 1;
   args.enable_schd_multi = enable_schd_multi;
   args.enable_excl_atomic = 0;/* Not needed with a
single
core */
@@ -261,9 +263,9 @@ static void
parallel_execute(odp_schedule_sync_t
sync, int num_queues,
   thr_args-num_queues = num_queues;
   thr_args-num_prio = num_prio;
   if (enable_excl_atomic)
- thr_args-num_bufs = TEST_NUM_BUFS_EXCL;
+ thr_args-num_bufs = NUM_BUFS_EXCL;
   else
- thr_args-num_bufs = TEST_NUM_BUFS;
+ thr_args-num_bufs = NUM_BUFS;
   thr_args-num_cores = globals-core_count;
   thr_args-enable_schd_multi = enable_schd_multi;
   thr_args-enable_excl_atomic = enable_excl_atomic;
@@ -459,6 +461,56 @@ static void
test_schedule_multi_1q_mt_a_excl(void)
ENABLE_EXCL_ATOMIC);
  }

+static void test_schedule_pause_resume(void)
+{
+ odp_queue_t queue;
+ odp_buffer_t buf;
+ odp_queue_t from;
+ int i;
+ int local_bufs = 0;
+
+ queue = odp_queue_lookup(sched_0_0_n);
+ CU_ASSERT(queue != ODP_QUEUE_INVALID);
+
+ pool = odp_buffer_pool_lookup(MSG_POOL_NAME);
+ CU_ASSERT_FATAL(pool != ODP_BUFFER_POOL_INVALID);
+
+
+ for (i = 0; i  NUM_BUFS_PAUSE; i++) {
+ buf = odp_buffer_alloc(pool);
+ CU_ASSERT(buf != ODP_BUFFER_INVALID);
+ odp_queue_enq(queue, buf);
+ }
+
+ for (i = 0; i  NUM_BUFS_BEFORE_PAUSE; i++) {

Re: [lng-odp] [PATCH] validation: add odp_schedule_pause and odp_schedule_resume tests

2015-01-20 Thread Mike Holmes
It is still not clear to me in writing that we want this. We did discuss it
earlier, but Jerin, Bill and Ola have questions on this thread and I am not
sure they have all been addressed.

On 20 January 2015 at 16:34, Maxim Uvarov maxim.uva...@linaro.org wrote:

 Who review this patch please add review-by.

 Mike please add yours because it's validation patch.

 Maxim.


 On 01/20/2015 05:23 PM, Ciprian Barbu wrote:

 PING!

 On Wed, Jan 14, 2015 at 7:14 PM, Mike Holmes mike.hol...@linaro.org
 wrote:

 Without any clear change in sight,  lets test what we have, this has
 been on
 the list for a month

 On 14 January 2015 at 08:35, Ciprian Barbu ciprian.ba...@linaro.org
 wrote:

 On Wed, Jan 14, 2015 at 3:28 PM, Ola Liljedahl 
 ola.liljed...@linaro.org
 wrote:

 On 7 January 2015 at 20:41, Mike Holmes mike.hol...@linaro.org
 wrote:

 I am unsure if I need to pay attention to this for 0.7.0

 We need to have a decision (and implementation) for ODP 1.0 though.
 Scheduling and its semantics are important aspects of ODP.

 The odp_schedule_pause API is already documented and implemented, I
 didn't exactly catch from Petri if we will keep the behavior for 1.0,
 but what is the problem with covering this API in its current form for
 at least 0.7 and 0.8?

  On 7 January 2015 at 04:39, Ciprian Barbu ciprian.ba...@linaro.org
 wrote:

 On Tue, Jan 6, 2015 at 4:03 PM, Bill Fischofer
 bill.fischo...@linaro.org wrote:

 I think it's something we need to discuss during the sync call.

 On Tue, Jan 6, 2015 at 7:48 AM, Mike Holmes mike.hol...@linaro.org
 
 wrote:

 Should a bug be made to track a needed change or is it important
 for
 1.0
 and needs to be in the delta doc ?

 On 6 January 2015 at 08:40, Bill Fischofer
 bill.fischo...@linaro.org
 wrote:

 Caches should be transparent.  While this may be needed here, it's
 a
 poor
 set of semantics to expose as part of the formal APIs.  This is
 definitely
 something we need to address.  My suggestion is that a
 odp_schedule_pause()
 should cause an implicit cache flush if the implementation is
 using a
 scheduling cache.  That way any cache being used is truly
 transparent
 and
 moreover there won't be unnecessary delays in event processing
 since
 who
 knows how long a pause may last?  Clearly it won't be brief since
 otherwise
 the application would not have bothered with a pause/resume in the
 first
 place.

 Sorry, I couldn't join you in the ODP call yesterday, mind if you
 give
 a brief update on what was decided?

  On Tue, Jan 6, 2015 at 7:17 AM, Ciprian Barbu
 ciprian.ba...@linaro.org
 wrote:

 On Wed, Dec 17, 2014 at 5:09 PM, Jerin Jacob
 jerin.ja...@caviumnetworks.com wrote:

 On Wed, Dec 17, 2014 at 03:10:11PM +0200, Ciprian Barbu wrote:

 Signed-off-by: Ciprian Barbu ciprian.ba...@linaro.org
 ---
   test/validation/odp_schedule.c | 63
 ++
   1 file changed, 58 insertions(+), 5 deletions(-)

 diff --git a/test/validation/odp_schedule.c
 b/test/validation/odp_schedule.c
 index 31be742..bdbcf77 100644
 --- a/test/validation/odp_schedule.c
 +++ b/test/validation/odp_schedule.c
 @@ -11,9 +11,11 @@
   #define MSG_POOL_SIZE(4*1024*1024)
   #define QUEUES_PER_PRIO  16
   #define BUF_SIZE 64
 -#define TEST_NUM_BUFS100
 +#define NUM_BUFS 100
   #define BURST_BUF_SIZE   4
 -#define TEST_NUM_BUFS_EXCL   1
 +#define NUM_BUFS_EXCL1
 +#define NUM_BUFS_PAUSE   1000
 +#define NUM_BUFS_BEFORE_PAUSE10

   #define GLOBALS_SHM_NAME test_globals
   #define MSG_POOL_NAMEmsg_pool
 @@ -229,7 +231,7 @@ static void
 schedule_common(odp_schedule_sync_t
 sync, int num_queues,
args.sync = sync;
args.num_queues = num_queues;
args.num_prio = num_prio;
 - args.num_bufs = TEST_NUM_BUFS;
 + args.num_bufs = NUM_BUFS;
args.num_cores = 1;
args.enable_schd_multi = enable_schd_multi;
args.enable_excl_atomic = 0;/* Not needed with a
 single
 core */
 @@ -261,9 +263,9 @@ static void
 parallel_execute(odp_schedule_sync_t
 sync, int num_queues,
thr_args-num_queues = num_queues;
thr_args-num_prio = num_prio;
if (enable_excl_atomic)
 - thr_args-num_bufs = TEST_NUM_BUFS_EXCL;
 + thr_args-num_bufs = NUM_BUFS_EXCL;
else
 - thr_args-num_bufs = TEST_NUM_BUFS;
 + thr_args-num_bufs = NUM_BUFS;
thr_args-num_cores = globals-core_count;
thr_args-enable_schd_multi = enable_schd_multi;
thr_args-enable_excl_atomic = enable_excl_atomic;
 @@ -459,6 +461,56 @@ static void
 test_schedule_multi_1q_mt_a_excl(void)
 ENABLE_EXCL_ATOMIC);
   }

 +static void test_schedule_pause_resume(void)
 +{
 + odp_queue_t queue;
 + odp_buffer_t buf;
 + odp_queue_t from;
 + int i;
 + int local_bufs = 0;
 +
 + queue = odp_queue_lookup(sched_0_0_n);
 + 

Re: [lng-odp] [PATCH] validation: add odp_schedule_pause and odp_schedule_resume tests

2015-01-20 Thread Bill Fischofer
My questions were answered.  For now scheduling caches are non-transparent
and applications wishing to pause scheduling must drain any cached events
prior to exiting the scheduling loop.  We can revisit this post v1.0 when
we discuss various recovery scenarios.
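
For reference, a minimal sketch of the drain pattern described above
(illustrative only, built from the pre-1.0 buffer-based calls used in the
patch under discussion; pause_and_drain() is just a name for this example):

#include <odp.h>

/* Illustrative sketch: stop taking new work, then drain whatever the
 * scheduler has already cached locally before leaving the schedule loop. */
static void pause_and_drain(void)
{
	odp_queue_t from;
	odp_buffer_t buf;

	odp_schedule_pause();

	/* Pausing only stops fetching new work; events already cached
	 * locally are still returned and must be consumed (or otherwise
	 * disposed of) before the thread exits its scheduling loop. */
	while ((buf = odp_schedule(&from, ODP_SCHED_NO_WAIT)) !=
	       ODP_BUFFER_INVALID)
		odp_buffer_free(buf);	/* or process / re-enqueue */
}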

On Tue, Jan 20, 2015 at 3:47 PM, Mike Holmes mike.hol...@linaro.org wrote:

 It is still not clear to me in writing that we want this, we did discuss
 it earlier but Jerin, Bill and Ola have questions on this thread and I am
 not sure they are all addressed.

 On 20 January 2015 at 16:34, Maxim Uvarov maxim.uva...@linaro.org wrote:

 Who review this patch please add review-by.

 Mike please add yours because it's validation patch.

 Maxim.


 On 01/20/2015 05:23 PM, Ciprian Barbu wrote:

 PING!

 On Wed, Jan 14, 2015 at 7:14 PM, Mike Holmes mike.hol...@linaro.org
 wrote:

 Without any clear change in sight,  lets test what we have, this has
 been on
 the list for a month

 On 14 January 2015 at 08:35, Ciprian Barbu ciprian.ba...@linaro.org
 wrote:

 On Wed, Jan 14, 2015 at 3:28 PM, Ola Liljedahl 
 ola.liljed...@linaro.org
 wrote:

 On 7 January 2015 at 20:41, Mike Holmes mike.hol...@linaro.org
 wrote:

 I am unsure if I need to pay attention to this for 0.7.0

 We need to have a decision (and implementation) for ODP 1.0 though.
 Scheduling and its semantics are important aspects of ODP.

 The odp_schedule_pause API is already documented and implemented, I
 didn't exactly catch from Petri if we will keep the behavior for 1.0,
 but what is the problem with covering this API in its current form for
 at least 0.7 and 0.8?

  On 7 January 2015 at 04:39, Ciprian Barbu ciprian.ba...@linaro.org
 wrote:

 On Tue, Jan 6, 2015 at 4:03 PM, Bill Fischofer
 bill.fischo...@linaro.org wrote:

 I think it's something we need to discuss during the sync call.

 On Tue, Jan 6, 2015 at 7:48 AM, Mike Holmes 
 mike.hol...@linaro.org
 wrote:

 Should a bug be made to track a needed change or is it important
 for
 1.0
 and needs to be in the delta doc ?

 On 6 January 2015 at 08:40, Bill Fischofer
 bill.fischo...@linaro.org
 wrote:

 Caches should be transparent.  While this may be needed here,
 it's
 a
 poor
 set of semantics to expose as part of the formal APIs.  This is
 definitely
 something we need to address.  My suggestion is that a
 odp_schedule_pause()
 should cause an implicit cache flush if the implementation is
 using a
 scheduling cache.  That way any cache being used is truly
 transparent
 and
 moreover there won't be unnecessary delays in event processing
 since
 who
 knows how long a pause may last?  Clearly it won't be brief since
 otherwise
 the application would not have bothered with a pause/resume in
 the
 first
 place.

 Sorry, I couldn't join you in the ODP call yesterday, mind if you
 give
 a brief update on what was decided?

  On Tue, Jan 6, 2015 at 7:17 AM, Ciprian Barbu
 ciprian.ba...@linaro.org
 wrote:

 On Wed, Dec 17, 2014 at 5:09 PM, Jerin Jacob
 jerin.ja...@caviumnetworks.com wrote:

 On Wed, Dec 17, 2014 at 03:10:11PM +0200, Ciprian Barbu wrote:

 Signed-off-by: Ciprian Barbu ciprian.ba...@linaro.org
 ---
   test/validation/odp_schedule.c | 63
 ++
   1 file changed, 58 insertions(+), 5 deletions(-)

 diff --git a/test/validation/odp_schedule.c
 b/test/validation/odp_schedule.c
 index 31be742..bdbcf77 100644
 --- a/test/validation/odp_schedule.c
 +++ b/test/validation/odp_schedule.c
 @@ -11,9 +11,11 @@
   #define MSG_POOL_SIZE(4*1024*1024)
   #define QUEUES_PER_PRIO  16
   #define BUF_SIZE 64
 -#define TEST_NUM_BUFS100
 +#define NUM_BUFS 100
   #define BURST_BUF_SIZE   4
 -#define TEST_NUM_BUFS_EXCL   1
 +#define NUM_BUFS_EXCL1
 +#define NUM_BUFS_PAUSE   1000
 +#define NUM_BUFS_BEFORE_PAUSE10

   #define GLOBALS_SHM_NAME test_globals
   #define MSG_POOL_NAMEmsg_pool
 @@ -229,7 +231,7 @@ static void
 schedule_common(odp_schedule_sync_t
 sync, int num_queues,
args.sync = sync;
args.num_queues = num_queues;
args.num_prio = num_prio;
 - args.num_bufs = TEST_NUM_BUFS;
 + args.num_bufs = NUM_BUFS;
args.num_cores = 1;
args.enable_schd_multi = enable_schd_multi;
args.enable_excl_atomic = 0;/* Not needed with a
 single
 core */
 @@ -261,9 +263,9 @@ static void
 parallel_execute(odp_schedule_sync_t
 sync, int num_queues,
thr_args-num_queues = num_queues;
thr_args-num_prio = num_prio;
if (enable_excl_atomic)
 - thr_args-num_bufs = TEST_NUM_BUFS_EXCL;
 + thr_args-num_bufs = NUM_BUFS_EXCL;
else
 - thr_args-num_bufs = TEST_NUM_BUFS;
 + thr_args-num_bufs = NUM_BUFS;
thr_args-num_cores = globals-core_count;
thr_args-enable_schd_multi = enable_schd_multi;
thr_args-enable_excl_atomic = enable_excl_atomic;
 @@ -459,6 

Re: [lng-odp] [PATCH] validation: add odp_schedule_pause and odp_schedule_resume tests

2015-01-20 Thread Jerin Jacob
On Tue, Jan 20, 2015 at 04:25:41PM -0600, Bill Fischofer wrote:
 My questions were answered.  For now scheduling caches are non-transparent
 and applications wishing to pause scheduling must drain any cached events
 prior to exiting the scheduling loop.  We can revisit this post v1.0 when
 we discuss various recovery scenarios.

+1

 
 On Tue, Jan 20, 2015 at 3:47 PM, Mike Holmes mike.hol...@linaro.org wrote:
 
  It is still not clear to me in writing that we want this, we did discuss
  it earlier but Jerin, Bill and Ola have questions on this thread and I am
  not sure they are all addressed.
 
  On 20 January 2015 at 16:34, Maxim Uvarov maxim.uva...@linaro.org wrote:
 
  Who review this patch please add review-by.
 
  Mike please add yours because it's validation patch.
 
  Maxim.
 
 
  On 01/20/2015 05:23 PM, Ciprian Barbu wrote:
 
  PING!
 
  On Wed, Jan 14, 2015 at 7:14 PM, Mike Holmes mike.hol...@linaro.org
  wrote:
 
  Without any clear change in sight,  lets test what we have, this has
  been on
  the list for a month
 
  On 14 January 2015 at 08:35, Ciprian Barbu ciprian.ba...@linaro.org
  wrote:
 
  On Wed, Jan 14, 2015 at 3:28 PM, Ola Liljedahl 
  ola.liljed...@linaro.org
  wrote:
 
  On 7 January 2015 at 20:41, Mike Holmes mike.hol...@linaro.org
  wrote:
 
  I am unsure if I need to pay attention to this for 0.7.0
 
  We need to have a decision (and implementation) for ODP 1.0 though.
  Scheduling and its semantics are important aspects of ODP.
 
  The odp_schedule_pause API is already documented and implemented, I
  didn't exactly catch from Petri if we will keep the behavior for 1.0,
  but what is the problem with covering this API in its current form for
  at least 0.7 and 0.8?
 
   On 7 January 2015 at 04:39, Ciprian Barbu ciprian.ba...@linaro.org
  wrote:
 
  On Tue, Jan 6, 2015 at 4:03 PM, Bill Fischofer
  bill.fischo...@linaro.org wrote:
 
  I think it's something we need to discuss during the sync call.
 
  On Tue, Jan 6, 2015 at 7:48 AM, Mike Holmes 
  mike.hol...@linaro.org
  wrote:
 
  Should a bug be made to track a needed change or is it important
  for
  1.0
  and needs to be in the delta doc ?
 
  On 6 January 2015 at 08:40, Bill Fischofer
  bill.fischo...@linaro.org
  wrote:
 
  Caches should be transparent.  While this may be needed here,
  it's
  a
  poor
  set of semantics to expose as part of the formal APIs.  This is
  definitely
  something we need to address.  My suggestion is that a
  odp_schedule_pause()
  should cause an implicit cache flush if the implementation is
  using a
  scheduling cache.  That way any cache being used is truly
  transparent
  and
  moreover there won't be unnecessary delays in event processing
  since
  who
  knows how long a pause may last?  Clearly it won't be brief since
  otherwise
  the application would not have bothered with a pause/resume in
  the
  first
  place.
 
  Sorry, I couldn't join you in the ODP call yesterday, mind if you
  give
  a brief update on what was decided?
 
   On Tue, Jan 6, 2015 at 7:17 AM, Ciprian Barbu
  ciprian.ba...@linaro.org
  wrote:
 
  On Wed, Dec 17, 2014 at 5:09 PM, Jerin Jacob
  jerin.ja...@caviumnetworks.com wrote:
 
  On Wed, Dec 17, 2014 at 03:10:11PM +0200, Ciprian Barbu wrote:
 
  Signed-off-by: Ciprian Barbu ciprian.ba...@linaro.org
  ---
test/validation/odp_schedule.c | 63
  ++
1 file changed, 58 insertions(+), 5 deletions(-)
 
  diff --git a/test/validation/odp_schedule.c
  b/test/validation/odp_schedule.c
  index 31be742..bdbcf77 100644
  --- a/test/validation/odp_schedule.c
  +++ b/test/validation/odp_schedule.c
  @@ -11,9 +11,11 @@
#define MSG_POOL_SIZE(4*1024*1024)
#define QUEUES_PER_PRIO  16
#define BUF_SIZE 64
  -#define TEST_NUM_BUFS100
  +#define NUM_BUFS 100
#define BURST_BUF_SIZE   4
  -#define TEST_NUM_BUFS_EXCL   1
  +#define NUM_BUFS_EXCL1
  +#define NUM_BUFS_PAUSE   1000
  +#define NUM_BUFS_BEFORE_PAUSE10
 
#define GLOBALS_SHM_NAME test_globals
#define MSG_POOL_NAMEmsg_pool
  @@ -229,7 +231,7 @@ static void
  schedule_common(odp_schedule_sync_t
  sync, int num_queues,
 args.sync = sync;
 args.num_queues = num_queues;
 args.num_prio = num_prio;
  - args.num_bufs = TEST_NUM_BUFS;
  + args.num_bufs = NUM_BUFS;
 args.num_cores = 1;
 args.enable_schd_multi = enable_schd_multi;
 args.enable_excl_atomic = 0;/* Not needed with a
  single
  core */
  @@ -261,9 +263,9 @@ static void
  parallel_execute(odp_schedule_sync_t
  sync, int num_queues,
 thr_args-num_queues = num_queues;
 thr_args-num_prio = num_prio;
 if (enable_excl_atomic)
  - thr_args-num_bufs = TEST_NUM_BUFS_EXCL;
  + thr_args-num_bufs = NUM_BUFS_EXCL;
 else
  - thr_args-num_bufs = 

Re: [lng-odp] [PATCH] validation: add odp_schedule_pause and odp_schedule_resume tests

2015-01-14 Thread Mike Holmes
Without any clear change in sight, let's test what we have; this has been
on the list for a month.

On 14 January 2015 at 08:35, Ciprian Barbu ciprian.ba...@linaro.org wrote:

 On Wed, Jan 14, 2015 at 3:28 PM, Ola Liljedahl ola.liljed...@linaro.org
 wrote:
  On 7 January 2015 at 20:41, Mike Holmes mike.hol...@linaro.org wrote:
  I am unsure if I need to pay attention to this for 0.7.0
  We need to have a decision (and implementation) for ODP 1.0 though.
  Scheduling and its semantics are important aspects of ODP.

 The odp_schedule_pause API is already documented and implemented, I
 didn't exactly catch from Petri if we will keep the behavior for 1.0,
 but what is the problem with covering this API in its current form for
 at least 0.7 and 0.8?

 
 
  On 7 January 2015 at 04:39, Ciprian Barbu ciprian.ba...@linaro.org
 wrote:
 
  On Tue, Jan 6, 2015 at 4:03 PM, Bill Fischofer
  bill.fischo...@linaro.org wrote:
   I think it's something we need to discuss during the sync call.
  
   On Tue, Jan 6, 2015 at 7:48 AM, Mike Holmes mike.hol...@linaro.org
   wrote:
  
   Should a bug be made to track a needed change or is it important for
   1.0
   and needs to be in the delta doc ?
  
   On 6 January 2015 at 08:40, Bill Fischofer 
 bill.fischo...@linaro.org
   wrote:
  
   Caches should be transparent.  While this may be needed here, it's
 a
   poor
   set of semantics to expose as part of the formal APIs.  This is
   definitely
   something we need to address.  My suggestion is that a
   odp_schedule_pause()
   should cause an implicit cache flush if the implementation is
 using a
   scheduling cache.  That way any cache being used is truly
 transparent
   and
   moreover there won't be unnecessary delays in event processing
 since
   who
   knows how long a pause may last?  Clearly it won't be brief since
   otherwise
   the application would not have bothered with a pause/resume in the
   first
   place.
 
  Sorry, I couldn't join you in the ODP call yesterday, mind if you give
  a brief update on what was decided?
 
  
   On Tue, Jan 6, 2015 at 7:17 AM, Ciprian Barbu
   ciprian.ba...@linaro.org
   wrote:
  
   On Wed, Dec 17, 2014 at 5:09 PM, Jerin Jacob
   jerin.ja...@caviumnetworks.com wrote:
On Wed, Dec 17, 2014 at 03:10:11PM +0200, Ciprian Barbu wrote:
Signed-off-by: Ciprian Barbu ciprian.ba...@linaro.org
---
 test/validation/odp_schedule.c | 63
++
 1 file changed, 58 insertions(+), 5 deletions(-)
   
diff --git a/test/validation/odp_schedule.c
b/test/validation/odp_schedule.c
index 31be742..bdbcf77 100644
--- a/test/validation/odp_schedule.c
+++ b/test/validation/odp_schedule.c
@@ -11,9 +11,11 @@
 #define MSG_POOL_SIZE(4*1024*1024)
 #define QUEUES_PER_PRIO  16
 #define BUF_SIZE 64
-#define TEST_NUM_BUFS100
+#define NUM_BUFS 100
 #define BURST_BUF_SIZE   4
-#define TEST_NUM_BUFS_EXCL   1
+#define NUM_BUFS_EXCL1
+#define NUM_BUFS_PAUSE   1000
+#define NUM_BUFS_BEFORE_PAUSE10
   
 #define GLOBALS_SHM_NAME test_globals
 #define MSG_POOL_NAMEmsg_pool
@@ -229,7 +231,7 @@ static void
schedule_common(odp_schedule_sync_t
sync, int num_queues,
  args.sync = sync;
  args.num_queues = num_queues;
  args.num_prio = num_prio;
- args.num_bufs = TEST_NUM_BUFS;
+ args.num_bufs = NUM_BUFS;
  args.num_cores = 1;
  args.enable_schd_multi = enable_schd_multi;
  args.enable_excl_atomic = 0;/* Not needed with a
 single
core */
@@ -261,9 +263,9 @@ static void
parallel_execute(odp_schedule_sync_t
sync, int num_queues,
  thr_args-num_queues = num_queues;
  thr_args-num_prio = num_prio;
  if (enable_excl_atomic)
- thr_args-num_bufs = TEST_NUM_BUFS_EXCL;
+ thr_args-num_bufs = NUM_BUFS_EXCL;
  else
- thr_args-num_bufs = TEST_NUM_BUFS;
+ thr_args-num_bufs = NUM_BUFS;
  thr_args-num_cores = globals-core_count;
  thr_args-enable_schd_multi = enable_schd_multi;
  thr_args-enable_excl_atomic = enable_excl_atomic;
@@ -459,6 +461,56 @@ static void
test_schedule_multi_1q_mt_a_excl(void)
   ENABLE_EXCL_ATOMIC);
 }
   
+static void test_schedule_pause_resume(void)
+{
+ odp_queue_t queue;
+ odp_buffer_t buf;
+ odp_queue_t from;
+ int i;
+ int local_bufs = 0;
+
+ queue = odp_queue_lookup(sched_0_0_n);
+ CU_ASSERT(queue != ODP_QUEUE_INVALID);
+
+ pool = odp_buffer_pool_lookup(MSG_POOL_NAME);
+ CU_ASSERT_FATAL(pool != ODP_BUFFER_POOL_INVALID);
+
+
+ for (i = 0; i  NUM_BUFS_PAUSE; i++) {
+ buf = 

Re: [lng-odp] [PATCH] validation: add odp_schedule_pause and odp_schedule_resume tests

2015-01-08 Thread Ciprian Barbu
On Wed, Jan 7, 2015 at 9:41 PM, Mike Holmes mike.hol...@linaro.org wrote:
 I am unsure if I need to pay attention to this for 0.7.0

Still a bit unclear after the discussion we had with Petri, but I think
we need to keep the behavior as it is, meaning applications need to
take care of the scheduling cache and consume everything after
issuing an odp_schedule_pause call. This would also mean my test case
behaves as expected.
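
For reference, the test case referred to above boils down to the following
flow (a condensed, illustrative sketch based on the patch quoted later in
this thread, not the exact patch text; the NUM_BUFS_* constants and CU_ASSERT
come from the test harness):

static void pause_resume_flow(void)
{
	odp_queue_t queue = odp_queue_lookup("sched_0_0_n");
	odp_buffer_pool_t pool = odp_buffer_pool_lookup("msg_pool");
	odp_queue_t from;
	odp_buffer_t buf;
	int i, local_bufs = 0;

	/* Fill the scheduled queue. */
	for (i = 0; i < NUM_BUFS_PAUSE; i++) {
		buf = odp_buffer_alloc(pool);
		odp_queue_enq(queue, buf);
	}

	/* Schedule a few events normally, then pause. */
	for (i = 0; i < NUM_BUFS_BEFORE_PAUSE; i++) {
		buf = odp_schedule(&from, ODP_SCHED_NO_WAIT);
		odp_buffer_free(buf);
	}

	odp_schedule_pause();

	/* After the pause only locally cached events should come out. */
	while ((buf = odp_schedule(&from, ODP_SCHED_NO_WAIT)) !=
	       ODP_BUFFER_INVALID) {
		odp_buffer_free(buf);
		local_bufs++;
	}

	/* A few cached events are allowed, but most of the enqueued
	 * events must still be waiting in the queue. */
	CU_ASSERT(local_bufs < NUM_BUFS_PAUSE - NUM_BUFS_BEFORE_PAUSE);
}

The full test also exercises odp_schedule_resume() afterwards (not shown
in this sketch).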


 On 7 January 2015 at 04:39, Ciprian Barbu ciprian.ba...@linaro.org wrote:

 On Tue, Jan 6, 2015 at 4:03 PM, Bill Fischofer
 bill.fischo...@linaro.org wrote:
  I think it's something we need to discuss during the sync call.
 
  On Tue, Jan 6, 2015 at 7:48 AM, Mike Holmes mike.hol...@linaro.org
  wrote:
 
  Should a bug be made to track a needed change or is it important for
  1.0
  and needs to be in the delta doc ?
 
  On 6 January 2015 at 08:40, Bill Fischofer bill.fischo...@linaro.org
  wrote:
 
  Caches should be transparent.  While this may be needed here, it's a
  poor
  set of semantics to expose as part of the formal APIs.  This is
  definitely
  something we need to address.  My suggestion is that a
  odp_schedule_pause()
  should cause an implicit cache flush if the implementation is using a
  scheduling cache.  That way any cache being used is truly transparent
  and
  moreover there won't be unnecessary delays in event processing since
  who
  knows how long a pause may last?  Clearly it won't be brief since
  otherwise
  the application would not have bothered with a pause/resume in the
  first
  place.

 Sorry, I couldn't join you in the ODP call yesterday, mind if you give
 a brief update on what was decided?

 
  On Tue, Jan 6, 2015 at 7:17 AM, Ciprian Barbu
  ciprian.ba...@linaro.org
  wrote:
 
  On Wed, Dec 17, 2014 at 5:09 PM, Jerin Jacob
  jerin.ja...@caviumnetworks.com wrote:
   On Wed, Dec 17, 2014 at 03:10:11PM +0200, Ciprian Barbu wrote:
   Signed-off-by: Ciprian Barbu ciprian.ba...@linaro.org
   ---
test/validation/odp_schedule.c | 63
   ++
1 file changed, 58 insertions(+), 5 deletions(-)
  
   diff --git a/test/validation/odp_schedule.c
   b/test/validation/odp_schedule.c
   index 31be742..bdbcf77 100644
   --- a/test/validation/odp_schedule.c
   +++ b/test/validation/odp_schedule.c
   @@ -11,9 +11,11 @@
#define MSG_POOL_SIZE(4*1024*1024)
#define QUEUES_PER_PRIO  16
#define BUF_SIZE 64
   -#define TEST_NUM_BUFS100
   +#define NUM_BUFS 100
#define BURST_BUF_SIZE   4
   -#define TEST_NUM_BUFS_EXCL   1
   +#define NUM_BUFS_EXCL1
   +#define NUM_BUFS_PAUSE   1000
   +#define NUM_BUFS_BEFORE_PAUSE10
  
#define GLOBALS_SHM_NAME test_globals
#define MSG_POOL_NAMEmsg_pool
   @@ -229,7 +231,7 @@ static void
   schedule_common(odp_schedule_sync_t
   sync, int num_queues,
 args.sync = sync;
 args.num_queues = num_queues;
 args.num_prio = num_prio;
   - args.num_bufs = TEST_NUM_BUFS;
   + args.num_bufs = NUM_BUFS;
 args.num_cores = 1;
 args.enable_schd_multi = enable_schd_multi;
 args.enable_excl_atomic = 0;/* Not needed with a single
   core */
   @@ -261,9 +263,9 @@ static void
   parallel_execute(odp_schedule_sync_t
   sync, int num_queues,
 thr_args-num_queues = num_queues;
 thr_args-num_prio = num_prio;
 if (enable_excl_atomic)
   - thr_args-num_bufs = TEST_NUM_BUFS_EXCL;
   + thr_args-num_bufs = NUM_BUFS_EXCL;
 else
   - thr_args-num_bufs = TEST_NUM_BUFS;
   + thr_args-num_bufs = NUM_BUFS;
 thr_args-num_cores = globals-core_count;
 thr_args-enable_schd_multi = enable_schd_multi;
 thr_args-enable_excl_atomic = enable_excl_atomic;
   @@ -459,6 +461,56 @@ static void
   test_schedule_multi_1q_mt_a_excl(void)
  ENABLE_EXCL_ATOMIC);
}
  
   +static void test_schedule_pause_resume(void)
   +{
   + odp_queue_t queue;
   + odp_buffer_t buf;
   + odp_queue_t from;
   + int i;
   + int local_bufs = 0;
   +
   + queue = odp_queue_lookup(sched_0_0_n);
   + CU_ASSERT(queue != ODP_QUEUE_INVALID);
   +
   + pool = odp_buffer_pool_lookup(MSG_POOL_NAME);
   + CU_ASSERT_FATAL(pool != ODP_BUFFER_POOL_INVALID);
   +
   +
   + for (i = 0; i  NUM_BUFS_PAUSE; i++) {
   + buf = odp_buffer_alloc(pool);
   + CU_ASSERT(buf != ODP_BUFFER_INVALID);
   + odp_queue_enq(queue, buf);
   + }
   +
   + for (i = 0; i  NUM_BUFS_BEFORE_PAUSE; i++) {
   + buf = odp_schedule(from, ODP_SCHED_NO_WAIT);
   + CU_ASSERT(from == queue);
   + odp_buffer_free(buf);
   + }
   +
   + odp_schedule_pause();
   +
   + while (1) {
   + buf = odp_schedule(from, 

Re: [lng-odp] [PATCH] validation: add odp_schedule_pause and odp_schedule_resume tests

2015-01-08 Thread Bill Fischofer
That certainly seems to be the upshot for now from yesterday's
discussions.  Whether this is something that will persist longer-term
remains to be seen.

The net is that if you want to pause, there is a certain amount of choreography
that the application needs to perform to do such role-switching in a
graceful manner.  A good reason, perhaps, to consider designing the
application so that pauses are not needed.

We still need to discuss how to deal with potentially cached work if we
want to be able to support a more robust programming model in which
threads/processes can fail without killing the entire application.
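
A sketch of that choreography, under the same assumptions as the test in
this thread (illustrative only; do_control_plane_work() is a placeholder
for whatever the thread switches roles to do):

#include <odp.h>

static void do_control_plane_work(void)
{
	/* Placeholder for the non-scheduling role. */
}

/* Illustrative sketch of a graceful role switch for a worker thread. */
static void worker_role_switch(void)
{
	odp_queue_t from;
	odp_buffer_t buf;

	/* 1. Stop taking new work and drain the local scheduler cache. */
	odp_schedule_pause();
	while ((buf = odp_schedule(&from, ODP_SCHED_NO_WAIT)) !=
	       ODP_BUFFER_INVALID)
		odp_buffer_free(buf);

	/* 2. Safe to leave the scheduling loop and do other work. */
	do_control_plane_work();

	/* 3. Rejoin event processing. */
	odp_schedule_resume();
	while (1) {
		buf = odp_schedule(&from, ODP_SCHED_NO_WAIT);
		if (buf == ODP_BUFFER_INVALID)
			continue;
		/* ... process the event ... */
		odp_buffer_free(buf);
	}
}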

On Thu, Jan 8, 2015 at 6:27 AM, Ciprian Barbu ciprian.ba...@linaro.org
wrote:

 On Wed, Jan 7, 2015 at 9:41 PM, Mike Holmes mike.hol...@linaro.org
 wrote:
  I am unsure if I need to pay attention to this for 0.7.0

 Still abit unclear after the discussion we had with Petri, but I think
 we need to keep the behavior as it is, meaning applications need to
 take care of the scheduling cache, and consume everything after
 issuing an odp_schedule_pause call. This would also mean my test case
 behaves as expected.

 
  On 7 January 2015 at 04:39, Ciprian Barbu ciprian.ba...@linaro.org
 wrote:
 
  On Tue, Jan 6, 2015 at 4:03 PM, Bill Fischofer
  bill.fischo...@linaro.org wrote:
   I think it's something we need to discuss during the sync call.
  
   On Tue, Jan 6, 2015 at 7:48 AM, Mike Holmes mike.hol...@linaro.org
   wrote:
  
   Should a bug be made to track a needed change or is it important for
   1.0
   and needs to be in the delta doc ?
  
   On 6 January 2015 at 08:40, Bill Fischofer 
 bill.fischo...@linaro.org
   wrote:
  
   Caches should be transparent.  While this may be needed here, it's a
   poor
   set of semantics to expose as part of the formal APIs.  This is
   definitely
   something we need to address.  My suggestion is that a
   odp_schedule_pause()
   should cause an implicit cache flush if the implementation is using
 a
   scheduling cache.  That way any cache being used is truly
 transparent
   and
   moreover there won't be unnecessary delays in event processing since
   who
   knows how long a pause may last?  Clearly it won't be brief since
   otherwise
   the application would not have bothered with a pause/resume in the
   first
   place.
 
  Sorry, I couldn't join you in the ODP call yesterday, mind if you give
  a brief update on what was decided?
 
  
   On Tue, Jan 6, 2015 at 7:17 AM, Ciprian Barbu
   ciprian.ba...@linaro.org
   wrote:
  
   On Wed, Dec 17, 2014 at 5:09 PM, Jerin Jacob
   jerin.ja...@caviumnetworks.com wrote:
On Wed, Dec 17, 2014 at 03:10:11PM +0200, Ciprian Barbu wrote:
Signed-off-by: Ciprian Barbu ciprian.ba...@linaro.org
---
 test/validation/odp_schedule.c | 63
++
 1 file changed, 58 insertions(+), 5 deletions(-)
   
diff --git a/test/validation/odp_schedule.c
b/test/validation/odp_schedule.c
index 31be742..bdbcf77 100644
--- a/test/validation/odp_schedule.c
+++ b/test/validation/odp_schedule.c
@@ -11,9 +11,11 @@
 #define MSG_POOL_SIZE(4*1024*1024)
 #define QUEUES_PER_PRIO  16
 #define BUF_SIZE 64
-#define TEST_NUM_BUFS100
+#define NUM_BUFS 100
 #define BURST_BUF_SIZE   4
-#define TEST_NUM_BUFS_EXCL   1
+#define NUM_BUFS_EXCL1
+#define NUM_BUFS_PAUSE   1000
+#define NUM_BUFS_BEFORE_PAUSE10
   
 #define GLOBALS_SHM_NAME test_globals
 #define MSG_POOL_NAMEmsg_pool
@@ -229,7 +231,7 @@ static void
schedule_common(odp_schedule_sync_t
sync, int num_queues,
  args.sync = sync;
  args.num_queues = num_queues;
  args.num_prio = num_prio;
- args.num_bufs = TEST_NUM_BUFS;
+ args.num_bufs = NUM_BUFS;
  args.num_cores = 1;
  args.enable_schd_multi = enable_schd_multi;
  args.enable_excl_atomic = 0;/* Not needed with a
 single
core */
@@ -261,9 +263,9 @@ static void
parallel_execute(odp_schedule_sync_t
sync, int num_queues,
  thr_args-num_queues = num_queues;
  thr_args-num_prio = num_prio;
  if (enable_excl_atomic)
- thr_args-num_bufs = TEST_NUM_BUFS_EXCL;
+ thr_args-num_bufs = NUM_BUFS_EXCL;
  else
- thr_args-num_bufs = TEST_NUM_BUFS;
+ thr_args-num_bufs = NUM_BUFS;
  thr_args-num_cores = globals-core_count;
  thr_args-enable_schd_multi = enable_schd_multi;
  thr_args-enable_excl_atomic = enable_excl_atomic;
@@ -459,6 +461,56 @@ static void
test_schedule_multi_1q_mt_a_excl(void)
   ENABLE_EXCL_ATOMIC);
 }
   
+static void test_schedule_pause_resume(void)
+{
+ odp_queue_t queue;
+ odp_buffer_t buf;
+ odp_queue_t 

Re: [lng-odp] [PATCH] validation: add odp_schedule_pause and odp_schedule_resume tests

2015-01-07 Thread Ciprian Barbu
On Tue, Jan 6, 2015 at 4:03 PM, Bill Fischofer
bill.fischo...@linaro.org wrote:
 I think it's something we need to discuss during the sync call.

 On Tue, Jan 6, 2015 at 7:48 AM, Mike Holmes mike.hol...@linaro.org wrote:

 Should a bug be made to track a needed change or is it important for 1.0
 and needs to be in the delta doc ?

 On 6 January 2015 at 08:40, Bill Fischofer bill.fischo...@linaro.org
 wrote:

 Caches should be transparent.  While this may be needed here, it's a poor
 set of semantics to expose as part of the formal APIs.  This is definitely
 something we need to address.  My suggestion is that a odp_schedule_pause()
 should cause an implicit cache flush if the implementation is using a
 scheduling cache.  That way any cache being used is truly transparent and
 moreover there won't be unnecessary delays in event processing since who
 knows how long a pause may last?  Clearly it won't be brief since otherwise
 the application would not have bothered with a pause/resume in the first
 place.

Sorry, I couldn't join you in the ODP call yesterday, mind if you give
a brief update on what was decided?


 On Tue, Jan 6, 2015 at 7:17 AM, Ciprian Barbu ciprian.ba...@linaro.org
 wrote:

 On Wed, Dec 17, 2014 at 5:09 PM, Jerin Jacob
 jerin.ja...@caviumnetworks.com wrote:
  On Wed, Dec 17, 2014 at 03:10:11PM +0200, Ciprian Barbu wrote:
  Signed-off-by: Ciprian Barbu ciprian.ba...@linaro.org
  ---
   test/validation/odp_schedule.c | 63
  ++
   1 file changed, 58 insertions(+), 5 deletions(-)
 
  diff --git a/test/validation/odp_schedule.c
  b/test/validation/odp_schedule.c
  index 31be742..bdbcf77 100644
  --- a/test/validation/odp_schedule.c
  +++ b/test/validation/odp_schedule.c
  @@ -11,9 +11,11 @@
   #define MSG_POOL_SIZE (4*1024*1024)
   #define QUEUES_PER_PRIO  16
   #define BUF_SIZE 64
  -#define TEST_NUM_BUFS 100
  +#define NUM_BUFS 100
   #define BURST_BUF_SIZE   4
  -#define TEST_NUM_BUFS_EXCL   1
  +#define NUM_BUFS_EXCL 1
  +#define NUM_BUFS_PAUSE   1000
  +#define NUM_BUFS_BEFORE_PAUSE 10
 
   #define GLOBALS_SHM_NAME "test_globals"
   #define MSG_POOL_NAME "msg_pool"
  @@ -229,7 +231,7 @@ static void schedule_common(odp_schedule_sync_t
  sync, int num_queues,
args.sync = sync;
args.num_queues = num_queues;
args.num_prio = num_prio;
  - args.num_bufs = TEST_NUM_BUFS;
  + args.num_bufs = NUM_BUFS;
args.num_cores = 1;
args.enable_schd_multi = enable_schd_multi;
args.enable_excl_atomic = 0;/* Not needed with a single
  core */
  @@ -261,9 +263,9 @@ static void parallel_execute(odp_schedule_sync_t
  sync, int num_queues,
 thr_args->num_queues = num_queues;
 thr_args->num_prio = num_prio;
 if (enable_excl_atomic)
  - thr_args->num_bufs = TEST_NUM_BUFS_EXCL;
  + thr_args->num_bufs = NUM_BUFS_EXCL;
 else
  - thr_args->num_bufs = TEST_NUM_BUFS;
  + thr_args->num_bufs = NUM_BUFS;
 thr_args->num_cores = globals->core_count;
 thr_args->enable_schd_multi = enable_schd_multi;
 thr_args->enable_excl_atomic = enable_excl_atomic;
  @@ -459,6 +461,56 @@ static void
  test_schedule_multi_1q_mt_a_excl(void)
 ENABLE_EXCL_ATOMIC);
   }
 
  +static void test_schedule_pause_resume(void)
  +{
  + odp_queue_t queue;
  + odp_buffer_t buf;
  + odp_queue_t from;
  + int i;
  + int local_bufs = 0;
  +
  + queue = odp_queue_lookup("sched_0_0_n");
  + CU_ASSERT(queue != ODP_QUEUE_INVALID);
  +
  + pool = odp_buffer_pool_lookup(MSG_POOL_NAME);
  + CU_ASSERT_FATAL(pool != ODP_BUFFER_POOL_INVALID);
  +
  +
  + for (i = 0; i < NUM_BUFS_PAUSE; i++) {
  + buf = odp_buffer_alloc(pool);
  + CU_ASSERT(buf != ODP_BUFFER_INVALID);
  + odp_queue_enq(queue, buf);
  + }
  +
  + for (i = 0; i < NUM_BUFS_BEFORE_PAUSE; i++) {
  + buf = odp_schedule(&from, ODP_SCHED_NO_WAIT);
  + CU_ASSERT(from == queue);
  + odp_buffer_free(buf);
  + }
  +
  + odp_schedule_pause();
  +
  + while (1) {
  + buf = odp_schedule(&from, ODP_SCHED_NO_WAIT);
  + if (buf == ODP_BUFFER_INVALID)
  + break;
  +
  + CU_ASSERT(from == queue);
  + odp_buffer_free(buf);
  + local_bufs++;
  + }
  +
  + CU_ASSERT(local_bufs < NUM_BUFS_PAUSE - NUM_BUFS_BEFORE_PAUSE);
 
  What is the expected behavior here? Shouldn't it be
  CU_ASSERT(local_bufs == 0),
  meaning a complete pause?

 Sorry about the delay, I've been playing around with mutt and I must
 have accidentally marked this email as read.
 The explanation here is that after pausing the scheduling, there might
 still be locally reserved buffers (see the 

Re: [lng-odp] [PATCH] validation: add odp_schedule_pause and odp_schedule_resume tests

2015-01-07 Thread Mike Holmes
I am unsure if I need to pay attention to this for 0.7.0

On 7 January 2015 at 04:39, Ciprian Barbu ciprian.ba...@linaro.org wrote:

 On Tue, Jan 6, 2015 at 4:03 PM, Bill Fischofer
 bill.fischo...@linaro.org wrote:
  I think it's something we need to discuss during the sync call.
 
  On Tue, Jan 6, 2015 at 7:48 AM, Mike Holmes mike.hol...@linaro.org
 wrote:
 
  Should a bug be made to track a needed change, or is it important for 1.0
  and needs to be in the delta doc?
 
  On 6 January 2015 at 08:40, Bill Fischofer bill.fischo...@linaro.org
  wrote:
 
  Caches should be transparent.  While this may be needed here, it's a
 poor
  set of semantics to expose as part of the formal APIs.  This is
 definitely
  something we need to address.  My suggestion is that an
 odp_schedule_pause()
  should cause an implicit cache flush if the implementation is using a
  scheduling cache.  That way any cache being used is truly transparent
 and
  moreover there won't be unnecessary delays in event processing since
 who
  knows how long a pause may last?  Clearly it won't be brief since
 otherwise
  the application would not have bothered with a pause/resume in the
 first
  place.

 Sorry, I couldn't join you in the ODP call yesterday; would you mind giving
 a brief update on what was decided?

 
  On Tue, Jan 6, 2015 at 7:17 AM, Ciprian Barbu 
 ciprian.ba...@linaro.org
  wrote:
 
  On Wed, Dec 17, 2014 at 5:09 PM, Jerin Jacob
  jerin.ja...@caviumnetworks.com wrote:
   On Wed, Dec 17, 2014 at 03:10:11PM +0200, Ciprian Barbu wrote:
   Signed-off-by: Ciprian Barbu ciprian.ba...@linaro.org
   ---
test/validation/odp_schedule.c | 63
   ++
1 file changed, 58 insertions(+), 5 deletions(-)
  
   diff --git a/test/validation/odp_schedule.c
   b/test/validation/odp_schedule.c
   index 31be742..bdbcf77 100644
   --- a/test/validation/odp_schedule.c
   +++ b/test/validation/odp_schedule.c
   @@ -11,9 +11,11 @@
#define MSG_POOL_SIZE (4*1024*1024)
#define QUEUES_PER_PRIO  16
#define BUF_SIZE 64
   -#define TEST_NUM_BUFS 100
   +#define NUM_BUFS 100
#define BURST_BUF_SIZE   4
   -#define TEST_NUM_BUFS_EXCL   1
   +#define NUM_BUFS_EXCL 1
   +#define NUM_BUFS_PAUSE   1000
   +#define NUM_BUFS_BEFORE_PAUSE 10
  
#define GLOBALS_SHM_NAME "test_globals"
#define MSG_POOL_NAME "msg_pool"
   @@ -229,7 +231,7 @@ static void schedule_common(odp_schedule_sync_t
   sync, int num_queues,
 args.sync = sync;
 args.num_queues = num_queues;
 args.num_prio = num_prio;
   - args.num_bufs = TEST_NUM_BUFS;
   + args.num_bufs = NUM_BUFS;
 args.num_cores = 1;
 args.enable_schd_multi = enable_schd_multi;
 args.enable_excl_atomic = 0;/* Not needed with a single
   core */
   @@ -261,9 +263,9 @@ static void
 parallel_execute(odp_schedule_sync_t
   sync, int num_queues,
 thr_args->num_queues = num_queues;
 thr_args->num_prio = num_prio;
 if (enable_excl_atomic)
   - thr_args->num_bufs = TEST_NUM_BUFS_EXCL;
   + thr_args->num_bufs = NUM_BUFS_EXCL;
 else
   - thr_args->num_bufs = TEST_NUM_BUFS;
   + thr_args->num_bufs = NUM_BUFS;
 thr_args->num_cores = globals->core_count;
 thr_args->enable_schd_multi = enable_schd_multi;
 thr_args->enable_excl_atomic = enable_excl_atomic;
   @@ -459,6 +461,56 @@ static void
   test_schedule_multi_1q_mt_a_excl(void)
  ENABLE_EXCL_ATOMIC);
}
  
   +static void test_schedule_pause_resume(void)
   +{
   + odp_queue_t queue;
   + odp_buffer_t buf;
   + odp_queue_t from;
   + int i;
   + int local_bufs = 0;
   +
   + queue = odp_queue_lookup("sched_0_0_n");
   + CU_ASSERT(queue != ODP_QUEUE_INVALID);
   +
   + pool = odp_buffer_pool_lookup(MSG_POOL_NAME);
   + CU_ASSERT_FATAL(pool != ODP_BUFFER_POOL_INVALID);
   +
   +
   + for (i = 0; i < NUM_BUFS_PAUSE; i++) {
   + buf = odp_buffer_alloc(pool);
   + CU_ASSERT(buf != ODP_BUFFER_INVALID);
   + odp_queue_enq(queue, buf);
   + }
   +
   + for (i = 0; i < NUM_BUFS_BEFORE_PAUSE; i++) {
   + buf = odp_schedule(&from, ODP_SCHED_NO_WAIT);
   + CU_ASSERT(from == queue);
   + odp_buffer_free(buf);
   + }
   +
   + odp_schedule_pause();
   +
   + while (1) {
   + buf = odp_schedule(&from, ODP_SCHED_NO_WAIT);
   + if (buf == ODP_BUFFER_INVALID)
   + break;
   +
   + CU_ASSERT(from == queue);
   + odp_buffer_free(buf);
   + local_bufs++;
   + }
   +
   + CU_ASSERT(local_bufs < NUM_BUFS_PAUSE -
 NUM_BUFS_BEFORE_PAUSE);
  
   What is the expected behavior here? Shouldn't it be
   CU_ASSERT(local_bufs == 0)?
   

Re: [lng-odp] [PATCH] validation: add odp_schedule_pause and odp_schedule_resume tests

2015-01-06 Thread Ciprian Barbu
On Wed, Dec 17, 2014 at 5:09 PM, Jerin Jacob
jerin.ja...@caviumnetworks.com wrote:
 On Wed, Dec 17, 2014 at 03:10:11PM +0200, Ciprian Barbu wrote:
 Signed-off-by: Ciprian Barbu ciprian.ba...@linaro.org
 ---
  test/validation/odp_schedule.c | 63 
 ++
  1 file changed, 58 insertions(+), 5 deletions(-)

 diff --git a/test/validation/odp_schedule.c b/test/validation/odp_schedule.c
 index 31be742..bdbcf77 100644
 --- a/test/validation/odp_schedule.c
 +++ b/test/validation/odp_schedule.c
 @@ -11,9 +11,11 @@
  #define MSG_POOL_SIZE (4*1024*1024)
  #define QUEUES_PER_PRIO  16
  #define BUF_SIZE 64
 -#define TEST_NUM_BUFS 100
 +#define NUM_BUFS 100
  #define BURST_BUF_SIZE   4
 -#define TEST_NUM_BUFS_EXCL   1
 +#define NUM_BUFS_EXCL 1
 +#define NUM_BUFS_PAUSE   1000
 +#define NUM_BUFS_BEFORE_PAUSE 10

  #define GLOBALS_SHM_NAME "test_globals"
  #define MSG_POOL_NAME "msg_pool"
 @@ -229,7 +231,7 @@ static void schedule_common(odp_schedule_sync_t sync, 
 int num_queues,
   args.sync = sync;
   args.num_queues = num_queues;
   args.num_prio = num_prio;
 - args.num_bufs = TEST_NUM_BUFS;
 + args.num_bufs = NUM_BUFS;
   args.num_cores = 1;
   args.enable_schd_multi = enable_schd_multi;
   args.enable_excl_atomic = 0;/* Not needed with a single core */
 @@ -261,9 +263,9 @@ static void parallel_execute(odp_schedule_sync_t sync, 
 int num_queues,
   thr_args->num_queues = num_queues;
   thr_args->num_prio = num_prio;
   if (enable_excl_atomic)
 - thr_args->num_bufs = TEST_NUM_BUFS_EXCL;
 + thr_args->num_bufs = NUM_BUFS_EXCL;
   else
 - thr_args->num_bufs = TEST_NUM_BUFS;
 + thr_args->num_bufs = NUM_BUFS;
   thr_args->num_cores = globals->core_count;
   thr_args->enable_schd_multi = enable_schd_multi;
   thr_args->enable_excl_atomic = enable_excl_atomic;
 @@ -459,6 +461,56 @@ static void test_schedule_multi_1q_mt_a_excl(void)
ENABLE_EXCL_ATOMIC);
  }

 +static void test_schedule_pause_resume(void)
 +{
 + odp_queue_t queue;
 + odp_buffer_t buf;
 + odp_queue_t from;
 + int i;
 + int local_bufs = 0;
 +
 + queue = odp_queue_lookup("sched_0_0_n");
 + CU_ASSERT(queue != ODP_QUEUE_INVALID);
 +
 + pool = odp_buffer_pool_lookup(MSG_POOL_NAME);
 + CU_ASSERT_FATAL(pool != ODP_BUFFER_POOL_INVALID);
 +
 +
 + for (i = 0; i < NUM_BUFS_PAUSE; i++) {
 + buf = odp_buffer_alloc(pool);
 + CU_ASSERT(buf != ODP_BUFFER_INVALID);
 + odp_queue_enq(queue, buf);
 + }
 +
 + for (i = 0; i < NUM_BUFS_BEFORE_PAUSE; i++) {
 + buf = odp_schedule(&from, ODP_SCHED_NO_WAIT);
 + CU_ASSERT(from == queue);
 + odp_buffer_free(buf);
 + }
 +
 + odp_schedule_pause();
 +
 + while (1) {
 + buf = odp_schedule(&from, ODP_SCHED_NO_WAIT);
 + if (buf == ODP_BUFFER_INVALID)
 + break;
 +
 + CU_ASSERT(from == queue);
 + odp_buffer_free(buf);
 + local_bufs++;
 + }
 +
 + CU_ASSERT(local_bufs < NUM_BUFS_PAUSE - NUM_BUFS_BEFORE_PAUSE);

 What is the expected behavior here? Shouldn't it be CU_ASSERT(local_bufs ==
 0),
 meaning a complete pause?

Sorry about the delay, I've been playing around with mutt and I must
have accidentally marked this email as read.
The explanation here is that after pausing the scheduling there might
still be locally reserved buffers (see the odp_schedule_pause
documentation). In linux-generic, for instance, the scheduler dequeues
buffers in bursts; odp_schedule_pause only stops further dequeues, so
buffers may still sit in these local 'reservoirs'. With that in mind,
the check above makes sure that only a limited number of buffers are
still scheduled after the pause; in other words, pausing takes effect
even though not every buffer has been drained yet.
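
To make this concrete, below is a minimal sketch (not part of the patch) of
the pause/drain/resume pattern the test exercises, assuming the linux-generic
behaviour described above where odp_schedule() can keep returning locally
cached buffers for a short while after the pause:

/* Minimal sketch only: pause, drain the thread-local scheduler cache,
 * then resume. Error handling is omitted for brevity. */
#include <odp.h>

static void pause_drain_resume(void)
{
	odp_queue_t from;
	odp_buffer_t buf;

	odp_schedule_pause();

	/* The scheduler may still return buffers it already prefetched into
	 * its local cache; keep asking with NO_WAIT until it runs dry. */
	while (1) {
		buf = odp_schedule(&from, ODP_SCHED_NO_WAIT);
		if (buf == ODP_BUFFER_INVALID)
			break;
		odp_buffer_free(buf); /* or process the event */
	}

	/* ... whatever required the pause happens here ... */

	odp_schedule_resume();
}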


 +
 + odp_schedule_resume();
 +
 + for (i = local_bufs + NUM_BUFS_BEFORE_PAUSE; i < NUM_BUFS_PAUSE; i++) {
 + buf = odp_schedule(&from, ODP_SCHED_WAIT);
 + CU_ASSERT(from == queue);
 + odp_buffer_free(buf);
 + }
 +}
 +
  static int create_queues(void)
  {
   int i, j, prios;
 @@ -594,6 +646,7 @@ struct CU_TestInfo test_odp_schedule[] = {
  {"schedule_multi_mq_mt_prio_a", test_schedule_multi_mq_mt_prio_a},
  {"schedule_multi_mq_mt_prio_o", test_schedule_multi_mq_mt_prio_o},
  {"schedule_multi_1q_mt_a_excl", test_schedule_multi_1q_mt_a_excl},
 + {"schedule_pause_resume",   test_schedule_pause_resume},
   CU_TEST_INFO_NULL,
  };

 --
 1.8.3.2


 ___
 lng-odp mailing list
 lng-odp@lists.linaro.org
 http://lists.linaro.org/mailman/listinfo/lng-odp

___
lng-odp mailing list

Re: [lng-odp] [PATCH] validation: add odp_schedule_pause and odp_schedule_resume tests

2015-01-06 Thread Mike Holmes
Should a bug be made to track a needed change, or is it important for 1.0
and needs to be in the delta doc?

On 6 January 2015 at 08:40, Bill Fischofer bill.fischo...@linaro.org
wrote:

 Caches should be transparent.  While this may be needed here, it's a poor
 set of semantics to expose as part of the formal APIs.  This is definitely
 something we need to address.  My suggestion is that an odp_schedule_pause()
 should cause an implicit cache flush if the implementation is using a
 scheduling cache.  That way any cache being used is truly transparent and
 moreover there won't be unnecessary delays in event processing since who
 knows how long a pause may last?  Clearly it won't be brief since otherwise
 the application would not have bothered with a pause/resume in the first
 place.

 On Tue, Jan 6, 2015 at 7:17 AM, Ciprian Barbu ciprian.ba...@linaro.org
 wrote:

 On Wed, Dec 17, 2014 at 5:09 PM, Jerin Jacob
 jerin.ja...@caviumnetworks.com wrote:
  On Wed, Dec 17, 2014 at 03:10:11PM +0200, Ciprian Barbu wrote:
  Signed-off-by: Ciprian Barbu ciprian.ba...@linaro.org
  ---
   test/validation/odp_schedule.c | 63
 ++
   1 file changed, 58 insertions(+), 5 deletions(-)
 
  diff --git a/test/validation/odp_schedule.c
 b/test/validation/odp_schedule.c
  index 31be742..bdbcf77 100644
  --- a/test/validation/odp_schedule.c
  +++ b/test/validation/odp_schedule.c
  @@ -11,9 +11,11 @@
   #define MSG_POOL_SIZE (4*1024*1024)
   #define QUEUES_PER_PRIO  16
   #define BUF_SIZE 64
  -#define TEST_NUM_BUFS 100
  +#define NUM_BUFS 100
   #define BURST_BUF_SIZE   4
  -#define TEST_NUM_BUFS_EXCL   1
  +#define NUM_BUFS_EXCL 1
  +#define NUM_BUFS_PAUSE   1000
  +#define NUM_BUFS_BEFORE_PAUSE 10
 
   #define GLOBALS_SHM_NAME "test_globals"
   #define MSG_POOL_NAME "msg_pool"
  @@ -229,7 +231,7 @@ static void schedule_common(odp_schedule_sync_t
 sync, int num_queues,
args.sync = sync;
args.num_queues = num_queues;
args.num_prio = num_prio;
  - args.num_bufs = TEST_NUM_BUFS;
  + args.num_bufs = NUM_BUFS;
args.num_cores = 1;
args.enable_schd_multi = enable_schd_multi;
args.enable_excl_atomic = 0;/* Not needed with a single core
 */
  @@ -261,9 +263,9 @@ static void parallel_execute(odp_schedule_sync_t
 sync, int num_queues,
thr_args->num_queues = num_queues;
thr_args->num_prio = num_prio;
if (enable_excl_atomic)
  - thr_args->num_bufs = TEST_NUM_BUFS_EXCL;
  + thr_args->num_bufs = NUM_BUFS_EXCL;
else
  - thr_args->num_bufs = TEST_NUM_BUFS;
  + thr_args->num_bufs = NUM_BUFS;
thr_args->num_cores = globals->core_count;
thr_args->enable_schd_multi = enable_schd_multi;
thr_args->enable_excl_atomic = enable_excl_atomic;
  @@ -459,6 +461,56 @@ static void test_schedule_multi_1q_mt_a_excl(void)
 ENABLE_EXCL_ATOMIC);
   }
 
  +static void test_schedule_pause_resume(void)
  +{
  + odp_queue_t queue;
  + odp_buffer_t buf;
  + odp_queue_t from;
  + int i;
  + int local_bufs = 0;
  +
  + queue = odp_queue_lookup("sched_0_0_n");
  + CU_ASSERT(queue != ODP_QUEUE_INVALID);
  +
  + pool = odp_buffer_pool_lookup(MSG_POOL_NAME);
  + CU_ASSERT_FATAL(pool != ODP_BUFFER_POOL_INVALID);
  +
  +
  + for (i = 0; i < NUM_BUFS_PAUSE; i++) {
  + buf = odp_buffer_alloc(pool);
  + CU_ASSERT(buf != ODP_BUFFER_INVALID);
  + odp_queue_enq(queue, buf);
  + }
  +
  + for (i = 0; i < NUM_BUFS_BEFORE_PAUSE; i++) {
  + buf = odp_schedule(&from, ODP_SCHED_NO_WAIT);
  + CU_ASSERT(from == queue);
  + odp_buffer_free(buf);
  + }
  +
  + odp_schedule_pause();
  +
  + while (1) {
  + buf = odp_schedule(&from, ODP_SCHED_NO_WAIT);
  + if (buf == ODP_BUFFER_INVALID)
  + break;
  +
  + CU_ASSERT(from == queue);
  + odp_buffer_free(buf);
  + local_bufs++;
  + }
  +
  + CU_ASSERT(local_bufs < NUM_BUFS_PAUSE - NUM_BUFS_BEFORE_PAUSE);
 
 What is the expected behavior here? Shouldn't it be
 CU_ASSERT(local_bufs == 0),
 meaning a complete pause?

 Sorry about the delay, I've been playing around with mutt and I must
 have accidentally marked this email as read.
 The explanation here is that after pausing the scheduling, there might
 still be locally reserved buffers (see the odp_schedule_pause
 documentation). For linux-generic for instance the scheduler dequeues
 buffers in bursts, odp_schedule_pause only stops further dequeues,
 buffers may still be in the 'reservoirs'. With that in mind, the check
 above makes sure that after pausing only a limited number of packets
 are still scheduled, or else said pausing seems to 

Re: [lng-odp] [PATCH] validation: add odp_schedule_pause and odp_schedule_resume tests

2015-01-06 Thread Bill Fischofer
Caches should be transparent.  While this may be needed here, it's a poor
set of semantics to expose as part of the formal APIs.  This is definitely
something we need to address.  My suggestion is that an odp_schedule_pause()
should cause an implicit cache flush if the implementation is using a
scheduling cache.  That way any cache being used is truly transparent and
moreover there won't be unnecessary delays in event processing since who
knows how long a pause may last?  Clearly it won't be brief since otherwise
the application would not have bothered with a pause/resume in the first
place.
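
To illustrate the suggestion, here is a rough, purely hypothetical sketch of
what an implicit flush on pause could look like in an implementation that
keeps a per-thread scheduling cache. This is not linux-generic code; the
cache variables and the function name are invented for the example, and only
odp_queue_enq() is a real ODP call:

/* Hypothetical illustration only -- not an existing implementation.
 * On pause, prefetched-but-undelivered events are pushed back to the
 * queue they came from, so the scheduling cache stays invisible to the
 * application. sched_cache, sched_cache_count and sched_cache_queue
 * are invented names for this sketch. */
#include <odp.h>

#define SCHED_CACHE_MAX 8

static __thread odp_buffer_t sched_cache[SCHED_CACHE_MAX];
static __thread int          sched_cache_count;
static __thread odp_queue_t  sched_cache_queue;
static __thread int          sched_paused;

void schedule_pause_with_flush(void)
{
	/* Return cached events to their origin queue. */
	while (sched_cache_count > 0) {
		sched_cache_count--;
		odp_queue_enq(sched_cache_queue,
			      sched_cache[sched_cache_count]);
	}

	/* From here on, the scheduler stops fetching new work. */
	sched_paused = 1;
}

A real implementation would also have to preserve event order and
atomic/ordered queue semantics while flushing, which is part of why this
behaviour needs to be discussed rather than just assumed.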

On Tue, Jan 6, 2015 at 7:17 AM, Ciprian Barbu ciprian.ba...@linaro.org
wrote:

 On Wed, Dec 17, 2014 at 5:09 PM, Jerin Jacob
 jerin.ja...@caviumnetworks.com wrote:
  On Wed, Dec 17, 2014 at 03:10:11PM +0200, Ciprian Barbu wrote:
  Signed-off-by: Ciprian Barbu ciprian.ba...@linaro.org
  ---
   test/validation/odp_schedule.c | 63
 ++
   1 file changed, 58 insertions(+), 5 deletions(-)
 
  diff --git a/test/validation/odp_schedule.c
 b/test/validation/odp_schedule.c
  index 31be742..bdbcf77 100644
  --- a/test/validation/odp_schedule.c
  +++ b/test/validation/odp_schedule.c
  @@ -11,9 +11,11 @@
   #define MSG_POOL_SIZE (4*1024*1024)
   #define QUEUES_PER_PRIO  16
   #define BUF_SIZE 64
  -#define TEST_NUM_BUFS 100
  +#define NUM_BUFS 100
   #define BURST_BUF_SIZE   4
  -#define TEST_NUM_BUFS_EXCL   1
  +#define NUM_BUFS_EXCL 1
  +#define NUM_BUFS_PAUSE   1000
  +#define NUM_BUFS_BEFORE_PAUSE 10
 
   #define GLOBALS_SHM_NAME "test_globals"
   #define MSG_POOL_NAME "msg_pool"
  @@ -229,7 +231,7 @@ static void schedule_common(odp_schedule_sync_t
 sync, int num_queues,
args.sync = sync;
args.num_queues = num_queues;
args.num_prio = num_prio;
  - args.num_bufs = TEST_NUM_BUFS;
  + args.num_bufs = NUM_BUFS;
args.num_cores = 1;
args.enable_schd_multi = enable_schd_multi;
args.enable_excl_atomic = 0;/* Not needed with a single core
 */
  @@ -261,9 +263,9 @@ static void parallel_execute(odp_schedule_sync_t
 sync, int num_queues,
thr_args->num_queues = num_queues;
thr_args->num_prio = num_prio;
if (enable_excl_atomic)
  - thr_args->num_bufs = TEST_NUM_BUFS_EXCL;
  + thr_args->num_bufs = NUM_BUFS_EXCL;
else
  - thr_args->num_bufs = TEST_NUM_BUFS;
  + thr_args->num_bufs = NUM_BUFS;
thr_args->num_cores = globals->core_count;
thr_args->enable_schd_multi = enable_schd_multi;
thr_args->enable_excl_atomic = enable_excl_atomic;
  @@ -459,6 +461,56 @@ static void test_schedule_multi_1q_mt_a_excl(void)
 ENABLE_EXCL_ATOMIC);
   }
 
  +static void test_schedule_pause_resume(void)
  +{
  + odp_queue_t queue;
  + odp_buffer_t buf;
  + odp_queue_t from;
  + int i;
  + int local_bufs = 0;
  +
  + queue = odp_queue_lookup("sched_0_0_n");
  + CU_ASSERT(queue != ODP_QUEUE_INVALID);
  +
  + pool = odp_buffer_pool_lookup(MSG_POOL_NAME);
  + CU_ASSERT_FATAL(pool != ODP_BUFFER_POOL_INVALID);
  +
  +
  + for (i = 0; i < NUM_BUFS_PAUSE; i++) {
  + buf = odp_buffer_alloc(pool);
  + CU_ASSERT(buf != ODP_BUFFER_INVALID);
  + odp_queue_enq(queue, buf);
  + }
  +
  + for (i = 0; i < NUM_BUFS_BEFORE_PAUSE; i++) {
  + buf = odp_schedule(&from, ODP_SCHED_NO_WAIT);
  + CU_ASSERT(from == queue);
  + odp_buffer_free(buf);
  + }
  +
  + odp_schedule_pause();
  +
  + while (1) {
  + buf = odp_schedule(&from, ODP_SCHED_NO_WAIT);
  + if (buf == ODP_BUFFER_INVALID)
  + break;
  +
  + CU_ASSERT(from == queue);
  + odp_buffer_free(buf);
  + local_bufs++;
  + }
  +
  + CU_ASSERT(local_bufs < NUM_BUFS_PAUSE - NUM_BUFS_BEFORE_PAUSE);
 
 What is the expected behavior here? Shouldn't it be
 CU_ASSERT(local_bufs == 0),
 meaning a complete pause?

 Sorry about the delay, I've been playing around with mutt and I must
 have accidentally marked this email as read.
 The explanation here is that after pausing the scheduling there might
 still be locally reserved buffers (see the odp_schedule_pause
 documentation). In linux-generic, for instance, the scheduler dequeues
 buffers in bursts; odp_schedule_pause only stops further dequeues, so
 buffers may still sit in these local 'reservoirs'. With that in mind,
 the check above makes sure that only a limited number of buffers are
 still scheduled after the pause; in other words, pausing takes effect
 even though not every buffer has been drained yet.

 
  +
  + odp_schedule_resume();
  +
 + for (i = local_bufs + NUM_BUFS_BEFORE_PAUSE; i < NUM_BUFS_PAUSE;
 i++) {
  + buf = 

Re: [lng-odp] [PATCH] validation: add odp_schedule_pause and odp_schedule_resume tests

2015-01-06 Thread Bill Fischofer
I think it's something we need to discuss during the sync call.

On Tue, Jan 6, 2015 at 7:48 AM, Mike Holmes mike.hol...@linaro.org wrote:

 Should a bug be made to track a needed change, or is it important for 1.0
 and needs to be in the delta doc?

 On 6 January 2015 at 08:40, Bill Fischofer bill.fischo...@linaro.org
 wrote:

 Caches should be transparent.  While this may be needed here, it's a poor
 set of semantics to expose as part of the formal APIs.  This is definitely
 something we need to address.  My suggestion is that an odp_schedule_pause()
 should cause an implicit cache flush if the implementation is using a
 scheduling cache.  That way any cache being used is truly transparent and
 moreover there won't be unnecessary delays in event processing since who
 knows how long a pause may last?  Clearly it won't be brief since otherwise
 the application would not have bothered with a pause/resume in the first
 place.

 On Tue, Jan 6, 2015 at 7:17 AM, Ciprian Barbu ciprian.ba...@linaro.org
 wrote:

 On Wed, Dec 17, 2014 at 5:09 PM, Jerin Jacob
 jerin.ja...@caviumnetworks.com wrote:
  On Wed, Dec 17, 2014 at 03:10:11PM +0200, Ciprian Barbu wrote:
  Signed-off-by: Ciprian Barbu ciprian.ba...@linaro.org
  ---
   test/validation/odp_schedule.c | 63
 ++
   1 file changed, 58 insertions(+), 5 deletions(-)
 
  diff --git a/test/validation/odp_schedule.c
 b/test/validation/odp_schedule.c
  index 31be742..bdbcf77 100644
  --- a/test/validation/odp_schedule.c
  +++ b/test/validation/odp_schedule.c
  @@ -11,9 +11,11 @@
   #define MSG_POOL_SIZE (4*1024*1024)
   #define QUEUES_PER_PRIO  16
   #define BUF_SIZE 64
  -#define TEST_NUM_BUFS 100
  +#define NUM_BUFS 100
   #define BURST_BUF_SIZE   4
  -#define TEST_NUM_BUFS_EXCL   1
  +#define NUM_BUFS_EXCL 1
  +#define NUM_BUFS_PAUSE   1000
  +#define NUM_BUFS_BEFORE_PAUSE 10
 
   #define GLOBALS_SHM_NAME "test_globals"
   #define MSG_POOL_NAME "msg_pool"
  @@ -229,7 +231,7 @@ static void schedule_common(odp_schedule_sync_t
 sync, int num_queues,
args.sync = sync;
args.num_queues = num_queues;
args.num_prio = num_prio;
  - args.num_bufs = TEST_NUM_BUFS;
  + args.num_bufs = NUM_BUFS;
args.num_cores = 1;
args.enable_schd_multi = enable_schd_multi;
args.enable_excl_atomic = 0;/* Not needed with a single
 core */
  @@ -261,9 +263,9 @@ static void parallel_execute(odp_schedule_sync_t
 sync, int num_queues,
thr_args->num_queues = num_queues;
thr_args->num_prio = num_prio;
if (enable_excl_atomic)
  - thr_args->num_bufs = TEST_NUM_BUFS_EXCL;
  + thr_args->num_bufs = NUM_BUFS_EXCL;
else
  - thr_args->num_bufs = TEST_NUM_BUFS;
  + thr_args->num_bufs = NUM_BUFS;
thr_args->num_cores = globals->core_count;
thr_args->enable_schd_multi = enable_schd_multi;
thr_args->enable_excl_atomic = enable_excl_atomic;
  @@ -459,6 +461,56 @@ static void
 test_schedule_multi_1q_mt_a_excl(void)
 ENABLE_EXCL_ATOMIC);
   }
 
  +static void test_schedule_pause_resume(void)
  +{
  + odp_queue_t queue;
  + odp_buffer_t buf;
  + odp_queue_t from;
  + int i;
  + int local_bufs = 0;
  +
  + queue = odp_queue_lookup("sched_0_0_n");
  + CU_ASSERT(queue != ODP_QUEUE_INVALID);
  +
  + pool = odp_buffer_pool_lookup(MSG_POOL_NAME);
  + CU_ASSERT_FATAL(pool != ODP_BUFFER_POOL_INVALID);
  +
  +
  + for (i = 0; i < NUM_BUFS_PAUSE; i++) {
  + buf = odp_buffer_alloc(pool);
  + CU_ASSERT(buf != ODP_BUFFER_INVALID);
  + odp_queue_enq(queue, buf);
  + }
  +
  + for (i = 0; i < NUM_BUFS_BEFORE_PAUSE; i++) {
  + buf = odp_schedule(&from, ODP_SCHED_NO_WAIT);
  + CU_ASSERT(from == queue);
  + odp_buffer_free(buf);
  + }
  +
  + odp_schedule_pause();
  +
  + while (1) {
  + buf = odp_schedule(&from, ODP_SCHED_NO_WAIT);
  + if (buf == ODP_BUFFER_INVALID)
  + break;
  +
  + CU_ASSERT(from == queue);
  + odp_buffer_free(buf);
  + local_bufs++;
  + }
  +
  + CU_ASSERT(local_bufs < NUM_BUFS_PAUSE - NUM_BUFS_BEFORE_PAUSE);
 
 What is the expected behavior here? Shouldn't it be
 CU_ASSERT(local_bufs == 0),
 meaning a complete pause?

 Sorry about the delay, I've been playing around with mutt and I must
 have accidentally marked this email as read.
 The explanation here is that after pausing the scheduling, there might
 still be locally reserved buffers (see the odp_schedule_pause
 documentation). For linux-generic for instance the scheduler dequeues
 buffers in bursts, odp_schedule_pause only stops further dequeues,
 buffers may still be in the 'reservoirs'. With 

Re: [lng-odp] [PATCH] validation: add odp_schedule_pause and odp_schedule_resume tests

2014-12-17 Thread Jerin Jacob
On Wed, Dec 17, 2014 at 03:10:11PM +0200, Ciprian Barbu wrote:
 Signed-off-by: Ciprian Barbu ciprian.ba...@linaro.org
 ---
  test/validation/odp_schedule.c | 63 
 ++
  1 file changed, 58 insertions(+), 5 deletions(-)
 
 diff --git a/test/validation/odp_schedule.c b/test/validation/odp_schedule.c
 index 31be742..bdbcf77 100644
 --- a/test/validation/odp_schedule.c
 +++ b/test/validation/odp_schedule.c
 @@ -11,9 +11,11 @@
  #define MSG_POOL_SIZE (4*1024*1024)
  #define QUEUES_PER_PRIO  16
  #define BUF_SIZE 64
 -#define TEST_NUM_BUFS 100
 +#define NUM_BUFS 100
  #define BURST_BUF_SIZE   4
 -#define TEST_NUM_BUFS_EXCL   1
 +#define NUM_BUFS_EXCL 1
 +#define NUM_BUFS_PAUSE   1000
 +#define NUM_BUFS_BEFORE_PAUSE 10
  
  #define GLOBALS_SHM_NAME "test_globals"
  #define MSG_POOL_NAME "msg_pool"
 @@ -229,7 +231,7 @@ static void schedule_common(odp_schedule_sync_t sync, int 
 num_queues,
   args.sync = sync;
   args.num_queues = num_queues;
   args.num_prio = num_prio;
 - args.num_bufs = TEST_NUM_BUFS;
 + args.num_bufs = NUM_BUFS;
   args.num_cores = 1;
   args.enable_schd_multi = enable_schd_multi;
   args.enable_excl_atomic = 0;/* Not needed with a single core */
 @@ -261,9 +263,9 @@ static void parallel_execute(odp_schedule_sync_t sync, 
 int num_queues,
   thr_args->num_queues = num_queues;
   thr_args->num_prio = num_prio;
   if (enable_excl_atomic)
 - thr_args->num_bufs = TEST_NUM_BUFS_EXCL;
 + thr_args->num_bufs = NUM_BUFS_EXCL;
   else
 - thr_args->num_bufs = TEST_NUM_BUFS;
 + thr_args->num_bufs = NUM_BUFS;
   thr_args->num_cores = globals->core_count;
   thr_args->enable_schd_multi = enable_schd_multi;
   thr_args->enable_excl_atomic = enable_excl_atomic;
 @@ -459,6 +461,56 @@ static void test_schedule_multi_1q_mt_a_excl(void)
ENABLE_EXCL_ATOMIC);
  }
  
 +static void test_schedule_pause_resume(void)
 +{
 + odp_queue_t queue;
 + odp_buffer_t buf;
 + odp_queue_t from;
 + int i;
 + int local_bufs = 0;
 +
 + queue = odp_queue_lookup("sched_0_0_n");
 + CU_ASSERT(queue != ODP_QUEUE_INVALID);
 +
 + pool = odp_buffer_pool_lookup(MSG_POOL_NAME);
 + CU_ASSERT_FATAL(pool != ODP_BUFFER_POOL_INVALID);
 +
 +
 + for (i = 0; i < NUM_BUFS_PAUSE; i++) {
 + buf = odp_buffer_alloc(pool);
 + CU_ASSERT(buf != ODP_BUFFER_INVALID);
 + odp_queue_enq(queue, buf);
 + }
 +
 + for (i = 0; i < NUM_BUFS_BEFORE_PAUSE; i++) {
 + buf = odp_schedule(&from, ODP_SCHED_NO_WAIT);
 + CU_ASSERT(from == queue);
 + odp_buffer_free(buf);
 + }
 +
 + odp_schedule_pause();
 +
 + while (1) {
 + buf = odp_schedule(&from, ODP_SCHED_NO_WAIT);
 + if (buf == ODP_BUFFER_INVALID)
 + break;
 +
 + CU_ASSERT(from == queue);
 + odp_buffer_free(buf);
 + local_bufs++;
 + }
 +
 + CU_ASSERT(local_bufs < NUM_BUFS_PAUSE - NUM_BUFS_BEFORE_PAUSE);

What is the expected behavior here? Shouldn't it be CU_ASSERT(local_bufs ==
0),
meaning a complete pause?
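
For reference, a small self-contained check of the numbers behind the
assertion above, using the constants from the patch; the sample value of 6
cached buffers is made up purely for illustration:

/* Worked numbers for the assertion above; the constants come from the
 * patch, the sample local_bufs value is hypothetical. */
#include <assert.h>

int main(void)
{
	const int num_bufs_pause = 1000;    /* NUM_BUFS_PAUSE */
	const int num_before_pause = 10;    /* NUM_BUFS_BEFORE_PAUSE */
	int local_bufs;

	/* Suppose a burst of 6 buffers was already sitting in the
	 * thread-local scheduler cache when the pause hit. */
	local_bufs = 6;
	assert(local_bufs < num_bufs_pause - num_before_pause); /* < 990 */

	/* A complete flush (local_bufs == 0) satisfies the same bound,
	 * so the test allows, but does not require, a full pause. */
	local_bufs = 0;
	assert(local_bufs < num_bufs_pause - num_before_pause);

	return 0;
}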

 +
 + odp_schedule_resume();
 +
 + for (i = local_bufs + NUM_BUFS_BEFORE_PAUSE; i < NUM_BUFS_PAUSE; i++) {
 + buf = odp_schedule(&from, ODP_SCHED_WAIT);
 + CU_ASSERT(from == queue);
 + odp_buffer_free(buf);
 + }
 +}
 +
  static int create_queues(void)
  {
   int i, j, prios;
 @@ -594,6 +646,7 @@ struct CU_TestInfo test_odp_schedule[] = {
  {"schedule_multi_mq_mt_prio_a", test_schedule_multi_mq_mt_prio_a},
  {"schedule_multi_mq_mt_prio_o", test_schedule_multi_mq_mt_prio_o},
  {"schedule_multi_1q_mt_a_excl", test_schedule_multi_1q_mt_a_excl},
 + {"schedule_pause_resume",   test_schedule_pause_resume},
   CU_TEST_INFO_NULL,
  };
  
 -- 
 1.8.3.2
 
 
 ___
 lng-odp mailing list
 lng-odp@lists.linaro.org
 http://lists.linaro.org/mailman/listinfo/lng-odp

___
lng-odp mailing list
lng-odp@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/lng-odp