What the NPS does in HW is this: each buffer you dequeue actually stays on the
queue (but is marked as consumed by the application), and you are given a "tag" for it.

When I want to enqueue the buffer to the next queue, I give the HW the tag I
got, not the buffer handle. The HW then fully dequeues the buffer from the
original queue before enqueuing it onto the new queue - but only after all
preceding (older) buffers on the original queue have been fully dequeued.

Freeing a buffer causes a dequeue from the original queue as well.

And I can also release a buffer from the original queue (give the HW the tag
back and say "you don't have to wait for this buffer any longer").

I hope this helps,
Gilad

Gilad Ben-Yossef
Software Architect
EZchip Technologies Ltd.
37 Israel Pollak Ave, Kiryat Gat 82025, Israel
Tel: +972-4-959-6666 ext. 576, Fax: +972-8-681-1483
Mobile: +972-52-826-0388, US Mobile: +1-973-826-0388
Email: gil...@ezchip.com, Web: http://www.ezchip.com

"Ethernet always wins."
        — Andy Bechtolsheim

From: Bill Fischofer [mailto:bill.fischo...@linaro.org]
Sent: Sunday, November 02, 2014 5:44 PM
To: Gilad Ben Yossef
Cc: Savolainen, Petri (NSN - FI/Espoo); lng-odp@lists.linaro.org
Subject: Re: [lng-odp] [PATCH] Scheduler atomic and ordered definitions

Thanks, Gilad.  Can you elaborate on that a bit more?  I can understand how 
step-by-step would potentially be easier to implement, but does it also capture 
the majority of expected application use cases?

It's a good confirmation that this is the correct approach for v1.0, though.

On Sun, Nov 2, 2014 at 9:26 AM, Gilad Ben Yossef <gil...@ezchip.com> wrote:

For what it's worth, as the rep of a SoC vendor that has ordered queues in HW: 
Petri's definition is actually preferred for us, even going forward ☺


Thanks,
Gilad


From: lng-odp-boun...@lists.linaro.org On Behalf Of Bill Fischofer
Sent: Friday, October 31, 2014 2:59 PM
To: Savolainen, Petri (NSN - FI/Espoo)
Cc: lng-odp@lists.linaro.org

Subject: Re: [lng-odp] [PATCH] Scheduler atomic and ordered definitions

This may well be a reasonable restriction for ODP v1.0, but I believe it's 
something we need to put on the list of "production grade" improvements for 
2015.

Bill

On Fri, Oct 31, 2014 at 7:57 AM, Savolainen, Petri (NSN - FI/Espoo) 
<petri.savolai...@nsn.com> wrote:
Yes, it's step-by-step, and I think it's the level of ordering we need for v1.0. 
Most SoCs can implement it, even when the HW scheduler has no ordering support 
but only atomic/parallel. Atomic scheduling, as defined this way, can be used to 
implement functionally correct ordered queues: throughput is not improved, but 
the queues function correctly.

-Petri


From: ext Bill Fischofer [mailto:bill.fischo...@linaro.org]
Sent: Friday, October 31, 2014 2:48 PM
To: Alexandru Badicioiu
Cc: Petri Savolainen; lng-odp@lists.linaro.org
Subject: Re: [lng-odp] [PATCH] Scheduler atomic and ordered definitions

I can well imagine the step-by-step order preservation being simpler to 
implement (in SW), but it would also seem to limit performance, since the only 
way to ensure end-to-end order preservation would be for each intermediate queue 
from ingress to egress to be an ordered queue.  A parallel queue anywhere in the 
chain would break things.

The question is: Is this restriction needed and/or sufficient for ODP v1.0?

On Fri, Oct 31, 2014 at 7:42 AM, Alexandru Badicioiu 
<alexandru.badici...@linaro.org> wrote:
"+ * The original enqueue order of the source queue is maintained when buffers 
are
+ * enqueued to their destination queue(s) before another schedule call"

Is this assuming that the order will always be restored at the next enqueue? I 
think there should be an option to explicitly indicate whether the next enqueue 
is supposed to restore the order or not, especially when packets move from queue 
to queue. Ordered queues are costly compared with ordinary ones.

Alex

On 31 October 2014 14:25, Petri Savolainen 
<petri.savolai...@linaro.org> wrote:
Improved atomic and ordered synchronisation definitions. Added
order skip function prototype.

Signed-off-by: Petri Savolainen <petri.savolai...@linaro.org>

---
This is the ordered queue definition (in patch format) promised
in the call yesterday.
---
 platform/linux-generic/include/api/odp_queue.h    | 31 +++++++++++++++-
 platform/linux-generic/include/api/odp_schedule.h | 45 ++++++++++++++++++-----
 2 files changed, 64 insertions(+), 12 deletions(-)

diff --git a/platform/linux-generic/include/api/odp_queue.h b/platform/linux-generic/include/api/odp_queue.h
index b8ac4bb..c0c3969 100644
--- a/platform/linux-generic/include/api/odp_queue.h
+++ b/platform/linux-generic/include/api/odp_queue.h
@@ -78,8 +78,35 @@ typedef int odp_schedule_prio_t;
 typedef int odp_schedule_sync_t;

 #define ODP_SCHED_SYNC_NONE     0  /**< Queue not synchronised */
-#define ODP_SCHED_SYNC_ATOMIC   1  /**< Atomic queue */
-#define ODP_SCHED_SYNC_ORDERED  2  /**< Ordered queue */
+
+/**
+ * Atomic queue synchronisation
+ *
+ * The scheduler gives buffers from a queue to a single core at a time. This
+ * serialises processing of the buffers from the source queue and helps the
+ * user avoid SW locking. Another schedule call implicitly releases the atomic
+ * synchronisation of the source queue and frees the scheduler to give buffers
+ * from the queue to other cores.
+ *
+ * The user can hint the scheduler to release the atomic synchronisation early
+ * with odp_schedule_release_atomic().
+ */
+#define ODP_SCHED_SYNC_ATOMIC   1
+
+/**
+ * Ordered queue synchronisation
+ *
+ * The scheduler may give out buffers to multiple cores for parallel processing.
+ * The original enqueue order of the source queue is maintained when buffers are
+ * enqueued to their destination queue(s) before another schedule call. Buffers
+ * from the same ordered (source) queue appear in their original order when
+ * dequeued from a destination queue. The destination queue type (POLL/SCHED) or
+ * synchronisation (NONE/ATOMIC/ORDERED) is not limited.
+ *
+ * The user can command the scheduler to skip ordering of a buffer with
+ * odp_schedule_skip_order().
+ */
+#define ODP_SCHED_SYNC_ORDERED  2

 /** Default queue synchronisation */
 #define ODP_SCHED_SYNC_DEFAULT  ODP_SCHED_SYNC_ATOMIC
diff --git a/platform/linux-generic/include/api/odp_schedule.h b/platform/linux-generic/include/api/odp_schedule.h
index 91fec10..2a1a642 100644
--- a/platform/linux-generic/include/api/odp_schedule.h
+++ b/platform/linux-generic/include/api/odp_schedule.h
@@ -52,8 +52,8 @@ uint64_t odp_schedule_wait_time(uint64_t ns);
  * for a buffer according to the wait parameter setting. Returns
  * ODP_BUFFER_INVALID if reaches end of the wait period.
  *
- * @param from    Output parameter for the source queue (where the buffer was
- *                dequeued from). Ignored if NULL.
+ * @param src     The source queue (output). Indicates from which queue the
+ *                buffer was dequeued. Ignored if NULL.
 * @param wait    Minimum time to wait for a buffer. Waits infinitely, if set to
  *                ODP_SCHED_WAIT. Does not wait, if set to ODP_SCHED_NO_WAIT.
  *                Use odp_schedule_wait_time() to convert time to other wait
@@ -61,7 +61,7 @@ uint64_t odp_schedule_wait_time(uint64_t ns);
  *
  * @return Next highest priority buffer, or ODP_BUFFER_INVALID
  */
-odp_buffer_t odp_schedule(odp_queue_t *from, uint64_t wait);
+odp_buffer_t odp_schedule(odp_queue_t *src, uint64_t wait);

 /**
  * Schedule one buffer
@@ -76,8 +76,8 @@ odp_buffer_t odp_schedule(odp_queue_t *from, uint64_t wait);
  *
  * User can exit the schedule loop without first calling odp_schedule_pause().
  *
- * @param from    Output parameter for the source queue (where the buffer was
- *                dequeued from). Ignored if NULL.
+ * @param src     The source queue (output). Indicates from which queue the
+ *                buffer was dequeued. Ignored if NULL.
 * @param wait    Minimum time to wait for a buffer. Waits infinitely, if set to
  *                ODP_SCHED_WAIT. Does not wait, if set to ODP_SCHED_NO_WAIT.
  *                Use odp_schedule_wait_time() to convert time to other wait
@@ -85,7 +85,7 @@ odp_buffer_t odp_schedule(odp_queue_t *from, uint64_t wait);
  *
  * @return Next highest priority buffer, or ODP_BUFFER_INVALID
  */
-odp_buffer_t odp_schedule_one(odp_queue_t *from, uint64_t wait);
+odp_buffer_t odp_schedule_one(odp_queue_t *src, uint64_t wait);


 /**
@@ -93,8 +93,8 @@ odp_buffer_t odp_schedule_one(odp_queue_t *from, uint64_t wait);
  *
  * Like odp_schedule(), but returns multiple buffers from a queue.
  *
- * @param from    Output parameter for the source queue (where the buffer was
- *                dequeued from). Ignored if NULL.
+ * @param src     The source queue (output). Indicates from which queue the
+ *                buffer was dequeued. Ignored if NULL.
 * @param wait    Minimum time to wait for a buffer. Waits infinitely, if set to
  *                ODP_SCHED_WAIT. Does not wait, if set to ODP_SCHED_NO_WAIT.
  *                Use odp_schedule_wait_time() to convert time to other wait
@@ -104,7 +104,7 @@ odp_buffer_t odp_schedule_one(odp_queue_t *from, uint64_t wait);
  *
  * @return Number of buffers outputed (0 ... num)
  */
-int odp_schedule_multi(odp_queue_t *from, uint64_t wait, odp_buffer_t out_buf[],
+int odp_schedule_multi(odp_queue_t *src, uint64_t wait, odp_buffer_t out_buf[],
                       unsigned int num);

 /**
@@ -129,11 +129,36 @@ void odp_schedule_pause(void);
 void odp_schedule_resume(void);

 /**
- * Release currently hold atomic context
+ * Release the current atomic context
+ *
+ * This call is valid only when the source queue has ATOMIC synchronisation. It
+ * hints the scheduler that the user has completed processing the critical
+ * section that needs atomic synchronisation. After the call, the scheduler is
+ * allowed to give the next buffer from the same queue to another core.
+ *
+ * Use of the call may increase parallelism and thus system performance, but
+ * care is needed when splitting processing into critical vs.
+ * non-critical sections.
  */
 void odp_schedule_release_atomic(void);

 /**
+ * Skip ordering of the buffer
+ *
+ * This call is valid only when the source queue has ORDERED synchronisation. It
+ * commands the scheduler to skip ordering of the buffer. The scheduler
+ * maintains ordering between two queues: source and destination. The dest
+ * parameter identifies the destination queue of that pair. After the call,
+ * ordering is no longer maintained for the buffer, but the user still owns it
+ * and can e.g. store it, free it or enqueue it (to the same or another queue).
+ *
+ * @param dest    Destination queue
+ * @param buf     Buffer
+ */
+void odp_schedule_skip_order(odp_queue_t dest, odp_buffer_t buf);
+
+
+/**
  * Number of scheduling priorities
  *
  * @return Number of scheduling priorities
--
2.1.1


_______________________________________________
lng-odp mailing list
lng-odp@lists.linaro.org
http://lists.linaro.org/mailman/listinfo/lng-odp

