On 2023-09-21 20:36, Jerin Jacob wrote:
On Mon, Sep 4, 2023 at 6:39 PM Mattias Rönnblom
<mattias.ronnb...@ericsson.com> wrote:

The purpose of the dispatcher library is to help reduce coupling in an
Eventdev-based DPDK application.

In addition, the dispatcher also provides a convenient and flexible
way for the application to use service cores for application-level
processing.

Signed-off-by: Mattias Rönnblom <mattias.ronnb...@ericsson.com>
Tested-by: Peter Nilsson <peter.j.nils...@ericsson.com>
Reviewed-by: Heng Wang <heng.w...@ericsson.com>


+static inline void
+evd_dispatch_events(struct rte_dispatcher *dispatcher,
+                   struct rte_dispatcher_lcore *lcore,
+                   struct rte_dispatcher_lcore_port *port,
+                   struct rte_event *events, uint16_t num_events)
+{
+       int i;
+       struct rte_event bursts[EVD_MAX_HANDLERS][num_events];
+       uint16_t burst_lens[EVD_MAX_HANDLERS] = { 0 };
+       uint16_t drop_count = 0;
+       uint16_t dispatch_count;
+       uint16_t dispatched = 0;
+
+       for (i = 0; i < num_events; i++) {
+               struct rte_event *event = &events[i];
+               int handler_idx;
+
+               handler_idx = evd_lookup_handler_idx(lcore, event);
+
+               if (unlikely(handler_idx < 0)) {
+                       drop_count++;
+                       continue;
+               }
+
+               bursts[handler_idx][burst_lens[handler_idx]] = *event;

It looks like it is caching the events in order to accumulate them? What happens if the flow or queue is
configured as RTE_SCHED_TYPE_ORDERED?


The ordering guarantees (and lack thereof) are covered in detail in the programming guide.

"Delivery order" (the order the callbacks see the events) is maintained only for events destined for the same handler.

I have considered adding a flags field to the create function, to allow (now, or in the future) an option to maintain strict ordering between handlers. In my mind, and in the applications where this pattern has been used in the past, the "clustering" of events going to the same handler is a feature, not a bug, since it much improves temporal cache locality and provides more opportunity for software prefetching/preloading. (Prefetching may be done already in the match function.)

If your event device does this clustering already, or if the application already implements this pattern itself, you will obviously see no gains. If neither of those is true, the application will likely suffer fewer cache misses, far outweighing the tiny bit of extra processing required in the event dispatcher.
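
To make the intended usage concrete, here is a minimal sketch of a match/process handler pair, assuming the rte_dispatcher_match_t/rte_dispatcher_process_t callback signatures of this patch version; the per-flow state and the my_flow_lookup() helper are purely hypothetical:

#include <stdbool.h>
#include <stdint.h>

#include <rte_common.h>
#include <rte_eventdev.h>
#include <rte_prefetch.h>

struct my_flow_state; /* hypothetical application-level per-flow state */

/* hypothetical lookup from event flow_id to application state */
struct my_flow_state *my_flow_lookup(uint32_t flow_id);

/* Match callback: accept events from one particular queue, and start
 * preloading the per-flow state the process callback will touch.
 */
static bool
my_match(const struct rte_event *event, void *cb_data)
{
        uint8_t my_queue_id = *(uint8_t *)cb_data;

        if (event->queue_id != my_queue_id)
                return false;

        rte_prefetch0(my_flow_lookup(event->flow_id));

        return true;
}

/* Process callback: receives only events accepted by my_match(), already
 * grouped ("clustered") into a single burst, which improves temporal
 * locality when walking the per-flow state.
 */
static void
my_process(uint8_t event_dev_id, uint8_t event_port_id,
           struct rte_event *events, uint16_t num, void *cb_data)
{
        uint16_t i;

        RTE_SET_USED(event_dev_id);
        RTE_SET_USED(event_port_id);
        RTE_SET_USED(cb_data);

        for (i = 0; i < num; i++) {
                struct my_flow_state *state =
                        my_flow_lookup(events[i].flow_id);

                /* ...application-level processing... */
                RTE_SET_USED(state);
        }
}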

This reshuffling ("clustering") of events is the only thing I think could be offloaded to hardware. The event device is already free to reshuffle events as long as it conforms to whatever ordering guarantees the eventdev scheduling types in question require, but the event dispatcher relaxes those further and gives the platform additional hints about which events are actually related.

Will it completely lose ordering, since the next rte_event_enqueue_burst() will
release the context?

It is the dequeue operation that will release the context (provided "implicit release" is not disabled). See the documentation you quote below.
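
(As an aside, disabling implicit release is a per-port configuration option. A rough sketch, assuming the standard eventdev RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL flag and placeholder ids:)

#include <rte_eventdev.h>

/* Sketch: configure an event port with implicit release disabled, so that
 * dequeue no longer releases held contexts; the application then has to
 * release (or forward) every dequeued event itself.
 */
static int
setup_port_no_impl_release(uint8_t dev_id, uint8_t port_id)
{
        struct rte_event_port_conf conf;
        int rc;

        rc = rte_event_port_default_conf_get(dev_id, port_id, &conf);
        if (rc != 0)
                return rc;

        conf.event_port_cfg |= RTE_EVENT_PORT_CFG_DISABLE_IMPL_REL;

        return rte_event_port_setup(dev_id, port_id, &conf);
}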

(Total) ordering is guaranteed between dequeue bursts.
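
For example, in a plain worker loop (a sketch with placeholder ids, queue id and burst size; enqueue back-pressure handling omitted for brevity):

#include <rte_eventdev.h>

#define BATCH_SIZE 32
#define NEXT_QUEUE_ID 1 /* hypothetical next pipeline stage */

/* Each rte_event_dequeue_burst() call implicitly releases the ordered
 * context held since the previous dequeue, and the events forwarded below
 * are put back into their original per-flow order by the event device.
 */
static void
worker_loop(uint8_t dev_id, uint8_t port_id)
{
        struct rte_event events[BATCH_SIZE];

        for (;;) {
                uint16_t i;
                uint16_t n = rte_event_dequeue_burst(dev_id, port_id, events,
                                                     BATCH_SIZE, 0);

                for (i = 0; i < n; i++) {
                        /* ...application-level processing of events[i]... */
                        events[i].op = RTE_EVENT_OP_FORWARD;
                        events[i].queue_id = NEXT_QUEUE_ID;
                }

                if (n > 0)
                        rte_event_enqueue_burst(dev_id, port_id, events, n);
        }
}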


Definition of RTE_SCHED_TYPE_ORDERED

#define RTE_SCHED_TYPE_ORDERED          0
/**< Ordered scheduling
  *
  * Events from an ordered flow of an event queue can be scheduled to multiple
  * ports for concurrent processing while maintaining the original event order.
  * This scheme enables the user to achieve high single flow throughput by
  * avoiding SW synchronization for ordering between ports which bound to cores.
  *
  * The source flow ordering from an event queue is maintained when events are
  * enqueued to their destination queue within the same ordered flow context.
  * An event port holds the context until application call
  * rte_event_dequeue_burst() from the same port, which implicitly releases
  * the context.
  * User may allow the scheduler to release the context earlier than that
  * by invoking rte_event_enqueue_burst() with RTE_EVENT_OP_RELEASE operation.
  *
  * Events from the source queue appear in their original order when dequeued
  * from a destination queue.
  * Event ordering is based on the received event(s), but also other
  * (newly allocated or stored) events are ordered when enqueued within the same
  * ordered context. Events not enqueued (e.g. released or stored) within the
  * context are  considered missing from reordering and are skipped at this time
  * (but can be ordered again within another context).
  *
  * @see rte_event_queue_setup(), rte_event_dequeue_burst(), RTE_EVENT_OP_RELEASE
  */
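
For reference, the early release mentioned above boils down to enqueueing a release operation from the port holding the context (a sketch; ids are placeholders):

#include <rte_eventdev.h>

/* Sketch: explicitly release the scheduling context held by this port,
 * without waiting for the next dequeue to do it implicitly.
 */
static void
release_context_early(uint8_t dev_id, uint8_t port_id)
{
        struct rte_event ev = {
                .op = RTE_EVENT_OP_RELEASE,
        };

        rte_event_enqueue_burst(dev_id, port_id, &ev, 1);
}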
