@@ -658,9 +768,21 @@ static int do_schedule(odp_queue_t *out_queue, odp_event_t out_ev[],
                       ret = copy_events(out_ev, max_num);


This update doesn't modify the preceding code:

ordered = sched_cb_queue_is_ordered(qi);

/* Do not cache ordered events locally to improve
 * parallelism. Ordered context can only be released
 * when the local cache is empty. */
if (ordered && max_num < MAX_DEQ)
        max_deq = max_num;

num = sched_cb_queue_deq_multi(qi, sched_local.ev_stash,
                               max_deq);

which seems incorrect, since this code appears to do the opposite of
what the comment claims: it assigns multiple consecutive ordered events
to the same thread, meaning they cannot be processed in parallel.

In my opinion it is the application's choice how many events it wants to
schedule, and the implementation should not force single-event operation for
ordered queues. If the application wants maximum parallelism it can always
request a single event at a time.

This if clause was added to make sure that ordered events aren't saved in the
thread-local scheduler cache, which improves parallelism. It also enables
implementing the ordered locks using atomic counters, which is done in the
following patch.
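To make the reasoning concrete, here is a minimal standalone sketch (not the
odp-linux code; the MAX_DEQ value and the names local_sched, num_stashed and
deq_limit are made up for illustration) of why events left in the per-thread
stash delay releasing the ordered context, and how capping max_deq at the
application's max_num keeps the stash empty for ordered queues:

#include <stddef.h>

#define MAX_DEQ 4               /* assumed size of the per-thread event stash */

typedef void *event_t;          /* stand-in for odp_event_t */

/* Hypothetical per-thread scheduler state, for illustration only. */
struct local_sched {
        event_t ev_stash[MAX_DEQ];
        int num_stashed;        /* events dequeued but not yet given to the app */
        int ordered_ctx_held;   /* nonzero while an ordered context is held */
};

/* The ordered context can be released only when the local stash is empty;
 * any event cached here keeps the context held and serializes the ordered
 * queue behind this thread. */
static void maybe_release_ordered(struct local_sched *ls)
{
        if (ls->ordered_ctx_held && ls->num_stashed == 0)
                ls->ordered_ctx_held = 0;   /* release ordering resources */
}

/* Limiting the dequeue count to what the application asked for means an
 * ordered dequeue never leaves events behind in the stash: everything
 * dequeued is returned immediately, so the context can be released. */
static int deq_limit(int ordered, int max_num)
{
        int max_deq = MAX_DEQ;

        if (ordered && max_num < MAX_DEQ)
                max_deq = max_num;

        return max_deq;
}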



@@ -795,6 +925,15 @@ static void order_unlock(void)

static void schedule_order_lock(unsigned lock_index ODP_UNUSED)
{
+       queue_entry_t *queue;
+
+       queue = sched_local.ordered.src_queue;
+       if (!queue || lock_index >= queue->s.param.sched.lock_count) {
+               ODP_ERR("Invalid ordered lock usage\n");
+               return;
+       }
+

The previous versions of schedule_order_lock() and
schedule_order_unlock() had ODP_ASSERT() calls to guard against stale
lock/unlock requests. These should be retained, as they are useful for
debugging and ODP_ASSERT() compiles away in non-debug builds.
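
For reference, a rough sketch of keeping both checks in schedule_order_lock()
(it reuses the internal names visible in the hunk above and assumes the
single-argument ODP_ASSERT(cond) form; not compilable on its own, and the
actual patch may differ):

static void schedule_order_lock(unsigned lock_index)
{
        queue_entry_t *queue = sched_local.ordered.src_queue;

        /* Debug-build guard against stale lock requests; ODP_ASSERT()
         * compiles away when ODP_DEBUG is disabled. */
        ODP_ASSERT(queue != NULL &&
                   lock_index < queue->s.param.sched.lock_count);

        /* Runtime check kept for non-debug builds, as in the patch. */
        if (!queue || lock_index >= queue->s.param.sched.lock_count) {
                ODP_ERR("Invalid ordered lock usage\n");
                return;
        }

        /* ... acquire the ordered lock as before ... */
}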

OK, will change this.

-Matias
