Re: [lng-odp] Sequence requirements for odp_schedule_order_lock()

2017-04-06 Thread Radosław Biernacki
Hi Bill,

Thank you for your reply, and sorry for the length of mine.

IMHO the description you gave could be copied to
doc/users-guide/users-guide.adoc, as there is not much information on how this
should work across all implementations.
I don't fully understand the following sentence: "since each individual index
follows the same order, which is fixed for both Thread A and B".

Could you please describe what should happen in the example below?
Each line is marked with the time of the event:
T1: WorkerA calls odp_schedule() -> gets packet 1 from queueA
T2: WorkerB calls odp_schedule() -> gets packet 2 from queueA (which arrived
on the interface after packet 1)

T3: WorkerB calls odp_schedule_order_lock(1)
T3: WorkerA calls odp_schedule_order_lock(0)

Then WorkerA calls odp_schedule_order_unlock(0) followed by
odp_schedule_order_lock(1) (and unlock(1)),
while WorkerB only unlocks lock index 1.

Will WorkerB be suspended until WorkerA unlocks order lock 1?
We are implementing this API now and our platform is quite different from
linux-generic, so it would be best if you could specify the remaining part
of the sequence (as it would be a sequence requirement for us). A code
sketch of the scenario follows.
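
To make the scenario concrete, a minimal C sketch of the two code paths
described above (queue setup is omitted and all names are illustrative; both
workers dequeue from the same ordered queueA, so WorkerA holds the earlier
sequence):

#include <odp_api.h>

/* WorkerA: receives packet 1 at T1 (earlier sequence) */
void worker_a(void)
{
	odp_event_t ev = odp_schedule(NULL, ODP_SCHED_WAIT);   /* T1 */

	odp_schedule_order_lock(0);                             /* T3 */
	/* ... section guarded by ordered lock index 0 ... */
	odp_schedule_order_unlock(0);

	odp_schedule_order_lock(1);
	/* ... section guarded by ordered lock index 1 ... */
	odp_schedule_order_unlock(1);

	odp_event_free(ev);
}

/* WorkerB: receives packet 2 at T2 (later sequence) */
void worker_b(void)
{
	odp_event_t ev = odp_schedule(NULL, ODP_SCHED_WAIT);   /* T2 */

	/* T3: is this call suspended until WorkerA has released index 1? */
	odp_schedule_order_lock(1);
	/* ... section guarded by ordered lock index 1 ... */
	odp_schedule_order_unlock(1);

	odp_event_free(ev);
}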

Topic 2: I guess that nested order locks are forbidden? If that's the case,
then we should also mention that in the docs.

2017-03-29 19:38 GMT+02:00 Bill Fischofer :

> On Wed, Mar 29, 2017 at 7:18 AM, Radosław Biernacki 
> wrote:
> > Hi all,
> >
> > The documentation for odp_schedule_order_lock(unsigned lock_index) does
> > not specify the sequence in which the lock_index needs to be given.
>
> That's because there is no such required sequence. Each lock index is
> a distinct ordered synchronization point and the only requirement is
> that each thread may only use each index at most once per ordered
> context. Threads may skip any or all indexes as there is no obligation
> for threads to use ordered locks when they are present in the context,
> and the index order doesn't matter to ODP.
>
> When threads enter an ordered context they hold a sequence that is
> determined by the source ordered queue. For a thread to be able to
> acquire ordered lock index i, all that is required is that all threads
> holding lower sequences have either acquired and released that index
> or else definitively passed on using it by exiting the ordered
> context. So if thread A tries to acquire index 1 first and then index
> 2 while Thread B tries to acquire index 2 first and then index 1 it
> doesn't matter since each individual index follows the same order,
> which is fixed for both Thread A and B.
>
> Note that there may be a loss of parallelism if indexes are permuted,
> however there is no possibility of deadlock. Best parallelism will be
> achieved when all threads use indexes in the same sequence, but ODP
> doesn't care what sequence of indexes the application chooses to use.
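
A minimal sketch of the "same index sequence in every thread" pattern,
assuming the source queue was created with ODP_SCHED_SYNC_ORDERED and
sched.lock_count = 2 (the stage contents are illustrative):

#include <odp_api.h>

/* Every worker runs the same loop, so all threads acquire the ordered
 * lock indexes in the same sequence (0, then 1), which gives the best
 * parallelism. */
void worker_loop(void)
{
	while (1) {
		odp_event_t ev = odp_schedule(NULL, ODP_SCHED_WAIT);

		odp_schedule_order_lock(0);
		/* stage 1: e.g. update shared state in packet order */
		odp_schedule_order_unlock(0);

		odp_schedule_order_lock(1);
		/* stage 2: e.g. transmit in packet order */
		odp_schedule_order_unlock(1);

		odp_event_free(ev);
	}
}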
>
> >
> > Shouldn't the following statements be included in the description of this
> > function?
> > 1) All code paths calling this function (in the same synchronization
> > context) need to use the same lock_index sequence, e.g. 1, 3, 2, 4, or the
> > results are undefined (e.g. the sequence of events will not be preserved)
> > if this rule is not followed.
> > 2) The doc should emphasize a bit more what the synchronization context
> > is bound to (the source queue). For example, it should say that the
> > lock_index sequence can be different for different source queues
> > (synchronization contexts).
> > 3) It is possible to skip some lock_index in the sequence, but skipped
> > lock_indexes cannot be used outside of the sequence (since this would alter
> > the sequence, which is a violation of rule 1).
>


[lng-odp] Sequence requirements for odp_schedule_order_lock()

2017-03-29 Thread Radosław Biernacki
Hi all,

The documentation for odp_schedule_order_lock(unsigned lock_index) does not
specify the sequence in which the lock_index needs to be given.

Shouldn't the following statements be included in the description of this
function?
1) All code paths calling this function (in the same synchronization
context) need to use the same lock_index sequence, e.g. 1, 3, 2, 4, or the
results are undefined (e.g. the sequence of events will not be preserved)
if this rule is not followed.
2) The doc should emphasize a bit more what the synchronization context
is bound to (the source queue). For example, it should say that the
lock_index sequence can be different for different source queues
(synchronization contexts).
3) It is possible to skip some lock_index in the sequence, but skipped
lock_indexes cannot be used outside of the sequence (since this would alter
the sequence, which is a violation of rule 1); see the sketch below.
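
A sketch of what rule 3 could mean in practice, assuming that "skipping"
simply means not touching the index on a given code path (names are
illustrative):

#include <odp_api.h>

/* Both paths keep the same relative order (index 0 before index 1); the
 * fast path skips index 0 entirely and never uses it later, i.e. never
 * out of sequence. */
void process_in_ordered_context(int slow_path)
{
	if (slow_path) {
		odp_schedule_order_lock(0);
		/* ordered bookkeeping needed only on the slow path */
		odp_schedule_order_unlock(0);
	}

	odp_schedule_order_lock(1);
	/* ordered transmit, done on both paths */
	odp_schedule_order_unlock(1);
}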


Re: [lng-odp] Scheduling packets from control threads

2017-01-20 Thread Radosław Biernacki
Hi,

I would propose adding a clearer description of when the ODP APIs for event
and packet processing may be called.
For example, it is currently unclear whether CONTROL threads can operate on
queues or the scheduler.

 * Control threads do not participate the main packet flow through the
 * system, but e.g. control or monitor the worker threads, or handle
 * exceptions. These threads may perform general purpose processing,
 * use system calls, share the CPU with other threads and be interrupt
 * driven.
+ Those threads can use all ODP API functions, including pktio, queues and
+ the scheduler.


2017-01-20 9:26 GMT+01:00 Stanislaw Kardach :

> On 01/19/2017 06:06 PM, Bill Fischofer wrote:
>
>> On Thu, Jan 19, 2017 at 10:03 AM, Stanislaw Kardach 
>> wrote:
>>
>>>
>>>
>>> Best Regards,
>>> Stanislaw Kardach
>>>
>>>
>>> On 01/19/2017 04:57 PM, Bill Fischofer wrote:
>>>

 On Thu, Jan 19, 2017 at 7:17 AM, Stanislaw Kardach 
 wrote:

>
> Hi all,
>
> While going through thread and scheduler APIs I've stumbled on one
> uncertainty in ODP API that I could not find straight solution to.
>
> Does ODP allow scheduling packets from an ODP_THREAD_CONTROL thread?
>


 When a thread calls odp_schedule() the only requirement is that it be
 a member of a schedule group that contains queues of type
 ODP_QUEUE_TYPE_SCHED.  Since threads by default are in group
 ODP_SCHED_GROUP_ALL and that's also the default scheduler group for
 queues, this is normally not a consideration.
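
A minimal sketch of that point, assuming the defaults (threads and queues
both in ODP_SCHED_GROUP_ALL), so a control thread can poll the scheduler
without any extra setup:

#include <odp_api.h>

/* A control thread polling the scheduler; thread type (WORKER vs CONTROL)
 * does not matter to the API, only schedule group membership does. */
void control_poll(void)
{
	odp_queue_t from;
	odp_event_t ev = odp_schedule(&from, ODP_SCHED_NO_WAIT);

	if (ev != ODP_EVENT_INVALID) {
		/* e.g. handle an exception or monitoring event */
		odp_event_free(ev);
	}
}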


> If yes then what would be the difference between ODP_THREAD_CONTROL and
> ODP_THREAD_WORKER threads beside isolating cores for worker threads? The
> API suggests this approach as both control and worker threads are treated
> the same way in odp_thrmask_* calls. Moreover:
> a. schedule groups take odp_thrmask_t (no comment on whether it has
>    to only contain worker threads)
> b. There is a schedule group ODP_SCHED_GROUP_ALL which would imply that
>    the user can create a scheduler queue that control threads can use.
>


 The provision of WORKER/CONTROL threads is for application convenience
 in organizing itself. But there is no semantic difference between the
 two as far as the ODP API spec is concerned.


> On the other hand I can find the following in the ODP_THREAD_CONTROL
> description: "Control threads do not participate the main packet flow
> through the system". That looks to me like an implication that control
> threads should not do any packet processing. However if that's the case
> then ODP_THREAD_COUNT_MAX does not differentiate between worker and control
> threads and similarly odp_thrmask_t doesn't (and by extension schedule
> groups).
>


 The expectation here is that worker threads will want to run on
 isolated cores for best performance while control threads can share
 cores and be timesliced without performance impact. That's the main
 reason for having this division. Threads that do performance-critical
 work would normally be designated worker threads while those that do
 less performance-critical work would be designated as control threads.
 Again, this is a convenience feature that applications can use to
 manage core/thread assignments but ODP imposes no requirements on
 applications in this area.
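
A sketch of that split using the standard ODP cpumask helpers (the core
counts are illustrative):

#include <odp_api.h>

void plan_cores(void)
{
	odp_cpumask_t worker_mask, control_mask;

	/* 0 = "as many worker cores as available"; one shared core for control */
	int num_workers = odp_cpumask_default_worker(&worker_mask, 0);
	int num_control = odp_cpumask_default_control(&control_mask, 1);

	/* launch num_workers threads of type ODP_THREAD_WORKER pinned to
	 * worker_mask and num_control ODP_THREAD_CONTROL threads on
	 * control_mask (thread creation omitted) */
	(void)num_workers;
	(void)num_control;
}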


> To put this discussion in a concrete context, on the platform which I'm
> working on, each thread that wants to interact with the scheduler needs to
> do it via a special hardware handle, of which I have a limited number.
> For me it makes sense to reserve such handles only for threads which are
> going to do the traffic processing (hence worker threads) and leave
> control threads unlimited. In summary, let the application spawn as many
> control threads as it wants but limit worker threads by the number of
> handles that I have to spare.
>


 That's certainly one possibility. On such platforms the odp_schedule()
 API might take the thread type into consideration in determining how
 to process schedule requests, however in the case you outline a better
 and more portable way might be to have a hardware-queue schedule group
 and assign queues that are holding performance-critical events to that
 schedule group. The point is that both applications and ODP
 implementations have a lot of flexibility in how they operate within
 the ODP API spec.
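
A sketch of the hardware-queue schedule group idea (group and queue names
are illustrative; error handling omitted):

#include <odp_api.h>

/* Create a dedicated schedule group for the threads that own hardware
 * scheduler handles, and put performance-critical queues into it. */
odp_queue_t create_fastpath_queue(const odp_thrmask_t *hw_threads)
{
	odp_schedule_group_t hw_grp;
	odp_queue_param_t qparam;

	hw_grp = odp_schedule_group_create("hw_queues", hw_threads);

	odp_queue_param_init(&qparam);
	qparam.type = ODP_QUEUE_TYPE_SCHED;
	qparam.sched.group = hw_grp;
	qparam.sched.sync = ODP_SCHED_SYNC_ORDERED;

	return odp_queue_create("fastpath", &qparam);
}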

>>> Do I understand you correctly that this hardware-queue schedule group
>>> would be the one to utilize the "hardware handles" (and hence the
>>> hardware scheduler), whereas other schedule groups rely on a software
>>> scheduler?
>>>
>>
>> That would be one possible way to organize an ODP implementation. You
>> could 

Re: [lng-odp] [RFC 1/2] api: classification: Add queue group to classification

2016-09-14 Thread Radosław Biernacki
Hi

I cannot find this patch in either next or api-next, so I assume that it has
not been accepted yet.


2016-04-25 14:31 GMT+02:00 Balasubramanian Manoharan <
bala.manoha...@linaro.org>:

> Adds queue group to classification
>
> Signed-off-by: Balasubramanian Manoharan 
> ---
>  include/odp/api/spec/classification.h | 19 ++-
>  1 file changed, 18 insertions(+), 1 deletion(-)
>
> diff --git a/include/odp/api/spec/classification.h b/include/odp/api/spec/classification.h
> index 6eca9ab..cf56852 100644
> --- a/include/odp/api/spec/classification.h
> +++ b/include/odp/api/spec/classification.h
> @@ -126,6 +126,12 @@ typedef struct odp_cls_capability_t {
>
> /** A Boolean to denote support of PMR range */
> odp_bool_t pmr_range_supported;
> +
> +   /** A Boolean to denote support of queue group */
> +   odp_bool_t queue_group_supported;
> +
> +   /** A Boolean to denote support of queue */
> +   odp_bool_t queue_supported;
>  } odp_cls_capability_t;
>
>  /**
> @@ -162,7 +168,18 @@ typedef enum {
>   * Used to communicate class of service creation options
>   */
>  typedef struct odp_cls_cos_param {
> -   odp_queue_t queue;  /**< Queue associated with CoS */
> +   /** If True, odp_queue_t is linked with CoS,
> +* if False odp_queue_group_t is linked with CoS.
> +*/
> +   odp_bool_t enable_queue;
>

Since this flag selects whether the queue or the queue_group field of the
union will be used, it might be better to use an enum instead of a bool.
A bool is fine for enable/disable flags but can be confusing when used as a
selector.
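
For illustration, a sketch of what the enum-based selector could look like
(all names below are only suggestions, not part of the patch):

typedef enum odp_cls_cos_dest_t {
	ODP_COS_DEST_QUEUE,        /* the 'queue' member of the union is valid */
	ODP_COS_DEST_QUEUE_GROUP   /* the 'queue_group' member is valid */
} odp_cls_cos_dest_t;

typedef struct odp_cls_cos_param {
	odp_cls_cos_dest_t dest_type;      /* selects the union member below */
	union {
		odp_queue_t queue;             /* Queue associated with CoS */
		odp_queue_group_t queue_group; /* Queue group associated with CoS */
	};
	odp_pool_t pool;                   /* Pool associated with CoS */
	odp_cls_drop_t drop_policy;        /* Drop policy associated with CoS */
} odp_cls_cos_param_t;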


> +
> +   typedef union {
> +   /** Queue associated with CoS */
> +   odp_queue_t queue;
> +
> +   /** Queue Group associated with CoS */
> +   odp_queue_group_t queue_group;
> +   };
> odp_pool_t pool;/**< Pool associated with CoS */
> odp_cls_drop_t drop_policy; /**< Drop policy associated with CoS */
>  } odp_cls_cos_param_t;
> --
> 1.9.1
>