Hi,

I would propose adding a clearer description of which ODP API calls are
permitted for event and packet processing from each thread type. For
example, it is currently unclear whether CONTROL threads can operate on
queues or the scheduler.

 51          * Control threads do not participate the main packet flow
 52          * through the system, but e.g. control or monitor the worker
 53          * threads, or handle exceptions. These threads may perform
 54          * general purpose processing, use system calls, share the CPU
 55          * with other threads and be interrupt driven.
+ Those threads can use all ODP API functions, including pktio, queues and
+ the scheduler.
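To make the proposed wording concrete, a control thread would then be free
to run the same dispatch loop a worker runs. A rough sketch against the ODP
API (not compiled here; error handling omitted, and thread setup via
odp_init_local() with ODP_THREAD_CONTROL assumed to have happened already):

```c
#include <odp_api.h>

/* Sketch: a CONTROL thread draining scheduled events exactly as a
 * worker would. Assumes this thread was initialized with
 * odp_init_local(instance, ODP_THREAD_CONTROL) and is a member of a
 * schedule group (e.g. the default ODP_SCHED_GROUP_ALL). */
static void control_dispatch_loop(void)
{
	for (;;) {
		odp_event_t ev = odp_schedule(NULL, ODP_SCHED_WAIT);

		if (odp_event_type(ev) == ODP_EVENT_PACKET) {
			odp_packet_t pkt = odp_packet_from_event(ev);
			/* e.g. inspect exception packets, then drop */
			odp_packet_free(pkt);
		} else {
			odp_event_free(ev);
		}
	}
}
```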


2017-01-20 9:26 GMT+01:00 Stanislaw Kardach <k...@semihalf.com>:

> On 01/19/2017 06:06 PM, Bill Fischofer wrote:
>
>> On Thu, Jan 19, 2017 at 10:03 AM, Stanislaw Kardach <k...@semihalf.com>
>> wrote:
>>
>>>
>>>
>>>
>>>
>>> On 01/19/2017 04:57 PM, Bill Fischofer wrote:
>>>
>>>>
>>>> On Thu, Jan 19, 2017 at 7:17 AM, Stanislaw Kardach <k...@semihalf.com>
>>>> wrote:
>>>>
>>>>>
>>>>> Hi all,
>>>>>
>>>>> While going through the thread and scheduler APIs I've stumbled on an
>>>>> uncertainty in the ODP API to which I could not find a straightforward
>>>>> answer.
>>>>>
>>>>> Does ODP allow scheduling packets from an ODP_THREAD_CONTROL thread?
>>>>>
>>>>
>>>>
>>>> When a thread calls odp_schedule() the only requirement is that it be
>>>> a member of a schedule group that contains queues of type
>>>> ODP_QUEUE_TYPE_SCHED.  Since threads by default are in group
>>>> ODP_SCHED_GROUP_ALL and that's also the default scheduler group for
>>>> queues, this is normally not a consideration.
>>>>
>>>>
>>>>> If yes, then what would be the difference between ODP_THREAD_CONTROL
>>>>> and ODP_THREAD_WORKER threads besides isolating cores for worker
>>>>> threads? The API suggests this approach, as both control and worker
>>>>> threads are treated the same way in the odp_thrmask_* calls. Moreover:
>>>>> a. schedule groups take odp_thrmask_t (no comment on whether it has
>>>>>    to only contain worker threads)
>>>>> b. There is a schedule group ODP_SCHED_GROUP_ALL which would imply that
>>>>>    user can create a scheduler queue that control threads can use.
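Indeed, nothing in the mask API encodes the thread type. As an illustration
only, here is a minimal self-contained mock of the odp_thrmask_t semantics
(hypothetical code, not the real ODP types; the real API has the same shape
in odp_thrmask_zero()/odp_thrmask_set()/odp_thrmask_isset()), showing that
group membership is just a set of thread ids:

```c
#include <stdint.h>

/* Hypothetical mock of odp_thrmask_t: a plain bitmask of thread ids,
 * with no notion of WORKER vs CONTROL. */
typedef struct { uint64_t bits; } mock_thrmask_t;

static void mock_thrmask_zero(mock_thrmask_t *m)       { m->bits = 0; }
static void mock_thrmask_set(mock_thrmask_t *m, int t) { m->bits |= 1ULL << t; }
static int  mock_thrmask_isset(const mock_thrmask_t *m, int t)
{
	return (int)((m->bits >> t) & 1ULL);
}

/* A schedule group reduced to its thread mask: membership says nothing
 * about the member's thread type. */
static int mock_group_member(const mock_thrmask_t *group, int thr)
{
	return mock_thrmask_isset(group, thr);
}
```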
>>>>>
>>>>
>>>>
>>>> The provision of WORKER/CONTROL threads is for application convenience
>>>> in organizing itself. But there is no semantic difference between the
>>>> two as far as the ODP API spec is concerned.
>>>>
>>>>
>>>>> On the other hand, I can find the following in the ODP_THREAD_CONTROL
>>>>> description: "Control threads do not participate the main packet flow
>>>>> through the system". That looks to me like an implication that control
>>>>> threads should not do any packet processing. However, if that is the
>>>>> case, then ODP_THREAD_COUNT_MAX does not differentiate between worker
>>>>> and control threads, and neither does odp_thrmask_t (nor, by
>>>>> extension, schedule groups).
>>>>>
>>>>
>>>>
>>>> The expectation here is that worker threads will want to run on
>>>> isolated cores for best performance, while control threads can share
>>>> cores and be timesliced without performance impact. That's the main
>>>> reason for having this division. Threads that do performance-critical
>>>> work would normally be designated worker threads, while those that do
>>>> less performance-critical work would be designated control threads.
>>>> Again, this is a convenience feature that applications can use to
>>>> manage core/thread assignments, but ODP imposes no requirements on
>>>> applications in this area.
>>>>
>>>>
>>>>> To put this discussion in a concrete context: on the platform I'm
>>>>> working on, each thread that wants to interact with the scheduler
>>>>> needs to do it via a special hardware handle, of which I have a
>>>>> limited number. For me it makes sense to reserve such handles only
>>>>> for the threads which are going to do the traffic processing (hence
>>>>> worker threads) and leave control threads unlimited. In summary: let
>>>>> the application spawn as many control threads as it wants, but limit
>>>>> worker threads by the number of handles I have to spare.
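For what it's worth, that reservation policy can be sketched in a few lines
(a hypothetical mock, not real platform or ODP code; the handle limit and
all names here are made up for illustration):

```c
#define MOCK_NUM_HW_HANDLES 4 /* assumed HW scheduler handle limit */

typedef enum { MOCK_THREAD_WORKER, MOCK_THREAD_CONTROL } mock_thread_type_t;

static int mock_handles_used; /* handles already given out */

/* Worker threads compete for the limited HW handles; control threads
 * never get one (they would use a SW fallback path, or none at all). */
static int mock_acquire_sched_handle(mock_thread_type_t type)
{
	if (type != MOCK_THREAD_WORKER)
		return -1; /* control threads: unlimited, but no HW handle */
	if (mock_handles_used >= MOCK_NUM_HW_HANDLES)
		return -1; /* pool exhausted: worker count is capped */
	return mock_handles_used++; /* handle index 0..3 */
}
```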
>>>>>
>>>>
>>>>
>>>> That's certainly one possibility. On such platforms the odp_schedule()
>>>> API might take the thread type into consideration when determining how
>>>> to process schedule requests. However, in the case you outline, a
>>>> better and more portable way might be to have a hardware-queue
>>>> schedule group and assign the queues holding performance-critical
>>>> events to that group. The point is that both applications and ODP
>>>> implementations have a lot of flexibility in how they operate within
>>>> the ODP API spec.
>>>>
>>> Do I understand you correctly that this hardware-queue schedule group
>>> would be the one to utilize the "hardware handles" (and hence the
>>> hardware scheduler), whereas the other schedule groups rely on a
>>> software scheduler?
>>>
>>
>> That would be one possible way to organize an ODP implementation. You
>> could have a scheduling group that takes full advantage of HW
>> scheduling capabilities while others rely on SW scheduling. Or you
>> could share limited HW scheduling objects and use them as a sort of
>> cache to accelerate all queues, relying on the fact that those more
>> frequently used would tend to be those threads/queues doing heavy
>> packet processing. It's really up to the implementer as to how they
>> want to organize things.
>>
> Correct me if I'm wrong, but having both a HW and a SW scheduler with
> the current API would require moving packets between the SW and HW
> schedulers, which in general can be a pain to do while maintaining
> ordering at acceptable speeds?
>
> The second approach can be problematic for some platforms when the HW
> scheduling objects also hold ordering context which would make it a very
> complex task to make sure the order is maintained when another thread is
> trying to use the scheduling object.
>
> What I'm trying to point out is that it seems a bit confusing to let
> the ODP implementation know the type of a thread and act on it, and yet
> not allow it to specify any capabilities in that sense, i.e. the number
> of worker and control threads separately, and whether any of them can
> or cannot access the scheduler. A similar issue existed in DPDK, where
> it was unspecified whether threads that were not lcores could receive
> packets; it resulted in some PMD drivers not caring while others
> crashed (I experienced this while testing OVS-DPDK).
>
>
>
>>>
>>>>> --
>>>>> Best Regards,
>>>>> Stanislaw Kardach
>>>>>
>>>>
