Petri Savolainen(psavol) replied on github web page:

include/odp/api/spec/event.h
line 54
@@ -141,6 +142,27 @@ odp_event_type_t odp_event_types(odp_event_t event,
 int odp_event_type_multi(const odp_event_t event[], int num,
                         odp_event_type_t *type);
 
+/**
+ * Filter and convert packet events
+ *
+ * Checks the event type of all input events, converts all packet events and
+ * outputs packet handles. Returns the number of packet handles outputted.
+ * Outputs the remaining, non-packet event handles to the 'remain' array.
+ * Handles are outputted to both arrays in the same order as they are stored
+ * in the 'event' array. Both output arrays must fit 'num' elements.
+ *
+ * @param      event    Array of event handles
+ * @param[out] packet   Packet handle array for output
+ * @param[out] remain   Event handle array for output of remaining, non-packet
+ *                      events
+ * @param      num      Number of events (> 0)
+ *
+ * @return Number of packets outputted (0 ... num)
+ */
+int odp_event_filter_packet(const odp_event_t event[],
+                           odp_packet_t packet[],
+                           odp_event_t remain[], int num);
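
For context, a minimal usage sketch of the proposed API (not part of the patch);
MAX_BURST and the helpers process_packets() / handle_non_packet_event() are
illustrative application-side assumptions:

#include <odp_api.h>

#define MAX_BURST 32  /* illustrative burst size */

/* Application helpers assumed for illustration, not ODP APIs */
void process_packets(odp_packet_t pkt[], int num);
void handle_non_packet_event(odp_event_t ev);

static void poll_once(void)
{
        odp_event_t  ev_tbl[MAX_BURST];
        odp_event_t  remain[MAX_BURST];
        odp_packet_t pkt_tbl[MAX_BURST];
        int num, pkts, i;

        num = odp_schedule_multi(NULL, ODP_SCHED_NO_WAIT, ev_tbl, MAX_BURST);
        if (num <= 0)
                return;

        /* Packets end up in pkt_tbl[], everything else in remain[],
         * both in the original event order */
        pkts = odp_event_filter_packet(ev_tbl, pkt_tbl, remain, num);

        process_packets(pkt_tbl, pkts);

        for (i = 0; i < num - pkts; i++)
                handle_non_packet_event(remain[i]);
}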


Comment:
Schedulers do not work that way. You cannot peek into the head of a schedule
queue, ignore non-packets and select only packets. The scheduler returns N
events.

Also, storing non-packets locally (inside scheduler SW) would not help, since
the application could easily starve non-packets by polling them too seldom.

> Petri Savolainen(psavol) wrote:
> The same answer. Improved implementation efficiency and cleaner application 
> code.


>> Petri Savolainen(psavol) wrote:
>> pkts = odp_schedule_multi(NULL, ODP_SCHED_NO_WAIT, ev_tbl, MAX_PKT_BURST);
>> 
>> if (pkts <= 0)
>>      continue;
>> 
>> for (i = 0; i < pkts; i++)
>>      pkt_tbl[i] = odp_packet_from_event(ev_tbl[i]);
>> 
>> VS.
>> 
>> odp_packet_from_event_multi(pkt_tbl, ev_tbl, pkts);
>> 
>> 1) A single call vs. many calls makes a difference when not inlined (ABI
>> compat).
>> 2) This is not only a cast, it's also a copy of the handle. The
>> implementation may be vectorized, the loop unrolled, etc. (see the sketch
>> after this list).
>> 3) Depending on the implementation, the conversion may add things on top of
>> the handle copy: extra checks, post-processing of the event, or other
>> HW/implementation specific tricks. That would open more optimization
>> opportunities in addition to the plain load+store.
>> 4) Application code is cleaner.
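
For illustration, a minimal sketch of how an implementation could provide the
multi conversion behind the ABI boundary; the body is an assumption, not the
actual code of any ODP platform, but the simple contiguous loop is what gives
the compiler room to vectorize or unroll:

#include <odp_api.h>

/* Sketch only: one possible non-inlined implementation. A real platform may
 * add handle checks or HW specific post-processing here, as noted above. */
void odp_packet_from_event_multi(odp_packet_t pkt[], const odp_event_t ev[],
                                 int num)
{
        int i;

        /* Plain handle copy over contiguous arrays: easy to vectorize/unroll */
        for (i = 0; i < num; i++)
                pkt[i] = odp_packet_from_event(ev[i]);
}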


>>> Petri Savolainen(psavol) wrote:
>>> The same answer. Improved implementation efficiency.


>>>> Petri Savolainen(psavol) wrote:
>>>> " Handles are outputted to both arrays in the same order those are stored 
>>>> in 'event' array. "
>>>> 
>>>> Events are not reordered. Packets are copied into packet[] and non-packets 
>>>> into remain[]. Both arrays maintain p and np order of the original array. 
>>>> Also original array is not modified.
>>>> 
>>>> packet[] = p1, p2, p3
>>>> remain[] = np1, np2
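
As a concrete illustration of the ordering semantics (sketch only; p1..p3 and
np1, np2 are placeholder handles from the example discussed below):

/* Input (left unmodified): event[] = { np1, p1, np2, p2, p3 }, num = 5 */
pkts = odp_event_filter_packet(event, packet, remain, 5);

/* Result: pkts == 3
 * packet[] = { p1, p2, p3 }   (packet order of event[] preserved)
 * remain[] = { np1, np2 }     (non-packet order of event[] preserved)
 * number of remaining events  = num - pkts = 2
 */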


>>>>> muvarov wrote:
>>>>> Why not odp_schedule_type(ODP_PACKET, void *array)? I.e. if you need only
>>>>> packets, then schedule only packets. That has to be faster than getting
>>>>> all events and then filtering them.


>>>>>> muvarov wrote:
>>>>>> Is it a valid case when odp_schedule() returns mixed event types? I.e.
>>>>>> some packets, some timeouts, some packets from one pool and packets from
>>>>>> another pool.


>>>>>>> Petri Savolainen(psavol) wrote:
>>>>>>> The application knows the configuration, so it knows when all packets
>>>>>>> (e.g. from a certain queue/pkt input/etc.) are allocated from the same
>>>>>>> pool. When that's the case, the free implementation can be more
>>>>>>> efficient (than today) as it does not have to access/check/sort all
>>>>>>> packets of the burst. It can just free them all into the pool of the
>>>>>>> first packet.
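
To make the efficiency argument concrete, a sketch of the kind of fast path
this enables; odp_packet_free_multi() is the existing general multi free, while
the same-pool variant and the all_from_same_pool flag are hypothetical, named
here only to illustrate the point:

/* Sketch only: two alternative ways to free a burst of packets */
if (all_from_same_pool) {
        /* Known from application configuration: only the pool of the first
         * packet is needed, the whole burst can be returned in one operation */
        odp_packet_free_same_pool(pkt_tbl, pkts); /* hypothetical API */
} else {
        /* General case: the implementation may have to access/check/sort the
         * pool of every packet in the burst */
        odp_packet_free_multi(pkt_tbl, pkts);
}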


>>>>>>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>>>>>>> Doesn't this reorder events? As described, consider an array of events
>>>>>>>> that consists of packets (p) and non-packets (np).
>>>>>>>> 
>>>>>>>> With an input array of events np1, p1, np2, p2, p3, as described
>>>>>>>> `odp_event_filter_packet()` would return 3, the `packet[]` array would
>>>>>>>> contain p1, p2, p3, and the `remain[]` array would contain np1, np2.
>>>>>>>> Is that behavior what we want?


>>>>>>>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>>>>>>>> Even with the `odp_event_type_multi()` API semantics, it's still not 
>>>>>>>>> clear why this is needed. Given that `odp_packet_from_event()` is 
>>>>>>>>> likely just a cast, there doesn't seem to be a great deal to be gained
>>>>>>>>> by having a `multi` version of this which can't be easily inlined 
>>>>>>>>> away.


>>>>>>>>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>>>>>>>>> Again, since `odp_packet_to_event()` is likely to be just a cast that
>>>>>>>>>> can be inlined away, it's not clear why a `multi` version is needed.


>>>>>>>>>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>>>>>>>>>> Again, it's not clear why this is needed. 


>>>>>>>>>>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>>>>>>>>>>> Not clear why this is needed. How does an application determine 
>>>>>>>>>>>> this more efficiently than the implementation would?


https://github.com/Linaro/odp/pull/318#discussion_r154608403
updated_at 2017-12-04 10:23:18
