Petri Savolainen(psavol) replied on github web page:

include/odp/api/spec/packet.h
line 32
@@ -235,6 +246,18 @@ int odp_packet_reset(odp_packet_t pkt, uint32_t len);
  */
 odp_packet_t odp_packet_from_event(odp_event_t ev);
 
+/**
+ * Convert multiple packet events to packet handles
+ *
+ * All events must be of type ODP_EVENT_PACKET.
+ *
+ * @param[out] pkt  Packet handle array for output
+ * @param      ev   Array of event handles to convert
+ * @param      num  Number of packets and events
+ */
+void odp_packet_from_event_multi(odp_packet_t pkt[], const odp_event_t ev[],
+                                int num);


Comment:
pkts = odp_schedule_multi(NULL, ODP_SCHED_NO_WAIT, ev_tbl, MAX_PKT_BURST);

if (pkts <= 0)
        continue;

for (i = 0; i < pkts; i++)
        pkt_tbl[i] = odp_packet_from_event(ev_tbl[i]);

VS.

odp_packet_from_event_multi(pkt_tbl, ev_tbl, pkts);

1) A single call vs. many calls makes a difference when the function is not 
inlined (ABI compatibility).
2) This is not only a cast, it's also a copy of the handle. The 
implementation may be vectorized, the loop unrolled, etc. (see the sketch 
below).
3) Depending on the implementation, the conversion may include additional 
work on top of the handle copy: extra checks, post-processing of the event, 
or other HW/implementation-specific tricks. That opens up more optimization 
opportunity in addition to the load+store.
4) Application code is cleaner.
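
To illustrate, a generic (non-inlined) implementation could be as simple as
the loop below. The signature is the one from the diff above; the plain
handle-copy body is only a sketch of a baseline implementation, which a
HW-specific build is free to vectorize, unroll, or extend with the extra
per-event work mentioned in point 3.

void odp_packet_from_event_multi(odp_packet_t pkt[], const odp_event_t ev[],
                                 int num)
{
        int i;

        /* Plain handle copy; a target-specific implementation may
         * vectorize this loop or add extra checks/post-processing
         * per event. */
        for (i = 0; i < num; i++)
                pkt[i] = odp_packet_from_event(ev[i]);
}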


> Petri Savolainen(psavol) wrote:
> The same answer. Improved implementation efficiency.


>> Petri Savolainen(psavol) wrote:
>> " Handles are outputted to both arrays in the same order those are stored in 
>> 'event' array. "
>> 
>> Events are not reordered. Packets are copied into packet[] and non-packets 
>> into remain[]. Both arrays maintain p and np order of the original array. 
>> Also original array is not modified.
>> 
>> packet[] = p1, p2, p3
>> remain[] = np1, np2
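>> 
>> As a sketch of those semantics (the prototype below is an assumption for 
>> illustration, not the final API text):
>> 
>> int odp_event_filter_packet(const odp_event_t event[],
>>                             odp_packet_t packet[],
>>                             odp_event_t remain[], int num)
>> {
>>         int i, num_pkt = 0, num_rem = 0;
>> 
>>         /* Stable partition: packets keep their relative order in
>>          * packet[], non-packets keep theirs in remain[]. */
>>         for (i = 0; i < num; i++) {
>>                 if (odp_event_type(event[i]) == ODP_EVENT_PACKET)
>>                         packet[num_pkt++] = odp_packet_from_event(event[i]);
>>                 else
>>                         remain[num_rem++] = event[i];
>>         }
>> 
>>         return num_pkt;
>> }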


>>> muvarov wrote:
>>> Why not odp_schedule_type(ODP_PACKET, void *array)? I.e. if you need 
>>> only packets, then do scheduling only for packets. That has to be faster 
>>> than getting all events and then filtering them.


>>>> muvarov wrote:
>>>> Is it a valid case when odp_schedule() returns mixed event types? I.e. 
>>>> some packets, some timeouts, some packets from one pool and some from 
>>>> another pool.


>>>>> Petri Savolainen(psavol) wrote:
>>>>> The application knows the configuration, so it knows when all packets 
>>>>> (e.g. from a certain queue/pkt input/etc.) are allocated from the same 
>>>>> pool. When that's the case, the free implementation can be more 
>>>>> efficient (than today), as it does not have to access/check/sort all 
>>>>> packets of the burst. It can just free them all into the pool of the 
>>>>> first packet.
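>>>>> 
>>>>> A hypothetical fast path in such a free implementation (pool_of() and 
>>>>> push_to_pool() are made-up helper names, used only to illustrate the 
>>>>> idea):
>>>>> 
>>>>> /* The application guarantees that all packets of the burst come
>>>>>  * from one pool, so the pool is looked up only once. */
>>>>> pool = pool_of(pkt_tbl[0]);
>>>>> 
>>>>> for (i = 0; i < num; i++)
>>>>>         push_to_pool(pool, pkt_tbl[i]);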


>>>>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>>>>> Doesn't this reorder events? As described consider an array of events 
>>>>>> that consist of packets (p) and non-packets (np).
>>>>>> 
>>>>>> With an input array of events np1, p1, np2, p2, p3, as described, 
>>>>>> `odp_event_filter_packet()` would have RC = 3, the `packet[]` array 
>>>>>> would contain p1, p2, p3, and the `remain[]` array would contain 
>>>>>> np1, np2. Is that behavior what we want?


>>>>>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>>>>>> Even with the `odp_event_type_multi()` API semantics, it's still not 
>>>>>>> clear why this is needed. Given that `odp_packet_from_event()` is 
>>>>>>> likely just a cast that can be inlined away, there doesn't seem to 
>>>>>>> be a great deal to be gained by having a `multi` version of it.


>>>>>>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>>>>>>> Again, since `odp_packet_to_event()` is likely to be just a cast 
>>>>>>>> that can be inlined away, it's not clear why a `multi` version is 
>>>>>>>> needed.


>>>>>>>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>>>>>>>> Again, it's not clear why this is needed. 


>>>>>>>>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>>>>>>>>> Not clear why this is needed. How does an application determine this 
>>>>>>>>>> more efficiently than the implementation would?


https://github.com/Linaro/odp/pull/318#discussion_r154606661
updated_at 2017-12-04 10:15:48
