bogdanPricope replied on github web page:

example/generator/odp_generator.c
line 200
@@ -784,10 +811,33 @@ static void print_pkts(int thr, thread_args_t *thr_args,
        unsigned i;
        size_t offset;
        char msg[1024];
+       interface_t *itfs, *itf;
+
+       itfs = thr_args->rx.ifs;
 
        for (i = 0; i < len; ++i) {
                pkt = pkt_tbl[i];
 
+               itf = &itfs[odp_pktio_index(odp_packet_input(pkt))];
+
+               if (odp_packet_has_ipv4(pkt)) {
+                       if (itf->config.pktin.bit.ipv4_chksum) {
+                               if (odp_packet_has_l3_error(pkt))
+                                       printf("HW detected L3 error\n");
+                       }
+               }
+
+               if (odp_packet_has_udp(pkt)) {
+                       if (itf->config.pktin.bit.udp_chksum) {
+                               if (odp_packet_has_l4_error(pkt))
+                                       printf("HW detected L4 error\n");
+                       }
+               }
+
+               /* Drop packets with errors */
+               if (odp_unlikely(odp_packet_has_error(pkt)))


Comment:
This csum check is done with a newer API in API-NEXT 
(odp_packet_l3_chksum_status()), so there is no sense in optimizing this part 
for the older implementation.
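
A minimal sketch of what the api-next style check could look like, assuming the 
odp_packet_l3_chksum_status()/odp_packet_l4_chksum_status() calls and the 
ODP_PACKET_CHKSUM_BAD status value (exact names may differ between versions):

```
#include <stdio.h>
#include <odp_api.h>

/* Sketch only: report HW-detected checksum errors via the api-next
 * status calls instead of the parser flags used in the diff above. */
static void report_chksum_errors(odp_packet_t pkt)
{
	if (odp_packet_l3_chksum_status(pkt) == ODP_PACKET_CHKSUM_BAD)
		printf("HW detected L3 error\n");

	if (odp_packet_l4_chksum_status(pkt) == ODP_PACKET_CHKSUM_BAD)
		printf("HW detected L4 error\n");
}
```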

> bogdanPricope wrote
> Yes, this can be part of another PR.


>> bogdanPricope wrote
>>  * @return Number of events outputted (0 ... num)


>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>> The new `odp_event_filter_packet()` API would be useful here.
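
A minimal sketch of how odp_event_filter_packet() could be applied to a 
scheduled burst; the signature is assumed to split an event array into packet 
handles plus the remaining non-packet events and to return the packet count:

```
#include <odp_api.h>

#define BURST 32

/* Sketch: separate packets from other events in a scheduled burst. */
static void rx_sched_burst(void)
{
	odp_event_t ev[BURST], other[BURST];
	odp_packet_t pkt[BURST];
	int num, num_pkt, i;

	num = odp_schedule_multi(NULL, ODP_SCHED_NO_WAIT, ev, BURST);
	if (num <= 0)
		return;

	num_pkt = odp_event_filter_packet(ev, pkt, other, num);

	/* A real rx path would process pkt[0..num_pkt-1];
	 * this sketch simply frees everything. */
	for (i = 0; i < num - num_pkt; i++)
		odp_event_free(other[i]);

	odp_packet_free_multi(pkt, num_pkt);
}
```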


>>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>>> Why `ODP_SCHED_NO_WAIT` vs. `ODP_SCHED_WAIT` here? You're just spinning if 
>>>> no packets are available so why not let the scheduler do the waiting?
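
For comparison, a sketch of the blocking form being suggested: ODP_SCHED_WAIT 
lets the scheduler block until an event is available, and 
odp_schedule_wait_time() gives a bounded wait.

```
#include <odp_api.h>

/* Sketch: let the scheduler do the waiting instead of spinning. */
static odp_event_t next_event(void)
{
	/* Block until an event is available: */
	return odp_schedule(NULL, ODP_SCHED_WAIT);
}

static odp_event_t next_event_bounded(void)
{
	/* Or wait at most ~100 ms: */
	return odp_schedule(NULL,
			    odp_schedule_wait_time(100 * ODP_TIME_MSEC_IN_NS));
}
```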


>>>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>>>> Agree with @muvarov, this could use some comments to explain why these 
>>>>> calls are being used. You'd expect a dedicated RX thread to simply wait 
>>>>> for packet input.


>>>>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>>>>> Checksum errors will result in `odp_packet_has_error()` being set as 
>>>>>> well, so these checks can be done only if the summary packet error 
>>>>>> predicate is set, avoiding unnecessary checks for known good packets.
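
A sketch of that restructuring, reusing the pkt/itf variables from the diff 
hunk above: the per-layer checks run only when the summary error flag is set.

```
/* Only inspect per-layer flags when the packet has some error. */
if (odp_unlikely(odp_packet_has_error(pkt))) {
	if (itf->config.pktin.bit.ipv4_chksum &&
	    odp_packet_has_l3_error(pkt))
		printf("HW detected L3 error\n");

	if (itf->config.pktin.bit.udp_chksum &&
	    odp_packet_has_l4_error(pkt))
		printf("HW detected L4 error\n");

	/* Drop the packet here, as in the diff above. */
}
```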


>>>>>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>>>>>> Might be good to have options for controlling the queue sync type here 
>>>>>>> as `ODP_SCHED_SYNC_PARALLEL` should result in highest throughput, and 
>>>>>>> `ODP_SCHED_SYNC_ORDERED` would be useful in testing performance of 
>>>>>>> scheduler implementations (in theory should be better than 
>>>>>>> `ODP_SCHED_SYNC_ATOMIC`).
>>>>>>> 
>>>>>>> Something to explore in another PR
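
A sketch of what such a command-line option could configure, assuming the sync 
type is passed through odp_pktin_queue_param_t; the opt_ordered/opt_atomic 
flags are hypothetical:

```
#include <stdio.h>
#include <odp_api.h>

/* Sketch: pick the scheduler sync type for the pktin event queues. */
static void config_pktin_sync(odp_pktio_t pktio, int opt_ordered,
			      int opt_atomic)
{
	odp_pktin_queue_param_t param;

	odp_pktin_queue_param_init(&param);

	/* Parallel by default for highest throughput. */
	param.queue_param.sched.sync = ODP_SCHED_SYNC_PARALLEL;
	if (opt_ordered)
		param.queue_param.sched.sync = ODP_SCHED_SYNC_ORDERED;
	else if (opt_atomic)
		param.queue_param.sched.sync = ODP_SCHED_SYNC_ATOMIC;

	if (odp_pktin_queue_config(pktio, &param))
		printf("pktin queue config failed\n");
}
```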


>>>>>>>> muvarov wrote
>>>>>>>> ok


>>>>>>>>> muvarov wrote
>>>>>>>>> And why odp_pktin_recv_tmo() and not odp_pktin_recv()?


>>>>>>>>>> muvarov wrote
>>>>>>>>>> why not ODP_PKTIN_WAIT?
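
For reference, a sketch of the direct-mode receive variants being discussed: 
odp_pktin_recv() never waits, while odp_pktin_recv_tmo() takes a wait argument 
that can be ODP_PKTIN_NO_WAIT, a value from odp_pktin_wait_time(), or 
ODP_PKTIN_WAIT to block.

```
#include <odp_api.h>

#define BURST 32

/* Sketch: three ways of waiting on a direct pktin queue. */
static int rx_burst(odp_pktin_queue_t q, odp_packet_t pkt_tbl[])
{
	/* Poll and return immediately when nothing is available:
	 *   return odp_pktin_recv(q, pkt_tbl, BURST);
	 *
	 * Wait up to ~1 ms for packets:
	 *   return odp_pktin_recv_tmo(q, pkt_tbl, BURST,
	 *                             odp_pktin_wait_time(ODP_TIME_MSEC_IN_NS));
	 */

	/* Block until at least one packet arrives: */
	return odp_pktin_recv_tmo(q, pkt_tbl, BURST, ODP_PKTIN_WAIT);
}
```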


>>>>>>>>>>> muvarov wrote
>>>>>>>>>>> not all events are packets.


>>>>>>>>>>>> muvarov wrote
>>>>>>>>>>>> ```
>>>>>>>>>>>>  * @return Next highest priority event
>>>>>>>>>>>>  * @retval ODP_EVENT_INVALID on timeout and no events available
>>>>>>>>>>>> ```


>>>>>>>>>>>>> muvarov wrote
>>>>>>>>>>>>> Just add a separate rx function for the scheduler, and at 
>>>>>>>>>>>>> thread start select either scheduler or direct mode.
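
A sketch of the structure being suggested, with the rx function chosen once at 
thread start; gen_rx_direct/gen_rx_sched, rx_args_t and the use_sched flag are 
hypothetical names:

```
#include <odp_api.h>

#define BURST 32

/* Hypothetical per-thread rx arguments. */
typedef struct {
	odp_pktin_queue_t pktin; /* valid in direct mode */
	int use_sched;           /* selected from the command line */
} rx_args_t;

typedef int (*rx_fn_t)(rx_args_t *args, odp_packet_t pkt_tbl[], int num);

static int gen_rx_direct(rx_args_t *args, odp_packet_t pkt_tbl[], int num)
{
	return odp_pktin_recv_tmo(args->pktin, pkt_tbl, num, ODP_PKTIN_WAIT);
}

static int gen_rx_sched(rx_args_t *args, odp_packet_t pkt_tbl[], int num)
{
	odp_event_t ev[BURST];
	int n, i;

	(void)args;
	if (num > BURST)
		num = BURST;

	n = odp_schedule_multi(NULL, ODP_SCHED_WAIT, ev, num);

	/* Assumes only packet events are delivered here; anything else
	 * would need filtering (e.g. odp_event_filter_packet()). */
	for (i = 0; i < n; i++)
		pkt_tbl[i] = odp_packet_from_event(ev[i]);

	return n;
}

/* At thread start:
 * rx_fn_t rx = args->use_sched ? gen_rx_sched : gen_rx_direct;
 */
```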


>>>>>>>>>>>>>> bogdanPricope wrote
>>>>>>>>>>>>>> This will complicate this already over-complicated code: we may 
>>>>>>>>>>>>>> need to decide between ultimate performance and feature 
>>>>>>>>>>>>>> richness. 


>>>>>>>>>>>>>>> bogdanPricope wrote
>>>>>>>>>>>>>>> No - we need to print csum errors first.
>>>>>>>>>>>>>>> This part was significantly changed in api-next (the csum 
>>>>>>>>>>>>>>> checks use a different/new API), and it makes no sense to 
>>>>>>>>>>>>>>> optimize it for the old (master) code. After integration in 
>>>>>>>>>>>>>>> api-next, this part will be reworked to use fewer parser 
>>>>>>>>>>>>>>> flags (reduce the parsing level).
>>>>>>>>>>>>>>> For example, removing L4 parsing and the interface lookup 
>>>>>>>>>>>>>>> brings an extra 1 Mpps.
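
As one illustration of reducing the parsing level, a sketch assuming the 
parser.layer field of odp_pktio_config_t as in more recent ODP (the field and 
enum names differed between master and api-next at the time):

```
#include <odp_api.h>

/* Sketch: stop parsing at L3 when L4 information is not needed. */
static int limit_parsing(odp_pktio_t pktio)
{
	odp_pktio_config_t config;

	odp_pktio_config_init(&config);
	config.parser.layer = ODP_PROTO_LAYER_L3;

	return odp_pktio_config(pktio, &config);
}
```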


>>>>>>>>>>>>>>>> bogdanPricope wrote
>>>>>>>>>>>>>>>> '-r' may work.


>>>>>>>>>>>>>>>>> Bill Fischofer(Bill-Fischofer-Linaro) wrote:
>>>>>>>>>>>>>>>>> Having an option to use direct mode seems reasonable, but 
>>>>>>>>>>>>>>>>> shouldn't we retain schedule mode (perhaps as a command line 
>>>>>>>>>>>>>>>>> switch)? This would provide an easy means of testing 
>>>>>>>>>>>>>>>>> scheduler efficiency as it is tuned. At least in some 
>>>>>>>>>>>>>>>>> environments we'd like schedule mode to show better 
>>>>>>>>>>>>>>>>> performance than direct.


>>>>>>>>>>>>>>>>>> muvarov wrote
>>>>>>>>>>>>>>>>>> that has to be the first check.


>>>>>>>>>>>>>>>>>>> muvarov wrote
>>>>>>>>>>>>>>>>>>> -r ?


https://github.com/Linaro/odp/pull/343#discussion_r158211236
updated_at 2017-12-21 07:30:11
