> Hi,
>
> The SW eventdev rx adapter has an internal enqueue buffer
> 'rx_adapter->event_enqueue_buffer', which stores packets received from
> the NIC until at least BATCH_SIZE (=32) packets have been received,
> before enqueueing them to eventdev. This causes a lot of problems, for
> example in validation testing, where often only a small number of
> specific test packets is sent to the NIC: one would always have to
> transmit at least BATCH_SIZE test packets before anything can be
> received from eventdev. Additionally, if the rx packet rate is slow,
> this adds a considerable amount of extra delay.
>
> Looking at the rx adapter API and the SW implementation code, there
> doesn't seem to be a way to disable this internal caching. In my
> opinion this "functionality" makes testing the SW rx adapter so
> cumbersome that either the implementation should be modified to flush
> the buffered packets after some timeout (at some performance cost), or
> there should be a method to disable the caching. Any opinions on how
> this issue could be fixed?

At the minimum, I would think there should be a compile-time option.
From a use-case perspective, this falls under latency vs. throughput
considerations: a latency-sensitive application might not want to wait
until 32 packets have been received.
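To make that concrete, below is a rough sketch of what a timeout-based
flush could look like. This is illustrative only: it calls the public
rte_eth_rx_burst()/rte_event_enqueue_burst() APIs, but the buffer
struct, the FLUSH_TIMEOUT_US value and all of the function names are my
own assumptions, not the adapter's actual internals.

/* Hypothetical sketch of a buffered rx -> eventdev loop with a
 * timeout-based flush added. Names and structure are illustrative
 * and do not reflect the real rte_event_eth_rx_adapter code.
 */
#include <rte_ethdev.h>
#include <rte_eventdev.h>
#include <rte_cycles.h>

#define BATCH_SIZE 32          /* same threshold as the SW adapter */
#define FLUSH_TIMEOUT_US 100   /* assumed flush period */

struct enq_buffer {
	struct rte_event events[BATCH_SIZE];
	uint16_t count;
};

static void
flush(uint8_t dev_id, uint8_t ev_port, struct enq_buffer *buf)
{
	uint16_t n = 0;

	/* Retry until the eventdev accepts everything (simplified). */
	while (n < buf->count)
		n += rte_event_enqueue_burst(dev_id, ev_port,
					     &buf->events[n],
					     buf->count - n);
	buf->count = 0;
}

static void
rx_service_loop(uint8_t dev_id, uint8_t ev_port, uint16_t eth_port,
		uint16_t queue, struct enq_buffer *buf)
{
	uint64_t timeout_cycles =
		rte_get_timer_hz() * FLUSH_TIMEOUT_US / 1000000;
	uint64_t deadline = rte_get_timer_cycles() + timeout_cycles;

	for (;;) {
		struct rte_mbuf *mbufs[BATCH_SIZE];
		uint16_t i, nb_rx;

		nb_rx = rte_eth_rx_burst(eth_port, queue, mbufs,
					 BATCH_SIZE - buf->count);
		for (i = 0; i < nb_rx; i++) {
			struct rte_event *ev = &buf->events[buf->count++];

			ev->event = 0; /* zero flow_id/sched_type/etc. */
			ev->event_type = RTE_EVENT_TYPE_ETH_RX_ADAPTER;
			ev->mbuf = mbufs[i];
		}

		/* Flush on a full batch as today, but ALSO when the
		 * timeout expires, so a handful of test packets is
		 * not held back indefinitely.
		 */
		if (buf->count == BATCH_SIZE ||
		    (buf->count > 0 &&
		     rte_get_timer_cycles() > deadline)) {
			flush(dev_id, ev_port, buf);
			deadline = rte_get_timer_cycles() + timeout_cycles;
		}
	}
}

The full-batch path is unchanged; the only added per-iteration cost is
the rte_get_timer_cycles() read, so throughput under load should not
suffer much while the latency of small bursts becomes bounded.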
> Regards,
> Matias