On 06/25/2015 09:44 PM, Thomas Monjalon wrote:
> 2015-06-25 18:46, Avi Kivity:
>> On 06/25/2015 06:18 PM, Matthew Hall wrote:
>>> On Thu, Jun 25, 2015 at 09:14:53AM +0000, Vass, Sandor (Nokia - HU/Budapest) wrote:
>>>> According to my understanding each packet should go
>>>> through BR as fast as possible, but it seems that rte_eth_rx_burst
>>>> retrieves packets only when there are at least 2 packets on the RX queue
>>>> of the NIC. At least most of the time, as there are cases (rarely,
>>>> according to my console log) when it can retrieve 1 packet, and sometimes
>>>> only 3 packets can be retrieved...
>>> By default DPDK is optimized for throughput, not latency. Try a test with
>>> heavier traffic.
>>>
>>> There is also some work going on now for DPDK interrupt-driven mode, which
>>> will work more like traditional Ethernet drivers instead of polling mode
>>> Ethernet drivers.
>>>
>>> Though I'm not an expert on it, there is also a series of ways to optimize
>>> for latency, which hopefully some others could discuss... or maybe search
>>> the archives / web site / Intel tuning documentation.
>>>
>> What would be useful is a runtime switch between polling and interrupt
>> modes.  This way, if the load is low you use interrupts, and as it rises,
>> you switch to poll mode as a mitigation, until the load drops again.
> DPDK is not a stack. It's up to the DPDK application to poll or use interrupts
> when needed.

As long as DPDK provides a mechanism for a runtime switch, the 
application can do that.
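
For reference, here is a minimal sketch of what such a switch could look like
on the application side, loosely following the pattern of DPDK's l3fwd-power
example: poll with rte_eth_rx_burst(), and after enough empty polls arm the RX
interrupt and sleep in rte_epoll_wait() until traffic returns. It assumes the
port was configured with RX interrupts enabled (intr_conf.rxq = 1); the port
and queue IDs and the idle threshold are placeholders, and error handling is
omitted.

    #include <rte_ethdev.h>
    #include <rte_interrupts.h>   /* rte_epoll_wait(), RTE_EPOLL_PER_THREAD */

    #define PORT_ID        0      /* placeholder port */
    #define QUEUE_ID       0      /* placeholder queue */
    #define BURST_SIZE     32
    #define IDLE_THRESHOLD 300    /* empty polls before sleeping */

    static void
    rx_loop(void)
    {
        struct rte_mbuf *bufs[BURST_SIZE];
        struct rte_epoll_event ev;
        unsigned int idle = 0;

        /* One-time setup: map this RX queue's interrupt into the
         * calling thread's per-thread epoll instance. */
        rte_eth_dev_rx_intr_ctl_q(PORT_ID, QUEUE_ID, RTE_EPOLL_PER_THREAD,
                                  RTE_INTR_EVENT_ADD, NULL);

        for (;;) {
            uint16_t nb = rte_eth_rx_burst(PORT_ID, QUEUE_ID, bufs,
                                           BURST_SIZE);
            if (nb > 0) {
                idle = 0;
                /* ... forward/process 'nb' packets ... */
                continue;
            }

            if (++idle < IDLE_THRESHOLD)
                continue;          /* stay in poll mode a while longer */

            /* Load is low: switch to interrupt mode and sleep until
             * the NIC signals that packets have arrived. */
            rte_eth_dev_rx_intr_enable(PORT_ID, QUEUE_ID);
            rte_epoll_wait(RTE_EPOLL_PER_THREAD, &ev, 1, -1 /* block */);
            rte_eth_dev_rx_intr_disable(PORT_ID, QUEUE_ID);
            idle = 0;
        }
    }

Note that the real l3fwd-power example re-checks the queue once more after
enabling the interrupt, to close the window where a packet arrives between the
last empty poll and the epoll wait; this sketch leaves that out for brevity.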
