On 2018/09/11 1:21, Ilias Apalodimas wrote:
>>> @@ -707,6 +731,26 @@ static int netsec_process_rx(struct netsec_priv *priv, int budget)
>>>             if (unlikely(!buf_addr))
>>>                     break;
>>>  
>>> +           if (xdp_prog) {
>>> +                   xdp_result = netsec_run_xdp(desc, priv, xdp_prog,
>>> +                                               pkt_len);
>>> +                   if (xdp_result != NETSEC_XDP_PASS) {
>>> +                           xdp_flush |= xdp_result & NETSEC_XDP_REDIR;
>>> +
>>> +                           dma_unmap_single_attrs(priv->dev,
>>> +                                                  desc->dma_addr,
>>> +                                                  desc->len, DMA_TO_DEVICE,
>>> +                                                  DMA_ATTR_SKIP_CPU_SYNC);
>>> +
>>> +                           desc->len = desc_len;
>>> +                           desc->dma_addr = dma_handle;
>>> +                           desc->addr = buf_addr;
>>> +                           netsec_rx_fill(priv, idx, 1);
>>> +                           netsec_adv_desc(&dring->tail);
>>> +                   }
>>> +                   continue;
>>
>> Continue even on XDP_PASS? Is this really correct?
>>
>> Also seems there is no handling of adjust_head/tail for XDP_PASS case.
>>
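To clarify what I meant by the adjust_head/tail part: the program may move
xdp.data / xdp.data_end via bpf_xdp_adjust_head()/bpf_xdp_adjust_tail(), so
on XDP_PASS the driver has to rebuild the skb from the updated pointers and
fall through, not skip the frame with continue. Roughly like this (untested
sketch, variable names such as pkt_offset are only for illustration, not the
actual netsec code):

	struct xdp_buff xdp;
	u32 act;

	xdp.data_hard_start = buf_addr;
	xdp.data = buf_addr + XDP_PACKET_HEADROOM;
	xdp.data_end = xdp.data + pkt_len;
	xdp_set_data_meta_invalid(&xdp);

	act = bpf_prog_run_xdp(xdp_prog, &xdp);
	switch (act) {
	case XDP_PASS:
		/* pick up any head/tail adjustment made by the program
		 * before building the skb */
		pkt_offset = xdp.data - xdp.data_hard_start;
		pkt_len = xdp.data_end - xdp.data;
		break;
	/* XDP_TX / XDP_REDIRECT / XDP_DROP handled as in your patch */
	}
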
> A question on this: should XDP-related frames be allocated using 1 page
> per packet?

AFAIK there is no such constraint; e.g. i40e allocates 1 page per 2 packets.
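IIRC it splits each page into two 2048-byte halves and flips between them,
allocating a new page only when the stack still holds a reference to the
other half. Even with XDP_PACKET_HEADROOM (256 bytes) reserved at the front,
a standard-MTU frame still fits in a 2K half. Simplified and untested sketch
below (struct/function names are made up for illustration, not the real i40e
code):

	#define RX_BUF_LEN	2048	/* half of a 4K page */

	struct rx_page_buf {			/* illustrative only */
		struct page *page;
		unsigned int page_offset;
		dma_addr_t dma;
	};

	/* The half not given to the stack can be reused for the next
	 * descriptor only if nobody else still references the page. */
	static bool rx_page_reusable(const struct rx_page_buf *buf)
	{
		return page_count(buf->page) == 1 &&
		       !page_is_pfmemalloc(buf->page);
	}

	/* Flip to the other 2K half and take an extra reference so the
	 * half handed to the stack stays valid until it is freed. */
	static void rx_page_flip(struct rx_page_buf *buf)
	{
		buf->page_offset ^= RX_BUF_LEN;
		get_page(buf->page);
	}

If reuse fails, the driver simply unmaps the page and allocates a fresh one
for that descriptor.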

-- 
Toshiaki Makita
