Use rte_pktmbuf_free_bulk(), which is faster because it does a single
mempool operation rather than one per packet.
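
For reference, a minimal sketch of the two patterns, assuming an
array dr[] holding k mbufs to drop (the helper names below are
hypothetical and only for illustration, not part of the patch):

#include <rte_mbuf.h>

/* before: one mempool operation per freed packet */
static void
drop_one_by_one(struct rte_mbuf *dr[], uint32_t k)
{
	uint32_t i;

	for (i = 0; i != k; i++)
		rte_pktmbuf_free(dr[i]);
}

/* after: rte_pktmbuf_free_bulk() frees the whole array, returning
 * the mbufs to their mempool(s) with bulk operations
 */
static void
drop_in_bulk(struct rte_mbuf *dr[], uint32_t k)
{
	rte_pktmbuf_free_bulk(dr, k);
}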

Signed-off-by: Stephen Hemminger <[email protected]>
Acked-by: Konstantin Ananyev <[email protected]>
Reviewed-by: Marat Khalili <[email protected]>
---
 lib/bpf/bpf_pkt.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/lib/bpf/bpf_pkt.c b/lib/bpf/bpf_pkt.c
index 01f813c56b..087ac0f244 100644
--- a/lib/bpf/bpf_pkt.c
+++ b/lib/bpf/bpf_pkt.c
@@ -177,8 +177,7 @@ apply_filter(struct rte_mbuf *mb[], const uint64_t rc[], uint32_t num,
 
        if (drop != 0) {
                /* free filtered out mbufs */
-               for (i = 0; i != k; i++)
-                       rte_pktmbuf_free(dr[i]);
+               rte_pktmbuf_free_bulk(dr, k);
        } else {
                /* copy filtered out mbufs beyond good ones */
                for (i = 0; i != k; i++)
-- 
2.51.0
