Hello!

We are using vanilla PF_RING 5.6.0 in transparent_mode=2 to filter and
reflect packets from one interface to another (a kind of bridge).
It is critical for us to use non-DNA PF_RING.

We do it as follows:

1) Open a ring
2) Create a rule with the reflect interface field set
3) Add the rule to the ring
4) Enable the ring

After that the userspace bridge sleeps until termination is requested.
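For reference, the steps above can be sketched roughly as follows. This is a minimal sketch against the PF_RING 5.x userspace API; the interface names (eth2/eth3), snaplen, and the exact rule-action constant are our assumptions, not a definitive implementation:

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include "pfring.h"

int main(void) {
  /* 1) Open a ring on the ingress interface (names are examples) */
  pfring *ring = pfring_open("eth2", 1536 /* snaplen */, PF_RING_PROMISC);
  if (ring == NULL) {
    fprintf(stderr, "pfring_open failed (is pf_ring.ko loaded?)\n");
    return 1;
  }

  /* 2) Create a wildcard rule whose reflect_device points at the egress
        interface; rule_action constant as we understand it from pfring.h */
  filtering_rule rule;
  memset(&rule, 0, sizeof(rule));
  rule.rule_id = 1;
  rule.rule_action = reflect_packet_and_stop_rule_evaluation;
  snprintf(rule.reflect_device, sizeof(rule.reflect_device), "eth3");

  /* 3) Add the rule to the ring */
  if (pfring_add_filtering_rule(ring, &rule) < 0) {
    fprintf(stderr, "pfring_add_filtering_rule failed\n");
    pfring_close(ring);
    return 1;
  }

  /* 4) Enable the ring; from here on the kernel module reflects packets */
  pfring_enable_ring(ring);

  pause(); /* userspace bridge sleeps until termination */
  pfring_close(ring);
  return 0;
}
```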

Is this correct, or is there a more efficient way to bridge packets with
non-DNA PF_RING?

Hardware configuration is: 
2xIntel Xeon CPU E5-2665 @ 2.40GHz (32 logical cpus)
64 Gb RAM
The network card is Intel 82599-based with two network interfaces
(PF_RING-aware ixgbe driver).

We have also pinned the interrupts of the RxTx queues to different CPU
cores and disabled the gro, gso, tso, lro and rx/tx checksumming
features with ethtool.
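For completeness, the tuning we apply looks roughly like this (interface names, IRQ numbers and core mapping are examples for our box, not a recipe):

```shell
#!/bin/sh
# Disable offloads that get in the way of bridging, per interface
# (rx/tx here are the rx-/tx-checksumming features)
for ifname in eth2 eth3; do
  ethtool -K "$ifname" gro off gso off tso off lro off rx off tx off
done

# Pin each RxTx queue interrupt to its own core; the real IRQ numbers
# come from the ethX-TxRx-N lines in /proc/interrupts
# echo 1 > /proc/irq/90/smp_affinity    # queue 0 -> CPU0
# echo 2 > /proc/irq/91/smp_affinity    # queue 1 -> CPU1
```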

Our current implementation achieves about 4 Mpps on a 60-byte UDP packet
flow through our bridge; the pfbridge example gives ~500 Kpps.

What is the typical performance for bridging and receiving traffic on
such a hardware configuration with non-DNA PF_RING?


Also, the PF_RING UserGuide states that interfaces can be opened in
ethX@Y format, so that a userspace application can bind to a specific
queue of a multiqueue NIC.
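The per-queue binding we are attempting looks like this (one ring per RSS queue; the interface name and queue count are ours, and the ethX@Y string is taken from the UserGuide):

```c
#include <stdio.h>
#include "pfring.h"

#define NUM_QUEUES 16

int main(void) {
  pfring *rings[NUM_QUEUES];
  char dev[32];

  for (int q = 0; q < NUM_QUEUES; q++) {
    /* ethX@Y syntax: bind this ring to queue q of eth2 */
    snprintf(dev, sizeof(dev), "eth2@%d", q);
    rings[q] = pfring_open(dev, 1536, PF_RING_PROMISC);
    if (rings[q] == NULL) {
      fprintf(stderr, "pfring_open(%s) failed\n", dev);
      return 1;
    }
    pfring_enable_ring(rings[q]);
  }

  /* ... one bridge instance then services each ring ... */
  return 0;
}
```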

Is this true for vanilla PF_RING, or only for DNA interfaces?


In fact, when we use it with our bridge, packets are duplicated on the
outgoing interface.

That is, with 16 applications attached to ethX@0 ... ethX@15 we see 16x
the number of outgoing packets vs. incoming.

Is this behavior correct for vanilla PF_RING?


Thanks in advance!

Maxim Samoylov, Moscow State University.

_______________________________________________
Ntop-misc mailing list
[email protected]
http://listgateway.unipi.it/mailman/listinfo/ntop-misc
