Hi, Alfredo!

We were thinking about DNA, but it raises one more question: if we have
a filtering module implemented as a PF_RING module (kernelspace), we'll
have to rewrite it as a userspace app to work with DNA, won't we? Or is
it possible to keep the PF_RING module somewhere on the path between
the capturing interface and userspace?
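
For instance, would we end up with a per-packet loop along these lines? This
is only a sketch to make the question concrete: our_filter() stands in for
our module's verdict, the dna:ethX device names are placeholders, and we
assume the usual pfring_recv()/pfring_send() calls apply to DNA rings too.

#include <pfring.h>

/* hypothetical stand-in for our kernel module's per-packet verdict */
static int our_filter(const u_char *pkt, u_int caplen) { return 1; }

int main(void) {
  /* with DNA the packet is handed straight to userspace, so the
     filtering decision would have to be taken here as well
     (error handling omitted for brevity) */
  pfring *in  = pfring_open("dna:eth2", 1500 /* snaplen */, PF_RING_PROMISC);
  pfring *out = pfring_open("dna:eth3", 1500, PF_RING_PROMISC);
  u_char *pkt;
  struct pfring_pkthdr hdr;

  pfring_enable_ring(in);
  pfring_enable_ring(out);

  while (1) {
    /* buffer_len == 0: receive a zero-copy pointer into the ring */
    if (pfring_recv(in, &pkt, 0, &hdr, 1 /* wait */) > 0
        && our_filter(pkt, hdr.caplen))
      pfring_send(out, (char *) pkt, hdr.caplen, 1 /* flush */);
  }
  return 0;
}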

Regards,

Dennis

05.09.2013 13:12, Alfredo Cardigliano wrote:
> Hi Dennis
> you should not expect a significant performance boost from using PF_RING-aware 
> drivers for this application: the Linux bridge sits at the very bottom of the 
> network stack, so its overhead is negligible. You should use DNA drivers for 
> better performance (take a look at pfdnabounce in userland/examples).
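>
> Roughly, the bounce loop in that example boils down to the following
> (simplified sketch; device names are placeholders, error handling omitted):
>
> pfring *a = pfring_open("dna:eth4", 1500, PF_RING_PROMISC);
> pfring *b = pfring_open("dna:eth5", 1500, PF_RING_PROMISC);
> u_char *pkt;
> struct pfring_pkthdr hdr;
>
> pfring_enable_ring(a);
> pfring_enable_ring(b);
>
> while (1)
>   if (pfring_recv(a, &pkt, 0, &hdr, 1 /* wait */) > 0)
>     pfring_send(b, (char *) pkt, hdr.caplen, 1 /* flush */);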
>
> Best Regards
> Alfredo
>
> On Sep 5, 2013, at 10:55 AM, Dennis Gamayunov <[email protected]> wrote:
>
>> Hi all,
>>
>> We're trying to build an L2 transparent bridge using PF_RING 5.6, PF_RING-aware 
>> ixgbe, Linux 3.2 (Debian wheezy), and PF_RING transparent mode 2. 
>>
>> Testing with pktgen and PF_RING shows that bridging small (64-byte) packets 
>> hits the same threshold that we see when using the standard Linux bridge or a 
>> simple netperf (UDP) test, i.e. about 480 Mbit/s (about 1.5 Mpps).
>>
>> A detailed description of the hardware and the testing conditions is given 
>> below.
>>
>> Is this expected behavior, or did we miss something in the PF_RING 
>> configuration and/or in ixgbe and kernel tuning? PF_RING in this configuration 
>> behaves almost the same as the stock Linux stack, which concerns me and makes 
>> me think that we missed something in the configuration.
>>
>> Hardware configuration:
>>
>> System: Supermicro X9DRW
>> Processors: 2 x Intel Xeon CPU E5-2665 @ 2.40GHz. 2 x 8 cores with HT.
>> Chipset: Intel C602
>> Memory: 8 x 8 GB DIMM DDR3-1600, 64 GB total
>> Network adapters: 2 x 82599EB 10 Gigabit TN Network Connection (one adapter 
>> equipped with twisted pair, the other with optical SFP+ modules). Each adapter 
>> has two ports.
>>
>> Testing configuration:
>> Vanilla PF_RING (version 5.6.0) with the PF_RING-aware ixgbe (3.11.33) driver 
>> and transparent_mode=2 (non-transparent) is used. 
>> LRO and GRO are disabled.
>> IRQ affinities are spread across all processors.
>>
>> No DNA present.
>> Traffic is generated with pktgen, which reports approximately 4 Mpps.
>> The twisted-pair adapter is used, and PF_RING bridges traffic between the two 
>> ports of that one adapter.
>>
>> We are using rings in userspace for bridging in the following manner:
>>
>> ....
>> ring1 = pfring_open(interface1, snapshot, promiscuous ? PF_RING_PROMISC : 0);
>> ring2 = pfring_open(interface2, snapshot, promiscuous ? PF_RING_PROMISC : 0);
>> memset(&rule1, 0, sizeof(rule1));
>> memset(&rule2, 0, sizeof(rule2));
>> rule1.rule_id = 1;
>> rule2.rule_id = 2;
>> strcpy(rule1.reflector_device_name, interface2);
>> strcpy(rule2.reflector_device_name, interface1);
>> rule1.rule_action = reflect_packet_and_stop_rule_evaluation;
>> rule2.rule_action = reflect_packet_and_stop_rule_evaluation;
>> /* the reflector rules only take effect once added to the rings */
>> pfring_add_filtering_rule(ring1, &rule1);
>> pfring_add_filtering_rule(ring2, &rule2);
>> pfring_enable_ring(ring1);
>> pfring_enable_ring(ring2);
>> while(true) sleep(1);
>> .....
>>
>> On the bridge output interface we see only 37.5% of the generated traffic, 
>> i.e. a 62.5% drop.
>> If the userspace application opens 8 rings for each reflect direction, we see 
>> 43.5% of the traffic bridged and a 56.5% drop. This gives approximately the 
>> same 1.5-2 Mpps.
>>
>> Kind regards,
>>
>> Dennis Gamayunov

_______________________________________________
Ntop-misc mailing list
[email protected]
http://listgateway.unipi.it/mailman/listinfo/ntop-misc
