OK, thank you for your reply.

In general we managed to get the applications working with both PF_RING drivers and vanilla drivers in transparent_mode=1.

Anyway, it would be more appropriate to define exactly what the behaviour of the applications is once PF_RING manages the system network.
The manual contains this description:

"· transparent_mode=0 (default)
  Packets are received via the standard Linux interface. Any driver can use this mode.
· transparent_mode=1 (Both vanilla and PF_RING-aware drivers)
  Packets are memcpy() to PF_RING and also to the standard Linux path.
· transparent_mode=2 (PF_RING-aware drivers only)
  Packets are ONLY memcpy() to PF_RING and not to the standard Linux path (i.e. tcpdump won't see anything)."
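For reference, the mode is selected when loading the kernel module; this is just a config fragment from our kind of setup (the module path and the min_num_slots value are examples, adjust as needed):

```shell
# unload and reload pf_ring with the desired transparent_mode
# (module path and slot count are examples, not prescriptions)
rmmod pf_ring 2>/dev/null
insmod ./pf_ring.ko transparent_mode=1 min_num_slots=8192
# check what the module is actually running with
cat /proc/net/pf_ring/info
```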

From a user's point of view it would be better to define, >> for each of the 3 modes <<, the following behaviour:

 * applications compiled _with_ PF_RING support:
     o can sniff/inject packets via PF_RING-aware drivers (yes/no)
     o can sniff/inject packets via vanilla drivers (yes/no)
 * applications compiled _without_ PF_RING support:
     o can sniff/inject packets via PF_RING-aware drivers (yes/no)
     o can sniff/inject packets via vanilla drivers (yes/no)

I think these 4 situations can each behave in a different way.
To further clarify, please describe whether 'PF_RING support' means support just in the library that takes care of the packets (libpcap), or also in the application itself.
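To make the request concrete, here is a purely illustrative sketch (in Python) of the decision table I'd like the documentation to confirm; the mode=2 values are only my guesses, based on the manual text and on the SSH/tcpdump observations described in this mail.

```python
# Purely illustrative sketch of the table I'd like confirmed.
# Arguments:
#   mode          - transparent_mode the pf_ring module was loaded with
#   pfring_app    - is the app (or its libpcap) compiled with PF_RING support?
#   pfring_driver - is the interface driven by a PF_RING-aware driver?
# Returns True if I expect the app to see traffic on that interface.
def app_sees_traffic(mode, pfring_app, pfring_driver):
    if mode in (0, 1):
        # mode 0: everything goes through the standard Linux path;
        # mode 1: packets are copied to both PF_RING and the kernel,
        # so both kinds of application should see traffic.
        return True
    if mode == 2:
        if pfring_driver:
            # packets bypass the kernel: only PF_RING apps see them
            return pfring_app
        # vanilla driver: the kernel path is untouched, which would
        # explain why SSH keeps working for me even in mode=2
        return True
    raise ValueError("unknown transparent_mode: %r" % mode)

# tcpdump without PF_RING support, on a PF_RING-driven interface:
print(app_sees_traffic(2, False, True))   # False
# sshd on an interface with a vanilla driver, still in mode=2:
print(app_sees_traffic(2, False, False))  # True
```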

One more thing:
Reading the official documentation, one could understand that with transparent_mode=2 all non-PF_RING applications will not see traffic from ANY network interface, even the vanilla ones, because PF_RING steers all the packets into its own stack instead of the kernel's. So WHY, if I load PF_RING in mode=2, can I still reach my machine with SSH afterwards? The documentation is a bit wrong in this case; I think it must be specified that even in mode=2, non-PF_RING applications can still read and write packets on interfaces with vanilla drivers. I think this is because the kernel keeps a completely separate network path for each network card on the system.

With vanilla drivers, loaded either after or before the PF_RING module, all standard applications still work and packets are still processed by the Linux kernel, even in mode=2. So in the end the statement in the documentation for mode=2, "tcpdump won't see anything", can be misleading and should be changed to something like "tcpdump compiled without PF_RING support won't see anything on an interface using a PF_RING-aware driver".

Regarding our problem with the DNA version of igb:
we have a Dell PowerEdge 1950.
I do not know, for example, whether the problem could be the PCIe bus being only x4.

This is the output after loading the PF_RING version, NOT the DNA one!!! (I cannot test it again right now):
igb 0000:0b:00.1: PCI INT B disabled
igb 0000:0b:00.0: PCI INT A disabled
igb 0000:0b:00.0: PCI INT A -> GSI 16 (level, low) -> IRQ 16
igb 0000:0b:00.0: setting latency timer to 64
igb: 0000:0b:00.0: igb_validate_option: Interrupt Mode set to 2
igb 0000:0b:00.0: irq 122 for MSI/MSI-X
igb 0000:0b:00.0: irq 123 for MSI/MSI-X
igb 0000:0b:00.0: Intel(R) Gigabit Ethernet Network Connection
igb 0000:0b:00.0: eth0: (PCIe:2.5GT/s:Width x4) 00:1b:21:48:3c:ec
igb 0000:0b:00.0: eth0: PBA No: E43709-003
igb 0000:0b:00.0: Using MSI-X interrupts. 1 rx queue(s), 1 tx queue(s)
igb 0000:0b:00.1: PCI INT B -> GSI 17 (level, low) -> IRQ 17
igb 0000:0b:00.1: setting latency timer to 64
igb 0000:0b:00.1: irq 124 for MSI/MSI-X
igb 0000:0b:00.1: irq 125 for MSI/MSI-X
igb 0000:0b:00.0: MTU > 9216 not supported.
igb 0000:0b:00.1: Intel(R) Gigabit Ethernet Network Connection
igb 0000:0b:00.1: eth1: (PCIe:2.5GT/s:Width x4) 00:1b:21:48:3c:ed
igb 0000:0b:00.1: eth1: PBA No: E43709-003
igb 0000:0b:00.1: Using MSI-X interrupts. 1 rx queue(s), 1 tx queue(s)
igb 0000:0b:00.1: MTU > 9216 not supported.
igb: eth0 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX
igb: eth1 NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX

Enrico.


_______________________________________________
Ntop-misc mailing list
[email protected]
http://listgateway.unipi.it/mailman/listinfo/ntop-misc
