Hi,
I’m having trouble using the ZC drivers; I’ve included the relevant details of
loading, etc. at the bottom. Essentially I can’t get my usual apps (such as
argus), which I compile against libpcap, to recognize interfaces in the usual
combinations, in either ZC or non-ZC mode. Falling back to just testing the
tcpdump distributed with PF_RING, I’m still having some issues. There’s a good
chance these stem from my own misconceptions, so please bear with me. :)
Initially I tried a tcpdump with the same interface syntax that worked in the
5.x series:
# ldd /usr/local/sbin/tcpdump
linux-vdso.so.1 => (0x00007fff4ff8b000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x0000003959e00000)
libnuma.so.1 => /usr/lib64/libnuma.so.1 (0x000000395b200000)
libc.so.6 => /lib64/libc.so.6 (0x0000003959a00000)
/lib64/ld-linux-x86-64.so.2 (0x0000003959600000)
# /usr/local/sbin/tcpdump -i 'p1p1;p2p1' -nn -c 100
<snip>
100 packets captured
100 packets received by filter
0 packets dropped by kernel
So far so good. After playing around a bit, I could not figure out how to do
this with the ZC drivers directly. As this may be part of the issue with argus,
I’d like to know whether it’s possible to call tcpdump in this fashion and
invoke the ZC drivers. Could you provide an example if it is?
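In case it helps frame the question: my guess from the docs is that ZC
interfaces are opened with a zc: prefix, so I was expecting something along
these lines to work (this is an assumption on my part, not something I’ve
gotten running):

```shell
# Guesswork: open a single interface through the ZC driver directly
/usr/local/sbin/tcpdump -i zc:p1p1 -nn -c 100

# ...and what I'd hope for with both interfaces, mirroring the 5.x syntax
/usr/local/sbin/tcpdump -i 'zc:p1p1;zc:p2p1' -nn -c 100
```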
The next issue I ran into was with BPF filters; I could not get them to work
even with the non-ZC drivers. For instance, this tcpdump captures no packets:
# /usr/local/sbin/tcpdump -i 'p1p1;p2p1' -nn -c 100 "ip"
tcpdump: WARNING: SIOCGIFADDR: p1p1;p2p1: No such device
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on p1p1;p2p1, link-type EN10MB (Ethernet), capture size 8192 bytes
^C
0 packets captured
0 packets received by filter
0 packets dropped by kernel
Since this shouldn’t be using ZC at all, and this invocation worked in previous
versions, it seems it must be a bug?
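For reproduction, the minimal pair seems to be the same invocation with and
without a filter expression:

```shell
# No filter: captures 100 packets as expected
/usr/local/sbin/tcpdump -i 'p1p1;p2p1' -nn -c 100

# Identical command plus a BPF expression: 0 packets captured
/usr/local/sbin/tcpdump -i 'p1p1;p2p1' -nn -c 100 "ip"
```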
Another area I’m having trouble understanding is the use of the z* example
programs. It seems I should be using zbalance_ipc to merge the two streams and
then share them with multiple processes. Trying this with even a single
consumer process exhibits the same issue with filters:
# ./zbalance_ipc -i p1p1,p2p1 -c 99 -g 1
<switch windows>
# /usr/local/sbin/tcpdump -i zc:99@0 -nn -c 10
<snip> 10 packets are displayed
# /usr/local/sbin/tcpdump -i zc:99@0 -nn -c 10 "ip"
<nothing>
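For completeness, the multi-process layout I’m ultimately aiming for is along
these lines (the -n/-m flags and queue count are my reading of the
zbalance_ipc help text, so treat them as assumptions):

```shell
# Aggregate both interfaces into ZC cluster 99, fanning out to 4 queues
./zbalance_ipc -i zc:p1p1,zc:p2p1 -c 99 -n 4 -m 1 -g 1

# Each consumer process then attaches to its own queue
/usr/local/sbin/tcpdump -i zc:99@0 -nn &
/usr/local/sbin/tcpdump -i zc:99@1 -nn &
```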
I’d like to resolve the BPF issue first, and get some confirmation that
zbalance_ipc is the right way to get ZC’ed data to my applications. And if you
can see I’m already heading down a road to inefficient usage, please let me
know. :)
Thanks for your time, and for reading this far,
Jesse
# cat /etc/modprobe.d/pfring.conf
options min_num_slots=8096 transparent_mode=2
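One thing I’m not sure about is whether that options line is even being
applied, since modprobe.d(5) expects the module name on the options line; if
that’s right, the file should presumably read:

```shell
# /etc/modprobe.d/pfring.conf -- 'options' names the module explicitly
options pf_ring min_num_slots=8096 transparent_mode=2
```

That might explain why /proc/net/pf_ring/info below still shows the defaults
(4096 ring slots, transparent mode 0).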
# cat /proc/net/pf_ring/info
PF_RING Version : 6.0.1 ($Revision: exported$)
Total rings : 0
Standard (non DNA) Options
Ring slots : 4096
Slot version : 15
Capture TX : Yes [RX+TX]
IP Defragment : No
Socket Mode : Standard
Transparent mode : Yes [mode 0]
Total plugins : 0
Cluster Fragment Queue : 0
Cluster Fragment Discard : 0
# ./load_driver.sh
mkdir: cannot create directory `/mnt/huge': File exists
Configuring p1p1
no rx vectors found on p1p1
no tx vectors found on p1p1
p1p1 mask=1 for /proc/irq/61/smp_affinity
p1p1 mask=2 for /proc/irq/62/smp_affinity
p1p1 mask=4 for /proc/irq/63/smp_affinity
p1p1 mask=8 for /proc/irq/64/smp_affinity
p1p1 mask=10 for /proc/irq/65/smp_affinity
p1p1 mask=20 for /proc/irq/66/smp_affinity
p1p1 mask=40 for /proc/irq/67/smp_affinity
p1p1 mask=80 for /proc/irq/68/smp_affinity
p1p1 mask=100 for /proc/irq/69/smp_affinity
p1p1 mask=200 for /proc/irq/70/smp_affinity
p1p1 mask=400 for /proc/irq/71/smp_affinity
p1p1 mask=800 for /proc/irq/72/smp_affinity
p1p1 mask=1000 for /proc/irq/73/smp_affinity
p1p1 mask=2000 for /proc/irq/74/smp_affinity
p1p1 mask=4000 for /proc/irq/75/smp_affinity
p1p1 mask=8000 for /proc/irq/76/smp_affinity
Configuring p2p1
no rx vectors found on p2p1
no tx vectors found on p2p1
p2p1 mask=1 for /proc/irq/78/smp_affinity
p2p1 mask=2 for /proc/irq/79/smp_affinity
p2p1 mask=4 for /proc/irq/80/smp_affinity
p2p1 mask=8 for /proc/irq/81/smp_affinity
p2p1 mask=10 for /proc/irq/82/smp_affinity
p2p1 mask=20 for /proc/irq/83/smp_affinity
p2p1 mask=40 for /proc/irq/84/smp_affinity
p2p1 mask=80 for /proc/irq/85/smp_affinity
p2p1 mask=100 for /proc/irq/86/smp_affinity
p2p1 mask=200 for /proc/irq/87/smp_affinity
p2p1 mask=400 for /proc/irq/88/smp_affinity
p2p1 mask=800 for /proc/irq/89/smp_affinity
p2p1 mask=1000 for /proc/irq/90/smp_affinity
p2p1 mask=2000 for /proc/irq/91/smp_affinity
p2p1 mask=4000 for /proc/irq/92/smp_affinity
# modinfo pf_ring
filename: /lib/modules/2.6.32-431.23.3.el6.x86_64/kernel/net/pf_ring/pf_ring.ko
alias: net-pf-27
description: Packet capture acceleration and analysis
author: Luca Deri <[email protected]>
license: GPL
srcversion: 9205E6179CCDF3C754F2122
depends:
vermagic: 2.6.32-431.23.3.el6.x86_64 SMP mod_unload modversions
parm: min_num_slots:Min number of ring slots (uint)
parm: perfect_rules_hash_size:Perfect rules hash size (uint)
parm: transparent_mode:0=standard Linux, 1=direct2pfring+transparent,
2=direct2pfring+non transparent. For 1 and 2 you need to use a PF_RING aware
driver (uint)
parm: enable_debug:Set to 1 to enable PF_RING debug tracing into the
syslog (uint)
parm: enable_tx_capture:Set to 1 to capture outgoing packets (uint)
parm: enable_frag_coherence:Set to 1 to handle fragments (flow
coherence) in clusters (uint)
parm: enable_ip_defrag:Set to 1 to enable IP defragmentation(only rx
traffic is defragmentead) (uint)
parm: quick_mode:Set to 1 to run at full speed but with upto one
socket per interface (uint)
_______________________________________________
Ntop-misc mailing list
[email protected]
http://listgateway.unipi.it/mailman/listinfo/ntop-misc