Hello,
I am seeing some odd ring-count behavior when starting and stopping many
instances of tcpdump and/or snort. Basically, after all of the processes
have been killed, pf_ring/info still reports open rings.
A transcript of the problem:
-------------------------BEGIN
$ cat /proc/net/pf_ring/info
PF_RING Version : 6.0.1 ($Revision: $)
Total rings : 0
Standard (non DNA) Options
Ring slots : 4096
Slot version : 15
Capture TX : Yes [RX+TX]
IP Defragment : No
Socket Mode : Standard
Transparent mode : Yes [mode 0]
Total plugins : 0
Cluster Fragment Queue : 0
Cluster Fragment Discard : 0
$ ~/test.sh (contents listed below)
$ pgrep tcpdump | wc -l
0
$ ls -l /proc/net/pf_ring/*eth*
ls: cannot access /proc/net/pf_ring/*eth*: No such file or directory
$ grep rings /proc/net/pf_ring/info
Total rings : 30
---------------------------------END
So there are no longer any tcpdump processes running, yet pf_ring/info
reports 30 open rings. Is this expected behavior? Is there a
preferred way to kill a process that is holding a ring open? Short of
rmmod & insmod'ing the pf_ring.ko module, is there another way to reset the
ring count?
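One thing I have not ruled out: killall returns as soon as SIGTERM is sent, so reading pf_ring/info right afterwards can race the kernel's socket teardown. A small helper along these lines (hypothetical, not part of PF_RING; assumes procps pgrep/pkill) waits until the processes are actually gone before checking:

```shell
#!/bin/sh
# term_and_wait NAME: send SIGTERM to every process named exactly NAME,
# then poll until none remain, so a subsequent look at
# /proc/net/pf_ring/info is not racing the kernel's socket teardown.
term_and_wait() {
    pkill -x "$1" 2>/dev/null
    while pgrep -x "$1" >/dev/null 2>&1; do
        sleep 0.2
    done
}
```

Running term_and_wait tcpdump (as root) and then grep rings /proc/net/pf_ring/info would show whether the count drops once the processes have truly exited.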
I see there is a MAX_NUM_RING_SOCKETS limit in pf_ring.h. Is a potential
ramification of this that, over time, PF_RING will stop opening rings for
new processes?
For reference, this test was run on an Ubuntu 12.04.4 LTS 64-bit system,
with PF_RING 6.0.1 compiled against the 3.2.0-65-generic kernel. The
problem also manifests when tcpdump is swapped out for snort in the test
script.
------------------------------Contents of test.sh
#!/bin/sh
for loop in $(seq 1 60)
do
    sudo killall tcpdump
    for proc in $(seq 1 5)
    do
        sudo tcpdump -i eth3 -w /dev/null &
    done
    sleep 1
done
sudo killall tcpdump
-------------------------------End test.sh
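For scripting the manual check from the transcript, I have been using a small helper like the following (hypothetical, just a convenience wrapper around awk); it defaults to /proc/net/pf_ring/info but accepts any saved copy of the file:

```shell
#!/bin/sh
# ring_count [FILE]: print the "Total rings" value from a PF_RING info
# file (default /proc/net/pf_ring/info).  Hypothetical convenience for
# the manual "grep rings" check shown in the transcript above.
ring_count() {
    awk -F: '/Total rings/ { gsub(/[[:space:]]/, "", $2); print $2 }' \
        "${1:-/proc/net/pf_ring/info}"
}
```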
Thanks for any help.
Jason
_______________________________________________
Ntop-misc mailing list
[email protected]
http://listgateway.unipi.it/mailman/listinfo/ntop-misc