No go on that one either, I'm pretty well scratching my head over here.
I'm going through and recompiling and reinstalling everything to make
sure that it is all using the proper libpcap.
Anything special I would need to do on exporting that environment
variable? I just added it to /etc/profile and logged out and back in.
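One possible gotcha, in case it helps: /etc/profile is only read by login shells, so a snort started from an init script or cron won't inherit the variable. A minimal sketch of setting it directly in the shell that launches the process (the variable name is the one from the patched pcap-linux.c quoted further down):

```shell
# Set the clustering mode in the same shell that will launch the capture
# process; the patched libpcap reads it via getenv() at pcap open time.
export PCAP_PF_RING_USE_CLUSTER_PER_FLOW_5_TUPLE=1

# Confirm it is really in the environment before starting snort:
env | grep PCAP_PF_RING
```

If snort runs as a daemon, putting the export into its init script (or a wrapper) guarantees the process actually sees it.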
Here's snort:
# ldd `which snort`
linux-vdso.so.1 => (0x00007fff015ff000)
libdnet.1 => /usr/local/lib/libdnet.1 (0x00007f4dad6f6000)
libpcre.so.0 => /lib64/libpcre.so.0 (0x00007f4dad4c4000)
libnsl.so.1 => /lib64/libnsl.so.1 (0x00007f4dad2ab000)
libm.so.6 => /lib64/libm.so.6 (0x00007f4dad027000)
libdl.so.2 => /lib64/libdl.so.2 (0x00007f4dace22000)
libsfbpf.so.0 => /usr/local/lib/libsfbpf.so.0 (0x00007f4dacbfd000)
*libpcap.so.1 => /usr/local/lib/libpcap.so.1 (0x00007f4dac9a8000)*
libz.so.1 => /lib64/libz.so.1 (0x00007f4dac791000)
libpthread.so.0 => /lib64/libpthread.so.0 (0x00007f4dac574000)
libc.so.6 => /lib64/libc.so.6 (0x00007f4dac1e1000)
/lib64/ld-linux-x86-64.so.2 (0x00007f4dad907000)
The libpcap is the one compiled/installed from the PF_RING/userland
folder. I can't help but think it's something further down the chain,
since I can see the traffic when I use the DNA-based drivers.
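For what it's worth, ldd only shows what the loader would resolve right now; a snort process that is already running keeps whatever it mapped at startup. A quick sketch to check the live process and the loader cache (the pgrep usage is just one way to find the PID):

```shell
# Inspect which libpcap a running snort actually has mapped:
pid=$(pgrep -n snort)
grep libpcap "/proc/$pid/maps"

# And check which libpcap the loader cache prefers:
ldconfig -p | grep libpcap
```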
RG
On 03/19/2013 11:21 AM, Tritium Cat wrote:
You might have to change the clustering mechanism, because the default
method does not include the VLAN.
To change the clustering method, set an environment variable
recognized by the libpcap patched for PF_RING.
From userland/libpcap-1.1.1-ring/pcap-linux.c:

    if(getenv("PCAP_PF_RING_USE_CLUSTER_PER_FLOW"))
      pfring_set_cluster(handle->ring, atoi(clusterId), cluster_per_flow);
    else if(getenv("PCAP_PF_RING_USE_CLUSTER_PER_FLOW_2_TUPLE"))
      pfring_set_cluster(handle->ring, atoi(clusterId), cluster_per_flow_2_tuple);
    else if(getenv("PCAP_PF_RING_USE_CLUSTER_PER_FLOW_4_TUPLE"))
      pfring_set_cluster(handle->ring, atoi(clusterId), cluster_per_flow_4_tuple);
    else if(getenv("PCAP_PF_RING_USE_CLUSTER_PER_FLOW_TCP_5_TUPLE"))
      pfring_set_cluster(handle->ring, atoi(clusterId), cluster_per_flow_tcp_5_tuple);
    else if(getenv("PCAP_PF_RING_USE_CLUSTER_PER_FLOW_5_TUPLE"))
      pfring_set_cluster(handle->ring, atoi(clusterId), cluster_per_flow_5_tuple);
Choose PCAP_PF_RING_USE_CLUSTER_PER_FLOW_5_TUPLE to set
cluster_per_flow_5_tuple.
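To sketch what that looks like in practice: every consumer sets the same mode (and, presumably, the same cluster id) before starting, and PF_RING then delivers each flow to exactly one of them. Note that PCAP_PF_RING_CLUSTER_ID is my assumption for how clusterId gets filled; check the getenv() calls in pcap-linux.c for the exact name.

```shell
# Both capture processes join the same cluster with per-5-tuple balancing.
# PCAP_PF_RING_CLUSTER_ID is an assumed variable name -- verify it against
# the getenv() calls in the patched pcap-linux.c.
export PCAP_PF_RING_USE_CLUSTER_PER_FLOW_5_TUPLE=1
export PCAP_PF_RING_CLUSTER_ID=10

tcpdump -i p1p1 -n &   # consumer 1
tcpdump -i p1p1 -n &   # consumer 2
wait
```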
From kernel/pf_ring.c:

    /* ********************************** */

    static u_int hash_pkt_cluster(ring_cluster_element *cluster_ptr,
                                  struct pfring_pkthdr *hdr)
    {
      u_int idx;

      switch(cluster_ptr->cluster.hashing_mode) {
      case cluster_round_robin:
        idx = cluster_ptr->cluster.hashing_id++;
        break;

      case cluster_per_flow_2_tuple:
        idx = hash_pkt_header(hdr, HASH_PKT_HDR_RECOMPUTE
                              | HASH_PKT_HDR_MASK_PORT
                              | HASH_PKT_HDR_MASK_PROTO
                              | HASH_PKT_HDR_MASK_VLAN);
        break;

      case cluster_per_flow_4_tuple:
        idx = hash_pkt_header(hdr, HASH_PKT_HDR_RECOMPUTE
                              | HASH_PKT_HDR_MASK_PROTO
                              | HASH_PKT_HDR_MASK_VLAN);
        break;

      case cluster_per_flow_tcp_5_tuple:
        if(((hdr->extended_hdr.parsed_pkt.tunnel.tunnel_id == NO_TUNNEL_ID) ?
            hdr->extended_hdr.parsed_pkt.l3_proto :
            hdr->extended_hdr.parsed_pkt.tunnel.tunneled_proto) == IPPROTO_TCP)
          idx = hash_pkt_header(hdr, HASH_PKT_HDR_RECOMPUTE
                                | HASH_PKT_HDR_MASK_VLAN); /* 5 tuple */
        else
          idx = hash_pkt_header(hdr, HASH_PKT_HDR_RECOMPUTE
                                | HASH_PKT_HDR_MASK_PORT
                                | HASH_PKT_HDR_MASK_PROTO
                                | HASH_PKT_HDR_MASK_VLAN); /* 2 tuple */
        break;

      case cluster_per_flow_5_tuple:
        idx = hash_pkt_header(hdr, HASH_PKT_HDR_RECOMPUTE
                              | HASH_PKT_HDR_MASK_VLAN);
        break;

      case cluster_per_flow:
      default:
        idx = hash_pkt_header(hdr, 0);
        break;
      }

      return(idx % cluster_ptr->cluster.num_cluster_elements);
    }
--TC
On Tue, Mar 19, 2013 at 8:39 AM, Ryan <[email protected]> wrote:
Ack, wasn't sending this back to the list.
No such luck for me on disabling the kernel config directives:
# cat /boot/config-`uname -r` | grep -i vlan
CONFIG_BRIDGE_EBT_VLAN=m
CONFIG_VLAN_8021Q=n
CONFIG_VLAN_8021Q_GVRP=n
CONFIG_MACVLAN=m
CONFIG_R8169_VLAN=y
After a reboot, still no go. I've seen reports that some older
versions of the ixgbe driver don't properly disable VLAN tag
stripping, and I wonder if the PF_RING-aware driver was built on one
of those versions. It doesn't make sense that ethtool can't modify
the offload settings, though.
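One quick way to check that theory might be to compare the driver version the PF_RING-aware build reports against the module on disk (ethtool -i and modinfo are standard tools; which version strings you see depends on your tree):

```shell
# Driver name/version bound to the capture interface right now:
ethtool -i p1p1

# Version of the ixgbe module on disk (may differ from the loaded one):
modinfo ixgbe | grep -E '^(filename|version):'
```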
On 03/19/2013 09:37 AM, Josip Djuricic wrote:
Check that it's not compiled into the kernel.
We had the same issue with our application; until we unloaded the
kernel module it didn't work.
Perhaps it is different in your case.

From: Garrett, Ryan
Sent: 19.3.2013. 22:34
To: Josip Djuricic
Subject: RE: [Ntop-misc] Using PF_RING Aware Drivers with
VLAN Trunk
No VLAN module loaded:
# rmmod 8021q
ERROR: Module 8021q does not exist in /proc/modules
Maybe I should try adding it.
Thanks
-----Original Message-----
From: Josip Djuricic [mailto:[email protected]]
Sent: Tuesday, March 19, 2013 9:32 AM
To: Garrett, Ryan; [email protected]
Subject: RE: [Ntop-misc] Using PF_RING Aware Drivers with
VLAN Trunk
Try unloading the vlan module; that solved the issue for us.

From: Ryan
Sent: 19.3.2013. 22:27
To: [email protected]
Subject: [Ntop-misc] Using PF_RING Aware Drivers with VLAN Trunk

I'm
running into an interesting issue, and I was curious whether
anyone else has run into it.
I can run the DNA drivers and pull traffic into
Snort/tcpdump without an issue, but if I try to run the
PF_RING-aware drivers for my ixgbe card I get no traffic. I'm pretty
sure it has to do with VLAN tagging; I just haven't been able to
figure out what exactly. I've tried using ethtool to disable
'rxvlan', but it isn't able to make the changes.
Has anyone else run into this? I really don't want to have to make a
tagged interface for each VLAN, and the DNA drivers won't work for us
since we'll be pushing out to multiple IDS applications and only one
application can exist on a queue, although I may just be
misunderstanding how the DNA drivers work.
Here's some output from ifconfig and ethtool:
# ifconfig p1p1
Link encap:Ethernet HWaddr 00:xx
UP BROADCAST RUNNING PROMISC MULTICAST
MTU:1500 Metric:1
RX packets:1536044361 errors:0 dropped:0
overruns:0 frame:0
TX packets:29 errors:0 dropped:0 overruns:0
carrier:0
collisions:0 txqueuelen:1000
RX bytes:1414833145971 (1.2 TiB) TX
bytes:3582 (3.4 KiB)
# ethtool -k p1p1
Features for p1p1:
rx-checksumming: on
tx-checksumming: on
scatter-gather: on
tcp-segmentation-offload: on
udp-fragmentation-offload: off
generic-segmentation-offload: on
generic-receive-offload: on
large-receive-offload: on
rx-vlan-offload: on
tx-vlan-offload: on
ntuple-filters: off
receive-hashing: on
# ethtool -d p1p1 | grep VLAN
0x05088: VLNCTRL (VLAN Control register) 0x00008100
VLAN Mode: disabled
VLAN Filter: disabled
0x05AC0: IMIRVP (Immed. Interr. Rx VLAN Prior.)
0x00000000
# ethtool -K p1p1 rxvlan off
Could not change any device features
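One thing worth checking when -K refuses: newer ethtool builds mark features the driver won't let userspace toggle with [fixed] in the -k listing. If rx-vlan-offload shows up as fixed, the driver itself is refusing the change, and no ethtool invocation will flip it (the sample output below is illustrative, not from this box):

```shell
# List the vlan-related features; a trailing [fixed] means the driver
# does not allow the feature to be changed from userspace.
ethtool -k p1p1 | grep -i vlan
# e.g. (illustrative):
#   rx-vlan-offload: on [fixed]
```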
Anyone have any ideas on this?
Thanks
_______________________________________________
Ntop-misc mailing list
[email protected]
http://listgateway.unipi.it/mailman/listinfo/ntop-misc