Re: [Ntop-misc] [Ntop] PFring 7.5.0 ZC unable to read packet when bpf filter is enabled

2020-11-25 Thread Alfredo Cardigliano
Hi Jatin
I replied on github, please keep using it for issues (and please do not
send the same request to multiple channels..)

Thank you
Alfredo
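
For reference, a filter like the one quoted below is applied to a PF_RING
socket via pfring_set_bpf_filter(). A minimal sketch, assuming the standard
pfring API (interface name hypothetical, most error handling omitted):

  #include <stdio.h>
  #include "pfring.h"

  int main(void) {
    /* open the capture device: caplen 1536, promiscuous mode */
    pfring *ring = pfring_open("zc:eno5", 1536, PF_RING_PROMISC);
    if (ring == NULL) return 1;

    /* compile and install the BPF filter on the socket */
    if (pfring_set_bpf_filter(ring, "udp and (port 2123 or port 2152)") != 0)
      fprintf(stderr, "pfring_set_bpf_filter failed\n");

    pfring_enable_ring(ring);
    /* ... pfring_recv() loop ... */
    pfring_close(ring);
    return 0;
  }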

> On 25 Nov 2020, at 12:53, Jatin Sahu  wrote:
> 
> Hi,
> 
> I am using the PF_RING ZC libraries to read VLAN-tagged GTP traffic from an Intel 
> card.
> But when the BPF filter is enabled, it is not able to read any traffic.
> As soon as I disable the BPF filter, it starts reading packets.
> 
> Sharing the details and the sample pcap captured by tcpdump.
> 
> Please advise on a resolution, and let us know if any other details are needed.
> 
> BPF filter:
> (udp and (port 2123 or port 2152 or port 3386 or (ip[6:2] & 0x1fff != 0))) or 
> (vlan and udp and (port 2123 or port 2152 or port 3386 or (ip[6:2] & 0x1fff 
> != 0)))
> 
> Server:
> cat /etc/redhat-release
> Red Hat Enterprise Linux Server release 7.7 (Maipo)
> 
> Linux version 3.10.0-1062.el7.x86_64 
> (mockbu...@x86-040.build.eng.bos.redhat.com 
> ) (gcc version 4.8.5 
> 20150623 (Red Hat 4.8.5-39) (GCC) ) #1 
>  SMP Thu Jul 18 20:25:13 UTC 2019
> 
> Card:
> *-network:0
> description: Ethernet interface
> product: Ethernet Controller X710 for 10GbE SFP+
> vendor: Intel Corporation
> physical id: 0
bus info: pci@0000:5d:00.0
> logical name: eno5
> version: 01
> serial: 48:df:37:85:ae:40
> size: 10Gbit/s
> width: 64 bits
> clock: 33MHz
> capabilities: pm msi msix pciexpress vpd bus_master cap_list rom ethernet 
> physical fibre autonegotiation
> configuration: autonegotiation=off broadcast=yes driver=i40e 
> driverversion=2.8.10-k duplex=full firmware=10.51.5 latency=0 link=yes 
> multicast=yes port=fibre speed=10Gbit/s
> resources: irq:34 memory:e700-e7ff memory:e900-e9007fff 
> memory:e908-e90f memory:ca0-cdf 
> memory:cf0-cff
> 
> PFRing:
> 
> cat /proc/net/pf_ring/info
> PF_RING Version : 7.5.0 (unknown)
> Total rings : 1
> 
> Standard (non ZC) Options
> Ring slots : 409600
> Slot version : 17
> Capture TX : No [RX only]
> IP Defragment : No
> Socket Mode : Standard
> 
> [root@RCEM-Probe2 ~]# cat /proc/net/pf_ring/dev/eno5/info
> Name: eno5
> Index: 16
> Address: 48:DF:37:85:AE:40
> Polling Mode: NAPI/ZC
> Type: Ethernet
> Family: Intel i40e
> TX Queues: 4
> RX Queues: 4
> Num RX Slots: 512
> Num TX Slots: 512
> 
> ___
> Ntop mailing list
> n...@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop

___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] enable RSS cause kernel crashed

2020-05-11 Thread Alfredo Cardigliano
Hi Wang
did you update from a previous pf_ring version that was working with no crashes?
Can I see the zcount command line which is causing this?
It is weird that this is happening with the X520 and not with the 82599, as they
should behave the same; we need to investigate. What 7.6 version are you running exactly?
Could you provide cat /proc/net/pf_ring/info?

Thank you
Alfredo

> On 11 May 2020, at 04:44, Wang  wrote:
> 
> Hello,
> I installed PF_RING ZC 7.6.0 on a CentOS 7 box. When I configured the ixgbe 
> driver with the RSS queue count set to 8, the CentOS kernel would crash after zcount 
> had been running for hours. This happens only when more than one RSS queue is 
> configured, and the more queues configured, the more frequently it 
> happens.
> The kernel is 3.10.0-1062.9.1.el7.x86_64 and the NIC is an Intel X520 dual 
> port. 
> I also installed an Intel 82599 dual-port card and it works fine; it appears there 
> is something wrong with the X520 and the PF_RING ZC driver.
> 
> Any comment? 
> 
> Regards,
> Wang
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc

___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc


Re: [Ntop-misc] PF_RING 7.5.0 on Ubuntu 18.04

2020-03-16 Thread Alfredo Cardigliano
Hi Hovsep
please provide the zbalance_ipc and pfcount command lines
and output (please do not run any other application in this test)

Regards
Alfredo

> On 13 Mar 2020, at 15:15, Hovsep Levi  wrote:
> 
> Here's what I've tried.
> 
> The interface driver looks ok, ixgbe cpu_affinity values use a CoreID on the 
> NUMA node.  RSS is 1,1
> 
> When zbalance_ipc is running you can see the point where the data pipeline to 
> the userspace app is established.  Even so, multiple userspace apps cannot 
> access the data stream.  Suricata is complaining about invalid ioctls on each 
> ZC interface.  PF_RING tcpdump doesn't work either.  Same for pfcount.
> 
> I suspected AppArmor although disabling it had no effect.  Kernel version 
> seems ok.
> 
> Not sure what else to try other than debugging.
> 
> 
> -Hovsep
> 
> 
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc

___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc


Re: [Ntop-misc] PF_RING 7.5.0 on Ubuntu 18.04

2020-02-03 Thread Alfredo Cardigliano
Hi Hovsep
are you able to capture from queues > 0 with pfcount?

Regards
Alfredo
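
(For reference, consumer queues of a zbalance_ipc cluster are opened with the
cluster-id syntax, e.g. pfcount -i zc:99@1 for queue 1 of cluster 99; the ids
here are hypothetical.)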

> On 31 Jan 2020, at 17:52, Hovsep Levi  wrote:
> 
> Hello !
> 
> 
> I have a problem to make PF_RING work on Ubuntu 18.04.
> 
> zbalance_ipc will start OK with multiple consumer queues.  Tcpdump cannot 
> read packets from the ZC queues > 0.  Reading from queue 0 yields only empty 
> packets, all zeros.
> 
> PF_RING libpcap/tcpdump is installed and is the only libpcap on the system.
> 
> What do you recommend I try ?
> 
> 
> Thanks,
> 
> Hovsep
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc

___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc


Re: [Ntop-misc] zc with RSS kill the management interface of my host

2020-01-22 Thread Alfredo Cardigliano
Please update to the latest package/code, as I pushed a fix that *could* address this

Alfredo

> On 21 Jan 2020, at 16:06, Wang  wrote:
> 
> sorry, but I do not have lspci installed on this host. What I know is that I 
> have 2 onboard 1Gb ports, one 2-port 82599em, and one 4-port Intel 1Gb NIC.
> I have installed a ZC license for p1p1. The management port is em1.
> The ifconfig and ethtool info are listed here:
> 
> [root@r510 ~]# ethtool -i p1p1
> driver: ixgbe
> version: 5.5.3
> firmware-version: 0x2b2c0001
> expansion-rom-version:
> bus-info: 0000:08:00.0
> supports-statistics: yes
> supports-test: yes
> supports-eeprom-access: yes
> supports-register-dump: yes
> supports-priv-flags: yes
> 
> 
> [root@r510 ~]# ethtool -i p1p2
> driver: ixgbe
> version: 5.5.3
> firmware-version: 0x2b2c0001
> expansion-rom-version:
> bus-info: 0000:08:00.1
> supports-statistics: yes
> supports-test: yes
> supports-eeprom-access: yes
> supports-register-dump: yes
> supports-priv-flags: yes
> 
> 
> [root@r510 ~]# ethtool -i em1
> driver: bnx2
> version: 2.2.6
> firmware-version: 6.0.1 bc 5.2.3 NCSI 2.0.10
> expansion-rom-version:
> bus-info: 0000:01:00.0
> supports-statistics: yes
> supports-test: yes
> supports-eeprom-access: yes
> supports-register-dump: yes
> supports-priv-flags: no
> 
> 
> [root@r510 ~]# ifconfig
> em1: flags=4163  mtu 1500
> inet 192.168.66.101  netmask 255.255.255.0  broadcast 192.168.66.255
> inet6 fe80::2001:70e3:7246:b725  prefixlen 64  scopeid 0x20
> ether 84:2b:2b:78:8a:62  txqueuelen 1000  (Ethernet)
> RX packets 26069  bytes 3177036 (3.0 MiB)
> RX errors 0  dropped 1  overruns 0  frame 0
> TX packets 5945  bytes 1443645 (1.3 MiB)
> TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
> 
> em2: flags=4099  mtu 1500
> ether 84:2b:2b:78:8a:63  txqueuelen 1000  (Ethernet)
> RX packets 0  bytes 0 (0.0 B)
> RX errors 0  dropped 0  overruns 0  frame 0
> TX packets 0  bytes 0 (0.0 B)
> TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
> 
> lo: flags=73  mtu 65536
> inet 127.0.0.1  netmask 255.0.0.0
> inet6 ::1  prefixlen 128  scopeid 0x10
> loop  txqueuelen 1000  (Local Loopback)
> RX packets 19  bytes 1737 (1.6 KiB)
> RX errors 0  dropped 0  overruns 0  frame 0
> TX packets 19  bytes 1737 (1.6 KiB)
> TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
> 
> p1p1: flags=4419  mtu 1500
> ether 48:f8:db:7e:da:5c  txqueuelen 1000  (Ethernet)
> RX packets 6322614433  bytes 3091202586919 (2.8 TiB)
> RX errors 0  dropped 0  overruns 0  frame 0
> TX packets 869  bytes 147082 (143.6 KiB)
> TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
> device memory 0xdd30-dd38
> 
> p1p2: flags=4099  mtu 1500
> ether 48:f8:db:7e:da:5d  txqueuelen 1000  (Ethernet)
> RX packets 0  bytes 0 (0.0 B)
> RX errors 0  dropped 0  overruns 0  frame 0
> TX packets 0  bytes 0 (0.0 B)
> TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
> device memory 0xdd38-dd40
> 
> p2p1: flags=4163  mtu 1500
> ether 00:1b:21:b0:5c:a8  txqueuelen 1000  (Ethernet)
> RX packets 287  bytes 98154 (95.8 KiB)
> RX errors 0  dropped 0  overruns 0  frame 0
> TX packets 0  bytes 0 (0.0 B)
> TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
> device memory 0xdebc-debd
> 
> p2p2: flags=4099  mtu 1500
> ether 00:1b:21:b0:5c:a9  txqueuelen 1000  (Ethernet)
> RX packets 0  bytes 0 (0.0 B)
> RX errors 0  dropped 0  overruns 0  frame 0
> TX packets 0  bytes 0 (0.0 B)
> TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
> device memory 0xdebe-debf
> 
> p2p3: flags=4099  mtu 1500
> ether 00:1b:21:b0:5c:ac  txqueuelen 1000  (Ethernet)
> RX packets 0  bytes 0 (0.0 B)
> RX errors 0  dropped 0  overruns 0  frame 0
> TX packets 0  bytes 0 (0.0 B)
> TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
> device memory 0xddbc-ddbd
> 
> p2p4: flags=4099  mtu 1500
> ether 00:1b:21:b0:5c:ad  txqueuelen 1000  (Ethernet)
> RX packets 0  bytes 0 (0.0 B)
> RX errors 0  dropped 0  overruns 0  frame 0
> TX packets 0  bytes 0 (0.0 B)
> TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
> device memory 0xddbe-ddbf
> 
> On Tue, Jan 21, 2020 at 6:08 PM Alfredo Cardigliano wrote:

Re: [Ntop-misc] zc with RSS kill the management interface of my host

2020-01-21 Thread Alfredo Cardigliano
Hi Wang
please provide:

1. lspci | grep Eth
2. ifconfig
3. ethtool -i {p1p1,p1p2,}

Thank you
Alfredo

> On 21 Jan 2020, at 08:44, Wang  wrote:
> 
> hi, 
> I have 2 capture interfaces (p1p1 and p1p2, 10Gb) and 1 management interface 
> in my server. After setting the number of RSS queues of p1p1 with 
> 'pf_ringcfg --configure-driver ixgbe --rss-queues 2', I tried to start two 
> pfcount instances to do some tests: 'pfcount -i zc:p1p1@0' and 'pfcount -i zc:p1p1@1'. 
> After the second pfcount started, I lost my SSH session with my server and 
> could not ping it either. Everything came back after several minutes of 
> waiting, without doing anything.
> My pf_ring version is 7.5.0; is there anything wrong with it?
> 
> thanks,
> wang
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc

___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] confused about ZC Load-Balancing

2020-01-20 Thread Alfredo Cardigliano
Do you mean between hw distribution with RSS and 
software distribution with zbalance_ipc?
The former provides higher performance, the latter
higher flexibility.

Alfredo
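
To make the two options concrete (illustrative command lines; interface name
and cluster id hypothetical):

  # hw distribution: RSS spreads flows across hw queues, read directly
  pfcount -i zc:eth1@0      (one consumer per RSS queue: @1, @2, ...)

  # sw distribution: zbalance_ipc feeds software queues of a ZC cluster
  zbalance_ipc -i zc:eth1 -c 99 -n 2 -m 1 -g 0
  pfcount -i zc:99@0        (and zc:99@1)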

> On 20 Jan 2020, at 17:30, Yong Wang  wrote:
> 
> is there any difference between the performance of these two?
> 
> Sent from my iPhone
> 
>> On Jan 21, 2020, at 12:24 AM, Alfredo Cardigliano  
>> wrote:
>> 
>> Correct. A zbalance_ipc queue is a software queue belonging to the
>> cluster created by zbalance_ipc. You can use zcount to open a RSS queue.
>> 
>> Alfredo
>> 
>>> On 20 Jan 2020, at 17:22, Yong Wang wrote:
>>> 
>>> so is this because the zc queue created by zbalance_ipc is a software 
>>> queue? If I use RSS to do the load balancing, will zcount work?
>>> 
>>> Sent from my iPhone
>>> 
>>>> On Jan 21, 2020, at 12:07 AM, Alfredo Cardigliano wrote:
>>>> 
>>>> Hi
>>>> please use zcount_ipc to attach to a ZC queue created by zbalance_ipc,
>>>> the zcount example you are using is meant to be used to capture from
>>>> an interface (it creates a cluster).
>>>> 
>>>> Alfredo
>>>> 
>>>>> On 20 Jan 2020, at 17:04, Wang wrote:
>>>>> 
>>>>> Hi there,
>>>>> 
>>>>> I am using zbalance_ipc to spread packets to multiple applications, say 
>>>>> zcount for example. I got this error from zcount:
>>>>> pfring_zc_create_cluster error [Socket operation on non-socket] Please 
>>>>> check that pf_ring.ko is loaded and hugetlb fs is mounted
>>>>> 
>>>>> When I switched to pfcount -i zc:10@0 and pfcount -i zc:10@1, it worked. 
>>>>> So, is this still zero copy? Does this mean I cannot use the ZC API with 
>>>>> zbalance_ipc?
>>>>> 
>>>>> Thanks
>>>>> ___
>>>>> Ntop-misc mailing list
>>>>> Ntop-misc@listgateway.unipi.it
>>>>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>>>> ___
>>>> Ntop-misc mailing list
>>>> Ntop-misc@listgateway.unipi.it
>>>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>>> ___
>>> Ntop-misc mailing list
>>> Ntop-misc@listgateway.unipi.it
>>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>> 
>> ___
>> Ntop-misc mailing list
>> Ntop-misc@listgateway.unipi.it
>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc

___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] confused about ZC Load-Balancing

2020-01-20 Thread Alfredo Cardigliano
Correct. A zbalance_ipc queue is a software queue belonging to the
cluster created by zbalance_ipc. You can use zcount to open a RSS queue.

Alfredo

> On 20 Jan 2020, at 17:22, Yong Wang  wrote:
> 
> so is this because the zc queue created by zbalance_ipc is a software queue? 
> If I use RSS to do the load balancing, will zcount work?
> 
> Sent from my iPhone
> 
>> On Jan 21, 2020, at 12:07 AM, Alfredo Cardigliano  
>> wrote:
>> 
>> Hi
>> please use zcount_ipc to attach to a ZC queue created by zbalance_ipc,
>> the zcount example you are using is meant to be used to capture from
>> an interface (it creates a cluster).
>> 
>> Alfredo
>> 
>>> On 20 Jan 2020, at 17:04, Wang wrote:
>>> 
>>> Hi there,
>>> 
>>> I am using zbalance_ipc to spread packets to multiple applications, say 
>>> zcount for example. I got this error from zcount:
>>> pfring_zc_create_cluster error [Socket operation on non-socket] Please 
>>> check that pf_ring.ko is loaded and hugetlb fs is mounted
>>> 
>>> When I switched to pfcount -i zc:10@0 and pfcount -i zc:10@1, it worked. 
>>> So, is this still zero copy? Does this mean I cannot use the ZC API with zbalance_ipc?
>>> 
>>> Thanks
>>> ___
>>> Ntop-misc mailing list
>>> Ntop-misc@listgateway.unipi.it
>>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>> 
>> ___
>> Ntop-misc mailing list
>> Ntop-misc@listgateway.unipi.it
>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc

___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] confused about ZC Load-Balancing

2020-01-20 Thread Alfredo Cardigliano
Hi
please use zcount_ipc to attach to a ZC queue created by zbalance_ipc,
the zcount example you are using is meant to be used to capture from
an interface (it creates a cluster).

Alfredo
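
A minimal sketch of the intended pairing (cluster/queue ids and interface name
hypothetical; options as in the bundled examples):

  zbalance_ipc -i zc:eth1 -c 99 -n 2 -m 1   # create cluster 99 with 2 sw queues
  zcount_ipc -c 99 -i 0                     # attach to queue 0 of cluster 99
  zcount_ipc -c 99 -i 1                     # attach to queue 1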

> On 20 Jan 2020, at 17:04, Wang  wrote:
> 
> Hi there,
> 
> I am using zbalance_ipc to spread packets to multiple applications, say 
> zcount for example. I got this error from zcount:
> pfring_zc_create_cluster error [Socket operation on non-socket] Please check 
> that pf_ring.ko is loaded and hugetlb fs is mounted
> 
> When I switched to pfcount -i zc:10@0 and pfcount -i zc:10@1, it worked. So, 
> is this still zero copy? Does this mean I cannot use the ZC API with zbalance_ipc?
> 
> Thanks
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc

___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] Query related to disk2n + ZC license

2020-01-09 Thread Alfredo Cardigliano
Hi Chandrika
unfortunately it is not possible to generate a license for an old pf_ring 
version,
you need to renew the disk2n maintenance..

Alfredo

> On 9 Jan 2020, at 12:00, Chandrika Gautam  
> wrote:
> 
> HI Alfredo, 
> 
> Are you saying that we will need to upgrade disk2n to the latest version? But 
> that will require renewal of the disk2n license as well, right?
> Can't we generate a license for the old version of pf_ring zc so that we can use the 
> existing disk2n application?
> 
> Regards.
> Chandrika
> 
> On Thu, Jan 9, 2020 at 4:24 PM Alfredo Cardigliano wrote:
> Hi Chandrika
> your application is too old to support latest licenses, a software update is 
> required..
> 
> Regards
> Alfredo
> 
>> On 9 Jan 2020, at 11:44, Chandrika Gautam wrote:
>> 
>> Hi Alfredo &Team, 
>> We have a disk2n (v.2.7.160920) license and a Pfring_ZC license installed on the 
>> same server. We have installed one more new 10G NIC on the same server, 
>> we have one spare pfring_zc license, and we want to generate and install a 
>> license on the same server for the already installed disk2n version (v.2.7.160920).
>> 
>> We receive the below error when we run disk2n on the new NIC interface:
>> 
>> #
>> # ERROR: You do not seem to have a valid PF_RING ZC license 6.5.0.160912 for 
>> ens3f0 [Intel 10 Gbit ixgbe 82599-based]
>> # ERROR: Please get one at http://shop.ntop.org/.
>> #
>> # We're now working in demo mode with packet capture and  
>> # transmission limited to 5 minutes
>> #
>> 
>> We feel that we should be generating a license for this (6.5.0.160912) 
>> version of pfring_zc. Please confirm so that we can go ahead with license 
>> generation.
>> 
>> 
>> Below are the details of the rpms installed for pfring_zc and disk2n:
>> 
>> # rpm -qa | grep -i pf
>> pfring-dkms-6.5.0-832.noarch
>> pfring-6.5.0-832.x86_64
>> 
>> # rpm -qa | grep -i disk
>> n2disk-2.7.160920-4695.x86_64
>> 
>> # disk2n -v
>> 08/Jan/2020 13:42:07 [disk2n.c:2319] WARNING: Some mandatory parameters are 
>> missing
>> Welcome to disk2n v.2.7.160920 (r4695) [Westmere]
>> 
>> Regards,
>> Chandrika
>> ___
>> Ntop-misc mailing list
>> Ntop-misc@listgateway.unipi.it
>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc

___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] Query related to disk2n + ZC license

2020-01-09 Thread Alfredo Cardigliano
Hi Chandrika
your application is too old to support latest licenses, a software update is 
required..

Regards
Alfredo

> On 9 Jan 2020, at 11:44, Chandrika Gautam  
> wrote:
> 
> Hi Alfredo &Team, 
> We have a disk2n (v.2.7.160920) license and a Pfring_ZC license installed on the same 
> server. We have installed one more new 10G NIC on the same server, we 
> have one spare pfring_zc license, and we want to generate and install a license on the 
> same server for the already installed disk2n version (v.2.7.160920).
> 
> We receive the below error when we run disk2n on the new NIC interface:
> 
> #
> # ERROR: You do not seem to have a valid PF_RING ZC license 6.5.0.160912 for 
> ens3f0 [Intel 10 Gbit ixgbe 82599-based]
> # ERROR: Please get one at http://shop.ntop.org/.
> #
> # We're now working in demo mode with packet capture and  
> # transmission limited to 5 minutes
> #
> 
> We feel that we should be generating a license for this (6.5.0.160912) version 
> of pfring_zc. Please confirm so that we can go ahead with license 
> generation.
> 
> 
> Below are the details of the rpms installed for pfring_zc and disk2n:
> 
> # rpm -qa | grep -i pf
> pfring-dkms-6.5.0-832.noarch
> pfring-6.5.0-832.x86_64
> 
> # rpm -qa | grep -i disk
> n2disk-2.7.160920-4695.x86_64
> 
> # disk2n -v
> 08/Jan/2020 13:42:07 [disk2n.c:2319] WARNING: Some mandatory parameters are 
> missing
> Welcome to disk2n v.2.7.160920 (r4695) [Westmere]
> 
> Regards,
> Chandrika
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc

___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] how to get the timestamp of packets recv by pfring zc driver

2020-01-08 Thread Alfredo Cardigliano
Hi Wang
the ZC library does not compute the timestamp by default for max performance,
please add the PF_RING_ZC_DEVICE_SW_TIMESTAMP flag to pfring_zc_open_device()

Regards
Alfredo
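
A minimal sketch of where the flag goes (assuming a ZC cluster created as in
zcount.c; device name hypothetical):

  pfring_zc_queue *zq = pfring_zc_open_device(
      cluster,                        /* pfring_zc_cluster created earlier */
      "p1p1",                         /* capture device */
      rx_only,                        /* queue mode */
      PF_RING_ZC_DEVICE_SW_TIMESTAMP  /* fill buffer->ts on each packet */);

After this, buffer->ts.tv_sec / buffer->ts.tv_nsec are populated on receive.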

> On 8 Jan 2020, at 11:04, Wang  wrote:
> 
> Hi, there, 
> 
> I found that the ts is always set to 0 if the packet is received by a ZC driver. For 
> example, in zcount.c:
> 
>  if (buffer->ts.tv_nsec)
>    printf("[%u.%u] [hash=%08X] ", buffer->ts.tv_sec, buffer->ts.tv_nsec,
>           buffer->hash);
> 
> ts.tv_nsec is always set to 0 if I open zc:p1p1.
> So, the question is: how can I get the timestamp if I cannot get it from 
> buffer->ts?
> 
> Thanks,
> Wang
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc

___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc


Re: [Ntop-misc] any drop statistics for pfring zc driver

2020-01-02 Thread Alfredo Cardigliano
Hi Wang
please note that ZC is a kernel-bypass technology: the kernel module does not
have visibility of drop counters on the interface, so you should look at 
application
stats (the application can read drop counters from the ZC interface). This
said, our applications export stats to /proc/net/pf_ring/stats/* through the 
socket.

Regards
Alfredo
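
For example, an application can read its own counters with pfring_zc_stats()
(a fragment against the ZC API; zq assumed to be an open pfring_zc_queue):

  pfring_zc_stat stats;
  if (pfring_zc_stats(zq, &stats) == 0)
    printf("recv=%lu drop=%lu sent=%lu\n",
           (unsigned long) stats.recv,
           (unsigned long) stats.drop,
           (unsigned long) stats.sent);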

> On 2 Jan 2020, at 10:36, Wang  wrote:
> 
> I cannot find any statistics on packets dropped by pf_ring under 
> /proc/net/pf_ring anymore. It's a little different from the non-ZC version. 
> Where can I find them?
> 
> Thanks,
> 
> Wang
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc

___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc


Re: [Ntop-misc] help with packet dropping when bursting traffic

2019-12-29 Thread Alfredo Cardigliano
Hi Wang
in this case you can use any of the 2 numa nodes.

Alfredo

> On 29 Dec 2019, at 04:20, Yong Wang  wrote:
> 
> Thank you, it worked and now there are no drops anymore.
> As to the output of my lstopo, p1p1 is connected to a PCIBridge, so which 
> NUMA node should I bind the ZC driver and my app to?
> 
> Many thanks,
> Wang
> 
> Sent from my iPhone
> 
>> On Dec 27, 2019, at 5:29 PM, Alfredo Cardigliano wrote:
>> 
>> Hi
>> it seems you are not actually enabling ZC capture, please add the zc: prefix 
>> to the interface name (e.g. -i zc:p1p1)
>> 
>> Regards
>> Alfredo
>> 
>>> On 25 Dec 2019, at 06:46, Wang wrote:
>>> 
>>> Hi Alfredo, sorry for the late reply. I have been trying to solve it but 
>>> unfortunately it doesn't work so far. So, here is some updated info for you:
>>> 
>>> what happened:
>>> about 2Gbps captured by a 10Gb port, zcount reported no drops, ethtool -S 
>>> reported no rx_dropped, but ip -s link reports more than 300K packets 
>>> dropped.
>>> 
>>> -OS:
>>> centos 7, kernel 3.10 
>>> 
>>> -capture port: 
>>> 82599EB based dual-port card
>>> 
>>> -command line:
>>> zcount -i p1p1
>>> 
>>> - cat /proc/net/pf_ring/dev//info
>>> [root@r510 p1p1]# cat info
>>> Name: p1p1
>>> Index:10
>>> Address:  48:F8:DB:7E:DA:5C
>>> Polling Mode: NAPI/ZC
>>> Type: Ethernet
>>> Family:   Intel ixgbe 82599
>>> TX Queues:1
>>> RX Queues:1
>>> Num RX Slots: 32768
>>> Num TX Slots: 32768
>>> 
>>> 
>>> [root@r510 pf_ring]# cat 31372-p1p1.120
>>> Bound Device(s): p1p1
>>> Active : 1
>>> Breed  : Standard
>>> Appl. Name : pfring-zc-99-p1p1
>>> Socket Mode: RX only
>>> Capture Direction  : RX only
>>> Sampling Rate  : 1
>>> Filtering Sampling Rate: 0
>>> IP Defragment  : No
>>> BPF Filtering  : Disabled
>>> Sw Filt Hash Rules : 0
>>> Sw Filt WC Rules   : 0
>>> Sw Filt Hash Match : 0
>>> Sw Filt Hash Miss  : 0
>>> Sw Filt Hash Filtered  : 0
>>> Hw Filt Rules  : 0
>>> Poll Pkt Watermark : 128
>>> Num Poll Calls : 0
>>> Poll Watermark Timeout : 0
>>> Channel Id Mask: 0x
>>> VLAN Id: 65535
>>> Slot Version   : 17 [7.5.0]
>>> Min Num Slots  : 65538
>>> Bucket Len : 1518
>>> Slot Len   : 1568 [bucket+header]
>>> Tot Memory : 102772736
>>> Tot Packets: 20306173
>>> Tot Pkt Lost   : 0
>>> Tot Insert : 20306173
>>> Tot Read   : 20306172
>>> Insert Offset  : 37222704
>>> Remove Offset  : 37220768
>>> Num Free Slots : 65536
>>> Reflect: Fwd Ok: 0
>>> Reflect: Fwd Errors: 0
>>> 
>>> 
>>> 
>>> - cat /proc/cpuinfo
>>> 24-core CPU; this is the last one, all the others look like this:
>>> 
>>> processor   : 23
>>> vendor_id   : GenuineIntel
>>> cpu family  : 6
>>> model   : 44
>>> model name  : Intel(R) Xeon(R) CPU   X5675  @ 3.07GHz
>>> stepping: 2
>>> microcode   : 0x1f
>>> cpu MHz : 3066.920
>>> cache size  : 12288 KB
>>> physical id : 0
>>> siblings: 12
>>> core id : 10
>>> cpu cores   : 6
>>> apicid  : 21
>>> initial apicid  : 21
>>> fpu : yes
>>> fpu_exception   : yes
>>> cpuid level : 11
>>> wp  : yes
>>> flags   : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca 
>>> cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx 
>>> pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl 
>>> xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor 
>>> ds_cpl vmx smx est tm2 ssse3 cx16 xtpr pdcm pcid dca sse4_1 sse4_2 popcnt 
>>> aes lahf_lm ssbd ibrs ibpb st

Re: [Ntop-misc] help with packet dropping when bursting traffic

2019-12-27 Thread Alfredo Cardigliano
 L#15 (P#15)
> L2 L#8 (256KB) + L1d L#8 (32KB) + L1i L#8 (32KB) + Core L#8
>   PU L#16 (P#5)
>   PU L#17 (P#17)
> L2 L#9 (256KB) + L1d L#9 (32KB) + L1i L#9 (32KB) + Core L#9
>   PU L#18 (P#7)
>   PU L#19 (P#19)
> L2 L#10 (256KB) + L1d L#10 (32KB) + L1i L#10 (32KB) + Core L#10
>   PU L#20 (P#9)
>   PU L#21 (P#21)
> L2 L#11 (256KB) + L1d L#11 (32KB) + L1i L#11 (32KB) + Core L#11
>   PU L#22 (P#11)
>   PU L#23 (P#23)
>   Misc(MemoryModule)
>   Misc(MemoryModule)
>   Misc(MemoryModule)
>   Misc(MemoryModule)
>   Misc(MemoryModule)
>   Misc(MemoryModule)
>   Misc(MemoryModule)
>   Misc(MemoryModule)
>   HostBridge L#0
> PCIBridge
>   PCI 14e4:163b
> Net L#0 "em1"
>   PCI 14e4:163b
> Net L#1 "em2"
> PCIBridge
>   PCI 1000:0079
> Block(Disk) L#2 "sda"
> PCIBridge
>   PCIBridge
> PCIBridge
>   PCI 8086:10e8
> Net L#3 "p2p1"
>   PCI 8086:10e8
> Net L#4 "p2p2"
> PCIBridge
>   PCI 8086:10e8
> Net L#5 "p2p3"
>   PCI 8086:10e8
> Net L#6 "p2p4"
> PCIBridge
>   PCI 8086:10fb
> Net L#7 "p1p1"
>   PCI 8086:10fb
> Net L#8 "p1p2"
> PCIBridge
>   PCI 102b:0532
> GPU L#9 "card0"
> GPU L#10 "controlD64"
> 
> 
> 
> 
> 
> 
> 
> On Mon, Nov 25, 2019 at 4:11 PM Alfredo Cardigliano wrote:
> Hi
> please provide:
> - the pfcount command you are using
> - cat /proc/net/pf_ring/dev//info
> - cat /proc/cpuinfo
> - lstopo
> 
> Thank you
> Alfredo
> 
>> On 24 Nov 2019, at 07:38, Yong Wang wrote:
>> 
>>  
>> Hi there, I am struggling with weird packet dropping.  My environment is 
>> PF_RING ZC with an Intel X520 NIC. I started a pfcount to listen on one of the 10Gb 
>> ports of this card. When burst traffic of up to 2Gb came in, ifconfig showed 
>> thousands of packets were dropped. The reason I call it weird is that when the 
>> traffic stayed at 2Gb, no packets were dropped any more.
>>  
>> Any comment?
>> ___
>> Ntop-misc mailing list
>> Ntop-misc@listgateway.unipi.it
>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc

___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] n2disk: 100Gbps?

2019-12-13 Thread Alfredo Cardigliano
Hi Marcel
we have a reference box that we tested at 100Gbps, however all the boxes
we have in production at the moment are <60Gbps.
If you tell me your preferred hw manufacturer, I can send you some specs.

Regards
Alfredo

> On 13 Dec 2019, at 15:13, Lüthi Marcel FUB  wrote:
> 
> Hi Alfredo
> 
> That sounds a little bit like a challenge.
> There is no "nBox Recorder" for 100Gbps?
> 
> Regards,
> Marcel
> 
> 
> 
> -----Original Message-----
> From: ntop-misc-boun...@listgateway.unipi.it on behalf of Alfredo Cardigliano
> Sent: Friday, 13 December 2019 15:01
> To: ntop-misc@listgateway.unipi.it
> Subject: Re: [Ntop-misc] n2disk: 100Gbps?
> 
> Hi Marcel
> there are a few options, for n2disk we recommend Napatech or Fiberblaze as 
> they support native bulk PCAP mode.
> As of the storage, you can use n2disk to write to multiple NVMe disks in 
> parallel (I do not have a specific model to recommend) or you can use SAS 
> HDDs with 2-3 Raid controllers (each controller is usually able to write up 
> to 35-45Gbps).
> 
> Regards
> Alfredo
> 
>> On 13 Dec 2019, at 14:55, Lüthi Marcel FUB  
>> wrote:
>> 
>> Hi Alfredo
>> 
>> Thank you for the quick reply!
>> Which FPGA card, NVMe, system would you suggest?
>> 
>> Regards,
>> Marcel
>> 
>> 
>> 
>> -----Original Message-----
>> From: ntop-misc-boun...@listgateway.unipi.it on behalf of Alfredo Cardigliano
>> Sent: Friday, 13 December 2019 14:40
>> To: ntop-misc@listgateway.unipi.it
>> Subject: Re: [Ntop-misc] n2disk: 100Gbps?
>> 
>> Hi Marcel
>> fm10k-based adapters support 100G connectivity, however they are not 
>> optimized to handle full 100Gbps.
>> Please consider using FPGA adapters if you want to record 100Gbps to disk 
>> (and a good NVMe disks array).
>> 
>> Regards
>> Alfredo
>> 
>>> On 13 Dec 2019, at 14:22, Lüthi Marcel FUB  
>>> wrote:
>>> 
>>> Hi.
>>> 
>>> We acquired a "Silicom PE3100G2DQIR-QX4 V:1.4" which is based on Intel 
>>> FM10420.
>>> As described in the PF_RING guide 
>>> (https://www.ntop.org/guides/pf_ring/zc.html#supported-cards), FM10420 is 
>>> supported.
>>> Therefore, can we assume that 100Gbps can be recorded with n2disk? (a 
>>> corresponding hardware provided!)
>>> 
>>> Thank you for your help.
>>> Regards,
>>> Marcel
>>> 
>>> ___
>>> Ntop-misc mailing list
>>> Ntop-misc@listgateway.unipi.it
>>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>> 
>> ___
>> Ntop-misc mailing list
>> Ntop-misc@listgateway.unipi.it
>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>> ___
>> Ntop-misc mailing list
>> Ntop-misc@listgateway.unipi.it
>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
> 
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc

___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] n2disk: 100Gbps?

2019-12-13 Thread Alfredo Cardigliano
Hi Marcel
there are a few options, for n2disk we recommend Napatech or 
Fiberblaze as they support native bulk PCAP mode.
As of the storage, you can use n2disk to write to multiple NVMe
disks in parallel (I do not have a specific model to recommend) or
you can use SAS HDDs with 2-3 Raid controllers (each controller is
usually able to write up to 35-45Gbps).

Regards
Alfredo
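
(To our understanding, spreading the dump across volumes is done by repeating
the n2disk dump-directory option, roughly: n2disk -i nt:0 -o /storage1 -o /storage2;
an illustrative command line, with interface name and paths hypothetical.)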

> On 13 Dec 2019, at 14:55, Lüthi Marcel FUB  wrote:
> 
> Hi Alfredo
> 
> Thank you for the quick reply!
> Which FPGA card, NVMe, system would you suggest?
> 
> Regards,
> Marcel
> 
> 
> 
> -----Original Message-----
> From: ntop-misc-boun...@listgateway.unipi.it on behalf of Alfredo Cardigliano
> Sent: Friday, 13 December 2019 14:40
> To: ntop-misc@listgateway.unipi.it
> Subject: Re: [Ntop-misc] n2disk: 100Gbps?
> 
> Hi Marcel
> fm10k-based adapters support 100G connectivity, however they are not 
> optimized to handle full 100Gbps.
> Please consider using FPGA adapters if you want to record 100Gbps to disk 
> (and a good NVMe disks array).
> 
> Regards
> Alfredo
> 
>> On 13 Dec 2019, at 14:22, Lüthi Marcel FUB  
>> wrote:
>> 
>> Hi.
>> 
>> We acquired a "Silicom PE3100G2DQIR-QX4 V:1.4" which is based on Intel 
>> FM10420.
>> As described in the PF_RING guide 
>> (https://www.ntop.org/guides/pf_ring/zc.html#supported-cards), FM10420 is 
>> supported.
>> Therefore, can we assume that 100Gbps can be recorded with n2disk? (a 
>> corresponding hardware provided!)
>> 
>> Thank you for your help.
>> Regards,
>> Marcel
>> 
>> ___
>> Ntop-misc mailing list
>> Ntop-misc@listgateway.unipi.it
>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
> 
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc

___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] n2disk: 100Gbps?

2019-12-13 Thread Alfredo Cardigliano
Hi Marcel
fm10k-based adapters support 100G connectivity, however 
they are not optimized to handle full 100Gbps.
Please consider using FPGA adapters if you want to record
100Gbps to disk (and a good NVMe disks array).

Regards
Alfredo

> On 13 Dec 2019, at 14:22, Lüthi Marcel FUB  wrote:
> 
> Hi.
> 
> We acquired a "Silicom PE3100G2DQIR-QX4 V:1.4" which is based on Intel 
> FM10420.
> As described in the PF_RING guide 
> (https://www.ntop.org/guides/pf_ring/zc.html#supported-cards), FM10420 is 
> supported.
> Therefore, can we assume that 100Gbps can be recorded with n2disk? (a 
> corresponding hardware provided!)
> 
> Thank you for your help.
> Regards,
> Marcel
> 
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc

___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] Problem using systemctl cluster@

2019-12-11 Thread Alfredo Cardigliano
Hi Mike
I updated the doc.

Thank you
Alfredo

> On 10 Dec 2019, at 16:49, Mike Iglesias  wrote:
> 
> On 12/10/19 7:38 AM, Alfredo Cardigliano wrote:
>> Hi Mike
>> please use one option per line, as in the example below:
>> 
>> -c=99
>> -i=p1p1
>> -n=2
>> -m=2
>> -g=2
> 
> Ok, thanks.  I'll admit I missed the "one per line" part in the docs, but I
> don't see anything about using the "=".
> 
> 
> Mike

___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc


Re: [Ntop-misc] Problem using systemctl cluster@

2019-12-10 Thread Alfredo Cardigliano
Hi Mike
please use one option per line, as in the example below:

-c=99
-i=p1p1
-n=2
-m=2
-g=2

Alfredo
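
With the file fixed, the unit is driven the usual systemd way (unit name taken
from the thread below):

  systemctl start cluster@99
  systemctl enable cluster@99   # optional: start at boot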

> On 10 Dec 2019, at 00:45, Mike Iglesias  wrote:
> 
> I'm trying to set up the systemctl cluster@ files to use zbalance_ipc to
> balance a stream.  In /etc/cluster/cluster-99.conf I have this:
> 
> -i p1p1 -m 2 -n 2 -g 2 -c 99
> 
> I copied /lib/systemd/system/cluster@.service to
> /etc/systemd/system/cluster@99.service.  When I try to start it I get an error
> from systemctl saying that the service didn't start.
> 
> If I do this:
> 
> zbalance_ipc /etc/cluster/cluster-99.conf
> 
> I get the help output with all the options to the zbalance_ipc program, so it
> doesn't like something in the cluster-99.conf file.  If I run zbalance_ipc 
> with
> the options in the cluster-99.conf file on the command line it works.
> 
> I'm using version 7.5.0-2747.
> 
> 
> -- 
> Mike Iglesias  Email:   igles...@uci.edu
> University of California, Irvine   phone:   949-824-6926
> Office of Information Technology   FAX: 949-824-2270
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc

___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc


Re: [Ntop-misc] help with packet dropping when bursting traffic

2019-11-25 Thread Alfredo Cardigliano
Hi
please provide:
- the pfcount command you are using
- cat /proc/net/pf_ring/dev//info
- cat /proc/cpuinfo
- lstopo

Thank you
Alfredo

> On 24 Nov 2019, at 07:38, Yong Wang  wrote:
> 
>  
> Hi there, I am struggling with weird packet dropping.  My environment is 
> PF_RING ZC with an Intel X520 NIC. I started a pfcount to listen on one of the 10Gb 
> ports of this card. When burst traffic of up to 2Gb came in, ifconfig showed 
> thousands of packets were dropped. The reason I call it weird is that when the 
> traffic stayed at 2Gb, no packets were dropped any more.
>  
> Any comment?
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it 
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc 
> 
___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] Query regarding zc driver & license

2019-11-11 Thread Alfredo Cardigliano
Hi Chandrika
you do not need a license to use the adapter in standard mode, even if you are 
loading
a ZC driver. Please note that ZC drivers are patched Intel drivers, which are 
slightly different
from the vanilla Linux drivers: I would not expect huge performance differences, 
however there
could be some difference.

Alfredo

> On 11 Nov 2019, at 14:23, Chandrika Gautam  
> wrote:
> 
> 
> Hi team, 
> 
> If we use the optimized ZC driver from the pfring package and open the device in 
> normal mode (non-ZC mode) in our software, we have observed better 
> performance than using the standard drivers available on the server. Please 
> validate whether this observation is correct. Do we still need to buy a ZC license 
> for that interface?
> 
> Regards,
> Chandrika 
> 
> Sent from my iPhone
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc

___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc


Re: [Ntop-misc] Query on software using Pfring 7.5 and i40e driver

2019-09-10 Thread Alfredo Cardigliano
Hi Chandrika
I suggest you to use other cards from the ixgbe family, e.g. X520.

Alfredo

> On 10 Sep 2019, at 13:40, Chandrika Gautam  
> wrote:
> 
> Hi Afredo, 
> 
> Can you please advise which card you feel will give the same performance as the 
> Intel 82599 with pf_ring to process 4-5 Gbps of traffic? 
> We are observing lots of drops at the interface for the X710 card while processing 
> 2-3 Gbps of traffic.
> 
> Please note that intel 82599 is end of life.
> 
> Regards,
> Chandrika
> 
> Sent from my iPhone
> 
> On Sep 10, 2019, at 11:41 AM, Chandrika Gautam wrote:
> 
>> Hi Alfredo,
>> Please provide an update on this !
>> Regards, 
>> Chandrika
>> 
>> 
>> 
>> Sent from my iPhone
>> 
>> Begin forwarded message:
>> 
>>> From: Chandrika Gautam
>>> Date: September 9, 2019 at 8:47:55 AM GMT+5:30
>>> To: ntop-misc@listgateway.unipi.it
>>> Subject: Re: [Ntop-misc] Query on software using Pfring 7.5 and i40e driver
>>> 
>>> Hi Alfredo, 
>>> 
>>> Have you been able to look at this further ???
>>> 
>>> Any feedback on ksoftirq reaching 100% continuously.
>>> 
>>> Regards, 
>>> Chandrika
>>> 
>>> Sent from my iPhone
>>> 
>>> On Sep 4, 2019, at 8:11 PM, Alfredo Cardigliano wrote:
>>> 
>>>> Hi Chandrika
>>>> this is definitely strange: please note that with standard drivers 
>>>> interrupts 
>>>> are not controlled by PF_RING as they are managed by the driver itself..
>>>> this is interesting, I will take a look asap.
>>>> 
>>>> Alfredo
>>>> 
>>>>> On 4 Sep 2019, at 14:15, Chandrika Gautam wrote:
>>>>> 
>>>>> That’s the secondary issue which we will focus later on. 
>>>>> 
>>>>> So let me rephrase the problem statements - 
>>>>> 1. This does not involve zc!  Problem 1- When we are using our software 
>>>>> integrated with pfring 7.5.0, we do not see counters incremented in cat 
>>>>> /proc/interrupts/ for the interface from which we are reading the packets.
>>>>> We are using standard i40e. driver on rhel 7.6 and have reduced the 
>>>>> number of queues of this card interfaces to 4 and set the smp affinity 
>>>>> explicitly. 
>>>>> Second problem - we can see ksoftirq for the assigned cores taking 100% 
>>>>> of CPU due to which we are observing drop at the interface as well.
>>>>> When we run tcpdump on the same interface, we can see the interrupts 
>>>>> counts incrementing using above same command. 
>>>>> 
>>>>> 2- above observation is same when we deploy zc compiled i40e driver ; 
>>>>> 
>>>>> Will there be no interrupts with i40e driver when used with pf_ring ??
>>>>> 
>>>>> Regards, 
>>>>> Chandrika
>>>>> Sent from my iPhone
>>>>> 
>>>>> On Sep 4, 2019, at 1:04 PM, Alfredo Cardigliano wrote:
>>>>> 
>>>>>> Hi Chandrika
>>>>>> are you observing better performance with ixgbe? Please note that ixgbe 
>>>>>> has a bigger buffer in the card,
>>>>>> this could explain better performance in case of high traffic rate (what 
>>>>>> is the pps rate in this case?)
>>>>>> 
>>>>>> Alfredo
>>>>>> 
>>>>>>> On 28 Aug 2019, at 12:56, Chandrika Gautam wrote:
>>>>>>> 
>>>>>>> Hi Alfredo,
>>>>>>> 
>>>>>>> Interrupts are not seen either with standard driver or with the zc 
>>>>>>> compiled i40e driver as well.
>>>>>>> 
>>>>>>> What we do in our application is -
>>>>>>> We load zc compiled i40e driver and put a zc license on that interface 
>>>>>>> but we do not open device using zc: prefix.
>>>>>>> Using this we have observed better performance in our application when 
>>>>>>> used with the ixgbe driver.

Re: [Ntop-misc] Query on software using Pfring 7.5 and i40e driver

2019-09-10 Thread Alfredo Cardigliano
Hi Chandrika
sorry for the late reply, I've been busy with other activities. The IRQ rate 
depends on 
the way the application handles poll/select, the traffic rate, and other 
factors. 
This said, I was not able to reproduce the rate you are describing; it probably
also depends on the traffic you have or other conditions.

Alfredo
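
To illustrate the two modes (a fragment using the pfring API; the last
pfring_recv() argument selects a blocking wait vs active polling):

  u_char *pkt;
  struct pfring_pkthdr hdr;

  /* wait_for_packet = 1: the call may sleep in poll(), generating interrupts */
  pfring_recv(ring, &pkt, 0, &hdr, 1);

  /* wait_for_packet = 0: active polling, returns immediately, few/no interrupts */
  while (pfring_recv(ring, &pkt, 0, &hdr, 0) <= 0)
    ; /* busy loop */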

> On 9 Sep 2019, at 05:17, Chandrika Gautam  
> wrote:
> 
> Hi Alfredo, 
> 
> Have you been able to look at this further ???
> 
> Any feedback on ksoftirq reaching 100% continuously.
> 
> Regards, 
> Chandrika
> 
> Sent from my iPhone
> 
> On Sep 4, 2019, at 8:11 PM, Alfredo Cardigliano wrote:
> 
>> Hi Chandrika
>> this is definitely strange: please note that with standard drivers 
>> interrupts 
>> are not controlled by PF_RING as they are managed by the driver itself..
>> this is interesting, I will take a look asap.
>> 
>> Alfredo
>> 
>>> On 4 Sep 2019, at 14:15, Chandrika Gautam wrote:
>>> 
>>> That’s the secondary issue which we will focus later on. 
>>> 
>>> So let me rephrase the problem statements - 
>>> 1. This does not involve zc!  Problem 1- When we are using our software 
>>> integrated with pfring 7.5.0, we do not see counters incremented in cat 
>>> /proc/interrupts/ for the interface from which we are reading the packets.
>>> We are using standard i40e. driver on rhel 7.6 and have reduced the number 
>>> of queues of this card interfaces to 4 and set the smp affinity explicitly. 
>>> Second problem - we can see ksoftirq for the assigned cores taking 100% of 
>>> CPU due to which we are observing drop at the interface as well.
>>> When we run tcpdump on the same interface, we can see the interrupts counts 
>>> incrementing using above same command. 
>>> 
>>> 2- above observation is same when we deploy zc compiled i40e driver ; 
>>> 
>>> Will there be no interrupts with i40e driver when used with pf_ring ??
>>> 
>>> Regards, 
>>> Chandrika
>>> Sent from my iPhone
>>> 
>>> On Sep 4, 2019, at 1:04 PM, Alfredo Cardigliano wrote:
>>> 
>>>> Hi Chandrika
>>>> are you observing better performance with ixgbe? Please note that ixgbe 
>>>> has a bigger buffer in the card,
>>>> this could explain better performance in case of high traffic rate (what 
>>>> is the pps rate in this case?)
>>>> 
>>>> Alfredo
>>>> 
>>>>> On 28 Aug 2019, at 12:56, Chandrika Gautam wrote:
>>>>> 
>>>>> Hi Alfredo,
>>>>> 
>>>>> Interrupts are not seen either with standard driver or with the zc 
>>>>> compiled i40e driver as well.
>>>>> 
>>>>> What we do in our application is -
>>>>> We load zc compiled i40e driver and put a zc license on that interface 
>>>>> but we do not open device using zc: prefix.
>>>>> Using this we have observed better performance in our application when 
>>>>> used with ixgbe driver.
>>>>> 
>>>>> Regards,
>>>>> Chandrika
>>>>> 
>>>>> Sent from my iPhone
>>>>> 
>>>>> On Aug 28, 2019, at 12:58 PM, Alfredo Cardigliano wrote:
>>>>> 
>>>>>> Hi Chandrika
>>>>>> are you using the i40e-zc driver or the standard driver?
>>>>>> Please note that ZC enables interrupts only when required for 
>>>>>> performance reason,
>>>>>> for example some libpcap-based applications require interrupts as they 
>>>>>> use poll/select,
>>>>>> instead many pf_ring-based application do not use interrupts at all 
>>>>>> (they do active polling).
>>>>>> 
>>>>>> Alfredo
>>>>>> 
>>>>>>> On 28 Aug 2019, at 08:59, Chandrika Gautam wrote:
>>>>>>> 
>>>>>>> 
>>>>>>> Hi Team,
>>>>>>> 
>>>>>>> 
>>>>>>> 
>>>>>>> We have compiled our software with PF_RING 7.5.0 which reads from an 
>>>>>>> interface on the i40e driver.

Re: [Ntop-misc] 10/40Gb simple Tap

2019-09-09 Thread Alfredo Cardigliano
Hi Oren
this means receiving 4 Mpps and sending 13.5 Mpps total in the worst case, while 
applying a filter.
If you replace the BPF filter with a simple VLAN lookup (since it seems this is 
enough for you)
and you leverage RSS, this looks definitely feasible. A 4-core 3 GHz CPU 
should be enough,
with the minimum amount of RAM needed to populate all CPU memory channels; no disk is required.

Regards
Alfredo
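
For reference, a "simple VLAN lookup" boils down to a couple of header reads
per frame; a minimal sketch (assumes single-tagged 802.1Q Ethernet; not taken
from any shipped tool):

  #include <stdint.h>
  #include <arpa/inet.h>

  /* return the 12-bit VLAN ID, or -1 if the frame is not 802.1Q tagged */
  static int vlan_id(const uint8_t *pkt, uint32_t len) {
    if (len < 18) return -1;                 /* too short for a tagged frame */
    uint16_t ethertype = ntohs(*(const uint16_t *) (pkt + 12));
    if (ethertype != 0x8100) return -1;      /* not 802.1Q */
    return ntohs(*(const uint16_t *) (pkt + 14)) & 0x0FFF;
  }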

> On 9 Sep 2019, at 09:40, Oren N  wrote:
> 
> Hi Alfredo,
> This is hard to say, as this is for future needs,
> interpolating from existing stats: 
>  2 Mpps on average 
>  4 Mpps peaks
> Thanks,
> Oren
> 
> From: ntop-misc-boun...@listgateway.unipi.it 
>  on behalf of Alfredo Cardigliano 
> 
> Sent: Thursday, September 5, 2019 4:29 PM
> To: ntop-misc@listgateway.unipi.it 
> Subject: Re: [Ntop-misc] 10/40Gb simple Tap
>  
> Hi Oren
> as of the traffic rate, do you also have numbers about the avg/peak 
> packets/sec rate?
> 
> Thank you
> Alfredo
> 
>> On 5 Sep 2019, at 15:23, Oren N wrote:
>> 
>> Hi Alfredo,
>> Thanks for the quick resp.
>> To answer your questions:
>> Ingress: 1x 10Gb / Egress: 3x 10Gb + 1x 1Gb
>> <5Gbps
>> The 1Gb filter is set by selecting 802.1q VLANs  (alternatively, may be 
>> replaced by Segment/IP range filter); 10Gb are unfiltered - namely copy 
>> everything
>> Thanks,
>> Oren
>> 
>> From: ntop-misc-boun...@listgateway.unipi.it on behalf of Alfredo Cardigliano
>> Sent: Wednesday, August 28, 2019 12:59 PM
>> To: ntop-misc@listgateway.unipi.it
>> Subject: Re: [Ntop-misc] 10/40Gb simple Tap
>>  
>> Hi Oren
>> traffic aggregation from multiple ingress ports and duplication/distribution
>> to multiple egress interfaces is available with PF_RING ZC, we have a
>> tool (zbalance_ipc) which is based on the ZC API and able to do that
>> A small subset of the functionalities provided by this tool are described at
>> https://www.ntop.org/guides/pf_ring/rss.html#zc-load-balancing-zbalance-ipc
>> please note that it is able to distribute traffic to processes on the same 
>> host,
>> as well as egress interfaces. What is missing is traffic filtering on the 
>> egress
>> interfaces, however that is already supported by the ZC API, and the tool
>> can support it with small changes.
>> The main concern here is about performance, it really depends on a few 
>> factors:
>> 1. number of ingress/egress links
>> 2. traffic rate
>> 3. filters
>> Do you have some number?
>> 
>> Alfredo
>> 
>>> On 28 Aug 2019, at 14:33, Oren N wrote:
>>> 
>>> Hi,
>>> Is it possible to build an inexpensive 10/40Gb simple Tap using PF_RING + 
>>> ZC?  The base requirements is as follows:
>>> 1. Monitor a mix of 10Gb and 1Gb interfaces
>>> 2. Duplicate traffic to N x 10/40Gb output interfaces
>>> 3. Each output interface may have a network filter
>>> 4. Each output interface receives all input packets that match the network 
>>> filter
>>> 5. Simple CLI/GUI
>>> 
>>> Is this a feasible/documented solution using PF_RING?
>>> What are the limitations? E.g., packet loss if input > output
>>> What are the HW specs for such box? E.g.,  8 core CPU, mem, disk, ... for 
>>> input < 2x 10Gb+2x 1G input ; Enough PCI slots
>>> Are there additional HW/SW costs beyond the PF_RING ZC NICs?
>>> 
>>> Thanks,
>>> Oren
>>> ___
>>> Ntop-misc mailing list
>>> Ntop-misc@listgateway.unipi.it
>>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>> ___
>> Ntop-misc mailing list
>> Ntop-misc@listgateway.unipi.it
>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc

___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] 10/40Gb simple Tap

2019-09-05 Thread Alfredo Cardigliano
Hi Oren
as of the traffic rate, do you also have numbers about the avg/peak packets/sec 
rate?

Thank you
Alfredo

> On 5 Sep 2019, at 15:23, Oren N  wrote:
> 
> Hi Alfredo,
> Thanks for the quick resp.
> To answer your questions:
> Ingress: 1x 10Gb / Egress: 3x 10Gb + 1x 1Gb
> <5Gbps
> The 1Gb filter is set by selecting 802.1q VLANs  (alternatively, may be 
> replaced by Segment/IP range filter); 10Gb are unfiltered - namely copy 
> everything
> Thanks,
> Oren
> 
> From: ntop-misc-boun...@listgateway.unipi.it 
>  on behalf of Alfredo Cardigliano 
> 
> Sent: Wednesday, August 28, 2019 12:59 PM
> To: ntop-misc@listgateway.unipi.it 
> Subject: Re: [Ntop-misc] 10/40Gb simple Tap
>  
> Hi Oren
> traffic aggregation from multiple ingress ports and duplication/distribution
> to multiple egress interfaces is available with PF_RING ZC, we have a
> tool (zbalance_ipc) which is based on the ZC API and able to do that
> A small subset of the functionalities provided by this tool are described at
> https://www.ntop.org/guides/pf_ring/rss.html#zc-load-balancing-zbalance-ipc
> please note that it is able to distribute traffic to processes on the same 
> host,
> as well as egress interfaces. What is missing is traffic filtering on the 
> egress
> interfaces, however that is already supported by the ZC API, and the tool
> can support it with small changes.
> The main concern here is about performance, it really depends on a few 
> factors:
> 1. number of ingress/egress links
> 2. traffic rate
> 3. filters
> Do you have some number?
> 
> Alfredo
> 
>> On 28 Aug 2019, at 14:33, Oren N wrote:
>> 
>> Hi,
>> Is it possible to build an inexpensive 10/40Gb simple Tap using PF_RING + 
>> ZC?  The base requirements is as follows:
>> 1. Monitor a mix of 10Gb and 1Gb interfaces
>> 2. Duplicate traffic to N x 10/40Gb output interfaces
>> 3. Each output interface may have a network filter
>> 4. Each output interface receives all input packets that match the network 
>> filter
>> 5. Simple CLI/GUI
>> 
>> Is this a feasible/documented solution using PF_RING?
>> What are the limitations? E.g., packet loss if input > output
>> What are the HW specs for such box? E.g.,  8 core CPU, mem, disk, ... for 
>> input < 2x 10Gb+2x 1G input ; Enough PCI slots
>> Are there additional HW/SW costs beyond the PF_RING ZC NICs?
>> 
>> Thanks,
>> Oren
>> ___
>> Ntop-misc mailing list
>> Ntop-misc@listgateway.unipi.it
>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] Query on software using Pfring 7.5 and i40e driver

2019-09-04 Thread Alfredo Cardigliano
Hi Chandrika
this is definitely strange: please note that with standard drivers interrupts 
are not controlled by PF_RING as they are managed by the driver itself..
this is interesting, I will take a look asap.

Alfredo

> On 4 Sep 2019, at 14:15, Chandrika Gautam  
> wrote:
> 
> That’s the secondary issue which we will focus later on. 
> 
> So let me rephrase the problem statements - 
> 1. This does not involve ZC! Problem 1: when we are using our software 
> integrated with pfring 7.5.0, we do not see counters incremented in cat 
> /proc/interrupts for the interface from which we are reading the packets.
> We are using the standard i40e driver on RHEL 7.6 and have reduced the number of 
> queues of this card's interfaces to 4 and set the SMP affinity explicitly. 
> Second problem: we can see ksoftirqd for the assigned cores taking 100% of 
> CPU, due to which we are observing drops at the interface as well.
> When we run tcpdump on the same interface, we can see the interrupt counts 
> incrementing using the same command above.
> 
> 2- above observation is same when we deploy zc compiled i40e driver ; 
> 
> Will there be no interrupts with i40e driver when used with pf_ring ??
> 
> Regards, 
> Chandrika
> Sent from my iPhone
> 
> On Sep 4, 2019, at 1:04 PM, Alfredo Cardigliano wrote:
> 
>> Hi Chandrika
>> are you observing better performance with ixgbe? Please note that ixgbe has 
>> a bigger buffer in the card,
>> this could explain better performance in case of high traffic rate (what is 
>> the pps rate in this case?)
>> 
>> Alfredo
>> 
>>> On 28 Aug 2019, at 12:56, Chandrika Gautam wrote:
>>> 
>>> Hi Alfredo,
>>> 
>>> Interrupts are not seen either with the standard driver or with the zc-compiled 
>>> i40e driver.
>>> 
>>> What we do in our application is -
>>> We load zc compiled i40e driver and put a zc license on that interface but 
>>> we do not open device using zc: prefix.
>>> Using this we have observed better performance in our application when used 
>>> with the ixgbe driver.
>>> 
>>> Regards,
>>> Chandrika
>>> 
>>> Sent from my iPhone
>>> 
>>> On Aug 28, 2019, at 12:58 PM, Alfredo Cardigliano >> <mailto:cardigli...@ntop.org>> wrote:
>>> 
>>>> Hi Chandrika
>>>> are you using the i40e-zc driver or the standard driver?
>>>> Please note that ZC enables interrupts only when required for performance 
>>>> reasons:
>>>> for example some libpcap-based applications require interrupts as they use 
>>>> poll/select,
>>>> while many pf_ring-based applications do not use interrupts at all (they 
>>>> do active polling).
>>>> 
>>>> Alfredo
>>>> 
>>>>> On 28 Aug 2019, at 08:59, Chandrika Gautam >>>> <mailto:chandrika.iitd.r...@gmail.com>> wrote:
>>>>> 
>>>>> 
>>>>> Hi Team,
>>>>> 
>>>>> 
>>>>> 
>>>>> We have compiled our software with PF_RING 7.5.0 which reads from an 
>>>>> interface on i40e driver. 
>>>>> 
>>>>> Whenever we start our software, it processes the traffic received on the 
>>>>> interface but no interrupts are seen. 
>>>>> 
>>>>> Whenever we start tcpdump on the same interface, then interrupts can be 
>>>>> seen. 
>>>>> 
>>>>> Can you please help us validate this behavior? Is it the expected behavior? 
>>>>> We need to validate that the interrupts are coming on the cores to which 
>>>>> we have set the affinity.
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> Regards,
>>>>> 
>>>>> Chandrika
>>>>> 
>>>>> ___
>>>>> Ntop-misc mailing list
>>>>> Ntop-misc@listgateway.unipi.it <mailto:Ntop-misc@listgateway.unipi.it>
>>>>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc 
>>>>> <http://listgateway.unipi.it/mailman/listinfo/ntop-misc>
>>>> ___
>>>> Ntop-misc mailing list
>>>> Ntop-misc@listgateway.unipi.it <mailto:Ntop-misc@listgateway.unipi.it>
>>>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc 
>>>> <http://listgateway.unipi.it/mailman/listinfo/ntop-misc>___
>>> Ntop-misc mailing list
>>> Ntop-misc@listgateway.unipi.it <mailto:Ntop-misc@listgateway.unipi.it>
>>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc 
>>> <http://listgateway.unipi.it/mailman/listinfo/ntop-misc>
>> ___
>> Ntop-misc mailing list
>> Ntop-misc@listgateway.unipi.it <mailto:Ntop-misc@listgateway.unipi.it>
>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc 
>> <http://listgateway.unipi.it/mailman/listinfo/ntop-misc>___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc

___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] Query on software using Pfring 7.5 and i40e driver

2019-09-04 Thread Alfredo Cardigliano
Hi Chandrika
are you observing better performance with ixgbe? Please note that ixgbe has a 
bigger buffer in the card,
this could explain better performance in case of high traffic rate (what is the 
pps rate in this case?)

Alfredo

> On 28 Aug 2019, at 12:56, Chandrika Gautam  
> wrote:
> 
> Hi Alfredo,
> 
> Interrupts are not seen with either the standard driver or the zc-compiled 
> i40e driver.
> 
> What we do in our application is -
> We load zc compiled i40e driver and put a zc license on that interface but we 
> do not open device using zc: prefix.
> Using this we have observed better performance in our application when used 
> with the ixgbe driver.
> 
> Regards,
> Chandrika
> 
> Sent from my iPhone
> 
> On Aug 28, 2019, at 12:58 PM, Alfredo Cardigliano  <mailto:cardigli...@ntop.org>> wrote:
> 
>> Hi Chandrika
>> are you using the i40e-zc driver or the standard driver?
>> Please note that ZC enables interrupts only when required for performance 
>> reasons:
>> for example some libpcap-based applications require interrupts as they use 
>> poll/select,
>> while many pf_ring-based applications do not use interrupts at all (they do 
>> active polling).
>> 
>> Alfredo
>> 
>>> On 28 Aug 2019, at 08:59, Chandrika Gautam >> <mailto:chandrika.iitd.r...@gmail.com>> wrote:
>>> 
>>> 
>>> Hi Team,
>>> 
>>> 
>>> 
>>> We have compiled our software with PF_RING 7.5.0 which reads from an 
>>> interface on i40e driver. 
>>> 
>>> Whenever we start our software, it processes the traffic received on the 
>>> interface but no interrupts are seen. 
>>> 
>>> Whenever we start tcpdump on the same interface, then interrupts can be 
>>> seen. 
>>> 
>>> Can you please help us validate this behavior? Is it the expected behavior? We 
>>> need to validate that the interrupts are coming on the cores to which we 
>>> have set the affinity.
>>> 
>>> 
>>> 
>>> 
>>> 
>>> Regards,
>>> 
>>> Chandrika
>>> 
>>> ___
>>> Ntop-misc mailing list
>>> Ntop-misc@listgateway.unipi.it <mailto:Ntop-misc@listgateway.unipi.it>
>>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc 
>>> <http://listgateway.unipi.it/mailman/listinfo/ntop-misc>
>> ___
>> Ntop-misc mailing list
>> Ntop-misc@listgateway.unipi.it <mailto:Ntop-misc@listgateway.unipi.it>
>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc 
>> <http://listgateway.unipi.it/mailman/listinfo/ntop-misc>___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc

___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] 10/40Gb simple Tap

2019-08-28 Thread Alfredo Cardigliano
Hi Oren
traffic aggregation from multiple ingress ports and duplication/distribution
to multiple egress interfaces is available with PF_RING ZC; we have a
tool (zbalance_ipc) which is based on the ZC API and able to do that.
A small subset of the functionalities provided by this tool is described at
https://www.ntop.org/guides/pf_ring/rss.html#zc-load-balancing-zbalance-ipc 

Please note that it is able to distribute traffic to processes on the same host,
as well as to egress interfaces. What is missing is traffic filtering on the egress
interfaces; however, that is already supported by the ZC API (a minimal sketch
follows below), and the tool can support it with small changes.
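
As a hedged sketch of what such an egress filter could look like with the ZC API
(the queue handle is assumed to be already open; the filter string and the helper
name are illustrative, and only pfring_zc_set_bpf_filter() is assumed from the API):

```
#include <stdio.h>
#include <pfring_zc.h>

/* Attach a BPF expression to a ZC egress queue so that only matching
 * packets are forwarded out of that interface. */
int set_egress_filter(pfring_zc_queue *out_queue) {
  char filter[] = "udp and (port 2123 or port 2152)";

  if (pfring_zc_set_bpf_filter(out_queue, filter) != 0) {
    fprintf(stderr, "unable to set egress filter\n");
    return -1;
  }
  return 0;
}
```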
The main concern here is performance; it really depends on a few 
factors:
1. number of ingress/egress links
2. traffic rate
3. filters
Do you have some numbers?

Alfredo

> On 28 Aug 2019, at 14:33, Oren N  wrote:
> 
> Hi,
> Is it possible to build an inexpensive 10/40Gb simple Tap using PF_RING + ZC? 
>  The base requirements are as follows:
> 1. Monitor a mix of 10Gb and 1Gb interfaces
> 2. Replicate traffic to N x 10/40Gb output interfaces
> 3. Each output interface may have a network filter
> 4. Each output interface receives all input packets that match the network 
> filter
> 5. Simple CLI/GUI
> 
> Is this a feasible/documented solution using PF_RING?
> What are the limitations? E.g., packet loss if input > output
> What are the HW specs for such a box? E.g., an 8-core CPU, mem, disk, ... for 
> input < 2x 10Gb+2x 1G input ; Enough PCI slots
> Are there additional HW/SW costs beyond the PF_RING ZC NICs?
> 
> Thanks,
> Oren
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it 
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc 
> 
___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] Query on software using Pfring 7.5 and i40e driver

2019-08-28 Thread Alfredo Cardigliano
Hi Chandrika
are you using the i40e-zc driver or the standard driver?
Please note that ZC enables interrupts only when required for performance 
reasons:
for example some libpcap-based applications require interrupts as they use 
poll/select,
while many pf_ring-based applications do not use interrupts at all (they do 
active polling).
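
As a minimal sketch of what active polling looks like with the PF_RING API
(the interface name and snaplen are illustrative):

```
#include <stdio.h>
#include <pfring.h>

int main(void) {
  struct pfring_pkthdr hdr;
  u_char *buffer = NULL;
  pfring *ring = pfring_open("eth1", 1536 /* snaplen */, PF_RING_PROMISC);

  if (ring == NULL) return 1;
  pfring_enable_ring(ring);

  while (1) {
    /* wait_for_packet = 0: busy-poll, no interrupt-driven wakeups */
    if (pfring_recv(ring, &buffer, 0, &hdr, 0) > 0)
      printf("received %u bytes\n", hdr.len);
  }
}
```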

Alfredo

> On 28 Aug 2019, at 08:59, Chandrika Gautam  
> wrote:
> 
> 
> Hi Team,
> 
> 
> 
> We have compiled our software with PF_RING 7.5.0 which reads from an 
> interface on i40e driver. 
> 
> Whenever we start our software, it processes the traffic received on the 
> interface but no interrupts are seen. 
> 
> Whenever we start tcpdump on the same interface, then interrupts can be seen. 
> 
> Can you please help us validate this behavior? Is it the expected behavior? We 
> need to validate that the interrupts are coming on the cores to which we have 
> set the affinity.
> 
> 
> 
> 
> 
> Regards,
> 
> Chandrika
> 
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc

___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] Not able to achieve consistent 10Gbps on Napatech 2x40G card for more than 10 seconds

2019-08-27 Thread Alfredo Cardigliano
Hi
it depends on the SSD: if you are using a good NVMe drive you are probably able 
to read at 10 Gbps; if instead you are using a standard SSD, the read speed is
much lower.
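
As a crude sanity check of the raw sequential read speed, a hedged sketch (the
file path is illustrative, and the page cache should be dropped first, e.g. via
/proc/sys/vm/drop_caches, for a meaningful number):

```
#include <stdio.h>
#include <time.h>

int main(int argc, char *argv[]) {
  static char buf[1 << 20];                 /* read in 1 MB chunks */
  FILE *f = fopen(argc > 1 ? argv[1] : "test.pcap", "rb");
  size_t n, tot = 0;
  struct timespec t0, t1;
  double secs;

  if (f == NULL) return 1;
  clock_gettime(CLOCK_MONOTONIC, &t0);
  while ((n = fread(buf, 1, sizeof(buf), f)) > 0)
    tot += n;
  clock_gettime(CLOCK_MONOTONIC, &t1);
  secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
  printf("%.2f Gbit/s\n", (double)tot * 8 / secs / 1e9);
  fclose(f);
  return 0;
}
```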

Alfredo

> On 27 Aug 2019, at 12:33, Chandrika Gautam  
> wrote:
> 
> Hi Alfredo,
> We are using an SSD and wonder how that can be slow! 
> Please help us validate it!
> 
> Sent from my iPhone
> 
> On Aug 23, 2019, at 1:49 PM, Alfredo Cardigliano  <mailto:cardigli...@ntop.org>> wrote:
> 
>> Hi
>> please note the "WARNING: [sender] waiting.. (buffer empty)” messages in 
>> your output:
>> this means that the IO is not fast enough at reading data from disk and 
>> feeding it to disk2n.
>> What kind of storage are you using?
>> 
>> Alfredo
>> 
>>> On 23 Aug 2019, at 10:05, Chandrika Gautam >> <mailto:chandrika.iitd.r...@gmail.com>> wrote:
>>> 
>>> 
>>> 
>>> Sent from my iPhone
>>> 
>>> Begin forwarded message:
>>> 
>>>> From: Chandrika Gautam >>> <mailto:chandrika.gau...@mobileum.com>>
>>>> Date: August 23, 2019 at 1:34:05 PM GMT+5:30
>>>> To: "chandrika.iitd.r...@gmail.com <mailto:chandrika.iitd.r...@gmail.com>" 
>>>> mailto:chandrika.iitd.r...@gmail.com>>
>>>> Subject: Fw: Not able to achieve consistent 10Gbps on Napatech 2x40G card 
>>>> for more than 10 seconds
>>>> 
>>>> Hi,
>>>> 
>>>> We are evaluating disk2n on a Napatech 2x40G card in order to send traffic 
>>>> at more than 10G. 
>>>> 
>>>> We started with 10G traffic, using 9 pcaps of around 1.5G each, sent on Napatech 
>>>> port 0. 
>>>> 
>>>> Initially, for a few seconds, we were able to observe 10G traffic; 
>>>> afterwards it gradually drops to 2.6Gbps and remains around 2-2.6Gbps 
>>>> for the rest of the run.
>>>> 
>>>> So, we were not able to achieve consistent 10G traffic with disk2n on 
>>>> the Napatech card.
>>>> 
>>>> We have also purchased the license for n2disk. 
>>>> 
>>>> Question: How can we achieve 10Gbps traffic on the 2x40G Napatech card with 
>>>> driver ntanl_package_3gd-11.6.0-linux?
>>>> 
>>>> OS details
>>>> 
>>>> # uname -r
>>>> 3.10.0-957.1.3.el7.x86_64
>>>> 
>>>> # cat /etc/redhat-release 
>>>> CentOS Linux release 7.6.1810 (Core) 
>>>> 
>>>> # numactl --hardware
>>>> available: 2 nodes (0-1)
>>>> node 0 cpus: 0 1 2 3 4 5 6 7 8 9 20 21 22 23 24 25 26 27 28 29
>>>> node 0 size: 131034 MB
>>>> node 0 free: 127156 MB
>>>> node 1 cpus: 10 11 12 13 14 15 16 17 18 19 30 31 32 33 34 35 36 37 38 39
>>>> node 1 size: 131071 MB
>>>> node 1 free: 125674 MB
>>>> node distances:
>>>> node   0   1 
>>>>   0:  10  20 
>>>>   1:  20  10 
>>>> 
>>>> 
>>>> 
>>>> - Output of disk2n is shared in the attached file.
>>>> 
>>>> 
>>>> 
>>>> root@RW-MUM-COUCHBASE1 ~]# 
>>>> 
>>>> We also tried without the caching option (-b), and the max throughput was 2.5G
>>>> 
>>>> # disk2n -i nt:0 -m /tmp/playpcap.txt -c 1 -w 2 -S 3   -C 1024 -I 450  -v
>>>> 
>>>> While browsing the link 
>>>> https://www.ntop.org/guides/pf_ring/modules/napatech.html
>>>>  <https://www.ntop.org/guides/pf_ring/modules/napatech.html>
>>>> in section 
>>>> 
>>>> 8.4. Napatech and Packet Copy
>>>> If you use the PF_RING (non-ZC) API packets are read in zero-copy. Instead 
>>>> if you use PF_RING ZC API, a per-packet copy takes place, which is 
>>>> required to move payload data from Napatech-memory to ZC memory. Keep this 
>>>> in mind!
>>>> 
>>>> It describes that if we use PF_RING ZC then 1 copy happens. Could you 
>>>> please explain this?
>>>> 
>>>> -Iqbal
>>>> 
>>> 
>>> ___
>>> Ntop-misc mailing list
>>> Ntop-misc@listgateway.unipi.it <mailto:Ntop-misc@listgateway.unipi.it>
>>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc 
>>> <http://listgateway.unipi.it/mailman/listinfo/ntop-misc>
>> ___
>> Ntop-misc mailing list
>> Ntop-misc@listgateway.unipi.it <mailto:Ntop-misc@listgateway.unipi.it>
>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc 
>> <http://listgateway.unipi.it/mailman/listinfo/ntop-misc>___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc

___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] Not able to achieve consistent 10Gbps on Napatech 2x40G card for more than 10 seconds

2019-08-23 Thread Alfredo Cardigliano
Hi
please note the "WARNING: [sender] waiting.. (buffer empty)” messages in your 
output:
this means that the IO is not fast enough at reading data from disk and feeding 
it to disk2n.
What kind of storage are you using?

Alfredo

> On 23 Aug 2019, at 10:05, Chandrika Gautam  
> wrote:
> 
> 
> 
> Sent from my iPhone
> 
> Begin forwarded message:
> 
>> From: Chandrika Gautam > >
>> Date: August 23, 2019 at 1:34:05 PM GMT+5:30
>> To: "chandrika.iitd.r...@gmail.com " 
>> mailto:chandrika.iitd.r...@gmail.com>>
>> Subject: Fw: Not able to achieve consistent 10Gbps on Napatech 2x40G card 
>> for more than 10 seconds
>> 
>> Hi,
>> 
>> We are evaluating disk2n on a Napatech 2x40G card in order to send traffic 
>> at more than 10G. 
>> 
>> We started with 10G traffic, using 9 pcaps of around 1.5G each, sent on Napatech 
>> port 0. 
>> 
>> Initially, for a few seconds, we were able to observe 10G traffic; 
>> afterwards it gradually drops to 2.6Gbps and remains around 2-2.6Gbps for 
>> the rest of the run.
>> 
>> So, we were not able to achieve consistent 10G traffic with disk2n on 
>> the Napatech card.
>> 
>> We have also purchased the license for n2disk. 
>> 
>> Question: How can we achieve 10Gbps traffic on the 2x40G Napatech card with driver 
>> ntanl_package_3gd-11.6.0-linux?
>> 
>> OS details
>> 
>> # uname -r
>> 3.10.0-957.1.3.el7.x86_64
>> 
>> # cat /etc/redhat-release 
>> CentOS Linux release 7.6.1810 (Core) 
>> 
>> # numactl --hardware
>> available: 2 nodes (0-1)
>> node 0 cpus: 0 1 2 3 4 5 6 7 8 9 20 21 22 23 24 25 26 27 28 29
>> node 0 size: 131034 MB
>> node 0 free: 127156 MB
>> node 1 cpus: 10 11 12 13 14 15 16 17 18 19 30 31 32 33 34 35 36 37 38 39
>> node 1 size: 131071 MB
>> node 1 free: 125674 MB
>> node distances:
>> node   0   1 
>>   0:  10  20 
>>   1:  20  10 
>> 
>> 
>> 
>> - Output of disk2n is shared in the attached file.
>> 
>> 
>> 
>> root@RW-MUM-COUCHBASE1 ~]# 
>> 
>> We also tried without the caching option (-b), and the max throughput was 2.5G
>> 
>> # disk2n -i nt:0 -m /tmp/playpcap.txt -c 1 -w 2 -S 3   -C 1024 -I 450  -v
>> 
>> While browsing the link 
>> https://www.ntop.org/guides/pf_ring/modules/napatech.html
>>  
>> in section 
>> 
>> 8.4. Napatech and Packet Copy
>> If you use the PF_RING (non-ZC) API packets are read in zero-copy. Instead 
>> if you use PF_RING ZC API, a per-packet copy takes place, which is required 
>> to move payload data from Napatech-memory to ZC memory. Keep this in mind!
>> 
>> It describes that if we use PF_RING ZC then 1 copy happens. Could you please 
>> explain this?
>> 
>> -Iqbal
>> 
> 
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc

___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] sar output/zbalance_ipc

2019-06-21 Thread Alfredo Cardigliano
Hi Jim
in order to enable ZC you should add the zc: prefix to the interface name:

zbalance_ipc -i zc:ens5f0 -m 4 -n 48 -c 99 -g 70 -S 71 -p

Alfredo

> On 21 Jun 2019, at 00:04, Jim Hranicky  wrote:
> 
> Should have sent this in: PF_RING version v.7.5.0.190404.
> 
> Jim
> 
> On 6/20/19 3:30 PM, Jim Hranicky wrote:
>> I seem to be : 
>> 
>>  % lsmod | egrep 'pf_ring|ixgbe'
>>  ixgbe_zc  339253  0 
>>  pf_ring  1246667  103 ixgbe_zc
>>  ptp19231  2 tg3,ixgbe_zc
>> 
>> Still seeing the same behavior with sar: 
>> 
>>   # sar -n DEV 1 4 | grep ens5f0
>>  03:27:05 PMens5f0 1074552.00  0.00 2394420.68  0.00  0.00   
>>0.00  3.00
>>  03:27:06 PMens5f0  0.00  0.00  0.00  0.00  0.00 
>>  0.00  0.00
>>  03:27:07 PMens5f0 1093220.00  0.00 2394094.27  0.00  0.00   
>>0.00  8.00
>>  03:27:08 PMens5f0  0.00  0.00  0.00  0.00  0.00 
>>  0.00  0.00
>>  Average:   ens5f0 541943.00  0.00 1197128.74  0.00  0.00
>>   0.00  2.75
>> 
>>   # sar -n EDEV 1 4 | grep ens5f0
>>  03:27:16 PMens5f0  0.00  0.00  0.00  0.00  0.00 
>>  0.00  0.00  0.00  0.00
>>  03:27:17 PMens5f0  0.00  0.00  0.00 1646283.00  0.00
>>   0.00  0.00  0.00  0.00
>>  03:27:18 PMens5f0  0.00  0.00  0.00  0.00  0.00 
>>  0.00  0.00  0.00  0.00
>>  03:27:19 PMens5f0  0.00  0.00  0.00 1618365.00  0.00
>>   0.00  0.00  0.00  0.00
>>  Average:   ens5f0  0.00  0.00  0.00 816162.00  0.00 
>>  0.00  0.00  0.00  0.00
>> 
>> zbalance_ipc is running without throwing any errors: 
>> 
>>  % ps -ef | grep zbalance
>>  root 7554  99 06:27 ? 12:12:02 /usr/local/pf/sbin/zbalance_ipc -i ens5f0 -m 
>> 4 -n 48 -c 99 -g 70 -S 71 -p
>> 
>> Output at start: 
>> 
>>  20/Jun/2019 06:27:29 [zbalance_ipc.c:1116] Starting balancer with 48 
>> consumer queues..
>>  20/Jun/2019 06:27:29 [zbalance_ipc.c:1126] Run your application instances 
>> as follows:
>>  20/Jun/2019 06:27:29 [zbalance_ipc.c:1131]  pfcount -i zc:99@0
>>  20/Jun/2019 06:27:29 [zbalance_ipc.c:1131]  pfcount -i zc:99@1
>> 
>> Jim
>> 
>> On 6/20/19 4:26 AM, Alfredo Cardigliano wrote:
>>> Hi Jim
>>> please note that you are not using ZC: your adapter is supported by the 
>>> ixgbe-zc driver.
>>> For best performance please configure it following this guide: 
>>> http://www.ntop.org/guides/pf_ring/get_started/packages_installation.html
>>> 
>>> Regards
>>> Alfredo
>>> 
>>>> On 19 Jun 2019, at 00:11, Jim Hranicky  wrote:
>>>> 
>>>> I've noticed some strange output from sar when running zbalance_ipc. 
>>>> It seems I only get stats every other second, on odd seconds in this
>>>> case: 
>>>> 
>>>> % sar -n DEV 1 10 | grep ens5f0
>>>> 05:58:09 PM  IFACE   rxpck/s   txpck/s rxkB/stxkB/s   rxcmp/s  
>>>>  txcmp/s  rxmcst/s
>>>> 05:58:09 PMens5f0 1001033.00  0.00 1440901.59  0.00  0.00  
>>>> 0.00 20.00
>>>> 05:58:10 PMens5f0   0.00  0.00   0.00  0.00  0.00  
>>>> 0.00  0.00
>>>> 05:58:11 PMens5f0 1024305.00  0.00 1422458.64  0.00  0.00  
>>>> 0.00 24.00
>>>> 05:58:12 PMens5f0   0.00  0.00   0.00  0.00  0.00  
>>>> 0.00  0.00
>>>> 05:58:13 PMens5f0 1028284.00  0.00 1567748.74  0.00  0.00  
>>>> 0.00 36.00
>>>> 05:58:14 PMens5f0   0.00  0.00   0.00  0.00  0.00  
>>>> 0.00  0.00
>>>> 05:58:15 PMens5f0 1037512.00  0.00 156836

Re: [Ntop-misc] sar output/zbalance_ipc

2019-06-20 Thread Alfredo Cardigliano
Hi Jim
please note that you are not using ZC: your adapter is supported by the 
ixgbe-zc driver.
For best performance please configure it following this guide: 
http://www.ntop.org/guides/pf_ring/get_started/packages_installation.html


Regards
Alfredo

> On 19 Jun 2019, at 00:11, Jim Hranicky  wrote:
> 
> I've noticed some strange output from sar when running zbalance_ipc. 
> It seems I only get stats every other second, on odd seconds in this
> case: 
> 
> % sar -n DEV 1 10 | grep ens5f0
> 05:58:09 PM  IFACE   rxpck/s   txpck/s rxkB/stxkB/s   rxcmp/s   
> txcmp/s  rxmcst/s
> 05:58:09 PMens5f0 1001033.00  0.00 1440901.59  0.00  0.00 
>  0.00 20.00
> 05:58:10 PMens5f0   0.00  0.00   0.00  0.00  0.00 
>  0.00  0.00
> 05:58:11 PMens5f0 1024305.00  0.00 1422458.64  0.00  0.00 
>  0.00 24.00
> 05:58:12 PMens5f0   0.00  0.00   0.00  0.00  0.00 
>  0.00  0.00
> 05:58:13 PMens5f0 1028284.00  0.00 1567748.74  0.00  0.00 
>  0.00 36.00
> 05:58:14 PMens5f0   0.00  0.00   0.00  0.00  0.00 
>  0.00  0.00
> 05:58:15 PMens5f0 1037512.00  0.00 1568361.43  0.00  0.00 
>  0.00 20.00
> 05:58:16 PMens5f0   0.00  0.00   0.00  0.00  0.00 
>  0.00  0.00
> 05:58:17 PMens5f0 1009894.00  0.00 1482515.69  0.00  0.00 
>  0.00 10.00
> 05:58:18 PMens5f0   0.00  0.00   0.00  0.00  0.00 
>  0.00  0.00
> Average:   ens5f0 510102.80   0.00 748198.61   0.00  0.00 
>  0.00 11.00
> 
> Checking for drops, I see them on the even seconds: 
> 
> % sar -n EDEV 1 10 | grep ens5f0
> 05:58:38 PM IFACE   rxerr/s   txerr/scoll/s  rxdrop/s  txdrop/s  
> txcarr/s  rxfram/s  rxfifo/s  txfifo/s
> 05:58:39 PMens5f0  0.00  0.00  0.00  0.00  0.00  
> 0.00  0.00  0.00  0.00
> 05:58:40 PMens5f0  0.00  0.00  0.00 629536.00  0.00  
> 0.00  0.00  0.00  0.00
> 05:58:41 PMens5f0  0.00  0.00  0.00  0.00  0.00  
> 0.00  0.00  0.00  0.00
> 05:58:42 PMens5f0  0.00  0.00  0.00 517831.00  0.00  
> 0.00  0.00  0.00  0.00
> 05:58:43 PMens5f0  0.00  0.00  0.00  0.00  0.00  
> 0.00  0.00  0.00  0.00
> 05:58:44 PMens5f0  0.00  0.00  0.00 558658.00  0.00  
> 0.00  0.00  0.00  0.00
> 05:58:45 PMens5f0  0.00  0.00  0.00  0.00  0.00  
> 0.00  0.00  0.00  0.00
> 05:58:46 PMens5f0  0.00  0.00  0.00 595583.00  0.00  
> 0.00  0.00  0.00  0.00
> 05:58:47 PMens5f0  0.00  0.00  0.00  0.00  0.00  
> 0.00  0.00  0.00  0.00
> 05:58:48 PMens5f0  0.00  0.00  0.00 588821.00  0.00  
> 0.00  0.00  0.00  0.00
> Average:   ens5f0  0.00  0.00  0.00 289042.90  0.00  
> 0.00  0.00  0.00  0.00
> 
> ethtool is reporting a lot of missed packets: 
> 
> % ethtool -S ens5f0 | egrep 'rx_dropped|rx_missed|rx_packets|errors'
> rx_packets: 1775168454257
> rx_errors: 0
> tx_errors: 0
> rx_dropped: 0
> rx_over_errors: 0
> rx_crc_errors: 0
> rx_frame_errors: 0
> rx_fifo_errors: 0
> rx_missed_errors: 1031957653637
> tx_aborted_errors: 0
> tx_carrier_errors: 0
> tx_fifo_errors: 0
> tx_heartbeat_errors: 0
> rx_length_errors: 0
> rx_long_length_errors: 0
> rx_short_length_errors: 0
> rx_csum_offload_errors: 13923182
> fcoe_last_errors: 0
> 
> zbalance_ipc : 
> 
>  /usr/local/pf/sbin/zbalance_ipc -i ens5f0 -m 4 -n 48 -c 99 -g 70 -S 71 -p
> 
> 48 snorts are running along with zbalance_ipc . 
> 
> Can anyone account for this behavior? Is zbalance_ipc unable to keep up, 
> or are there config changes I should make?
> 
> Card info : 
> 
>  81:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ 
> Network Connection (rev 01)
> 
> Thanks,
> --
> Jim Hranicky
> Data Security Specialist
> UF Information Technology
> 
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc

___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] Problem with bro/zeek and pf_ring/ZC

2019-03-14 Thread Alfredo Cardigliano
Hi Jim
it seems that Zeek has not been linked against the PF_RING-aware libpcap.

Could you double-check with:

ldd /usr/local/bro/bin/bro | grep pcap
libpcap.so.1 => /usr/local/lib/libpcap.so.1 (0x7fa371e33000)

as explained in this guide?
http://www.ntop.org/guides/pf_ring/thirdparty/bro.html


Alfredo

> On 14 Mar 2019, at 02:46, Jim Hranicky  wrote:
> 
> Today I upgraded from zeek-2.6-beta2 and pf_ring 7.3.0
> 
>  PF_RING 7.3.0 ($Revision: dev:c85efbc90d5abb7ef471be17cf9192b88a842ac4$)
> 
> to zeek 2.6.1 and the latest pf_ring from git
> 
>  PF_RING 7.5.0 ($Revision: dev:342b85fe63a2f0cdd70cd16fefebe99e6a8657af$)
> 
> My interfaces were configured like so to work with zbalance_ipc :
> 
>  [worker-1]
>  type=worker
>  host=localhost
>  interface=zc:99@0
>  lb_method=pf_ring
>  lb_procs=1
> 
>  [worker-2]
>  type=worker
>  host=localhost
>  interface=zc:99@2
>  lb_method=pf_ring
>  lb_procs=1
> 
> etc.
> 
> When I start up zeek/bro, all the workers crash with
> 
>  fatal error: problem with interface zc:99@0@0 (pcap_error: SIOCGIFHWADDR: No 
> such device (pcap_activate))
> 
> Anyone know what I need to tweak to get this to work?
> 
> Thanks,
> 
> --
> Jim Hranicky
> Data Security Specialist
> UF Information Technology
> 720 SW 2nd Avenue Suite 450, North Tower, Gainesville, FL 32605
> 352-273-1341
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc

___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] PF_RING 7.x, ZeroCopy and ksoftirqd

2018-10-03 Thread Alfredo Cardigliano
Hi Hovsep
please provide /proc/net/pf_ring/dev/eth4/info

Alfredo

> On 3 Oct 2018, at 17:21, Hovsep Levi  wrote:
> 
> I forgot about my testing using CPU 17 in my previous post.
> 
> This is a test using CPU 16 which is the first CPU on the local NUMA node 
> where the NIC is attached. 
> 
> /opt/pfring/sbin/zbalance_ipc -i eth4 -c 2 -n 14 -g 16 -m 1 -q 2048
> 
> ksoftirqd will spend 50% of CPU time dealing with interrupts, maybe because 
> we are not using the "ZC" interface label.
> 
>   PID USER  PR  NIVIRTRESSHR S  %CPU %MEM TIME+ COMMAND   
>   
>   
>   108 root  20   0   0  0  0 R  50.0  0.0   9:04.96 
> ksoftirqd/16  
> 
>  2848 root  20   0 1171116   3248   1916 S  50.0  0.0   0:04.08 
> zbalance_ipc
> 
> Why does the interface label "zc" not work with 7.x ? 
> 
> 
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc

___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc


Re: [Ntop-misc] nprobe dying with Accolade

2018-09-13 Thread Alfredo Cardigliano
Hi David
I will send you instructions for generating a core dump in order to debug this 
issue,
btw please use https://github.com/ntop/nProbe/issues for support requests/tickets.

Thank you
Alfredo

> On 13 Sep 2018, at 09:26, David Notivol  wrote:
> 
> Hello,
> 
> We're using nprobe with an Accolade ANIC-20ku card. We run zbalance_ipc to 
> merge the 2 anic interfaces using GTP hash to 16 queues. We run 16 nprobe 
> instances reading this traffic.
> The problem is the nprobe instances are dying after some minutes working. 
> They dump to console the backtrace I'm pasting below.
> 
> Do you have any idea of why this is happening or what should I look into?
> Thanks a lot in advance.
> 
> I just bought the nprobe-accolade license today, sorry if if this is not the 
> place to raise this issue).
> 
> -- System:
> 
> nProbe:   nprobe-8.5.180814-6251.x86_64
> System RAM: 128GB
> System CPU: 56 cores (2 sockets * 14 cores* 2 threads)
> System OS:CentOS Linux release 7.5.1804 (Core)
> Linux Kernel:   3.10.0-862.6.3.el7.x86_64 #1 SMP Tue Jun 26 16:32:21 UTC 2018 
> x86_64 x86_64 x86_64 GNU/Linux
> 
> 
> -- Backtrace:
> 
> *** Error in `/usr/local/bin/nprobe': corrupted size vs. prev_size: 
> 0x7f4af2b9c720 ***
> === Backtrace: =
> /lib64/libc.so.6(+0x7f5e4)[0x7f4b9a86b5e4]
> /lib64/libc.so.6(+0x816db)[0x7f4b9a86d6db]
> /usr/local/lib/libnprobe-8.5.180814.so 
> (purgeBucket+0x6d5)[0x7f4b9d719342]
> /usr/local/lib/libnprobe-8.5.180814.so 
> (dequeueBucketToExport+0x307)[0x7f4b9d718b5a]
> /lib64/libpthread.so.0(+0x7e25)[0x7f4b9c4a2e25]
> /lib64/libc.so.6(clone+0x6d)[0x7f4b9a8eabad]
> === Memory map: 
> 0040-0043f000 r-xp  08:02 3564759
> /usr/local/bin/nprobe
> 0063e000-0063f000 r--p 0003e000 08:02 3564759
> /usr/local/bin/nprobe
> 0063f000-0064 rw-p 0003f000 08:02 3564759
> /usr/local/bin/nprobe
> 0064-00689000 rw-p  00:00 0
> 02296000-02617000 rw-p  00:00 0  
> [heap]
> 2ac0-2aab2ac0 rw-s  00:24 176861 
> /dev/hugepages/pfring_zc_1
> 7f4aec00-7f4af3fa2000 rw-p  00:00 0
> 
> [...lines deleted, mail getting too big...]
> 
> 7f4b04021000-7f4b0800 ---p  00:00 0
> 7f4b080d1000-7f4b0c00 r--s  08:02 534485 
> /usr/share/ntopng/httpdocs/geoip/GeoLite2-City.mmdb
> 7f4b0c00-7f4b0c021000 rw-p  00:00 0
> 7f4b0c021000-7f4b1000 ---p  00:00 0
> 
> [...lines deleted, mail getting too big...]
> 
> 7f4b912c1000-7f4b91ac1000 rw-p  00:00 0
> 7f4b91ac1000-7f4b91ac5000 r-xp  08:02 3540940
> /usr/lib64/sasl2/libanonymous.so.3.0.0
> 7f4b91ac5000-7f4b91cc4000 ---p 4000 08:02 3540940
> /usr/lib64/sasl2/libanonymous.so.3.0.0
> 7f4b91cc4000-7f4b91cc5000 r--p 3000 08:02 3540940
> /usr/lib64/sasl2/libanonymous.so.3.0.0
> 7f4b91cc5000-7f4b91cc6000 rw-p 4000 08:02 3540940
> /usr/lib64/sasl2/libanonymous.so.3.0.0
> 7f4b91cc6000-7f4b91cca000 r-xp  08:02 3557678
> /usr/lib64/sasl2/liblogin.so.3.0.0
> 7f4b91cca000-7f4b91ec9000 ---p 4000 08:02 3557678
> /usr/lib64/sasl2/liblogin.so.3.0.0
> 7f4b91ec9000-7f4b91eca000 r--p 3000 08:02 3557678
> /usr/lib64/sasl2/liblogin.so.3.0.0
> 7f4b91eca000-7f4b91ecb000 rw-p 4000 08:02 3557678
> /usr/lib64/sasl2/liblogin.so.3.0.0
> 7f4b91ecb000-7f4b9208 r-xp  08:02 3540137
> /usr/lib64/libdb-5.3.so 
> 7f4b9208-7f4b9228 ---p 001b5000 08:02 3540137
> /usr/lib64/libdb-5.3.so 
> 7f4b9228-7f4b92287000 r--p 001b5000 08:02 3540137
> /usr/lib64/libdb-5.3.so 
> 7f4b92287000-7f4b9228a000 rw-p 001bc000 08:02 3540137
> /usr/lib64/libdb-5.3.so 
> 7f4b9228a000-7f4b9229 r-xp  08:02 3540943
> /usr/lib64/sasl2/libsasldb.so.3.0.0
> 7f4b9229-7f4b9248f000 ---p 6000 08:02 3540943
> /usr/lib64/sasl2/libsasldb.so.3.0.0
> 
> [...lines deleted, mail getting too big...]
> 
> 7f4b92695000-7f4b92696000 rw-p 4000 08:02 3557681
> /usr/lib64/sasl2/libplain.so.3.0.0
> 7f4b92696000-7f4b9270a000 r-xp  08:02 534538 
> /usr/local/lib/nprobe/plugins/libssdpPlugin-8.5.180814.so 
> 
> 
> [...lines deleted, mail getting too big...]
> 
> 7f4b9d6c1000-7f4b9d8cd000 r-xp  08:02 3564760
> /usr/local/lib/libnprobe-8.5.180814.so 
> 7f4b9d8cd000-7f4b9dacd0

Re: [Ntop-misc] distributed nprobe

2018-09-07 Thread Alfredo Cardigliano
Hi Felix
you can use the standard pf_ring kernel clustering in nProbe by
adding the --cluster-id <id> option (you need to specify the same id
for all nProbe instances in the group in order to distribute the traffic).
You can use a bpf filter (--bpf-filter|-f <filter>) to filter traffic.
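
Under the hood --cluster-id maps to the pf_ring clustering API; a minimal hedged
sketch (device name, cluster id and helper name are illustrative):

```
#include <pfring.h>

/* Each instance opens its own socket and joins the same cluster id;
 * the kernel then balances flows across the cluster members. */
pfring *join_cluster(const char *dev, u_int cluster_id) {
  pfring *ring = pfring_open(dev, 1536, PF_RING_PROMISC);

  if (ring == NULL) return NULL;
  /* cluster_per_flow: packets of the same flow reach the same socket */
  if (pfring_set_cluster(ring, cluster_id, cluster_per_flow) != 0) {
    pfring_close(ring);
    return NULL;
  }
  pfring_enable_ring(ring);
  return ring;
}
```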

Regards
Alfredo

> On 7 Sep 2018, at 14:55, erlac...@campus.uni-paderborn.de wrote:
> 
> Signed PGP part
> Dear ntop people,
> 
> I use nprobe to aggregate ip packets to IPFIX flows (and then analyze
> them on another machine). Because I also aggregate http fields I had to
> use multiple nprobe instances to keep up with high throughput rates.
> Until now I used zbalance_ipc -m 1 to distribute packets according to
> their IP hash to the single nprobe instances.
> The problem is that now I need to do kernel routing on the incoming
> device, and thus cannot use zero copy (or zbalance_ipc) anymore because
> that makes the device invisible to the kernel.
> The question is:
> 
> -Is there another way to distribute the incoming traffic to multiple
> nprobe instances (as with IP hashing)?
> 
> -Is there a way that I can filter packets in nprobe, so that they are
> distributed more or less equally among multiple nprobe instances (again,
> same IP should go to same instance)?
> 
> Thanks for any hints!
> 
> regards
> 
> Felix
> 
> 
> 
> 



___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] PF_RING Intel 82599ES hardware filters

2018-08-17 Thread Alfredo Cardigliano
Hi Raphael
I double-checked and it actually seems that there is some confusion in the 
FAQ:
pf_ring currently supports perfect filters (8K-2) and 5-tuple filters (128). I 
will fix the
FAQ to clarify this. Thank you for reporting.

Best Regards
Alfredo

> On 16 Aug 2018, at 17:03, Raphael Benedet  wrote:
> 
> Hi Alfredo,
> 
> pfring_add_hw_rule() is already the function I use. I modified the code of 
> pfcount_82599.c to add rules of type intel_82599_perfect_filter_rule in a 
> loop:
> int i;
> for (i = 1; i < 4; i++) {
>   memset(&rule, 0, sizeof(rule)), rule.rule_family_type = 
> intel_82599_perfect_filter_rule;
>   rule.rule_id = rule_id++, perfect_rule->queue_id = -1, 
> perfect_rule->proto = 17,
> perfect_rule->d_port = i;
>   rc = pfring_add_hw_rule(pd, &rule);
>   if(rc != 0) {
> printf("pfring_add_hw_rule(%d) failed [rc=%d]: did you enable the 
> FlowDirector (ethtool -K ethX ntuple on)\n", rule.rule_id, rc);
> break;
>   }
> }
> 
> And pfring_add_hw_rule fails after 8190 rule insertions.
> 
> When using 5-tuple rules (intel_82599_five_tuple_rule), pfring_add_hw_rule 
> fails after 128 rules. This is in-line with the Intel 82599 data sheet 
> (https://www.intel.com/content/dam/www/public/us/en/documents/datasheets/82599-10-gbe-controller-datasheet.pdf
>  
> <https://www.intel.com/content/dam/www/public/us/en/documents/datasheets/82599-10-gbe-controller-datasheet.pdf>,
>  page 292).
> 
> So, I'm wondering which code to use to be able to insert the 32K filters you 
> mention in the FAQ?
> 
> Best Regards,
> Raphael
> 
> 
> 
> On Thu, Aug 16, 2018 at 4:35 PM, Alfredo Cardigliano  <mailto:cardigli...@ntop.org>> wrote:
> Hi Raphael
> please note that it is possible to use both filter types using the 
> pfring_add_hw_rule() API,
> just set hw_filtering_rule.rule_family_type to intel_82599_five_tuple_rule or
> intel_82599_perfect_filter_rule and set the corresponding struct
> (intel_82599_five_tuple_filter_hw_rule / struct 
> intel_82599_perfect_filter_hw_rule)
> Please see the sample code at
> https://github.com/ntop/PF_RING/blob/dev/userland/examples/pfcount_82599.c 
> <https://github.com/ntop/PF_RING/blob/dev/userland/examples/pfcount_82599.c>
> 
> Alfredo
> 
>> On 16 Aug 2018, at 16:22, Raphael Benedet > <mailto:raphael.bene...@netaxis.be>> wrote:
>> 
>> Hello,
>> 
>> I'm trying to set up hardware filtering on an Intel X520 board. I wanted to 
>> check how many filters I could set on this board so I started from the 
>> example pfcount_82599.c file and added a loop to add filters sequentially. 
>> The function pfring_add_hw_rule fails after ~ 8K filters.
>> 
>> On the PF_RING packets filtering page 
>> (https://www.ntop.org/products/packet-capture/pf_ring/hardware-packet-filtering/
>>  
>> <https://www.ntop.org/products/packet-capture/pf_ring/hardware-packet-filtering/>),
>>  the FAQ mentions that 32K filters are supported:
>> 
>> Q. How many filters a 82599-based card typically supports?
>> A. You can have up to 32K hardware filters.
>> 
>> I checked the Intel 82599 data sheet and the chapter "Flow Director Filters" 
>> mentions both a limit of ~8K filters (the one I seem to hit) and ~32K 
>> filters:
>> 
>> The 82599 support two types of filtering modes (static setting by the 
>> FDIRCTRL.PerfectMatch bit):
>> • Perfect match filters — The hardware checks a match between the masked 
>> fields of the received packets and the programmed filters. Masked fields 
>> should be programmed as zeros in the filter context. The 82599 support up to 
>> 8 K - 2 perfect match filters.
>> • Signature filters — The hardware checks a match between a hash-based 
>> signature of the masked fields of the received packet. The 82599 supports up 
>> to 32 K - 2 signature filters.
>> 
>> Do you know if there is a way to have access to these ~ 32K filters through 
>> PF_RING?
>> 
>> Best Regards,
>> Raphael
>> ___
>> Ntop-misc mailing list
>> Ntop-misc@listgateway.unipi.it <mailto:Ntop-misc@listgateway.unipi.it>
>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc 
>> <http://listgateway.unipi.it/mailman/listinfo/ntop-misc>
> 
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it <mailto:Ntop-misc@listgateway.unipi.it>
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc 
> <http://listgateway.unipi.it/mailman/listinfo/ntop-misc>
> 
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc



___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] PF_RING Intel 82599ES hardware filters

2018-08-16 Thread Alfredo Cardigliano
Hi Raphael
please note that it is possible to use both filter types using the 
pfring_add_hw_rule() API,
just set hw_filtering_rule.rule_family_type to intel_82599_five_tuple_rule or
intel_82599_perfect_filter_rule and set the corresponding struct
(intel_82599_five_tuple_filter_hw_rule / struct 
intel_82599_perfect_filter_hw_rule)
Please see the sample code at
https://github.com/ntop/PF_RING/blob/dev/userland/examples/pfcount_82599.c 
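
In short, the relevant fragment looks like the following hedged sketch (adapted
from pfcount_82599.c; the rule id, port and helper name are illustrative):

```
#include <string.h>
#include <stdio.h>
#include <pfring.h>

/* Drop all UDP traffic to port 53 in hardware on an 82599 NIC.
 * Requires FlowDirector: ethtool -K ethX ntuple on. */
int add_drop_rule(pfring *pd) {
  hw_filtering_rule rule;
  intel_82599_perfect_filter_hw_rule *perfect_rule;

  memset(&rule, 0, sizeof(rule));
  rule.rule_family_type = intel_82599_perfect_filter_rule;
  rule.rule_id = 1;
  perfect_rule = &rule.rule_family.perfect_rule;
  perfect_rule->queue_id = -1;   /* -1 = drop the packet */
  perfect_rule->proto = 17;      /* UDP */
  perfect_rule->d_port = 53;

  if (pfring_add_hw_rule(pd, &rule) != 0) {
    fprintf(stderr, "pfring_add_hw_rule failed\n");
    return -1;
  }
  return 0;
}
```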


Alfredo

> On 16 Aug 2018, at 16:22, Raphael Benedet  wrote:
> 
> Hello,
> 
> I'm trying to set up hardware filtering on an Intel X520 board. I wanted to 
> check how many filters I could set on this board so I started from the 
> example pfcount_82599.c file and added a loop to add filters sequentially. 
> The function pfring_add_hw_rule fails after ~ 8K filters.
> 
> On the PF_RING packets filtering page 
> (https://www.ntop.org/products/packet-capture/pf_ring/hardware-packet-filtering/
>  
> ),
>  the FAQ mentions that 32K filters are supported:
> 
> Q. How many filters a 82599-based card typically supports?
> A. You can have up to 32K hardware filters.
> 
> I checked the Intel 82599 data sheet and the chapter "Flow Director Filters" 
> mentions both a limit of ~8K filters (the one I seem to hit) and ~32K filters:
> 
> The 82599 support two types of filtering modes (static setting by the 
> FDIRCTRL.PerfectMatch bit):
> • Perfect match filters — The hardware checks a match between the masked 
> fields of the received packets and the programmed filters. Masked fields 
> should be programmed as zeros in the filter context. The 82599 support up to 
> 8 K - 2 perfect match filters.
> • Signature filters — The hardware checks a match between a hash-based 
> signature of the masked fields of the received packet. The 82599 supports up 
> to 32 K - 2 signature filters.
> 
> Do you know if there is a way to have access to these ~ 32K filters through 
> PF_RING?
> 
> Best Regards,
> Raphael
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc



___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] Determining rx queues of an interface

2018-07-11 Thread Alfredo Cardigliano
Hi Amir
the number of rx queues is determined when loading the driver, or at runtime
with ethtool ‘combined’, as you can see at:

https://github.com/ntop/PF_RING/blob/dev/drivers/intel/i40e/i40e-2.4.6-zc/src/load_driver.sh#L39
 


Please also make sure that you are actually unloading the old driver
before running insmod (maybe the new insmod is failing).
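
To double-check from the application side how many RX queues the driver actually
exposes, a small hedged sketch (the interface name is illustrative):

```
#include <stdio.h>
#include <pfring.h>

int main(void) {
  pfring *ring = pfring_open("eth4", 1536, PF_RING_PROMISC);

  if (ring == NULL) return 1;
  printf("RX queues exposed by the driver: %u\n",
         pfring_get_num_rx_channels(ring));
  pfring_close(ring);
  return 0;
}
```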

Alfredo

> On 10 Jul 2018, at 15:46, Amir Kaduri  wrote:
> 
> Are the rx queues of an interface determined only when loading the kernel 
> driver, or is there something else that affects them?
> 
> My problem is that I get a number that differs from what I expect 
> (maybe because something during the machine boot doesn't work properly).
> 
> For example:
> What I assume our load script does is "insmod 
> /usr/local/pfring/drivers/intel/i40e.ko RSS=12,12,12,12,12,12,12,12"
> What I actually get when I run "cat /proc/net/pf_ring/dev/eth4/info"
> is "RX Queues:14" (from previous reboot).
> 
> So maybe the "insmod" doesn't actually work as expected, or is there anything 
> else that I'm not aware of?
> 
> I'd appreciate any help.
> 
> Thanks
> 
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc



___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] problem starting PF_RING ZC with multiple receive threads

2018-07-02 Thread Alfredo Cardigliano
Hi Robert
please note that ZC is not compatible with kernel clustering; this means that:
1. you should not set --pfring-cluster-id=99 --pfring-cluster-type=cluster_flow
2. if you want multiple capture threads, you should use RSS and capture from
each RSS interface/queue (e.g. zc:p1p1@0 and zc:p1p1@1 if you have RSS=2)
In your example, Suricata is trying to enable multiple sockets on the same
interface/queue, thus the failure.
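
Point 2 in code form, as a hedged sketch (the device name assumes p1p1 loaded
with RSS=2; the helper name is illustrative):

```
#include <stdio.h>
#include <pfring.h>

/* One capture socket per RSS queue; each worker thread gets its own queue. */
int open_per_queue(pfring *rings[], int num_queues) {
  char dev[32];
  int i;

  for (i = 0; i < num_queues; i++) {
    snprintf(dev, sizeof(dev), "zc:p1p1@%d", i);
    rings[i] = pfring_open(dev, 1536, PF_RING_PROMISC);
    if (rings[i] == NULL) return -1;
    pfring_enable_ring(rings[i]);
  }
  return 0;
}
```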

Alfredo

> On 2 Jul 2018, at 19:54, Robert Cyphers  wrote:
> 
> Hello NTOP users.
> 
> I'm looking for hints on running Suricata over PF_RING ZC with multiple 
> receive threads.
> 
> I have it running in single threaded mode, but it doesn't want to start up 
> with more than one thread.
> 
> 
> One thread runs ok:
> 
> ```
> shoshin@pit6:~$ sudo suricata --pfring-int=zc:p1p1 --pfring-cluster-id=99 
> --pfring-cluster-type=cluster_flow -c 
> /usr/local/etc/suricata/rcc/suricata-pfring-zc-v1.yaml --init-errors-fatal 
> --runmode workers -v
> 2/7/2018 -- 13:04:02 -  - This is Suricata version 4.0.4 RELEASE
> 2/7/2018 -- 13:04:02 -  - CPUs/cores online: 80
> 2/7/2018 -- 13:04:03 -  - Running in live mode, activating unix socket
> 2/7/2018 -- 13:04:06 -  - 38 rule files processed. 12462 rules 
> successfully loaded, 0 rules failed
> 2/7/2018 -- 13:04:06 -  - Threshold config parsed: 0 rule(s) found
> 2/7/2018 -- 13:04:06 -  - 12467 signatures processed. 1168 are IP-only 
> rules, 5189 are inspecting packet payload, 7608 inspect application layer, 0 
> are decoder event only
> 2/7/2018 -- 13:04:12 -  - fast output device (regular) initialized: 
> fast.log
> 2/7/2018 -- 13:04:12 -  - eve-log output device (regular) initialized: 
> eve.json
> 2/7/2018 -- 13:04:12 -  - stats output device (regular) initialized: 
> stats.log
> 2/7/2018 -- 13:04:12 -  - Using flow cluster mode for PF_RING (iface 
> zc:p1p1)
> 2/7/2018 -- 13:04:12 -  - Going to use 1 thread(s)
> #
> # ERROR: You do not seem to have a valid PF_RING ZC license 7.3.0.180618 for 
> p1p1 [Intel 10/40 Gbit i40e family]
> # ERROR: Please get one at http://shop.ntop.org/.
> #
> # We're now working in demo mode with packet capture and
> # transmission limited to 5 minutes
> #
> 2/7/2018 -- 13:04:13 -  - ZC interface detected, not adding thread to 
> cluster
> 2/7/2018 -- 13:04:13 -  - RunModeIdsPfringWorkers initialised
> 2/7/2018 -- 13:04:13 -  - Running in live mode, activating unix socket
> 2/7/2018 -- 13:04:13 -  - Using unix socket file 
> '/usr/local/var/run/suricata/suricata-command.socket'
> 2/7/2018 -- 13:04:13 -  - all 1 packet processing threads, 4 
> management threads initialized, engine started.
> 2/7/2018 -- 13:04:34 -  - [ERRCODE: SC_ERR_PF_RING_VLAN(302)] - no 
> VLAN header in the raw packet. See #2355.
> ^C2/7/2018 -- 13:06:17 -  - Signal Received.  Stopping engine.
> 2/7/2018 -- 13:07:49 -  - [ERRCODE: SC_ERR_FATAL(171)] - Engine unable 
> to disable detect thread - "FM#01".  Killing engine
> ```
> 
> ---
> 
> Two threads fails to start:
> 
> ```
> shoshin@pit6:~$ sudo suricata --pfring-int=zc:p1p1 --pfring-cluster-id=99 
> --pfring-cluster-type=cluster_flow -c 
> /usr/local/etc/suricata/rcc/suricata-pfring-zc-v1.yaml --init-errors-fatal 
> --runmode workers -v
> 2/7/2018 -- 13:01:01 -  - This is Suricata version 4.0.4 RELEASE
> 2/7/2018 -- 13:01:01 -  - CPUs/cores online: 80
> 2/7/2018 -- 13:01:02 -  - Running in live mode, activating unix socket
> 2/7/2018 -- 13:01:04 -  - 38 rule files processed. 12462 rules 
> successfully loaded, 0 rules failed
> 2/7/2018 -- 13:01:04 -  - Threshold config parsed: 0 rule(s) found
> 2/7/2018 -- 13:01:05 -  - 12467 signatures processed. 1168 are IP-only 
> rules, 5189 are inspecting packet payload, 7608 inspect application layer, 0 
> are decoder event only
> 2/7/2018 -- 13:01:11 -  - fast output device (regular) initialized: 
> fast.log
> 2/7/2018 -- 13:01:11 -  - eve-log output device (regular) initialized: 
> eve.json
> 2/7/2018 -- 13:01:11 -  - stats output device (regular) initialized: 
> stats.log
> 2/7/2018 -- 13:01:11 -  - Using flow cluster mode for PF_RING (iface 
> zc:p1p1)
> 2/7/2018 -- 13:01:11 -  - Going to use 2 thread(s)
> #
> # ERROR: You do not seem to have a valid PF_RING ZC license 7.3.0.180618 for 
> p1p1 [Intel 10/40 Gbit i40e family]
> # ERROR: Please get one at http://shop.ntop.org/.
> #
> # We're now working in demo mode with packet capture and
> # transmission limited to 5 minutes
> #
> 2/7/2018 -- 13:01:12 -  - ZC interface detected, not adding thread to 
> cluster
> ##

Re: [Ntop-misc] nbpf questions

2018-06-29 Thread Alfredo Cardigliano
Hi Bowen
libpcap-over-pfring actually uses standard BPF, unless you are 1) capturing
from an adapter supporting hw filters (in that case pf_ring translates the bpf to
hw rules using nbpf, and uses standard bpf in userspace as a fallback), or
2) extracting traffic from a n2disk dumpset with the timeline enabled.
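
For reference, the userspace path is the usual libpcap one; a minimal hedged
sketch (the interface and filter are illustrative):

```
#include <stdio.h>
#include <pcap.h>

int main(void) {
  char errbuf[PCAP_ERRBUF_SIZE];
  struct bpf_program fp;
  /* With the PF_RING-aware libpcap this same code transparently attempts
   * hw offload (via nbpf) where supported, else falls back to standard BPF. */
  pcap_t *p = pcap_open_live("eth1", 1536, 1, 500, errbuf);

  if (p == NULL) { fprintf(stderr, "%s\n", errbuf); return 1; }
  if (pcap_compile(p, &fp, "not host 10.0.0.1 and not host 10.0.0.2",
                   1, PCAP_NETMASK_UNKNOWN) != 0 ||
      pcap_setfilter(p, &fp) != 0) {
    fprintf(stderr, "%s\n", pcap_geterr(p));
    return 1;
  }
  pcap_close(p);
  return 0;
}
```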

Alfredo

> On 29 Jun 2018, at 04:39, Bowen Li  wrote:
> 
> Hi Alfredo
> I did not write custom code using nbpf_parse and nbpf_match; I tested nbpf 
> using bro ids with the libpcap from PF_RING.
> I think pcap_compile and pcap_setfilter in the libpcap from PF_RING use nbpf by 
> default, and I find that the bpf operations
> in libpfring also use functions from libpcap; am I correct?
> Just now I reran my test in a 10Gbit environment; it seems that the 
> number of host items in the bpf string still has no
> effect on the processing speed of PF_RING.
> What is the main factor limiting the maximum number of hosts that 
> nbpf can support in a bpf string?
> 
> Alfredo Cardigliano <cardigli...@ntop.org> 
> wrote on Thursday, 28 June 2018 at 3:34 PM:
> Hi Bowen
> said that I am still missing something in your implementation (did you write
> custom code using nbpf_parse and nbpf_match ?), your test results could
> be reliable if you are checking the processing speed at 1Gbit.
> 
> Alfredo
> 
>> On 28 Jun 2018, at 09:23, Bowen Li > <mailto:newfire...@gmail.com>> wrote:
>> 
>> Hi Alfredo
>> Thanks for replying.
>> My test environment:
>> CentOS Linux release 7.2.1511 (Core)  3.10.0-327.13.1.el7.x86_64
>> Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz
>> Memory: 128G
>> 
>> PF_RING Version  : 7.2.0 
>> (7.2.0-stable:745f567720be0f28385ce923ba9f4957d6fe35cf)
>> Total rings  : 21
>> Standard (non ZC) Options
>> Ring slots   : 4096
>> Slot version : 17
>> Capture TX   : Yes [RX+TX]
>> IP Defragment: No
>> Socket Mode  : Standard
>> Cluster Fragment Queue   : 0
>> Cluster Fragment Discard : 0
>> 
>> Ethernet controller: Intel Corporation 82574L Gigabit Network 
>> Connection
>> Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ 
>> Network Connection (rev 01)
>> 
>> bro ids version 2.5.2
>> 
>> My goal is to use nbpf to shunt traffic from some hosts instead of 
>> capturing traffic from specific hosts, so I ran the test.
>> I use two 10G interfaces on the same NIC to send traffic from one to the 
>> other (I also do this on a 1G NIC) using pfsend; bro ids listens on the receiving 
>> interface with a bpf filter. I use the
>> "cmd_line_bpf_filter" param in bro to pass the filter to PF_RING. My test result 
>> is: with the format "not host A and not host B and ...", the maximum number of hosts
>> is 466, and the number of host items seems to have no effect on the 
>> processing speed of PF_RING. Are my test results reliable?
>> 
>> Alfredo Cardigliano <cardigli...@ntop.org> 
>> wrote on Wednesday, 27 June 2018 at 4:05 PM:
>> Hi Bowen
>> the nbpf syntax actually supports the not operator; however, it depends
>> on the actual backend (we probably need to extend the guide to comment
>> more on this). For instance, when translating the filter into hw rules for 
>> offloading
>> to the adapter, in most cases it is not possible to use the not operator.
>> What is your use case/application/card where you are using nbpf?
>> 
>> Regards
>> Alfredo
>> 
>>> On 27 Jun 2018, at 04:48, Bowen Li >> <mailto:newfire...@gmail.com>> wrote:
>>> 
>>> Hi all,
>>> The README of the nbpf section on github notes that “NOT” cannot be used as a 
>>> keyword in filters; however, I used “NOT” and the filter was effective in my 
>>> test. I want to know if there is something wrong in the official 
>>> documents or whether I omitted anything in my code.
>>> If the filter format is “not host A and not host B and...”, how 
>>> many hosts can nbpf support at most? Besides, could you 
>>> please tell me whether the packet processing speed of PF_RING is affected as 
>>> the filter length increases?
>>> Any insight would be helpful.
>>> ___
>>> Ntop-misc mailing list
>>> Ntop-misc@listgateway.unipi.it <mailto:Ntop-misc@listgateway.unipi.it>
>>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc 
>>>

Re: [Ntop-misc] nbpf questions

2018-06-28 Thread Alfredo Cardigliano
Hi Bowen
That said, I am still missing something about your implementation (did you write
custom code using nbpf_parse and nbpf_match?); your test results could
be reliable if you are measuring the processing speed at 1Gbit.

Alfredo

> On 28 Jun 2018, at 09:23, Bowen Li  wrote:
> 
> Hi Alfredo
> Thanks for replying.
> My test environment:
> CentOS Linux release 7.2.1511 (Core)  3.10.0-327.13.1.el7.x86_64
> Intel(R) Xeon(R) CPU E5-2690 v2 @ 3.00GHz
> Memory: 128G
> 
> PF_RING Version  : 7.2.0 
> (7.2.0-stable:745f567720be0f28385ce923ba9f4957d6fe35cf)
> Total rings  : 21
> Standard (non ZC) Options
> Ring slots   : 4096
> Slot version : 17
> Capture TX   : Yes [RX+TX]
> IP Defragment: No
> Socket Mode  : Standard
> Cluster Fragment Queue   : 0
> Cluster Fragment Discard : 0
> 
> Ethernet controller: Intel Corporation 82574L Gigabit Network 
> Connection
> Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ 
> Network Connection (rev 01)
> 
> bro ids version 2.5.2
> 
> My goal is to use nbpf to shunt traffic from some hosts instead of 
> capturing traffic from specific hosts, so I ran the test.
> I use two 10G interfaces on the same NIC to send traffic from one to the 
> other (I also do this on a 1G NIC) using pfsend; bro ids listens on the receiving 
> interface with a bpf filter. I use the
> "cmd_line_bpf_filter" param in bro to pass the filter to PF_RING. My test result 
> is: with the format "not host A and not host B and ...", the maximum number of hosts
> is 466, and the number of host items seems to have no effect on the 
> processing speed of PF_RING. Are my test results reliable?
> 
> Alfredo Cardigliano <cardigli...@ntop.org> 
> wrote on Wednesday, 27 June 2018 at 4:05 PM:
> Hi Bowen
the nbpf syntax actually supports the not operator; however, it depends
on the actual backend (we probably need to extend the guide to comment
more on this). For instance, when translating the filter into hw rules for 
offloading
to the adapter, in most cases it is not possible to use the not operator.
> What is your use case/application/card where you are using nbpf?
> 
> Regards
> Alfredo
> 
>> On 27 Jun 2018, at 04:48, Bowen Li > <mailto:newfire...@gmail.com>> wrote:
>> 
>> Hi all,
>> The README of the nbpf section on github notes that “NOT” cannot be used as a 
>> keyword in filters; however, I used “NOT” and the filter was effective in my 
>> test. I want to know if there is something wrong in the official 
>> documents or whether I omitted anything in my code.
>> If the filter format is “not host A and not host B and...”, how 
>> many hosts can nbpf support at most? Besides, could you 
>> please tell me whether the packet processing speed of PF_RING is affected as 
>> the filter length increases?
>> Any insight would be helpful.
>> ___
>> Ntop-misc mailing list
>> Ntop-misc@listgateway.unipi.it <mailto:Ntop-misc@listgateway.unipi.it>
>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc 
>> <http://listgateway.unipi.it/mailman/listinfo/ntop-misc>
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it <mailto:Ntop-misc@listgateway.unipi.it>
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc 
> <http://listgateway.unipi.it/mailman/listinfo/ntop-misc>___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc



___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] nProbe performance, zbalance packet drops

2018-06-27 Thread Alfredo Cardigliano
Hi David

> On 27 Jun 2018, at 14:20, David Notivol  wrote:
> 
> Hi Alfredo,
> Thanks for  your recommendations.
> 
> I tested using core affinity as you suggested, and the in drops disappeared 
> in zbalance. The output drops persist, but the absolute drops are less than 
> before.
> Actually I had tested core affinity before, but I hadn't taken the 
> physical cores into account. Now I pin zbalance to one physical core, and the 10 nprobe 
> instances to cores that do not share the physical core with zbalance.
> 
> About your point 2, by using zc drivers, how could I run several nprobe 
> instances to share the load? I'm testing with one instance: -i zc:p2p1,zc:p2p2

You can keep using zbalance_ipc (-i zc:p2p1,zc:p2p2), or you can use RSS 
(running each nprobe instance on a queue pair: -i zc:p2p1@<queue>,zc:p2p2@<queue>)

> Attached you can find:
> - 0.log = top output for the scenario in my previous email.
> - 1.log = scenario in your point 1, including top, zbalance output, and 
> nprobe stats.


I do not see the attachments, did you forget to enclose them?

Alfredo

> 
> On Wed, 27 Jun 2018 at 12:13, Alfredo Cardigliano (<cardigli...@ntop.org>) wrote:
> Hi David
> it seems that you have packet loss both on zbalance and nprobe,
> I recommend you to:
> 1. set the core affinity for both zbalance_ipc and the nprobe instances, 
> trying to
> use a different core for each (at least do not share the zbalance_ipc 
> physical core
> with nprobe instances)
> 2. did you try using zc drivers for capturing traffic from the interfaces? 
> (zc:p2p1,zc:p2p2)
> Please also provide the top output (press 1 to see all cores) with the 
> current configuration;
> I guess the kernel is using some of the available CPU with this configuration.
> 
> Alfredo
> 
>> On 26 Jun 2018, at 16:31, David Notivol > <mailto:dnoti...@gmail.com>> wrote:
>> 
>> Hi Alfredo,
>> Thanks for replying.
>> This is an excerpt of the zbalance and nprobe statistics:
>> 
>> 26/Jun/2018 17:29:58 [zbalance_ipc.c:265] =
>> 26/Jun/2018 17:29:58 [zbalance_ipc.c:266] Absolute Stats: Recv 1'285'430'239 
>> pkts (1'116'181'903 drops) - Forwarded 1'266'272'285 pkts (19'157'949 drops)
>> 26/Jun/2018 17:29:58 [zbalance_ipc.c:305] p2p1,p2p2 RX 
>> 1285430267 pkts Dropped 1116181981 pkts (46.5 %)
>> 26/Jun/2018 17:29:58 [zbalance_ipc.c:319] Q 0 RX 77050882 
>> pkts Dropped 1127883 pkts (1.4 %)
>> 26/Jun/2018 17:29:58 [zbalance_ipc.c:319] Q 1 RX 70722562 
>> pkts Dropped 756409 pkts (1.1 %)
>> 26/Jun/2018 17:29:58 [zbalance_ipc.c:319] Q 2 RX 76092418 
>> pkts Dropped 1017335 pkts (1.3 %)
>> 26/Jun/2018 17:29:58 [zbalance_ipc.c:319] Q 3 RX 75088386 
>> pkts Dropped 896678 pkts (1.2 %)
>> 26/Jun/2018 17:29:58 [zbalance_ipc.c:319] Q 4 RX 91991042 
>> pkts Dropped 2114739 pkts (2.2 %)
>> 26/Jun/2018 17:29:58 [zbalance_ipc.c:319] Q 5 RX 81384450 
>> pkts Dropped 1269385 pkts (1.5 %)
>> 26/Jun/2018 17:29:58 [zbalance_ipc.c:319] Q 6 RX 84310018 
>> pkts Dropped 1801848 pkts (2.1 %)
>> 26/Jun/2018 17:29:58 [zbalance_ipc.c:319] Q 7 RX 84554242 
>> pkts Dropped 1487329 pkts (1.7 %)
>> 26/Jun/2018 17:29:58 [zbalance_ipc.c:319] Q 8 RX 84090370 
>> pkts Dropped 1482864 pkts (1.7 %)
>> 26/Jun/2018 17:29:58 [zbalance_ipc.c:319] Q 9 RX 73642498 
>> pkts Dropped 732237 pkts (1.0 %)
>> 26/Jun/2018 17:29:58 [zbalance_ipc.c:319] Q 10 RX 76481026 
>> pkts Dropped 1000496 pkts (1.3 %)
>> 26/Jun/2018 17:29:58 [zbalance_ipc.c:319] Q 11 RX 72496642 
>> pkts Dropped 929049 pkts (1.3 %)
>> 26/Jun/2018 17:29:58 [zbalance_ipc.c:319] Q 12 RX 79386626 
>> pkts Dropped 1122169 pkts (1.4 %)
>> 26/Jun/2018 17:29:58 [zbalance_ipc.c:319] Q 13 RX 79418370 
>> pkts Dropped 1187172 pkts (1.5 %)
>> 26/Jun/2018 17:29:58 [zbalance_ipc.c:319] Q 14 RX 80284162 
>> pkts Dropped 1195559 pkts (1.5 %)
>> 26/Jun/2018 17:29:58 [zbalance_ipc.c:319] Q 15 RX 79143426 
>> pkts Dropped 1036797 pkts (1.3 %)
>> 26/Jun/2018 17:29:58 [zbalance_ipc.c:338] Actual Stats: Recv 369'127.51 pps 
>> (555'069.74 drops) - Forwarded 369'129.51 pps (0.00 drops)
>> 26/Jun/2018 17:29:58 [zbalance_ipc.c:348] =
>> 
>> 
>> # cat /proc/net/pf_ring/stats/*
>> ClusterId: 1
>> TotQueues: 16
>> Applications:  1
>> App0Queues:16
>> Dur

Re: [Ntop-misc] nProbe performance, zbalance packet drops

2018-06-27 Thread Alfredo Cardigliano
Hi David
it seems that you have packet loss both on zbalance and nprobe,
I recommend you to:
1. set the core affinity for both zbalance_ipc and the nprobe instances, trying 
to use a different core for each (at least do not share the zbalance_ipc 
physical core with nprobe instances)
2. did you try using zc drivers for capturing traffic from the interfaces? 
(zc:p2p1,zc:p2p2)
Please also provide the top output (press 1 to see all cores) with the current 
configuration,
I guess the kernel is using some of the available CPU with this configuration.
(A command sketch of points 1 and 2 follows below.)

Alfredo
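
By way of illustration, a minimal command sketch of points 1 and 2 (core ids
are hypothetical, adapt them to your topology; -g pins zbalance_ipc to a core,
and recent nprobe builds accept --cpu-affinity to pin each instance):

zbalance_ipc -i zc:p2p1,zc:p2p2 -c 1 -n 16 -m 4 -a -p -g 0
nprobe --interface=zc:1@0 --cpu-affinity=1 ...
nprobe --interface=zc:1@1 --cpu-affinity=2 ...
(and so on, one distinct core per nprobe instance)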

> On 26 Jun 2018, at 16:31, David Notivol  wrote:
> 
> Hi Alfredo,
> Thanks for replying.
> This is an excerpt of the zbalance and nprobe statistics:
> 
> 26/Jun/2018 17:29:58 [zbalance_ipc.c:265] =
> 26/Jun/2018 17:29:58 [zbalance_ipc.c:266] Absolute Stats: Recv 1'285'430'239 
> pkts (1'116'181'903 drops) - Forwarded 1'266'272'285 pkts (19'157'949 drops)
> 26/Jun/2018 17:29:58 [zbalance_ipc.c:305] p2p1,p2p2 RX 
> 1285430267 pkts Dropped 1116181981 pkts (46.5 %)
> 26/Jun/2018 17:29:58 [zbalance_ipc.c:319] Q 0 RX 77050882 
> pkts Dropped 1127883 pkts (1.4 %)
> 26/Jun/2018 17:29:58 [zbalance_ipc.c:319] Q 1 RX 70722562 
> pkts Dropped 756409 pkts (1.1 %)
> 26/Jun/2018 17:29:58 [zbalance_ipc.c:319] Q 2 RX 76092418 
> pkts Dropped 1017335 pkts (1.3 %)
> 26/Jun/2018 17:29:58 [zbalance_ipc.c:319] Q 3 RX 75088386 
> pkts Dropped 896678 pkts (1.2 %)
> 26/Jun/2018 17:29:58 [zbalance_ipc.c:319] Q 4 RX 91991042 
> pkts Dropped 2114739 pkts (2.2 %)
> 26/Jun/2018 17:29:58 [zbalance_ipc.c:319] Q 5 RX 81384450 
> pkts Dropped 1269385 pkts (1.5 %)
> 26/Jun/2018 17:29:58 [zbalance_ipc.c:319] Q 6 RX 84310018 
> pkts Dropped 1801848 pkts (2.1 %)
> 26/Jun/2018 17:29:58 [zbalance_ipc.c:319] Q 7 RX 84554242 
> pkts Dropped 1487329 pkts (1.7 %)
> 26/Jun/2018 17:29:58 [zbalance_ipc.c:319] Q 8 RX 84090370 
> pkts Dropped 1482864 pkts (1.7 %)
> 26/Jun/2018 17:29:58 [zbalance_ipc.c:319] Q 9 RX 73642498 
> pkts Dropped 732237 pkts (1.0 %)
> 26/Jun/2018 17:29:58 [zbalance_ipc.c:319] Q 10 RX 76481026 
> pkts Dropped 1000496 pkts (1.3 %)
> 26/Jun/2018 17:29:58 [zbalance_ipc.c:319] Q 11 RX 72496642 
> pkts Dropped 929049 pkts (1.3 %)
> 26/Jun/2018 17:29:58 [zbalance_ipc.c:319] Q 12 RX 79386626 
> pkts Dropped 1122169 pkts (1.4 %)
> 26/Jun/2018 17:29:58 [zbalance_ipc.c:319] Q 13 RX 79418370 
> pkts Dropped 1187172 pkts (1.5 %)
> 26/Jun/2018 17:29:58 [zbalance_ipc.c:319] Q 14 RX 80284162 
> pkts Dropped 1195559 pkts (1.5 %)
> 26/Jun/2018 17:29:58 [zbalance_ipc.c:319] Q 15 RX 79143426 
> pkts Dropped 1036797 pkts (1.3 %)
> 26/Jun/2018 17:29:58 [zbalance_ipc.c:338] Actual Stats: Recv 369'127.51 pps 
> (555'069.74 drops) - Forwarded 369'129.51 pps (0.00 drops)
> 26/Jun/2018 17:29:58 [zbalance_ipc.c:348] =
> 
> 
> # cat /proc/net/pf_ring/stats/*
> ClusterId: 1
> TotQueues: 16
> Applications:  1
> App0Queues:16
> Duration:  0:00:41:18:386
> Packets:   1191477340
> Forwarded: 1174033613
> Processed: 1173893301
> IFPackets: 1191477364
> IFDropped: 1036448041
> 
> Duration: 0:00:41:15:587
> Bytes:42626434538
> Packets:  71510530
> Dropped:  845465
> 
> Duration: 0:00:41:15:557
> Bytes:40686677370
> Packets:  65656322
> Dropped:  533675
> 
> Duration: 0:00:41:15:534
> Bytes:41463519299
> Packets:  70565378
> Dropped:  804282
> 
> Duration: 0:00:41:15:523
> Bytes:42321923225
> Packets:  69566978
> Dropped:  650333
> 
> Duration: 0:00:41:14:659
> Bytes:45415334638
> Packets:  85479938
> Dropped:  1728521
> 
> Duration: 0:00:41:14:597
> Bytes:42615821825
> Packets:  75445250
> Dropped:  951386
> 
> Duration: 0:00:41:14:598
> Bytes:44722410915
> Packets:  78252409
> Dropped:  1479387
> 
> Duration: 0:00:41:14:613
> Bytes:44788855334
> Packets:  78318926
> Dropped:  1202905
> 
> Duration: 0:00:41:14:741
> Bytes:43950263720
> Packets:  77821954
> Dropped:  1135693
> 
> Duration: 0:00:41:14:608
> Bytes:41211162757
> Packets:  68241354
> Dropped:  496494
> 
> Duration: 0:00:41:14:629
> Bytes:43064091353
> Packets:  70834104
> Dropped:  712427
> 
> Duration: 0:00:41:14:551
> Bytes:42072869897
> Packets:  67360770
> Dropped:  696460
> 
> Dur

Re: [Ntop-misc] nbpf questions

2018-06-27 Thread Alfredo Cardigliano
Hi Bowen
the nbpf syntax actually supports the not operator, however it depends
on the actual backend (we probably need to extend the guide commenting
more about this). For instance translating the filter into hw rules for 
offloading
it to the adapter, in most cases it is not possible to use the not operator.
What is your use case/application/card where you are using nbpf?

Regards
Alfredo
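
As a hedged illustration (not from this thread; the interface name and hosts
are placeholders), this is how a "not" filter would typically be applied from
userspace; pfring_set_bpf_filter() returns 0 on success, and a hardware
offload backend may reject what the software backend accepts:

#include <stdio.h>
#include "pfring.h"

int apply_not_filter(void) {            /* hypothetical helper */
  pfring *ring = pfring_open("eth1", 128 /* snaplen */, PF_RING_PROMISC);
  if (ring == NULL) return -1;
  if (pfring_set_bpf_filter(ring, "not host 10.0.0.1 and not host 10.0.0.2") != 0)
    fprintf(stderr, "filter rejected by this backend\n");
  pfring_close(ring);
  return 0;
}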

> On 27 Jun 2018, at 04:48, Bowen Li  wrote:
> 
> Hi all,
> The README of the nbpf section on github notes that “NOT” cannot be used as 
> a keyword in filters; however, I used “NOT” and the filter was effective in my 
> test process. I want to know if there is something wrong in the official 
> documents or whether I omitted something in my code.
> If the filter format used is “not host A and not host B and...”, how 
> many hosts can nbpf filter at maximum? Besides, could you 
> please tell me whether the pcap processing speed of PF_RING will be affected by 
> increasing the filter length?
> Any insight would be helpful.
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc



___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] nProbe performance, zbalance packet drops

2018-06-26 Thread Alfredo Cardigliano
Hi David
please also provide statistics from zbalance_ipc (output or log file)
and nprobe (you can get live stats from /proc/net/pf_ring/stats/)

Thank you
Alfredo

> On 26 Jun 2018, at 15:32, David Notivol  wrote:
> 
> Hello list,
> 
> We're using nProbe to export flows information to kafka. We're listening from 
> two 10Gb interfaces that we merge with zbalance_ipc, and we split them into 
> 16 queues to have 16 nprobe instances.
> 
> The problem is we are seeing about 40% packet drops reported by zbalance_ipc, 
> so it looks like nprobe is not capable of reading and processing all the 
> traffic. The CPU usage is really high, and the load average is over 25-30.
> 
> Merging both interfaces we're getting up to 5.5 Gbps, and  1.2 million 
> packets / second; and we're using i40e_zc driver.
> 
> Do you have any advice to try to improve this performance?
> Does it make sense we're having packet drops with this amount of traffic, and 
> we're reaching the server limits? Or is any configuration we could tune up to 
> improve it?
> 
> Thanks in advance.
> 
> 
> 
> -- System:
> 
> nProbe:  nProbe v.8.5.180625 (r6185)
> System RAM: 64GB
> System CPU:  Intel(R) Xeon(R) CPU E5-2620 v3 @ 2.40GHz, 12 cores (6 cores,  2 
> threads per core)
> System OS:CentOS Linux release 7.4.1708 (Core)
> Linux Kernel:   3.10.0-693.17.1.el7.x86_64 #1 SMP Thu Jan 25 20:13:58 UTC 
> 2018 x86_64 x86_64 x86_64 GNU/Linux
> 
> -- zbalance configuration:
> 
> zbalance_ipc -i p2p1,p2p2 -c 1 -n 16 -m 4 -a -p -l /var/tmp/zbalance.log -v -w
> 
> -- nProbe configuration:
> 
> --interface=zc:1@0
> --pid-file=/var/run/nprobe-zc1-00.pid
> --dump-stats=/var/log/nprobe/zc1-00_flows_stats.txt
> --kafka "192.168.0.1:9092 ,192.168.0.2:9092 
> ,192.168.0.3:9092;topic"
> --collector=none
> --idle-timeout=60
> --snaplen=128
> --aggregation=0/1/1/1/0/0/0
> --all-collectors=0
> --verbose=1
> --dump-format=t
> --vlanid-as-iface-idx=none
> --hash-size=1024000
> --flow-delay=1
> --count-delay=10
> --min-flow-size=0
> --netflow-engine=0:0
> --sample-rate=1:1
> --as-list=/usr/share/ntopng/httpdocs/geoip/GeoIPASNum.dat
> --city-list=/usr/share/ntopng/httpdocs/geoip/GeoLiteCity.dat
> --flow-templ="%IPV4_SRC_ADDR %IPV4_DST_ADDR %IN_PKTS %IN_BYTES %OUT_PKTS 
> %OUT_BYTES %FIRST_SWITCHED %LAST_SWITCHED %L4_SRC_PORT %L4_DST_PORT 
> %TCP_FLAGS %PROTOCOL %SRC_TOS %SRC_AS %DST_AS %L7_PROTO %L7_PROTO_NAME 
> %SRC_IP_COUNTRY %SRC_IP_CITY %SRC_IP_LONG %SRC_IP_LAT %DST_IP_COUNTRY 
> %DST_IP_CITY %DST_IP_LONG %DST_IP_LAT %SRC_VLAN %DST_VLAN %DOT1Q_SRC_VLAN 
> %DOT1Q_DST_VLAN %DIRECTION %SSL_SERVER_NAME %SRC_AS_MAP %DST_AS_MAP 
> %HTTP_METHOD %HTTP_RET_CODE %HTTP_REFERER %HTTP_UA %HTTP_MIME %HTTP_HOST 
> %HTTP_SITE %UPSTREAM_TUNNEL_ID %UPSTREAM_SESSION_ID %DOWNSTREAM_TUNNEL_ID 
> %DOWNSTREAM_SESSION_ID %UNTUNNELED_PROTOCOL %UNTUNNELED_IPV4_SRC_ADDR 
> %UNTUNNELED_L4_SRC_PORT %UNTUNNELED_IPV4_DST_ADDR %UNTUNNELED_L4_DST_PORT 
> %GTPV2_REQ_MSG_TYPE %GTPV2_RSP_MSG_TYPE %GTPV2_C2S_S1U_GTPU_TEID 
> %GTPV2_C2S_S1U_GTPU_IP %GTPV2_S2C_S1U_GTPU_TEID %GTPV2_S5_S8_GTPC_TEID 
> %GTPV2_S2C_S1U_GTPU_IP %GTPV2_C2S_S5_S8_GTPU_TEID %GTPV2_S2C_S5_S8_GTPU_TEID 
> %GTPV2_C2S_S5_S8_GTPU_IP %GTPV2_S2C_S5_S8_GTPU_IP %GTPV2_END_USER_IMSI 
> %GTPV2_END_USER_MSISDN %GTPV2_APN_NAME %GTPV2_ULI_MCC %GTPV2_ULI_MNC 
> %GTPV2_ULI_CELL_TAC %GTPV2_ULI_CELL_ID %GTPV2_RESPONSE_CAUSE %GTPV2_RAT_TYPE 
> %GTPV2_PDN_IP %GTPV2_END_USER_IMEI %GTPV2_C2S_S5_S8_GTPC_IP 
> %GTPV2_S2C_S5_S8_GTPC_IP %GTPV2_C2S_S5_S8_SGW_GTPU_TEID 
> %GTPV2_S2C_S5_S8_SGW_GTPU_TEID %GTPV2_C2S_S5_S8_SGW_GTPU_IP 
> %GTPV2_S2C_S5_S8_SGW_GTPU_IP %GTPV1_REQ_MSG_TYPE %GTPV1_RSP_MSG_TYPE 
> %GTPV1_C2S_TEID_DATA %GTPV1_C2S_TEID_CTRL %GTPV1_S2C_TEID_DATA 
> %GTPV1_S2C_TEID_CTRL %GTPV1_END_USER_IP %GTPV1_END_USER_IMSI 
> %GTPV1_END_USER_MSISDN %GTPV1_END_USER_IMEI %GTPV1_APN_NAME %GTPV1_RAT_TYPE 
> %GTPV1_RAI_MCC %GTPV1_RAI_MNC %GTPV1_RAI_LAC %GTPV1_RAI_RAC %GTPV1_ULI_MCC 
> %GTPV1_ULI_MNC %GTPV1_ULI_CELL_LAC %GTPV1_ULI_CELL_CI %GTPV1_ULI_SAC 
> %GTPV1_RESPONSE_CAUSE %SRC_FRAGMENTS %DST_FRAGMENTS %CLIENT_NW_LATENCY_MS 
> %SERVER_NW_LATENCY_MS %APPL_LATENCY_MS %RETRANSMITTED_IN_BYTES 
> %RETRANSMITTED_IN_PKTS %RETRANSMITTED_OUT_BYTES %RETRANSMITTED_OUT_PKTS 
> %OOORDER_IN_PKTS %OOORDER_OUT_PKTS %FLOW_ACTIVE_TIMEOUT 
> %FLOW_INACTIVE_TIMEOUT %MIN_TTL %MAX_TTL %IN_SRC_MAC %OUT_DST_MAC 
> %PACKET_SECTION_OFFSET %FRAME_LENGTH %SRC_TO_DST_MAX_THROUGHPUT 
> %SRC_TO_DST_MIN_THROUGHPUT %SRC_TO_DST_AVG_THROUGHPUT 
> %DST_TO_SRC_MAX_THROUGHPUT %DST_TO_SRC_MIN_THROUGHPUT 
> %DST_TO_SRC_AVG_THROUGHPUT %NUM_PKTS_UP_TO_128_BYTES 
> %NUM_PKTS_128_TO_256_BYTES %NUM_PKTS_256_TO_512_BYTES 
> %NUM_PKTS_512_TO_1024_BYTES %NUM_PKTS_1024_TO_1514_BYTES 
> %NUM_PKTS_OVER_1514_BYTES %LONGEST_FLOW_PKT %SHORTEST_FLOW_PKT 
> %NUM_PKTS_TTL_EQ_1 %NUM_PKTS_TTL_2_5 %NUM_PKTS_TTL_5_32 %NUM_PKTS_TTL_32_64 
> %NUM_PKTS_TTL_64_96 %NUM_PKTS_TTL_96_128 %NUM_PKTS_TTL_128_160 
> %NUM_PKTS_TTL_160_192 %NUM_PKTS_TTL_192_224 %NUM_P

Re: [Ntop-misc] pfring devel rpm package

2018-06-19 Thread Alfredo Cardigliano
Hi
what is the issue you are experiencing installing the standard package?
It should work according to the guide at 
http://www.ntop.org/guides/pf_ring/thirdparty/bro.html 

Please let us know if something is broken.

Alfredo

> On 19 Jun 2018, at 11:05, C. L. Martinez  wrote:
> 
> Hi all,
> 
>  Is it possible to create a pfring-devel rpm package? I have installed pfring 
> from ntop's rpm repos but now I need to install Bro from source ... and 
> without this package, I need to use libpcap-devel from CentOS 7 base repos ..
> 
>  Is it possible?
> 
> Thanks.
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc



___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] cento crash - corrupted double-linked list

2018-05-24 Thread Alfredo Cardigliano
Hi Mike
I will contact you directly for debugging this issue.

Regards
Alfredo

> On 23 May 2018, at 20:04, Lang, Michael  wrote:
> 
> Hello,
> 
> I’m having an issue where cento crashes around 10-14 days after launch.  I’m 
> currently running:
> 
> [root@madmax ~]# cento --version
> v.1.4.180420
> GIT rev:   1.4-stable:e961715e3e362d70b12bdb6bedb865e5a71591e6:20180420
> System Id: 7D16DBE2760C6A00
> Built on:  CentOS Linux release 7.4.1708 (Core)
> [root@madmax ~]#
> 
> I caught the crash in a screen session, looks like this:
> 
> 20/May/2018 08:57:43 [NetworkInterface.cpp:1116] [zc:enp1s0@0] [123'890 
> pps/0.89 Gbps][32'253/845'662'076/0/512'000 act/exp/drop/max 
> flows][1'359'350/0 RX/TX pkt drops][0 TX pps][32'286 active flows]
> 20/May/2018 08:57:43 [FlowExporter.cpp:96] [zc:enp1s0@0] Flow Export Queue 
> Len [0 tot queued][0/0 IPv4/v6][Drops queue too long 0/0 IPv4/v6][1867314376 
> tot exported flows]
> 20/May/2018 08:57:43 [NetworkInterface.cpp:1116] [zc:enp1s0@1] [27'517 
> pps/0.16 Gbps][32'197/845'217'650/0/512'000 act/exp/drop/max flows][0/0 RX/TX 
> pkt drops][0 TX pps][32'216 active flows]
> 20/May/2018 08:57:43 [FlowExporter.cpp:96] [zc:enp1s0@1] Flow Export Queue 
> Len [0 tot queued][0/0 IPv4/v6][Drops queue too long 0/0 IPv4/v6][1866497230 
> tot exported flows]
> 20/May/2018 08:57:43 [NetworkInterface.cpp:1116] [zc:enp1s0@2] [43'231 
> pps/0.34 Gbps][32'054/845'667'806/0/512'000 act/exp/drop/max flows][0/0 RX/TX 
> pkt drops][0 TX pps][32'086 active flows]
> 20/May/2018 08:57:43 [FlowExporter.cpp:96] [zc:enp1s0@2] Flow Export Queue 
> Len [0 tot queued][0/0 IPv4/v6][Drops queue too long 0/0 IPv4/v6][1867534214 
> tot exported flows]
> 20/May/2018 08:57:43 [NetworkInterface.cpp:1116] [zc:enp1s0@3] [51'056 
> pps/0.38 Gbps][140'624/1'098'592'923/0/512'000 act/exp/drop/max flows][0/0 
> RX/TX pkt drops][0 TX pps][140'755 active flows]
> 20/May/2018 08:57:43 [FlowExporter.cpp:96] [zc:enp1s0@3] Flow Export Queue 
> Len [0 tot queued][0/0 IPv4/v6][Drops queue too long 0/0 IPv4/v6][2373273970 
> tot exported flows]
> *** Error in `/usr/local/bin/cento': corrupted double-linked list (not 
> small): 0x7f2d83b10a10 ***
> === Backtrace: =
> /lib64/libc.so.6(+0x7ab54)[0x7f2d944c3b54]
> /lib64/libc.so.6(+0x7c821)[0x7f2d944c5821]
> /usr/local/bin/cento[0x40d504]
> /usr/local/bin/cento[0x40d948]
> /usr/local/bin/cento[0x40d9f9]
> /lib64/libpthread.so.0(+0x7e25)[0x7f2d966e3e25]
> /lib64/libc.so.6(clone+0x6d)[0x7f2d9454134d]
> === Memory map: 
> 0040-00536000 r-xp  fd:00 100665017  
> /usr/local/bin/cento
> 00736000-00739000 r--p 00136000 fd:00 100665017  
> /usr/local/bin/cento
> 00739000-0074e000 rw-p 00139000 fd:00 100665017  
> /usr/local/bin/cento
> 0074e000-00755000 rw-p  00:00 0
> 01e83000-029a5000 rw-p  00:00 0  
> [heap]
> 2ac0-2aaacac0 rw-s  00:26 191611 
> /dev/hugepages/pfring_zc_10
> 7f2d6000-7f2d60021000 rw-p  00:00 0
> 
> 
> Any thoughts on how to resolve this?
> 
> Thanks in advance,
> 
> -  Mike
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it 
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc 
> 


___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] Debian Jessie (3.16.0-6-amd64) and PFRING 7.0.0

2018-05-09 Thread Alfredo Cardigliano
Hi Rich
latest Debian 8 3.16.0-6 kernel is now supported, I pushed a patch on github 
and building new packages right now.

Best Regards
Alfredo

> On 8 May 2018, at 22:19, Richard Angeletti  wrote:
> 
> Hello,
> 
> I'm having issues getting the stable packages for Debian Jessie to startup on 
> kernel 3.16.0-6-amd64.
> 
> Every time I try to start the kernel module, I get " Unknown symbol in 
> module, or unknown parameter".
> 
> Any hints or tips about what I'm doing wrong are appreciated.
> 
> Here some of the diagnostics:
> 
> root@system:/home/user1# systemctl status pf_ring.service
> ● pf_ring.service - Start/stop pfring service
>Loaded: loaded (/etc/systemd/system/pf_ring.service; enabled)
>Active: failed (Result: exit-code) since Tue 2018-05-08 15:35:12 EDT; 
> 38min ago
>   Process: 12016 ExecStop=/usr/local/bin/pf_ringctl stop (code=exited, 
> status=0/SUCCESS)
>   Process: 12058 ExecStart=/usr/local/bin/pf_ringctl start (code=exited, 
> status=99)
>  Main PID: 12058 (code=exited, status=99)
> 
> May 08 15:34:56 system pf_ringctl[12058]: Error! The module pfring 7.0.0 is 
> not currently installed.
> May 08 15:34:56 system pf_ringctl[12058]: This module is not currently ACTIVE 
> for kernel 3.16.0-6-amd64 (x86_64).
> May 08 15:34:59 system pf_ringctl[12058]: Compiling new pfring driver
> May 08 15:35:08 system pf_ringctl[12058]: Installing new pfring driver
> May 08 15:35:12 system pf_ringctl[12058]: modprobe: ERROR: could not insert 
> 'pf_ring': Unknown symbol in module, or unknown parameter (see dmesg)
> May 08 15:35:12 system pf_ringctl[12058]: Unable to load PF_RING. Exiting ... 
> failed!
> May 08 15:35:12 system pf_ringctl[12058]: failed!
> May 08 15:35:12 system systemd[1]: pf_ring.service: main process exited, 
> code=exited, status=99/n/a
> May 08 15:35:12 system systemd[1]: Failed to start Start/stop pfring service.
> May 08 15:35:12 system systemd[1]: Unit pf_ring.service entered failed state.
> 
> 
> root@system:/home/user1# dmesg | grep pf_ring
> [  485.332588] pf_ring: Unknown symbol sk_attach_filter (err 0)
> [  485.332623] pf_ring: Unknown symbol sk_detach_filter (err 0)
> [  674.075182] pf_ring: Unknown symbol sk_attach_filter (err 0)
> [  674.075220] pf_ring: Unknown symbol sk_detach_filter (err 0)
> [  689.858020] pf_ring: Unknown symbol sk_attach_filter (err 0)
> [  689.858061] pf_ring: Unknown symbol sk_detach_filter (err 0)
> [  832.802681] pf_ring: Unknown symbol sk_attach_filter (err 0)
> [  832.802722] pf_ring: Unknown symbol sk_detach_filter (err 0)
> [ 2146.272453] pf_ring: Unknown symbol sk_attach_filter (err 0)
> [ 2146.272492] pf_ring: Unknown symbol sk_detach_filter (err 0)
> 
> 
> ​Thanks,
> Rich ​
> 
> 
> --
> Rich Angeletti
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc



___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] More questions

2018-05-09 Thread Alfredo Cardigliano


> On 8 May 2018, at 18:13, Carl Rotenan  wrote:
> 
> First off, Alfredo thanks for answering all my questions as I get up to speed 
> on PF_RING.
> 
> I have a few other questions based off of some /proc/net/pf_ring/ output.
> 
> 1. In the first cat I have 0 free slots and in the second 13k; am I dropping 
> traffic when there aren't free slots?

This happens when the application is too slow processing traffic

> Should I increase the slots even more?

Yes, that would help with traffic bursts, however it doesn’t help much if the 
application cannot keep up with sustained traffic

> 2. Why do I have lost packets? I have some rings with 0% and others with 
> about 5% packet drops. Is this normal?

This depends on traffic distribution or other factors such as core affinity of 
the application threads

> 3. My capture mode is set for RX only in the info file, but the ring file 
> says both?

With "info file” do you mean the bro configuration? It seems that bro is not 
setting the capture direction.

> 4. Is socket mode something I should worry about?

You can ignore it, it’s just the ‘ability’ to receive and send (even if you are 
not actually sending)

Alfredo
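
As a hedged aside, a one-liner like the following (the ring file name is taken
from the stats quoted below) makes buffer exhaustion easy to spot over time:

watch -n1 'grep -E "Num Free Slots|Tot Pkt Lost" /proc/net/pf_ring/14952-p1p1.36'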

> 
> Thanks,
> 
> 
> PF_RING Version  : 7.0.0 
> (7.0.0-stable:aef22ec763b2541f8c0632ed86b242cf7cff88aa)
> Total rings  : 18
> 
> Standard (non ZC) Options
> Ring slots   : 65534
> Slot version : 16
> Capture TX   : No [RX only]
> IP Defragment: No
> Socket Mode  : Standard
> Cluster Fragment Queue   : 0
> Cluster Fragment Discard : 0
> 
> cat 14952-p1p1.36
> 
> Bound Device(s): p1p1
> Active : 1
> Breed  : Standard
> Appl. Name : bro-p1p1
> Socket Mode: RX+TX
> Capture Direction  : RX+TX
> Sampling Rate  : 1
> IP Defragment  : No
> BPF Filtering  : Disabled
> Sw Filt Hash Rules : 0
> Sw Filt WC Rules   : 0
> Sw Filt Hash Match : 0
> Sw Filt Hash Miss  : 0
> Hw Filt Rules  : 0
> Poll Pkt Watermark : 1
> Num Poll Calls : 150166290
> Channel Id Mask: 0x
> VLAN Id: 65535
> Cluster Id : 1
> Slot Version   : 16 [7.0.0]
> Min Num Slots  : 65534
> Bucket Len : 8192
> Slot Len   : 8248 [bucket+header]
> Tot Memory : 540536832
> Tot Packets: 1136861832
> Tot Pkt Lost   : 69635693
> Tot Insert : 1067226139
> Tot Read   : 1067142762
> Insert Offset  : 435486256
> Remove Offset  : 315309304
> Num Free Slots : 0
> TX: Send Ok: 0
> TX: Send Errors: 0
> Reflect: Fwd Ok: 0
> Reflect: Fwd Errors: 0
> 
> Bound Device(s): p1p1
> Active : 1
> Breed  : Standard
> Appl. Name : bro-p1p1
> Socket Mode: RX+TX
> Capture Direction  : RX+TX
> Sampling Rate  : 1
> IP Defragment  : No
> BPF Filtering  : Disabled
> Sw Filt Hash Rules : 0
> Sw Filt WC Rules   : 0
> Sw Filt Hash Match : 0
> Sw Filt Hash Miss  : 0
> Hw Filt Rules  : 0
> Poll Pkt Watermark : 1
> Num Poll Calls : 150166290
> Channel Id Mask: 0x
> VLAN Id: 65535
> Cluster Id : 1
> Slot Version   : 16 [7.0.0]
> Min Num Slots  : 65534
> Bucket Len : 8192
> Slot Len   : 8248 [bucket+header]
> Tot Memory : 540536832
> Tot Packets: 1136867856
> Tot Pkt Lost   : 69635693
> Tot Insert : 1067232163
> Tot Read   : 1067179680
> Insert Offset  : 444167560
> Remove Offset  : 368221880
> Num Free Slots : 13051
> TX: Send Ok: 0
> TX: Send Errors: 0
> Reflect: Fwd Ok: 0
> Reflect: Fwd Errors: 0
> 
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc



___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] /proc/net/pf_ring

2018-05-08 Thread Alfredo Cardigliano
Hi Carl
that number is the ring ID.

Alfredo

> On 8 May 2018, at 05:18, Carl Rotenan  wrote:
> 
> Hi,
> 
> Sorry in advance if this is a dumb question, but what does the last number 
> after the . signify?
> 
> <pid>-<device>.<ring_id>
> 
> Since this listing starts at 19 and I'm running with 18 cores, and I stopped 
> and started Bro I'm guessing it's a generation thing.
> 
> Thanks in advance!
> 
> -r--r--r-- 1 root root 0 May  8 03:11 14952-p1p1.36
> -r--r--r-- 1 root root 0 May  8 03:11 14959-p1p1.23
> -r--r--r-- 1 root root 0 May  8 03:11 15006-p1p1.26
> -r--r--r-- 1 root root 0 May  8 03:11 15015-p1p1.31
> -r--r--r-- 1 root root 0 May  8 03:11 15029-p1p1.19
> -r--r--r-- 1 root root 0 May  8 03:11 15031-p1p1.28
> -r--r--r-- 1 root root 0 May  8 03:11 15045-p1p1.21
> -r--r--r-- 1 root root 0 May  8 03:11 15046-p1p1.27
> -r--r--r-- 1 root root 0 May  8 03:11 15050-p1p1.35
> -r--r--r-- 1 root root 0 May  8 03:11 15061-p1p1.22
> -r--r--r-- 1 root root 0 May  8 03:11 15064-p1p1.25
> -r--r--r-- 1 root root 0 May  8 03:11 15067-p1p1.24
> -r--r--r-- 1 root root 0 May  8 03:11 15083-p1p1.32
> -r--r--r-- 1 root root 0 May  8 03:11 15084-p1p1.20
> -r--r--r-- 1 root root 0 May  8 03:11 15085-p1p1.29
> -r--r--r-- 1 root root 0 May  8 03:11 15087-p1p1.30
> -r--r--r-- 1 root root 0 May  8 03:11 15089-p1p1.33
> -r--r--r-- 1 root root 0 May  8 03:11 15090-p1p1.34
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc



___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] Module option question

2018-05-07 Thread Alfredo Cardigliano
Hi Carl

> On 7 May 2018, at 19:20, Carl Rotenan  wrote:
> 
> Could someone please explain the module options in a little more detail?
> 
> enable_ip_defrag
> Set to 1 to enable IP defragmentation, only rx traffic is defragmented.

This option should be set to enable fragment reconstruction; it is based on 
the kernel defragmentation support.

> quick_mode
> Set to 1 to run at full speed but with up to one socket per interface.

This option can be used to improve the capture performance with kernel drivers, 
with the constraint that you cannot run more than one process per 
interface/queue.

Alfredo
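
For reference, a hedged example of setting both options at module load time
(values are illustrative, and the module must be reloaded to apply them):

rmmod pf_ring
modprobe pf_ring enable_ip_defrag=1 quick_mode=1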

> I'm looking for options that will help me tweak Bro performance, running in 
> RX-only mode.
> 
> Thanks
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc



___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] Ring slots

2018-05-07 Thread Alfredo Cardigliano
Hi Carl
increasing the buffering power increases the ability to handle spikes without 
losing packets
in case of applications doing heavy processing. In general it is recommended to 
use the max
number of slots supported by the adapter in case of ZC (default if you use our 
script), and a
reasonable number of in-kernel ring slots in case of standard drivers (the 
default is usually fine).

Best Regards
Alfredo
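
For example (a hedged sketch; the exact value depends on your memory budget),
the in-kernel slot count for standard drivers is set at module load time:

modprobe pf_ring min_num_slots=65536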

> On 7 May 2018, at 15:57, Carl Rotenan  wrote:
> 
> Is there any information on the benefits of having 4096 vs 32768 vs 65534 ring 
> slots in PF_RING vanilla and ZC? Thanks.
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc



___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] Question regarding implementation of API function pfring_get_ring_id()

2018-05-03 Thread Alfredo Cardigliano
Hi Amir
please note I am not against caching it in the library, I was just wondering
why you are not caching it in the application as it was the easiest solution
and I am not aware of other use cases / applications reading it so often.

Alfredo
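
For reference, a hedged C sketch of the application-side caching discussed
here (the helper name is hypothetical and one ring per process is assumed):

#include "pfring.h"

static u_int32_t cached_ring_id(pfring *ring) {
  static u_int32_t id = (u_int32_t)-1;
  if (id == (u_int32_t)-1)
    id = pfring_get_ring_id(ring);   /* single kernel round trip */
  return id;                         /* safe to reuse in debug messages */
}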

> On 3 May 2018, at 15:40, Amir Kaduri  wrote:
> 
> Hi Alfredo,
> 
> No reason not to cache it inside the application, but why make every application 
> implement it, if it's possible to implement it in pf_ring's userspace API 
> (the way I mentioned)?
> BTW, if it sounds reasonable, I can push a suggested solution, and you can 
> consider adopting it.
> 
> Amir
> 
> On Thu, May 3, 2018 at 4:11 PM, Alfredo Cardigliano <cardigli...@ntop.org> wrote:
> Hi Amir
> ring_id should not change, thus it can be cached in userspace,
> any reason you cannot cache it inside the application?
> 
> Alfredo
> 
> > On 3 May 2018, at 15:09, Amir Kaduri <akadur...@gmail.com> wrote:
> >
> > User-space API function pfring_get_ring_id() always brings the ring id from 
> > the kernel. In some applications, this API call might be used within debug 
> > messages, in order to be able to track a relevant ring. If the application 
> > uses this API too often (for debugging purposes), there could be too many 
> > calls to the kernel, for getting information that usually doesn't change.
> > So my questions are:
> > 1. Once a ring is initialized with a ring_id (in function ring_create()), 
> > will it ever be changed during the application life-time?
> > 2. Assuming the answer is negative (i.e. once a ring id is determined, it 
> > doesn't change),
> > is there room for caching this value in userspace and avoiding future calls 
> > to the kernel?
> > (e.g. by adding "ring_id" member to "struct __pfring" in pfring.h, and let 
> > function pfring_mod_get_ring_id() to return it once initialized, instead of 
> > the call to getsockopt(.. SO_GET_RING_ID ..))
> >
> > Thanks, Amir
> >
> >
> > ___
> > Ntop-misc mailing list
> > Ntop-misc@listgateway.unipi.it
> > http://listgateway.unipi.it/mailman/listinfo/ntop-misc
> 
> 
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
> 
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc



___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] Question regarding implementation of API function pfring_get_ring_id()

2018-05-03 Thread Alfredo Cardigliano
Hi Amir
ring_id should not change, thus it can be cached in userspace,
any reason you cannot cache it inside the application?

Alfredo

> On 3 May 2018, at 15:09, Amir Kaduri  wrote:
> 
> User-space API function pfring_get_ring_id() always brings the ring id from 
> the kernel. In some applications, this API call might be used within debug 
> messages, in order to be able to track a relevant ring. If the application 
> uses this API too often (for debugging purposes), there could be too many 
> calls to the kernel, for getting information that usually doesn't change.
> So my questions are:
> 1. Once a ring is initialized with a ring_id (in function ring_create()), 
> will it ever be changed during the application life-time?
> 2. Assuming the answer is negative (i.e. once a ring id is determined, it 
> doesn't change),
> is there room for caching this value in userspace and avoiding future calls to 
> the kernel?
> (e.g. by adding "ring_id" member to "struct __pfring" in pfring.h, and let 
> function pfring_mod_get_ring_id() to return it once initialized, instead of 
> the call to getsockopt(.. SO_GET_RING_ID ..))
> 
> Thanks, Amir
> 
> 
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc



___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] pf_ring: Clarification regarding the relation between poll-watermark and poll-duration

2018-04-30 Thread Alfredo Cardigliano


> On 30 Apr 2018, at 17:52, Amir Kaduri  wrote:
> 
> Thanks for the answers.
> 
> So the only way to make handlep->timeout>=0, is by setting the 
> file-descriptor to "blocking" (nonblock=0) according to the logic in function 
> pcap_setnonblock_mmap() and this is something that we would like to avoid.
> Therefore, we do the polling (non-blocking) in the application that uses 
> pcap/pf_ring.
> The problem we have is with a low-traffic network. According to the logic in 
> function copy_data_to_ring(), as long as the queue hasn't reached the 
> "poll_num_pkts_watermark" threshold (in our case 128 packets),
> the poll() (in userspace) won't return (since wake_up_interruptible(..) 
> is not called), which means that we have packets stuck in the ring 
> until the queue reaches the watermark.
> 
> I wonder if you see any rationale in improving the pf_ring kernel module 
> code, to call  wake_up_interruptible() (in order to flush the queue) if some 
> "timeout" passed and the queue is not empty (but still didn't reach the 
> watermark).

I think that using the watermark in combination with a timeout is a good idea.

Alfredo
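
A hedged sketch of that idea (not actual pf_ring code; last_wakeup and
flush_timeout_ms are hypothetical additions to the ring structure):

/* in copy_data_to_ring(): wake readers on watermark OR a stale queue */
if (num_queued_pkts(pfr) >= pfr->poll_num_pkts_watermark ||
    (num_queued_pkts(pfr) > 0 &&
     time_after(jiffies, pfr->last_wakeup + msecs_to_jiffies(flush_timeout_ms)))) {
  pfr->last_wakeup = jiffies;
  wake_up_interruptible(&pfr->ring_slots_waitqueue);
}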

> Amir
> 
> 
> On Thu, Apr 26, 2018 at 6:00 PM, Alfredo Cardigliano <cardigli...@ntop.org> wrote:
> 
> 
>> On 26 Apr 2018, at 15:34, Amir Kaduri <akadur...@gmail.com> wrote:
>> 
>> Hi Alfredo,
>> 
>> My code is based on libpcap, while pfring's userland examples use pfring 
>> APIs directly, therefore things are a bit harder for me.
>> 
>> Short clarification about a related code-line:
>> Please look at the following line: 
>> https://github.com/ntop/PF_RING/blob/dev/userland/libpcap-1.8.1/pcap-linux.c#L1875
>> 
>> (1)  If I understand it correctly, if wait_for_incoming_packet is true, then 
>> pfring_poll() should be called.
>>   Don't you want  wait_for_incoming_packet to be true in case  
>> pf_ring_active_poll is true?
> 
> “active” means spinning, thus poll should not be used in that case.
> 
>>   Currently, its the opposite (i.e. if pf_ring_active_poll is true, 
>> wait_for_incoming_packet will be false thus pfring_poll() won't be called).
> 
> This seems to be correct
> 
>> 
>> (2) If the code is ok, then the only way for me to make  
>> wait_for_incoming_packet true (for pfring_poll() to be called) is by making 
>> handlep->timeout >= 0.
>>  Correct?
> 
> Correct
> 
> Alfredo
> 
>> 
>> Thanks,
>> Amir
>> 
>> On Mon, Apr 9, 2018 at 10:51 AM, Alfredo Cardigliano <cardigli...@ntop.org> wrote:
>> Hi Amir
>> if I understand correctly, pfcount_multichannel is working, while in your 
>> application
>> it seems that poll does not honor the timeout, if this is the case it seems 
>> the problem
>> is not in the kernel module, I think you should look for differences between 
>> the two applications..
>> 
>> Alfredo
>> 
>>> On 9 Apr 2018, at 07:20, Amir Kaduri <akadur...@gmail.com> wrote:
>>> 
>>> Hi Alfredo,
>>> 
>>> I'm back to investigate/debug this issue in my environment, and maybe 
>>> you'll manage to save me some time:
>>> 
>>> When I use the example program "pfcount_multichannel", poll-duration works 
>>> for me as expected:
>>> For watermark=128, poll-duration=1000, even if less than 128 packets 
>>> received, I get them in pfcount_multichannel.
>>> 
>>> On the other hand, in my other program (which is a complex one), the 
>>> userspace application gets the packets only after 128 packets
>>> aggregated by the ring, regardless the polling rate (which is done always 
>>> using 50ms timeout).
>>> 
>>> Maybe you can figure out what can "hold" the packets in the ring and 
>>> forward them to userspace only when the watermark threshold passes?
>>> Maybe something is missing during initialization?
>>> (for simplicity I'm not using rehash, and not using any filters).
>>> 
>>> Thanks
>>> 
>>> On Tue, Oct 31, 2017 at 6:32 PM, Alfredo Cardigliano <cardigli...@ntop.org> wrote:
>>> Hi Amir
>>> that's correct, however for some reason it seems it is not the case in your 
>>> tests.
>>> 
>>> Alfredo
>>> 
>>> On 31 Oct 2017, at 12:08, 

Re: [Ntop-misc] pf_ring: Clarification regarding the relation between poll-watermark and poll-duration

2018-04-26 Thread Alfredo Cardigliano


> On 26 Apr 2018, at 15:34, Amir Kaduri  wrote:
> 
> Hi Alfredo,
> 
> My code is based on libpcap, while pfring's userland examples use pfring APIs 
> directly, therefore things are a bit harder for me.
> 
> Short clarification about a related code-line:
> Please look at the following line: 
> https://github.com/ntop/PF_RING/blob/dev/userland/libpcap-1.8.1/pcap-linux.c#L1875
> 
> (1)  If I understand it correctly, if wait_for_incoming_packet is true, then 
> pfring_poll() should be called.
>   Don't you want  wait_for_incoming_packet to be true in case  
> pf_ring_active_poll is true?

“active” means spinning, thus poll should not be used in that case.

>   Currently, its the opposite (i.e. if pf_ring_active_poll is true, 
> wait_for_incoming_packet will be false thus pfring_poll() won't be called).

This seems to be correct

> 
> (2) If the code is ok, then the only way for me to make  
> wait_for_incoming_packet true (for pfring_poll() to be called) is by making 
> handlep->timeout >= 0.
>  Correct?

Correct

Alfredo
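
In libpcap terms, a hedged sketch of the two knobs discussed above (the
device name is a placeholder):

#include <pcap.h>

char errbuf[PCAP_ERRBUF_SIZE];
/* a non-negative to_ms (here 50 ms) ends up in handlep->timeout */
pcap_t *p = pcap_open_live("eth1", 128, 1, 50, errbuf);
if (p != NULL)
  pcap_setnonblock(p, 0, errbuf); /* blocking keeps wait_for_incoming_packet true */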

> 
> Thanks,
> Amir
> 
> On Mon, Apr 9, 2018 at 10:51 AM, Alfredo Cardigliano <cardigli...@ntop.org> wrote:
> Hi Amir
> if I understand correctly, pfcount_multichannel is working, while in your 
> application
> it seems that poll does not honor the timeout, if this is the case it seems 
> the problem
> is not in the kernel module, I think you should look for differences between 
> the two applications..
> 
> Alfredo
> 
>> On 9 Apr 2018, at 07:20, Amir Kaduri <akadur...@gmail.com> wrote:
>> 
>> Hi Alfredo,
>> 
>> I'm back to investigate/debug this issue in my environment, and maybe you'll 
>> manage to save me some time:
>> 
>> When I use the example program "pfcount_multichannel", poll-duration works 
>> for me as expected:
>> For watermark=128, poll-duration=1000, even if less than 128 packets 
>> received, I get them in pfcount_multichannel.
>> 
>> On the other hand, in my other program (which is a complex one), the 
>> userspace application gets the packets only after 128 packets
>> aggregated by the ring, regardless the polling rate (which is done always 
>> using 50ms timeout).
>> 
>> Maybe you can figure out what can "hold" the packets in the ring and forward 
>> them to userspace only when the watermark threshold passes?
>> Maybe something is missing during initialization?
>> (for simplicity I'm not using rehash, and not using any filters).
>> 
>> Thanks
>> 
>> On Tue, Oct 31, 2017 at 6:32 PM, Alfredo Cardigliano <cardigli...@ntop.org> wrote:
>> Hi Amir
>> that's correct, however for some reason it seems it is not the case in your 
>> tests.
>> 
>> Alfredo
>> 
>> On 31 Oct 2017, at 12:08, Amir Kaduri <akadur...@gmail.com> wrote:
>> 
>>> Thanks. tot_insert apparently works ok.
>>> 
>>> Regarding function copy_data_to_ring():
>>> At the end of it there is the statement:
>>>  if(num_queued_pkts(pfr) >= pfr->poll_num_pkts_watermark)
>>>  wake_up_interruptible(&pfr->ring_slots_waitqueue);
>>> 
>>> Since watermark is set to 128, and I send <128 packets, this causes them to 
>>> wait in kernel queue.
>>> But since poll_duration is set to 1 (1 millisecond I assume), I expect the 
>>> condition to check this also (meaning, there are packets in queue but 1 
>>> millisecond passed and they weren't read),
>>> the wake_up_interruptible should also be called. No?
>>> 
>>> Thanks,
>>> Amir
>>> 
>>> 
>>> On Tue, Oct 31, 2017 at 10:20 AM, Alfredo Cardigliano <cardigli...@ntop.org> wrote:
>>> 
>>> 
>>>> On 31 Oct 2017, at 08:42, Amir Kaduri <akadur...@gmail.com> wrote:
>>>> 
>>>> Hi Alfredo,
>>>> 
>>>> I'm trying to debug the issue, and I have a question about the code, to 
>>>> make sure that there is no problem there:
>>>> Specifically, I'm referring to the function "pfring_mod_recv":
>>>> In order that the line that refers to poll_duration ("pfring_poll(ring, 
>>>> ring->poll_duration)") will be reached, there are 2 conditions that should 
>>>> occur:
>>>> 1. pfring_there_is_pkt_available(ring) 

Re: [Ntop-misc] pf_ring: Clarification regarding the relation between poll-watermark and poll-duration

2018-04-09 Thread Alfredo Cardigliano
Hi Amir
if I understand correctly, pfcount_multichannel is working, while in your 
application
it seems that poll does not honor the timeout, if this is the case it seems the 
problem
is not in the kernel module, I think you should look for differences between 
the two applications..

Alfredo

> On 9 Apr 2018, at 07:20, Amir Kaduri  wrote:
> 
> Hi Alfredo,
> 
> I'm back to investigate/debug this issue in my environment, and maybe you'll 
> manage to save me some time:
> 
> When I use the example program "pfcount_multichannel", poll-duration works 
> for me as expected:
> For watermark=128, poll-duration=1000, even if less than 128 packets 
> received, I get them in pfcount_multichannel.
> 
> On the other hand, in my other program (which is a complex one), the 
> userspace application gets the packets only after 128 packets
> aggregated by the ring, regardless the polling rate (which is done always 
> using 50ms timeout).
> 
> Maybe you can figure out what can "hold" the packets in the ring and forward 
> them to userspace only when the watermark threshold passes?
> Maybe something is missing during initialization?
> (for simplicity I'm not using rehash, and not using any filters).
> 
> Thanks
> 
> On Tue, Oct 31, 2017 at 6:32 PM, Alfredo Cardigliano <cardigli...@ntop.org> wrote:
> Hi Amir
> that's correct, however for some reason it seems it is not the case in your 
> tests.
> 
> Alfredo
> 
> On 31 Oct 2017, at 12:08, Amir Kaduri <akadur...@gmail.com> wrote:
> 
>> Thanks. tot_insert apparently works ok.
>> 
>> Regarding function copy_data_to_ring():
>> At the end of it there is the statement:
>>  if(num_queued_pkts(pfr) >= pfr->poll_num_pkts_watermark)
>>  wake_up_interruptible(&pfr->ring_slots_waitqueue);
>> 
>> Since watermark is set to 128, and I send <128 packets, this causes them to 
>> wait in kernel queue.
>> But since poll_duration is set to 1 (1 millisecond I assume), I expect the 
>> condition to check this also (meaning, there are packets in queue but 1 
>> millisecond passed and they weren't read),
>> the wake_up_interruptible should also be called. No?
>> 
>> Thanks,
>> Amir
>> 
>> 
>> On Tue, Oct 31, 2017 at 10:20 AM, Alfredo Cardigliano <cardigli...@ntop.org> wrote:
>> 
>> 
>>> On 31 Oct 2017, at 08:42, Amir Kaduri <akadur...@gmail.com> wrote:
>>> 
>>> Hi Alfredo,
>>> 
>>> I'm trying to debug the issue, and I have a question about the code, to 
>>> make sure that there is no problem there:
>>> Specifically, I'm referring to the function "pfring_mod_recv":
>>> In order that the line that refers to poll_duration ("pfring_poll(ring, 
>>> ring->poll_duration)") will be reached, there are 2 conditions that should 
>>> occur:
>>> 1. pfring_there_is_pkt_available(ring) should return false (otherwise, the 
>>> function returns at the end of the condition).
>>> 2. wait_for_incoming_packet should be set to true.
>>> Currently, I'm referring to the first one:
>>> In order that the macro pfring_there_is_pkt_available(ring) will return 
>>> false, ring->slots_info->tot_insert should be equal to 
>>> ring->slots_info->tot_read.
>>> What I see in my tests that they don't get equal. I always see that 
>>> tot_insert>tot_read, and sometimes they get eual when tot_read++ is called 
>>> but it happens inside the condition, so the "pfring_mod_recv" returns with 
>>> 1.
>> 
>> It seems to be correct. The kernel module inserts packets into the ring 
>> increasing tot_insert, the userspace library reads packets from the ring 
>> increasing tot_read. This means that if tot_insert == tot_read there is no 
>> packet to read. If there is a bug, it should be in the kernel module that is 
>> somehow not adding packets to the ring (thus not updating tot_insert).
>> 
>> Alfredo
>> 
>>> I remind that I set the watermark to be high, in order to see the 
>>> poll_duration takes effect.
>>> 
>>> Could you please approve that you don't see any problem in the code?
>>> 
>>> Thanks,
>>> Amir
>>> 
>>> 
>>> On Thu, Oct 26, 2017 at 12:22 PM, Alfredo Cardigliano <cardigli...@ntop.org> wrote:
>>> Hi Amir
>>> yes, that’s the way it should work, if this is not the case, some debugging 
>>> is needed 

Re: [Ntop-misc] packet drops are observed in pf_ring statistics

2018-03-29 Thread Alfredo Cardigliano
Yes, please provide /proc/net/pf_ring/ and ethtool -S 

Alfredo

> On 29 Mar 2018, at 09:15, Chandrika Gautam  
> wrote:
> 
> Hi Alfredo,
> 
> Yes, I am quite sure that this is due to the fragmentation logic in the kernel, not in 
> the application.
> We have one deployment where the fragmentation logic is enabled in the application 
> but we do not see any drops in pf_ring statistics either.
> 
> Will the output of /proc/net/pf_ring/ be enough? Are you looking for some 
> other statistics?
> 
> Regards,
> Chandrika
> 
> On Tue, Mar 20, 2018 at 2:23 PM, Alfredo Cardigliano <cardigli...@ntop.org> wrote:
> Hi Gautam
> pf_ring leverages the Linux kernel defragmentation support, thus we cannot 
> do much on our side for that,
> I see that your application is not exceeding 30% of cpu usage, are you sure 
> you do not have spikes that are
> hard to detect and your application cannot keep up with processing? Please 
> provide pf_ring statistics in order
> to figure out if packet loss is at kernel level or application level.
> 
> Alfredo
> 
>> On 20 Mar 2018, at 05:19, Chandrika Gautam <chandrika.iitd.r...@gmail.com> wrote:
>> 
>> Hi Alfredo,
>> 
>> Yes its a standard driver (Intel 82599 10 G) and server is hpdl380g9 
>> servers(cpu clock speed 2GHz ).
>> 
>> We have another deployment in which fragmentation logic is enabled in 
>> applications(g8 2.9 GHz) and we are not observing a drop in pf_ring stats.
>> 
>> How can we prove this to customer that enabling fragmentation in pf_ring 
>> kernel could be causing this?
>> Or can we tune ipfrag related parameters to support more number of 
>> fragmented packets.
>> 
>> 
>> On Mon, Mar 19, 2018 at 1:51 PM, Alfredo Cardigliano <cardigli...@ntop.org> wrote:
>> Hi Gautam
>> I guess you are using standard drivers, with defragmentation enabled in the 
>> pf_ring kernel module,
>> in that case if you have many fragments, at that rate, that could slow down 
>> the processing. Did you
>> try disabling defragmentation to figure out if the bottleneck is there?
>> 
>> Regards
>> Alfredo
>> 
>> > On 19 Mar 2018, at 06:35, Chandrika Gautam <chandrika.iitd.r...@gmail.com> wrote:
>> >
>> > Hi,
>> >
>> > We are observing some intermittent drops in pf_ring statistics in one of 
>> > the production sites. I need your valuable inputs to debug this issue.
>> >
>> > Brief description of the setup is -
>> >
>> > Fragmentation is enabled at kernel level. Traffic at site is varying from 
>> > 500Mbps to 1 Gbps. We have observed cpu usage per thread of our 
>> > application and it is not exceeding ~20-30 percent and there are no alarms 
>> > raised by applications.
>> > So I am suspecting whether kernel fragmentation logic could slow down the 
>> > processing and causing pf_ring to drop these packets. Can you please 
>> > confirm whether this can cause the drop ?If yes, How shall we debug this 
>> > issue.
>> > Please let me know if you need any data from the sites.
>> >
>> >
>> > Regards,
>> > Gautam
>> > ___
>> > Ntop-misc mailing list
>> > Ntop-misc@listgateway.unipi.it
>> > http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>> 
>> 
>> ___
>> Ntop-misc mailing list
>> Ntop-misc@listgateway.unipi.it
>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
>> 
>> ___
>> Ntop-misc mailing list
>> Ntop-misc@listgateway.unipi.it
>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
> 
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
> 
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc



___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] numa_cpu_affinity/binding cores for snort

2018-03-21 Thread Alfredo Cardigliano
Hi Jim
everything looks good, just one change to the driver parameters, this is enough:

RSS=1,1 numa_cpu_affinity=18,18

Regards
Alfredo
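
Putting it together, a hedged recap of the full sequence with the corrected
driver parameters (everything else exactly as in the plan quoted below):

modprobe ixgbe RSS=1,1 numa_cpu_affinity=18,18
/usr/local/pf/sbin/zbalance_ipc -i zc:ens5f0 -m 4 -n 31,1 -c 99 -g 70 -S 71
(then one snort per node-1 core via --daq-var bindcpu=18 ... 35, 54 ...)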

> On 21 Mar 2018, at 22:41, Jim Hranicky  wrote:
> 
> If 'hwloc-ls' tells me my ixgbe device is on node 1:
> 
>  NUMANode L#0 (P#0 62GB)
>  [...]
>  PCIBridge
>PCI 14e4:1657
>  Net L#2 "eno1"
>PCI 14e4:1657
>  Net L#3 "eno2"
>PCI 14e4:1657
>  Net L#4 "eno3"
>PCI 14e4:1657
>  Net L#5 "eno4"
>  [...]
>  NUMANode L#1 (P#1 63GB)
>  [...]
>HostBridge L#8
>  PCIBridge
>PCI 8086:10fb
>  Net L#7 "ens5f0"
>PCI 8086:10fb
>  Net L#8 "ens5f1"
> 
> and 'numactl --hardware' tells me my cpu cores are located as follows:
> 
>  available: 2 nodes (0-1)
>  node 0 cpus: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 36 37 38 39 40 41 
> 42 43 44 45 46 47 48 49 50 51 52 53
>  node 0 size: 63470 MB
>  node 0 free: 26682 MB
>  node 1 cpus: 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 54 55 56 
> 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71
> 
> and I want to run 31 snorts with zbalance_ipc, should I be doing
> this?
> 
>  modprobe ixgbe RSS=1,1 numa_cpu_affinity=18,19,...,35,54,55,...,N
>  /usr/local/pf/sbin/zbalance_ipc -i zc:ens5f0 -m 4 -n 31,1 -c 99 -g 70 -S 71
> 
> and using
> 
>  --daq-var bindcpu=18
>  --daq-var bindcpu=19
>  [...]
>  --daq-var bindcpu=35
>  --daq-var bindcpu=54
>  --daq-var bindcpu=55
>  --daq-var bindcpu=N
> 
> for my snort processes?
> 
> Thanks,
> 
> --
> Jim Hranicky
> Data Security Specialist
> UF Information Technology
> 105 NW 16TH ST Room #104 GAINESVILLE FL 32603-1826
> 352-273-1341
> 
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc



___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] packet drops are observed in pf_ring statistics

2018-03-20 Thread Alfredo Cardigliano
Hi Gautam
pf_ring leverages the Linux kernel defragmentation support, thus we cannot 
do much on our side for that,
I see that your application is not exceeding 30% of cpu usage, are you sure you 
do not have spikes that are
hard to detect and your application cannot keep up with processing? Please 
provide pf_ring statistics in order
to figure out if packet loss is at kernel level or application level.

Alfredo

> On 20 Mar 2018, at 05:19, Chandrika Gautam  
> wrote:
> 
> Hi Alfredo,
> 
> Yes its a standard driver (Intel 82599 10 G) and server is hpdl380g9 
> servers(cpu clock speed 2GHz ).
> 
> We have another deployment in which fragmentation logic is enabled in 
> applications(g8 2.9 GHz) and we are not observing a drop in pf_ring stats.
> 
> How can we prove this to customer that enabling fragmentation in pf_ring 
> kernel could be causing this?
> Or can we tune ipfrag related parameters to support more number of fragmented 
> packets.
> 
> 
> On Mon, Mar 19, 2018 at 1:51 PM, Alfredo Cardigliano <cardigli...@ntop.org> wrote:
> Hi Gautam
> I guess you are using standard drivers, with defragmentation enabled in the 
> pf_ring kernel module,
> in that case if you have many fragments, at that rate, that could slow down 
> the processing. Did you
> try disabling defragmentation to figure out if the bottleneck is there?
> 
> Regards
> Alfredo
> 
> > On 19 Mar 2018, at 06:35, Chandrika Gautam <chandrika.iitd.r...@gmail.com> wrote:
> >
> > Hi,
> >
> > We are observing some intermittent drops in pf_ring statistics in one of 
> > the production sites. I need your valuable inputs to debug this issue.
> >
> > Brief description of the setup is -
> >
> > Fragmentation is enabled at kernel level. Traffic at site is varying from 
> > 500Mbps to 1 Gbps. We have observed cpu usage per thread of our application 
> > and it is not exceeding ~20-30 percent and there are no alarms raised by 
> > applications.
> > So I am suspecting whether kernel fragmentation logic could slow down the 
> > processing and causing pf_ring to drop these packets. Can you please 
> > confirm whether this can cause the drop ?If yes, How shall we debug this 
> > issue.
> > Please let me know if you need any data from the sites.
> >
> >
> > Regards,
> > Gautam
> > ___
> > Ntop-misc mailing list
> > Ntop-misc@listgateway.unipi.it
> > http://listgateway.unipi.it/mailman/listinfo/ntop-misc
> 
> 
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
> 
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc



___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] packet drops are observed in pf_ring statistics

2018-03-19 Thread Alfredo Cardigliano
Hi Gautam
I guess you are using standard drivers, with defragmentation enabled in the 
pf_ring kernel module,
in that case if you have many fragments, at that rate, that could slow down the 
processing. Did you
try disabling defragmentation to figure out if the bottleneck is there?

Regards
Alfredo
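
A hedged way to test that (the parameter is read at module load time, so the
module must be reloaded):

rmmod pf_ring
modprobe pf_ring enable_ip_defrag=0
cat /sys/module/pf_ring/parameters/enable_ip_defrag   # expect 0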

> On 19 Mar 2018, at 06:35, Chandrika Gautam  
> wrote:
> 
> Hi,
> 
> We are observing some intermittent drops in pf_ring statistics in one of the 
> production sites. I need your valuable inputs to debug this issue.
> 
> Brief description of the setup is -
> 
> Fragmentation is enabled at kernel level. Traffic at site is varying from 
> 500Mbps to 1 Gbps. We have observed cpu usage per thread of our application 
> and it is not exceeding ~20-30 percent and there are no alarms raised by 
> applications.
> So I am suspecting whether kernel fragmentation logic could slow down the 
> processing and causing pf_ring to drop these packets. Can you please confirm 
> whether this can cause the drop ?If yes, How shall we debug this issue.
> Please let me know if you need any data from the sites.
> 
> 
> Regards,
> Gautam
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc



___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] nProbe Pro won't do more then 1Gb/s?

2018-01-17 Thread Alfredo Cardigliano
“Absolute Stats” is the total / average number of packets/bytes
“Actual Stats” is the current number of packets/bytes (last second)

Alfredo
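
Concretely, from the pfcount output quoted below: 274'202 - 136'980 = 137'222,
which is exactly the [137'222 pkts rcvd] figure on the first Actual Stats
line, i.e. Actual is the delta of the absolute counters over the last
one-second interval.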

> On 17 Jan 2018, at 21:13, Marco Teixeira  wrote:
> 
> Hello list,
> 
> Any PFRING wizard who can offer clues on where to start troubleshooting this 
> variance between "Absolute Stats" vs "Actual Stats"...?
> 
> ​===
> [marco@nprobe ~]$ sudo pfcount -i ens2f0
> [sudo] password for marco:
> Using PF_RING v.7.0.0
> Capturing from ens2f0 [mac: D8:D3:85:A0:12:50][if_index: 5][speed: 1Mb/s]
> # Device RX channels: 1
> # Polling threads:1
> Dumping statistics on /proc/net/pf_ring/stats/3096-ens2f0.3
> =
> Absolute Stats: [136'980 pkts total][0 pkts dropped][0.0% dropped]
> [136'980 pkts rcvd][126'559'478 bytes rcvd]
> =
> 
> =
> Absolute Stats: [274'202 pkts total][0 pkts dropped][0.0% dropped]
> [274'202 pkts rcvd][254'708'653 bytes rcvd][274'163.89 pkt/sec][2'037.38 
> Mbit/sec]
> =
> Actual Stats: [137'222 pkts rcvd][1'000.13 ms][137'202.92 pps][1.03 Gbps]
> =
> 
> =
> Absolute Stats: [411'199 pkts total][0 pkts dropped][0.0% dropped]
> [411'199 pkts rcvd][382'383'683 bytes rcvd][205'575.03 pkt/sec][1'529.35 
> Mbit/sec]
> =
> Actual Stats: [136'997 pkts rcvd][1'000.09 ms][136'983.43 pps][1.02 Gbps]
> =
> ===
> 
> Thankx
> Marco
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc



___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] nProbe Pro won't do more then 1Gb/s?

2018-01-16 Thread Alfredo Cardigliano
Hi Marco
it seems there is no packet loss; did you check the interface stats with 
ethtool -S to see if packets are getting lost at the interface level?

Best Regards
Alfredo

> On 16 Jan 2018, at 17:17, Marco Teixeira  wrote:
> 
> Hello,
> 
> Is PF_RING the culprit here? How can I tweak this?
> Absolute stats showing around 2Gb/s and Actual stats near 1Gb/s...
> 
> ===
> [marco@nprobe ~]$ sudo pfcount -i ens2f0
> Using PF_RING v.7.0.0
> Capturing from ens2f0 [mac: D8:D3:85:A0:12:50][if_index: 5][speed: 1Mb/s]
> # Device RX channels: 1
> # Polling threads:1
> Dumping statistics on /proc/net/pf_ring/stats/2553-ens2f0.2
> =
> Absolute Stats: [136'889 pkts total][0 pkts dropped][0.0% dropped]
> [136'889 pkts rcvd][128'561'396 bytes rcvd]
> =
> 
> =
> Absolute Stats: [274'090 pkts total][0 pkts dropped][0.0% dropped]
> [274'090 pkts rcvd][252'023'439 bytes rcvd][274'048.34 pkt/sec][2'015.88 
> Mbit/sec]
> =
> Actual Stats: [137'201 pkts rcvd][1'000.15 ms][137'180.28 pps][0.99 Gbps]
> =
> 
> ===
> 
> Cumprimentos,
> 
> Marco Teixeira
> 
> ---
> Serviços de Comunicações da Universidade do Minho
> Campus de Azurém, 4800-058 Guimarães - Portugal
> Tel.: +351 253510141, Fax: +351 253604021
> ma...@scom.uminho.pt  | 
> http://www.scom.uminho.pt 
> ---
> 
> 
> 2018-01-16 15:52 GMT+00:00 Marco Teixeira:
> Hi list,
> 
> Do you know of any limitation (license wise) on the capture speed of nProbe?
> Can't seem to go above 1Gb/s, but machine still has plenty of CPU available, 
> and PCIe 10Gb/s NIC...
> 
> ===
> Build OS:  CentOS Linux release 7.4.1708 (Core)
> GIT rev:   8.2-stable:fe33351b54075fa76a242548fb830e2bdf1d9224:20180112
> Edition:   nProbe Pro
> License Type:  Permanent License
> ===
> 
> ===
> [marco@nprobe ~]$ more /proc/net/pf_ring/stats/1361-ens2f0.1
> Duration: 0:00:33:05:185
> Bytes:238825829235
> Packets:  272020616
> Dropped:  0
> 
> [marco@nprobe ~]$ more /proc/net/pf_ring/1361-ens2f0.1
> Bound Device(s): ens2f0
> Active : 1
> Breed  : Standard
> Appl. Name : nProbe
> Socket Mode: RX only
> Capture Direction  : RX+TX
> Sampling Rate  : 1
> IP Defragment  : No
> BPF Filtering  : Disabled
> Sw Filt Hash Rules : 0
> Sw Filt WC Rules   : 0
> Sw Filt Hash Match : 0
> Sw Filt Hash Miss  : 0
> Hw Filt Rules  : 0
> Poll Pkt Watermark : 8
> Num Poll Calls : 0
> Channel Id Mask: 0x
> VLAN Id: 65535
> Slot Version   : 16 [7.0.0]
> Min Num Slots  : 4108
> Bucket Len : 128
> Slot Len   : 336 [bucket+header]
> Tot Memory : 1388544
> Tot Packets: 276360318
> Tot Pkt Lost   : 0
> Tot Insert : 276360318
> Tot Read   : 276360318
> Insert Offset  : 711632
> Remove Offset  : 711632
> Num Free Slots : 4108
> Reflect: Fwd Ok: 0
> Reflect: Fwd Errors: 0
> ===
> 
> Regards,
> Marco
> 
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc



___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] PF_RING Error : causes memory size to wrap, unable to allocate memory

2018-01-09 Thread Alfredo Cardigliano
That error was logged opening the loopback right?
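
(For reference: a slot has to hold a whole packet, so its length follows the 
interface MTU rather than a fixed value. With the loopback MTU of 65535, plus 
the pf_ring packet header and the 8-byte alignment, you end up at roughly the 
logged 65576; with an MTU of 1500 the slot length would indeed be much smaller.)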

Alfredo

> On 9 Jan 2018, at 10:13, Chandrika Gautam  
> wrote:
> 
> I will need one more information -
> 
> As per code, slot len is calculated as below.
>   the_slot_len = pfr->slot_header_len + pfr->bucket_len;
>   the_slot_len = ALIGN(the_slot_len + sizeof(u_int16_t) /* RING_MAGIC_VALUE */, 
>   sizeof(u_int64_t));
> 
> But in the error below the slot length is logged as 65576; shouldn't this be 
> much smaller if the MTU is set to 1500?
> [PF_RING] ERROR: min_num_slots (409600, slot len = 65576) causes memory size 
> to wrap
> 
> Regards,
> Gautam
> 
> On Tue, Jan 9, 2018 at 2:32 PM, Chandrika Gautam 
> mailto:chandrika.iitd.r...@gmail.com>> wrote:
> Thanks Alfredo!
> 
> Regards,
> Chandrika
> 
> On Tue, Jan 9, 2018 at 2:13 PM, Alfredo Cardigliano  <mailto:cardigli...@ntop.org>> wrote:
> Correct
> 
> Alfredo
> 
> 
>> On 9 Jan 2018, at 08:38, Chandrika Gautam > <mailto:chandrika.iitd.r...@gmail.com>> wrote:
>> 
>> As per the code, it seems that all these memory allocations happen at the 
>> time of ring creation only.
>> And a ring will be created whenever a new application (using pf_ring) is 
>> spawned.
>> So there should not be multiple occurrences of these errors if an 
>> application is already up and running.
>> Please  let me know if my understanding is correct.
>> 
>> Regards,
>> Gautam
>> 
>> On Tue, Jan 9, 2018 at 12:43 PM, Chandrika Gautam 
>> mailto:chandrika.iitd.r...@gmail.com>> wrote:
>> Hi Alfredo,
>> 
>> Does this error mean that packets are getting lost? The problem is that I have 
>> not faced this issue in our lab; it is hit on one of the production servers.
>> Can you also explain what have you fixed to resolve this?
>> 
>> Regards,
>> Gautam
>> 
>> On Sun, Jan 7, 2018 at 5:10 PM, Alfredo Cardigliano > <mailto:cardigli...@ntop.org>> wrote:
>> Hi Gautam
>> this is not related to hugepages (actually hugepages reserve physical 
>> memory, thus
>> they can affect allocation, however in this specific case it was due to a 
>> limit in the
>> pf_ring buffer size).
>> Please note I just pushed a patch to handle this case resizing the buffer 
>> size when
>> limit is exceeded, please check if it’s working for you and let me know.
>> 
>> Regards
>> Alfredo
>> 
>> 
>>> On 5 Jan 2018, at 06:38, Chandrika Gautam >> <mailto:chandrika.iitd.r...@gmail.com>> wrote:
>>> 
>>> Hi Alfredo,
>>> 
>>> Can this issue be faced if hugepages are not created on server? Because 
>>> these errors disappeared after creating the hugepages.
>>> 
>>> Regards,
>>> Gautam
>>> 
>>> On Fri, Jan 5, 2018 at 10:52 AM, Chandrika Gautam 
>>> mailto:chandrika.iitd.r...@gmail.com>> 
>>> wrote:
>>> Hi Alfredo,
>>> 
>>> MTU is 1518 for all 6 interfaces except for lo (loopback) interface(65535).
>>> 
>>> Regards,
>>> Gautam
>>> 
>>> 
>>> On Tue, Jan 2, 2018 at 6:46 PM, Alfredo Cardigliano >> <mailto:cardigli...@ntop.org>> wrote:
>>> What is the MTU size? It seems you are trying to allocate more than 25GB of 
>>> memory,
>>> thus you get this failure. Please try reducing the number of slots, if you 
>>> cannot reduce
>>> the buffers size.
>>> 
>>> Regards
>>> Alfredo
>>> 
>>>> On 2 Jan 2018, at 10:40, Chandrika Gautam >>> <mailto:chandrika.iitd.r...@gmail.com>> wrote:
>>>> 
>>>> [PF_RING] ERROR: min_num_slots (409600, slot len = 65576) causes memory 
>>>> size to wrap
>>>> 
>>>> [PF_RING] ring_mmap(): unable to allocate memory
>>>> 
>>>> Regards,
>>>> Gautam
>>>> 
>>>> On Tue, Jan 2, 2018 at 2:46 PM, Alfredo Cardigliano >>> <mailto:cardigli...@ntop.org>> wrote:
>>>> Hi Gautam
>>>> please provide the error, I see just the insmod command..
>>>> 
>>>> Regards
>>>> Alfredo
>>>> 
>>>> > On 27 Dec 2017, at 06:26, Chandrika Gautam 
>>>> > mailto:chandrika.iitd.r...@gmail.com>> 
>>>> > wrote:
>>>> >
>>>> > Hi,
>>>> >
>>>> > We are receiving below error while loading the pf_ring using below 
>>>> > command even though there is free memory available on server. Please let 
>>>> > me know if you need any other information.

Re: [Ntop-misc] PF_RING Error : causes memory size to wrap, unable to allocate memory

2018-01-09 Thread Alfredo Cardigliano
Correct

Alfredo

> On 9 Jan 2018, at 08:38, Chandrika Gautam  
> wrote:
> 
> As per the code, it seems that all these memory allocations happen at the 
> time of ring creation only.
> And a ring will be created whenever a new application (using pf_ring) is 
> spawned.
> So there should not be multiple occurrences of these errors if an 
> application is already up and running.
> Please  let me know if my understanding is correct.
> 
> Regards,
> Gautam
> 
> On Tue, Jan 9, 2018 at 12:43 PM, Chandrika Gautam 
> mailto:chandrika.iitd.r...@gmail.com>> wrote:
> Hi Alfredo,
> 
> Does this error mean that packets are getting lost? The problem is that I have 
> not faced this issue in our lab; it is hit on one of the production servers.
> Can you also explain what have you fixed to resolve this?
> 
> Regards,
> Gautam
> 
> On Sun, Jan 7, 2018 at 5:10 PM, Alfredo Cardigliano  <mailto:cardigli...@ntop.org>> wrote:
> Hi Gautam
> this is not related to hugepages (actually hugepages reserve physical memory, 
> thus
> they can affect allocation, however in this specific case it was due to a 
> limit in the
> pf_ring buffer size).
> Please note I just pushed a patch to handle this case resizing the buffer 
> size when
> limit is exceeded, please check if it’s working for you and let me know.
> 
> Regards
> Alfredo
> 
> 
>> On 5 Jan 2018, at 06:38, Chandrika Gautam > <mailto:chandrika.iitd.r...@gmail.com>> wrote:
>> 
>> Hi Alfredo,
>> 
>> Can this issue be faced if hugepages are not created on server? Because 
>> these errors disappeared after creating the hugepages.
>> 
>> Regards,
>> Gautam
>> 
>> On Fri, Jan 5, 2018 at 10:52 AM, Chandrika Gautam 
>> mailto:chandrika.iitd.r...@gmail.com>> wrote:
>> Hi Alfredo,
>> 
>> MTU is 1518 for all 6 interfaces except for lo (loopback) interface(65535).
>> 
>> Regards,
>> Gautam
>> 
>> 
>> On Tue, Jan 2, 2018 at 6:46 PM, Alfredo Cardigliano > <mailto:cardigli...@ntop.org>> wrote:
>> What is the MTU size? It seems you are trying to allocate more than 25GB of 
>> memory,
>> thus you get this failure. Please try reducing the number of slots, if you 
>> cannot reduce
>> the buffers size.
>> 
>> Regards
>> Alfredo
>> 
>>> On 2 Jan 2018, at 10:40, Chandrika Gautam >> <mailto:chandrika.iitd.r...@gmail.com>> wrote:
>>> 
>>> [PF_RING] ERROR: min_num_slots (409600, slot len = 65576) causes memory 
>>> size to wrap
>>> 
>>> [PF_RING] ring_mmap(): unable to allocate memory
>>> 
>>> Regards,
>>> Gautam
>>> 
>>> On Tue, Jan 2, 2018 at 2:46 PM, Alfredo Cardigliano >> <mailto:cardigli...@ntop.org>> wrote:
>>> Hi Gautam
>>> please provide the error, I see just the insmod command..
>>> 
>>> Regards
>>> Alfredo
>>> 
>>> > On 27 Dec 2017, at 06:26, Chandrika Gautam >> > <mailto:chandrika.iitd.r...@gmail.com>> wrote:
>>> >
>>> > Hi,
>>> >
>>> > We are receiving below error while loading the pf_ring using below 
>>> > command even though there is free memory available on server. Please let 
>>> > me know if you need any other information.
>>> >
>>> > insmod pf_ring.ko min_num_slots=409600 enable_tx_capture=0 
>>> > enable_frag_coherence=0
>>> >
>>> > [root@CHOIPPROBE04B logs]# cat /proc/meminfo
>>> > MemTotal:   32743804 kB
>>> > MemFree:24744428 kB
>>> > Buffers:   43208 kB
>>> > Cached:   445016 kB
>>> > SwapCached:0 kB
>>> > Active:  6281204 kB
>>> > Inactive: 384396 kB
>>> > Active(anon):6177672 kB
>>> > Inactive(anon):8 kB
>>> > Active(file): 103532 kB
>>> > Inactive(file):   384388 kB
>>> > Unevictable:   0 kB
>>> > Mlocked:   0 kB
>>> > SwapTotal:  50331644 kB
>>> > SwapFree:   50331644 kB
>>> > Dirty:  2080 kB
>>> > Writeback: 0 kB
>>> > AnonPages:   6200104 kB
>>> > Mapped:   847312 kB
>>> > Shmem:   220 kB
>>> > Slab: 103712 kB
>>> > SReclaimable:  36832 kB
>>> > SUnreclaim:66880 kB
>>> > KernelStack:   14208 kB
>>> > PageTables:19664 kB

Re: [Ntop-misc] PF_RING Error : causes memory size to wrap, unable to allocate memory

2018-01-09 Thread Alfredo Cardigliano
Hi Gautam
no packets are getting lost: the application was unable to open the pf_ring 
socket because the huge MTU, combined with the many slots configured in 
min_num_slots, was requiring too much memory; it now automatically adapts 
min_num_slots when this occurs.

Alfredo

> On 9 Jan 2018, at 08:13, Chandrika Gautam  
> wrote:
> 
> Hi Alfredo,
> 
> Does this error mean that packets are getting lost? The problem is that I have 
> not faced this issue in our lab; it is hit on one of the production servers.
> Can you also explain what have you fixed to resolve this?
> 
> Regards,
> Gautam
> 
> On Sun, Jan 7, 2018 at 5:10 PM, Alfredo Cardigliano  <mailto:cardigli...@ntop.org>> wrote:
> Hi Gautam
> this is not related to hugepages (actually hugepages reserve physical memory, 
> thus
> they can affect allocation, however in this specific case it was due to a 
> limit in the
> pf_ring buffer size).
> Please note I just pushed a patch to handle this case resizing the buffer 
> size when
> limit is exceeded, please check if it’s working for you and let me know.
> 
> Regards
> Alfredo
> 
> 
>> On 5 Jan 2018, at 06:38, Chandrika Gautam > <mailto:chandrika.iitd.r...@gmail.com>> wrote:
>> 
>> Hi Alfredo,
>> 
>> Can this issue be faced if hugepages are not created on server? Because 
>> these errors disappeared after creating the hugepages.
>> 
>> Regards,
>> Gautam
>> 
>> On Fri, Jan 5, 2018 at 10:52 AM, Chandrika Gautam 
>> mailto:chandrika.iitd.r...@gmail.com>> wrote:
>> Hi Alfredo,
>> 
>> MTU is 1518 for all 6 interfaces except for lo (loopback) interface(65535).
>> 
>> Regards,
>> Gautam
>> 
>> 
>> On Tue, Jan 2, 2018 at 6:46 PM, Alfredo Cardigliano > <mailto:cardigli...@ntop.org>> wrote:
>> What is the MTU size? It seems you are trying to allocate more than 25GB of 
>> memory,
>> thus you get this failure. Please try reducing the number of slots, if you 
>> cannot reduce
>> the buffers size.
>> 
>> Regards
>> Alfredo
>> 
>>> On 2 Jan 2018, at 10:40, Chandrika Gautam >> <mailto:chandrika.iitd.r...@gmail.com>> wrote:
>>> 
>>> [PF_RING] ERROR: min_num_slots (409600, slot len = 65576) causes memory 
>>> size to wrap
>>> 
>>> [PF_RING] ring_mmap(): unable to allocate memory
>>> 
>>> Regards,
>>> Gautam
>>> 
>>> On Tue, Jan 2, 2018 at 2:46 PM, Alfredo Cardigliano >> <mailto:cardigli...@ntop.org>> wrote:
>>> Hi Gautam
>>> please provide the error, I see just the insmod command..
>>> 
>>> Regards
>>> Alfredo
>>> 
>>> > On 27 Dec 2017, at 06:26, Chandrika Gautam >> > <mailto:chandrika.iitd.r...@gmail.com>> wrote:
>>> >
>>> > Hi,
>>> >
>>> > We are receiving below error while loading the pf_ring using below 
>>> > command even though there is free memory available on server. Please let 
>>> > me know if you need any other information.
>>> >
>>> > insmod pf_ring.ko min_num_slots=409600 enable_tx_capture=0 
>>> > enable_frag_coherence=0
>>> >
>>> > [root@CHOIPPROBE04B logs]# cat /proc/meminfo
>>> > MemTotal:   32743804 kB
>>> > MemFree:24744428 kB
>>> > Buffers:   43208 kB
>>> > Cached:   445016 kB
>>> > SwapCached:0 kB
>>> > Active:  6281204 kB
>>> > Inactive: 384396 kB
>>> > Active(anon):6177672 kB
>>> > Inactive(anon):8 kB
>>> > Active(file): 103532 kB
>>> > Inactive(file):   384388 kB
>>> > Unevictable:   0 kB
>>> > Mlocked:   0 kB
>>> > SwapTotal:  50331644 kB
>>> > SwapFree:   50331644 kB
>>> > Dirty:  2080 kB
>>> > Writeback: 0 kB
>>> > AnonPages:   6200104 kB
>>> > Mapped:   847312 kB
>>> > Shmem:   220 kB
>>> > Slab: 103712 kB
>>> > SReclaimable:  36832 kB
>>> > SUnreclaim:66880 kB
>>> > KernelStack:   14208 kB
>>> > PageTables:19664 kB
>>> > NFS_Unstable:  0 kB
>>> > Bounce:0 kB
>>> > WritebackTmp:  0 kB
>>> > CommitLimit:66703544 kB
>>> > Committed_AS:9755140 kB
>>> > VmallocTotal:   34359738367 kB

Re: [Ntop-misc] PF_RING Error : causes memory size to wrap, unable to allocate memory

2018-01-07 Thread Alfredo Cardigliano
Hi Gautam
this is not related to hugepages (actually hugepages reserve physical memory, 
thus
they can affect allocation, however in this specific case it was due to a limit 
in the
pf_ring buffer size).
Please note I just pushed a patch to handle this case resizing the buffer size 
when
limit is exceeded, please check if it’s working for you and let me know.

Regards
Alfredo

> On 5 Jan 2018, at 06:38, Chandrika Gautam  
> wrote:
> 
> Hi Alfredo,
> 
> Can this issue be faced if hugepages are not created on server? Because these 
> errors disappeared after creating the hugepages.
> 
> Regards,
> Gautam
> 
> On Fri, Jan 5, 2018 at 10:52 AM, Chandrika Gautam 
> mailto:chandrika.iitd.r...@gmail.com>> wrote:
> Hi Alfredo,
> 
> MTU is 1518 for all 6 interfaces except for lo (loopback) interface(65535).
> 
> Regards,
> Gautam
> 
> 
> On Tue, Jan 2, 2018 at 6:46 PM, Alfredo Cardigliano  <mailto:cardigli...@ntop.org>> wrote:
> What is the MTU size? It seems you are trying to allocate more than 25GB of 
> memory,
> thus you get this failure. Please try reducing the number of slots, if you 
> cannot reduce
> the buffers size.
> 
> Regards
> Alfredo
> 
>> On 2 Jan 2018, at 10:40, Chandrika Gautam > <mailto:chandrika.iitd.r...@gmail.com>> wrote:
>> 
>> [PF_RING] ERROR: min_num_slots (409600, slot len = 65576) causes memory size 
>> to wrap
>> 
>> [PF_RING] ring_mmap(): unable to allocate memory
>> 
>> Regards,
>> Gautam
>> 
>> On Tue, Jan 2, 2018 at 2:46 PM, Alfredo Cardigliano > <mailto:cardigli...@ntop.org>> wrote:
>> Hi Gautam
>> please provide the error, I see just the insmod command..
>> 
>> Regards
>> Alfredo
>> 
>> > On 27 Dec 2017, at 06:26, Chandrika Gautam > > <mailto:chandrika.iitd.r...@gmail.com>> wrote:
>> >
>> > Hi,
>> >
>> > We are receiving below error while loading the pf_ring using below command 
>> > even though there is free memory available on server. Please let me know 
>> > if you need any other information.
>> >
>> > insmod pf_ring.ko min_num_slots=409600 enable_tx_capture=0 
>> > enable_frag_coherence=0
>> >
>> > [root@CHOIPPROBE04B logs]# cat /proc/meminfo
>> > MemTotal:   32743804 kB
>> > MemFree:24744428 kB
>> > Buffers:   43208 kB
>> > Cached:   445016 kB
>> > SwapCached:0 kB
>> > Active:  6281204 kB
>> > Inactive: 384396 kB
>> > Active(anon):6177672 kB
>> > Inactive(anon):8 kB
>> > Active(file): 103532 kB
>> > Inactive(file):   384388 kB
>> > Unevictable:   0 kB
>> > Mlocked:   0 kB
>> > SwapTotal:  50331644 kB
>> > SwapFree:   50331644 kB
>> > Dirty:  2080 kB
>> > Writeback: 0 kB
>> > AnonPages:   6200104 kB
>> > Mapped:   847312 kB
>> > Shmem:   220 kB
>> > Slab: 103712 kB
>> > SReclaimable:  36832 kB
>> > SUnreclaim:66880 kB
>> > KernelStack:   14208 kB
>> > PageTables:19664 kB
>> > NFS_Unstable:  0 kB
>> > Bounce:0 kB
>> > WritebackTmp:  0 kB
>> > CommitLimit:66703544 kB
>> > Committed_AS:9755140 kB
>> > VmallocTotal:   34359738367 kB
>> > VmallocUsed: 1437596 kB
>> > VmallocChunk:   34340005064 kB
>> > HardwareCorrupted: 0 kB
>> > AnonHugePages:   6119424 kB
>> > HugePages_Total:   0
>> > HugePages_Free:0
>> > HugePages_Rsvd:0
>> > HugePages_Surp:0
>> > Hugepagesize:   2048 kB
>> > DirectMap4k:   14336 kB
>> > DirectMap2M: 2009088 kB
>> > DirectMap1G:31457280 kB
>> >
>> > Regards,
>> > Gautam
>> > ___
>> > Ntop-misc mailing list
>> > Ntop-misc@listgateway.unipi.it <mailto:Ntop-misc@listgateway.unipi.it>
>> > http://listgateway.unipi.it/mailman/listinfo/ntop-misc 
>> > <http://listgateway.unipi.it/mailman/listinfo/ntop-misc>
>> 
>> 
>> ___
>> Ntop-misc mailing list
>> Ntop-misc@listgateway.unipi.it <mailto:Ntop-misc@listgateway.unipi.it>
>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc 
>> <http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] PF_RING Error : causes memory size to wrap, unable to allocate memory

2018-01-02 Thread Alfredo Cardigliano
What is the MTU size? It seems you are trying to allocate more than 25GB of 
memory, thus you get this failure. Please try reducing the number of slots 
if you cannot reduce the buffer size.
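
As a rough check: 409600 slots x 65576 bytes per slot is about 26.9 GB, which 
is where the failure comes from. A smaller ring is loaded the same way, e.g. 
(65536 is just an illustrative value to be tuned to your traffic):

insmod pf_ring.ko min_num_slots=65536 enable_tx_capture=0 enable_frag_coherence=0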

Regards
Alfredo

> On 2 Jan 2018, at 10:40, Chandrika Gautam  
> wrote:
> 
> [PF_RING] ERROR: min_num_slots (409600, slot len = 65576) causes memory size 
> to wrap
> 
> [PF_RING] ring_mmap(): unable to allocate memory
> 
> Regards,
> Gautam
> 
> On Tue, Jan 2, 2018 at 2:46 PM, Alfredo Cardigliano  <mailto:cardigli...@ntop.org>> wrote:
> Hi Gautam
> please provide the error, I see just the insmod command..
> 
> Regards
> Alfredo
> 
> > On 27 Dec 2017, at 06:26, Chandrika Gautam  > <mailto:chandrika.iitd.r...@gmail.com>> wrote:
> >
> > Hi,
> >
> > We are receiving below error while loading the pf_ring using below command 
> > even though there is free memory available on server. Please let me know if 
> > you need any other information.
> >
> > insmod pf_ring.ko min_num_slots=409600 enable_tx_capture=0 
> > enable_frag_coherence=0
> >
> > [root@CHOIPPROBE04B logs]# cat /proc/meminfo
> > MemTotal:   32743804 kB
> > MemFree:24744428 kB
> > Buffers:   43208 kB
> > Cached:   445016 kB
> > SwapCached:0 kB
> > Active:  6281204 kB
> > Inactive: 384396 kB
> > Active(anon):6177672 kB
> > Inactive(anon):8 kB
> > Active(file): 103532 kB
> > Inactive(file):   384388 kB
> > Unevictable:   0 kB
> > Mlocked:   0 kB
> > SwapTotal:  50331644 kB
> > SwapFree:   50331644 kB
> > Dirty:  2080 kB
> > Writeback: 0 kB
> > AnonPages:   6200104 kB
> > Mapped:   847312 kB
> > Shmem:   220 kB
> > Slab: 103712 kB
> > SReclaimable:  36832 kB
> > SUnreclaim:66880 kB
> > KernelStack:   14208 kB
> > PageTables:19664 kB
> > NFS_Unstable:  0 kB
> > Bounce:0 kB
> > WritebackTmp:  0 kB
> > CommitLimit:66703544 kB
> > Committed_AS:9755140 kB
> > VmallocTotal:   34359738367 kB
> > VmallocUsed: 1437596 kB
> > VmallocChunk:   34340005064 kB
> > HardwareCorrupted: 0 kB
> > AnonHugePages:   6119424 kB
> > HugePages_Total:   0
> > HugePages_Free:0
> > HugePages_Rsvd:0
> > HugePages_Surp:0
> > Hugepagesize:   2048 kB
> > DirectMap4k:   14336 kB
> > DirectMap2M: 2009088 kB
> > DirectMap1G:31457280 kB
> >
> > Regards,
> > Gautam
> > ___
> > Ntop-misc mailing list
> > Ntop-misc@listgateway.unipi.it <mailto:Ntop-misc@listgateway.unipi.it>
> > http://listgateway.unipi.it/mailman/listinfo/ntop-misc 
> > <http://listgateway.unipi.it/mailman/listinfo/ntop-misc>
> 
> 
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it <mailto:Ntop-misc@listgateway.unipi.it>
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc 
> <http://listgateway.unipi.it/mailman/listinfo/ntop-misc>
> 
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc



___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] PF_RING Error : causes memory size to wrap, unable to allocate memory

2018-01-02 Thread Alfredo Cardigliano
Hi Gautam
please provide the error, I see just the insmod command..

Regards
Alfredo

> On 27 Dec 2017, at 06:26, Chandrika Gautam  
> wrote:
> 
> Hi,
> 
> We are receiving below error while loading the pf_ring using below command 
> even though there is free memory available on server. Please let me know if 
> you need any other information.
> 
> insmod pf_ring.ko min_num_slots=409600 enable_tx_capture=0 
> enable_frag_coherence=0
> 
> [root@CHOIPPROBE04B logs]# cat /proc/meminfo
> MemTotal:   32743804 kB
> MemFree:24744428 kB
> Buffers:   43208 kB
> Cached:   445016 kB
> SwapCached:0 kB
> Active:  6281204 kB
> Inactive: 384396 kB
> Active(anon):6177672 kB
> Inactive(anon):8 kB
> Active(file): 103532 kB
> Inactive(file):   384388 kB
> Unevictable:   0 kB
> Mlocked:   0 kB
> SwapTotal:  50331644 kB
> SwapFree:   50331644 kB
> Dirty:  2080 kB
> Writeback: 0 kB
> AnonPages:   6200104 kB
> Mapped:   847312 kB
> Shmem:   220 kB
> Slab: 103712 kB
> SReclaimable:  36832 kB
> SUnreclaim:66880 kB
> KernelStack:   14208 kB
> PageTables:19664 kB
> NFS_Unstable:  0 kB
> Bounce:0 kB
> WritebackTmp:  0 kB
> CommitLimit:66703544 kB
> Committed_AS:9755140 kB
> VmallocTotal:   34359738367 kB
> VmallocUsed: 1437596 kB
> VmallocChunk:   34340005064 kB
> HardwareCorrupted: 0 kB
> AnonHugePages:   6119424 kB
> HugePages_Total:   0
> HugePages_Free:0
> HugePages_Rsvd:0
> HugePages_Surp:0
> Hugepagesize:   2048 kB
> DirectMap4k:   14336 kB
> DirectMap2M: 2009088 kB
> DirectMap1G:31457280 kB
> 
> Regards,
> Gautam
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc



___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] nProbe Cento Performance Tuning

2017-12-18 Thread Alfredo Cardigliano
Hi Mike
it seems that running in passive wait is causing some loss; this is due to the
poor buffering capacity of i40e cards: when there are no packets, the application
calls poll() to wait for new packets until an interrupt is raised, then it wakes up
and continues processing traffic. What happens is that the buffer fills up during
this period of time, discarding some packets.
In essence it could be that 1. the poll/wake-up period takes too long in passive
wait, or 2. the CPU is in power-saving mode, slowing down processing when there
is not much to do (please check that it is set to “performance” in your BIOS).
This also depends a lot on the traffic rate; we will run some tests to check how
poll behaves at that rate.

Note that this does not happen with -a, as the application is not calling poll()
but is just spinning on the CPU.
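
For the curious, the difference between the two modes boils down to something 
like the following capture loop; this is a simplified sketch (process(), 
running and passive_wait are placeholders), not the actual cento code:

u_char *pkt;
struct pfring_pkthdr hdr;
while (running) {
  if (pfring_recv(ring, &pkt, 0, &hdr, 0 /* do not wait */) > 0) {
    process(pkt, &hdr);          /* packet available: handle it */
  } else if (passive_wait) {
    pfring_poll(ring, 1000 /* ms */); /* sleep in poll() until the NIC raises
                                         an interrupt; the ring can fill up
                                         (and drop) while we are asleep */
  } /* else (-a): loop again immediately, burning a core but
       reacting as soon as packets arrive */
}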

Alfredo

> On 15 Dec 2017, at 22:00, Lang, Michael  wrote:
> 
> Alfredo,
> 
> Here is what that looks like (much better, I don’t understand why):
> 



___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] nProbe Cento Performance Tuning

2017-12-15 Thread Alfredo Cardigliano
What happens if you add -a to pfcount?

Alfredo

> On 15 Dec 2017, at 20:25, Lang, Michael  wrote:
> 
> Hi Alfredo,
> 
> Thank you for the response.  Here is the output you requested:
> 
> [root@madmax ~]# pfcount -i zc:enp1s0
> Using PF_RING v.7.0.0
> Capturing from zc:enp1s0 [mac: 3C:FD:FE:A2:B9:58][if_index: 5][speed: 
> 4Mb/s]
> # Device RX channels: 1
> # Polling threads:1
> Dumping statistics on /proc/net/pf_ring/stats/7413-enp1s0.682
> =
> Absolute Stats: [1'596'557 pkts total][1'123'809 pkts dropped][70.4% dropped]
> [472'748 pkts rcvd][478'241'120 bytes rcvd]
> =
> 
> =
> Absolute Stats: [3'105'703 pkts total][2'154'407 pkts dropped][69.4% dropped]
> [951'296 pkts rcvd][959'971'922 bytes rcvd][951'229.41 pkt/sec][7'679.23 
> Mbit/sec]
> =
> Actual Stats: [478'548 pkts rcvd][1'000.07 ms][478'514.50 pps][3.85 Gbps]
> =



___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] nProbe Cento Performance Tuning

2017-12-15 Thread Alfredo Cardigliano
Hi Mike
could you try running pfcount -i zc:enp1s0 (configuring the interface with a 
single RSS queue)?
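
(With the plain driver the RSS queue count can usually be reduced with 
something like "ethtool -L enp1s0 combined 1"; with the ZC driver it is set 
through the driver's RSS module parameter. Please double-check the exact 
syntax for your setup.)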

Thank you
Alfredo

> On 15 Dec 2017, at 16:29, Lang, Michael  wrote:
> 
> Hello,
> 
> I’m in the process of configuring new 40G hardware with nProbe Cento to 
> replace older hardware with 2 x 10G and nProbe.  It is going well however I’m 
> not yet getting zero packet loss which is what I was expecting.  Packet loss 
> is low but not zero, about 0.1% with traffic rate 13.5 Gbps, 1.7 Mpps.
> 
> Are there tuning recommendations that I can try to achieve zero packet loss?
> 
> Here are some details of my environment:
> 
> OS: CentOS Linux release 7.4.1708
> Kernel: Linux cento 3.10.0-693.11.1.el7.x86_64 #1 SMP Mon Dec 4 23:52:40 UTC 
> 2017 x86_64 x86_64 x86_64 GNU/Linux
> REPO: ntop CentOS Stable
> CPU: Intel(R) Xeon(R) E3-1275 v6 (4 cores + HT = 8 Logical CPUs)
> MEM: 8 GB
> NIC: Intel XL710-QDA1
> NIC Driver: i40e_zc v2.2.4
> RSS: Auto configured to 8
> PFRING: 7.0.0 ($Revision: 
> 7.0.0-stable:f18cdc778a3c957689125dd7c52c40c2277703fa$)
> Huge pages: 2048
> Cento version: nProbe cento v.1.2.171211 
> 1.2-stable:de3cd10fb80990c50e753af6026787fe62b6d2b3:20171211
> Cento Invocation: cento --interface zc:enp1s0@[0-7] --lifetime-timeout 300 
> --v5 :
> 
> CPU Utilization is generally under 2% for all cores except 7 which is around 
> 10%.  Memory Utilization is 5.0 GB / 7.43 GB, no swap usage.
> 
> -  Mike
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it 
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc 
> 


___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] nscrub problems

2017-12-04 Thread Alfredo Cardigliano
Hi Spiros
please read my answers below.

> On 3 Dec 2017, at 08:15, Spiros Papageorgiou  wrote:
> 
> HI all,
> 
> I have an nscrub setup on an LTS16. The config is routing/assymetric mode. I 
> have a few problems and questions:
> - When I stop nscrub the nscrub-VM is left at a cripppled state where it 
> can't even ping IPs that are on connected interfaces (ex the gateway). Why is 
> that? How can i avoid this?
> 
Please provide your nscrub configuration file (or cli), ifconfig, and cat 
/proc/net/pf_ring/dev/ethX/info for the interface you are using in nscrub.
(feel free to write to my email address directly if you don’t want to share 
your data on the ml)
> - The white/black/gray dynamic lists are always empty when mitigating even 
> when nscrub drops attack packets. I'm reading with
> attackers?target_id=pc\&action=list\&profile=black\&list=dynamic
> 
Please send me your target configuration, you can dump it with nscrub-export
> - When pinging from the internet a host defined as a target in scrub, I can 
> see many packets are delayed.
> 64 bytes from x.y.z.130: icmp_seq=1 ttl=125 time=2.42 ms
> 64 bytes from x.y.z.130: icmp_seq=2 ttl=125 time=3002 ms
> 64 bytes from x.y.z.130: icmp_seq=3 ttl=125 time=2002 ms
> 64 bytes from x.y.z.130: icmp_seq=4 ttl=125 time=1002 ms
> 64 bytes from x.y.z.130: icmp_seq=5 ttl=125 time=3.09 ms
> This also happens when the target is in bypass enabled mode. Why this happens 
> and how can i avoid this?
> 
I need to see the nscrub configuration as above.
> - UDP packets are dropped even when I have default action "drop disable". Is 
> this a bug? See the below snippet, where I try to disable udp/src/53/drop. It 
> accepts the command but it there is not result.
> root@nscrub:~# nscrub-export all
> 
> target pc profile DEFAULT udp src 53 drop enable
> 
> root@nscrub:~# curl -u admin:admin 
> http://127.0.0.1:8880/profile/udp/src/53/accept?target_id=pc\&profile=default\&action=disable
>  
> 
> { "envelope_ver": "1.0", "hostname": "katharistis", "epoch": 1512284852, 
> "status": 200, "description": "OK", "data": { "function": 
> "\/profile\/udp\/src\/53\/accept", "return": "success" } }root@nscrub:~#
> root@nscrub:~# nscrub-export all
> target pc profile DEFAULT udp src 53 drop enable
> 


Please note that you are using “accept” instead of “drop” in the url; I 
recommend using nscrub-cli, which is clearer.
> - What is the suggested config for mitigating DNS attacks? The victim still 
> needs to be able to do DNS requests and get the answers. Keep in mind that 
> nscrub does not see the DNS requests from the victim (assym mode).
> 
There are a few settings to mitigate DNS attacks that apply to requests:
dns request check_method 
dns request rate src [PPS]
dns request rate transaction_id [PPS]
dns request threshold [PPS]
dns request type NUM drop [enable|disable]

As of answers, all you can do is to configure UDP rating:

udp rate src [PPS]
udp rate dst [PPS]
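
For example, from nscrub-cli (the PPS values below are purely illustrative and 
must be tuned to your traffic):

dns request threshold 5000
dns request rate src 100
udp rate src 2000
udp rate dst 2000
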
> - Are the mitigation capabilities of nscrub effective when I redirect an 
> attacked IP through nscrub in real time, or does nscrub need time to profile a 
> "first seen IP" before mitigating attacks?
> 
With the current algorithms, you can redirect the IP on demand.
> - As far as i understand, nscrub tests IPs using some algorithms and 
> classifies the IPs to the white/black/grey list. Is that right?
> 
Yes, some of the algorithms work this way.

Alfredo
> Sp
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc



___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] Compilation of ixgbe-zc- module on RHEL 6.7 with kernel version-2.6.32-696.16.1.el6.x86_64

2017-11-30 Thread Alfredo Cardigliano
Hi Gautam
said that it’s untested, you should be able to use drivers from 7 with 6.2

Alfredo

> On 30 Nov 2017, at 10:54, Chandrika Gautam  
> wrote:
> 
> Hi Alfredo,
> 
> Thanks for the prompt reply!
> 
> We can not use latest PF_RING 7.0 immediately but will plan that soon.
> Meanwhile we can safely use the ixgbe driver code from 7.0 with PF_RING 6.2.0 
> right ?
> 
> 
> Regards,
> Gautam
> 
> On Thu, Nov 30, 2017 at 2:09 PM, Alfredo Cardigliano  <mailto:cardigli...@ntop.org>> wrote:
> Hi Gautam
> please move to latest stable (7.0) or use the drivers included in that 
> version,
> we no longer support 6.2..
> 
> Alfredo
> 
>> On 30 Nov 2017, at 05:50, Chandrika Gautam > <mailto:chandrika.iitd.r...@gmail.com>> wrote:
>> 
>> Hi all,
>> 
>> We are using PF_RING-6.2.0 and IXGBE-ZC compilation failing on the RHEL 6.7 
>> with kernel version-2.6.32-696.16.1.el6.x86_64
>> 
>> Below are the errors received -
>> 
>> /root/rpmbuild/BUILD/PF_RING-ZC-2.6.32-696.16.1.el6.x86_64-6.2.0/PF_RING-ZC-6.2.0/src/kcompat.h:
>>  In function '__kc_vlan_get_protocol':
>> /root/rpmbuild/BUILD/PF_RING-ZC-2.6.32-696.16.1.el6.x86_64-6.2.0/PF_RING-ZC-6.2.0/src/kcompat.h:3404:
>>  error: implicit declaration of function 'vlan_tx_tag_present'
>> In file included from 
>> /root/rpmbuild/BUILD/PF_RING-ZC-2.6.32-696.16.1.el6.x86_64-6.2.0/PF_RING-ZC-6.2.0/src/ixgbe_main.c:51:
>> /root/rpmbuild/BUILD/PF_RING-ZC-2.6.32-696.16.1.el6.x86_64-6.2.0/PF_RING-ZC-6.2.0/src/ixgbe.h:
>>  In function 'ixgbe_qv_unlock_napi':
>> /root/rpmbuild/BUILD/PF_RING-ZC-2.6.32-696.16.1.el6.x86_64-6.2.0/PF_RING-ZC-6.2.0/src/ixgbe.h:602:
>>  error: too few arguments to function 'napi_gro_flush'
>> make[2]: *** 
>> [/root/rpmbuild/BUILD/PF_RING-ZC-2.6.32-696.16.1.el6.x86_64-6.2.0/PF_RING-ZC-6.2.0/src/ixgbe_main.o]
>>  Error 1
>> make[1]: *** 
>> [_module_/root/rpmbuild/BUILD/PF_RING-ZC-2.6.32-696.16.1.el6.x86_64-6.2.0/PF_RING-ZC-6.2.0/src]
>>  Error 2
>> make[1]: Leaving directory `/usr/src/kernels/2.6.32-696.16.1.el6.x86_64'
>> make: *** [default] Error 2
>> make: Leaving directory 
>> `/root/rpmbuild/BUILD/PF_RING-ZC-2.6.32-696.16.1.el6.x86_64-6.2.0/PF_RING-ZC-6.2.0/src'
>> error: Bad exit status from /var/tmp/rpm-tmp.TtmLvI (%install)
>> 
>> Please help to resolve this.
>> 
>> 
>> 
>> 
>> Regards,
>> Gautam
>> ___
>> Ntop-misc mailing list
>> Ntop-misc@listgateway.unipi.it <mailto:Ntop-misc@listgateway.unipi.it>
>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc 
>> <http://listgateway.unipi.it/mailman/listinfo/ntop-misc>
> 
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it <mailto:Ntop-misc@listgateway.unipi.it>
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc 
> <http://listgateway.unipi.it/mailman/listinfo/ntop-misc>
> 
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc



___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] Compilation of ixgbe-zc- module on RHEL 6.7 with kernel version-2.6.32-696.16.1.el6.x86_64

2017-11-30 Thread Alfredo Cardigliano
Hi Gautam
please move to latest stable (7.0) or use the drivers included in that version,
we no longer support 6.2..

Alfredo

> On 30 Nov 2017, at 05:50, Chandrika Gautam  
> wrote:
> 
> Hi all,
> 
> We are using PF_RING-6.2.0 and IXGBE-ZC compilation failing on the RHEL 6.7 
> with kernel version-2.6.32-696.16.1.el6.x86_64
> 
> Below are the errors received -
> 
> /root/rpmbuild/BUILD/PF_RING-ZC-2.6.32-696.16.1.el6.x86_64-6.2.0/PF_RING-ZC-6.2.0/src/kcompat.h:
>  In function '__kc_vlan_get_protocol':
> /root/rpmbuild/BUILD/PF_RING-ZC-2.6.32-696.16.1.el6.x86_64-6.2.0/PF_RING-ZC-6.2.0/src/kcompat.h:3404:
>  error: implicit declaration of function 'vlan_tx_tag_present'
> In file included from 
> /root/rpmbuild/BUILD/PF_RING-ZC-2.6.32-696.16.1.el6.x86_64-6.2.0/PF_RING-ZC-6.2.0/src/ixgbe_main.c:51:
> /root/rpmbuild/BUILD/PF_RING-ZC-2.6.32-696.16.1.el6.x86_64-6.2.0/PF_RING-ZC-6.2.0/src/ixgbe.h:
>  In function 'ixgbe_qv_unlock_napi':
> /root/rpmbuild/BUILD/PF_RING-ZC-2.6.32-696.16.1.el6.x86_64-6.2.0/PF_RING-ZC-6.2.0/src/ixgbe.h:602:
>  error: too few arguments to function 'napi_gro_flush'
> make[2]: *** 
> [/root/rpmbuild/BUILD/PF_RING-ZC-2.6.32-696.16.1.el6.x86_64-6.2.0/PF_RING-ZC-6.2.0/src/ixgbe_main.o]
>  Error 1
> make[1]: *** 
> [_module_/root/rpmbuild/BUILD/PF_RING-ZC-2.6.32-696.16.1.el6.x86_64-6.2.0/PF_RING-ZC-6.2.0/src]
>  Error 2
> make[1]: Leaving directory `/usr/src/kernels/2.6.32-696.16.1.el6.x86_64'
> make: *** [default] Error 2
> make: Leaving directory 
> `/root/rpmbuild/BUILD/PF_RING-ZC-2.6.32-696.16.1.el6.x86_64-6.2.0/PF_RING-ZC-6.2.0/src'
> error: Bad exit status from /var/tmp/rpm-tmp.TtmLvI (%install)
> 
> Please help to resolve this.
> 
> 
> 
> 
> Regards,
> Gautam
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc



___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] nscrub config

2017-11-27 Thread Alfredo Cardigliano
Hi Spiros
please read below

> On 27 Nov 2017, at 12:44, Spiros Papageorgiou  wrote:
> 
> Hi all,
> 
> I need some help configuring nscrub. My setup is routed/symmetric for now:
> Internet <---> ens160 (native vlan) <> ens160.838 (servers)
> 
> with just one phy interface (--wan-interface=zc:ens160).
> 
> ens160Link encap:Ethernet  HWaddr 3c:fd:fe:18:0c:e0
>   inet addr:x.y.z.34  Bcast:x.y.z.63  Mask:255.255.255.224
> ens160.838 Link encap:Ethernet  HWaddr 3c:fd:fe:18:0c:e0
>   inet addr:x.y.z.129  Bcast:x.y.z.255  Mask:255.255.255.128
> 
> nscrub-cli:
> katharistis>
> localhost:8880> vlan id 1 reforge 838
> src_vlan_id: 1
> dst_vlan_id: 838
> 
> katharistis> list targets
> targets:
>   id: ntuanocnet
>   subnet:
>x.y.z.128/28
> 
> routingtable:
>   destination: 0.0.0.0/0
>   gw: x.y.z.33
> 
> 
> The setup is not working. I can't actually ping my server at x.y.z.130 (on 
> ens160.838).
> Questions:
> - What is the correct setup for this?

You need to configure 2 VLANs (e.g. 1 and 838 as in your current nscrub 
configuration),
nScrub will reforge the VLAN from 1 to 838. This means that ingress packets 
should be tagged with vlan 1,
and they will be sent to VLAN 838.

> - Is the vlan reforging working as it is supposed to? I don't really understand 
> what it is supposed to do... I would like to set the output vlan, but reforge 
> needs to do a rewrite. What exactly is it rewriting?
> - I guess in pfring_zc mode, packets don't go up to the kernel. So, who is doing 
> ARP requests for x.y.z.130 or x.y.z.33 (gw)?

The kernel is bypassed; however, it is still involved for ARP traffic.

> - When nscrub is running, can i see the packets with tcpdump on en160 and 
> ens160.838?

With ZC the kernel is bypassed, thus the only way to see packets with tcpdump is 
to attach to the nscrub mirror queues (please refer to the user’s guide).

Alfredo

> 
> Thanx,
> Sp
> 
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc



___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] nscrub installation

2017-11-26 Thread Alfredo Cardigliano
Hi Spiros
did you configure the i40e zc driver according to 
https://github.com/ntop/PF_RING/blob/dev/doc/README.apt_rpm_packages.md ?
Please provide cat /proc/net/pf_ring/dev/ens160/info

Best Regards
Alfredo

> On 25 Nov 2017, at 21:37, Spiros Papageorgiou  wrote:
> 
> Hi all,
> 
> I'm running into problems with installing nscrub.
> 
> While I had made it start successfully, it has since stopped working.
> 
> I have updated and removed/reinstalled pfring/dkms.
> 
> Thank you,
> 
> Spiros
> 
> 
> Dmesg:
> 
> [5.914450] i40e :03:00.0 ens160: NIC Link is Up 10 Gbps Full Duplex, 
> Flow Control: None
> [5.914610] 8021q: adding VLAN 0 to HW filter on device ens160
> [5.914683] i40e :03:00.0 ens160: adding 3c:fd:fe:18:0c:e0 vid=0
> [5.920257] vmxnet3 :0b:00.0 ens192: intr type 3, mode 0, 5 vectors 
> allocated
> [5.920647] vmxnet3 :0b:00.0 ens192: NIC Link is Up 1 Mbps
> [5.920753] 8021q: adding VLAN 0 to HW filter on device ens192
> [5.928329] cgroup: new mount options do not match the existing 
> superblock, will be ignored
> [5.976789] NET: Registered protocol family 40
> [8.401998] floppy0: no floppy controllers found
> [8.514589] pf_ring: module verification failed: signature and/or required 
> key missing - tainting kernel
> [8.515040] [PF_RING] Welcome to PF_RING 7.1.0 ($Revision: 
> dev:a16ab9b0be7cd806c6be6c4bebbffc843b1c0751$)
>(C) 2004-17 ntop.org
> [8.515042] [PF_RING] Min # ring slots 4096
> [8.515043] [PF_RING] Slot version 16
> [8.515043] [PF_RING] Capture TX   Yes [RX+TX]
> [8.515044] [PF_RING] IP DefragmentNo
> [8.515054] [PF_RING] registered /proc/net/pf_ring/
> [8.515055] NET: Registered protocol family 27
> [8.515062] [PF_RING] Initialized correctly
> [8.540319] dca service started, version 1.12.1
> [   11.641688] [PF_RING] pfring_select_zc_dev:5703 ens160@0 mapping failed or 
> not a ZC device
> 
> Nscrub log:
> 
> 25/Nov/2017 22:14:21 [main.cpp:102] Creating engine instance..
> 25/Nov/2017 22:14:21 [Redis.cpp:77] Successfully connected to Redis 
> 127.0.0.1:6379@0
> 25/Nov/2017 22:14:21 [main.cpp:114] Registering REST server..
> 25/Nov/2017 22:14:21 [main.cpp:122] Starting engine..
> 25/Nov/2017 22:14:21 [nScrub.cpp:205] Welcome to nscrub x86_64 v.1.0.171125 
> (620) - (C) 2017 ntop.org
> 25/Nov/2017 22:14:21 [nScrub.cpp:222] System initialisation
> 25/Nov/2017 22:14:21 [nScrub.cpp:249] ERROR: Error reading device zc:ens160 
> info (please make sure pf_ring.ko is loaded)
> 25/Nov/2017 22:14:21 [nScrub.cpp:473] System initialised
> 
> The following is an earlier log, where I had the folowing nscrub.conf 
> settings, with a VM of 4 cores:
> 
> # Processing thread(s) CPU core(s) affinity (colon-separated list in case of 
> RSS)
> --thread-affinity=0:1:2:3
> # Time thread CPU core affinity
> #--time-source-affinity=4
> # Other threads affinity
> #--other-affinity=4
> 
> nscrub was working though, but after some time it started producing the 
> following errors:
> 
> Nscrub log:
> 
> 25/Nov/2017 21:49:49 [main.cpp:102] Creating engine instance..
> 25/Nov/2017 21:49:49 [Redis.cpp:77] Successfully connected to Redis 
> 127.0.0.1:6379@0
> 25/Nov/2017 21:49:49 [main.cpp:114] Registering REST server..
> 25/Nov/2017 21:49:49 [main.cpp:122] Starting engine..
> 25/Nov/2017 21:49:49 [nScrub.cpp:205] Welcome to nscrub x86_64 v.1.0.171125 
> (620) - (C) 2017 ntop.org
> 25/Nov/2017 21:49:49 [Utils.cpp:244] ERROR: Error while binding to core 4: 
> errno=22
> 25/Nov/2017 21:49:49 [Utils.cpp:244] ERROR: Error while binding to core 4: 
> errno=22
> 25/Nov/2017 21:49:49 [nScrub.cpp:222] System initialisation
> 25/Nov/2017 21:49:49 [Utils.cpp:244] ERROR: Error while binding to core 4: 
> errno=22
> 25/Nov/2017 21:49:49 [nScrub.cpp:249] ERROR: Error reading device zc:ens160 
> info (please make sure pf_ring.ko is loaded)
> 25/Nov/2017 21:49:49 [nScrub.cpp:473] System initialised
> 
> 
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc



___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] pfring version issues

2017-11-17 Thread Alfredo Cardigliano
Hi Felix
please make sure your application is not linking the wrong pfring
library (not sure if you are using static or dynamic linking), I do not
see any other possible reason.
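
For example, with dynamic linking you can verify which library is actually 
loaded (your_application is a placeholder):

ldd ./your_application | grep -i pfring

and check that the path it resolves to belongs to the 7.0.0 install.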

Best Regards
Alfredo

> On 16 Nov 2017, at 11:25, Felix Erlacher  wrote:
> 
> Dear All,
> 
> I am trying to use the pfring zc drivers and libraries to speed up
> packet capturing for a network analysis application.
> After using the newest version of the pfring driver from the github repo
> (using make install and the load_driver script), I wanted to switch back
> to a stable branch, namely 7.0.0-stable. I removed every file related to
> pfring (and pf_ring and ixgbe) and also checked manually if all libs and
> drivers are removed. Then I switched to the stable branch, and rebuilt
> and reinstalled kernel modules, libs and drivers. Now, all of the
> included userland applications ([pf,z]count among others) report that I
> am using PF_RING 7.0. Also cat /proc/net/pf_ring/info shows me I am
> using 7.0.0.
> 
> The problem is that my application reports I am still using PF_RING
> 7.1.0 (in the "no valid license...demo mode" message). I did recompile
> the application from scratch after the pfring reinstall process.
> Does anyone have any hint why my application thinks I am using PF_RING
> 7.1.0 altough I only have 7.0.0 modules, libs and drivers installed?
> 
> For the record:
> I am using Ubuntu 16.04.3 with kernel 4.4.0-98-generic and Intel 82599ES
> NICs with ixgbe drivers.
> 
> thanks and regards,
> 
> Felix
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc



___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] Fw: PFRING 7.0 ZC ixgbe (data all zeros) non-ZC (correct)

2017-11-11 Thread Alfredo Cardigliano
Hi
please provide:
1. card model
2. latest pf_ring version that was working, and the version you are using now
3. do you see the same when using pfcount -i zc:enp10s0f0 -v 2?

Thank you
Alfredo

> On 10 Nov 2017, at 23:50, buck kavin  wrote:
> 
> Trying a new list. Still having issues here. Possibly setup related. 
> Appreciate any insight. Thanks!
> 
> 
> On Wednesday, November 8, 2017 5:35 PM, buck kavin  
> wrote:
> 
> 
> Hello. I've been using a ZC application for years, but after upgrading the 
> license and the libs I now see no data when using ZC.
> Is this a known issue? I'll provide more data if required. Thanks!
> 
> galactus@silver-surfer:~/PF_RING-dev/userland/examples_zc$ sudo ./zfanout_ipc 
> -i zc:enp10s0f0 -c 1 -n 2
> Starting fanout for 2 slave applications..
> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> =
> Absolute Stats: Recv 1 pkts (0 drops) 98 bytes - Forwarded 2 pkts (0 drops)
> =
> 
> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 
> 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
> ^CLeaving...
> =
> Absolute Stats: Recv 2 pkts (0 drops) 196 bytes - Forwarded 4 pkts (0 drops)
> Actual Stats: Recv 1.07 pps (0.00 drops) 0.00 Gbps - Forwarded 2.14 pps (0.00 
> drops)
> =
> 
> =
> Absolute Stats: Recv 2 pkts (0 drops) 196 bytes - Forwarded 4 pkts (0 drops)
> Actual Stats: Recv 0.00 pps (0.00 drops) 0.00 Gbps - Forwarded 0.00 pps (0.00 
> drops)
> =
> 
> galactus@silver-surfer:~/PF_RING-dev/userland/examples_zc$ sudo ./zfanout_ipc 
> -i enp10s0f0 -c 1 -n 2
> Starting fanout for 2 slave applications..
> 9C B6 D0 E1 A4 D7 00 0C 29 01 23 95 08 00 45 00 00 28 1F B4 40 00 40 06 97 93 
> C0 A8 01 32 C0 A8 01 06 0B B8 AD 38 00 00 00 00 1D C1 58 5A 50 14 00 00 FD 3B 
> 00 00 00 00 00 00 00 00
> =
> Absolute Stats: Recv 1 pkts (0 drops) 84 bytes - Forwarded 2 pkts (0 drops)
> =
> 
> 9C B6 D0 E1 A4 D7 00 0C 29 01 23 95 08 00 45 00 00 28 1F B4 40 00 40 06 97 93 
> C0 A8 01 32 C0 A8 01 06 0B B8 AD 38 00 00 00 00 1D C1 58 5A 50 14 00 00 FD 3B 
> 00 00 00 00 00 00 00 00
> =
> Absolute Stats: Recv 2 pkts (0 drops) 168 bytes - Forwarded 4 pkts (0 drops)
> Actual Stats: Recv 1.00 pps (0.00 drops) 0.00 Gbps - Forwarded 2.00 pps (0.00 
> drops)
> =
> 
> 9C B6 D0 E1 A4 D7 00 0C 29 01 23 95 08 00 45 00 00 28 1F B5 40 00 40 06 97 92 
> C0 A8 01 32 C0 A8 01 06 0B B8 AD 3A 00 00 00 00 5F AB 56 BA 50 14 00 00 BC EF 
> 00 00 00 00 00 00 00 00
> ^CLeaving...
> =
> Absolute Stats: Recv 3 pkts (0 drops) 252 bytes - Forwarded 6 pkts (0 drops)
> Actual Stats: Recv 3.66 pps (0.00 drops) 0.00 Gbps - Forwarded 7.32 pps (0.00 
> drops)
> =
> 
> =
> Absolute Stats: Recv 3 pkts (0 drops) 252 bytes - Forwarded 6 pkts (0 drops)
> Actual Stats: Recv 0.00 pps (0.00 drops) 0.00 Gbps - Forwarded 0.00 pps (0.00 
> drops)
> =
> 
> 
> 
> 
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc



___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] nscrub routing mode setup problems

2017-11-06 Thread Alfredo Cardigliano
Hi Spiros
yes, you need to create the VLAN interfaces and assign IPs in Linux; when you 
start nscrub it will detect them.
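
For example (device and VLAN id taken from your earlier mail, the address is 
the one you reported; adapt as needed):

ip link add link ens160 name ens160.838 type vlan id 838
ip addr add x.y.z.129/25 dev ens160.838
ip link set ens160.838 up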

Alfredo

> On 6 Nov 2017, at 15:29, Spiros Papageorgiou  wrote:
> 
> Hi all,
> 
> It is not clear to me how to configure nscrub when I use routing mode. I am 
> using one phy interface for both the WAN and LAN sides and I am using VLAN 
> reforging to set up the output VLAN. I have also set up my routing table via 
> the API. What I don't understand is how/where to configure the IP details on 
> the interfaces. Since nscrub acts as a router, both the input and output 
> interfaces need to have an IP address/mask.
> 
> Do I need to setup the IP address/mask on the linux interface and also create 
> a (logical) sub interface with vlan/ip address for the output  (as I said i'm 
> using one physical interface for both input and output)?
> 
> Thanx,
> 
> Spiros
> 
> 
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc



___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] Using upgraded i40e from pf_ring 7.0 (2.2.4), on pf_ring 6.6.0

2017-11-02 Thread Alfredo Cardigliano
Hi Amir
the driver API is the same, thus you can do that; however, I do not recommend 
upgrading to 2.2.4 as this version does not seem to be fully stable yet. We are 
working on this.

Alfredo

> On 2 Nov 2017, at 10:31, Amir Kaduri  wrote:
> 
> Hi,
> 
> I'm using pf_ring 6.6.0 (not using  ZC) on CentOS 7.0.
> I would like to upgrade the i40e driver (only) to 2.2.4 (i.e. take the code 
> of i40e from pf_ring 7.0 and put it in pf_ring 6.6.0 and use them together).
> The question is: how risky is it? How tightly is pf_ring tied to the i40e 
> driver version?
> 
> Thanks,
> Amir
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc



___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] pf_ring: Clarification regarding the relation between poll-watermark and poll-duration

2017-10-31 Thread Alfredo Cardigliano
Hi Amir
that's correct; however, for some reason it seems this is not the case in your 
tests.
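
For reference, the intended combination is simply the following (a minimal 
sketch, error handling omitted; `ring` is assumed to be an open handle):

/* keep the default watermark (128) and cap the wait at 1 ms, so packets
   are delivered as soon as either condition is met */
pfring_set_poll_duration(ring, 1 /* ms */);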

Alfredo

> On 31 Oct 2017, at 12:08, Amir Kaduri  wrote:
> 
> Thanks. tot_insert apparently works ok.
> 
> Regarding function copy_data_to_ring():
> At the end of it there is the statement:
>  if(num_queued_pkts(pfr) >= pfr->poll_num_pkts_watermark)
>  wake_up_interruptible(&pfr->ring_slots_waitqueue);
> 
> Since the watermark is set to 128 and I send <128 packets, this causes them to 
> wait in the kernel queue.
> But since poll_duration is set to 1 (1 millisecond, I assume), I expect the 
> condition to also take this into account (meaning, there are packets in the 
> queue but 1 millisecond has passed and they weren't read),
> so wake_up_interruptible should also be called. No?
> 
> Thanks,
> Amir
> 
> 
>> On Tue, Oct 31, 2017 at 10:20 AM, Alfredo Cardigliano  
>> wrote:
>> 
>> 
>>> On 31 Oct 2017, at 08:42, Amir Kaduri  wrote:
>>> 
>>> Hi Alfredo,
>>> 
>>> I'm trying to debug the issue, and I have a question about the code, to 
>>> make sure that there is no problem there:
>>> Specifically, I'm referring to the function "pfring_mod_recv":
>>> In order that the line that refers to poll_duration ("pfring_poll(ring, 
>>> ring->poll_duration)") will be reached, there are 2 conditions that should 
>>> occur:
>>> 1. pfring_there_is_pkt_available(ring) should return false (otherwise, the 
>>> function returns at the end of the condition).
>>> 2. wait_for_incoming_packet should be set to true.
>>> Currently, I'm referring to the first one:
>>> In order that the macro pfring_there_is_pkt_available(ring) will return 
>>> false, ring->slots_info->tot_insert should be equal to 
>>> ring->slots_info->tot_read.
>>> What I see in my tests is that they don't get equal. I always see that 
>>> tot_insert>tot_read, and sometimes they get equal when tot_read++ is called, 
>>> but it happens inside the condition, so the "pfring_mod_recv" returns with 
>>> 1.
>> 
>> It seems to be correct. The kernel module inserts packets into the ring 
>> increasing tot_insert, the userspace library reads packets from the ring 
>> increasing tot_read. This means that if tot_insert == tot_read there is no 
>> packet to read. If there is a bug, it should be in the kernel module that is 
>> somehow not adding packets to the ring (thus not updating tot_insert).
>> 
>> Alfredo
>> 
>>> I remind that I set the watermark to be high, in order to see the 
>>> poll_duration takes effect.
>>> 
>>> Could you please approve that you don't see any problem in the code?
>>> 
>>> Thanks,
>>> Amir 
>>> 
>>> 
>>>> On Thu, Oct 26, 2017 at 12:22 PM, Alfredo Cardigliano 
>>>>  wrote:
>>>> Hi Amir
>>>> yes, that’s the way it should work, if this is not the case, some 
>>>> debugging is needed to identify the problem
>>>> 
>>>> Alfredo
>>>> 
>>>>> On 26 Oct 2017, at 10:14, Amir Kaduri  wrote:
>>>>> 
>>>>> Basically, the functionality that I would like to have is even if less 
>>>>> than poll-watermark-threshold (default: 128) packets arrives the socket, 
>>>>> they will be forwarded to userland if 1 millisecond has passed.
>>>>> How can I gain this? Isn't it by using  pfring_set_poll_duration()?
>>>>> 
>>>>> Alfredo, could you please clarify?
>>>>> 
>>>>> Thanks,
>>>>> Amir
>>>>> 
>>>>>> On Wed, Oct 18, 2017 at 8:48 PM, Amir Kaduri  wrote:
>>>>>> Hi,
>>>>>> 
>>>>>> I'm using pf_ring 6.6.0 (no ZC) on CentOS 7, on 10G interfaces (ixgbe 
>>>>>> drivers).
>>>>>> As far as I understand the relation between poll-watermark and 
>>>>>> poll-duration, packets will be queued untill one of comes first: or 
>>>>>> passing the poll-watermark packets threshold, or a poll-duration 
>>>>>> milliseconds has passed.
>>>>>> I set poll-watermark to the maximum (4096) (using 
>>>>>> pfring_set_poll_watermark()) and set poll-duration to the minimum (1) 
>>>>>> (using pfring_set_poll_duration()).
>>>>>> I've sent 400 packets to the socket. I see that they are received by the 
>>>>>> NIC, but they didn't pass to userland. Only when passing 500 packets, a 
>>>>>> chunk of them passed to userland.

Re: [Ntop-misc] pf_ring: Clarification regarding the relation between poll-watermark and poll-duration

2017-10-31 Thread Alfredo Cardigliano


> On 31 Oct 2017, at 08:42, Amir Kaduri  wrote:
> 
> Hi Alfredo,
> 
> I'm trying to debug the issue, and I have a question about the code, to make 
> sure that there is no problem there:
> Specifically, I'm referring to the function "pfring_mod_recv":
> In order that the line that refers to poll_duration ("pfring_poll(ring, 
> ring->poll_duration)") will be reached, there are 2 conditions that should 
> occur:
> 1. pfring_there_is_pkt_available(ring) should return false (otherwise, the 
> function returns at the end of the condition).
> 2. wait_for_incoming_packet should be set to true.
> Currently, I'm referring to the first one:
> In order that the macro pfring_there_is_pkt_available(ring) will return 
> false, ring->slots_info->tot_insert should be equal to 
> ring->slots_info->tot_read.
> What I see in my tests is that they don't get equal. I always see that 
> tot_insert>tot_read, and sometimes they get equal when tot_read++ is called, 
> but it happens inside the condition, so the "pfring_mod_recv" returns with 1.

It seems to be correct. The kernel module inserts packets into the ring 
increasing tot_insert, the userspace library reads packets from the ring 
increasing tot_read. This means that if tot_insert == tot_read there is no 
packet to read. If there is a bug, it should be in the kernel module that is 
somehow not adding packets to the ring (thus not updating tot_insert).
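
In other words, the userspace check is essentially the following (paraphrasing 
the macro, not quoting pfring.h verbatim):

/* a packet is available only if the kernel has inserted more
   packets into the ring than userspace has read so far */
#define pfring_there_is_pkt_available(ring) \
  ((ring)->slots_info->tot_insert != (ring)->slots_info->tot_read)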

Alfredo

> I remind that I set the watermark to be high, in order to see the 
> poll_duration takes effect.
> 
> Could you please approve that you don't see any problem in the code?
> 
> Thanks,
> Amir
> 
> 
> On Thu, Oct 26, 2017 at 12:22 PM, Alfredo Cardigliano  <mailto:cardigli...@ntop.org>> wrote:
> Hi Amir
> yes, that’s the way it should work, if this is not the case, some debugging 
> is needed to identify the problem
> 
> Alfredo
> 
>> On 26 Oct 2017, at 10:14, Amir Kaduri > <mailto:akadur...@gmail.com>> wrote:
>> 
>> Basically, the functionality that I would like to have is even if less than 
>> poll-watermark-threshold (default: 128) packets arrives the socket, they 
>> will be forwarded to userland if 1 millisecond has passed.
>> How can I gain this? Isn't it by using  pfring_set_poll_duration()?
>> 
>> Alfredo, could you please clarify?
>> 
>> Thanks,
>> Amir
>> 
>> On Wed, Oct 18, 2017 at 8:48 PM, Amir Kaduri > <mailto:akadur...@gmail.com>> wrote:
>> Hi,
>> 
>> I'm using pf_ring 6.6.0 (no ZC) on CentOS 7, on 10G interfaces (ixgbe 
>> drivers).
>> As far as I understand the relation between poll-watermark and 
>> poll-duration, packets will be queued untill one of comes first: or passing 
>> the poll-watermark packets threshold, or a poll-duration milliseconds has 
>> passed.
>> I set poll-watermark to the maximum (4096) (using 
>> pfring_set_poll_watermark()) and set poll-duration to the minimum (1) (using 
>> pfring_set_poll_duration()).
>> I've sent 400 packets to the socket. I see that they are received by the 
>> NIC, but they were not passed to userland. Only after 500 packets did a 
>> chunk of them pass to userland.
>> I don't quite understand this behavior: since poll-duration is 1 
>> (millisecond, I assume), I expected all the packets to reach userland 
>> immediately, even though poll-watermark is much higher.
>> 
>> Can anyone shed some light on the above?
>> 
>> Thanks,
>> Amir
>> 
>> ___
>> Ntop-misc mailing list
>> Ntop-misc@listgateway.unipi.it <mailto:Ntop-misc@listgateway.unipi.it>
>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc 
>> <http://listgateway.unipi.it/mailman/listinfo/ntop-misc>
> 
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it <mailto:Ntop-misc@listgateway.unipi.it>
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc 
> <http://listgateway.unipi.it/mailman/listinfo/ntop-misc>
> 
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc



___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] i40e 2.2.4 pfring 7.0.0

2017-10-30 Thread Alfredo Cardigliano
Hi Jeremy
I added more verbosity, please use 
https://github.com/ntop/PF_RING/tree/dev/drivers/intel/i40e/i40e-2.2.4-zc/src 
<https://github.com/ntop/PF_RING/tree/dev/drivers/intel/i40e/i40e-2.2.4-zc/src>
and load it with load_driver.sh (this avoids reinstalling new packages on 
every test).
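A plausible sequence for that test cycle (paths follow the dev tree linked 
above; the use of sudo and the output file name are illustrative):

cd PF_RING/drivers/intel/i40e/i40e-2.2.4-zc/src
make
sudo ./load_driver.sh           # unload the old module, load the rebuilt one
dmesg > dmesg-after-crash.txt   # collect kernel output after reproducing the crash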
Please provide the full dmesg output after a crash.

Thank you
Alfredo

> On 30 Oct 2017, at 17:00, Alfredo Cardigliano  wrote:
> 
> Hi Jeremy
> please open an issue on https://github.com/ntop/PF_RING/issues 
> <https://github.com/ntop/PF_RING/issues> to keep track on this,
> please also post a full dmesg output if possible.
> 
> Thank you
> Alfredo
> 
>> On 30 Oct 2017, at 16:55, Jeremy Erb > <mailto:jeremy@netsweeper.com>> wrote:
>> 
>> Upgrading to PF_RING 7.0.0 with the new i40e 2.2.4 causes a kernel panic 
>> for us on a CentOS 6 kernel. The vanilla i40e 2.2.4 driver does not cause 
>> this issue. Here are two kernel dumps. The i40e 1.5.18 driver works with 
>> libpfring 6.7.0.
>> 
>> 
>> 
>> Pid: 10278, comm: modprobe Not tainted 2.6.32-696.13.2.el6.x86_64 #1 Dell 
>> Inc. PowerEdge R910/0JRJM9
>> RIP: 0010:[]  [] 
>> i40e_vsi_map_rings_to_vectors+0xf3/0x280 [i40e_zc]
>> RSP: 0018:880861a53908  EFLAGS: 00010202
>> RAX:  RBX: 88106dedcc00 RCX: 88106dedcc00
>> RDX: 88106de0ed40 RSI: 0010 RDI: 88104e0b2000
>> RBP: 880861a53968 R08: 0003ae77 R09: 
>> R10: 0080 R11: 0001 R12: 88104e0b2000
>> R13: 88106e665f00 R14: 00010102464c457f R15: 0010
>> FS:  7fd3655b4700() GS:88089c44() knlGS:
>> CS:  0010 DS:  ES:  CR0: 8005003b
>> CR2: 7fee9e121000 CR3: 0010425ed000 CR4: 07e0
>> DR0:  DR1:  DR2: 
>> DR3:  DR6: 0ff0 DR7: 0400
>> Process modprobe (pid: 10278, threadinfo 880861a5, task 
>> 8805f8c8f520)
>> Stack:
>>  88104e0b2000 88104e0b2000 881063754000 0020202ca020
>>  00100010 00010010 880861a53968 88104e0b2000
>>  881063754000 881063754000 88104e0b2028 880861a53998
>> Call Trace:
>>  [] i40e_vsi_setup+0x543/0x880 [i40e_zc]
>>  [] ? i40e_aq_set_switch_config+0x9d/0xd0 [i40e_zc]
>>  [] i40e_setup_pf_switch+0x47f/0x5d0 [i40e_zc]
>>  [] i40e_probe+0xd8a/0x17e8 [i40e_zc]
>>  [] ? schedule+0x3ee/0xb70
>>  [] ? idr_get_empty_slot+0x110/0x2c0
>>  [] ? number+0x2ee/0x320
>>  [] ? idr_get_empty_slot+0x110/0x2c0
>>  [] ? find_inode+0x4e/0x90
>>  [] ? sysfs_ilookup_test+0x0/0x20
>>  [] ? iput+0x30/0x70
>>  [] ? sysfs_addrm_finish+0x4e/0x270
>>  [] ? __sysfs_add_one+0x7e/0xc0
>>  [] ? sysfs_add_one+0x2c/0xd0
>>  [] local_pci_probe+0x17/0x20
>>  [] pci_device_probe+0x101/0x120
>>  [] ? driver_sysfs_add+0x62/0x90
>>  [] driver_probe_device+0xaa/0x3a0
>>  [] __driver_attach+0xab/0xb0
>>  [] ? __driver_attach+0x0/0xb0
>>  [] bus_for_each_dev+0x64/0x90
>>  [] driver_attach+0x1e/0x20
>>  [] bus_add_driver+0x1e8/0x2b0
>>  [] driver_register+0x5f/0xe0
>>  [] ? i40e_init_module+0x0/0xa3 [i40e_zc]
>>  [] __pci_register_driver+0x56/0xd0
>>  [] ? debugfs_create_dir+0x1b/0x20
>> udev: renamed network interface eth10 to eth16
>>  [] ? i40e_init_module+0x0/0xa3 [i40e_zc]
>>  [] i40e_init_module+0xa1/0xa3 [i40e_zc]
>>  [] do_one_initcall+0xc0/0x280
>>  [] sys_init_module+0xe1/0x250
>>  [] system_call_fastpath+0x16/0x1b
>> Code: 00 49 89 9d a8 00 00 00 48 8b 83 18 01 00 00 49 89 45 00 66 83 83 30 
>> 01 00 00 01 4d 85 f6 4c 89 ab 18 01 00 00 0f 84 05 01 00 00 <49> 89 9e a8 00 
>> 00 00 48 8b 83 f0 00 00 00 41 83 c7 01 49 89 06
>> RIP  [] i40e_vsi_map_rings_to_vectors+0xf3/0x280 [i40e_zc]
>>  RSP 
>> ---[ end trace c4414a8eb6ab10b9 ]---
>> Kernel panic - not syncing: Fatal exception
>> Pid: 10278, comm: modprobe Tainted: G  D-- 
>> 2.6.32-696.13.2.el6.x86_64 #1
>> Call Trace:
>>  [] ? panic+0xa7/0x179
>>  [] ? oops_end+0xe4/0x100
>>  [] ? die+0x5b/0x90
>>  [] ? do_general_protection+0x152/0x160
>>  [] ? general_protection+0x25/0x30
>>  [] ? i40e_vsi_map_rings_to_vectors+0xf3/0x280 [i40e_zc]
>>  [] ? i40e_vsi_setup+0x543/0x880 [i40e_zc]
>>  [] ? i40e_aq_set_switch_config+0x9d/0xd0 [i40e_zc]
>>  [] ? i40e_setup_pf_switch+0x47f/0x5d0 [i40e_zc]
>>  [] ? i40e_probe+0xd8a/0x17e8 [i40e_zc]
>>  [] ? schedule+0

Re: [Ntop-misc] PF_RING kernel module soft lockup with bro ids

2017-10-30 Thread Alfredo Cardigliano


> On 30 Oct 2017, at 13:12, Bowen Li  wrote:
> 
> Hi Alfredo,
> I am very grateful for your kind reply. I deployed the latest version in a 
> testing environment yesterday; I will send you more details if the problem 
> is reproduced.
> I forgot to say: in order to change the parameters, I compiled the PF_RING 
> kernel module and the libpcap library from source code. Could that be the 
> cause of this problem?

This is discouraged, as it can lead to out-of-sync issues; however, it should 
work as long as the kernel module and library versions are in sync.
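One quick way to check the kernel side of that: the module reports its 
version on the first line of procfs, as in the /proc output quoted later in 
this thread.

head -n 1 /proc/net/pf_ring/info    # e.g. "PF_RING Version : 6.4.1 (...)"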

> Here is log when PF_RING load kernel module:
> 
> pf_ring: module verification failed: signature and/or required key missing - 
> tainting kernel.

This is not causing the issue; it’s just a warning.

Alfredo

> Besides, I wonder whether this problem is an underlying bug in the PF_RING 
> kernel module, or whether there is something wrong with how my application 
> calls it. If it is the latter, I need to check my application logic. Right 
> now I am very confused by this problem.
> Looking forward to your reply. Thank you.
> 
> 2017-10-30 15:53 GMT+08:00 Alfredo Cardigliano  <mailto:cardigli...@ntop.org>>:
> Hi Bowen Li
> any chance you can move to the latest stable release and check whether you 
> are still able to reproduce it?
> 
> Thank you
> Alfredo
> 
>> On 30 Oct 2017, at 08:15, Bowen Li > <mailto:newfire...@gmail.com>> wrote:
>> 
>> Hi all,
>> Recently I have been running a Bro cluster with PF_RING 6.4.1 as the packet 
>> capture framework. However, I found some strange behavior: sometimes when 
>> the cluster restarts, the PF_RING kernel module blocks. The problem seems 
>> to occur when /proc is accessed to acquire a read-write lock, causing a 
>> kernel CPU soft lockup, after which the whole server is stuck. I want to 
>> know whether the deadlock is caused by contention when the cluster 
>> processes acquire the read-write lock at the same time, or by something 
>> else.
>> Here is the log in /var/log/message:
>> 
>> Oct 22 01:26:10 TEST kernel: BUG: soft lockup - CPU#16 stuck for 22s! 
>> [bro:17375]
>> Oct 22 01:26:10 TEST kernel: Modules linked in: binfmt_misc xt_CHECKSUM 
>> iptable_mangle ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_nat_ipv4 
>> nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 xt_conntrack nf_conntrack ipt_REJECT 
>> tun bridge stp llc ebtable_filter ebtables ip6table_filter ip6_tables 
>> iptable_filter intel_powerclamp coretemp intel_rapl kvm_intel iTCO_wdt kvm 
>> mei_me iTCO_vendor_support mxm_wmi mei lpc_ich ipmi_ssif sb_edac 
>> crc32_pclmul ghash_clmulni_intel aesni_intel lrw gf128mul glue_helper 
>> ablk_helper edac_core ipmi_si cryptd ioTESTma ipmi_msghandler shpchp wmi sg 
>> dca i2c_i801 pcspkr mfd_core nfsd auth_rpcgss nfs_acl lockd grace 
>> pf_ring(OE) sunrpc ip_tables ext4 mbcache jbd2 sd_mod crc_t10dif 
>> crct10dif_generic mgag200 syscopyarea sysfillrect sysimgblt i2c_algo_bit 
>> drm_kms_helper ttm crct10dif_pclmul crct10dif_common
>> Oct 22 01:26:10 TEST kernel: crc32c_intel drm isci e1000e libsas ahci 
>> scsi_transport_sas libahci ptp i2c_core libata pps_core ntb
>> Oct 22 01:26:10 TEST kernel: CPU: 16 PID: 17375 Comm: bro Tainted: G 
>>   OE     3.10.0-327.13.1.el7.x86_64 #1
>> Oct 22 01:26:10 TEST kernel: Hardware name: Supermicro 
>> X9DRL-3F/iF/X9DRL-3F/iF, BIOS 3.2 09/22/2015
>> Oct 22 01:26:10 TEST kernel: task: 880fdad73980 ti: 8800287b8000 
>> task.ti: 8800287b8000
>> Oct 22 01:26:10 TEST kernel: RIP: 0010:[]  
>> [] __write_lock_failed+0x9/0x20
>> Oct 22 01:26:10 TEST kernel: RSP: 0018:8800287bbe88  EFLAGS: 0297
>> Oct 22 01:26:10 TEST kernel: RAX:  RBX: 92e79345 
>> RCX: 
>> Oct 22 01:26:10 TEST kernel: RDX:  RSI: 88104aabb000 
>> RDI: a03c6324
>> Oct 22 01:26:10 TEST kernel: RBP: 8800287bbe88 R08: 00017600 
>> R09: 0080
>> Oct 22 01:26:10 TEST kernel: R10:  R11: 001b 
>> R12: 7f21f432e000
>> Oct 22 01:26:10 TEST kernel: R13: 0032 R14: 88a8 
>> R15: 
>> Oct 22 01:26:10 TEST kernel: FS:  7f21f4e56880() 
>> GS:88085fd0() knlGS:
>> Oct 22 01:26:10 TEST kernel: CS:  0010 DS:  ES:  CR0: 
>> 80050033
>> Oct 22 01:26:10 TEST kernel: CR2: 7f21f432eee0 CR3: 000648323000 
>> CR4: 000407e0
>> Oct 22 01:26:10 TEST kernel: DR0:  DR1:  
>> D

Re: [Ntop-misc] i40e 2.2.4 pfring 7.0.0

2017-10-30 Thread Alfredo Cardigliano
Hi Jeremy
please open an issue on https://github.com/ntop/PF_RING/issues 
 to keep track on this,
please also post a full dmesg output if possible.

Thank you
Alfredo

> On 30 Oct 2017, at 16:55, Jeremy Erb  wrote:
> 
> Upgrading to PF_RING 7.0.0 with the new i40e 2.2.4 causes a kernel panic 
> for us on a CentOS 6 kernel. The vanilla i40e 2.2.4 driver does not cause 
> this issue. Here are two kernel dumps. The i40e 1.5.18 driver works with 
> libpfring 6.7.0.
> 
> 
> 
> Pid: 10278, comm: modprobe Not tainted 2.6.32-696.13.2.el6.x86_64 #1 Dell 
> Inc. PowerEdge R910/0JRJM9
> RIP: 0010:[]  [] 
> i40e_vsi_map_rings_to_vectors+0xf3/0x280 [i40e_zc]
> RSP: 0018:880861a53908  EFLAGS: 00010202
> RAX:  RBX: 88106dedcc00 RCX: 88106dedcc00
> RDX: 88106de0ed40 RSI: 0010 RDI: 88104e0b2000
> RBP: 880861a53968 R08: 0003ae77 R09: 
> R10: 0080 R11: 0001 R12: 88104e0b2000
> R13: 88106e665f00 R14: 00010102464c457f R15: 0010
> FS:  7fd3655b4700() GS:88089c44() knlGS:
> CS:  0010 DS:  ES:  CR0: 8005003b
> CR2: 7fee9e121000 CR3: 0010425ed000 CR4: 07e0
> DR0:  DR1:  DR2: 
> DR3:  DR6: 0ff0 DR7: 0400
> Process modprobe (pid: 10278, threadinfo 880861a5, task 
> 8805f8c8f520)
> Stack:
>  88104e0b2000 88104e0b2000 881063754000 0020202ca020
>  00100010 00010010 880861a53968 88104e0b2000
>  881063754000 881063754000 88104e0b2028 880861a53998
> Call Trace:
>  [] i40e_vsi_setup+0x543/0x880 [i40e_zc]
>  [] ? i40e_aq_set_switch_config+0x9d/0xd0 [i40e_zc]
>  [] i40e_setup_pf_switch+0x47f/0x5d0 [i40e_zc]
>  [] i40e_probe+0xd8a/0x17e8 [i40e_zc]
>  [] ? schedule+0x3ee/0xb70
>  [] ? idr_get_empty_slot+0x110/0x2c0
>  [] ? number+0x2ee/0x320
>  [] ? idr_get_empty_slot+0x110/0x2c0
>  [] ? find_inode+0x4e/0x90
>  [] ? sysfs_ilookup_test+0x0/0x20
>  [] ? iput+0x30/0x70
>  [] ? sysfs_addrm_finish+0x4e/0x270
>  [] ? __sysfs_add_one+0x7e/0xc0
>  [] ? sysfs_add_one+0x2c/0xd0
>  [] local_pci_probe+0x17/0x20
>  [] pci_device_probe+0x101/0x120
>  [] ? driver_sysfs_add+0x62/0x90
>  [] driver_probe_device+0xaa/0x3a0
>  [] __driver_attach+0xab/0xb0
>  [] ? __driver_attach+0x0/0xb0
>  [] bus_for_each_dev+0x64/0x90
>  [] driver_attach+0x1e/0x20
>  [] bus_add_driver+0x1e8/0x2b0
>  [] driver_register+0x5f/0xe0
>  [] ? i40e_init_module+0x0/0xa3 [i40e_zc]
>  [] __pci_register_driver+0x56/0xd0
>  [] ? debugfs_create_dir+0x1b/0x20
> udev: renamed network interface eth10 to eth16
>  [] ? i40e_init_module+0x0/0xa3 [i40e_zc]
>  [] i40e_init_module+0xa1/0xa3 [i40e_zc]
>  [] do_one_initcall+0xc0/0x280
>  [] sys_init_module+0xe1/0x250
>  [] system_call_fastpath+0x16/0x1b
> Code: 00 49 89 9d a8 00 00 00 48 8b 83 18 01 00 00 49 89 45 00 66 83 83 30 01 
> 00 00 01 4d 85 f6 4c 89 ab 18 01 00 00 0f 84 05 01 00 00 <49> 89 9e a8 00 00 
> 00 48 8b 83 f0 00 00 00 41 83 c7 01 49 89 06
> RIP  [] i40e_vsi_map_rings_to_vectors+0xf3/0x280 [i40e_zc]
>  RSP 
> ---[ end trace c4414a8eb6ab10b9 ]---
> Kernel panic - not syncing: Fatal exception
> Pid: 10278, comm: modprobe Tainted: G  D-- 
> 2.6.32-696.13.2.el6.x86_64 #1
> Call Trace:
>  [] ? panic+0xa7/0x179
>  [] ? oops_end+0xe4/0x100
>  [] ? die+0x5b/0x90
>  [] ? do_general_protection+0x152/0x160
>  [] ? general_protection+0x25/0x30
>  [] ? i40e_vsi_map_rings_to_vectors+0xf3/0x280 [i40e_zc]
>  [] ? i40e_vsi_setup+0x543/0x880 [i40e_zc]
>  [] ? i40e_aq_set_switch_config+0x9d/0xd0 [i40e_zc]
>  [] ? i40e_setup_pf_switch+0x47f/0x5d0 [i40e_zc]
>  [] ? i40e_probe+0xd8a/0x17e8 [i40e_zc]
>  [] ? schedule+0x3ee/0xb70
>  [] ? idr_get_empty_slot+0x110/0x2c0
>  [] ? number+0x2ee/0x320
>  [] ? idr_get_empty_slot+0x110/0x2c0
>  [] ? find_inode+0x4e/0x90
>  [] ? sysfs_ilookup_test+0x0/0x20
>  [] ? iput+0x30/0x70
>  [] ? sysfs_addrm_finish+0x4e/0x270
>  [] ? __sysfs_add_one+0x7e/0xc0
>  [] ? sysfs_add_one+0x2c/0xd0
>  [] ? local_pci_probe+0x17/0x20
>  [] ? pci_device_probe+0x101/0x120
>  [] ? driver_sysfs_add+0x62/0x90
>  [] ? driver_probe_device+0xaa/0x3a0
>  [] ? __driver_attach+0xab/0xb0
>  [] ? __driver_attach+0x0/0xb0
>  [] ? bus_for_each_dev+0x64/0x90
>  [] ? driver_attach+0x1e/0x20
>  [] ? bus_add_driver+0x1e8/0x2b0
>  [] ? driver_register+0x5f/0xe0
>  [] ? i40e_init_module+0x0/0xa3 [i40e_zc]
>  [] ? __pci_register_driver+0x56/0xd0
>  [] ? debugfs_create_dir+0x1b/0x20
>  [] ? i40e_init_module+0x0/0xa3 [i40e_zc]
>  [] ? i40e_init_module+0xa1/0xa3 [i40e_zc]
>  [] ? do_one_initcall+0xc0/0x280
>  [] ? sys_init_module+0xe1/0x250
>  [] ? system_call_fastpath+0x16/0x1b
> 
> 
> 
> 
> 
> 
> 
> 
> 
> 
> And the second dump.
> 
> 
> 
> 
> 
> 
> BUG: unable to handle kernel NULL pointer dereference at 00a8
> IP: [] i40e_vsi_map_rings_to_vec

Re: [Ntop-misc] PF_RING kernel module soft lockup with bro ids

2017-10-30 Thread Alfredo Cardigliano
Hi Bowen Li
any chance you can move to the latest stable release and check whether you are 
still able to reproduce it?

Thank you
Alfredo

> On 30 Oct 2017, at 08:15, Bowen Li  wrote:
> 
> Hi all,
> Recently I have been running a Bro cluster with PF_RING 6.4.1 as the packet 
> capture framework. However, I found some strange behavior: sometimes when 
> the cluster restarts, the PF_RING kernel module blocks. The problem seems to 
> occur when /proc is accessed to acquire a read-write lock, causing a kernel 
> CPU soft lockup, after which the whole server is stuck. I want to know 
> whether the deadlock is caused by contention when the cluster processes 
> acquire the read-write lock at the same time, or by something else.
> Here is the log in /var/log/message:
> 
> Oct 22 01:26:10 TEST kernel: BUG: soft lockup - CPU#16 stuck for 22s! 
> [bro:17375]
> Oct 22 01:26:10 TEST kernel: Modules linked in: binfmt_misc xt_CHECKSUM 
> iptable_mangle ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_nat_ipv4 
> nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 xt_conntrack nf_conntrack ipt_REJECT 
> tun bridge stp llc ebtable_filter ebtables ip6table_filter ip6_tables 
> iptable_filter intel_powerclamp coretemp intel_rapl kvm_intel iTCO_wdt kvm 
> mei_me iTCO_vendor_support mxm_wmi mei lpc_ich ipmi_ssif sb_edac crc32_pclmul 
> ghash_clmulni_intel aesni_intel lrw gf128mul glue_helper ablk_helper 
> edac_core ipmi_si cryptd ioTESTma ipmi_msghandler shpchp wmi sg dca i2c_i801 
> pcspkr mfd_core nfsd auth_rpcgss nfs_acl lockd grace pf_ring(OE) sunrpc 
> ip_tables ext4 mbcache jbd2 sd_mod crc_t10dif crct10dif_generic mgag200 
> syscopyarea sysfillrect sysimgblt i2c_algo_bit drm_kms_helper ttm 
> crct10dif_pclmul crct10dif_common
> Oct 22 01:26:10 TEST kernel: crc32c_intel drm isci e1000e libsas ahci 
> scsi_transport_sas libahci ptp i2c_core libata pps_core ntb
> Oct 22 01:26:10 TEST kernel: CPU: 16 PID: 17375 Comm: bro Tainted: G  
>  OE     3.10.0-327.13.1.el7.x86_64 #1
> Oct 22 01:26:10 TEST kernel: Hardware name: Supermicro 
> X9DRL-3F/iF/X9DRL-3F/iF, BIOS 3.2 09/22/2015
> Oct 22 01:26:10 TEST kernel: task: 880fdad73980 ti: 8800287b8000 
> task.ti: 8800287b8000
> Oct 22 01:26:10 TEST kernel: RIP: 0010:[]  
> [] __write_lock_failed+0x9/0x20
> Oct 22 01:26:10 TEST kernel: RSP: 0018:8800287bbe88  EFLAGS: 0297
> Oct 22 01:26:10 TEST kernel: RAX:  RBX: 92e79345 RCX: 
> 
> Oct 22 01:26:10 TEST kernel: RDX:  RSI: 88104aabb000 RDI: 
> a03c6324
> Oct 22 01:26:10 TEST kernel: RBP: 8800287bbe88 R08: 00017600 R09: 
> 0080
> Oct 22 01:26:10 TEST kernel: R10:  R11: 001b R12: 
> 7f21f432e000
> Oct 22 01:26:10 TEST kernel: R13: 0032 R14: 88a8 R15: 
> 
> Oct 22 01:26:10 TEST kernel: FS:  7f21f4e56880() 
> GS:88085fd0() knlGS:
> Oct 22 01:26:10 TEST kernel: CS:  0010 DS:  ES:  CR0: 80050033
> Oct 22 01:26:10 TEST kernel: CR2: 7f21f432eee0 CR3: 000648323000 CR4: 
> 000407e0
> Oct 22 01:26:10 TEST kernel: DR0:  DR1:  DR2: 
> 
> Oct 22 01:26:10 TEST kernel: DR3:  DR6: 0ff0 DR7: 
> 0400
> Oct 22 01:26:10 TEST kernel: Stack:
> Oct 22 01:26:10 TEST kernel: 8800287bbe98 8163ce37 
> 8800287bbeb8 a03afe1b
> Oct 22 01:26:10 TEST kernel: 88104aabb000 88101a631800 
> 8800287bbee8 a03b6903
> Oct 22 01:26:10 TEST kernel: 0003 0300 
>  81a25e00
> Oct 22 01:26:10 TEST kernel: Call Trace:
> Oct 22 01:26:10 TEST kernel: [] _raw_write_lock+0x17/0x20
> Oct 22 01:26:10 TEST kernel: [] ring_proc_add+0x1b/0xb0 
> [pf_ring]
> Oct 22 01:26:10 TEST kernel: [] ring_create+0x2e3/0x4a0 
> [pf_ring]
> Oct 22 01:26:10 TEST kernel: [] __sock_create+0x110/0x260
> Oct 22 01:26:10 TEST kernel: [] SyS_socket+0x61/0xf0
> Oct 22 01:26:10 TEST kernel: [] ? page_fault+0x28/0x30
> Oct 22 01:26:10 TEST kernel: [] 
> system_call_fastpath+0x16/0x1b
> Oct 22 01:26:10 TEST kernel: Code: 66 90 48 89 01 31 c0 66 66 90 c3 b8 f2 ff 
> ff ff 66 66 90 c3 90 90 90 90 90 90 90 90 90 90 90 90 90 90 55 48 89 e5 f0 ff 
> 07 f3 90 <83> 3f 01 75 f9 f0 ff 0f 75 f1 5d c3 66 66 2e 0f 1f 84 00 00 00
> 
> Info in /proc:
> 
> PF_RING Version  : 6.4.1 
> ((detached:eded7ce625e60620639050575b1fba0aa0412374)
> Total rings  : 16
> 
> Standard (non ZC) Options
> Ring slots   : 65534
> Slot version : 16
> Capture TX   : No [RX only]
> IP Defragment: No
> Socket Mode  : Standard
> Total plugins: 0
> Cluster Fragment Queue   : 0
> Cluster Fragment Discard : 0
> 
>cpu and memory info:
> 
> Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz * 2
> 
>   totalusedfree  shared  buff/

Re: [Ntop-misc] Multiple ZC Captures

2017-10-29 Thread Alfredo Cardigliano
Hi Terry
I confirm this is not the best use case for zbalance_ipc: in addition to the 
“flexibility” issue (you need to know the number of tcpdump instances in 
advance in order to allocate the queues), what happens in this configuration 
is that zbalance_ipc distributes 10G to every queue, where a BPF filter then 
has to evaluate the whole traffic. I am not sure this would perform much 
faster than standard kernel filtering.
Question: do you have common filters you usually use (e.g. is a src/dst IP 
match enough), or do you use the full power that the BPF syntax provides?

Regards
Alfredo

> On 27 Oct 2017, at 01:44, Terry  wrote:
> 
> Hey Alfredo,
> 
> Thanks. Just Tcpdump -- that's all these servers are used for.
> 
> The traffic can be expected to hit line-rate 10G. We distribute the traffic 
> (VLAN-based port mirroring) to eight 10G ports per server. Some of them are 
> fully utilized during peak hours.
> 
> When Tcpdump is run, it's not matching all traffic on a port. Captures are 
> run for specific things based on what a user is troubleshooting; e.g., 
> traffic between a specific source/destination IP address.
> 
> -Terry
> 
> 
> On Thursday, October 26, 2017, 1:22:07 PM EDT, Alfredo Cardigliano 
>  wrote:
> 
> 
> Hi Terry
> your assumptions are correct, this is not the best use case for zbalance_ipc;
> to help you find the best configuration, I have a few questions:
> 1. what application do you run on top of zbalance_ipc in addition to tcpdump?
> 2. what is the peak traffic rate?
> 
> Alfredo
> 
>> On 26 Oct 2017, at 18:13, Tom J. > <mailto:t0psec...@yahoo.com>> wrote:
>> 
>> Hey Alfredo,
>> 
>> We dug into this a bit and are trying to figure out how to best use zbalance 
>> in our scenario, where lots of people have shell access to a Linux-based 
>> sniffer, each running TCPDUMP on various interfaces to troubleshoot network 
>> issues.
>> 
>> With zbalance running in the background duplicating traffic to some number 
>> of queues, is it correct that at any given time, only one TCPDUMP instance 
>> can be running on a given queue? If so I'm having trouble imagining how this 
>> can work logistically, as it would seem that users would have to try 
>> multiple queues each time until they find one that is free.
>> 
>> Thanks as always.
>> 
>> -Terry
>> 
>> 
>> On Tuesday, October 3, 2017, 6:30:17 AM EDT, Alfredo Cardigliano 
>> mailto:cardigli...@ntop.org>> wrote:
>> 
>> 
>> Hi Terry
>> please find zbalance_ipc+tcpdump examples here:
>> 
>> https://github.com/ntop/PF_RING/blob/dev/userland/examples_zc/README.examples
>>  
>> <https://github.com/ntop/PF_RING/blob/dev/userland/examples_zc/README.examples>
>> 
>> Alfredo
>> 
>>> On 2 Oct 2017, at 22:58, Terry >> <mailto:t0psec...@yahoo.com>> wrote:
>>> 
>>> Hey Alfredo,
>>> 
>>> Thank you. Is there documentation on zbalance beyond what I'm finding via 
>>> Google? I'm not seeing how to use it to create the queues for Tcpdump to 
>>> attach to.
>>> 
>>> -Terry
>>> 
>>> 
>>> On Friday, September 29, 2017 1:05 PM, Alfredo Cardigliano 
>>> mailto:cardigli...@ntop.org>> wrote:
>>> 
>>> 
>>> Hi Terry
>>> dummy interfaces are usually used with Bro because this consumer is well 
>>> known to be unstable (or at least it crashes from time to time for some 
>>> reason)
>>> leaving ZC queues in an inconsistent state, preventing it from reattaching 
>>> to the queue again (in order to reattach a zbalance_ipc restart is 
>>> required).
>>> As long as tcpdump is closed correctly, there should be no problem 
>>> attaching to the queues directly. Please note dummy interfaces are slow as 
>>> traffic
>>> goes through the kernel, and you lose most of the boost provided by ZC.
>>> 
>>> Alfredo
>>> 
>>>> On 29 Sep 2017, at 18:53, Terry >>> <mailto:t0psec...@yahoo.com>> wrote:
>>>> 
>>>> Hey Alfredo,
>>>> 
>>>> Thanks, the zbalance stuff looks encouraging. How would this look in the 
>>>> context of users constantly running/terminating their own instances of 
>>>> tcpdump? I see the "Best practices for using Bro IDS with PF_RING ZC" 
>>>> article, where ZC outputs to dummy interfaces which are then used by the 
>>>> application. Is this how we would do it -- set up one-to-one mappings of 
>>>> ZC Interface -> Dummy Interface, and then h

Re: [Ntop-misc] Multiple ZC Captures

2017-10-26 Thread Alfredo Cardigliano
Hi Terry
your assumptions are correct, this is not the best use case for zbalance_ipc;
to help you find the best configuration, I have a few questions:
1. what application do you run on top of zbalance_ipc in addition to tcpdump?
2. what is the peak traffic rate?

Alfredo

> On 26 Oct 2017, at 18:13, Tom J.  wrote:
> 
> Hey Alfredo,
> 
> We dug into this a bit and are trying to figure out how to best use zbalance 
> in our scenario, where lots of people have shell access to a Linux-based 
> sniffer, each running TCPDUMP on various interfaces to troubleshoot network 
> issues.
> 
> With zbalance running in the background duplicating traffic to some number of 
> queues, is it correct that at any given time, only one TCPDUMP instance can 
> be running on a given queue? If so I'm having trouble imagining how this can 
> work logistically, as it would seem that users would have to try multiple 
> queues each time until they find one that is free.
> 
> Thanks as always.
> 
> -Terry
> 
> 
> On Tuesday, October 3, 2017, 6:30:17 AM EDT, Alfredo Cardigliano 
>  wrote:
> 
> 
> Hi Terry
> please find zbalance_ipc+tcpdump examples here:
> 
> https://github.com/ntop/PF_RING/blob/dev/userland/examples_zc/README.examples 
> <https://github.com/ntop/PF_RING/blob/dev/userland/examples_zc/README.examples>
> 
> Alfredo
> 
>> On 2 Oct 2017, at 22:58, Terry > <mailto:t0psec...@yahoo.com>> wrote:
>> 
>> Hey Alfredo,
>> 
>> Thank you. Is there documentation on zbalance beyond what I'm finding via 
>> Google? I'm not seeing how to use it to create the queues for Tcpdump to 
>> attach to.
>> 
>> -Terry
>> 
>> 
>> On Friday, September 29, 2017 1:05 PM, Alfredo Cardigliano 
>> mailto:cardigli...@ntop.org>> wrote:
>> 
>> 
>> Hi Terry
>> dummy interfaces are usually used with Bro because this consumer is well 
>> known to be unstable (or at least it crashes from time to time for some 
>> reason)
>> leaving ZC queues in an inconsistent state, preventing it from reattaching 
>> to the queue again (in order to reattach a zbalance_ipc restart is required).
>> As long as tcpdump is closed correctly, there should be no problem attaching 
>> to the queues directly. Please note dummy interfaces are slow as traffic
>> goes through the kernel, and you lose most of the boost provided by ZC.
>> 
>> Alfredo
>> 
>>> On 29 Sep 2017, at 18:53, Terry >> <mailto:t0psec...@yahoo.com>> wrote:
>>> 
>>> Hey Alfredo,
>>> 
>>> Thanks, the zbalance stuff looks encouraging. How would this look in the 
>>> context of users constantly running/terminating their own instances of 
>>> tcpdump? I see the "Best practices for using Bro IDS with PF_RING ZC" 
>>> article, where ZC outputs to dummy interfaces which are then used by the 
>>> application. Is this how we would do it -- set up one-to-one mappings of ZC 
>>> Interface -> Dummy Interface, and then have users use the dummy interfaces 
>>> with tcpdump rather than the ZC interfaces directly?
>>> 
>>> -Terry
>>> 
>>> 
>>> On Friday, September 29, 2017 5:12 AM, Alfredo Cardigliano 
>>> mailto:cardigli...@ntop.org>> wrote:
>>> 
>>> 
>>> Hi Terry
>>> ZC is a kernel-bypass technology, this means that the application takes 
>>> full control over the NIC
>>> in order to access the card memory in zero-copy and maximise the 
>>> performance. This implies that
>>> one process at a time can open an interface, thus what you are seeing with 
>>> tcpdump is expected.
>>> This said, there is a way to overcome this: you can use zbalance_ipc to 
>>> open the zc interface, and
>>> let it distribute the traffic (fanout) to multiple applications by means of 
>>> zc queues. Please note this
>>> adds some overhead with respect to opening the zc interface directly from 
>>> the application, however
>>> you should not notice the difference as tcpdump itself is a bottleneck.
>>> 
>>> Alfredo
>>> 
>>>> On 29 Sep 2017, at 00:29, Tom J. >>> <mailto:t0psec...@yahoo.com>> wrote:
>>>> 
>>>> Hello,
>>>> 
>>>> We're exploring using PF-RING ZC for our packet sniffers, but are looking 
>>>> to get clarity on an issue before purchasing licenses.
>>>> 
>>>> The sniffers are standard servers running Linux, each with (16) 10G NIC 
>>>> ports connected 

Re: [Ntop-misc] pf_ring: Clarification regarding the relation between poll-watermark and poll-duration

2017-10-26 Thread Alfredo Cardigliano
Hi Amir
yes, that’s the way it should work; if this is not the case, some debugging is 
needed to identify the problem
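For reference, a minimal sketch of the two calls under discussion (interface 
name, snaplen and error handling are illustrative):

#include <pfring.h>

int main(void) {
  pfring *ring = pfring_open("eth1", 1500 /* snaplen */, PF_RING_PROMISC);
  if (ring == NULL) return 1;

  pfring_set_poll_watermark(ring, 4096); /* batch up to 4096 packets ... */
  pfring_set_poll_duration(ring, 1);     /* ... but wait at most 1 ms in poll */

  pfring_enable_ring(ring);
  /* ... capture loop using pfring_recv() ... */
  pfring_close(ring);
  return 0;
}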

Alfredo

> On 26 Oct 2017, at 10:14, Amir Kaduri  wrote:
> 
> Basically, the functionality that I would like to have is: even if fewer 
> than poll-watermark-threshold (default: 128) packets arrive at the socket, 
> they will be forwarded to userland once 1 millisecond has passed.
> How can I achieve this? Isn't it by using pfring_set_poll_duration()?
> 
> Alfredo, could you please clarify?
> 
> Thanks,
> Amir
> 
> On Wed, Oct 18, 2017 at 8:48 PM, Amir Kaduri  > wrote:
> Hi,
> 
> I'm using pf_ring 6.6.0 (no ZC) on CentOS 7, on 10G interfaces (ixgbe 
> drivers).
> As far as I understand the relation between poll-watermark and 
> poll-duration, packets are queued until whichever comes first: the 
> poll-watermark packet threshold is crossed, or poll-duration milliseconds 
> have passed.
> I set poll-watermark to the maximum (4096) (using 
> pfring_set_poll_watermark()) and set poll-duration to the minimum (1) (using 
> pfring_set_poll_duration()).
> I've sent 400 packets to the socket. I see that they are received by the 
> NIC, but they were not passed to userland. Only after 500 packets did a 
> chunk of them pass to userland.
> I don't quite understand this behavior: since poll-duration is 1 
> (millisecond, I assume), I expected all the packets to reach userland 
> immediately, even though poll-watermark is much higher.
> 
> Can anyone shed some light on the above?
> 
> Thanks,
> Amir
> 
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc



___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] nscrub

2017-10-24 Thread Alfredo Cardigliano
Hi Spiros
I will send you the documentation.

Alfredo

> On 24 Oct 2017, at 14:16, Spiros Papageorgiou  wrote:
> 
> Hi all,
> 
> I have installed nscrub and I'm trying to configure it. I haven't found a 
> configuration guide on the ntop site. Is there a guide somewhere for nscrub?
> 
> I would like to set up nscrub in routing and asymmetric mode (input traffic 
> will be redirected through nscrub via BGP). I would also like to ask whether 
> I can use the same zc interface for both WAN and LAN, e.g. nscrub -i 
> zc:ens160 -o zc:ens160 -A -x. Of course I should be able to configure a VLAN 
> somewhere to separate the in and out networks.
> 
> Thanx,
> 
> Sp
> 
> 
> 
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc



___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] pf_ring: bug fix (rule inactivity)

2017-10-19 Thread Alfredo Cardigliano
Hi Amir
here you go:

https://github.com/ntop/PF_RING/commit/8d04b242b48902600393e12209a758c4a84ec825

Please send a pull request for the next patch.

Thank you
Alfredo
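For context, a hedged usage sketch of the purge API the patch targets 
(signature as in pfring.h 6.6.x; the 60-second threshold is illustrative):

#include <pfring.h>

/* Drop hash filtering rules that have been inactive for at least 60
 * seconds; "ring" is assumed to be an open, enabled pfring socket. */
static void purge_idle_rules(pfring *ring) {
  pfring_purge_idle_hash_rules(ring, 60 /* inactivity threshold, seconds */);
}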

> On 18 Oct 2017, at 19:38, Amir Kaduri  wrote:
> 
> Hi Alfredo,
> 
> The attached patch includes a fix for a bug related to updating the last 
> time a hash rule was active. I assume this is why the 
> pfring_purge_idle_hash_rules() API doesn't work.
> I also added an enhancement to the pfring_get_hash_filtering_rule_stats() 
> API to return how long a hash filtering rule has been inactive.
> The patch is based on pfring 6.6.0.
> 
> Since the fix and the enhancement are pretty short, I put them together in 
> the same patch.
> 
> I'd appreciate feedback on this fix (I already sent it over a month ago).
> 
> Thanks,
> Amir
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc



___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] Multiple ZC Captures

2017-10-03 Thread Alfredo Cardigliano
Hi Terry
please find zbalance_ipc+tcpdump examples here:

https://github.com/ntop/PF_RING/blob/dev/userland/examples_zc/README.examples 
<https://github.com/ntop/PF_RING/blob/dev/userland/examples_zc/README.examples>

Alfredo

> On 2 Oct 2017, at 22:58, Terry  wrote:
> 
> Hey Alfredo,
> 
> Thank you. Is there documentation on zbalance beyond what I'm finding via 
> Google? I'm not seeing how to use it to create the queues for Tcpdump to 
> attach to.
> 
> -Terry
> 
> 
> On Friday, September 29, 2017 1:05 PM, Alfredo Cardigliano 
>  wrote:
> 
> 
> Hi Terry
> dummy interfaces are usually used with Bro because this consumer is well 
> known to be unstable (or at least it crashes from time to time for some 
> reason)
> leaving ZC queues in an inconsistent state, preventing it from reattaching to 
> the queue again (in order to reattach a zbalance_ipc restart is required).
> As long as tcpdump is closed correctly, there should be no problem attaching 
> to the queues directly. Please note dummy interfaces are slow as traffic
> goes through the kernel, and you lose most of the boost provided by ZC.
> 
> Alfredo
> 
>> On 29 Sep 2017, at 18:53, Terry > <mailto:t0psec...@yahoo.com>> wrote:
>> 
>> Hey Alfredo,
>> 
>> Thanks, the zbalance stuff looks encouraging. How would this look in the 
>> context of users constantly running/terminating their own instances of 
>> tcpdump? I see the "Best practices for using Bro IDS with PF_RING ZC" 
>> article, where ZC outputs to dummy interfaces which are then used by the 
>> application. Is this how we would do it -- set up one-to-one mappings of ZC 
>> Interface -> Dummy Interface, and then have users use the dummy interfaces 
>> with tcpdump rather than the ZC interfaces directly?
>> 
>> -Terry
>> 
>> 
>> On Friday, September 29, 2017 5:12 AM, Alfredo Cardigliano 
>> mailto:cardigli...@ntop.org>> wrote:
>> 
>> 
>> Hi Terry
>> ZC is a kernel-bypass technology, this means that the application takes full 
>> control over the NIC
>> in order to access the card memory in zero-copy and maximise the 
>> performance. This implies that
>> one process at a time can open an interface, thus what you are seeing with 
>> tcpdump is expected.
>> This said, there is a way to overcome this: you can use zbalance_ipc to open 
>> the zc interface, and
>> let it distribute the traffic (fanout) to multiple applications by means of 
>> zc queues. Please note this
>> adds some overhead with respect to opening the zc interface directly from 
>> the application, however
>> you should not notice the difference as tcpdump itself is a bottleneck.
>> 
>> Alfredo
>> 
>>> On 29 Sep 2017, at 00:29, Tom J. >> <mailto:t0psec...@yahoo.com>> wrote:
>>> 
>>> Hello,
>>> 
>>> We're exploring using PF-RING ZC for our packet sniffers, but are looking 
>>> to get clarity on an issue before purchasing licenses.
>>> 
>>> The sniffers are standard servers running Linux, each with (16) 10G NIC 
>>> ports connected to SPAN ports on switches. Users log into the system and 
>>> run TCPDUMP to troubleshoot day-to-day connectivity issues in the 
>>> environment.
>>> 
>>> As traffic levels have increased we're seeing more and more drops on the 
>>> NICs, so the thought was to implement ZC to make things better. But it 
>>> looks like ZC may limit us to one capture per NIC at any given time. Is 
>>> this correct? I see text on the product page about not being able to do 
>>> standard networking activities on a given NIC when ZC is actively running, 
>>> but how about multiple ZC-enabled TCPDUMPs at once? It doesn't seem to work 
>>> for us (getting a "No such device" error), but maybe it's something we're 
>>> doing wrong.
>>> 
>>> Would appreciate any guidance.
>>> 
>>> -Terry
>>> 
>>> ___
>>> Ntop-misc mailing list
>>> Ntop-misc@listgateway.unipi.it <mailto:Ntop-misc@listgateway.unipi.it>
>>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc 
>>> <http://listgateway.unipi.it/mailman/listinfo/ntop-misc>
>> 
>> 
> 
> 
> 



___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] Multiple ZC Captures

2017-09-29 Thread Alfredo Cardigliano
Hi Terry
dummy interfaces are usually used with Bro because this consumer is well known 
to be unstable (or at least it crashes from time to time for some reason), 
leaving ZC queues in an inconsistent state and preventing it from reattaching 
to the queue again (a zbalance_ipc restart is required in order to reattach).
As long as tcpdump is closed correctly, there should be no problem attaching 
to the queues directly. Please note dummy interfaces are slow, as traffic goes 
through the kernel and you lose most of the boost provided by ZC.

Alfredo

> On 29 Sep 2017, at 18:53, Terry  wrote:
> 
> Hey Alfredo,
> 
> Thanks, the zbalance stuff looks encouraging. How would this look in the 
> context of users constantly running/terminating their own instances of 
> tcpdump? I see the "Best practices for using Bro IDS with PF_RING ZC" 
> article, where ZC outputs to dummy interfaces which are then used by the 
> application. Is this how we would do it -- set up one-to-one mappings of ZC 
> Interface -> Dummy Interface, and then have users use the dummy interfaces 
> with tcpdump rather than the ZC interfaces directly?
> 
> -Terry
> 
> 
> On Friday, September 29, 2017 5:12 AM, Alfredo Cardigliano 
>  wrote:
> 
> 
> Hi Terry
> ZC is a kernel-bypass technology: the application takes full control over 
> the NIC in order to access the card memory in zero-copy and maximise 
> performance. This implies that only one process at a time can open an 
> interface, so what you are seeing with tcpdump is expected.
> This said, there is a way to overcome this: you can use zbalance_ipc to open 
> the zc interface and let it distribute the traffic (fan-out) to multiple 
> applications by means of zc queues. Please note this adds some overhead with 
> respect to opening the zc interface directly from the application; however, 
> you should not notice the difference, as tcpdump itself is a bottleneck.
> 
> Alfredo
> 
>> On 29 Sep 2017, at 00:29, Tom J. > <mailto:t0psec...@yahoo.com>> wrote:
>> 
>> Hello,
>> 
>> We're exploring using PF-RING ZC for our packet sniffers, but are looking to 
>> get clarity on an issue before purchasing licenses.
>> 
>> The sniffers are standard servers running Linux, each with (16) 10G NIC 
>> ports connected to SPAN ports on switches. Users log into the system and run 
>> TCPDUMP to troubleshoot day-to-day connectivity issues in the environment.
>> 
>> As traffic levels have increased we're seeing more and more drops on the 
>> NICs, so the thought was to implement ZC to make things better. But it looks 
>> like ZC may limit us to one capture per NIC at any given time. Is this 
>> correct? I see text on the product page about not being able to do standard 
>> networking activities on a given NIC when ZC is actively running, but how 
>> about multiple ZC-enabled TCPDUMPs at once? It doesn't seem to work for us 
>> (getting a "No such device" error), but maybe it's something we're doing 
>> wrong.
>> 
>> Would appreciate any guidance.
>> 
>> -Terry
>> 
>> ___
>> Ntop-misc mailing list
>> Ntop-misc@listgateway.unipi.it <mailto:Ntop-misc@listgateway.unipi.it>
>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc
> 
> 
> 



___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] Multiple ZC Captures

2017-09-29 Thread Alfredo Cardigliano
Hi Terry
ZC is a kernel-bypass technology: the application takes full control over the 
NIC in order to access the card memory in zero-copy and maximise performance. 
This implies that only one process at a time can open an interface, so what 
you are seeing with tcpdump is expected.
This said, there is a way to overcome this: you can use zbalance_ipc to open 
the zc interface and let it distribute the traffic (fan-out) to multiple 
applications by means of zc queues. Please note this adds some overhead with 
respect to opening the zc interface directly from the application; however, 
you should not notice the difference, as tcpdump itself is a bottleneck.
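A hedged command-line sketch of that setup (flags as documented in the 
PF_RING README.examples referenced elsewhere in this archive; the cluster id, 
queue count and filter are arbitrary):

# One zbalance_ipc instance owns the ZC interface and fans traffic out
# to 4 queues on cluster id 99 (-m 2 is assumed to be the fan-out mode
# that duplicates traffic to every queue; check README.examples):
zbalance_ipc -i zc:eth1 -c 99 -n 4 -m 2 -g 1

# Consumers then attach to the per-queue virtual interfaces (requires
# the PF_RING-aware libpcap/tcpdump build):
tcpdump -ni zc:99@0 host 10.0.0.1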

Alfredo

> On 29 Sep 2017, at 00:29, Tom J.  wrote:
> 
> Hello,
> 
> We're exploring using PF-RING ZC for our packet sniffers, but are looking to 
> get clarity on an issue before purchasing licenses.
> 
> The sniffers are standard servers running Linux, each with (16) 10G NIC ports 
> connected to SPAN ports on switches. Users log into the system and run 
> TCPDUMP to troubleshoot day-to-day connectivity issues in the environment.
> 
> As traffic levels have increased we're seeing more and more drops on the 
> NICs, so the thought was to implement ZC to make things better. But it looks 
> like ZC may limit us to one capture per NIC at any given time. Is this 
> correct? I see text on the product page about not being able to do standard 
> networking activities on a given NIC when ZC is actively running, but how 
> about multiple ZC-enabled TCPDUMPs at once? It doesn't seem to work for us 
> (getting a "No such device" error), but maybe it's something we're doing 
> wrong.
> 
> Would appreciate any guidance.
> 
> -Terry
> 
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc



___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc

Re: [Ntop-misc] pfring.h

2017-09-21 Thread Alfredo Cardigliano
Hi James
that’s definitely the problem; this is on one system in my lab:

$ ls -al /usr/local/include/pfring.h
-rw-rw-r-- 1 root root 52817 Sep 18 19:02 /usr/local/include/pfring.h
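If the headers on the affected system simply lack read permission, a fix 
along these lines should be enough (file list taken from the earlier 
message):

sudo chmod 644 /usr/local/include/pfring.h \
               /usr/local/include/pfring_mod_sysdig.h \
               /usr/local/include/pfring_zc.h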

Alfredo

> On 21 Sep 2017, at 10:53, James  wrote:
> 
> I'm still a Linux novice, so I've only just discovered config.log, which 
> suggests the problem is permission-related:
> 
> conftest.c:57:20: error: /usr/local/include/pfring.h: Permission denied
> 
> Current permissions on these files:
> -rwxr-x---   1 root root 52817 Sep 13 14:37 pfring.h
> -rwxr-x---   1 root root 12326 Sep 13 14:37 pfring_mod_sysdig.h
> -rwxr-x---   1 root root 28511 Sep 13 14:37 pfring_zc.h
> 
> What should they be please? I'm running ./configure as a non-root user (as I 
> understand is best practice). I tried granting read to all on pfring.h but 
> that didn't help.
> 
> There's also a pf_ring.h in /usr/include/linux with these permissions:
> -rwxr-x---   1 root root  39009 Sep 13 13:46 pf_ring.h
> 
> Thanks
> James
> 
> 
> On 19 September 2017 at 11:02, James  <mailto:ntop-m...@cyclohexane.net>> wrote:
> Hi,
> 
> Yes I am, though I'm running this from the pfring-daq-module-zc directory and 
> you're in pfring-daq-module, is that relevant? I do want to use ZC, but the 
> drivers are not installed yet (that was my next task after the pfring DAQ).
> 
> I've even tried putting a symlink at /usr/include/pfring.h pointing to 
> /usr/local/include/pfring.h - no help.
> 
> Thanks
> James
> 
> On 18 September 2017 at 18:08, Alfredo Cardigliano  <mailto:cardigli...@ntop.org>> wrote:
> This is strange: if you have pfring installed in the standard path, it 
> should work even without specifying the path. This is on a machine in our 
> lab:
> 
> $ pwd
> /home/nbox/PF_RING-dev/userland/snort/pfring-daq-module
> 
> $ autoreconf -ivf
> 
> $ ./configure
> 
> $ make
> 
> $ ldd .libs/daq_pfring.so
>   linux-vdso.so.1 =>  (0x7ffce8f5f000)
>   libpfring.so => /usr/local/lib/libpfring.so (0x7f65d75be000)
>   libhiredis.so.0.13 => /usr/lib/x86_64-linux-gnu/libhiredis.so.0.13 
> (0x7f65d73b1000)
>   libsfbpf.so.0 => /usr/lib/libsfbpf.so.0 (0x7f65d718a000)
>   libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7f65d6dc)
>   libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 
> (0x7f65d6ba3000)
>   librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x7f65d699b000)
>   libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x7f65d6797000)
>   /lib64/ld-linux-x86-64.so.2 (0x7f65d7a45000)
> 
> Are you still getting "configure: error: Could not find pfring.h!”?
> 
> Alfredo
> 
>> On 18 Sep 2017, at 13:11, James > <mailto:ntop-m...@cyclohexane.net>> wrote:
>> 
>> Hi,
>> 
>> This command still fails to find the pfring.h file:
>> ./configure --with-libpfring-includes=/usr/local/include 
>> --with-pfring-kernel-includes=/usr/local/include 
>> --with-libpfring-libraries=/usr/local/lib
>> 
>> 
>> On 18 September 2017 at 11:03, Alfredo Cardigliano > <mailto:cardigli...@ntop.org>> wrote:
>> Please specify all of them together with the proper paths (lib and include)
>> 
>> Alfredo
>> 
>>> On 18 Sep 2017, at 10:56, James >> <mailto:ntop-m...@cyclohexane.net>> wrote:
>>> 
>>> Hi Alfredo,
>>> 
>>> Thanks for helping me. I've tried all three of those but still get the same 
>>> error:
>>> ./configure --with-libpfring-includes=/usr/local/include
>>> ./configure --with-pfring-kernel-includes=/usr/local/include
>>> ./configure --with-libpfring-libraries=/usr/local/include
>>> 
>>> On 18 September 2017 at 09:19, Alfredo Cardigliano >> <mailto:cardigli...@ntop.org>> wrote:
>>> Hi James
>>> the configure script currently checks for ${HOME}/PF_RING/ or installed 
>>> libraries specified with:
>>> 
>>>  --with-libpfring-includes=
>>>  --with-pfring-kernel-includes=
>>>  --with-libpfring-libraries=
>>> 
>>> Regards
>>> Alfredo
>>> 
>>> > On 15 Sep 2017, at 11:19, James >> > <mailto:ntop-m...@cyclohexane.net>> wrote:
>>> >
>>> > Hi,
>>> >
>>> > I'm trying to install the pfring DAQ and when I run configure, am getting 
>>> > the error:
>>> >
>>> > checking pfring.h usability... no
>>> > checking pfring.h presence... no
>>> > checking for pfring.h...

Re: [Ntop-misc] pfring.h

2017-09-18 Thread Alfredo Cardigliano
This is strange: if you have pfring installed in the standard path, it should 
work even without specifying the path. This is on a machine in our lab:

$ pwd
/home/nbox/PF_RING-dev/userland/snort/pfring-daq-module

$ autoreconf -ivf

$ ./configure

$ make

$ ldd .libs/daq_pfring.so
linux-vdso.so.1 =>  (0x7ffce8f5f000)
libpfring.so => /usr/local/lib/libpfring.so (0x7f65d75be000)
libhiredis.so.0.13 => /usr/lib/x86_64-linux-gnu/libhiredis.so.0.13 
(0x7f65d73b1000)
libsfbpf.so.0 => /usr/lib/libsfbpf.so.0 (0x7f65d718a000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7f65d6dc)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 
(0x7f65d6ba3000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x7f65d699b000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x7f65d6797000)
/lib64/ld-linux-x86-64.so.2 (0x7f65d7a45000)

Are you still getting "configure: error: Could not find pfring.h!”?

Alfredo

> On 18 Sep 2017, at 13:11, James  wrote:
> 
> Hi,
> 
> This command still fails to find the pfring.h file:
> ./configure --with-libpfring-includes=/usr/local/include 
> --with-pfring-kernel-includes=/usr/local/include 
> --with-libpfring-libraries=/usr/local/lib
> 
> 
> On 18 September 2017 at 11:03, Alfredo Cardigliano  <mailto:cardigli...@ntop.org>> wrote:
> Please specify all of them together with the proper paths (lib and include)
> 
> Alfredo
> 
>> On 18 Sep 2017, at 10:56, James > <mailto:ntop-m...@cyclohexane.net>> wrote:
>> 
>> Hi Alfredo,
>> 
>> Thanks for helping me. I've tried all three of those but still get the same 
>> error:
>> ./configure --with-libpfring-includes=/usr/local/include
>> ./configure --with-pfring-kernel-includes=/usr/local/include
>> ./configure --with-libpfring-libraries=/usr/local/include
>> 
>> On 18 September 2017 at 09:19, Alfredo Cardigliano > <mailto:cardigli...@ntop.org>> wrote:
>> Hi James
>> the configure script currently checks for ${HOME}/PF_RING/ or installed 
>> libraries specified with:
>> 
>>  --with-libpfring-includes=
>>  --with-pfring-kernel-includes=
>>  --with-libpfring-libraries=
>> 
>> Regards
>> Alfredo
>> 
>> > On 15 Sep 2017, at 11:19, James > > <mailto:ntop-m...@cyclohexane.net>> wrote:
>> >
>> > Hi,
>> >
>> > I'm trying to install the pfring DAQ and when I run configure, am getting 
>> > the error:
>> >
>> > checking pfring.h usability... no
>> > checking pfring.h presence... no
>> > checking for pfring.h... no
>> > configure: error: Could not find pfring.h!
>> >
>> > I have installed /kernel and /userland/lib and the file exists here:
>> >
>> > /usr/local/src/PF_RING-dev/userland/lib/pfring.h
>> > /usr/local/include/pfring.h
>> >
>> > Thanks
>> > James
>> > ___
>> > Ntop-misc mailing list
>> > Ntop-misc@listgateway.unipi.it <mailto:Ntop-misc@listgateway.unipi.it>
>> > http://listgateway.unipi.it/mailman/listinfo/ntop-misc 
>> > <http://listgateway.unipi.it/mailman/listinfo/ntop-misc>
>> 
>> 
>> ___
>> Ntop-misc mailing list
>> Ntop-misc@listgateway.unipi.it <mailto:Ntop-misc@listgateway.unipi.it>
>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc 
>> <http://listgateway.unipi.it/mailman/listinfo/ntop-misc>
>> 
>> ___
>> Ntop-misc mailing list
>> Ntop-misc@listgateway.unipi.it <mailto:Ntop-misc@listgateway.unipi.it>
>> http://listgateway.unipi.it/mailman/listinfo/ntop-misc 
>> <http://listgateway.unipi.it/mailman/listinfo/ntop-misc>
> 
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it <mailto:Ntop-misc@listgateway.unipi.it>
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc 
> <http://listgateway.unipi.it/mailman/listinfo/ntop-misc>
> 
> ___
> Ntop-misc mailing list
> Ntop-misc@listgateway.unipi.it
> http://listgateway.unipi.it/mailman/listinfo/ntop-misc



___
Ntop-misc mailing list
Ntop-misc@listgateway.unipi.it
http://listgateway.unipi.it/mailman/listinfo/ntop-misc
