Scott,
I was using the following script when I was playing with packet-bricks last
December:
utilObj = dofile("scripts/utils.lua")
utilObj:enable_nmpipes()
pe = PktEngine.new("e0")
lb = Brick.new("LoadBalancer", 2)
lb:connect_input("ix0")
lb:connect_output("ix0{1", "ix0{2", "ix0{3", "ix0{4", "ix0{5", "ix0{6",
"ix0{7", "ix0{8", "ix0{9", "ix0{10", "ix0{11", "ix0{12", "ix0{13", "ix0{14",
"ix0{15", "ix0{16", "ix0{17", "ix0{18", "ix0{19", "ix0{20")
pe:link(lb)
pe:start()
pe:show_stats()
This was on FreeBSD.
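In case it helps with the interface names above: in netmap's pipe syntax the
'{' and '}' characters select the two ends of a pipe, and bricks holds the
'{' (master) side, leaving the matching '}' name for a consumer. A small
sketch of deriving the consumer name (the pkt-gen invocation is just
illustrative):

```shell
# netmap pipe naming: "{" marks the master end (held by bricks), "}" the
# slave end left free for readers. Given one of the master names from the
# script above, derive the matching consumer name:
master='netmap:ix0{1'
slave=$(printf '%s' "$master" | tr '{' '}')
echo "$slave"
# a reader could then attach with:  ./pkt-gen -i "$slave" -f rx
```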
> When I run the "./pkt-gen -i netmap:eth6}0 -f rx" command, I am seeing
> only a tiny fraction of the expected traffic:
Uncharted territory for me. When I last played with this, I was comparing the
bro logs generated via packet-bricks against those from other clusters, and
bricks vs. the others seemed reasonably comparable.
> What can I do differently to get better performance?
Others can elaborate more since I haven't tested this at all, but I hear
netmap support in Linux is quite stable now - that might be something to try.
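One more thing worth double-checking (a guess on my part, based on netmap's
pipe naming where '{' is the end bricks holds and '}' is the end left for
consumers): your pkt-gen line already reads the '}' side, but the tcpdump
attempt used "netmap:eth6{0", which is the master end bricks has open. With
the netmap-libpcap build of tcpdump, the slave end might work instead:

```shell
# bricks owns the "{" (master) end of each pipe, so a second open of
# netmap:eth6{0 is expected to fail; try the "}" (slave) end instead:
./tcpdump -n -i 'netmap:eth6}0'
```

Native tcpdump (linked against a stock libpcap) won't recognize netmap:
names at all, so make sure the binary you run is the one linked against
netmap-libpcap.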
Aashish
On Mon, Nov 28, 2016 at 11:11:08AM -0500, Scott Campbell wrote:
> I have been investigating the use of Packet Bricks/netmap as a
> replacement for pf_ring on linux, but have a few questions.
>
> (1) Is there any documentation except for the git page and the scripts
> themselves? The script comments are nice and useful, but at times the
> syntax is rather opaque.
>
> (2) Following directions and mailing list recommendations, I have a
> working version which reads from a heavily loaded 10G ixgbe interface
> and splits the traffic into 4 netmap interfaces. The script looks like:
>
> utilObj:enable_nmpipes()
> pe = PktEngine.new("e0", 1024, 8)
> lb = Brick.new("LoadBalancer", 2)
> lb:connect_input("eth6")
> lb:connect_output("eth6{0", "eth6{1", "eth6{2", "eth6{3")
> pe:link(lb)
> pe:start()
>
> where eth6 is the data source interface.
>
> Script output looks like:
> > [root@xdev-w4 PB_INSTALL]# sbin/bricks -f
> > etc/bricks-scripts/startup-one-thread.lua
> > [ pmain(): line 466] Executing
> > etc/bricks-scripts/startup-one-thread.lua
> > [print_version(): line 348] BRICKS Version 0.5-beta
> > bricks> utilObj:enable_nmpipes()
> > bricks> pe = PktEngine.new("e0", 1024, 8)
> > bricks> lb = Brick.new("LoadBalancer", 2)
> > bricks> lb:connect_input("eth6")
> > bricks> lb:connect_output("eth6{0", "eth6{1", "eth6{2", "eth6{3")
> > bricks> pe:link(lb)
> > [ lb_init(): line 66] Adding brick eth6{0 to the engine
> > [ promisc(): line 96] Interface eth6 is already set to promiscuous mode
> > 970.328612 nm_open [444] overriding ARG3 0
> > 970.328631 nm_open [457] overriding ifname eth6 ringid 0x0 flags 0x1
> > [netmap_link_iface(): line 183] Wait for 2 secs for phy reset
> > [brick_link(): line 113] Linking e0 with link eth6 with batch size: 512
> > and qid: -1
> > [netmap_create_channel(): line 746] brick: 0xfac090, local_desc: 0xfac780
> > 972.343050 nm_open [444] overriding ARG3 0
> > [netmap_create_channel(): line 781] zerocopy for eth6 --> eth6{0 (index:
> > 0) enabled
> > [netmap_create_channel(): line 786] Created netmap:eth6{0 interface
> > [netmap_create_channel(): line 746] brick: 0xfac090, local_desc: 0xfac780
> > 972.343600 nm_open [444] overriding ARG3 0
> > [netmap_create_channel(): line 781] zerocopy for eth6 --> eth6{1 (index:
> > 1) enabled
> > [netmap_create_channel(): line 786] Created netmap:eth6{1 interface
> > [netmap_create_channel(): line 746] brick: 0xfac090, local_desc: 0xfac780
> > 972.344200 nm_open [444] overriding ARG3 0
> > [netmap_create_channel(): line 781] zerocopy for eth6 --> eth6{2 (index:
> > 2) enabled
> > [netmap_create_channel(): line 786] Created netmap:eth6{2 interface
> > [netmap_create_channel(): line 746] brick: 0xfac090, local_desc: 0xfac780
> > 972.344696 nm_open [444] overriding ARG3 0
> > [netmap_create_channel(): line 781] zerocopy for eth6 --> eth6{3 (index:
> > 3) enabled
> > [netmap_create_channel(): line 786] Created netmap:eth6{3 interface
> > bricks> pe:start()
>
> and the related dmesg data is:
>
> > dmesg:
> > ixgbe 0000:81:00.0: eth6: detected SFP+: 5
> > ixgbe 0000:81:00.0: eth6: NIC Link is Up 10 Gbps, Flow Control: RX/TX
> > 494.566450 [ 131] ixgbe_netmap_configure_srrctl bufsz: 2048 srrctl: 2
> > ixgbe 0000:81:00.0: eth6: detected SFP+: 5
> > ixgbe 0000:81:00.0: eth6: NIC Link is Up 10 Gbps, Flow Control: RX/TX
> > 496.743920 [ 320] netmap_pipe_krings_create ffff880876731a00: case 1,
> > create both ends
> > 496.744464 [ 320] netmap_pipe_krings_create ffff880876731000: case 1,
> > create both ends
> > 496.745026 [ 320] netmap_pipe_krings_create ffff880878fcb600: case 1,
> > create both ends
> > 496.745520 [ 320] netmap_pipe_krings_create ffff880875e06c00: case 1,
> > create both ends
> > Loading kernel module for a network device with CAP_SYS_MODULE
> > (deprecated). Use CAP_NET_ADMIN and alias netdev-netmap instead
> > Loading kernel module for a network device with CAP_SYS_MODULE
> > (deprecated). Use CAP_NET_ADMIN and alias netdev-netmap instead
>
>
> When I run the "./pkt-gen -i netmap:eth6}0 -f rx" command, I am seeing
> only a tiny fraction of the expected traffic:
>
> > [root@xdev-w4 bin]# ./pkt-gen -i netmap:eth6}0 -f rx
> > 007.093750 main [2552] interface is netmap:eth6}0
> > 007.093855 main [2675] running on 1 cpus (have 32)
> > 007.094406 extract_ip_range [465] range is 10.0.0.1:1234 to 10.0.0.1:1234
> > 007.094418 extract_ip_range [465] range is 10.1.0.1:1234 to 10.1.0.1:1234
> > 007.094481 main [2770] mapped 334980KB at 0x7f325200d000
> > Receiving from netmap:eth6}0: 1 queues, 1 threads and 1 cpus.
> > 007.094525 start_threads [2235] Wait 2 secs for phy reset
> > 009.094811 start_threads [2237] Ready...
> > 009.094927 receiver_body [1638] reading from netmap:eth6}0 fd 3 main_fd 3
> > 010.095975 main_thread [2325] 3.573 Kpps (3.577 Kpkts 2.151 Mbps in 1001085
> > usec) 511.00 avg_batch 0 min_space
> > 011.097211 main_thread [2325] 2.552 Kpps (2.555 Kpkts 1.643 Mbps in 1001237
> > usec) 511.00 avg_batch 1 min_space
> > 012.098314 main_thread [2325] 3.063 Kpps (3.066 Kpkts 1.981 Mbps in 1001103
> > usec) 511.00 avg_batch 1 min_space
> ...
> > ^C021.032505 sigint_h [512] received control-C on thread 0x7f326672f700
> > 021.032531 main_thread [2325] 2.762 Kpps (2.555 Kpkts 1.306 Mbps in 925126
> > usec) 511.00 avg_batch 1 min_space
> > 022.033620 main_thread [2325] 510.000 pps (511.000 pkts 306.248 Kbps in
> > 1001087 usec) 511.00 avg_batch 1 min_space
> > Received 33726 packets 2876632 bytes 66 events 85 bytes each in 12.02
> > seconds.
> > Speed: 2.806 Kpps Bandwidth: 1.915 Mbps (raw 2.453 Mbps). Average batch:
> > 511.00 pkts
>
> Running all 4 netmap interfaces provides about the same volume of data
> (which is in total ~ 1% of what I would expect). The cpu usage of the
> bricks command is ~ 15-20% regardless of running pkt-gen.
>
> What can I do differently to get better performance?
>
> (3) Accessing the interfaces - as far as actually using the interfaces
> with bro or tcpdump I am somewhat at a loss. I installed netmap-libpcap
> version 1.6.0-PRE-GIT_2016_11_26 and compiled tcpdump:
>
> [root@xdev-w4 tcpdump-4.6.2]# ./tcpdump --version
> tcpdump version 4.6.2
> libpcap version 1.6.0-PRE-GIT_2016_11_26
> OpenSSL 1.0.1e-fips 11 Feb 2013
>
> which is the recommended version per the information on the packet
> bricks git README.
>
> I am unable to get either the native or the netmap-libpcap version of
> tcpdump to recognize the interfaces:
>
> [root@xdev-w4 tcpdump-4.6.2]# ./tcpdump -n -i netmap:eth6{0
> tcpdump: netmap:eth6{0: No such device exists
> (SIOCGIFHWADDR: No such device)
>
> The ifconfig info for the interface looks like:
>
> > eth6 Link encap:Ethernet HWaddr 00:1B:21:9D:95:EA
> > UP BROADCAST RUNNING PROMISC MULTICAST MTU:9000 Metric:1
> > RX packets:39463364964 errors:0 dropped:72437 overruns:0 frame:0
> > TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
> > collisions:0 txqueuelen:1000
> > RX bytes:56963040769452 (51.8 TiB) TX bytes:0 (0.0 b)
>
>
> At this point I am wondering where to go next. It would be great to use
> PB instead of pf_ring, but I will need some help to get there.
>
> many thanks!
> scott
>
> --
> "The enemy knows the system."
> — Claude Shannon
_______________________________________________
bro-dev mailing list
[email protected]
http://mailman.icsi.berkeley.edu/mailman/listinfo/bro-dev