Re: low network speed

2012-01-24 Thread Kevin Oberman
On Tue, Jan 24, 2012 at 9:27 PM, Eugene M. Zheganin  wrote:
> Hi.
>
> I'm suffering from low network performance on one of my FreeBSD machines.
> It's an i386 8.2-RELEASE box with an fxp(4) adapter, connected through a
> chain of Catalyst 2950s to another 8.2 machine. While other machines in
> this server room, using the same sequence of switches and the same source
> server (which, btw, has an em(4) adapter and a gigabit link via a Catalyst
> 3750), show sufficient speed, this particular machine starts an scp
> transfer at about 200 KBytes/sec and settles around 600-800 KBytes/sec
> while the file is copying.
>
> I've added these tweaks to sysctl:
>
> net.local.stream.recvspace=196605
> net.local.stream.sendspace=196605
> net.inet.tcp.sendspace=196605
> net.inet.tcp.recvspace=196605
> net.inet.udp.recvspace=196605
> kern.ipc.maxsockbuf=2621440
> kern.ipc.somaxconn=4096
> net.inet.tcp.sendbuf_max=524288
> net.inet.tcp.recvbuf_max=524288
>
> With these settings the copy starts at 9.5 MBytes/sec, but then, as the
> file is copying, the speed drops to about 3.5 MBytes/sec within two to
> three minutes.
>
> Is there some way to maintain the 9.5 MBytes/sec (I like that speed more)?
>
>
> Thanks.
> Eugene.
>
> P.S. This machine also runs ZFS; I don't know whether that's relevant, but I
> decided to mention it.

9.5 MBytes/sec? That's 76 Mbps, which is reasonable. 28 Mbps is not, but
it's too good to make me think it's a duplex mismatch, though that is
probably still worth checking. Look at the output of 'sysctl
dev.fxp.0.stats' and see whether you are getting framing and CRC errors.
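For example, roughly (counter names vary a bit between fxp(4) revisions, so
treat this as a sketch):

  # ifconfig fxp0 | grep media      # both ends should agree on 100baseTX <full-duplex>
  # sysctl dev.fxp.0.stats          # run it twice a minute apart; steadily growing
                                    # CRC/alignment/collision counters point at a
                                    # duplex or cabling problem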

If this does not point at something, a packet capture of the headers run
through tcptrace may show the cause of the problem, though the output is not
easy to understand. You could also look at the capture with wireshark; it
won't tell you as much, but it will flag errors and "unusual" activity. Both
tools are in ports.
-- 
R. Kevin Oberman, Network Engineer
E-mail: kob6...@gmail.com
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"


Re: Ethernet Switch Framework

2012-01-24 Thread Adrian Chadd
So when will you two have something consensus-y to commit? :-)

What I'm hoping for is:

* some traction on the MII bus / MDIO bus split and tidy-up from stb, which
is nice;
* ray's switch API for speaking to userland;
* agreement on where the driver(s) should live: stb's layout, ray's, or a
mix of both approaches.

I've been (mostly) trying to stay out of this to see where both of you have
gone. I think we've made some good progress; now it's time to solidify a
design for the first pass of what we want in -HEAD and figure out how to
move forward.


Adrian


On 22 January 2012 09:51, Aleksandr Rybalko  wrote:

> On Sun, 22 Jan 2012 16:31:06 +0100
> Stefan Bethke  wrote:
>
> > On 20.01.2012 at 21:13, Aleksandr Rybalko wrote:
> >
> > > It includes sys/mips/conf/AR7240, which together with the hints file is
> > > a good example of a typical AR7240 setup.
> >
> > I'm having trouble getting this to work.  The patch applies cleanly
> > and I can get a kernel compiled and booted, but neither arge0 nor
> > arge1 appears to be functional.  I had to roll my own kernel config as
> > your AR7240 config hangs before printing anything on my TL-MR3420.
>
> Yeah, I know where the problem is: to attach the switch framework to arge
> properly, arge must be a regular NIC. Here is the patch for that:
> http://my.ddteam.net/files/2012-01-22_arge.patch
> I hope it applies cleanly.
>
> The patch fixes both arge problems (the ring buffer allocation and the
> stray interrupts), removes most of the phymask bits, and includes a
> whitespace cleanup.
>
> Thank you for testing that, Stefan.
>
> P.S. I can't test a clean SoC config on my board, because my board is a
> D-Link DIR-615_E4 with a modified U-Boot in it, which is able to load only
> FW images, not ELF kernels. So I test it with the ZRouter.org FW image
> instead.
>
> P.P.S. Can you also show me the network part of your config and hints files?
>
> P.P.P.S. I'm still working on your earlier question about the subject; I've
> already begun broader documentation on the wiki, but it is still far from
> complete :)
> "http://wiki.freebsd.org/AleksandrRybalko/Switch Framework"
>
> >
> > dmesg and devinfo below.
> >
> >
> > Stefan
> >
> > CPU platform: Atheros AR7241 rev 1
> > CPU Frequency=400 MHz
> > CPU DDR Frequency=400 MHz
> > CPU AHB Frequency=200 MHz
> > platform frequency: 4
> > arguments:
> >   a0 = 0008
> >   a1 = a1f87fb0
> >   a2 = a1f88470
> >   a3 = 0004
> > Cmd line:argv is invalid
> > Environment:
> > envp is invalid
> > Cache info:
> >   picache_stride= 4096
> >   picache_loopcount = 16
> >   pdcache_stride= 4096
> >   pdcache_loopcount = 8
> > cpu0: MIPS Technologies processor v116.147
> >   MMU: Standard TLB, 16 entries
> >   L1 i-cache: 4 ways of 512 sets, 32 bytes per line
> >   L1 d-cache: 4 ways of 256 sets, 32 bytes per line
> >   Config1=0x9ee3519e
> >   Config3=0x20
> > KDB: debugger backends: ddb
> > KDB: current backend: ddb
> > Copyright (c) 1992-2012 The FreeBSD Project.
> > Copyright (c) 1979, 1980, 1983, 1986, 1988, 1989, 1991, 1992, 1993,
> > 1994 The Regents of the University of California. All rights reserved.
> > FreeBSD is a registered trademark of The FreeBSD Foundation.
> > FreeBSD 10.0-CURRENT #1: Thu Jan  1 01:00:00 CET 1970
> > stb@dummy
> :/home/stb/working/fe/obj/mipseb/mips.mipseb/home/stb/working/fe/freebsd/sys/TL-MR3420D
> > mips WARNING: WITNESS option enabled, expect reduced performance.
> > real memory  = 33554432 (32768K bytes)
> > avail memory = 25567232 (24MB)
> > random device not loaded; using insecure entropy
> > nexus0: 
> > nexus0: failed to add child: arge0
> > nexus0: failed to add child: arge1
> > clock0:  on nexus0
> > Timecounter "MIPS32" frequency 2 Hz quality 800
> > Event timer "MIPS32" frequency 2 Hz quality 800
> > apb0 at irq 4 on nexus0
> > uart0: <16550 or compatible> on apb0
> > uart0: console (115200,n,8,1)
> > gpio0:  on apb0
> > gpio0: [GIANT-LOCKED]
> > gpio0: function_set: 0x0
> > gpio0: function_clear: 0x0
> > gpio0: gpio pinmask=0x1943
> > gpioc0:  on gpio0
> > gpiobus0:  on gpio0
> > gpioled0:  at pin(s) 0 on gpiobus0
> > gpioled1:  at pin(s) 1 on gpiobus0
> > gpioled2:  at pin(s) 3 on gpiobus0
> > ehci0:  at mem 0x1b000100-0x1bff irq 1 on nexus0
> > usbus0: set host controller mode
> > usbus0: EHCI version 1.0
> > usbus0: set host controller mode
> > usbus0:  on ehci0
> > arge0:  at mem 0x1900-0x19000fff irq 2 on nexus0
> > arge0: Overriding MAC from EEPROM
> > arge0: No PHY specified, using mask 16
> > miibus0:  on arge0
> > floatphy0 PHY 0 on miibus0
> > floatphy0:  none, 10baseT, 10baseT-FDX, 100baseTX, 100baseTX-FDX,
> > 1000baseSX, 1000baseSX-FDX, 1000baseT, 1000baseT-master,
> > 1000baseT-FDX, 1000baseT-FDX-master, auto
> > switch0 PHY 1 on miibus0
> > switch0:  100baseTX, 100baseTX-FDX, 1000baseSX, 1000baseSX-FDX,
> > 1000baseT, 1000baseT-master, 1000baseT-FDX, 1000baseT-FDX-master
> > ar8x16_switch0:  on switch0
> > arge0: Ethernet address: ff:ff:ff:ff:ff:ff
> > arge1:  AR71xx built-in ethernet interfac

low network speed

2012-01-24 Thread Eugene M. Zheganin

Hi.

I'm suffering from low network performance on one of my FreeBSD machines.
It's an i386 8.2-RELEASE box with an fxp(4) adapter, connected through a 
chain of Catalyst 2950s to another 8.2 machine. While other machines in this 
server room, using the same sequence of switches and the same source server 
(which, btw, has an em(4) adapter and a gigabit link via a Catalyst 3750), 
show sufficient speed, this particular machine starts an scp transfer at 
about 200 KBytes/sec and settles around 600-800 KBytes/sec while the file 
is copying.


I've added these tweaks to sysctl:

net.local.stream.recvspace=196605
net.local.stream.sendspace=196605
net.inet.tcp.sendspace=196605
net.inet.tcp.recvspace=196605
net.inet.udp.recvspace=196605
kern.ipc.maxsockbuf=2621440
kern.ipc.somaxconn=4096
net.inet.tcp.sendbuf_max=524288
net.inet.tcp.recvbuf_max=524288

With these settings the copy starts at 9.5 MBytes/sec, but then, as the 
file is copying, the speed drops to about 3.5 MBytes/sec within two to 
three minutes.
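If it helps with the diagnosis, I can also record the TCP counters right 
before and after the slowdown, roughly like this:

  # netstat -s -p tcp | egrep 'retransmit|out-of-order|window'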


Is there some way to maintain the 9.5 MBytes/sec (I like that speed more)?


Thanks.
Eugene.

P.S. This machine also runs ZFS; I don't know whether that's relevant, but I 
decided to mention it.

___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"


RE: Performance problem using Intel X520-DA2

2012-01-24 Thread Marcin Markowski

On 24.01.2012 16:07, Kirk Davis wrote:

-Original Message-
On 24.01.2012 09:18, Nikolay Denev wrote:

On Jan 23, 2012, at 11:39 PM, Marcin Markowski wrote:


Hello,

This message has been sent to freebsd-performance@, but I was told that I
should also contact freebsd-net@.

We use FreeBSD as a sniffer (libpcap programs) and we experience
performance problems when incoming traffic is greater than 7.5 Gbps.

If we check 'top' we see that the first IRQ from the network card is using
100% CPU. I've tested this on FreeBSD 8.2-RELEASE and 9.0-RELEASE (on 9.0
we also see a kernel thread named {ix0 que} using 100% CPU), and both
systems behave the same. In the logs we also see:
interrupt storm detected on "irq268:"; throttling interrupt source

Our server platform is an Intel SR2600URBRP, 2x Xeon X5650, 6GB RAM and an
Intel X520-DA2 NIC.

I'm not sure whether the problem is with the NIC or with the motherboard in
the SR2600URBRP, because everything is fine when we use another server
configuration: Intel SR1630GP, 1x Xeon X3450, 8GB RAM, NIC X520-DA2

My /boot/loader.conf:
kern.ipc.nmbclusters=262144
hw.ixgbe.rxd=2048
hw.ixgbe.txd=2048
hw.ixgbe.num_queues=16

/etc/sysctl.conf
hw.intr_storm_threshold=1



I just finished a bunch of performance tests on this card.  In my
case I was trying to get as close to a full 10Gb/s as possible on a
Dell R710 but I haven't yet tried any sniffing.

Have you tried turning on LRO on the interface (ifconfig <ifname> lro)?  In
my case this made a big difference and I can now get 9.41Gb/s without
high CPU load or interrupt storms.  I am also using Jack's latest driver
downloaded from Intel (version 2.4.4); even the driver in 9.0 is older.

Here is what I have
/etc/sysctl.conf
# Increase the network buffers
kern.ipc.nmbclusters=262144
kern.ipc.maxsockbuf=4194304
hw.intr_storm_threshold=9000
kern.ipc.nmbjumbop=262144

/boot/loader.conf
ixgbe_load="YES"
hw.ixgbe.txd=4096
hw.ixgbe.rxd=4096

 Kirk


 Hi Kirk,

I did not notice any change after turning on LRO. When checking the stats
I see that LRO is not being used (perhaps because the interface is receiving
traffic from a port mirror on the switch).

sysctl output from dev.ix.0:
http://pastebin.com/fkRp7Py5
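For reference, what I grep for when checking whether LRO kicks in, plus how
the load spreads over the queue interrupts (a rough sketch; the exact counter
names can differ between driver versions):

  # sysctl dev.ix.0 | grep -i lro    # per-queue lro_queued / lro_flushed style counters
  # vmstat -i | grep ix0             # interrupt rate per ix0 queue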

--
Marcin Markowski

___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"


livelock with fully loaded em(4)

2012-01-24 Thread Anton Yuzhaninov
Hello.

I have test boxes with an em(4) network card (Intel 82563EB).
FreeBSD version: 8.2-STABLE from 2012-01-15, amd64.

When this NIC is fully loaded a livelock occurs: the system is unresponsive
even from the local console.

To generate load I use netsend from /usr/src/tools/tools/netrate/,
but other traffic sources (e.g. TCP instead of UDP) cause the same problem.

There are two parts to this livelock:
1. Under full load the kernel thread "em1 taskq" hogs a CPU.

top -zISHP output for an interface load slightly below full.
Traffic is generated by
# netsend 172.16.0.2 9001 8500 14300 3600
where 14300 is the rate in packets per second:

112 processes: 10 running, 82 sleeping, 20 waiting
CPU 0:  0.0% user,  0.0% nice, 27.1% system,  0.0% interrupt, 72.9% idle
CPU 1:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 2:  2.3% user,  0.0% nice, 97.7% system,  0.0% interrupt,  0.0% idle
CPU 3:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 4:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 5:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 6:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 7:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
Mem: 26M Active, 378M Inact, 450M Wired, 132K Cache, 63M Buf, 15G Free
Swap: 8192M Total, 8192M Free

  PID USERNAME  PRI NICE   SIZERES STATE   C   TIME   WCPU COMMAND
 7737 ayuzhaninov   1190  5832K  1116K CPU22   0:04 100.00% netsend
0 root  -680 0K   144K -   0   2:17 22.27% {em1 taskq}

top -zISHP output for full interface load (some drops occur); the load is
generated by
# netsend 172.16.0.2 9001 8500 14400 3600

112 processes: 11 running, 81 sleeping, 20 waiting
CPU 0:  0.0% user,  0.0% nice,  100% system,  0.0% interrupt,  0.0% idle
CPU 1:  4.1% user,  0.0% nice, 95.9% system,  0.0% interrupt,  0.0% idle
CPU 2:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 3:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 4:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 5:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 6:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
CPU 7:  0.0% user,  0.0% nice,  0.0% system,  0.0% interrupt,  100% idle
Mem: 26M Active, 378M Inact, 450M Wired, 132K Cache, 63M Buf, 15G Free
Swap: 8192M Total, 8192M Free

  PID USERNAME  PRI NICE   SIZERES STATE   C   TIME   WCPU COMMAND
0 root  -680 0K   144K CPU00   2:17 100.00% {em1 taskq}
 7759 ayuzhaninov   1190  5832K  1116K CPU11   0:01 100.00% netsend

So the packet rate increased from 14300 to 14400 pps (0.7%), but the CPU load
from the "em1 taskq" thread increased from 27.1% to 100.00%.

This is strange by itself, but the system still works fine until I run
sysctl dev.cpu.0.temperature

2. The sysctl handler code for coretemp must be executed on the target CPU,
e.g. for dev.cpu.0.temperature the code is executed on CPU 0.

If CPU 0 is fully loaded by "em1 taskq", the sysctl handler for
dev.cpu.0.temperature acquires the Giant mutex and then tries to run code
on CPU 0, but it can't: CPU 0 is busy.

If the Giant mutex is held for a long time the system is unresponsive. In my
case Giant is acquired when sysctl dev.cpu.0.temperature starts and is held
the whole time netsend is running.
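A rough way to confirm this is to break into DDB during the hang and look at
the threads and lock chains (a sketch; it assumes the kernel has DDB compiled
in and that the console break still gets through, which it may not):

db> ps
db> show allchains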

This seems to be a scheduler problem:
1. Why does "em1 taskq" run only on CPU 0 (there is no affinity set for this thread)?

# procstat -k 0 | egrep '(PID|em1)'
  PIDTID COMM TDNAME   KSTACK
0 100038 kernel   em1 taskq
# cpuset -g -t 100038
tid 100038 mask: 0, 1, 2, 3, 4, 5, 6, 7
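A crude, untested workaround sketch would be to pin the taskqueue thread away
from CPU 0, using the tid shown above:

# cpuset -l 1-7 -t 100038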

2. Why "em1 taskq" is not preempted to execute sysctl handler code? This
is not short term condition - is netsend running for a hour, "em1 taskq"
is not preempted for a hour - sysctl all this time in running state but
don't have a chance to be executed.

-- 
 Anton Yuzhaninov

P.S. I tried using EM_MULTIQUEUE, but it doesn't help in this case.

___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"


RE: Performance problem using Intel X520-DA2

2012-01-24 Thread Kirk Davis


>-Original Message-
>On 24.01.2012 09:18, Nikolay Denev wrote:
>> On Jan 23, 2012, at 11:39 PM, Marcin Markowski wrote:
>>
>>> Hello,
>>>
>>> This message has been sent to freebsd-performance@, but I was told that I 
>>> should also contact freebsd-net@.
>>>
>>> We use FreeBSD as a sniffer (libpcap programs) and we experience 
>>> performance problems when incoming traffic is greater than 7.5 Gbps.
>>> If we check 'top' we see that the first IRQ from the network card is using 
>>> 100% CPU. I've tested this on FreeBSD 8.2-RELEASE and 9.0-RELEASE (on 
>>> 9.0 we also see a kernel thread named {ix0 que} using 100% CPU), 
>>> and both systems behave the same. In the logs we also see:
>>> interrupt storm detected on "irq268:"; throttling interrupt source
>>>
>>> Our server platform is Intel SR2600URBRP, 2x Xeon X5650, 6GB RAM and 
>>> NIC Intel X520-DA2.
>>>
>>> I'm not sure whether the problem is with the NIC or with the motherboard 
>>> in the SR2600URBRP, because everything is fine when we use another server 
>>> configuration:
>>> Intel SR1630GP, 1x Xeon X3450, 8GB RAM, NIC X520-DA2
>>>
>>> My /boot/loader.conf:
>>> kern.ipc.nmbclusters=262144
>>> hw.ixgbe.rxd=2048
>>> hw.ixgbe.txd=2048
>>> hw.ixgbe.num_queues=16
>>>
>>> /etc/sysctl.conf
>>> hw.intr_storm_threshold=1
>>>

I just finished a bunch of performance tests on this card.  In my case I was 
trying to get as close to a full 10Gb/s as possible on a Dell R710 but I 
haven't yet tried any sniffing.

Have you tried turning on LRO on the interface (ifconfig <ifname> lro)?  In my 
case this made a big difference and I can now get 9.41Gb/s without high CPU 
load or interrupt storms.  I am also using Jack's latest driver downloaded from 
Intel (version 2.4.4); even the driver in 9.0 is older.
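For reference, enabling and then verifying it is just (assuming the interface 
is ix0 on your box):

# ifconfig ix0 lro
# ifconfig ix0 | grep options     # LRO should now appear in the options line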

Here is what I have 
/etc/sysctl.conf
# Increase the network buffers
kern.ipc.nmbclusters=262144
kern.ipc.maxsockbuf=4194304
hw.intr_storm_threshold=9000
kern.ipc.nmbjumbop=262144

/boot/loader.conf
ixgbe_load="YES"
hw.ixgbe.txd=4096
hw.ixgbe.rxd=4096

 Kirk
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"

Re: ng_bridge and locks

2012-01-24 Thread Gleb Smirnoff
On Tue, Jan 24, 2012 at 06:09:30AM +0900, rozhuk...@gmail.com wrote:
r> I found a comment in the code:
r>  /*
r>   * This node has all kinds of stuff that could be screwed by SMP.
r>   * Until it gets it's own internal protection, we go through in 
r>   * single file. This could hurt a machine bridging beteen two 
r>   * GB ethernets so it should be fixed. 
r>   * When it's fixed the process SHOULD NOT SLEEP, spinlocks please!
r>   * (and atomic ops )
r>   */
r> 
r> mtx_init(, MTX_DEF);
r> How bad would it be to use a MTX_DEF mutex for the netgraph node?

It would be correct to use a MTX_DEF mutex to lock the ng_bridge node.

You need something like a mutex per hash entry, and if it is all done
correctly, then you can remove NG_NODE_FORCE_WRITER().

-- 
Totus tuus, Glebius.
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"


Re: Performance problem using Intel X520-DA2

2012-01-24 Thread Steven Hartland


- Original Message - 
From: "Marcin Markowski" 
I tried to compile the kernel with NETMAP on FreeBSD 8 and 9, but I get 
warnings and the compilation aborts.

cc1: warnings being treated as errors
../../../dev/netmap/netmap.c: In function 'netmap_memory_init':
../../../dev/netmap/netmap.c:1557: warning: format '%d' expects type 
'int', but argument 7 has type 'size_t'
../../../dev/netmap/netmap.c:1564: warning: format '%d' expects type 
'int', but argument 7 has type 'size_t'

../../../dev/netmap/netmap.c: In function 'netmap_memory_fini':
../../../dev/netmap/netmap.c:1607: warning: format '%d' expects type 
'int', but argument 2 has type 'size_t'

../../../dev/netmap/netmap.c: In function 'netmap_init':
../../../dev/netmap/netmap.c:1636: warning: format '%d' expects type 
'int', but argument 2 has type 'size_t'

*** Error code 1

I'll try HEAD and see if it will be the same.


If it's just that error, change the %d to %ld and that should fix it.
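Alternatively, if you just want a test build past the warnings-as-errors,
something like this should work (from memory, untested):

# make buildkernel KERNCONF=YOURKERNEL WERROR=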

   Regards
   Steve


___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"


Re: Performance problem using Intel X520-DA2

2012-01-24 Thread Marcin Markowski

On 24.01.2012 09:18, Nikolay Denev wrote:

On Jan 23, 2012, at 11:39 PM, Marcin Markowski wrote:


Hello,

This message has been sent to freebsd-performance@, but I was told that
I should also contact freebsd-net@.

We use FreeBSD as a sniffer (libpcap programs) and we experience
performance problems when incoming traffic is greater than 7.5 Gbps.

If we check 'top' we see that the first IRQ from the network card is using
100% CPU. I've tested this on FreeBSD 8.2-RELEASE and 9.0-RELEASE (on 9.0
we also see a kernel thread named {ix0 que} using 100% CPU), and both
systems behave the same. In the logs we also see:
interrupt storm detected on "irq268:"; throttling interrupt source

Our server platform is Intel SR2600URBRP, 2x Xeon X5650, 6GB RAM and
NIC Intel X520-DA2.

I'm not sure whether the problem is with the NIC or with the motherboard in
the SR2600URBRP, because everything is fine when we use another server
configuration: Intel SR1630GP, 1x Xeon X3450, 8GB RAM, NIC X520-DA2

My /boot/loader.conf:
kern.ipc.nmbclusters=262144
hw.ixgbe.rxd=2048
hw.ixgbe.txd=2048
hw.ixgbe.num_queues=16

/etc/sysctl.conf
hw.intr_storm_threshold=1

--
Marcin Markowski



Hi,

Maybe you want to take a look at NETMAP:
http://info.iet.unipi.it/~luigi/netmap/
There is a libpcap wrapper library, so you can use it with unchanged pcap
consumers and get a great performance increase.
I'm not sure the patches for 8 and 9 are up to date, though, since there
have been several related changes after the initial commit to HEAD.

P.S.: Your packet rate also matters, since 7.5 Gbps of jumbo packets and
7.5 Gbps of 64-byte packets are very different things :)

Regards,
Nikolay


I forgot to answer the P.S.: our switch shows that the peak was 2 Mpps.

--
Marcin Markowski

___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"


Re: Performance problem using Intel X520-DA2

2012-01-24 Thread Marcin Markowski

On 24.01.2012 09:18, Nikolay Denev wrote:

On Jan 23, 2012, at 11:39 PM, Marcin Markowski wrote:


Hello,

This message has been sent to freebsd-performance@, but I was told that
I should also contact freebsd-net@.

We use FreeBSD as a sniffer (libpcap programs) and we experience
performance problems when incoming traffic is greater than 7.5 Gbps.

If we check 'top' we see that the first IRQ from the network card is using
100% CPU. I've tested this on FreeBSD 8.2-RELEASE and 9.0-RELEASE (on 9.0
we also see a kernel thread named {ix0 que} using 100% CPU), and both
systems behave the same. In the logs we also see:
interrupt storm detected on "irq268:"; throttling interrupt source

Our server platform is Intel SR2600URBRP, 2x Xeon X5650, 6GB RAM and
NIC Intel X520-DA2.

I'm not sure whether the problem is with the NIC or with the motherboard in
the SR2600URBRP, because everything is fine when we use another server
configuration: Intel SR1630GP, 1x Xeon X3450, 8GB RAM, NIC X520-DA2

My /boot/loader.conf:
kern.ipc.nmbclusters=262144
hw.ixgbe.rxd=2048
hw.ixgbe.txd=2048
hw.ixgbe.num_queues=16

/etc/sysctl.conf
hw.intr_storm_threshold=1

--
Marcin Markowski



Hi,

Maybe you want to take a look at NETMAP:
http://info.iet.unipi.it/~luigi/netmap/
There is a libpcap wrapper library, so you can use it with unchanged pcap
consumers and get a great performance increase.
I'm not sure the patches for 8 and 9 are up to date, though, since there
have been several related changes after the initial commit to HEAD.

P.S.: Your packet rate also matters, since 7.5 Gbps of jumbo packets and
7.5 Gbps of 64-byte packets are very different things :)


 Hi Nikolay,

I tried to compile the kernel with NETMAP on FreeBSD 8 and 9, but I get 
warnings and the compilation aborts.

cc1: warnings being treated as errors
../../../dev/netmap/netmap.c: In function 'netmap_memory_init':
../../../dev/netmap/netmap.c:1557: warning: format '%d' expects type 
'int', but argument 7 has type 'size_t'
../../../dev/netmap/netmap.c:1564: warning: format '%d' expects type 
'int', but argument 7 has type 'size_t'

../../../dev/netmap/netmap.c: In function 'netmap_memory_fini':
../../../dev/netmap/netmap.c:1607: warning: format '%d' expects type 
'int', but argument 2 has type 'size_t'

../../../dev/netmap/netmap.c: In function 'netmap_init':
../../../dev/netmap/netmap.c:1636: warning: format '%d' expects type 
'int', but argument 2 has type 'size_t'

*** Error code 1

I'll try HEAD and see if it will be the same.

--
Marcin Markowski

___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"


Re: pf not seeing inbound packets on netgraph interface

2012-01-24 Thread Andreas Longwitz
Hi Ed,

> I am running into a roadblock getting PF to filter traffic on
> a Netgraph interface representing an L2TP/IPSec connection.

> The problem I have is that PF only sees traffic on the outbound
> side of the netgraph interface.

This happens because the L2TP packets are tagged with an IPsec flag for
later use by ipfw, and this flag is still attached to the packets coming
out of ng0. That is done by netgraph under the control of mpd; or rather,
mpd does nothing to clear the flag.

With net.inet.ipsec.filtertunnel=1 you can ignore this IPsec flag, but only
globally for all interfaces in the system. That is probably not what you
want, especially not for the real hardware interface the IPsec tunnel is
going through.

I think L2TP under the control of mpd should work independently of the
existence of an IPsec tunnel and therefore clear this flag:

--- ng_l2tp.c.orig   2010-04-15 14:40:02.0 +0200
+++ ng_l2tp.c   2012-01-23 17:09:41.0 +0100
@@ -752,6 +752,7 @@
hookpriv_p hpriv = NULL;
hook_p hook = NULL;
struct mbuf *m;
+   struct m_tag *mtag;
u_int16_t tid, sid;
u_int16_t hdr;
u_int16_t ns, nr;
@@ -996,6 +997,11 @@
ERROUT(0);
}

+   /* Delete an existing ipsec tag */
+   mtag = m_tag_find(m, PACKET_TAG_IPSEC_IN_DONE, NULL);
+   if (mtag != NULL)
+   m_tag_delete(m, mtag);
+
/* Deliver data */
NG_FWD_NEW_DATA(error, item, hook, m);

This patch for the l2tp netgraph node does the job, and you can then use pf
on the ng0 interface without any restrictions.

Regards,


-- 
Dr. Andreas Longwitz

Data Service GmbH
Beethovenstr. 2A
23617 Stockelsdorf
Amtsgericht Lübeck, HRB 318 BS
Geschäftsführer: Wilfried Paepcke, Dr. Andreas Longwitz, Josef Flatau

___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"


Re: ICMP attacks against TCP and PMTUD

2012-01-24 Thread Nikolay Denev

On Jan 23, 2012, at 11:17 PM, Andre Oppermann wrote:

> On 23.01.2012 16:01, Nikolay Denev wrote:
>> 
>> On Jan 20, 2012, at 10:32 AM, Nikolay Denev wrote:
>> 
>>> On Jan 15, 2012, at 9:52 PM, Nikolay Denev wrote:
>>> 
 On 15.01.2012, at 21:35, Andrey Zonov  wrote:
 
> This helped me:
> /boot/loader.conf
> net.inet.tcp.hostcache.hashsize=65536
> net.inet.tcp.hostcache.cachelimit=1966080
> 
> Actually, this is a workaround.  As I remember, the real problem is in
> tcp_ctlinput(): it cannot update the MTU for the destination IP if the
> hostcache allocation fails.  tcp_hc_updatemtu() should return NULL if
> tcp_hc_insert() returns NULL, and tcp_ctlinput() should check this case
> and set the updated MTU for this particular connection if
> tcp_hc_updatemtu() fails.  Otherwise we get an infinite loop in MTU
> discovery.
> 
> 
> On 15.01.2012 22:59, Nikolay Denev wrote:
>> 
>> % uptime
>> 7:57PM  up 608 days,  4:06, 1 user, load averages: 0.30, 0.21, 0.17
>> 
>> % vmstat -z|grep hostcache
>> hostcache:   136,   15372,   15136,   236,   44946965,   10972760
>> 
>> 
>> Hmm… probably I should increase this….
>> 
> 
> --
> Andrey Zonov
 
 Thanks, I will test this asap!
 
 Regards,
 Nikolay
>>> 
>>> I've upgraded from 7.3-STABLE to 8.2-STABLE and significantly bumped the 
>>> hostcache tunables.
>>> So far so good; I'll report back if I see similar traffic spikes.
>>> 
>> 
>> Seems like I have been wrong about these traffic spikes being attacks;
>> actually the problem seems to be the PMTU infinite loop Andrey described.
>> I'm now running 8.2-STABLE with the hostcache significantly bumped and
>> regularly have more than 20K hostcache entries, which is more than the
>> default limit of 15K I was running with before.
> 
> The bug is real.  Please try the attached patch to fix the issue for IPv4.
> It's against current but should apply to 8 or 9 as well.
> 
> -- 
> Andre
> 
> http://people.freebsd.org/~andre/tcp_subr.c-pmtud-20120123.diff
> 
> Index: netinet/tcp_subr.c
> ===
> --- netinet/tcp_subr.c(revision 230489)
> +++ netinet/tcp_subr.c(working copy)
> @@ -1410,9 +1410,11 @@
>*/
>   if (mtu <= tcp_maxmtu(&inc, NULL))
>   tcp_hc_updatemtu(&inc, mtu);
> - }
> -
> - inp = (*notify)(inp, inetctlerrmap[cmd]);
> + /* XXXAO: Slighly hackish. */
> + inp = (*notify)(inp, mtu);
> + } else
> + inp = (*notify)(inp,
> + inetctlerrmap[cmd]);
>   }
>   }
>   if (inp != NULL)
> @@ -1656,12 +1658,15 @@
>  * based on the new value in the route.  Also nudge TCP to send something,
>  * since we know the packet we just sent was dropped.
>  * This duplicates some code in the tcp_mss() function in tcp_input.c.
> + *
> + * XXXAO: Slight abuse of 'errno'.
>  */
> struct inpcb *
> tcp_mtudisc(struct inpcb *inp, int errno)
> {
>   struct tcpcb *tp;
>   struct socket *so;
> + int mtu;
> 
>   INP_WLOCK_ASSERT(inp);
>   if ((inp->inp_flags & INP_TIMEWAIT) ||
> @@ -1671,7 +1676,12 @@
>   tp = intotcpcb(inp);
>   KASSERT(tp != NULL, ("tcp_mtudisc: tp == NULL"));
> 
> - tcp_mss_update(tp, -1, NULL, NULL);
> + /* Extract the MTU from errno for IPv4. */
> + if (errno > PRC_NCMDS)
> + mtu = errno;
> + else
> + mtu = -1;
> + tcp_mss_update(tp, mtu, NULL, NULL);
> 
>   so = inp->inp_socket;
>   SOCKBUF_LOCK(&so->so_snd);

Hi Andre,

Thanks for the patch. I will apply it as soon as possible.
I'll probably first try to reproduce the problem locally, since I've already
increased the hostcache on my nginx balancers, and the changes require
reboots, which I'm not able to do at the moment.
Will let you know as soon as I have results.

Thanks!


___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"


Re: Performance problem using Intel X520-DA2

2012-01-24 Thread Nikolay Denev
On Jan 23, 2012, at 11:39 PM, Marcin Markowski wrote:

> Hello,
> 
> This message has been sent to freebsd-performance@, but I was told that
> I should also contact freebsd-net@.
> 
> We use FreeBSD as a sniffer (libpcap programs) and we experience
> performance problems when incoming traffic is greater than 7.5 Gbps.
> If we check 'top' we see that the first IRQ from the network card is using
> 100% CPU. I've tested this on FreeBSD 8.2-RELEASE and 9.0-RELEASE
> (on 9.0 we also see a kernel thread named {ix0 que} using 100% CPU),
> and both systems behave the same. In the logs we also see:
> interrupt storm detected on "irq268:"; throttling interrupt source
> 
> Our server platform is Intel SR2600URBRP, 2x Xeon X5650, 6GB RAM and
> NIC Intel X520-DA2.
> 
> I'm not sure whether the problem is with the NIC or with the motherboard
> in the SR2600URBRP, because everything is fine when we use another server
> configuration: Intel SR1630GP, 1x Xeon X3450, 8GB RAM, NIC X520-DA2
> 
> My /boot/loader.conf:
> kern.ipc.nmbclusters=262144
> hw.ixgbe.rxd=2048
> hw.ixgbe.txd=2048
> hw.ixgbe.num_queues=16
> 
> /etc/sysctl.conf
> hw.intr_storm_threshold=1
> 
> -- 
> Marcin Markowski


Hi,

Maybe you want to take a look at NETMAP:
http://info.iet.unipi.it/~luigi/netmap/
There is a libpcap wrapper library, so you can use it with unchanged pcap
consumers and get a great performance increase.
I'm not sure the patches for 8 and 9 are up to date, though, since there have
been several related changes after the initial commit to HEAD.

P.S.: Your packet rate also matters, since 7.5 Gbps of jumbo packets and
7.5 Gbps of 64-byte packets are very different things :)
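(Back-of-the-envelope, counting 20 bytes of preamble plus inter-frame gap per
packet on the wire: 7.5 Gbit/s is roughly 11 Mpps at 64-byte frames, about
0.6 Mpps at 1500-byte frames, and about 0.1 Mpps with 9000-byte jumbos.)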

Regards,
Nikolay

___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to "freebsd-net-unsubscr...@freebsd.org"