Greetings,
After playing with many settings and testing various configurations, I'm now
able to receive more than 800,000 packets/s on the bridge
without errors, which is amazing!
Unfortunately, the server behind the bridge can't handle more than 250,000
packets/s.
Please advise how I can increase those limits. Is it possible?
The servers use the 82573E Gigabit Ethernet Controller (quad port).
So far I have tried lagg and ng_fec, but with them I see more problems
than benefits :)
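
For reference, the lagg attempt looked roughly like this (the aggregation
protocol and the idea of adding the lagg to the bridge are only illustrative
of what I was trying, not a known-good setup):

ifconfig lagg0 create
ifconfig lagg0 up laggproto loadbalance laggport em0 laggport em1
# on the bridge box the lagg would then become a bridge member, e.g.:
ifconfig bridge0 addm lagg0 up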
I also tried polling with kern.polling.user_frac from 5 to 95 and
different HZ values, but nothing helped.
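
In case it matters, the polling setup itself was the standard one, roughly
as follows (the HZ and user_frac values here are just examples of the ones
I cycled through):

# custom kernel config
options DEVICE_POLLING
options HZ=1000                    # also tried other HZ values

# at runtime
ifconfig em0 polling
ifconfig em1 polling
sysctl kern.polling.user_frac=50   # varied from 5 to 95, as noted above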
Stefan Lambrev wrote:
Greetings,
I'm trying to test a bridge firewall under FreeBSD 7.
My configuration is:
FreeBSD 7 (web server) - bridge (FreeBSD 7) - gigabit switch - flooders.
Both FreeBSD servers are running FreeBSD 7.0-RC1 amd64.
With netperf -l 60 -p 10303 -H 10.3.3.1 I have no problem reaching
116 MB/s, with and without pf enabled.
But what I want to test is how well the firewall performs during
SYN floods.
For this I'm using hping3 (hping-devel in ports) to generate traffic
from the flooders to the web server.
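
The hping3 invocation is roughly along these lines (the destination port
here is only an example; 10.3.3.1 is the web server, as in the netperf run
above):

hping3 --flood -S -p 80 10.3.3.1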
The first thing I noticed is that hping running on Linux generates
twice as much traffic as it does on FreeBSD,
so I plan to set aside a server that dual-boots Linux and FreeBSD
to see what the real difference is.
The second problem I encountered is that when running hping from
FreeBSD, it exits after a few seconds/minutes with this error message:
[send_ip] sendto: No buffer space available
This happens on both FreeBSD 7 and FreeBSD 6.2-p8 (amd64).
Can I increase those buffers?
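
The knobs I have been looking at on the sending box so far are the commonly
mentioned ones below; I have not confirmed which of them, if any, actually
governs this error, and the values are only examples:

# /boot/loader.conf -- example values only
hw.em.txd="1024"            # larger transmit descriptor ring for em(4)
hw.em.rxd="1024"
net.link.ifqmaxlen="256"    # default interface send queue depth is 50
                            # (assuming this tunable is available on 7.0)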
I'm able to generate a 24 MB/s SYN flood, and during my tests I see
this on the bridge firewall:
netstat -w 1 -I em0 -d (external network):

            input          (em0)           output
   packets  errs      bytes    packets  errs      bytes colls drops
    427613  1757   25656852     233604     0   14016924     0     0
    428089  1274   25685358     233794     0   14025174     0     0
    427433  1167   25645998     234775     0   14088834     0     0
    438270  2300   26296218     233384     0   14004474     0     0
    438425  2009   26305518     233858     0   14034114     0     0
and from the internal network:
            input          (em1)           output
   packets  errs      bytes    packets  errs      bytes colls drops
    232912     0   13974838     425796     0   25549446     0  1334
    234487     0   14069338     423986     0   25432026     0  1631
    233951     0   14037178     431330     0   25880286     0  3888
    233509     0   14010658     436496     0   26191986     0  1437
    234181     0   14050978     430291     0   25816806     0  4001
    234144     0   14048870     430208     0   25810206     0  1621
    234176     0   14050678     430292     0   25828926     0  3001
And here is top -S
last pid: 21830;  load averages: 1.01, 0.50, 0.72    up 3+04:59:43  20:27:49
84 processes:  7 running, 60 sleeping, 17 waiting
CPU states:  0.0% user,  0.0% nice, 38.2% system,  0.0% interrupt, 61.8% idle
Mem: 17M Active, 159M Inact, 252M Wired, 120K Cache, 213M Buf, 1548M Free
Swap: 4056M Total, 4056M Free

  PID USERNAME  THR PRI NICE   SIZE    RES STATE   C   TIME    WCPU COMMAND
   14 root        1 171 ki31     0K    16K CPU0    0  76.8H 100.00% idle: cpu0
   11 root        1 171 ki31     0K    16K RUN     3  76.0H 100.00% idle: cpu3
   25 root        1 -68    -     0K    16K CPU1    1  54:26  86.28% em0 taskq
   26 root        1 -68    -     0K    16K CPU2    2  39:13  66.70% em1 taskq
   12 root        1 171 ki31     0K    16K RUN     2  76.0H  37.50% idle: cpu2
   13 root        1 171 ki31     0K    16K RUN     1  75.9H  16.89% idle: cpu1
   16 root        1 -32    -     0K    16K WAIT    0   7:00   0.00% swi4: clock sio
   51 root        1  20    -     0K    16K syncer  3   4:30   0.00% syncer
vmstat -i
interrupt                          total       rate
irq1: atkbd0                         544          0
irq4: sio0                         10641          0
irq14: ata0                            1          0
irq19: uhci1+                     123697          0
cpu0: timer                    553887702       1997
irq256: em0                     48227501        173
irq257: em1                     46331164        167
cpu1: timer                    553887682       1997
cpu3: timer                    553887701       1997
cpu2: timer                    553887701       1997
Total                         2310244334       8333
netstat -m
594/2361/2955 mbufs in use (current/cache/total)
592/1854/2446/204800 mbuf clusters in use (current/cache/total/max)
592/1328 mbuf+clusters out of packet secondary zone in use (current/cache)
0/183/183/12800 4k (page size) jumbo clusters in use (current/cache/total/max)
0/0/0/6400 9k jumbo clusters in use (current/cache/total/max)
0/0/0/3200 16k jumbo clusters in use (current/cache/total/max)
1332K/5030K/6362K bytes allocated to network (current/cache/total)
systat -ifstat
Interface           Traffic               Peak                Total
  bridge0  in      38.704 MB/s          38.704 MB/s          185.924 GB
           out     38.058 MB/s          38.058 MB/s          189.855 GB

      em1  in      13.336 MB/s          13.402 MB/s           51.475 GB
           out     24.722 MB/s          24.722 MB/s          137.396 GB

      em0  in      24.882 MB/s          24.882 MB/s          138.918 GB
           out     13.336 MB/s          13.403 MB/s           45.886 GB
Both FreeBSD servers have a quad-port Intel network card and 2 GB of memory:
[EMAIL PROTECTED]:3:0:0: class=0x020000 card=0x10bc8086 chip=0x10bc8086 rev=0x06 hdr=0x00
    vendor   = 'Intel Corporation'
    device   = '82571EB Gigabit Ethernet Controller (Copper)'
    class    = network
    subclass = ethernet
The firewall server runs on an Intel(R) Xeon(R) X3220 @ 2.40GHz (quad core);
the web server runs on an Intel(R) Xeon(R) CPU 3070 @ 2.66GHz (dual core).
So, in brief: how can I get rid of "No buffer space available",
increase the send rate of hping on FreeBSD, and get rid of dropped
packets at rates like 24 MB/s? :)
What other tests can I run (switching CPU cores on and off, etc.)?
Anyone interested?
P.S. I'm using a custom kernel with SCHED_ULE; both FreeBSD machines are
built from source with CPUTYPE?=core2,
and net.inet.icmp.icmplim_output=0 is set.
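
That is, roughly (the kernel config file name is just a placeholder):

# /etc/make.conf
CPUTYPE?=core2

# custom kernel config (MYKERNEL)
options SCHED_ULE

# /etc/sysctl.conf
net.inet.icmp.icmplim_output=0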
--
Best Wishes,
Stefan Lambrev
ICQ# 24134177