Re: FreeBSD 8.2 and MPD5 stability issues - update

2011-07-04 Thread Adrian Minta

On 07/03/2011 10:56 PM, Eugene Grosbein wrote:


There is an internal queue of messages in mpd-5.5 with length 8192.
Messages are generated from various events and enqueued there, then
processed.

Mpd uses a GRED-like algorithm to prevent overload: it accepts all new L2TP connections
when the queue has 10 or fewer slots occupied (unprocessed events).

It drops new incoming connections when it has over 60 slots occupied.


In between, it drops a new L2TP connection with probability equal to (q-10)*2 percent,
where q is the number of occupied queue slots. These constants are hardcoded in its
src/ppp.h.

Each time it decides to ignore an incoming L2TP request, it notes that in the log,
as you have already seen.
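The behaviour described above can be sketched as a small helper. This is a reconstruction from the prose (thresholds 10 and 60, drop probability (q-10)*2 percent), not a verbatim copy of mpd's SETOVERLOAD() macro from src/ppp.h; the function name is invented for illustration.

```c
#include <assert.h>

/*
 * Sketch of the GRED-style overload level with the hardcoded
 * defaults (10 and 60).  mpd implements this as the SETOVERLOAD()
 * macro in src/ppp.h; the function form here is only illustrative.
 */
static int
overload_percent(int q)         /* q = occupied queue slots */
{
    if (q > 60)
        return 100;             /* drop every new incoming connection */
    if (q > 10)
        return (q - 10) * 2;    /* drop with (q-10)*2 percent probability */
    return 0;                   /* 10 or fewer slots: accept everything */
}
```

Raising the constants to 100 and 600, as tried later in this thread, stretches the same curve over a ten-times-longer queue.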

Eugene Grosbein



Hi Eugene,
if I understand correctly, in order to increase the connection rate I need 
to replace 60 with 600 and 10 with 100, like this:


  #define SETOVERLOAD(q)    do {                \
    int t = (q);                                \
    if (t > 600) {                              \
        gOverload = 100;                        \
    } else if (t > 100) {                       \
        gOverload = (t - 100) * 2;              \
    } else {                                    \
        gOverload = 0;                          \
    }                                           \
} while (0)

 Is this enough, or do I need to modify something else?

--
Best regards,



___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to freebsd-net-unsubscr...@freebsd.org


Current problem reports assigned to freebsd-net@FreeBSD.org

2011-07-04 Thread FreeBSD bugmaster
Note: to view an individual PR, use:
  http://www.freebsd.org/cgi/query-pr.cgi?pr=(number).

The following is a listing of current problems submitted by FreeBSD users.
These represent problem reports covering all versions including
experimental development code and obsolete releases.


S Tracker  Resp.  Description

f kern/158426  net[e1000] [panic] _mtx_lock_sleep: recursed on non-recur
o kern/158201  net[re] re0 driver quit working on Acer AO751h between 8.
o kern/158156  net[bce] bce driver shows no carrier on IBM blade (HS22
f kern/157802  net[dummynet] [panic] kernel panic in dummynet
o kern/157785  netamd64 + jail + ipfw + natd = very slow outbound traffi
o kern/157429  net[re] Realtek RTL8169 doesn't work with re(4)
o kern/157418  net[em] em driver lockup during boot on Supermicro X9SCM-
o kern/157410  net[ip6] IPv6 Router Advertisements Cause Excessive CPU U
o kern/157287  net[re] [panic] INVARIANTS panic (Memory modified after f
o kern/157209  net[ip6] [patch] locking error in rip6_input() (sys/netin
o kern/157200  net[network.subr] [patch] stf(4) can not communicate betw
o kern/157182  net[lagg] lagg interface not working together with epair 
o kern/156978  net[lagg][patch] Take lagg rlock before checking flags
o kern/156877  net[dummynet] [panic] dummynet move_pkt() null ptr derefe
o kern/156667  net[em] em0 fails to init on CURRENT after March 17
o kern/156408  net[vlan] Routing failure when using VLANs vs. Physical e
o kern/156328  net[icmp]: host can ping other subnet but no have IP from
o kern/156317  net[ip6] Wrong order of IPv6 NS DAD/MLD Report
o kern/156283  net[ip6] [patch] nd6_ns_input - rtalloc_mpath does not re
o kern/156279  net[if_bridge][divert][ipfw] unable to correctly re-injec
o kern/156226  net[lagg]: failover does not announce the failover to swi
o kern/156030  net[ip6] [panic] Crash in nd6_dad_start() due to null ptr
o kern/155772  netifconfig(8): ioctl (SIOCAIFADDR): File exists on direc
o kern/155680  net[multicast] problems with multicast
s kern/155642  net[request] Add driver for Realtek RTL8191SE/RTL8192SE W
o kern/155604  net[flowtable] Flowtable excessively caches dest MAC addr
o kern/155597  net[panic] Kernel panics with sbdrop message
o kern/155585  net[tcp] [panic] tcp_output tcp_mtudisc loop until kernel
o kern/155498  net[ral] ral(4) needs to be resynced with OpenBSD's to ga
o kern/155420  net[vlan] adding vlan break existent vlan
o bin/155365   net[patch] routed(8): if.c in routed fails to compile if 
o kern/155177  net[route] [panic] Panic when inject routes in kernel
o kern/155030  net[igb] igb(4) DEVICE_POLLING does not work with carp(4)
o kern/155010  net[msk] ntfs-3g via iscsi using msk driver cause kernel 
o kern/155004  net[bce] [panic] kernel panic in bce0 driver
o kern/154943  net[gif] ifconfig gifX create on existing gifX clears IP
s kern/154851  net[request]: Port brcm80211 driver from Linux to FreeBSD
o kern/154850  net[netgraph] [patch] ng_ether fails to name nodes when t
o kern/154831  net[arp] [patch] arp sysctl setting log_arp_permanent_mod
o kern/154679  net[em] Fatal trap 12: em1 taskq only at startup (8.1-R
o kern/154600  net[tcp] [panic] Random kernel panics on tcp_output
o kern/154557  net[tcp] Freeze tcp-session of the clients, if in the gat
o kern/154443  net[if_bridge] Kernel module bridgestp.ko missing after u
o kern/154286  net[netgraph] [panic] 8.2-PRERELEASE panic in netgraph
o kern/154255  net[nfs] NFS not responding
o kern/154214  net[stf] [panic] Panic when creating stf interface
o kern/154185  netrace condition in mb_dupcl
o kern/154169  net[multicast] [ip6] Node Information Query multicast add
o kern/154134  net[ip6] stuck kernel state in LISTEN on ipv6 daemon whic
o kern/154091  net[netgraph] [panic] netgraph, unaligned mbuf?
o conf/154062  net[vlan] [patch] change to way of auto-generatation of v
o kern/153937  net[ral] ralink panics the system (amd64 freeBSDD 8.X) wh
o kern/153936  net[ixgbe] [patch] MPRC workaround incorrectly applied to
o kern/153816  net[ixgbe] ixgbe doesn't work properly with the Intel 10g
o kern/153772  net[ixgbe] [patch] sysctls reference wrong XON/XOFF varia
o kern/153497  net[netgraph] netgraph panic due to race conditions
o kern/153454  net[patch] [wlan] [urtw] Support ad-hoc and hostap modes 
o kern/153308  net[em] em interface use 100% cpu
o kern/153244  net[em] em(4) fails to send UDP to port 0x
o kern/152893  net[netgraph] [panic] 

Netgraph udp tunneling

2011-07-04 Thread Alexey V. Panfilov

Hi!

We have three servers, connected as S1 -ng_tunnel- S2 -ng_tunnel- S3.

BGP with a full view runs over the tunnels.

Sometimes a fatal trap occurs on S2, followed by an automatic reboot or by 
no reboot at all. It does not depend on the network load (Mbps or pps).


When a core was saved successfully, the backtrace always showed that the 
fatal trap occurred because a large packet (around 60 KB) was 
received.



Any help is welcome. Thanks.

--

Info about S2:

smbios.system.maker=IBM
smbios.system.product=System x3250 M3 -[425232G]-
hw.model: Intel(R) Core(TM) i3 CPU 530  @ 2.93GHz
hw.physmem: 4207792128
dev.em.0.%desc: Intel(R) PRO/1000 Network Connection 7.1.9
dev.em.0.%pnpinfo: vendor=0x8086 device=0x10d3 subvendor=0x1014 
subdevice=0x03bd class=0x02


NOTE: only em0 is used on S2

/etc/sysctl.conf:
net.inet.icmp.icmplim=50
net.inet.tcp.blackhole=2
net.inet.udp.blackhole=1
net.inet.icmp.drop_redirect=1
net.inet.icmp.log_redirect=0
net.inet.ip.redirect=0

The netgraph tunnels are configured as in the example 
(/usr/share/examples/netgraph/udp.tunnel):


ngctl mkpeer iface dummy inet
ngctl mkpeer ng0: ksocket inet inet/dgram/udp
ngctl name ng0:inet to_S1
ngctl msg ng0:inet bind inet/1.1.1.1:60001
ngctl msg ng0:inet connect inet/5.5.5.5:60001
ifconfig ng0 10.0.0.1 10.0.0.2 netmask 255.255.255.252

ngctl mkpeer iface dummy inet
ngctl mkpeer ng1: ksocket inet inet/dgram/udp
ngctl name ng1:inet to_S3
ngctl msg ng1:inet bind inet/1.1.1.3:60002
ngctl msg ng1:inet connect inet/7.7.7.7:60002
ifconfig ng1 10.0.0.5 10.0.0.6 netmask 255.255.255.252

FreeBSD S2.line 8.2-RELEASE-p2 FreeBSD 8.2-RELEASE-p2 #1: Wed Jun 22 
13:56:26 MSD 2011 r...@s2.line:/usr/src/sys/amd64/compile/BGP  amd64


Hardware and software configurations on S1 and S3 are identical to S2, 
but they run without problems.




backtrace:

Unread portion of the kernel message buffer:


Fatal trap 12: page fault while in kernel mode
cpuid = 3; apic id = 05
fault virtual address   = 0x18
fault code  = supervisor read data, page not present
instruction pointer = 0x20:0x803f4a87
stack pointer   = 0x28:0xff811c80a5a0
frame pointer   = 0x28:0xff811c80a600
code segment= base 0x0, limit 0xf, type 0x1b
= DPL 0, pres 1, long 1, def32 0, gran 1
processor eflags= interrupt enabled, resume, IOPL = 0
current process = 1478 (ng_queue0)
trap number = 12
panic: page fault
cpuid = 3
Uptime: 11d23h7m40s
Physical memory: 4012 MB
Dumping 674 MB: 659 643 627 611 595 579 563 547 531 515 499 483 467 451 
435 419 403 387 371 355 339 323 307 291 275 259 243 227 211 195 179 163 
147 131 115 99 83 67 51 35 19 3


Reading symbols from /boot/kernel/coretemp.ko...Reading symbols from 
/boot/kernel/coretemp.ko.symbols...done.

done.
Loaded symbols for /boot/kernel/coretemp.ko
Reading symbols from /boot/kernel/ng_socket.ko...Reading symbols from 
/boot/kernel/ng_socket.ko.symbols...done.

done.
Loaded symbols for /boot/kernel/ng_socket.ko
Reading symbols from /boot/kernel/netgraph.ko...Reading symbols from 
/boot/kernel/netgraph.ko.symbols...done.

done.
Loaded symbols for /boot/kernel/netgraph.ko
Reading symbols from /boot/kernel/ng_iface.ko...Reading symbols from 
/boot/kernel/ng_iface.ko.symbols...done.

done.
Loaded symbols for /boot/kernel/ng_iface.ko
Reading symbols from /boot/kernel/ng_ksocket.ko...Reading symbols from 
/boot/kernel/ng_ksocket.ko.symbols...done.

done.
Loaded symbols for /boot/kernel/ng_ksocket.ko
#0  doadump () at pcpu.h:224
224 pcpu.h: No such file or directory.
in pcpu.h
(kgdb) bt
#0  doadump () at pcpu.h:224
#1  0x80398d3e in boot (howto=260) at 
../../../kern/kern_shutdown.c:419
#2  0x80399153 in panic (fmt=0x0) at 
../../../kern/kern_shutdown.c:592
#3  0x8056f19d in trap_fatal (frame=0xff00070828c0, 
eva=Variable eva is not available.

) at ../../../amd64/amd64/trap.c:783
#4  0x8056f55f in trap_pfault (frame=0xff811c80a4f0, 
usermode=0) at ../../../amd64/amd64/trap.c:699
#5  0x8056f95f in trap (frame=0xff811c80a4f0) at 
../../../amd64/amd64/trap.c:449
#6  0x80557ba4 in calltrap () at 
../../../amd64/amd64/exception.S:224
#7  0x803f4a87 in m_copym (m=0x0, off0=2980, len=1480, wait=1) 
at ../../../kern/uipc_mbuf.c:542
#8  0x8047fc07 in ip_fragment (ip=0xff010e8d0558, 
m_frag=0xff811c80a718, mtu=Variable mtu is not available.

) at ../../../netinet/ip_output.c:819
#9  0x80480d1f in ip_output (m=0xff010e8d0500, opt=Variable 
opt is not available.

) at ../../../netinet/ip_output.c:650
#10 0x8047ca00 in ip_forward (m=0xff010e317600, 
srcrt=Variable srcrt is not available.

) at ../../../netinet/ip_input.c:1521
#11 0x8047e1cd in ip_input (m=0xff010e317600) at 

Re: FreeBSD 8.2 and MPD5 stability issues - update

2011-07-04 Thread Eugene Grosbein
04.07.2011 15:30, Adrian Minta writes:

 if I understand correctly, in order to increase the connection rate I need 
 to replace 60 with 600 and 10 with 100, like this:
 
   #define SETOVERLOAD(q) do {\
  int t = (q);\
  if (t > 600) {  \
  gOverload = 100;\
  } else if (t > 100) {   \
  gOverload = (t - 100) * 2;  \
  } else {\
  gOverload = 0;  \
  }   \
  } while (0)
 
   Is this enough, or I need to modify something else ?

It seems enough. But are you sure your L2TP clients will wait
for the overloaded daemon to complete the connection? The change 
proportionally increases the responsiveness required of mpd, and it does not 
have enough CPU horsepower to process requests in time.

Eugene Grosbein


Multiple IPv6 ISPs

2011-07-04 Thread Paul Schenkeveld
Hi,

At one of my customers we have had 2 ISPs for a long time but now we
have to support IPv6 too.

In the IPv4 world I used ipfw for policy-based routing to separate
traffic from the two public address ranges:

ipfw add 1010 allow ip from any to MY_IP_RANGES
ipfw add 1020 fwd ISP1_GW ip from ISP1_SUBNET to any
ipfw add 1030 fwd ISP2_GW ip from ISP2_SUBNET to any

When I try the same with IPv6, it appears that ipfw(8) does not support
an IPv6 destination with the fwd statement; the packet-matching part
seems to work fine.  This was documented in bin/117214 (Oct 2007)
but never solved.

Before asking the list I went looking for other options, setfib came to
mind but it appears that setfib only works on IPv4, is that correct or
am I overlooking something?

Pf is used for firewalling, and doing both filtering and policy-based
routing in pf doesn't work.

Anyway, how do other people solve this?  I need to run services on both
address ranges so flipping a default gateway when pinging the next hop
fails does not solve it for me.

Soon, having IPv6 will no longer be an option but rather a necessity.

Regards,

Paul Schenkeveld


Re: FreeBSD 8.2 and MPD5 stability issues - update

2011-07-04 Thread Adrian Minta
It seems enough. But are you sure your L2TP clients will wait
for the overloaded daemon to complete the connection? The change
proportionally increases the responsiveness required of mpd, and it does not
have enough CPU horsepower to process requests in time.

Eugene Grosbein

Actually something else is happening.

I increased the queue length in msg.c:
#define   MSG_QUEUE_LEN   65536
... and in ppp.h:
#define SETOVERLOAD(q)    do {              \
    int t = (q);                            \
    if (t > 600) {                          \
        gOverload = 100;                    \
    } else if (t > 100) {                   \
        gOverload = (t - 100) * 2;          \
    } else {                                \
        gOverload = 0;                      \
    }                                       \
} while (0)

Now the overload message is very rare, but the behaviour is the same.
Around 5500 sessions the number no longer grows, but instead begins to
decrease.

The mpd log says something like this:
#tail -f /var/log/mpd.log | grep -v \[
Jul  4 19:56:46 lns mpd: Incoming L2TP packet from 10.42.10.16 1701
Jul  4 19:56:46 lns mpd: L2TP: Incoming call #32 via connection
0x80ae96c10 received
Jul  4 19:56:46 lns mpd: Link: packet from unexisting link 6310
Jul  4 19:56:46 lns mpd: Link: packet from unexisting link 6251
Jul  4 19:56:46 lns mpd: Link: packet from unexisting link 6250
Jul  4 19:56:46 lns mpd: L2TP: Control connection 0x80b06b710 10.42.1.48
1701 - 10.42.9.210 1701 connected
Jul  4 19:56:46 lns mpd: Incoming L2TP packet from 10.42.10.4 1701
Jul  4 19:56:46 lns mpd: Incoming L2TP packet from 10.42.10.10 1701
Jul  4 19:56:46 lns mpd: L2TP: Incoming call #48 via connection
0x80b06b710 received
Jul  4 19:56:46 lns mpd: Link: packet from unexisting link 6311
Jul  4 19:56:46 lns mpd: Link: packet from unexisting link 6312
Jul  4 19:56:46 lns mpd: Link: packet from unexisting link 6252
Jul  4 19:56:46 lns mpd: L2TP: Control connection 0x80ad99110 10.42.1.23
1701 - 10.42.9.244 1701 connected
Jul  4 19:56:46 lns mpd: L2TP: Control connection 0x80ad99410 10.42.1.4
1701 - 10.42.10.16 1701 connected
Jul  4 19:56:46 lns mpd: Incoming L2TP packet from 10.42.9.234 1701
Jul  4 19:56:46 lns mpd: Incoming L2TP packet from 10.42.10.2 1701
Jul  4 19:56:47 lns mpd: L2TP: Incoming call #4 via connection 0x80ad99410
received
Jul  4 19:56:47 lns mpd: L2TP: Incoming call #23 via connection
0x80ad99110 received
Jul  4 19:56:47 lns mpd: Link: packet from unexisting link 6253
Jul  4 19:56:47 lns mpd: L2TP: Control connection 0x80ad99a10 10.42.1.7
1701 - 10.42.10.4 1701 connected
Jul  4 19:56:47 lns mpd: Incoming L2TP packet from 10.42.9.214 1701
Jul  4 19:56:47 lns mpd: Incoming L2TP packet from 10.42.9.220 1701
Jul  4 19:56:47 lns mpd: L2TP: Incoming call #7 via connection 0x80ad99a10
received
Jul  4 19:56:47 lns mpd: L2TP: Control connection 0x80ad99d10 10.42.1.7
1701 - 10.42.10.10 1701 connected
Jul  4 19:56:47 lns mpd: Link: packet from unexisting link 6254
Jul  4 19:56:47 lns mpd: Link: packet from unexisting link 6303
Jul  4 19:56:47 lns mpd: Link: packet from unexisting link 6302
Jul  4 19:56:47 lns mpd: L2TP: Control connection 0x80ab22b10 10.42.1.32
1701 - 10.42.9.234 1701 connected
Jul  4 19:56:47 lns mpd: L2TP: Control connection 0x80ab22810 10.42.1.13
1701 - 10.42.10.2 1701 connected
Jul  4 19:56:47 lns mpd: Incoming L2TP packet from 10.42.10.14 1701

A top command reveals that the server is around 50% idle:

last pid: 63542;  load averages:  4.93,  2.98, 1.40 up 0+22:32:42  19:44:23
24 processes:  2 running, 22 sleeping
CPU 0:  4.5% user,  0.0% nice,  5.6% system, 36.8% interrupt, 53.0% idle
CPU 1:  2.6% user,  0.0% nice,  7.5% system, 48.5% interrupt, 41.4% idle
CPU 2:  3.7% user,  0.0% nice,  7.9% system, 32.6% interrupt, 55.8% idle
CPU 3:  3.0% user,  0.0% nice,  7.9% system, 33.5% interrupt, 55.6% idle
CPU 4:  5.6% user,  0.0% nice, 13.9% system, 33.8% interrupt, 46.6% idle
CPU 5:  2.3% user,  0.0% nice,  7.5% system, 36.1% interrupt, 54.1% idle
CPU 6:  3.0% user,  0.0% nice,  9.8% system, 36.1% interrupt, 51.1% idle
CPU 7:  0.8% user,  0.0% nice,  2.6% system, 43.2% interrupt, 53.4% idle
Mem: 148M Active, 695M Inact, 753M Wired, 108K Cache, 417M Buf, 2342M Free
Swap: 4096M Total, 4096M Free

  PID USERNAME  THR PRI NICE   SIZERES STATE   C   TIME   WCPU COMMAND
75502 root2  760   194M   168M select  4   4:32 63.57% mpd5
 2131 root1  460  7036K  1544K select  1   4:27  5.18% syslogd
 1914 root1  440  5248K  3176K select  2   0:17  0.00% devd
73229 root1  440 16384K  8464K wait1   0:02  0.00% bash
 2434 root1  440 12144K  4156K select  2   0:01  0.00% sendmail
73222 media   1  44

Re: FreeBSD 8.2 and MPD5 stability issues - update

2011-07-04 Thread Mike Tancsa

What do you have net.graph.threads set to ?  With the load avg so high,
perhaps you are just running into processing limits with so many
connections ?  amotin would know.

---Mike


On 7/4/2011 1:16 PM, Adrian Minta wrote:
 [...]

Re: FreeBSD 8.2 and MPD5 stability issues - update

2011-07-04 Thread Adrian Minta

 What do you have net.graph.threads set to ?  With the load avg so high,
 perhaps you are just running into processing limits with so many
 connections ?  amotin would know.

   ---Mike


No, I didn't touch it.
lns# sysctl net.graph.threads
net.graph.threads: 8

How big should this be for my server (8 cores) ?


-- 
Best regards,
Adrian MintaMA3173-RIPE





Re: kern/127057: [udp] Unable to send UDP packet via IPv6 socket to IPv4 mapped address

2011-07-04 Thread bz
Synopsis: [udp] Unable to send UDP packet via IPv6 socket to IPv4 mapped address

State-Changed-From-To: open-patched
State-Changed-By: bz
State-Changed-When: Mon Jul 4 19:11:06 UTC 2011
State-Changed-Why: 
Seems I fixed it recently in r220463.


Responsible-Changed-From-To: freebsd-net-bz
Responsible-Changed-By: bz
Responsible-Changed-When: Mon Jul 4 19:11:06 UTC 2011
Responsible-Changed-Why: 
Taking it, given that it seems I fixed it, so I can track MFCs.

http://www.freebsd.org/cgi/query-pr.cgi?pr=127057


Re: FreeBSD 8.2 and MPD5 stability issues - update

2011-07-04 Thread Eugene Grosbein
On Mon, Jul 04, 2011 at 08:16:19PM +0300, Adrian Minta wrote:

 It seems enough. But are you sure your L2TP clients will wait
 for the overloaded daemon to complete the connection? The change
 proportionally increases the responsiveness required of mpd, and it does not
 have enough CPU horsepower to process requests in time.
 
 Eugene Grosbein
 
 Actually something else is happening.
 
 I increased the queue in msg.c
 #define   MSG_QUEUE_LEN   65536

You can't do this blindly, without other changes.
For example, there is MSG_QUEUE_MASK on the next line,
which must be equal to MSG_QUEUE_LEN-1 and effectively
limits usage of this queue.
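The reason the mask must track the length is that the queue index is computed with a bitwise AND, which only wraps correctly when the length is a power of two and the mask is length minus one. A small sketch (the macro names mirror mpd's msg.c, but the indexing function is a simplified, hypothetical stand-in):

```c
#include <assert.h>

/*
 * Illustration only: a ring position masked with MSG_QUEUE_MASK
 * acts as a cheap modulo, but only while MSG_QUEUE_LEN is a power
 * of two and MSG_QUEUE_MASK == MSG_QUEUE_LEN - 1.
 */
#define MSG_QUEUE_LEN  65536
#define MSG_QUEUE_MASK (MSG_QUEUE_LEN - 1)

static unsigned
queue_slot(unsigned pos)
{
    return pos & MSG_QUEUE_MASK;    /* wraps at MSG_QUEUE_LEN */
}
```

Raising MSG_QUEUE_LEN while leaving the old mask in place would keep wrapping at the old length, which is exactly how the stale mask "effectively limits usage of this queue".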

 ... and in the ppp.h:
 #define SETOVERLOAD(q) do {\
 int t = (q);\
 if (t > 600) {  \
 gOverload = 100;\
 } else if (t > 100) {   \
 gOverload = (t - 100) * 2;  \
 } else {\
 gOverload = 0;  \
 }   \
 } while (0)
 
 Now the overload message is very rare, but the behaviour is the same.
 Around 5500 sessions the number no longer grows, but instead begins to
 decrease.

You should study why existing connections break:
do the clients disconnect themselves, or does the server disconnect them?
You'll need to turn on detailed logs; read mpd's documentation.

Also, there are system-wide queues for NETGRAPH messages that can overflow,
and that's a bad thing. Check them with this command:

vmstat -z | egrep 'ITEM|NetGraph'

The FAILURES column shows how many times the NETGRAPH queues have overflowed.
One may increase their LIMIT (the second column in vmstat's output)
in /boot/loader.conf:

net.graph.maxdata=65536
net.graph.maxalloc=65536
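For a quick health check, the FAILURES column of the NetGraph rows can be totalled with a short pipeline. A sketch with made-up sample input (on a live system you would pipe `vmstat -z` in directly; the numbers below are illustrative, not from any real box):

```shell
# Sum the FAILURES column (last field) of the NetGraph rows.
# "sample" stands in for real `vmstat -z` output.
sample='ITEM                     SIZE  LIMIT   USED   FREE  REQUESTS  FAILURES
NetGraph items:           104,  65540,    14,   8512, 11679211,        0
NetGraph data items:      104,  65540,     0,    899, 18656500,        3'

failures=$(printf '%s\n' "$sample" |
    awk '/NetGraph/ { gsub(",", ""); total += $NF } END { print total }')
echo "$failures"
```

A non-zero total means the limits above are worth raising.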

Eugene Grosbein


Re: Bridging Two Tunnel Interfaces For ALTQ

2011-07-04 Thread Michael MacLeod
I merged both of your responses below, so that the thread doesn't get
fragmented.

On Sat, Jul 2, 2011 at 4:07 AM, Julian Elischer jul...@freebsd.org wrote:

 On 7/1/11 12:59 AM, Michael MacLeod wrote:

 On Fri, Jul 1, 2011 at 1:20 AM, Julian Elischer jul...@freebsd.orgwrote:

  On 6/29/11 11:28 AM, Michael MacLeod wrote:

 I use pf+ALTQ to achieve some pretty decent traffic shaping results at
 home.
 However, recently signed up to be part of an IPv6 trial with my ISP, and
 they've given me a second (dual-stacked) PPPoE login with which to test
 with. The problem is that the second login lacks my static IP or my
 routed
 /29. I can have both tunnels up simultaneously, but that becomes a pain
 to
 traffic shape since I can't have them both assigned to the same ALTQ.

 ... unless there is some way for me to turn the ng interfaces (I'm using
 mpd5) into ethernet interfaces that could be assigned to an if_bridge. I
 could easily disable IPv4 on the IPv6 tunnel, which would clean up any
 routing issues, assign both tunnels to the bridge, and put the ALTQ on
 the
 bridge. It just might have the effect I'm looking for. Bonus points if
 the
 solution can be extended to allow it to work with a gif tunnel as well,
 so
 that users of 6in4 tunnels could use it (my ISPs IPv6 beta won't let me
 do
 rDNS delegation, so I might want to try a tunnel from he.net instead).

 I spent some time this morning trying to make netgraph do this with the
 two
 ng interfaces, but didn't have any luck. Google didn't turn up anyone
 trying
 to do anything similar that I could find; closest I got was this:
 http://lists.freebsd.org/pipermail/freebsd-net/2004-November/005598.html

 This is all assuming that the best way to use ALTQ on multiple outbound
 connections is with a bridge. If there is another or more elegant
 solution,
 I'd love to hear it.


  rather than trying to shoehorn ng into if_bridge, why not use the
 netgraph bridge utility,
 or maybe one of the many other netgraph nodes that can split traffic?
 For example, the ng_bpf filter can filter traffic in an almost arbitrary
 manner that you program using
 the bpf filter language.


  Julian, thanks for responding. I'm not particularly concerned about how I
 accomplish my goal, so long as I can accomplish it. I was thinking about
 using if_bridge or ng_bridge because I have past experience with software
 bridges in BSD and linux. Unfortunately, ng_bridge requires a node that has
 an ether hook. I spent a bit of time looking at the mpd5 documentation, and
 there's actually a config option to have mpd generate an extra tee node
 between the ppp and the iface nodes. These nodes are connected together
 using inet hooks. If I could find a netgraph node that can take inet in one
 side and ether on the other, I believe I'd be set.

 I think you need to draw a diagram..


Alright, here's how the mpd daemon puts together a PPPoE interface by
default:

iface node (ng0)--ppp node--tee node--pppoe node--ether node(em1)

Here's how it looks if I enable the option to add another tee:

iface node (ng0)--tee node--ppp node--tee node--pppoe node--ether
node(em1)

The iface and ppp nodes are connected with inet hooks, which I believe means
that they are straight IP packets, with no PPP or other layer 2 framing
remaining (though I could be totally wrong about that).


 The nice thing (near as I can tell) about using ethernet based nodes would
 be that pretty much everything can talk to an ethernet interface (tcpdump,
 etc) and that ethernet should be fairly easy to fake; just assign a fake MAC
 to the ether nodes (which is what the ng_ether node does, pretty much) and
 the bridge will take care of making sure traffic for tunnel 0 doesn't go to
 tunnel 1, etc.

  I haven't read up very much about ng_bpf yet, but it seems like a pretty
 heavy tool for the job, and wouldn't the data have to enter userspace for
 parsing by the bpf script?

 no you download the filter program into the kernel module to program it.


Ah, okay.

  Also, I've never written anything in bpf. It's not a huge hurdle, I hope,
 but it's certainly more involved than a six line ngctl incantation that
 turns my iface nodes into eiface nodes suitable for bridging.

 read the ng_bpf man page and the tcpdump man page.
 Having said that you may find many other ways to split traffic.

  actually you can do that in one ngctl command.
  I think you want the ng_eiface module, but I'm not sure... ng_eiface presents
  an interface in ifconfig and
  produces ethernet frames which can be fed into the ng_bridge node, the
  output of which can be fed into a real ethernet bottom end.


I already tried linking an eiface node to the tee interface I described
above, between the iface and ppp nodes. I ran some traffic through the
interface but didn't see any of it appear on the ngeth0 interface (I was
watching it with tcpdump). According to the man page, ng_eiface nodes should
be connected to the Ethernet downstream from another node, like the ng_vlan

Re: FreeBSD 8.2 and MPD5 stability issues - update

2011-07-04 Thread Adrian Minta
 You should study why existing connections break:
 do the clients disconnect themselves, or does the server disconnect them?
 You'll need to turn on detailed logs; read mpd's documentation.

 Also, there are system-wide queues for NETGRAPH messages that can overflow,
 and that's a bad thing. Check them with this command:

 vmstat -z | egrep 'ITEM|NetGraph'

 The FAILURES column shows how many times the NETGRAPH queues have overflowed.
 One may increase their LIMIT (the second column in vmstat's output)
 in /boot/loader.conf:

 net.graph.maxdata=65536
 net.graph.maxalloc=65536

 Eugene Grosbein



Back to default mpd for now.

--- /boot/loader.conf ---
kern.maxusers=1024
net.graph.maxdata=65536
net.graph.maxalloc=65536
kern.ipc.maxpipekva=32000
###8 cores maxthreads=8
net.isr.maxthreads=8
net.isr.bindthreads=1
net.isr.defaultqlimit=8192
net.isr.maxqlimit=10240
console=comconsole,vidconsole
hw.igb.rxd=4096 # IGB Tuning
hw.igb.txd=4096 # IGB Tuning
if_lagg_load=YES


--- /etc/sysctl.conf ---
kern.maxfiles=25
net.inet.ip.intr_queue_maxlen=10240
kern.ipc.nmbclusters=128
kern.ipc.somaxconn=32768
kern.ipc.maxsockbuf=12800
kern.ipc.maxsockets=12800
net.local.stream.recvspace=65536
net.local.stream.sendspace=65536
net.local.dgram.recvspace=8000
net.inet.udp.recvspace=262144
net.graph.maxdgram=1024
net.graph.recvspace=1024
kern.random.sys.harvest.ethernet=0
net.isr.direct=1
net.isr.direct_force=0


--- When failures start (1 sec interval): ---
sessions:5697
ITEM                  SIZE  LIMIT  USED  FREE  REQUESTS  FAILURES
NetGraph items:        104, 65540,   14, 8512, 11679211,        0
NetGraph data items:   104, 65540,    0,  899, 18656500,        0

sessions:5695
ITEM                  SIZE  LIMIT  USED  FREE  REQUESTS  FAILURES
NetGraph items:        104, 65540,   14, 8512, 11698144,        0
NetGraph data items:   104, 65540,    0,  899, 18690476,        0

sessions:5696
ITEM                  SIZE  LIMIT  USED  FREE  REQUESTS  FAILURES
NetGraph items:        104, 65540,   14, 8512, 11717921,        0
NetGraph data items:   104, 65540,    0,  899, 18726539,        0

sessions:5697
ITEM                  SIZE  LIMIT  USED  FREE  REQUESTS  FAILURES
NetGraph items:        104, 65540,    8, 8518, 11736938,        0
NetGraph data items:   104, 65540,    0,  899, 18760816,        0

sessions:5697
ITEM                  SIZE  LIMIT  USED  FREE  REQUESTS  FAILURES
NetGraph items:        104, 65540,   14, 8512, 11756221,        0
NetGraph data items:   104, 65540,    0,  899, 18796218,        0

sessions:5697
ITEM                  SIZE  LIMIT  USED  FREE  REQUESTS  FAILURES
NetGraph items:        104, 65540,   11, 8515, 11775151,        0
NetGraph data items:   104, 65540,    0,  899, 18830631,        0

sessions:5696
ITEM                  SIZE  LIMIT  USED  FREE  REQUESTS  FAILURES
NetGraph items:        104, 65540,    9, 8517, 11794388,        0
NetGraph data items:   104, 65540,    0,  899, 18865673,        0


--- mpd.log (via log -ccp -chat -lcp) ---
Jul  4 23:23:37 lns mpd: L2TP: Control connection 0x80a2aaf10 terminated:
0 (no more sessions exist in this tunnel)
Jul  4 23:23:37 lns mpd: L2TP: Control connection 0x80a2a8810 terminated:
0 (no more sessions exist in this tunnel)
Jul  4 23:23:37 lns mpd: [L19-6291] L2TP: Call #19 connected
Jul  4 23:23:37 lns mpd: [L19-6291] Link: UP event
Jul  4 23:23:37 lns mpd: [L19-6291] LCP: Up event
Jul  4 23:23:37 lns mpd: [L19-6291] LCP: state change Starting --> Req-Sent
Jul  4 23:23:37 lns mpd: [L19-6291] LCP: SendConfigReq #1
Jul  4 23:23:37 lns mpd: [L50-6292] L2TP: Call #50 connected
Jul  4 23:23:37 lns mpd: [L50-6292] Link: UP event
Jul  4 23:23:37 lns mpd: [L50-6292] LCP: Up event
Jul  4 23:23:37 lns mpd: [L50-6292] LCP: state change Starting --> Req-Sent
Jul  4 23:23:37 lns mpd: [L50-6292] LCP: SendConfigReq #1
Jul  4 23:23:37 lns mpd: [L7-6161] LCP: rec'd Configure Reject #1 (Req-Sent)
Jul  4 23:23:37 lns mpd: [L7-6161]   Wrong id#, expecting 4
Jul  4 23:23:37 lns mpd: [L50-5752] LCP: rec'd Configure Request #1
(Ack-Sent)
Jul  4 23:23:37 lns mpd: [L50-5752] LCP: SendConfigAck #1
Jul  4 23:23:37 lns mpd: [L6-6107] LCP: rec'd Configure Request #1 (Req-Sent)
Jul  4 23:23:37 lns mpd: [L6-6107] LCP: SendConfigAck #1
Jul  4 23:23:37 lns mpd: [L6-6107] LCP: state change Req-Sent --> Ack-Sent
Jul  4 23:23:37 lns mpd: [L25-6047] LCP: rec'd Configure Request #1
(Ack-Sent)
Jul  4 23:23:37 lns mpd: [L25-6047] LCP: SendConfigAck #1
Jul  4 23:23:37 lns mpd: [L1-5754] LCP: rec'd Configure Request #1 (Ack-Sent)
Jul  4 23:23:37 lns mpd: [L1-5754] LCP: SendConfigAck #1
Jul  4 23:23:37 lns mpd: [L26-6108] LCP: rec'd Configure Request #1
(Req-Sent)
Jul  4 

Re: Multiple IPv6 ISPs

2011-07-04 Thread Julian Elischer

On 7/4/11 5:24 AM, Paul Schenkeveld wrote:

Hi,

At one of my customers we have had two ISPs for a long time, but now we
have to support IPv6 too.

In the IPv4 world I used ipfw for policy-based routing to separate
traffic from the two public address ranges:

 ipfw add 1010 allow ip from any to MY_IP_RANGES
 ipfw add 1020 fwd ISP1_GW ip from ISP1_SUBNET to any
 ipfw add 1030 fwd ISP2_GW ip from ISP2_SUBNET to any

When I try the same with IPv6, it appears that ipfw(8) does not support
an IPv6 destination with the fwd statement; the packet-matching part
seems to work fine.  This is documented in bin/117214 (Oct 2007)
but was never resolved.
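
For concreteness (this is an illustration, not from the original mail), the
natural IPv6 analog of the IPv4 rules above would look like the following,
with documentation-prefix placeholders standing in for the real ranges; the
two fwd rules are exactly what bin/117214 reports as unsupported, so they
fail to load:

```shell
# Placeholders: 2001:db8:a::/48 stands for MY_IP6_RANGES,
# 2001:db8:1::1 and 2001:db8:2::1 for the two ISP gateways.
ipfw add 2010 allow ip6 from any to 2001:db8:a::/48
ipfw add 2020 fwd 2001:db8:1::1 ip6 from 2001:db8:a:1::/64 to any  # rejected, per bin/117214
ipfw add 2030 fwd 2001:db8:2::1 ip6 from 2001:db8:a:2::/64 to any  # rejected, per bin/117214
```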

Before asking the list I went looking for other options.  setfib came to
mind, but it appears that setfib only works for IPv4; is that correct, or
am I overlooking something?

No, setfib for IPv6 is not complete.
I know that work is underway to fix that.

It may be possible to use netgraph and vnet jails to simulate it
somehow, as vnet supports IPv6.

pf is used for firewalling, and doing both filtering and policy-based
routing in pf doesn't work.

Anyway, how do other people solve this?  I need to run services on both
address ranges, so flipping the default gateway when pinging the next hop
fails does not solve it for me.

Soon, having IPv6 will no longer be optional but rather a necessity.

Regards,

Paul Schenkeveld
___
freebsd-net@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-net
To unsubscribe, send any mail to freebsd-net-unsubscr...@freebsd.org





Re: kern/158201: [re] re0 driver quit working on Acer AO751h between 8.0 and 8.2 (also 9.0) [regression]

2011-07-04 Thread yongari
Synopsis: [re] re0 driver quit working on Acer AO751h between 8.0 and 8.2 (also 
9.0) [regression]

Responsible-Changed-From-To: freebsd-net->yongari
Responsible-Changed-By: yongari
Responsible-Changed-When: Tue Jul 5 00:35:48 UTC 2011
Responsible-Changed-Why: 
Grab.

http://www.freebsd.org/cgi/query-pr.cgi?pr=158201


bce packet loss

2011-07-04 Thread Charles Sprickman

Hello,

We're running a few 8.1-R servers with Broadcom bce interfaces (Dell R510), 
and I'm seeing occasional packet loss on them (enough that it trips nagios 
now and then).  Cabling seems fine, as neither the switch nor the sysctl 
info for the device shows any errors/collisions/etc.; however, there is one 
odd counter: dev.bce.1.stat_IfHCInBadOctets: 539369.  See [1] below 
for full sysctl output.  The switch shows no errors apart from 683868 
dropped packets.


pciconf output is also below. [2]

By default, the switch had flow control set to on.  I also let it run 
with auto.  In both cases, the drops continued to increment.  I'm now 
running with flow control off to see if that changes anything.


I do see some correlation between CPU usage and drops: I have CPU usage 
graphed in nagios, and cacti is graphing the drops on the Dell switch. 
There's no sign of running out of mbufs or similar.


So given that limited info, is there anything I can look at to track this 
down?  Does anything stand out in the stats sysctl exposes?  Two things 
stand out for me: the number of changes in bce regarding flow control 
that are not in 8.1, and the correlation between CPU load and the drops.


What other information can I provide?

Thanks,

Charles

[1] [root@h23 /home/spork]# sysctl -a |grep bce.1
dev.bce.1.%desc: Broadcom NetXtreme II BCM5716 1000Base-T (C0)
dev.bce.1.%driver: bce
dev.bce.1.%location: slot=0 function=1
dev.bce.1.%pnpinfo: vendor=0x14e4 device=0x163b subvendor=0x1028 
subdevice=0x02f1 class=0x02

dev.bce.1.%parent: pci1
dev.bce.1.l2fhdr_error_count: 0
dev.bce.1.mbuf_alloc_failed_count: 282
dev.bce.1.fragmented_mbuf_count: 2748
dev.bce.1.dma_map_addr_rx_failed_count: 0
dev.bce.1.dma_map_addr_tx_failed_count: 5
dev.bce.1.unexpected_attention_count: 0
dev.bce.1.stat_IfHcInOctets: 62708651108
dev.bce.1.stat_IfHCInBadOctets: 539369
dev.bce.1.stat_IfHCOutOctets: 434264587173
dev.bce.1.stat_IfHCOutBadOctets: 0
dev.bce.1.stat_IfHCInUcastPkts: 533441918
dev.bce.1.stat_IfHCInMulticastPkts: 3108746
dev.bce.1.stat_IfHCInBroadcastPkts: 1314905
dev.bce.1.stat_IfHCOutUcastPkts: 640961970
dev.bce.1.stat_IfHCOutMulticastPkts: 26
dev.bce.1.stat_IfHCOutBroadcastPkts: 8909
dev.bce.1.stat_emac_tx_stat_dot3statsinternalmactransmiterrors: 0
dev.bce.1.stat_Dot3StatsCarrierSenseErrors: 0
dev.bce.1.stat_Dot3StatsFCSErrors: 0
dev.bce.1.stat_Dot3StatsAlignmentErrors: 0
dev.bce.1.stat_Dot3StatsSingleCollisionFrames: 0
dev.bce.1.stat_Dot3StatsMultipleCollisionFrames: 0
dev.bce.1.stat_Dot3StatsDeferredTransmissions: 0
dev.bce.1.stat_Dot3StatsExcessiveCollisions: 0
dev.bce.1.stat_Dot3StatsLateCollisions: 0
dev.bce.1.stat_EtherStatsCollisions: 0
dev.bce.1.stat_EtherStatsFragments: 0
dev.bce.1.stat_EtherStatsJabbers: 0
dev.bce.1.stat_EtherStatsUndersizePkts: 0
dev.bce.1.stat_EtherStatsOversizePkts: 0
dev.bce.1.stat_EtherStatsPktsRx64Octets: 34048797
dev.bce.1.stat_EtherStatsPktsRx65Octetsto127Octets: 431844366
dev.bce.1.stat_EtherStatsPktsRx128Octetsto255Octets: 25946173
dev.bce.1.stat_EtherStatsPktsRx256Octetsto511Octets: 39936369
dev.bce.1.stat_EtherStatsPktsRx512Octetsto1023Octets: 2296565
dev.bce.1.stat_EtherStatsPktsRx1024Octetsto1522Octets: 3931392
dev.bce.1.stat_EtherStatsPktsRx1523Octetsto9022Octets: 0
dev.bce.1.stat_EtherStatsPktsTx64Octets: 60122571
dev.bce.1.stat_EtherStatsPktsTx65Octetsto127Octets: 221041349
dev.bce.1.stat_EtherStatsPktsTx128Octetsto255Octets: 40177071
dev.bce.1.stat_EtherStatsPktsTx256Octetsto511Octets: 24099944
dev.bce.1.stat_EtherStatsPktsTx512Octetsto1023Octets: 44493532
dev.bce.1.stat_EtherStatsPktsTx1024Octetsto1522Octets: 251036438
dev.bce.1.stat_EtherStatsPktsTx1523Octetsto9022Octets: 0
dev.bce.1.stat_XonPauseFramesReceived: 61778
dev.bce.1.stat_XoffPauseFramesReceived: 76315
dev.bce.1.stat_OutXonSent: 0
dev.bce.1.stat_OutXoffSent: 0
dev.bce.1.stat_FlowControlDone: 0
dev.bce.1.stat_MacControlFramesReceived: 0
dev.bce.1.stat_XoffStateEntered: 0
dev.bce.1.stat_IfInFramesL2FilterDiscards: 145832
dev.bce.1.stat_IfInRuleCheckerDiscards: 0
dev.bce.1.stat_IfInFTQDiscards: 0
dev.bce.1.stat_IfInMBUFDiscards: 0
dev.bce.1.stat_IfInRuleCheckerP4Hit: 4448215
dev.bce.1.stat_CatchupInRuleCheckerDiscards: 0
dev.bce.1.stat_CatchupInFTQDiscards: 0
dev.bce.1.stat_CatchupInMBUFDiscards: 0
dev.bce.1.stat_CatchupInRuleCheckerP4Hit: 0
dev.bce.1.com_no_buffers: 0

[2] pciconf -lvb
bce1@pci0:1:0:1:	class=0x02 card=0x02f11028 chip=0x163b14e4 
rev=0x20 hdr=0x00

vendor = 'Broadcom Corporation'
class  = network
subclass   = ethernet
bar   [10] = type Memory, range 64, base 0xdc00, size 33554432, 
enabled



carp for IPv6?

2011-07-04 Thread Doug Barton
If I try to set up a carp interface for IPv6 on a recent 8.2-STABLE I 
get an error using either /64 or /128 as the mask:


ifconfig carp2 vhid 4 advskew 0 pass mycleverpass 2001:a:b:c::2/64

ifconfig 2001:a:b:c::2/64: bad value (width too large)

There are no examples for IPv6 in the man page or the handbook, and I 
can't find any online. I'm interested in configuration for the command 
line, and for rc.conf.



Thanks,

Doug

--

Nothin' ever doesn't change, but nothin' changes much.
-- OK Go

Breadth of IT experience, and depth of knowledge in the DNS.
Yours for the right price.  :)  http://SupersetSolutions.com/



Re: carp for IPv6?

2011-07-04 Thread Michael Sinatra

On 07/04/11 19:59, Doug Barton wrote:

If I try to set up a carp interface for IPv6 on a recent 8.2-STABLE I
get an error using either /64 or /128 as the mask:

ifconfig carp2 vhid 4 advskew 0 pass mycleverpass 2001:a:b:c::2/64

ifconfig 2001:a:b:c::2/64: bad value (width too large)

There are no examples for IPv6 in the man page, or the handbook; and I
can't find any on line. I'm interested in configuration for the command
line, and rc.conf.


ifconfig_carp0="vhid 80 advskew 120 pass yomama 128.32.206.100/32"
ipv6_ifconfig_carp0="2607:f140::::80/128"

Works on 8.2-STABLE (June 7, 2011).

Note that I cannot get carp to work properly without configuring an IPv4 
address.


michael


Re: carp for IPv6?

2011-07-04 Thread Doug Barton

On 07/04/2011 20:26, Michael Sinatra wrote:

On 07/04/11 19:59, Doug Barton wrote:

If I try to set up a carp interface for IPv6 on a recent 8.2-STABLE I
get an error using either /64 or /128 as the mask:

ifconfig carp2 vhid 4 advskew 0 pass mycleverpass 2001:a:b:c::2/64

ifconfig 2001:a:b:c::2/64: bad value (width too large)

There are no examples for IPv6 in the man page, or the handbook; and I
can't find any on line. I'm interested in configuration for the command
line, and rc.conf.


ifconfig_carp0=vhid 80 advskew 120 pass yomama 128.32.206.100/32
ipv6_ifconfig_carp0=2607:f140::::80/128

Works on 8.2-STABLE (June 7, 2011).

Note that I cannot get carp to work properly without configuring an IPv4
address.


Well that sucks. :-/

In the example you gave, how would you add the IPv6 address on the 
command line to an existing carp interface? 'ifconfig carp0 inet6 
2607:f140::::80/128 alias' ??



Thanks for the response,

Doug

--

Nothin' ever doesn't change, but nothin' changes much.
-- OK Go

Breadth of IT experience, and depth of knowledge in the DNS.
Yours for the right price.  :)  http://SupersetSolutions.com/



Re: carp for IPv6?

2011-07-04 Thread Doug Barton

On 07/04/2011 21:20, Doug Barton wrote:

On 07/04/2011 20:26, Michael Sinatra wrote:

On 07/04/11 19:59, Doug Barton wrote:

If I try to set up a carp interface for IPv6 on a recent 8.2-STABLE I
get an error using either /64 or /128 as the mask:

ifconfig carp2 vhid 4 advskew 0 pass mycleverpass 2001:a:b:c::2/64

ifconfig 2001:a:b:c::2/64: bad value (width too large)

There are no examples for IPv6 in the man page, or the handbook; and I
can't find any on line. I'm interested in configuration for the command
line, and rc.conf.


ifconfig_carp0=vhid 80 advskew 120 pass yomama 128.32.206.100/32
ipv6_ifconfig_carp0=2607:f140::::80/128

Works on 8.2-STABLE (June 7, 2011).

Note that I cannot get carp to work properly without configuring an IPv4
address.


I should point out that I was able to do the following:

ifconfig carp2 *inet6* vhid 4 

as above, and it worked in the sense that it configured the carp 
interface, but subsequently my IPv6 network went straight into the 
crapper. Hence, the statement that it won't work without an IPv4 address 
seems to be correct. This seems pretty sub-optimal given the world we 
currently live in ...



Doug

--

Nothin' ever doesn't change, but nothin' changes much.
-- OK Go

Breadth of IT experience, and depth of knowledge in the DNS.
Yours for the right price.  :)  http://SupersetSolutions.com/



Re: nfe taskq kernel panic

2011-07-04 Thread Arnaud Lacombe
Hi,

On Wed, Jun 8, 2011 at 1:24 AM, Arnaud Lacombe lacom...@gmail.com wrote:
 Hi,

 [sorry for the delay]

 On Thu, May 5, 2011 at 2:22 PM, Arnaud Lacombe lacom...@gmail.com wrote:
 Hi,

 On Thu, May 5, 2011 at 1:37 PM, Emil Muratov g...@hotplug.ru wrote:


 Hi all.

 I have a small home router/NAS running the nvidia ION platform with an onboard nfe
 LAN adapter.
 About a month ago I changed ISP and set up a pppoe client with mpd5.5. Since
 that time my router
 has issued a kernel panic once or twice a day with Fatal trap 12: page fault
 while in kernel mode and (nfe0 taskq) as the current process.
 Updating to the latest stable doesn't help. I don't know what to do next;
 any help would be much appreciated. Below are the kgdb backtrace, dmesg output,
 and kernel config file; if anything is missing just let me know.

 Your error looks like a nice use-after-free. Could you 'disassemble
 0x8037d7bb' in gdb, and find the matching faulty dereference ?

 For the record, this crash happen very early in LibAliasIn. The
 disassembly gives the following result:

 Dump of assembler code for function LibAliasIn:
 0x8037d78f LibAliasIn+0:      push   %rbp
 0x8037d790 LibAliasIn+1:      mov    %rsp,%rbp
 0x8037d793 LibAliasIn+4:      push   %r15
 0x8037d795 LibAliasIn+6:      push   %r14
 0x8037d797 LibAliasIn+8:      push   %r13
 0x8037d799 LibAliasIn+10:     push   %r12
 0x8037d79b LibAliasIn+12:     push   %rbx
 0x8037d79c LibAliasIn+13:     sub    $0x8,%rsp
 0x8037d7a0 LibAliasIn+17:     mov    %rdi,%rbx
 0x8037d7a3 LibAliasIn+20:     mov    %rsi,%r15
 0x8037d7a6 LibAliasIn+23:     mov    %edx,%r14d
 0x8037d7a9 LibAliasIn+26:     mov    %gs:0x0,%r12
 0x8037d7b2 LibAliasIn+35:     mov    $0x4,%r13d
 0x8037d7b8 LibAliasIn+41:     mov    %r13,%rax
 0x8037d7bb LibAliasIn+44:     lock cmpxchg %r12,0xfac8(%rdi)
 ^^^ crash here
 0x8037d7c4 LibAliasIn+53:     sete   %al
 0x8037d7c7 LibAliasIn+56:     test   %al,%al
 0x8037d7c9 LibAliasIn+58:     je     0x8037d813
 LibAliasIn+132

 As LibAliasIn is _very_ trivial:

 int
 LibAliasIn(struct libalias *la, char *ptr, int maxpacketsize)
 {
        int res;

        LIBALIAS_LOCK(la);
        res = LibAliasInLocked(la, ptr, maxpacketsize);
        LIBALIAS_UNLOCK(la);
        return (res);
 }

 the crash certainly happens because the reference to the libalias instance became invalid.

 Now the reason why it happens remains to be found. That part of
 the code is pretty obscure to me, so I'll let piso@ or luigi@ handle
 this issue for now.

As nobody (not even the authors of the relevant code...) seems to care
about this issue, I guess the best option is to submit a new PR and,
hopefully, in a generation or two it will get resolved.

 - Arnaud

  - Arnaud

 I'd tend not to trust code relying on a big hack, as per the preamble
 of m_megapullup():

 /*
  * m_megapullup() - this function is a big hack.
  * Thankfully, it's only used in ng_nat and ipfw+nat.
  *...

 which looks like a re-invention of m_copydata()...

  - Arnaud

 Thanx.



 =
 epia.home.lan dumped core - see /crash/vmcore.15

 Thu May  5 18:29:58 MSD 2011

 FreeBSD epia.home.lan 8.2-STABLE FreeBSD 8.2-STABLE #1: Tue May  3 22:11:56
 MSD 2011     r...@epia.home.lan:/usr/obj/usr/src/sys/ION4debug  amd64

 panic: page fault

 GNU gdb 6.1.1 [FreeBSD]
 Copyright 2004 Free Software Foundation, Inc.
 GDB is free software, covered by the GNU General Public License, and you are
 welcome to change it and/or distribute copies of it under certain
 conditions.
 Type show copying to see the conditions.
 There is absolutely no warranty for GDB.  Type show warranty for details.
 This GDB was configured as amd64-marcel-freebsd...

 Unread portion of the kernel message buffer:

 Fatal trap 12: page fault while in kernel mode
 cpuid = 0; apic id = 00
 fault virtual address   = 0xff800ff02ac8
 fault code              = supervisor write data, page not present
 instruction pointer     = 0x20:0x8037d7bb
 stack pointer           = 0x28:0xff8fde20
 frame pointer           = 0x28:0xff8fde60
 code segment            = base 0x0, limit 0xf, type 0x1b
                        = DPL 0, pres 1, long 1, def32 0, gran 1
 processor eflags        = interrupt enabled, resume, IOPL = 0
 current process         = 0 (nfe0 taskq)
 trap number             = 12
 panic: page fault
 cpuid = 0
 KDB: stack backtrace:
 #0 0x802a97a3 at kdb_backtrace+0x5e
 #1 0x8027aa98 at panic+0x182
 #2 0x804466d0 at trap_fatal+0x292
 #3 0x80446a85 at trap_pfault+0x286
 #4 0x80446f2f at trap+0x3cb
 #5 0x8042ff54 at calltrap+0x8
 #6 0x8035ceb4 at ipfw_nat+0x20a
 #7 0x803547e3 at ipfw_chk+0xbaf
 #8 0x8035977c at ipfw_check_hook+0xf9
 #9 0x8032a221 at pfil_run_hooks+0x9c
 #10 0x8035fe84 at ip_input+0x2d0
 #11 0x8032947f at 

Re: nfe taskq kernel panic

2011-07-04 Thread Arnaud Lacombe
Hi,

[Moving thread to -current, added a...@freebsd.org to the Cc: list as he
changed that code recently.]

On Wed, Jun 8, 2011 at 1:25 AM, Arnaud Lacombe lacom...@gmail.com wrote:
 Hi,

 On Thu, May 5, 2011 at 2:49 PM, Arnaud Lacombe lacom...@gmail.com wrote:
 Hi,

 On Thu, May 5, 2011 at 2:22 PM, Arnaud Lacombe lacom...@gmail.com wrote:
 Hi,

 On Thu, May 5, 2011 at 1:37 PM, Emil Muratov g...@hotplug.ru wrote:


 Hi all.

 I have a small home router/NAS running the nvidia ION platform with an onboard nfe
 LAN adapter.
 About a month ago I changed ISP and set up a pppoe client with mpd5.5. Since
 that time my router
 has issued a kernel panic once or twice a day with Fatal trap 12: page fault
 while in kernel mode and (nfe0 taskq) as the current process.
 Updating to the latest stable doesn't help. I don't know what to do next;
 any help would be much appreciated. Below are the kgdb backtrace, dmesg output,
 and kernel config file; if anything is missing just let me know.

 Your error looks like a nice use-after-free. Could you 'disassemble
 0x8037d7bb' in gdb, and find the matching faulty dereference ?
 I'd tend not to trust code relying on a big hack, as per the preamble
 of m_megapullup():

 There is a stale reference to the mbuf that is passed to, and freed in,
 m_megapullup(); could you test the following patch?

 diff --git a/sys/netinet/ipfw/ip_fw_nat.c b/sys/netinet/ipfw/ip_fw_nat.c
 index f8c3e63..80c13dc 100644
 --- a/sys/netinet/ipfw/ip_fw_nat.c
 +++ b/sys/netinet/ipfw/ip_fw_nat.c
 @@ -263,7 +263,7 @@ ipfw_nat(struct ip_fw_args *args, struct cfg_nat
 *t, struct mbuf *m)
                retval = LibAliasOut(t->lib, c,
                        mcl->m_len + M_TRAILINGSPACE(mcl));
        if (retval == PKT_ALIAS_RESPOND) {
 -               m->m_flags |= M_SKIP_FIREWALL;
 +               mcl->m_flags |= M_SKIP_FIREWALL;
                retval = PKT_ALIAS_OK;
        }
        if (retval != PKT_ALIAS_OK &&

 This was introduced in r188294 by piso@ (added to the CC: list).

 piso@, could you please _fix_ that code ?

Can someone fix this obvious use-after-free, please? The mbuf passed
to m_megapullup() cannot be reused, as it might end up being freed.
This conditional is the sole one still using `m' after the
call to m_megapullup().

diff --git a/sys/netinet/ipfw/ip_fw_nat.c b/sys/netinet/ipfw/ip_fw_nat.c
index 1679a97..dbeb254 100644
--- a/sys/netinet/ipfw/ip_fw_nat.c
+++ b/sys/netinet/ipfw/ip_fw_nat.c
@@ -315,7 +315,7 @@ ipfw_nat(struct ip_fw_args *args, struct cfg_nat
*t, struct mbuf *m)
}

if (retval == PKT_ALIAS_RESPOND)
-   m->m_flags |= M_SKIP_FIREWALL;
+   mcl->m_flags |= M_SKIP_FIREWALL;
mcl->m_pkthdr.len = mcl->m_len = ntohs(ip->ip_len);

/*

Thanks,
 - Arnaud

 Thanks,
  - Arnaud


  - Arnaud


 /*
  * m_megapullup() - this function is a big hack.
  * Thankfully, it's only used in ng_nat and ipfw+nat.
  *...

 which looks like a re-invention of m_copydata()...

  - Arnaud

 Thanx.



 =
 epia.home.lan dumped core - see /crash/vmcore.15

 Thu May  5 18:29:58 MSD 2011

 FreeBSD epia.home.lan 8.2-STABLE FreeBSD 8.2-STABLE #1: Tue May  3 22:11:56
 MSD 2011     r...@epia.home.lan:/usr/obj/usr/src/sys/ION4debug  amd64

 panic: page fault

 GNU gdb 6.1.1 [FreeBSD]
 Copyright 2004 Free Software Foundation, Inc.
 GDB is free software, covered by the GNU General Public License, and you 
 are
 welcome to change it and/or distribute copies of it under certain
 conditions.
 Type show copying to see the conditions.
 There is absolutely no warranty for GDB.  Type show warranty for details.
 This GDB was configured as amd64-marcel-freebsd...

 Unread portion of the kernel message buffer:

 Fatal trap 12: page fault while in kernel mode
 cpuid = 0; apic id = 00
 fault virtual address   = 0xff800ff02ac8
 fault code              = supervisor write data, page not present
 instruction pointer     = 0x20:0x8037d7bb
 stack pointer           = 0x28:0xff8fde20
 frame pointer           = 0x28:0xff8fde60
 code segment            = base 0x0, limit 0xf, type 0x1b
                        = DPL 0, pres 1, long 1, def32 0, gran 1
 processor eflags        = interrupt enabled, resume, IOPL = 0
 current process         = 0 (nfe0 taskq)
 trap number             = 12
 panic: page fault
 cpuid = 0
 KDB: stack backtrace:
 #0 0x802a97a3 at kdb_backtrace+0x5e
 #1 0x8027aa98 at panic+0x182
 #2 0x804466d0 at trap_fatal+0x292
 #3 0x80446a85 at trap_pfault+0x286
 #4 0x80446f2f at trap+0x3cb
 #5 0x8042ff54 at calltrap+0x8
 #6 0x8035ceb4 at ipfw_nat+0x20a
 #7 0x803547e3 at ipfw_chk+0xbaf
 #8 0x8035977c at ipfw_check_hook+0xf9
 #9 0x8032a221 at pfil_run_hooks+0x9c
 #10 0x8035fe84 at ip_input+0x2d0
 #11 0x8032947f at netisr_dispatch_src+0x71
 #12 0x80c22cab at ng_iface_rcvdata+0xdc
 #13 0x80c18964 at ng_apply_item+0x20a
 #14 0x80c17afd at ng_snd_item+0x2a1
 

Re: carp for IPv6?

2011-07-04 Thread Michael Sinatra

On 07/04/11 21:29, Doug Barton wrote:

On 07/04/2011 21:20, Doug Barton wrote:

On 07/04/2011 20:26, Michael Sinatra wrote:

On 07/04/11 19:59, Doug Barton wrote:

If I try to set up a carp interface for IPv6 on a recent 8.2-STABLE I
get an error using either /64 or /128 as the mask:

ifconfig carp2 vhid 4 advskew 0 pass mycleverpass 2001:a:b:c::2/64

ifconfig 2001:a:b:c::2/64: bad value (width too large)

There are no examples for IPv6 in the man page, or the handbook; and I
can't find any on line. I'm interested in configuration for the command
line, and rc.conf.


ifconfig_carp0=vhid 80 advskew 120 pass yomama 128.32.206.100/32
ipv6_ifconfig_carp0=2607:f140::::80/128

Works on 8.2-STABLE (June 7, 2011).

Note that I cannot get carp to work properly without configuring an IPv4
address.


I should point out that I was able to do the following:

ifconfig carp2 *inet6* vhid 4 

as above, and it worked in the sense that it configured the carp
interface, but subsequently my IPv6 network went straight into the
crapper. Hence, the statement that it won't work without an IPv4 address
seems to be correct. This seems pretty sub-optimal given the world we
currently live in ...


Sorry, I was watching fireworks or I would have tipped you off on that 
one earlier.  I was setting that config up as part of a talk I gave on 
building a redundant IPv6 web reverse-proxy for World IPv6 Day; the talk 
has some of the config snippets I just gave you.  It was at the 
Internet2/ESCC Joint Techs meeting:


http://www.internet2.edu/presentations/jt2011winter/20110202-sinatra-world-IPv6-day-LT.pdf

The video for the talk can be seen here:

http://events.internet2.edu/2011/jt-clemson/agenda.cfm?go=sessionid=10001572event=1150

Anyway, I had some trouble getting carp to work correctly between 8.x 
hosts and 7.x hosts (both sides claimed to be master), in addition to 
the IPv6 issue.  I had meant to do some more troubleshooting after I 
returned from the talk, but I was in the process of changing jobs and a 
couple of the test servers at UCB are currently down and I need to get 
them back up.  In other words, I haven't had the chance to troubleshoot 
these issues further, but I do know that you definitely need to 
configure IPv4 on the interface.


michael