[Kernel-packages] [Bug 1065159] Re: ipv6 routing memory leak

2013-10-14 Thread Launchpad Bug Tracker
[Expired for linux (Ubuntu) because there has been no activity for 60
days.]

** Changed in: linux (Ubuntu)
   Status: Incomplete => Expired

-- 
You received this bug notification because you are a member of Kernel
Packages, which is subscribed to linux in Ubuntu.
https://bugs.launchpad.net/bugs/1065159

Title:
  ipv6 routing memory leak

Status in “linux” package in Ubuntu:
  Expired

Bug description:
  Hello,

  I'm running 12.04.1 LTS with the stock Ubuntu kernel; all requested
  information is at the end of this report. As the server has no GUI, I
  can't use the apport reporting tools, since no browser can be opened.

  When IPv6 routing and NDP proxying are enabled
  (net.ipv6.conf.all.proxy_ndp=1, net.ipv6.conf.default.proxy_ndp=1), the
  routing table/cache keeps filling up and the memory is never released.

  I first hit this problem three days ago, when some VMs went offline
  and could no longer be reached. While trying to fix it, I got the
  following error when adding routes after deleting the old ones:

  root@server:~# ip -6 route add default via 2001:db8::1
  RTNETLINK answers: Cannot allocate memory
  root@server:~# 

  After examining /proc and /proc/sys, I noticed that the size of
  /proc/net/ipv6_route exceeded the limit set via sysctl:

  root@server:/proc/net# cat ipv6_route | wc -c
  5174
  root@server:/proc/net# sysctl net.ipv6.route.max_size
  net.ipv6.route.max_size = 4096
  root@server:/proc/net# 

  There are actually only 10 routes in the routing table, and none are
  added or deleted automatically by any KNOWN scripts. And yes, I know
  that the text representation in /proc/net/ipv6_route can be bigger
  than the sysctl value of 4096.
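
  As a rough cross-check, counting entries rather than bytes is closer
  to what net.ipv6.route.max_size actually limits; a minimal sketch,
  assuming one route entry per line of /proc/net/ipv6_route:

  root@server:/proc/net# wc -l ipv6_route                 # number of route entries
  root@server:/proc/net# sysctl net.ipv6.route.max_size   # limit on route entries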

  After further testing, I found that I could work around the problem by
  raising net.ipv6.route.max_size to 8192. After re-adding the routes
  and the proxy entries for the machines, everything started working
  again, until now.

  Now the size of ipv6_route is:

  root@server:/proc/net# cat ipv6_route | wc -c
  6324
  root@server:/proc/net# sysctl net.ipv6.route.max_size
  net.ipv6.route.max_size = 8192
  root@server:/proc/net# 

  And I see exactly the same problem again when I try to re-add the
  routes. Raising net.ipv6.route.max_size to 16384 solves the problem
  once more for a few hours, and I'm just waiting for the moment the
  IPv6 setup stops working again.
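
  For completeness, the workaround each time amounts to something like
  the following (it only buys time, as the entries keep accumulating):

  # raise the limit, then re-add the routes and the NDP proxy entries listed below
  root@server:~# sysctl -w net.ipv6.route.max_size=16384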

  The configuration used to enable everything is as follows:

  You have a subnet, 2001:db8::/64, which is used for virtual machines
  connected to an internal bridge called "brint". The interface facing
  the outside is eth0. The default gateway is 2001:db8::dead:beef:1/128
  (yes, in a different subnet).

  - Enable IPv6 NDP proxying:
    Set net.ipv6.conf.all.proxy_ndp=1, net.ipv6.conf.default.proxy_ndp=1
    and net.ipv6.conf.eth0.proxy_ndp=1

  - Enable IPv6 forwarding:
    Set net.ipv6.conf.all.forwarding=1 and net.ipv6.conf.default.forwarding=1

  - Create the bridge:
    ip link add dev brint type bridge && ip link set dev brint up

  - Assign an IPv6 address to the network interface facing the outside (eth0):
    ip -6 addr add 2001:db8::1/64 dev eth0 && ip link set dev eth0 up

  - Assign an IPv6 address to the bridge (brint):
    ip -6 addr add 2001:db8::1/64 dev brint

  - To make sure the subnet is routed only to brint, delete the connected
    route that the kernel added on eth0:
    ip -6 route del 2001:db8::/64 dev eth0

  - Now add the route to the gateway on eth0:
    ip -6 route add 2001:db8::dead:beef:1/128 dev eth0

  - Add the NDP proxy entries for the machines on brint:
    ip -6 neigh add proxy 2001:db8::2 dev eth0
    ip -6 neigh add proxy 2001:db8::3 dev eth0

  As the subnet 2001:db8::/64 is on-link from the external gateway's
  point of view, it sends NDP neighbour solicitations; the system now
  answers them and forwards the packets received on eth0 to the VMs
  that are in the neighbour list for brint.
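
  To make the sysctl part of this persistent across reboots, it can all
  be dropped into one file; a minimal sketch, assuming an example file
  name of /etc/sysctl.d/60-ipv6-ndp-proxy.conf:

  # /etc/sysctl.d/60-ipv6-ndp-proxy.conf (example name)
  net.ipv6.conf.all.forwarding = 1
  net.ipv6.conf.default.forwarding = 1
  net.ipv6.conf.all.proxy_ndp = 1
  net.ipv6.conf.default.proxy_ndp = 1
  net.ipv6.conf.eth0.proxy_ndp = 1

  # apply it without rebooting:
  sysctl -p /etc/sysctl.d/60-ipv6-ndp-proxy.conf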

  The VMs on the "inside" just use a simple setup, e.g. 2001:db8::2/64
  as the address and 2001:db8::1 as the default gateway.
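
  Inside such a VM the configuration is nothing more than the following
  (assuming the guest's interface is also called eth0):

  root@vm:~# ip -6 addr add 2001:db8::2/64 dev eth0
  root@vm:~# ip -6 route add default via 2001:db8::1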

  Now wait and watch the route table/cache fill up while the memory is
  never released.

  If you have any questions, feel free to contact me.

  KR,

  Grimeton

  1) cat /proc/version_signature
  Ubuntu 3.2.0-31.50-generic 3.2.28

  2) lspci -vnvn

  00:00.0 Host bridge [0600]: Intel Corporation 2nd Generation Core Processor Family DRAM Controller [8086:0100] (rev 09)
        Subsystem: ASUSTeK Computer Inc. P8P67 Deluxe Motherboard [1043:844d]
        Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx-
        Status: Cap+ 66MHz- UDF- FastB2B+ ParErr- DEVSEL=fast >TAbort- SERR-
        Kernel driver in use: agpgart-intel

  00:01.0 PCI bridge [0604]: Intel Corporation Xeon E3-1200/2nd Generation Core Processor Family PCI Express Root Port [8086:0101] (rev 09) (prog-if 00 [Normal decode])
        Control: I/O+ Mem

[Kernel-packages] [Bug 1065159] Re: ipv6 routing memory leak

2013-08-15 Thread Christopher M. Penalver
Grimeton, if the problem is not reproducible in Quantal, and you don't
need a backport to Precise, you may toggle the Status to Invalid.

[Kernel-packages] [Bug 1065159] Re: ipv6 routing memory leak

2013-08-15 Thread Grimeton
Hi,

I'd really like to help you, but I had to upgrade to 12.10 to get a
working environment again.

If the 12.10 information helps you as well, let me know and I'll add the
required info.

KR,

Grimeton.

[Kernel-packages] [Bug 1065159] Re: ipv6 routing memory leak

2013-08-11 Thread Christopher M. Penalver
Grimeton, could you please attach your apport-collect following
https://help.ubuntu.com/community/ReportingBugs#Filing_bugs_when_off-line ?
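
(On a headless box this usually means saving the report locally and
attaching it from a machine with a browser; a rough sketch, assuming
apport-cli is installed and noting that the exact options can differ
between apport versions:)

# on the affected server: collect a report for the linux package into a file
apport-cli --save /tmp/linux-bug.apport linux
# copy /tmp/linux-bug.apport to a desktop machine and attach it there,
# e.g. by opening it with: ubuntu-bug /tmp/linux-bug.apport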

** Tags added: needs-kernel-logs regression-potential

** Changed in: linux (Ubuntu)
   Status: Confirmed => Incomplete
