Re: IPv6 broken on Fedora 20?

2014-01-07 Thread Hannes Frederic Sowa
On Thu, Dec 19, 2013 at 07:14:24PM +0100, Hannes Frederic Sowa wrote:
  Once you're doing that, it's probably easier to handle L=1 by simply
  adding the on-link route directly, rather than adding the address as a
  /64 and relying on the kernel to add the route for you. The two should
  result in the same functionality, though, so I don't really understand
  what's actually broken here.
 
 I guess it breaks generation of privacy addresses.

It also had some effect on anycast address generation.

 But you are right: essentially it should work, but some assumptions were
 made in the kernel that should have been checked first.

I guess they're switching back to /64 while suppressing the automatic addition
of prefix routes:

  http://patchwork.ozlabs.org/patch/307389/

This feature should also be available in iproute then.
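For what it's worth, once that lands, suppressing the automatic prefix route from userspace might look roughly like this (a sketch assuming iproute2 exposes the new flag from the patch above as `noprefixroute`; the address and device names are illustrative):

```shell
# Add a /64 address (so privacy/anycast address generation keeps
# working) but tell the kernel not to create the on-link /64 route:
ip -6 address add 2001:db8:1::42/64 dev eth0 noprefixroute

# The on-link route can then be managed explicitly where wanted:
ip -6 route add 2001:db8:1::/64 dev eth0
```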

Greetings,

  Hannes



Re: IPv6 broken on Fedora 20?

2014-01-07 Thread Hannes Frederic Sowa
On Tue, Jan 07, 2014 at 12:42:43PM +0100, Tore Anderson wrote:
 * Hannes Frederic Sowa
 
  It also had some effect on anycast address generation.
  
  But you are right, essentially it should work but some assumptions were
  made in the kernel which should have been checked first.
  
  I guess they're switching back to /64 while suppressing the automatic addition
  of prefix routes:
  
http://patchwork.ozlabs.org/patch/307389/
  
  This feature should also be available in iproute then.
 
 Could you elaborate on the anycast address generation problem?

The kernel also installed a subnet-router anycast address if the
prefixlen was 128. If you have NM and also e.g. libvirt, which may
enable IPv6 forwarding, the same /128 got installed as an anycast address
(see /proc/net/anycast6). I did not see any breakage, but it could defer ndisc
responses.
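For anyone wanting to check this on their own box, the kernel's current anycast bindings are visible in procfs (read-only, so safe to run anywhere):

```shell
# Columns: interface index, device, anycast address, reference count
cat /proc/net/anycast6

# The subnet-router anycast entries only appear with forwarding enabled:
cat /proc/sys/net/ipv6/conf/all/forwarding
```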

 Reason I'm asking is that even though the patch you linked to allows NM
 to return to adding /64s in the case of SLAAC, there are still DHCPv6
 IA_NA addresses, which are always /128, yet possibly in combination with
 arbitrary-prefix-length on-link routes (if a PIO exists in the RA with
 A=0, L=1). I'm thinking that perhaps this anycast address generation
 problem could be present in that case too?

Yes, it is, and I fixed that yesterday. I guess I should ask for the patch
to be pushed to stable.

Greetings,

  Hannes


Re: IPv6 broken on Fedora 20?

2014-01-07 Thread Hannes Frederic Sowa
On Tue, Jan 07, 2014 at 12:49:15PM +0100, Hannes Frederic Sowa wrote:
 Yes, it is, and I fixed that yesterday. I guess I should ask for the patch
 to be pushed to stable.

Sorry, forgot the link:
https://git.kernel.org/cgit/linux/kernel/git/davem/net.git/commit/?id=88ad31491e21f5dec347911d9804c673af414a09

Greetings,

  Hannes



Re: MTU handling in 6RD deployments

2014-01-07 Thread Emmanuel Thierry
Hello,

On 7 Jan 2014, at 12:37, Tore Anderson wrote:

 Hi list,
 
 Does anyone know what tricks, if any, the major 6RD deployments (ATT,
 Free, Swisscom, others?) are using to alleviate any problems stemming
 from the reduced IPv6 MTU? Some possibilities that come to mind are:
 
 * Having the 6RD CPE lower the TCP MSS value of SYN packets as they
 enter/exit the tunnel device
 * Having the 6RD BR lower the TCP MSS value in the same way as above
 * Having the 6RD CPE advertise a lowered MTU to the LAN in RA Options

For your information, I see an advertised MTU of 1480 on my WiFi interface with
the Free CPE.

 * Several (or all) of the above in combination
 
 Also, given that some ISPs offer [only] Layer-2 service and expect/allow
 their customers to bring their own Layer-3 home gateway if they want
 one, I would find it interesting to learn if any of the most common
 off-the-shelf home gateway products (that enable 6RD by default) also
 implement any such tricks by default or not.
 

Best regards
Emmanuel Thierry



Re: MTU handling in 6RD deployments

2014-01-07 Thread Ole Troan
 Does anyone know what tricks, if any, the major 6RD deployments (ATT,
 Free, Swisscom, others?) are using to alleviate any problems stemming
 from the reduced IPv6 MTU? Some possibilities that come to mind are:
 
 * Having the 6RD CPE lower the TCP MSS value of SYN packets as they
 enter/exit the tunnel device
 * Having the 6RD BR lower the TCP MSS value in the same way as above
 * Having the 6RD CPE advertise a lowered MTU to the LAN in RA Options
 * Several (or all) of the above in combination

ensure the 6rd domain MTU is >= 1520.

 Also, given that some ISPs offer [only] Layer-2 service and expect/allow
 their customers to bring their own Layer-3 home gateway if they want
 one, I would find it interesting to learn if any of the most common
 off-the-shelf home gateway products (that enable 6RD by default) also
 implement any such tricks by default or not.

cheers,
Ole





Re: MTU handling in 6RD deployments

2014-01-07 Thread Mark Townsley

On Jan 7, 2014, at 12:56 PM, Emmanuel Thierry wrote:

 Hello,
 
 On 7 Jan 2014, at 12:37, Tore Anderson wrote:
 
 Hi list,
 
 Does anyone know what tricks, if any, the major 6RD deployments (ATT,
 Free, Swisscom, others?) are using to alleviate any problems stemming
 from the reduced IPv6 MTU? Some possibilities that come to mind are:
 
 * Having the 6RD CPE lower the TCP MSS value of SYN packets as they
 enter/exit the tunnel device
 * Having the 6RD BR lower the TCP MSS value in the same way as above
 * Having the 6RD CPE advertise a lowered MTU to the LAN in RA Options
 
 For your information, I see an advertised MTU of 1480 on my WiFi interface
 with the Free CPE.

Section 9.1 of RFC 5969:

   If the MTU is well-managed such that the IPv4 MTU on the CE WAN side
   interface is set so that no fragmentation occurs within the boundary
   of the SP, then the 6rd Tunnel MTU should be set to the known IPv4
   MTU minus the size of the encapsulating IPv4 header (20 bytes).  For
   example, if the IPv4 MTU is known to be 1500 bytes, the 6rd Tunnel
   MTU might be set to 1480 bytes.  Absent more specific information,
   the 6rd Tunnel MTU SHOULD default to 1280 bytes.
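The arithmetic behind those numbers is simple enough to sketch (a toy illustration of the RFC 5969 rule, not code from any implementation):

```python
IPV4_HEADER = 20  # bytes added by the encapsulating IPv4 header

def tunnel_mtu(ipv4_mtu: int) -> int:
    """6rd tunnel MTU for a well-managed IPv4 path MTU (RFC 5969, 9.1)."""
    # Never go below 1280, the IPv6 minimum link MTU.
    return max(ipv4_mtu - IPV4_HEADER, 1280)

print(tunnel_mtu(1500))  # 1480, the RA MTU Emmanuel sees from the Free CPE
print(tunnel_mtu(1520))  # 1500, why jumbo IPv4 frames would restore full MTU
```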

Note that I've heard some ISPs are considering running jumbo frames under the
covers so that IPv4 could carry 1520 bytes and 1500 would be possible for IPv6,
but I have not yet seen that confirmed in practice.

- Mark

 
 * Several (or all) of the above in combination
 
 Also, given that some ISPs offer [only] Layer-2 service and expect/allow
 their customers to bring their own Layer-3 home gateway if they want
 one, I would find it interesting to learn if any of the most common
 off-the-shelf home gateway products (that enable 6RD by default) also
 implement any such tricks by default or not.
 
 
 Best regards
 Emmanuel Thierry
 



Re: MTU handling in 6RD deployments

2014-01-07 Thread Mikael Abrahamsson

On Tue, 7 Jan 2014, Mark Townsley wrote:

 Note I've heard some ISPs consider running Jumbo Frames under the covers
 so that IPv4 could carry 1520 and 1500 would be possible for IPv6, but
 have not yet seen that confirmed to me in practice.


Unless this is done in a very controlled environment, I'd say this is
bordering on the impossible. There are so many failure points for a jumbo
solution that it's scary. Most of them are also silent failures of PMTUD,
basically blackholing of traffic.


Yes, it can be done of course, but I'd say operationally it's easier to
just drop the MTU to 1480, which is known to work, than to pursue the
jumbo alternative.


--
Mikael Abrahamsson    email: swm...@swm.pp.se


Re: MTU handling in 6RD deployments

2014-01-07 Thread Gert Doering
Hi,

On Tue, Jan 07, 2014 at 12:37:39PM +0100, Tore Anderson wrote:
 Does anyone know what tricks, if any, the major 6RD deployments (ATT,
 Free, Swisscom, others?) are using to alleviate any problems stemming
 from the reduced IPv6 MTU? Some possibilities that come to mind are:

"Have a higher IPv4 MTU between the 6rd tunnel endpoints" sounds like
a nice solution an ISP could deploy.

But I've long given up hope, since everybody seems to agree that 1492
is just fine.

Gert Doering
-- NetMaster
-- 
have you enabled IPv6 on something today...?

SpaceNet AG                Vorstand: Sebastian v. Bomhard
Joseph-Dollinger-Bogen 14  Aufsichtsratsvors.: A. Grundner-Culemann
D-80807 Muenchen   HRB: 136055 (AG Muenchen)
Tel: +49 (0)89/32356-444   USt-IdNr.: DE813185279


RE: MTU handling in 6RD deployments

2014-01-07 Thread Jean-Francois . TremblayING
Hi Tore. 

 Does anyone know what tricks, if any, the major 6RD deployments (ATT,
 Free, Swisscom, others?) are using to alleviate any problems stemming
 from the reduced IPv6 MTU? Some possibilities that come to mind are:
 
 * Having the 6RD CPE lower the TCP MSS value of SYN packets as they
 enter/exit the tunnel device
 * Having the 6RD BR lower the TCP MSS value in the same way as above
 * Having the 6RD CPE advertise a lowered MTU to the LAN in RA Options
 * Several (or all) of the above in combination

Our managed CPEs (D-Links) send (IPv4 MTU) - 20 bytes in RAs, usually 
1480.
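An RA carrying that lowered MTU would be configured on a Linux-based CPE along these lines (a radvd sketch; the interface name and prefix are placeholders, not from any actual D-Link firmware):

```
interface br-lan
{
    AdvSendAdvert on;
    # IPv4 MTU (1500) minus 20 bytes of 6rd encapsulation
    AdvLinkMTU 1480;

    prefix 2001:db8:1::/64
    {
        AdvOnLink on;
        AdvAutonomous on;
    };
};
```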

In the list of tricks, you might want to add: 
* Slightly raise the ICMPv6 rate-limit values for your 6RD BR (we do 
50/20)

I haven't seen IPv6 MSS clamping in the wild yet (it was discussed on 
this list a year ago). 
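For completeness, the clamping being discussed would typically be a netfilter rule on the CPE; a sketch only (the `tun6rd` interface name is a placeholder, and this is not from any known deployment):

```shell
# Clamp the MSS of TCP SYNs leaving via the 6rd tunnel to fit the
# discovered path MTU:
ip6tables -t mangle -A FORWARD -o tun6rd -p tcp \
    --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu

# Or pin an explicit value: 1480-byte tunnel MTU minus 40 (IPv6 header)
# minus 20 (TCP header) = 1420 bytes of MSS.
ip6tables -t mangle -A FORWARD -o tun6rd -p tcp \
    --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 1420
```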

 Also, given that some ISPs offer [only] Layer-2 service and expect/allow
 their customers to bring their own Layer-3 home gateway if they want
 one, I would find it interesting to learn if any of the most common
 off-the-shelf home gateway products (that enable 6RD by default) also
 implement any such tricks by default or not.

Among off-the-shelf gear, we see mostly D-Links and Cisco/Linksys/Belkin
models with option 212 support. A few Asus models started showing up in
the stats in 2013, I believe. Last time I checked, all models supporting
option 212 also reduced their MTU properly (YMMV here; that was almost a
year ago).

Too-bigs remain quite common, however...
#sh ipv6 traffic | in too
   11880 encapsulation failed, 0 no route, 3829023354 too big
#sh ver | in upt
uptime is 2 years, 4 weeks, 5 days, 4 hours, 3 minutes

If 6lab's data is right, roughly half of Canada's IPv6 users go through 
that box (50k users).

/JF



Re: MTU handling in 6RD deployments

2014-01-07 Thread Tore Anderson
* Gert Doering

 "Have a higher IPv4 MTU between the 6rd tunnel endpoints" sounds like
 a nice solution an ISP could deploy.

True, well, in theory anyway.

The reason I didn't include this in my list is that, considering the
whole point of 6RD is to bypass the limitations of old rusty gear that
doesn't support fancy features like IPv6, the chances of that same rusty
gear being able to reliably support jumbo frames aren't very high
either.

Tore



Re: MTU handling in 6RD deployments

2014-01-07 Thread Jean-Francois . TremblayING
 De : Gert Doering g...@space.net
 
 "Have a higher IPv4 MTU between the 6rd tunnel endpoints" sounds like
 a nice solution an ISP could deploy.

The DOCSIS MTU is 1518 bytes, so that won't happen any time soon in the
cable world.
(DOCSIS 3.1 raises it to 2000 bytes, but that's years away.)

/JF



Re: MTU handling in 6RD deployments

2014-01-07 Thread Tore Anderson
* Templin, Fred L

 6RD could use SEAL the same as any tunneling technology. SEAL makes
 sure that packets up to 1500 get through no matter what, and lets
 bigger packets through (as long as they fit the first-hop MTU) with
 the expectation that hosts sending the bigger packets know what they
 are doing. It works as follows:
 
   - tunnel ingress pings the egress with a 1500 byte ping
   - if the ping succeeds, the path MTU is big enough to
 accommodate 1500s w/o fragmentation
   - if the ping fails, use fragmentation/reassembly to
 accommodate 1500 and smaller
   - end result - IPv6 hosts always see an MTU of at least 1500
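The decision Fred describes can be sketched as follows (a toy model; `can_deliver` and `tunnel_strategy` are illustrative names, not SEAL's actual API):

```python
PROBE_SIZE = 1500    # bytes: the tunnel ingress pings the egress with this size
ENCAP_OVERHEAD = 20  # bytes: the encapsulating IPv4 header

def can_deliver(ipv4_path_mtu: int, size: int) -> bool:
    """True if a probe of `size` fits the path without IPv4 fragmentation."""
    return size + ENCAP_OVERHEAD <= ipv4_path_mtu

def tunnel_strategy(ipv4_path_mtu: int) -> str:
    """Pick the tunnel's handling of 1500-byte packets based on the probe."""
    if can_deliver(ipv4_path_mtu, PROBE_SIZE):
        return "pass-through"         # 1500s fit without fragmentation
    return "fragment-and-reassemble"  # ingress fragments, egress reassembles

# Either way, IPv6 hosts behind the tunnel see an MTU of at least 1500.
print(tunnel_strategy(1520))  # pass-through
print(tunnel_strategy(1500))  # fragment-and-reassemble
```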

In order for the BR to support reassembly it must maintain state. That's
going to have a very negative impact on its scaling properties...

Tore



RE: MTU handling in 6RD deployments

2014-01-07 Thread Templin, Fred L
Hi again,

 Second (and more importantly) reassembly is not needed
 for packets of any size if the path can pass a 1500 byte ping packet.

I should have qualified this by saying that the mechanism still
works even if the BR responds to pings subject to rate limiting.

Thanks - Fred
fred.l.temp...@boeing.com