RE: MTU handling in 6RD deployments

2014-01-07 Thread Templin, Fred L
Hi again, > Second (and more importantly) reassembly is not needed > for packets of any size if the path can pass a 1500 byte ping packet. I should have qualified this by saying that the mechanism still works even if the BR responds to pings subject to rate limiting. Thanks - Fred fred.l.temp...
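
A rough sketch of the "can the path pass a 1500-byte ping" probe mentioned above, assuming Linux iputils ping (the exact test the message has in mind is not shown, and the BR address below is a placeholder). As the message notes, a single failed probe is not conclusive if the far end rate-limits its ping responses.

# Send one non-fragmentable ping whose IPv4 packet totals 1500 bytes.
import subprocess

def path_passes_1500(remote_ipv4: str) -> bool:
    payload = 1500 - 20 - 8   # subtract IPv4 header (20) and ICMP header (8)
    result = subprocess.run(
        ["ping", "-c", "1", "-M", "do", "-s", str(payload), remote_ipv4],
        capture_output=True,
    )
    return result.returncode == 0

# e.g. path_passes_1500("192.0.2.1")   # hypothetical BR address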

Re: MTU handling in 6RD deployments

2014-01-07 Thread Anfinsen, Ragnar
On 07.01.14 17:10, "jean-francois.tremblay...@videotron.com" wrote: >> How many users use your 6rd BR (per BR if many)? > >50k on a pair of ASR1002-5G, but the second is mostly idle. We have about 2K for the time being as we do opt-in. >> How do the rate-limiting (drop) numbers look at

RE: MTU handling in 6RD deployments

2014-01-07 Thread Templin, Fred L
Hi Tore, > -Original Message- > From: Tore Anderson [mailto:t...@fud.no] > Sent: Tuesday, January 07, 2014 9:57 AM > To: Templin, Fred L; IPv6 Ops list > Subject: Re: MTU handling in 6RD deployments > > * Templin, Fred L > > > 6RD could use SEAL the same as any tunneling technology. SEAL

Re: MTU handling in 6RD deployments

2014-01-07 Thread Tore Anderson
* Templin, Fred L > 6RD could use SEAL the same as any tunneling technology. SEAL makes > sure that packets up to 1500 get through no matter what, and lets > bigger packets through (as long as they fit the first-hop MTU) with > the expectation that hosts sending the bigger packets know what they >

Re: MTU handling in 6RD deployments

2014-01-07 Thread Jean-Francois . TremblayING
> How many users use your 6rd BR (per BR if many)? 50k on a pair of ASR1002-5G, but the second is mostly idle. > How do the rate-limiting (drop) numbers look at your side? Actually, it's quite high (over 50%). I gave up on reaching zero here. The nature of the traffic seems to be bursty eno

RE: MTU handling in 6RD deployments

2014-01-07 Thread Templin, Fred L
6RD could use SEAL the same as any tunneling technology. SEAL makes sure that packets up to 1500 get through no matter what, and lets bigger packets through (as long as they fit the first-hop MTU) with the expectation that hosts sending the bigger packets know what they are doing. It works as follo
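
The preview is cut off before the "It works as follows" part, so the following is only a heavily hedged illustration of the general shape the description suggests: segmentation at the tunnel ingress and reassembly at the egress, with packets up to 1500 always delivered and larger packets passed only if they fit the first-hop MTU. The names, sizes and structure below are assumptions, not the SEAL specification.

from dataclasses import dataclass
from typing import List

OUTER_IPV4_HEADER = 20
GUARANTEED_INNER_SIZE = 1500   # inner packets up to this size always get through

@dataclass
class Segment:
    ident: int
    offset: int
    more: bool
    payload: bytes

def tunnel_ingress(inner: bytes, first_hop_mtu: int, ident: int) -> List[Segment]:
    """Encapsulate an inner packet, segmenting at the tunnel layer if needed."""
    if len(inner) + OUTER_IPV4_HEADER <= first_hop_mtu:
        return [Segment(ident, 0, False, inner)]           # fits in one outer packet
    if len(inner) > GUARANTEED_INNER_SIZE:
        # Bigger packets are only passed if they fit the first hop; the host
        # sending them is expected to know what it is doing.
        raise ValueError("oversized inner packet does not fit the first-hop MTU")
    chunk = first_hop_mtu - OUTER_IPV4_HEADER
    pieces = [inner[i:i + chunk] for i in range(0, len(inner), chunk)]
    return [Segment(ident, i * chunk, i < len(pieces) - 1, p)
            for i, p in enumerate(pieces)]

def tunnel_egress(segments: List[Segment]) -> bytes:
    """Reassemble the segments of one identifier back into the inner packet."""
    return b"".join(s.payload for s in sorted(segments, key=lambda s: s.offset))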

Re: MTU handling in 6RD deployments

2014-01-07 Thread Simon Perreault
On 2014-01-07 10:18, Mark Townsley wrote: And generating stinkin' ICMPv6 too big messages ends up being perhaps the most significant scaling factor of a 6rd BR deployment... The worst thing is a lot of content providers will simply ignore those too bigs you worked so hard to produce... *s

Re: MTU handling in 6RD deployments

2014-01-07 Thread Mark Townsley
And generating stinkin' ICMPv6 too big messages ends up being perhaps the most significant scaling factor of a 6rd BR deployment... - Mark On Jan 7, 2014, at 3:59 PM, Simon Perreault wrote: > On 2014-01-07 08:46, jean-francois.tremblay...@videotron.com wrote: >> In the list of "tricks", you

Re: MTU handling in 6RD deployments

2014-01-07 Thread Simon Perreault
On 2014-01-07 08:46, jean-francois.tremblay...@videotron.com wrote: In the list of "tricks", you might want to add: * Slightly raise the ICMPv6 rate-limit values for your 6RD BR (we do 50/20) Yeah, this is really problematic. When IPv6 packets arrive at the BR from the Internet, the BR need
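
For readers unfamiliar with the rate limiting being discussed: the "50/20" above looks like an interval/bucket pair configured on the BR, though the exact semantics are platform-specific. The sketch below only illustrates the general token-bucket behaviour that makes a busy BR drop Packet Too Big generation under load; the constants and semantics are assumptions, not any particular vendor's implementation.

import time

class IcmpErrorLimiter:
    def __init__(self, interval_ms: float = 50.0, bucket_size: int = 20):
        self.rate = 1000.0 / interval_ms        # tokens refilled per second
        self.bucket_size = bucket_size
        self.tokens = float(bucket_size)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Return True if a Packet Too Big may be generated right now."""
        now = time.monotonic()
        self.tokens = min(self.bucket_size,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False   # counted as a rate-limit drop, like the >50% figure mentioned above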

Re: MTU handling in 6RD deployments

2014-01-07 Thread Anfinsen, Ragnar
On 07.01.14 14:46, "jean-francois.tremblay...@videotron.com" wrote: >In the list of "tricks", you might want to add: >* Slightly raise the ICMPv6 rate-limit values for your 6RD BR (we do >50/20) How many users use your 6rd BR (per BR if many)? >Too bigs remain quite common however... >#sh ipv6

Re: MTU handling in 6RD deployments

2014-01-07 Thread Jean-Francois . TremblayING
> From: Gert Doering > > "Have a higher IPv4 MTU between the 6rd tunnel endpoints" sounds like > a nice solution an ISP could deploy. DOCSIS MTU is 1518 bytes, so that won't happen any time soon in the cable world. (DOCSIS 3.1 is higher at 2000 bytes, but that's years away) /JF
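
Back-of-the-envelope arithmetic behind the DOCSIS remark, as a quick sketch; the frame sizes are taken from the message, while the Ethernet header/FCS split is an assumption.

ETH_HEADER = 14
ETH_FCS = 4
IPV4_HEADER = 20   # 6RD encapsulation (IPv6-in-IPv4)

def ipv6_mtu_over_6rd(max_frame: int) -> int:
    ip_mtu = max_frame - ETH_HEADER - ETH_FCS   # IPv4 MTU on the access link
    return ip_mtu - IPV4_HEADER                 # what is left for IPv6 inside 6RD

print(ipv6_mtu_over_6rd(1518))   # DOCSIS 3.0 frame -> 1480, so no 1500-byte IPv6
print(ipv6_mtu_over_6rd(2000))   # DOCSIS 3.1 frame -> 1962, plenty of headroom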

Re: MTU handling in 6RD deployments

2014-01-07 Thread Tore Anderson
* Gert Doering > "Have a higher IPv4 MTU between the 6rd tunnel endpoints" sounds like > a nice solution an ISP could deploy. True, well, in theory anyway. The reason I didn't include this in my list was that considering the whole point of 6RD is to be able to bypass limitations of old rusty ge

RE: MTU handling in 6RD deployments

2014-01-07 Thread Jean-Francois . TremblayING
Hi Tore. > Does anyone know what tricks, if any, the major 6RD deployments (AT&T, > Free, Swisscom, others?) are using to alleviate any problems stemming > from the reduced IPv6 MTU? Some possibilities that come to mind are: > > * Having the 6RD CPE lower the TCP MSS value of SYN packets as they

Re: MTU handling in 6RD deployments

2014-01-07 Thread Gert Doering
Hi, On Tue, Jan 07, 2014 at 12:37:39PM +0100, Tore Anderson wrote: > Does anyone know what tricks, if any, the major 6RD deployments (AT&T, > Free, Swisscom, others?) are using to alleviate any problems stemming > from the reduced IPv6 MTU? Some possibilities that come to mind are: "Have a higher

Re: MTU handling in 6RD deployments

2014-01-07 Thread Mikael Abrahamsson
On Tue, 7 Jan 2014, Mark Townsley wrote: Note I've heard some ISPs consider running Jumbo Frames under the covers so that IPv4 could carry 1520 and 1500 would be possible for IPv6, but have not yet seen that confirmed in practice. Unless this is done in a very controlled environment I'd
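
The 1520-byte figure is simply the desired inner IPv6 MTU plus the 6RD encapsulation header; a minimal check, assuming plain IPv6-in-IPv4 (protocol 41) with no extra tunnel overhead.

IPV4_HEADER = 20

def required_ipv4_mtu(desired_ipv6_mtu: int = 1500) -> int:
    return desired_ipv6_mtu + IPV4_HEADER

assert required_ipv4_mtu(1500) == 1520   # core must carry 1520-byte IPv4 packets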

Re: MTU handling in 6RD deployments

2014-01-07 Thread Mark Townsley
On Jan 7, 2014, at 12:56 PM, Emmanuel Thierry wrote: > Hello, > > On Jan 7, 2014, at 12:37, Tore Anderson wrote: > >> Hi list, >> >> Does anyone know what tricks, if any, the major 6RD deployments (AT&T, >> Free, Swisscom, others?) are using to alleviate any problems stemming >> from the red

Re: MTU handling in 6RD deployments

2014-01-07 Thread Ole Troan
> Does anyone know what tricks, if any, the major 6RD deployments (AT&T, > Free, Swisscom, others?) are using to alleviate any problems stemming > from the reduced IPv6 MTU? Some possibilities that come to mind are: > > * Having the 6RD CPE lower the TCP MSS value of SYN packets as they > enter/ex

Re: MTU handling in 6RD deployments

2014-01-07 Thread Emmanuel Thierry
Hello, On Jan 7, 2014, at 12:37, Tore Anderson wrote: > Hi list, > > Does anyone know what tricks, if any, the major 6RD deployments (AT&T, > Free, Swisscom, others?) are using to alleviate any problems stemming > from the reduced IPv6 MTU? Some possibilities that come to mind are: > > * Havi

Re: IPv6 broken on Fedora 20?

2014-01-07 Thread Hannes Frederic Sowa
On Tue, Jan 07, 2014 at 12:49:15PM +0100, Hannes Frederic Sowa wrote: > Yes it is and I fixed that yesterday. I guess I should ask that the patch > be pushed to stable. Sorry, forgot the link: https://git.kernel.org/cgit/linux/kernel/git/davem/net.git/commit/?id=88ad31491e21f5dec347911d980

Re: IPv6 broken on Fedora 20?

2014-01-07 Thread Hannes Frederic Sowa
On Tue, Jan 07, 2014 at 12:42:43PM +0100, Tore Anderson wrote: > * Hannes Frederic Sowa > > > It also had some effect on anycast address generation. > > > >> But you are right, essentially it should work but some assumptions were > >> made in the kernel which should have been checked first. > >

Re: IPv6 broken on Fedora 20?

2014-01-07 Thread Tore Anderson
* Hannes Frederic Sowa > It also had some effect on anycast address generation. > >> But you are right, essentially it should work but some assumptions were >> made in the kernel which should have been checked first. > > I guess they're switching back to 64 while suppressing automatically addin

MTU handling in 6RD deployments

2014-01-07 Thread Tore Anderson
Hi list, Does anyone know what tricks, if any, the major 6RD deployments (AT&T, Free, Swisscom, others?) are using to alleviate any problems stemming from the reduced IPv6 MTU? Some possibilities that come to mind are: * Having the 6RD CPE lower the TCP MSS value of SYN packets as they enter/exit
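
As a rough sketch of the MSS-clamping arithmetic behind the first trick: the constants are the usual header sizes, and the 1500-byte IPv4 path is an assumption rather than a value from any particular deployment.

IPV4_HEADER = 20   # 6RD encapsulation overhead (IPv6-in-IPv4, protocol 41)
IPV6_HEADER = 40
TCP_HEADER = 20    # without options

def srd_ipv6_mtu(ipv4_path_mtu: int) -> int:
    """IPv6 MTU available inside the 6RD tunnel."""
    return ipv4_path_mtu - IPV4_HEADER

def clamped_mss(ipv6_mtu: int) -> int:
    """TCP MSS a 6RD CPE would write into SYN packets it forwards."""
    return ipv6_mtu - IPV6_HEADER - TCP_HEADER

mtu = srd_ipv6_mtu(1500)          # classic 1500-byte IPv4 path
print(mtu, clamped_mss(mtu))      # -> 1480 1420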

Re: IPv6 broken on Fedora 20?

2014-01-07 Thread Hannes Frederic Sowa
On Thu, Dec 19, 2013 at 07:14:24PM +0100, Hannes Frederic Sowa wrote: > > Once you're doing that, it's probably easier to handle L=1 by simply > > adding the on-link route directly, rather than adding the address as a > > /64 and relying on the kernel to add the route for you. The two should > > re
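
A hedged illustration of the two approaches being contrasted above (adding the on-link route directly versus adding the address as a /64 and relying on the kernel to create the connected route), using iproute2 on Linux; the prefix, address and interface names are placeholders, and this is not the kernel patch itself.

import subprocess

def add_with_explicit_onlink_route(ifname: str = "eth0") -> None:
    # Add the address without a /64, then add the on-link route explicitly.
    subprocess.run(["ip", "-6", "addr", "add", "2001:db8::2/128", "dev", ifname], check=True)
    subprocess.run(["ip", "-6", "route", "add", "2001:db8::/64", "dev", ifname], check=True)

def add_as_slash64(ifname: str = "eth0") -> None:
    # Add the address as a /64 and let the kernel create the connected route.
    subprocess.run(["ip", "-6", "addr", "add", "2001:db8::2/64", "dev", ifname], check=True)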