RE: MTU handling in 6RD deployments

2014-01-17 Thread Templin, Fred L
> BTW, by "infinite" I mean 4GB minus the encapsulation overhead.

Umm, sorry; that is only for tunnels over IPv6, where the 32-bit Jumbo
Payload length allows 4GB packets. For tunnels over IPv4, whose Total
Length field is only 16 bits, "infinite" means 64KB minus the overhead.

Thanks - Fred
fred.l.temp...@boeing.com


RE: MTU handling in 6RD deployments

2014-01-17 Thread Templin, Fred L
Hi,

> > You don't ping the BR, you ping yourself via the BR. The BR only forwards 
> > the packet.
> 
> Precisely. The whole idea is to stay on the data plane.

I do not work for a network equipment manufacturer, so I'll take
your word that remaining in the data plane is critical for 6rd BRs
and that high data rate loopbacks are not a problem. So, a looped-back
MTU test tests both the forward and reverse path MTUs between
the CE and BR. This is important to the CE: if it were only
to test the forward path to the BR, it would not know whether the
reverse path MTU is big enough, and allowing an IPv6 destination
outside of the 6rd site to discover a too-large MSS could result
in communication failures.

In terms of the BR's knowledge of the path MTU to the CE, if we
can assume that the BR will receive the necessary ICMPs from the
6rd site then it can passively rely on translating ICMPv4 PTB
messages coming from the 6rd site into corresponding ICMPv6 PTB
messages to send back to the remote IPv6 correspondent. So, the
BR should be able to set an infinite IPv6 MTU on its tunnel
interface and passively translate any PTB messages it receives.
That, plus the fact that the two IPv6 hosts have to agree on an
MSS, excuses the BR from having to do any active probing itself.
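
To make that translation concrete, here is a rough Python sketch of
what the BR would do on receiving an ICMPv4 PTB for a packet it had
encapsulated (build_icmpv6_ptb() is a hypothetical helper, not from
any spec):

    # Passive ICMPv4 PTB -> ICMPv6 PTB translation on the BR; sketch only.
    IPV4_ENCAPS_OVERHEAD = 20   # bytes of 6rd IPv4 encapsulation
    IPV6_MIN_MTU = 1280         # IPv6 minimum link MTU

    def translate_ptb(icmpv4_reported_mtu, inner_ipv6_packet):
        # The ICMPv4 PTB reports the IPv4 path MTU; subtract the
        # encapsulation overhead to get what the IPv6 sender can use,
        # never reporting less than the IPv6 minimum.
        ipv6_mtu = max(icmpv4_reported_mtu - IPV4_ENCAPS_OVERHEAD,
                       IPV6_MIN_MTU)
        # Send the translated PTB back to the original IPv6 source.
        return build_icmpv6_ptb(dst=inner_ipv6_packet.src, mtu=ipv6_mtu,
                                invoking_packet=inner_ipv6_packet)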

So, take what is already in RFC5969, and add that a successful
test of a 1500 byte probe allows the CE to set an infinite IPv6
MTU with the understanding that IPv6 hosts that want to use
sizes larger than 1500 are expected to use RFC4821.
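
In rough Python, the CE-side rule would be something like this
(probe_1500() being a hypothetical helper that loops a 1500-byte IPv6
packet back through the BR as above):

    # CE-side tunnel MTU decision; sketch only.
    INFINITE_MTU = 65535 - 20   # "infinite" over IPv4: 64KB minus overhead

    def configure_tunnel_mtu(probe_1500):
        if probe_1500():
            # The path handles 1500; hosts wanting larger sizes are
            # expected to probe for themselves per RFC 4821.
            return INFINITE_MTU
        # Otherwise fall back to the classic 6rd value (1500 - 20).
        return 1480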

BTW, by "infinite" I mean 4GB minus the encapsulation overhead.

Thanks - Fred
fred.l.temp...@boeing.com


RE: MTU handling in 6RD deployments

2014-01-17 Thread Mikael Abrahamsson

On Fri, 17 Jan 2014, Templin, Fred L wrote:

> But, if the BR doesn't examine the packet it could get caught up in a
> flood-ping initiated by a malicious CE.

The BR should have enough dataplane forwarding capacity to handle this.

> I am considering a specific ping rather than an ordinary data packet as
> a way for the BR to know whether the CE is testing the MTU vs whether it
> is just looping back packets. If the BR knows the CE is testing the MTU,
> it can send ping replies subject to rate limiting so a malicious CE
> can't swamp the BR with excessive pings.

Why does it need to know? The CE is pinging itself CE->BR->CE, and if the
CE doesn't receive the packet back then the MTU is obviously limited.


So the CE sends out a packet towards the BR, with the destination IPv6
address being the CE itself. The packet arrives at the BR, gets
decapsulated, goes through an IPv6 destination address lookup, gets
encapsulated again, and is then sent on to the CE. Pure data plane.
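
As a rough sketch with Scapy (addresses are placeholders; a real CE
would derive its 6rd address from the delegated prefix):

    # CE->BR->CE loopback probe: an IPv6 echo request to the CE's own
    # 6rd address, encapsulated in IPv4 protocol 41 toward the BR.
    from scapy.all import IP, IPv6, ICMPv6EchoRequest, sr1

    BR_V4 = "192.0.2.1"                # example BR address
    MY_6RD = "2001:db8:c000:201::1"    # example CE 6rd address

    def loopback_probe(ipv6_size=1500):
        # 40-byte IPv6 header + 8-byte ICMPv6 echo header = 48 bytes.
        inner = IPv6(src=MY_6RD, dst=MY_6RD) / \
            ICMPv6EchoRequest(data=b"\x00" * (ipv6_size - 48))
        probe = IP(dst=BR_V4, flags="DF") / inner   # 1520 bytes of IPv4
        # Getting the packet back proves both directions pass this size.
        return sr1(probe, timeout=2) is not None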


I don't get why the BR should need to get involved in anything more 
complicated than that?


--
Mikael Abrahamsson    email: swm...@swm.pp.se


Re: MTU handling in 6RD deployments

2014-01-17 Thread Mark Townsley

On Jan 17, 2014, at 5:14 PM, Mikael Abrahamsson wrote:

> On Fri, 17 Jan 2014, Templin, Fred L wrote:
> 
>> Sorry, I was looking at the wrong section. I see now that Section 8 is 
>> talking about a method for a CE to send an ordinary data packet that loops 
>> back via the BR. That method is fine, but it is no more immune to someone 
>> abusing the mechanism than would be sending a ping (or some other NUD 
>> message). By using a ping, the BR can impose rate-limiting on its ping 
>> responses whereas with a looped-back data packet the BR really can't do rate 
>> limiting.
> 
> You don't ping the BR, you ping yourself via the BR. The BR only forwards the 
> packet.

Precisely. The whole idea is to stay on the data plane. 

- Mark

> 
>> Also, Section 8 of RFC5969 only talks about the CE testing the forward
>> path to the BR. Unless the BR also tests the reverse path to the CE it
>> has no way of knowing whether the CE can accept large packets.
> 
> You misread the text.
> 
> -- 
> Mikael Abrahamsson    email: swm...@swm.pp.se



RE: MTU handling in 6RD deployments

2014-01-17 Thread Templin, Fred L
> cache a boolean "ACCEPTS_BIG_PACKETS" for this CE.

BTW, the reason I am saying that the only thing we are trying
to determine is whether or not the CE<->BR path can pass a 1500
byte packet is that 1500 bytes is the de facto Internet cell size
most end systems expect to see w/o getting an ICMP PTB back.

So, if we can give the hosts at least 1500 then if they want
to try for a larger size they should use RFC4821. This makes
things much easier than trying to probe the CE<->BR path for
an exact size.

Thanks - Fred
fred.l.temp...@boeing.com



RE: MTU handling in 6RD deployments

2014-01-17 Thread Templin, Fred L
Hi Mikael,

> -Original Message-
> From: Mikael Abrahamsson [mailto:swm...@swm.pp.se]
> Sent: Friday, January 17, 2014 8:15 AM
> To: Templin, Fred L
> Cc: Mark Townsley; ipv6-ops@lists.cluenet.de
> Subject: RE: MTU handling in 6RD deployments
> 
> On Fri, 17 Jan 2014, Templin, Fred L wrote:
> 
> > Sorry, I was looking at the wrong section. I see now that Section 8 is
> > talking about a method for a CE to send an ordinary data packet that
> > loops back via the BR. That method is fine, but it is no more immune to
> > someone abusing the mechanism than would be sending a ping (or some
> > other NUD message). By using a ping, the BR can impose rate-limiting on
> > its ping responses whereas with a looped-back data packet the BR really
> > can't do rate limiting.
> 
> You don't ping the BR, you ping yourself via the BR. The BR only forwards
> the packet.
> 
> > Also, Section 8 of RFC5969 only talks about the CE testing the forward
> > path to the BR. Unless the BR also tests the reverse path to the CE it
> > has no way of knowing whether the CE can accept large packets.
> 
> You misread the text.

I don't see anywhere that it says the BR should also ping the
CE and cache a boolean "ACCEPTS_BIG_PACKETS" for this CE. If the BR
doesn't do that, it needs to set its MTU to the CE to 1480 (or 1472
or something).
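
In rough Python, the state I have in mind is tiny (probe_big_ping()
is a hypothetical helper for a 1520-byte ping with DF set):

    # Per-CE MTU cache on the BR, aged like an IPv4 path MTU cache; sketch.
    import time

    CACHE_TTL = 600   # seconds before re-probing a CE (arbitrary choice)
    _cache = {}       # ce_ipv4 -> (accepts_big_packets, timestamp)

    def mtu_to_ce(ce_ipv4, probe_big_ping):
        entry = _cache.get(ce_ipv4)
        if entry is None or time.time() - entry[1] > CACHE_TTL:
            # probe_big_ping() sends a 1520-byte ping with DF set to the CE.
            entry = (probe_big_ping(ce_ipv4), time.time())
            _cache[ce_ipv4] = entry
        # "Infinite" (64KB minus overhead) if big packets pass, else 1480.
        return 65515 if entry[0] else 1480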

Thanks - Fred
fred.l.temp...@boeing.com

> --
> Mikael Abrahamsson    email: swm...@swm.pp.se


RE: MTU handling in 6RD deployments

2014-01-17 Thread Templin, Fred L
Hi Mikael,

> -Original Message-
> From: Mikael Abrahamsson [mailto:swm...@swm.pp.se]
> Sent: Friday, January 17, 2014 8:16 AM
> To: Templin, Fred L
> Cc: Mark Townsley; ipv6-ops@lists.cluenet.de
> Subject: RE: MTU handling in 6RD deployments
> 
> On Fri, 17 Jan 2014, Mikael Abrahamsson wrote:
> 
> > On Fri, 17 Jan 2014, Templin, Fred L wrote:
> >
> >> Sorry, I was looking at the wrong section. I see now that Section 8 is
> >> talking about a method for a CE to send an ordinary data packet that loops
> >> back via the BR. That method is fine, but it is no more immune to someone
> >> abusing the mechanism than would be sending a ping (or some other NUD
> >> message). By using a ping, the BR can impose rate-limiting on its ping
> >> responses whereas with a looped-back data packet the BR really can't do
> >> rate limiting.
> >
> > You don't ping the BR, you ping yourself via the BR. The BR only forwards 
> > the
> > packet.

But, if the BR doesn't examine the packet it could get caught up
in a flood-ping initiated by a malicious CE.
 
> My bad, I didn't read your text properly. Why would the BR want to
> rate-limit data plane traffic?

I am considering a specific ping rather than an ordinary data packet
as a way for the BR to know whether the CE is testing the MTU vs
whether it is just looping back packets. If the BR knows the CE is
testing the MTU, it can send ping replies subject to rate limiting
so a malicious CE can't swamp the BR with excessive pings.

Thanks - Fred
fred.l.temp...@boeing.com 

> --
> Mikael Abrahamsson    email: swm...@swm.pp.se


RE: MTU handling in 6RD deployments

2014-01-17 Thread Mikael Abrahamsson

On Fri, 17 Jan 2014, Templin, Fred L wrote:

> Sorry, I was looking at the wrong section. I see now that Section 8 is
> talking about a method for a CE to send an ordinary data packet that
> loops back via the BR. That method is fine, but it is no more immune to
> someone abusing the mechanism than would be sending a ping (or some
> other NUD message). By using a ping, the BR can impose rate-limiting on
> its ping responses whereas with a looped-back data packet the BR really
> can't do rate limiting.

You don't ping the BR, you ping yourself via the BR. The BR only forwards
the packet.

> Also, Section 8 of RFC5969 only talks about the CE testing the forward
> path to the BR. Unless the BR also tests the reverse path to the CE it
> has no way of knowing whether the CE can accept large packets.

You misread the text.

--
Mikael Abrahamsson    email: swm...@swm.pp.se


RE: MTU handling in 6RD deployments

2014-01-17 Thread Mikael Abrahamsson

On Fri, 17 Jan 2014, Mikael Abrahamsson wrote:


> On Fri, 17 Jan 2014, Templin, Fred L wrote:
>
>> Sorry, I was looking at the wrong section. I see now that Section 8 is
>> talking about a method for a CE to send an ordinary data packet that
>> loops back via the BR. That method is fine, but it is no more immune to
>> someone abusing the mechanism than would be sending a ping (or some
>> other NUD message). By using a ping, the BR can impose rate-limiting on
>> its ping responses whereas with a looped-back data packet the BR really
>> can't do rate limiting.
>
> You don't ping the BR, you ping yourself via the BR. The BR only forwards
> the packet.

My bad, I didn't read your text properly. Why would the BR want to
rate-limit data plane traffic?


--
Mikael Abrahamsson    email: swm...@swm.pp.se


RE: MTU handling in 6RD deployments

2014-01-17 Thread Mikael Abrahamsson

On Fri, 17 Jan 2014, Templin, Fred L wrote:


> So, if we were to construct the pings from the IPv6 level we would
> want to use link-local source and destination addresses.

No. What you want to do is ping your own 6RD address and see if you get
the packet back. Link-locals do not work in 6RD that way.


--
Mikael Abrahamsson    email: swm...@swm.pp.se


RE: MTU handling in 6RD deployments

2014-01-17 Thread Templin, Fred L
Hi Mark,

> -Original Message-
> From: ipv6-ops-bounces+fred.l.templin=boeing@lists.cluenet.de 
> [mailto:ipv6-ops-
> bounces+fred.l.templin=boeing@lists.cluenet.de] On Behalf Of Templin, 
> Fred L
> Sent: Friday, January 17, 2014 7:57 AM
> To: Mark Townsley; Mikael Abrahamsson
> Cc: ipv6-ops@lists.cluenet.de
> Subject: RE: MTU handling in 6RD deployments
> 
> Hi Mark,
> 
> > -Original Message-
> > From: Mark Townsley [mailto:m...@townsley.net]
> > Sent: Friday, January 17, 2014 12:41 AM
> > To: Mikael Abrahamsson
> > Cc: Templin, Fred L; ipv6-ops@lists.cluenet.de
> > Subject: Re: MTU handling in 6RD deployments
> >
> >
> > On Jan 17, 2014, at 9:24 AM, Mikael Abrahamsson wrote:
> >
> > > On Thu, 16 Jan 2014, Templin, Fred L wrote:
> > >
> > >> The key is that we want to probe the path between the BR and CE (in both 
> > >> directions) *before*
> > allowing regular data packets to flow. We want to know ahead of time 
> > whether to allow large packets
> > into the tunnel or whether we need to shut the MTU down to 1480 (or 1472 or 
> > something) and clamp the
> > MSS. Because, once we restrict the tunnel MTU hosts will be stuck with a 
> > degenerate MTU indefinitely
> > or at least for a long time.
> > >
> > > This method makes some sense, but since network conditions can change, I 
> > > would like to see
> periodic
> > re-checks of the tunnel still working with the packet sizes, perhaps 
> > pinging itself over the tunnel
> > once per minute with the larger packet size if larger packet size is in use.
> >
> > Section 8 of RFC 5969 could be relevant here.
> 
> In that section, I see:
> 
>"The link-local
>address of a 6rd virtual interface performing the 6rd encapsulation
>would, if needed, be formed as described in Section 3.7 of [RFC4213].
>However, no communication using link-local addresses will occur."

Sorry, I was looking at the wrong section. I see now that Section 8
is talking about a method for a CE to send an ordinary data packet
that loops back via the BR. That method is fine, but it is no more
immune to someone abusing the mechanism than would be sending a ping
(or some other NUD message). By using a ping, the BR can impose
rate-limiting on its ping responses whereas with a looped-back
data packet the BR really can't do rate limiting.

Also, Section 8 of RFC5969 only talks about the CE testing the forward
path to the BR. Unless the BR also tests the reverse path to the CE it
has no way of knowing whether the CE can accept large packets. 

Thanks - Fred
fred.l.temp...@boeing.com

> So, if we were to construct the pings from the IPv6 level we would
> want to use link-local source and destination addresses. But, that
> raises a question that would need to be addressed - should the pings
> be constructed at the IPv6 level, the IPv4 level, or some mid-level
> like SEAL?
> 
> One other thing about this is that we are specifically not testing
> to determine an exact path MTU. We are only trying to answer the
> binary question of whether or not the tunnel can pass a 1500 byte
> IPv6 packet.
> 
> Thanks - Fred
> fred.l.temp...@boeing.com
> 
> > - Mark
> >
> > >
> > > --
> > > Mikael Abrahamsson    email: swm...@swm.pp.se



RE: MTU handling in 6RD deployments

2014-01-17 Thread Templin, Fred L
Hi Mark,

> -Original Message-
> From: Mark Townsley [mailto:m...@townsley.net]
> Sent: Friday, January 17, 2014 12:41 AM
> To: Mikael Abrahamsson
> Cc: Templin, Fred L; ipv6-ops@lists.cluenet.de
> Subject: Re: MTU handling in 6RD deployments
> 
> 
> On Jan 17, 2014, at 9:24 AM, Mikael Abrahamsson wrote:
> 
> > On Thu, 16 Jan 2014, Templin, Fred L wrote:
> >
> >> The key is that we want to probe the path between the BR and CE (in both 
> >> directions) *before*
> allowing regular data packets to flow. We want to know ahead of time whether 
> to allow large packets
> into the tunnel or whether we need to shut the MTU down to 1480 (or 1472 or 
> something) and clamp the
> MSS. Because, once we restrict the tunnel MTU hosts will be stuck with a 
> degenerate MTU indefinitely
> or at least for a long time.
> >
> > This method makes some sense, but since network conditions can change, I 
> > would like to see periodic
> re-checks of the tunnel still working with the packet sizes, perhaps pinging 
> itself over the tunnel
> once per minute with the larger packet size if larger packet size is in use.
> 
> Section 8 of RFC 5969 could be relevant here.

In that section, I see:

   "The link-local
   address of a 6rd virtual interface performing the 6rd encapsulation
   would, if needed, be formed as described in Section 3.7 of [RFC4213].
   However, no communication using link-local addresses will occur."

So, if we were to construct the pings from the IPv6 level we would
want to use link-local source and destination addresses. But, that
raises a question that would need to be addressed - should the pings
be constructed at the IPv6 level, the IPv4 level, or some mid-level
like SEAL?

One other thing about this is that we are specifically not testing
to determine an exact path MTU. We are only trying to answer the
binary question of whether or not the tunnel can pass a 1500 byte
IPv6 packet.

Thanks - Fred
fred.l.temp...@boeing.com

> - Mark
> 
> >
> > --
> > Mikael Abrahamsson    email: swm...@swm.pp.se



RE: MTU handling in 6RD deployments

2014-01-17 Thread Templin, Fred L
Hi Mikael,

> -Original Message-
> From: Mikael Abrahamsson [mailto:swm...@swm.pp.se]
> Sent: Friday, January 17, 2014 12:24 AM
> To: Templin, Fred L
> Cc: ipv6-ops@lists.cluenet.de
> Subject: RE: MTU handling in 6RD deployments
> 
> On Thu, 16 Jan 2014, Templin, Fred L wrote:
> 
> > The key is that we want to probe the path between the BR and CE (in both
> > directions) *before* allowing regular data packets to flow. We want to
> > know ahead of time whether to allow large packets into the tunnel or
> > whether we need to shut the MTU down to 1480 (or 1472 or something) and
> > clamp the MSS. Because, once we restrict the tunnel MTU hosts will be
> > stuck with a degenerate MTU indefinitely or at least for a long time.
> 
> This method makes some sense, but since network conditions can change, I
> would like to see periodic re-checks of the tunnel still working with the
> packet sizes, perhaps pinging itself over the tunnel once per minute with
> the larger packet size if larger packet size is in use.

Thanks for the thought, and I agree that dealing with possible path changes
is required. SEAL says the following:

   "When the ITE is actively sending packets over a subnetwork path to an
   ETE, it also sends explicit probes subject to rate limiting to test
   the path MTU."

I think this might be better than probing once per minute, because it
gives more timely feedback for detecting path MTU changes while packets
are actively flowing and desists if no packets are actively flowing.
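
In rough Python, that behavior amounts to the following (send_probe()
is a hypothetical helper):

    # Traffic-driven, rate-limited path MTU probing; sketch only.
    import time

    PROBE_INTERVAL = 30.0   # at most one probe per interval (arbitrary)
    _last_probe = 0.0

    def on_packet_sent():
        """Called from the forwarding path for each packet entering the tunnel."""
        global _last_probe
        now = time.time()
        # Probing is driven by data traffic, so it stops by itself when
        # the tunnel goes idle; the interval check is the rate limit.
        if now - _last_probe >= PROBE_INTERVAL:
            _last_probe = now
            send_probe()   # hypothetical: 1500-byte probe with DF set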

Thanks - Fred
fred.l.temp...@boeing.com
 
> --
> Mikael Abrahamsson    email: swm...@swm.pp.se


Re: MTU handling in 6RD deployments

2014-01-17 Thread Mark Townsley

On Jan 17, 2014, at 9:24 AM, Mikael Abrahamsson wrote:

> On Thu, 16 Jan 2014, Templin, Fred L wrote:
> 
>> The key is that we want to probe the path between the BR and CE (in both 
>> directions) *before* allowing regular data packets to flow. We want to know 
>> ahead of time whether to allow large packets into the tunnel or whether we 
>> need to shut the MTU down to 1480 (or 1472 or something) and clamp the MSS. 
>> Because, once we restrict the tunnel MTU hosts will be stuck with a 
>> degenerate MTU indefinitely or at least for a long time.
> 
> This method makes some sense, but since network conditions can change, I 
> would like to see periodic re-checks of the tunnel still working with the 
> packet sizes, perhaps pinging itself over the tunnel once per minute with the 
> larger packet size if larger packet size is in use.

Section 8 of RFC 5969 could be relevant here.

- Mark

> 
> -- 
> Mikael Abrahamsson    email: swm...@swm.pp.se



RE: MTU handling in 6RD deployments

2014-01-17 Thread Mikael Abrahamsson

On Thu, 16 Jan 2014, Templin, Fred L wrote:

> The key is that we want to probe the path between the BR and CE (in both
> directions) *before* allowing regular data packets to flow. We want to
> know ahead of time whether to allow large packets into the tunnel or
> whether we need to shut the MTU down to 1480 (or 1472 or something) and
> clamp the MSS. Because, once we restrict the tunnel MTU hosts will be
> stuck with a degenerate MTU indefinitely or at least for a long time.

This method makes some sense, but since network conditions can change, I
would like to see periodic re-checks of the tunnel still working with the
packet sizes, perhaps pinging itself over the tunnel once per minute with
the larger packet size if larger packet size is in use.


--
Mikael Abrahamsson    email: swm...@swm.pp.se


RE: MTU handling in 6RD deployments

2014-01-16 Thread Templin, Fred L
Hi Sander,

> -Original Message-
> From: Sander Steffann [mailto:san...@steffann.nl]
> Sent: Thursday, January 16, 2014 2:45 PM
> To: Templin, Fred L
> Cc: ipv6-ops@lists.cluenet.de
> Subject: Re: MTU handling in 6RD deployments
> 
> Hi,
> 
> > In the reverse direction, when a 6RD BR forwards a packet to a CE
> > router that it hasn't ping'd before (or hasn't ping'd recently),
> > have it ping the CE with a 1520 byte ping. If it gets a reply, set
> > the MTU to the CE to infinity. If it doesn't get a reply, set the
> > MTU to 1480 (or maybe 1472). Again, no fragmentation and reassembly.
> >
> > The only state in the BR then is an MTU value for each CE that it
> > talks to - in the same way ordinary IPv4 nodes maintain a path MTU
> > cache for the destinations they talk to.
> 
> Since we assume that 6RD packets between the BR and the CE go over 
> infrastructure that the ISP
> controls, wouldn't it be easier to just try to send bigger (IPv4) packets 
> from the BR to the CE with
> the DF bit set, and look for PTB messages? On the public internet relying on 
> PTBs might be a bad idea,
> but on controlled infrastructure you might be able to rely on those. If you
> can raise the MTU to 1520
> you should be able to make PTBs work, right? ;-)  It might save an extra 
> roundtrip with a ping and use
> standard ICMP messages and associated state.

The difference is that a PTB is a negative acknowledgement from a
router on the path from the BR to the CE, while a ping reply is a
positive acknowledgment from the CE itself. But, I failed to mention
that the ping would have DF=1, so it would give advantages of both,
i.e., a negative confirmation if the ping is too big for the path
MTU or a positive confirmation that the path MTU is sufficient.
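
As a rough Scapy sketch of that dual confirmation (the target address
is a placeholder):

    # DF=1 probe: an echo reply is positive confirmation from the far
    # end; an ICMPv4 "fragmentation needed" (type 3, code 4) from a
    # router on the path is the negative signal.
    from scapy.all import IP, ICMP, Raw, sr1

    def df_probe(target_v4, total_size=1520):
        payload = b"\x00" * (total_size - 28)   # 20 IPv4 + 8 ICMP headers
        ans = sr1(IP(dst=target_v4, flags="DF") / ICMP() / Raw(payload),
                  timeout=2)
        if ans is None:
            return "lost"       # no signal either way
        icmp = ans.getlayer(ICMP)
        if icmp is not None and icmp.type == 3 and icmp.code == 4:
            return "too-big"    # negative: the PTB carries the path MTU
        return "ok"             # positive: the target itself replied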

The key is that we want to probe the path between the BR and CE
(in both directions) *before* allowing regular data packets to
flow. We want to know ahead of time whether to allow large packets
into the tunnel or whether we need to shut the MTU down to 1480
(or 1472 or something) and clamp the MSS. Because, once we
restrict the tunnel MTU hosts will be stuck with a degenerate
MTU indefinitely or at least for a long time.

Thanks - Fred
fred.l.temp...@boeing.com
 
> Cheers,
> Sander



Re: MTU handling in 6RD deployments

2014-01-16 Thread Sander Steffann
Hi,

> In the reverse direction, when a 6RD BR forwards a packet to a CE
> router that it hasn't ping'd before (or hasn't ping'd recently),
> have it ping the CE with a 1520 byte ping. If it gets a reply, set
> the MTU to the CE to infinity. If it doesn't get a reply, set the
> MTU to 1480 (or maybe 1472). Again, no fragmentation and reassembly.
> 
> The only state in the BR then is an MTU value for each CE that it
> talks to - in the same way ordinary IPv4 nodes maintain a path MTU
> cache for the destinations they talk to. 

Since we assume that 6RD packets between the BR and the CE go over 
infrastructure that the ISP controls, wouldn't it be easier to just try to send 
bigger (IPv4) packets from the BR to the CE with the DF bit set, and look for 
PTB messages? On the public internet relying on PTBs might be a bad idea, but 
on controlled infrastructure you might be able to rely on those. If you can
raise the MTU to 1520 you should be able to make PTBs work, right? ;-)  It 
might save an extra roundtrip with a ping and use standard ICMP messages and 
associated state.

Cheers,
Sander



RE: MTU handling in 6RD deployments

2014-01-16 Thread Templin, Fred L
Here's another idea on 6RD MTU. When a 6RD CE router first comes up,
have it ping the BR with a 1520 byte ping. If it gets a reply, don't
advertise an MTU in RA options and set the MTU to the BR to infinity.
If it doesn't get a reply, advertise an MTU of 1480 (or maybe 1472).
No fragmentation and reassembly are permitted.

In the reverse direction, when a 6RD BR forwards a packet to a CE
router that it hasn't ping'd before (or hasn't ping'd recently),
have it ping the CE with a 1520 byte ping. If it gets a reply, set
the MTU to the CE to infinity. If it doesn't get a reply, set the
MTU to 1480 (or maybe 1472). Again, no fragmentation and reassembly.

The only state in the BR then is an MTU value for each CE that it
talks to - in the same way ordinary IPv4 nodes maintain a path MTU
cache for the destinations they talk to. 

Thanks - Fred
fred.l.temp...@boeing.com 


RE: MTU handling in 6RD deployments

2014-01-14 Thread Martin.Gysi
Hi Tore,

>Does anyone know what tricks, if any, the major 6RD deployments (AT&T,
>Free, Swisscom, others?) are using to alleviate any problems stemming
>from the reduced IPv6 MTU? Some possibilities that come to mind are:

>* Having the 6RD CPE lower the TCP MSS value of SYN packets as they
>enter/exit the tunnel device
>* Having the 6RD BR lower the TCP MSS value in the same way as above
>* Having the 6RD CPE advertise a lowered MTU to the LAN in RA Options
>* Several (or all) of the above in combination

We advertise an MTU of 1472 in the RA Options. We still have a small
number of PPPoE users and the max-payload-tag is not working reliably
enough on third-party devices.

Hence: 1500 Bytes - 20 Bytes IPv4 - 8 Bytes PPPoE = 1472 Bytes

\Martin




RE: MTU handling in 6RD deployments

2014-01-10 Thread Templin, Fred L
Hi Mikael,

> -Original Message-
> From: Mikael Abrahamsson [mailto:swm...@swm.pp.se]
> Sent: Thursday, January 09, 2014 11:11 PM
> To: Templin, Fred L
> Cc: IPv6 Ops list
> Subject: RE: MTU handling in 6RD deployments
> 
> On Thu, 9 Jan 2014, Templin, Fred L wrote:
> 
> > I don't doubt that your experience is valid for the environment you are
> > working in. What I am saying is that there may be many environments
> > where setting IPv4 link MTUs to 1520+ is a viable alternative and then
> > the hosts can see a full 1500+ MTU w/o ICMPs. SEAL detects when such
> > favorable conditions exist and uses limited fragmentation/reassembly
> > only when they don't. Or, if fragmentation/reassembly is deemed
> > unacceptable for the environment, then clamp the MSS.
> 
> 6RD relays can be made cheap because they are stateless. 6RD
> implementation in hosts can be made cheap, because it's easy. SEAL isn't
> stateless (obviously, since it can do re-assembly), thus increasing cost
> and complexity both in host and relay.

I understand. But, SEAL is not heavy-duty and steady state fragmentation
and reassembly is not a desired end condition. Instead, it is a sign that
something is out of tune and needs to be tuned properly. Or, if it can't
be tuned, then fall back to MSS clamping and you are no worse off than
without SEAL.

> So while it might have a technical fit, it isn't really an operational or
> monetary fit right this minute. 6RD is widely implemented today, by the
> time any other mechanism is implemented, the use-case for IPv6 tunneled in
> IPv4 might be much less interesting, hopefully more are moving towards
> IPv4 over native IPv6 for new implementations.

There is an alpha implementation available at:

  linkupnetworks.com/seal/sealv2-1.0.tgz

And, I don't know whether any of us can say what the timeframe is for
all native IPv6 everywhere given that we are close to 20yrs in and still
no end in sight for IPv4. Also, SEAL works for all tunnel combinations
of IPvX-over-IPvY and is not specific to 6rd. So, implementation should
be of interest in the general sense for the longer term.

Thanks - Fred
fred.l.temp...@boeing.com
 
> --
> Mikael Abrahamsson    email: swm...@swm.pp.se


RE: MTU handling in 6RD deployments

2014-01-09 Thread Mikael Abrahamsson

On Thu, 9 Jan 2014, Templin, Fred L wrote:

> I don't doubt that your experience is valid for the environment you are
> working in. What I am saying is that there may be many environments
> where setting IPv4 link MTUs to 1520+ is a viable alternative and then
> the hosts can see a full 1500+ MTU w/o ICMPs. SEAL detects when such
> favorable conditions exist and uses limited fragmentation/reassembly
> only when they don't. Or, if fragmentation/reassembly is deemed
> unacceptable for the environment, then clamp the MSS.

6RD relays can be made cheap because they are stateless. 6RD
implementation in hosts can be made cheap, because it's easy. SEAL isn't
stateless (obviously, since it can do re-assembly), thus increasing cost
and complexity both in host and relay.

So while it might have a technical fit, it isn't really an operational or
monetary fit right this minute. 6RD is widely implemented today; by the
time any other mechanism is implemented, the use-case for IPv6 tunneled
in IPv4 might be much less interesting. Hopefully more are moving towards
IPv4 over native IPv6 for new implementations.


--
Mikael Abrahamsson    email: swm...@swm.pp.se


RE: MTU handling in 6RD deployments

2014-01-09 Thread Templin, Fred L
Hi Ragnar,

> -Original Message-
> From: Anfinsen, Ragnar [mailto:ragnar.anfin...@altibox.no]
> Sent: Thursday, January 09, 2014 11:36 AM
> To: Templin, Fred L; S.P.Zeidler
> Cc: IPv6 Ops list
> Subject: Re: MTU handling in 6RD deployments
> 
On 09.01.14 17:36, "Templin, Fred L" wrote:
> 
> 
> >But, in some environments we might not want the 6rd BRs to suffer from
> >sustained fragmentation and reassembly so a responsible network operator
> >would fix their IPv4 link MTUs to 1520+. If they can't do that and the
> >load on the 6rd BR appears too great, then MSS clamping and a degenerate
> >IPv6 MTU reported to the IPv6 hosts is the only option.
> 
> The problem with your statement

Where do you see a problem with my statement - it agrees with what
you said below:

> is that many L3 access networks do not
> support MTU greater than 1500 on the access port. And if the RA MTU is set
> to 1480, you would not see any problems at all. However, there are some
> retail routers which do not set the MTU to 1480 when using a 6rd tunnel.
> In these cases adjusting the MSS is a good and efficient way of correcting
> that problem. Our experience so far is that MSS-clamping does not have any
> additional CPU load compared to not doing it.

I don't doubt that your experience is valid for the environment you
are working in. What I am saying is that there may be many environments
where setting IPv4 link MTUs to 1520+ is a viable alternative and then
the hosts can see a full 1500+ MTU w/o ICMPs. SEAL detects when such
favorable conditions exist and uses limited fragmentation/reassembly
only when they don't. Or, if fragmentation/reassembly is deemed
unacceptable for the environment, then clamp the MSS.

Thanks - Fred
fred.l.temp...@boeing.com
 
> /Ragnar
> 



Re: MTU handling in 6RD deployments

2014-01-09 Thread Anfinsen, Ragnar
On 09.01.14 17:36, "Templin, Fred L" wrote:


>But, in some environments we might not want the 6rd BRs to suffer from
>sustained fragmentation and reassembly so a responsible network operator
>would fix their IPv4 link MTUs to 1520+. If they can't do that and the
>load on the 6rd BR appears too great, then MSS clamping and a degenerate
>IPv6 MTU reported to the IPv6 hosts is the only option.

The problem with your statement is that many L3 access networks do not
support MTU greater than 1500 on the access port. And if the RA MTU is set
to 1480, you would not see any problems at all. However, there are some
retail routers which do not set the MTU to 1480 when using a 6rd tunnel.
In these cases adjusting the MSS is a good and efficient way of correcting
that problem. Our experience so far is that MSS-clamping does not have any
additional CPU load compared to not doing it.

/Ragnar




Re: MTU handling in 6RD deployments

2014-01-09 Thread Anfinsen, Ragnar
On 09.01.14 16:56, "Templin, Fred L" wrote:


>Hi Ragnar,

Hi Fred.

>What is the MTU as seen by the IPv6 hosts - 1480? Something less?

Yes. Since we set the MSS to 1420, which is 20 bytes lower than the
default 1440, we only see 1480-sized packets for IPv6.

>Would it not be better if they could see 1500+?

Yes, but then we would need to upgrade all our access routers, which then
again would give us the possibility to do native Dual Stack.

/Ragnar




RE: MTU handling in 6RD deployments

2014-01-09 Thread Templin, Fred L
Hi spz,

> -Original Message-
> From: S.P.Zeidler [mailto:s...@serpens.de]
> Sent: Thursday, January 09, 2014 8:22 AM
> To: Templin, Fred L
> Cc: IPv6 Ops list
> Subject: Re: MTU handling in 6RD deployments
> 
> Thus wrote Templin, Fred L (fred.l.temp...@boeing.com):
> 
> > What is the MTU as seen by the IPv6 hosts - 1480? Something less?
> > Would it not be better if they could see 1500+?
> 
> Is this about the "let's improve a case of flu (router generates too many
> Packet Too Big ICMP) with bubonic plague (let the router do both packet
> fragmentation (wasn't that explicitly forbidden in IPv6?) and packet
> reassembly)" idea?

Fragmentation and reassembly would not happen at the IPv6 level; they
would happen at a sub-layer below IPv6 and above IPv4. So, there is no
violation of the IPv6 standard since the tunnel endpoint is acting as
a "host" when it encapsulates IPv6 packets.

But, in some environments we might not want the 6rd BRs to suffer from
sustained fragmentation and reassembly so a responsible network operator
would fix their IPv4 link MTUs to 1520+. If they can't do that and the
load on the 6rd BR appears too great, then MSS clamping and a degenerate
IPv6 MTU reported to the IPv6 hosts is the only option.

This is not a "one size fits all" solution for all 6rd domains; some
might be better able to manage their IPv4 link MTUs and/or accept
steady-state fragmentation than others.

Thanks - Fred
fred.l.temp...@boeing.com
 
> regards,
>   spz
> --
> s...@serpens.de (S.P.Zeidler)


Re: MTU handling in 6RD deployments

2014-01-09 Thread S.P.Zeidler
Thus wrote Templin, Fred L (fred.l.temp...@boeing.com):

> What is the MTU as seen by the IPv6 hosts - 1480? Something less?
> Would it not be better if they could see 1500+?

Is this about the "let's improve a case of flu (router generates too many
Packet Too Big ICMP) with bubonic plague (let the router do both packet
fragmentation (wasn't that explicitly forbidden in IPv6?) and packet
reassembly)" idea?

regards,
spz
-- 
s...@serpens.de (S.P.Zeidler)


RE: MTU handling in 6RD deployments

2014-01-09 Thread Templin, Fred L
Hi Ragnar,

What is the MTU as seen by the IPv6 hosts - 1480? Something less?
Would it not be better if they could see 1500+?

Thanks - Fred
fred.l.temp...@boeing.com

> -Original Message-
> From: ipv6-ops-bounces+fred.l.templin=boeing@lists.cluenet.de 
> [mailto:ipv6-ops-
> bounces+fred.l.templin=boeing@lists.cluenet.de] On Behalf Of Anfinsen, 
> Ragnar
> Sent: Thursday, January 09, 2014 5:35 AM
> To: IPv6 Ops list
> Subject: RE: MTU handling in 6RD deployments
> 
> Hi all.
> 
> We have now changed and upgraded our BR to handle MTU in a sensible manner.
> 
> We are not able to set the IPv4 MTU to more than 1500 due to limitations in 
> our Cisco Cat4500
> platform. So we have to reduce the MTU to 1480 somehow.
> 
> Our other problem is that, by mistake, our RA MTU size is set to 1500, and 
> not 1480, as it should be.
> This will be fixed in the next firmware we push out.
> 
> Due to the problem with the MTU size, we had a lot of ICMPv6 PTB packets
> being returned from the Cisco 1K unit (BR). This in turn meant that the
> rate-limiting feature in the 1K dropped a lot of the PTB messages going
> back to the sender, making the IPv6 experience sluggish. A typical problem
> was that pictures and such timed out before they were loaded.
> 
> To fix this issue, we first of all upgraded our 1K's to the latest IOS (Cisco 
> IOS XE Software, Version
> 03.11.00.S) which has the MSS-Clamping feature. This feature was added by 
> Cisco in November. Then we
> added the command "ipv6 tcp adjust-mss 1420" to the 6rd tunnel interface.
> 
> With this command set, all ICMPv6 PTB packets have disappeared, and the
> whole IPv6 experience has become a whole lot better with snappy loading
> of pages and pictures. So for TCP everything is good. As soon as we have
> fixed our firmware, this problem should be gone altogether.
> 
> Thanks to Tore Anderson for pointing this out to us.
> 
> Hopefully this is useful for someone.
> 
> Best Regards
> Ragnar Anfinsen
> 
> Chief Architect CPE
> IPv6/IPv4 Architect
> Infrastructure
> Technology
> Altibox AS
> 
> Phone: +47 51 90 80 00
> Phone direct: +47 51 90 82 35
> Mobile +47 93 48 82 35
> E-mail: ragnar.anfin...@altibox.no
> Skype: ragnar_anfinsen
> www.altibox.no
> 
> 
> 



RE: MTU handling in 6RD deployments

2014-01-09 Thread Anfinsen, Ragnar
Hi all.

We have now changed and upgraded our BR to handle MTU in a sensible manner.

We are not able to set the IPv4 MTU to more than 1500 due to limitations in our 
Cisco Cat4500 platform. So we have to reduce the MTU to 1480 somehow.

Our other problem is that, by mistake, our RA MTU size is set to 1500, and not 
1480, as it should be. This will be fixed in the next firmware we push out.

Due to the problem with the MTU size, we had a lot of ICMPv6 PTB packets
being returned from the Cisco 1K unit (BR). This in turn meant that the
rate-limiting feature in the 1K dropped a lot of the PTB messages going
back to the sender, making the IPv6 experience sluggish. A typical problem
was that pictures and such timed out before they were loaded.

To fix this issue, we first of all upgraded our 1K's to the latest IOS (Cisco 
IOS XE Software, Version 03.11.00.S) which has the MSS-Clamping feature. This 
feature was added by Cisco in November. Then we added the command "ipv6 tcp 
adjust-mss 1420" to the 6rd tunnel interface.
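
(For reference, 1420 is simply the tunnel MTU minus the fixed headers:
1480 - 40 bytes IPv6 - 20 bytes TCP = 1420, so a clamped TCP flow can
never produce a packet larger than the 1480-byte tunnel MTU.)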

With this command set, all ICMPv6 PTB packets have disappeared, and the
whole IPv6 experience has become a whole lot better with snappy loading of
pages and pictures. So for TCP everything is good. As soon as we have fixed
our firmware, this problem should be gone altogether.

Thanks to Tore Anderson for pointing this out to us.

Hopefully this is useful for someone.

Best Regards 
Ragnar Anfinsen

Chief Architect CPE
IPv6/IPv4 Architect
Infrastructure
Technology
Altibox AS 

Phone: +47 51 90 80 00 
Phone direct: +47 51 90 82 35 
Mobile +47 93 48 82 35
E-mail: ragnar.anfin...@altibox.no
Skype: ragnar_anfinsen
www.altibox.no


  




RE: MTU handling in 6RD deployments

2014-01-07 Thread Templin, Fred L
Hi again,

> Second (and more importantly) reassembly is not needed
> for packets of any size if the path can pass a 1500 byte ping packet.

I should have qualified this by saying that the mechanism still
works even if the BR responds to pings subject to rate limiting.

Thanks - Fred
fred.l.temp...@boeing.com



Re: MTU handling in 6RD deployments

2014-01-07 Thread Anfinsen, Ragnar
On 07.01.14 17:10, jean-francois.tremblay...@videotron.com wrote:


>> How many users use your 6rd BR (per BR if many)?
>
>50k on a pair of ASR1002-5G, but the second is mostly idle.

We have about 2K for the time being as we do opt-in.

> 
>
>> How do the rate-limiting (drop) numbers look at your side?
>
>Actually, it's quite high (over 50%). I gave up on reaching zero here.
>
>The nature of the traffic seems to be bursty enough that getting to zero
>will be nearly impossible.

As of right now, we have an IPv6 ICMP packet drop rate of 0.09%. If you
want to hit the 0% mark, you must disable rate-limiting. However, seen
from an operational point of view, turning off ICMP rate-limiting makes
your 6rd BR vulnerable to ICMP ping attacks.

/Ragnar




RE: MTU handling in 6RD deployments

2014-01-07 Thread Templin, Fred L
Hi Tore,

> -Original Message-
> From: Tore Anderson [mailto:t...@fud.no]
> Sent: Tuesday, January 07, 2014 9:57 AM
> To: Templin, Fred L; IPv6 Ops list
> Subject: Re: MTU handling in 6RD deployments
> 
> * Templin, Fred L
> 
> > 6RD could use SEAL the same as any tunneling technology. SEAL makes
> > sure that packets up to 1500 get through no matter what, and lets
> > bigger packets through (as long as they fit the first-hop MTU) with
> > the expectation that hosts sending the bigger packets know what they
> > are doing. It works as follows:
> >
> >   - tunnel ingress pings the egress with a 1500 byte ping
> >   - if the ping succeeds, the path MTU is big enough to
> > accommodate 1500s w/o fragmentation
> >   - if the ping fails, use fragmentation/reassembly to
> > accommodate 1500 and smaller
> >   - end result - IPv6 hosts always see an MTU of at least 1500
> 
> In order for the BR to support reassembly it must maintain state. That's
> going to have a very negative impact on its scaling properties...

A couple of things about this. First, reassembly is used only for packets
in the range of 1280-1500 bytes (smaller and larger packets are passed
w/o fragmentation). Second (and more importantly) reassembly is not needed
for packets of any size if the path can pass a 1500 byte ping packet. So
(as Ole said a few messages back) if the 6rd domain MTU can be made to be
>=1520 the fragmentation and reassembly process is suppressed and only
whole packets are transmitted.
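
Schematically, the ingress-side size classes are (rough Python, with the
boundaries taken from the paragraph above):

    # SEAL ingress size classes; sketch only.
    IPV6_MIN_MTU = 1280
    REASM_CEILING = 1500   # fragmentation/reassembly only up to here

    def classify(pkt_len, path_passes_1500, first_hop_mtu):
        if path_passes_1500 or pkt_len < IPV6_MIN_MTU:
            return "send whole"   # no fragmentation needed
        if pkt_len <= REASM_CEILING:
            return "fragment at the SEAL sub-layer"
        # Larger packets pass whole as long as they fit the first hop;
        # senders using them are expected to run RFC 4821 probing.
        return "send whole" if pkt_len <= first_hop_mtu else "drop, send PTB"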

Thanks - Fred
fred.l.temp...@boeing.com

> Tore



Re: MTU handling in 6RD deployments

2014-01-07 Thread Tore Anderson
* Templin, Fred L

> 6RD could use SEAL the same as any tunneling technology. SEAL makes
> sure that packets up to 1500 get through no matter what, and lets
> bigger packets through (as long as they fit the first-hop MTU) with
> the expectation that hosts sending the bigger packets know what they
> are doing. It works as follows:
> 
>   - tunnel ingress pings the egress with a 1500 byte ping
>   - if the ping succeeds, the path MTU is big enough to
> accommodate 1500s w/o fragmentation
>   - if the ping fails, use fragmentation/reassembly to
> accommodate 1500 and smaller
>   - end result - IPv6 hosts always see an MTU of at least 1500

In order for the BR to support reassembly it must maintain state. That's
going to have a very negative impact on its scaling properties...

Tore



Re: MTU handling in 6RD deployments

2014-01-07 Thread Jean-Francois . TremblayING
> How many users use your 6rd BR (per BR if many)?

50k on a pair of ASR1002-5G, but the second is mostly idle. 

> How do the rate-limiting (drop) numbers look at your side?

Actually, it's quite high (over 50%). I gave up on reaching zero here. 
The nature of the traffic seems to be bursty enough that getting to zero 
will be nearly impossible. 

> We hit a problem where the default rate-limiting (100/10) was way too
> aggressive. Right now, we have reduced it to minimum (1/200), but we still
> see some drops. Next step would be to turn rate-limiting off, but from an
> operational perspective, it does not taste very good.

Agreed. We might actually try these numbers, or even disable it, and see
what happens CPU-wise.

/JF


RE: MTU handling in 6RD deployments

2014-01-07 Thread Templin, Fred L
6RD could use SEAL the same as any tunneling technology. SEAL makes
sure that packets up to 1500 get through no matter what, and lets
bigger packets through (as long as they fit the first-hop MTU) with
the expectation that hosts sending the bigger packets know what they
are doing. It works as follows:

  - tunnel ingress pings the egress with a 1500 byte ping
  - if the ping succeeds, the path MTU is big enough to
accommodate 1500s w/o fragmentation
  - if the ping fails, use fragmentation/reassembly to
accommodate 1500 and smaller
  - end result - IPv6 hosts always see an MTU of at least 1500

http://tools.ietf.org/html/draft-templin-intarea-seal

Thanks - Fred
fred.l.temp...@boeing.com

> -Original Message-
> From: ipv6-ops-bounces+fred.l.templin=boeing@lists.cluenet.de 
> [mailto:ipv6-ops-
> bounces+fred.l.templin=boeing@lists.cluenet.de] On Behalf Of Tore Anderson
> Sent: Tuesday, January 07, 2014 3:38 AM
> To: IPv6 Ops list
> Subject: MTU handling in 6RD deployments
> 
> Hi list,
> 
> Does anyone know what tricks, if any, the major 6RD deployments (AT&T,
> Free, Swisscom, others?) are using to alleviate any problems stemming
> from the reduced IPv6 MTU? Some possibilities that come to mind are:
> 
> * Having the 6RD CPE lower the TCP MSS value of SYN packets as they
> enter/exit the tunnel device
> * Having the 6RD BR lower the TCP MSS value in the same way as above
> * Having the 6RD CPE advertise a lowered MTU to the LAN in RA Options
> * Several (or all) of the above in combination
> 
> Also, given that some ISPs offer [only] Layer-2 service and expect/allow
> their customers to bring their own Layer-3 home gateway if they want
> one, I would find it interesting to learn if any of the most common
> off-the-shelf home gateway products (that enable 6RD by default) also
> implement any such tricks by default or not.
> 
> Tore


Re: MTU handling in 6RD deployments

2014-01-07 Thread Simon Perreault

On 2014-01-07 10:18, Mark Townsley wrote:

> And generating stinkin' ICMPv6 too big messages ends up being perhaps the
> most significant scaling factor of a 6rd BR deployment...

The worst thing is a lot of content providers will simply ignore those
too bigs you worked so hard to produce... *sigh*


Native IPv6 FTW.

Simon
--
DTN made easy, lean, and smart --> http://postellation.viagenie.ca
NAT64/DNS64 open-source--> http://ecdysis.viagenie.ca
STUN/TURN server   --> http://numb.viagenie.ca


Re: MTU handling in 6RD deployments

2014-01-07 Thread Mark Townsley

And generating stinkin' ICMPv6 too big messages ends up being perhaps the most 
significant scaling factor of a 6rd BR deployment...

- Mark

On Jan 7, 2014, at 3:59 PM, Simon Perreault wrote:

> On 2014-01-07 08:46, jean-francois.tremblay...@videotron.com wrote:
>> In the list of "tricks", you might want to add:
>> * Slightly raise the ICMPv6 rate-limit values for your 6RD BR (we do 50/20)
> 
> Yeah, this is really problematic. When IPv6 packets arrive at the BR from the 
> Internet, the BR needs to send too bigs so that the remote node can do PMTUD 
> correctly and figure out the 1480 MTU. If you rate-limit those too bigs, you 
> create black holes. You need to expect a lot of too bigs to be generated by 
> the BR in regular operation, even if the CPE uses tricks such as TCP MSS 
> adjustment or advertising 1480 in RA, because we still need to live with 
> non-TCP traffic and nodes that don't understand the MTU param in RAs.
> 
> Simon
> -- 
> DTN made easy, lean, and smart --> http://postellation.viagenie.ca
> NAT64/DNS64 open-source--> http://ecdysis.viagenie.ca
> STUN/TURN server   --> http://numb.viagenie.ca



Re: MTU handling in 6RD deployments

2014-01-07 Thread Simon Perreault

On 2014-01-07 08:46, jean-francois.tremblay...@videotron.com wrote:

> In the list of "tricks", you might want to add:
> * Slightly raise the ICMPv6 rate-limit values for your 6RD BR (we do 50/20)


Yeah, this is really problematic. When IPv6 packets arrive at the BR 
from the Internet, the BR needs to send too bigs so that the remote node 
can do PMTUD correctly and figure out the 1480 MTU. If you rate-limit 
those too bigs, you create black holes. You need to expect a lot of too 
bigs to be generated by the BR in regular operation, even if the CPE 
uses tricks such as TCP MSS adjustment or advertising 1480 in RA, 
because we still need to live with non-TCP traffic and nodes that don't 
understand the MTU param in RAs.
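
The BR-side logic for inbound traffic is basically this (rough Python;
helper names hypothetical):

    # Why PTBs are unavoidable at the BR for inbound native-IPv6 traffic.
    TUNNEL_MTU = 1480   # 1500-byte IPv4 path minus 20-byte encapsulation

    def br_forward_inbound(ipv6_len, send_ptb, encapsulate_and_send):
        if ipv6_len > TUNNEL_MTU:
            # The remote IPv6 sender has to shrink its packets; these
            # are the too-bigs that rate limiting ends up dropping.
            send_ptb(mtu=TUNNEL_MTU)
            return
        encapsulate_and_send()   # add the 20-byte IPv4 header, forward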


Simon
--
DTN made easy, lean, and smart --> http://postellation.viagenie.ca
NAT64/DNS64 open-source--> http://ecdysis.viagenie.ca
STUN/TURN server   --> http://numb.viagenie.ca


Re: MTU handling in 6RD deployments

2014-01-07 Thread Anfinsen, Ragnar
On 07.01.14 14:46, jean-francois.tremblay...@videotron.com wrote:


>In the list of "tricks", you might want to add:
>* Slightly raise the ICMPv6 rate-limit values for your 6RD BR (we do
>50/20)

How many users use your 6rd BR (per BR if many)?

>Too bigs remain quite common however...
>#sh ipv6 traffic | in too
>   11880 encapsulation failed, 0 no route, 3829023354 too big
>#sh ver | in upt 
>uptime is 2 years, 4 weeks, 5 days, 4 hours, 3 minutes

How do the rate-limiting (drop) numbers look at your side?

We hit a problem where the default rate-limiting (100/10) was way too
aggressive. Right now, we have reduced it to minimum (1/200), but we still
see some drops. Next step would be to turn rate-limiting off, but from an
operational perspective, it does not taste very good.

 
Best Regards 
Ragnar Anfinsen
 
Chief Architect CPE
IPv6 Architect
Netinfrastructure & Communication
Technology and Innovation
Altibox AS 

Phone: +47 51 90 80 00
Phone direct: +47 51 90 82 35
Mobile +47 93 48 82 35
E-mail: ragnar.anfin...@altibox.no 
Skype: ragnar_anfinsen
www.altibox.no 




Re: MTU handling in 6RD deployments

2014-01-07 Thread Jean-Francois . TremblayING
> From: Gert Doering
> 
> "Have a higher IPv4 MTU between the 6rd tunnel endpoints"  sounds like
> a nice solution an ISP could deploy.

Docsis MTU is 1518 bytes, so that won't happen any time soon in the cable 
world. 
(Docsis 3.1 is higher at 2000 bytes, but that's years away)

/JF



Re: MTU handling in 6RD deployments

2014-01-07 Thread Tore Anderson
* Gert Doering

> "Have a higher IPv4 MTU between the 6rd tunnel endpoints"  sounds like
> a nice solution an ISP could deploy.

True, well, in theory anyway.

The reason I didn't include this in my list was that, considering the
whole point of 6RD is to bypass limitations of old rusty gear that doesn't
support fancy features like IPv6, the chances of that old rusty gear being
able to reliably support jumbo frames weren't very high either.

Tore



RE: MTU handling in 6RD deployments

2014-01-07 Thread Jean-Francois . TremblayING
Hi Tore. 

> Does anyone know what tricks, if any, the major 6RD deployments (AT&T,
> Free, Swisscom, others?) are using to alleviate any problems stemming
> from the reduced IPv6 MTU? Some possibilities that come to mind are:
> 
> * Having the 6RD CPE lower the TCP MSS value of SYN packets as they
> enter/exit the tunnel device
> * Having the 6RD BR lower the TCP MSS value in the same way as above
> * Having the 6RD CPE advertise a lowered MTU to the LAN in RA Options
> * Several (or all) of the above in combination

Our managed CPEs (D-Links) send (IPv4 MTU) - 20 bytes in RAs, usually 
1480.

In the list of "tricks", you might want to add: 
* Slightly raise the ICMPv6 rate-limit values for your 6RD BR (we do 
50/20)

I haven't seen IPv6 MSS clamping in the wild yet (it was discussed on 
this list a year ago). 

> Also, given that some ISPs offer [only] Layer-2 service and expect/allow
> their customers to bring their own Layer-3 home gateway if they want
> one, I would find it interesting to learn if any of the most common
> off-the-shelf home gateway products (that enable 6RD by default) also
> implement any such tricks by default or not.

From off-the-shelf, we see mostly D-Links and Cisco/Linksys/Belkin
with option 212 support. A few Asus models started showing up in the 
stats in 2013 I believe. Last time I checked, all models supporting 
option 212 also reduced their MTU properly (YMMV here, that was almost a 
year ago).

Too bigs remain quite common however... 
#sh ipv6 traffic | in too
   11880 encapsulation failed, 0 no route, 3829023354 too big
#sh ver | in upt
uptime is 2 years, 4 weeks, 5 days, 4 hours, 3 minutes

If 6lab's data is right, roughly half of Canada's IPv6 users go through 
that box (50k users).

/JF



Re: MTU handling in 6RD deployments

2014-01-07 Thread Gert Doering
Hi,

On Tue, Jan 07, 2014 at 12:37:39PM +0100, Tore Anderson wrote:
> Does anyone know what tricks, if any, the major 6RD deployments (AT&T,
> Free, Swisscom, others?) are using to alleviate any problems stemming
> from the reduced IPv6 MTU? Some possibilities that come to mind are:

"Have a higher IPv4 MTU between the 6rd tunnel endpoints"  sounds like
a nice solution an ISP could deploy.

But I've long given up hope after everybody seems to agree that 1492
is just fine.

Gert Doering
-- NetMaster
-- 
have you enabled IPv6 on something today...?

SpaceNet AGVorstand: Sebastian v. Bomhard
Joseph-Dollinger-Bogen 14  Aufsichtsratsvors.: A. Grundner-Culemann
D-80807 Muenchen   HRB: 136055 (AG Muenchen)
Tel: +49 (0)89/32356-444   USt-IdNr.: DE813185279


Re: MTU handling in 6RD deployments

2014-01-07 Thread Mikael Abrahamsson

On Tue, 7 Jan 2014, Mark Townsley wrote:

> Note I've heard some ISPs consider running Jumbo Frames under the covers
> so that IPv4 could carry 1520 and 1500 would be possible for IPv6, but I
> have not yet seen that confirmed in practice.

Unless this is done in a very controlled environment I'd say this is
bordering on the impossible. There are so many failure points for a jumbo
solution it's scary. Most of them are also silent failures of PMTUD,
basically blackholing traffic.

Yes, it can be done of course, but I'd say operationally it's easier to
just drop the MTU to 1480, which is known to work, than to go with the
jumbo alternative.


--
Mikael Abrahamsson    email: swm...@swm.pp.se


Re: MTU handling in 6RD deployments

2014-01-07 Thread Mark Townsley

On Jan 7, 2014, at 12:56 PM, Emmanuel Thierry wrote:

> Hello,
> 
> On Jan 7, 2014, at 12:37 PM, Tore Anderson wrote:
> 
>> Hi list,
>> 
>> Does anyone know what tricks, if any, the major 6RD deployments (AT&T,
>> Free, Swisscom, others?) are using to alleviate any problems stemming
>> from the reduced IPv6 MTU? Some possibilities that come to mind are:
>> 
>> * Having the 6RD CPE lower the TCP MSS value of SYN packets as they
>> enter/exit the tunnel device
>> * Having the 6RD BR lower the TCP MSS value in the same way as above
>> * Having the 6RD CPE advertise a lowered MTU to the LAN in RA Options
> 
> For your information, I see an advertised MTU of 1480 on my WiFi interface
> with the Free CPE.

Section 9.1 of RFC 5969:

   If the MTU is well-managed such that the IPv4 MTU on the CE WAN side
   interface is set so that no fragmentation occurs within the boundary
   of the SP, then the 6rd Tunnel MTU should be set to the known IPv4
   MTU minus the size of the encapsulating IPv4 header (20 bytes).  For
   example, if the IPv4 MTU is known to be 1500 bytes, the 6rd Tunnel
   MTU might be set to 1480 bytes.  Absent more specific information,
   the 6rd Tunnel MTU SHOULD default to 1280 bytes.
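
In code form, that rule is simply (rough Python):

    # RFC 5969 section 9.1 tunnel MTU rule, as quoted above.
    def tunnel_mtu(known_ipv4_mtu=None):
        if known_ipv4_mtu is not None:
            return known_ipv4_mtu - 20   # strip the encapsulating IPv4 header
        return 1280                      # absent more specific information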

Note I've heard some ISPs consider running Jumbo Frames under the covers so
that IPv4 could carry 1520 and 1500 would be possible for IPv6, but I have
not yet seen that confirmed in practice.

- Mark

> 
>> * Several (or all) of the above in combination
>> 
>> Also, given that some ISPs offer [only] Layer-2 service and expect/allow
>> their customers to bring their own Layer-3 home gateway if they want
>> one, I would find it interesting to learn if any of the most common
>> off-the-shelf home gateway products (that enable 6RD by default) also
>> implement any such tricks by default or not.
>> 
> 
> Best regards
> Emmanuel Thierry
> 



Re: MTU handling in 6RD deployments

2014-01-07 Thread Ole Troan
> Does anyone know what tricks, if any, the major 6RD deployments (AT&T,
> Free, Swisscom, others?) are using to alleviate any problems stemming
> from the reduced IPv6 MTU? Some possibilities that come to mind are:
> 
> * Having the 6RD CPE lower the TCP MSS value of SYN packets as they
> enter/exit the tunnel device
> * Having the 6RD BR lower the TCP MSS value in the same way as above
> * Having the 6RD CPE advertise a lowered MTU to the LAN in RA Options
> * Several (or all) of the above in combination

ensure the 6rd domain MTU is >=1520.

> Also, given that some ISPs offer [only] Layer-2 service and expect/allow
> their customers to bring their own Layer-3 home gateway if they want
> one, I would find it interesting to learn if any of the most common
> off-the-shelf home gateway products (that enable 6RD by default) also
> implement any such tricks by default or not.

cheers,
Ole





Re: MTU handling in 6RD deployments

2014-01-07 Thread Emmanuel Thierry
Hello,

On Jan 7, 2014, at 12:37 PM, Tore Anderson wrote:

> Hi list,
> 
> Does anyone know what tricks, if any, the major 6RD deployments (AT&T,
> Free, Swisscom, others?) are using to alleviate any problems stemming
> from the reduced IPv6 MTU? Some possibilities that come to mind are:
> 
> * Having the 6RD CPE lower the TCP MSS value of SYN packets as they
> enter/exit the tunnel device
> * Having the 6RD BR lower the TCP MSS value in the same way as above
> * Having the 6RD CPE advertise a lowered MTU to the LAN in RA Options

For your information, I see an advertised MTU of 1480 on my WiFi interface
with the Free CPE.

> * Several (or all) of the above in combination
> 
> Also, given that some ISPs offer [only] Layer-2 service and expect/allow
> their customers to bring their own Layer-3 home gateway if they want
> one, I would find it interesting to learn if any of the most common
> off-the-shelf home gateway products (that enable 6RD by default) also
> implement any such tricks by default or not.
> 

Best regards
Emmanuel Thierry



MTU handling in 6RD deployments

2014-01-07 Thread Tore Anderson
Hi list,

Does anyone know what tricks, if any, the major 6RD deployments (AT&T,
Free, Swisscom, others?) are using to alleviate any problems stemming
from the reduced IPv6 MTU? Some possibilities that come to mind are:

* Having the 6RD CPE lower the TCP MSS value of SYN packets as they
enter/exit the tunnel device
* Having the 6RD BR lower the TCP MSS value in the same way as above
* Having the 6RD CPE advertise a lowered MTU to the LAN in RA Options
* Several (or all) of the above in combination

Also, given that some ISPs offer [only] Layer-2 service and expect/allow
their customers to bring their own Layer-3 home gateway if they want
one, I would find it interesting to learn if any of the most common
off-the-shelf home gateway products (that enable 6RD by default) also
implement any such tricks by default or not.

Tore