Re: IP tunnel MTU

2012-10-30 Thread Tim Franklin
 Certainly fixing all the buggy host stacks, firewall and compliance devices 
 to realize that ICMP isn't bad won't be hard.

 Wait till you get started on fixing the security consultants.

Ack.  I've yet to come across a *device* that doesn't deal properly with 
packet too big.  Lots (and lots and lots) of security people, one or two 
applications, but no devices.

Regards,
Tim.





Re: IP tunnel MTU

2012-10-30 Thread Sander Steffann
Hi,

 Certainly fixing all the buggy host stacks, firewall and compliance devices 
 to realize that ICMP isn't bad won't be hard.
 
 Wait till you get started on fixing the security consultants.
 
 Ack.  I've yet to come across a *device* that doesn't deal properly with 
 packet too big.  Lots (and lots and lots) of security people, one or two 
 applications, but no devices.


I know of one: Juniper SSG and SRX boxes used to block IPv6 ICMP errors when 
the screening option 'big ICMP packets' was enabled, because that screen blocked 
all (v4 and v6) ICMP packets bigger than 1024 bytes, and IPv6 ICMP errors are 
often 1280 bytes. I don't know if that has been fixed yet.

- Sander




Re: IP tunnel MTU

2012-10-30 Thread Jeroen Massar
On 2012-10-30 11:19, Sander Steffann wrote:
 Hi,
 
 Certainly fixing all the buggy host stacks, firewall and compliance 
 devices to realize that ICMP isn't bad won't be hard.

 Wait till you get started on fixing the security consultants.

 Ack.  I've yet to come across a *device* that doesn't deal properly with 
 packet too big.  Lots (and lots and lots) of security people, one or two 
 applications, but no devices.
 
 
 I know of one: Juniper SSG and SRX boxes used to block IPv6 ICMP errors when 
 the screening option 'big ICMP packets' was enabled because it blocked all 
 (v4 and v6) ICMP packets bigger than 1024 bytes and IPv6 ICMP errors are 
 often 1280 bytes. I don't know if that has been fixed yet.

I do not see them fixing that either: if one configures a box to
filter big ICMP packets, you get exactly that; it will filter those packets.

In the same way as folks misconfiguring hosts to drop ICMP in general, etc.

One cannot solve stupid people as they will do stupid things.

Greets,
 Jeroen




RE: IP tunnel MTU

2012-10-30 Thread Templin, Fred L
Hi Chris,

 -Original Message-
 From: Chris Woodfield [mailto:rek...@semihuman.com]
 Sent: Monday, October 29, 2012 4:40 PM
 To: Templin, Fred L
 Cc: William Herrin; Ray Soucy; NANOG list
 Subject: Re: IP tunnel MTU
 
 True, but it could be used as an alternative PMTUD algorithm - raise the
 segment size and wait for the "I got this as fragments" option to show
 up...

Yes; it is a very attractive option on the surface. Steve Deering
called it Report Fragmentation (RF) when he first proposed it
back in 1988, but it didn't gain sufficient traction and what we
got instead was RFC1191.

As I mentioned, SEAL does this already but in a best effort
fashion. SEAL will work over paths that don't conform well to
the RF model, but will derive some useful benefit from paths
that do.
 
 Of course, this only works for IPv4. IPv6 users are SOL if something in
 the middle is dropping ICMPv6.

Sad, but true.

Thanks - Fred
fred.l.temp...@boeing.com

 -C
 
 On Oct 29, 2012, at 4:02 PM, Templin, Fred L wrote:
 
  Hi Bill,
 
  Maybe something as simple as clearing the don't fragment flag and
  adding a TCP option to report receipt of a fragmented packet along
  with the fragment sizes back to the sender so he can adjust his mss to
  avoid fragmentation.
 
  That is in fact what SEAL is doing, but there is no guarantee
  that the size of the largest fragment is going to be an accurate
  reflection of the true path MTU. RFC1812 made sure of that when
  it more or less gave IPv4 routers permission to fragment packets
  pretty much any way they want.
 
  Thanks - Fred
  fred.l.temp...@boeing.com
 




Re: IP tunnel MTU

2012-10-29 Thread Ray Soucy
The core issue here is TCP MSS. PMTUD is a dynamic process for
adjusting MSS, but requires that ICMP be permitted for the path MTU
to be discovered.  The realistic alternative, in a world that filters
all ICMP traffic, is to manually rewrite the MSS.  In IOS this can be
achieved via "ip tcp adjust-mss"; on Linux-based systems, netfilter
can be used to adjust MSS.

Keep in mind that the MSS will be smaller than your MTU.
Consider the following example:

 ip mtu 1480
 ip tcp adjust-mss 1440
 tunnel mode ipip

An IPv4 header is 20 bytes, leaving 1480 bytes for data.  So
for an IP-in-IP tunnel, you'd set the MTU of your tunnel interface to
1480.  Subtract another 20 bytes for the tunneled IP header and 20
bytes (typical) for your TCP header and you're left with 1440 bytes
for data in a TCP connection.  So in this case we write the MSS as
1440.

I use IP-in-IP as an example because it's simple.  GRE tunnels can be
a little more complex.  While the GRE header is typically 4 bytes, it
can grow up to 16 bytes depending on options used.

So for a typical GRE tunnel (4-byte header), you would subtract 20
bytes for the IP header and 4 bytes for the GRE header from your base
MTU of 1500.  This would mean an MTU of 1476, and a TCP MSS of 1436.

Keep in mind that a TCP header can be up to 60 bytes in length, so you
may want to subtract more than the typical 20 bytes when computing your
MSS if you're seeing problems.
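The arithmetic above can be sketched as a short calculation. This is an illustration only, using the typical header sizes quoted in the post; the function names are my own, and real tunnels may carry larger headers when options are used:

```python
# Sketch of the MTU/MSS arithmetic above, using typical header sizes:
# IPv4 header 20 bytes (no options), GRE base header 4 bytes (up to 16
# with options), TCP header 20 bytes (up to 60 with options).

BASE_MTU = 1500   # typical Ethernet path MTU
IP_HDR = 20       # IPv4 header without options
GRE_HDR = 4       # base GRE header
TCP_HDR = 20      # TCP header without options

def tunnel_mtu(base_mtu, encap_overhead):
    """MTU available inside the tunnel after encapsulation overhead."""
    return base_mtu - encap_overhead

def tcp_mss(mtu):
    """MSS to advertise/clamp for TCP carried over that MTU."""
    return mtu - IP_HDR - TCP_HDR

# IP-in-IP: only an outer IP header is added.
ipip_mtu = tunnel_mtu(BASE_MTU, IP_HDR)           # 1480
ipip_mss = tcp_mss(ipip_mtu)                      # 1440

# GRE: outer IP header plus the 4-byte base GRE header.
gre_mtu = tunnel_mtu(BASE_MTU, IP_HDR + GRE_HDR)  # 1476
gre_mss = tcp_mss(gre_mtu)                        # 1436

print(ipip_mtu, ipip_mss, gre_mtu, gre_mss)       # 1480 1440 1476 1436
```

As the post notes, these are best-case numbers: if TCP or GRE options are in play, the real overhead is larger and the MSS should be reduced accordingly.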




On Tue, Oct 23, 2012 at 10:07 AM, Templin, Fred L
fred.l.temp...@boeing.com wrote:
 Hi Roland,

 -Original Message-
 From: Dobbins, Roland [mailto:rdobb...@arbor.net]
 Sent: Monday, October 22, 2012 6:49 PM
 To: NANOG list
 Subject: Re: IP tunnel MTU


 On Oct 23, 2012, at 5:24 AM, Templin, Fred L wrote:

  Since tunnels always reduce the effective MTU seen by data packets due
 to the encapsulation overhead, the only two ways to accommodate
  the tunnel MTU are either through the use of path MTU discovery or
 through fragmentation and reassembly.

 Actually, you can set your tunnel MTU manually.

 For example, the typical MTU folks set for a GRE tunnel is 1476.

 Yes; I was aware of this. But, what I want to get to is
 setting the tunnel MTU to infinity.

 This isn't a new issue; it's been around ever since tunneling technologies
 have been around, and tons have been written on this topic.  Look at your
 various router/switch vendor Web sites, archives of this list and others,
 etc.

 Sure. I've written a fair amount about it too over the span
 of the last ten years. What is new is that there is now a
 solution near at hand.

 So, it's been known about, dealt with, and documented for a long time.  In
 terms of doing something about it, the answer there is a) to allow the
 requisite ICMP for PMTU-D to work to/through any networks within your span
 of administrative control and b)

 That does you no good if there is some other network further
 beyond your span of administrative control that does not allow
 the ICMP PTBs through. And, studies have shown this to be the
 case in a non-trivial number of instances.

 b) adjusting your own tunnel MTUs to
 appropriate values based upon experimentation.

 Adjust it down to what? 1280? Then, if your tunnel with the
 adjusted MTU enters another tunnel with its own adjusted MTU
 there is an MTU underflow that might not get reported if the
 ICMP PTB messages are lost. An alternative is to use IP
 fragmentation, but recent studies have shown that more and
 more operators are unconditionally dropping IPv6 fragments
 and IPv4 fragmentation is not an option due to the 16-bit IP
 Identification field wrapping at high data rates.

 Nested tunnels-within-tunnels occur in operational scenarios
 more and more, and adjusting the MTU for only one tunnel in
 the nesting does you no good if there are other tunnels that
 adjust their own MTUs.

 Enterprise endpoint networks are notorious for blocking *all* ICMP (as
 well as TCP/53 DNS) at their edges due to 'security' misinformation
 propagated by Confused Information Systems Security Professionals and
 their ilk.  Be sure that your own network policies aren't part of the
 problem affecting your userbase, as well as anyone else with a need to
 communicate with properties on your network via tunnels.

 Again, all an operator can control is that which is within their
 own administrative domain. That does no good for ICMPs that are
 lost beyond their administrative domain.

 Thanks - Fred
 fred.l.temp...@boeing.com

 ---
 Roland Dobbins rdobb...@arbor.net // http://www.arbornetworks.com

 Luck is the residue of opportunity and design.

  -- John Milton






-- 
Ray Patrick Soucy
Network Engineer
University of Maine System

T: 207-561-3526
F: 207-561-3531

MaineREN, Maine's Research and Education Network
www.maineren.net



RE: IP tunnel MTU

2012-10-29 Thread Templin, Fred L
Hi Ray,

MSS rewriting has been well known and broadly applied for a long
time now, but it only applies to TCP. The subject of MSS rewriting
comes up all the time in IETF WG discussions, but has failed
to reach consensus as a long-term alternative.

Plus, MSS rewriting does no good for tunnels-within-tunnels. If
the innermost tunnel rewrites MSS to a value that *it* thinks is
safe there is no guarantee that the packets will fit within any
outer tunnels that occur further down the line.

What I want to get to is an indefinite tunnel MTU; i.e., admit
any packet into the tunnel regardless of its size then make any
necessary adaptations from within the tunnel. That is exactly
what SEAL does:

 https://datatracker.ietf.org/doc/draft-templin-intarea-seal/

Thanks - Fred
fred.l.temp...@boeing.com

 -Original Message-
 From: Ray Soucy [mailto:r...@maine.edu]
 Sent: Monday, October 29, 2012 7:55 AM
 To: Templin, Fred L
 Cc: Dobbins, Roland; NANOG list
 Subject: Re: IP tunnel MTU
 
 The core issue here is TCP MSS. PMTUD is a dynamic process for
 adjusting MSS, but requires that ICMP be permitted to negotiate the
 connection.  The realistic alternative, in a world that filters all
 ICMP traffic, is to manually rewrite the MSS.  In IOS this can be
 achieved via ip tcp adjust-mss and on Linux-based systems, netfilter
 can be used to adjust MSS for example.
 
 Keep in mind that the MSS will be smaller than your MTU.
 Consider the following example:
 
  ip mtu 1480
  ip tcp adjust-mss 1440
  tunnel mode ipip
 
 IP packets have 20 bytes of overhead, leaving 1480 bytes for data.  So
 for an IP-in-IP tunnel, you'd set your MTU of your tunnel interface to
 1480.  Subtract another 20 bytes for the tunneled IP header and 20
 bytes (typical) for your TCP header and you're left with 1440 bytes
 for data in a TCP connection.  So in this case we write the MSS as
 1440.
 
 I use IP-in-IP as an example because it's simple.  GRE tunnels can be
 a little more complex.  While the GRE header is typically 4 bytes, it
 can grow up to 16 bytes depending on options used.
 
 So for a typical GRE tunnel (4 byte header), you would subtract 20
 bytes for the IP header and 4 bytes for the GRE header from your base
 MTU of 1500.  This would mean an MTU of 1476, and a TCP MSS of 1436.
 
 Keep in mind that a TCP header can be up to 60 bytes in length, so you
 may want to go higher than the typical 20 bytes for your MSS if you're
 seeing problems.
 
 
 
 
 On Tue, Oct 23, 2012 at 10:07 AM, Templin, Fred L
 fred.l.temp...@boeing.com wrote:
  Hi Roland,
 
  -Original Message-
  From: Dobbins, Roland [mailto:rdobb...@arbor.net]
  Sent: Monday, October 22, 2012 6:49 PM
  To: NANOG list
  Subject: Re: IP tunnel MTU
 
 
  On Oct 23, 2012, at 5:24 AM, Templin, Fred L wrote:
 
   Since tunnels always reduce the effective MTU seen by data packets
 due
  to the encapsulation overhead, the only two ways to accommodate
   the tunnel MTU is either through the use of path MTU discovery or
  through fragmentation and reassembly.
 
  Actually, you can set your tunnel MTU manually.
 
  For example, the typical MTU folks set for a GRE tunnel is 1476.
 
  Yes; I was aware of this. But, what I want to get to is
  setting the tunnel MTU to infinity.
 
  This isn't a new issue; it's been around ever since tunneling
 technologies
  have been around, and tons have been written on this topic.  Look at
 your
  various router/switch vendor Web sites, archives of this list and
 others,
  etc.
 
  Sure. I've written a fair amount about it too over the span
  of the last ten years. What is new is that there is now a
  solution near at hand.
 
  So, it's been known about, dealt with, and documented for a long time.
 In
  terms of doing something about it, the answer there is a) to allow the
  requisite ICMP for PMTU-D to work to/through any networks within your
 span
  of administrative control and b)
 
  That does you no good if there is some other network further
  beyond your span of administrative control that does not allow
  the ICMP PTBs through. And, studies have shown this to be the
  case in a non-trivial number of instances.
 
  b) adjusting your own tunnel MTUs to
  appropriate values based upon experimentation.
 
  Adjust it down to what? 1280? Then, if your tunnel with the
  adjusted MTU enters another tunnel with its own adjusted MTU
  there is an MTU underflow that might not get reported if the
  ICMP PTB messages are lost. An alternative is to use IP
  fragmentation, but recent studies have shown that more and
  more operators are unconditionally dropping IPv6 fragments
  and IPv4 fragmentation is not an option due to wrapping IDs
  at high data rates.
 
  Nested tunnels-within-tunnels occur in operational scenarios
  more and more, and adjusting the MTU for only one tunnel in
  the nesting does you no good if there are other tunnels that
  adjust their own MTUs.
 
  Enterprise endpoint networks are notorious for blocking *all* ICMP (as
  well

Re: IP tunnel MTU

2012-10-29 Thread Ray Soucy
Sorry, glanced at this and thought it was someone having problems with
tunnel MTU without adjusting TCP MSS.

Nice work, though my preference is to avoid tunnels at all costs :-)




On Mon, Oct 29, 2012 at 12:39 PM, Templin, Fred L
fred.l.temp...@boeing.com wrote:
 Hi Ray,

 MSS rewriting has been well known and broadly applied for a long
 time now, but only applies to TCP. The subject of MSS rewriting
 comes up all the time in the IETF wg discussions, but has failed
 to reach consensus as a long-term alternative.

 Plus, MSS rewriting does no good for tunnels-within-tunnels. If
 the innermost tunnel rewrites MSS to a value that *it* thinks is
 safe there is no guarantee that the packets will fit within any
 outer tunnels that occur further down the line.

 What I want to get to is an indefinite tunnel MTU; i.e., admit
 any packet into the tunnel regardless of its size then make any
 necessary adaptations from within the tunnel. That is exactly
 what SEAL does:

  https://datatracker.ietf.org/doc/draft-templin-intarea-seal/

 Thanks - Fred
 fred.l.temp...@boeing.com

 -Original Message-
 From: Ray Soucy [mailto:r...@maine.edu]
 Sent: Monday, October 29, 2012 7:55 AM
 To: Templin, Fred L
 Cc: Dobbins, Roland; NANOG list
 Subject: Re: IP tunnel MTU

 The core issue here is TCP MSS. PMTUD is a dynamic process for
 adjusting MSS, but requires that ICMP be permitted to negotiate the
 connection.  The realistic alternative, in a world that filters all
 ICMP traffic, is to manually rewrite the MSS.  In IOS this can be
 achieved via ip tcp adjust-mss and on Linux-based systems, netfilter
 can be used to adjust MSS for example.

 Keep in mind that the MSS will be smaller than your MTU.
 Consider the following example:

  ip mtu 1480
  ip tcp adjust-mss 1440
  tunnel mode ipip

 IP packets have 20 bytes of overhead, leaving 1480 bytes for data.  So
 for an IP-in-IP tunnel, you'd set your MTU of your tunnel interface to
 1480.  Subtract another 20 bytes for the tunneled IP header and 20
 bytes (typical) for your TCP header and you're left with 1440 bytes
 for data in a TCP connection.  So in this case we write the MSS as
 1440.

 I use IP-in-IP as an example because it's simple.  GRE tunnels can be
 a little more complex.  While the GRE header is typically 4 bytes, it
 can grow up to 16 bytes depending on options used.

 So for a typical GRE tunnel (4 byte header), you would subtract 20
 bytes for the IP header and 4 bytes for the GRE header from your base
 MTU of 1500.  This would mean an MTU of 1476, and a TCP MSS of 1436.

 Keep in mind that a TCP header can be up to 60 bytes in length, so you
 may want to go higher than the typical 20 bytes for your MSS if you're
 seeing problems.




 On Tue, Oct 23, 2012 at 10:07 AM, Templin, Fred L
 fred.l.temp...@boeing.com wrote:
  Hi Roland,
 
  -Original Message-
  From: Dobbins, Roland [mailto:rdobb...@arbor.net]
  Sent: Monday, October 22, 2012 6:49 PM
  To: NANOG list
  Subject: Re: IP tunnel MTU
 
 
  On Oct 23, 2012, at 5:24 AM, Templin, Fred L wrote:
 
   Since tunnels always reduce the effective MTU seen by data packets
 due
  to the encapsulation overhead, the only two ways to accommodate
   the tunnel MTU is either through the use of path MTU discovery or
  through fragmentation and reassembly.
 
  Actually, you can set your tunnel MTU manually.
 
  For example, the typical MTU folks set for a GRE tunnel is 1476.
 
  Yes; I was aware of this. But, what I want to get to is
  setting the tunnel MTU to infinity.
 
  This isn't a new issue; it's been around ever since tunneling
 technologies
  have been around, and tons have been written on this topic.  Look at
 your
  various router/switch vendor Web sites, archives of this list and
 others,
  etc.
 
  Sure. I've written a fair amount about it too over the span
  of the last ten years. What is new is that there is now a
  solution near at hand.
 
  So, it's been known about, dealt with, and documented for a long time.
 In
  terms of doing something about it, the answer there is a) to allow the
  requisite ICMP for PMTU-D to work to/through any networks within your
 span
  of administrative control and b)
 
  That does you no good if there is some other network further
  beyond your span of administrative control that does not allow
  the ICMP PTBs through. And, studies have shown this to be the
  case in a non-trivial number of instances.
 
  b) adjusting your own tunnel MTUs to
  appropriate values based upon experimentation.
 
  Adjust it down to what? 1280? Then, if your tunnel with the
  adjusted MTU enters another tunnel with its own adjusted MTU
  there is an MTU underflow that might not get reported if the
  ICMP PTB messages are lost. An alternative is to use IP
  fragmentation, but recent studies have shown that more and
  more operators are unconditionally dropping IPv6 fragments
  and IPv4 fragmentation is not an option due to wrapping IDs
  at high data rates.
 
  Nested tunnels-within

Re: IP tunnel MTU

2012-10-29 Thread Shahab Vahabzadeh
Hi there,
I have the same problem in my network: I have a GRE tunnel for transferring
users' real Internet traffic, and they have problems browsing websites like
yahoo.com or microsoft.com.
I had to set "ip mtu 1500" to solve it, which causes fragmentation...
Thanks

On Mon, Oct 29, 2012 at 10:47 PM, Ray Soucy r...@maine.edu wrote:

 Sorry, glanced at this and thought it was someone having problems with
 tunnel MTU without adjusting TCP MSS.

 Nice work, though my preference is to avoid tunnels at all costs :-)




 On Mon, Oct 29, 2012 at 12:39 PM, Templin, Fred L
 fred.l.temp...@boeing.com wrote:
  Hi Ray,
 
  MSS rewriting has been well known and broadly applied for a long
  time now, but only applies to TCP. The subject of MSS rewriting
  comes up all the time in the IETF wg discussions, but has failed
  to reach consensus as a long-term alternative.
 
  Plus, MSS rewriting does no good for tunnels-within-tunnels. If
  the innermost tunnel rewrites MSS to a value that *it* thinks is
  safe there is no guarantee that the packets will fit within any
  outer tunnels that occur further down the line.
 
  What I want to get to is an indefinite tunnel MTU; i.e., admit
  any packet into the tunnel regardless of its size then make any
  necessary adaptations from within the tunnel. That is exactly
  what SEAL does:
 
   https://datatracker.ietf.org/doc/draft-templin-intarea-seal/
 
  Thanks - Fred
  fred.l.temp...@boeing.com
 
  -Original Message-
  From: Ray Soucy [mailto:r...@maine.edu]
  Sent: Monday, October 29, 2012 7:55 AM
  To: Templin, Fred L
  Cc: Dobbins, Roland; NANOG list
  Subject: Re: IP tunnel MTU
 
  The core issue here is TCP MSS. PMTUD is a dynamic process for
  adjusting MSS, but requires that ICMP be permitted to negotiate the
  connection.  The realistic alternative, in a world that filters all
  ICMP traffic, is to manually rewrite the MSS.  In IOS this can be
  achieved via ip tcp adjust-mss and on Linux-based systems, netfilter
  can be used to adjust MSS for example.
 
  Keep in mind that the MSS will be smaller than your MTU.
  Consider the following example:
 
   ip mtu 1480
   ip tcp adjust-mss 1440
   tunnel mode ipip
 
  IP packets have 20 bytes of overhead, leaving 1480 bytes for data.  So
  for an IP-in-IP tunnel, you'd set your MTU of your tunnel interface to
  1480.  Subtract another 20 bytes for the tunneled IP header and 20
  bytes (typical) for your TCP header and you're left with 1440 bytes
  for data in a TCP connection.  So in this case we write the MSS as
  1440.
 
  I use IP-in-IP as an example because it's simple.  GRE tunnels can be
  a little more complex.  While the GRE header is typically 4 bytes, it
  can grow up to 16 bytes depending on options used.
 
  So for a typical GRE tunnel (4 byte header), you would subtract 20
  bytes for the IP header and 4 bytes for the GRE header from your base
  MTU of 1500.  This would mean an MTU of 1476, and a TCP MSS of 1436.
 
  Keep in mind that a TCP header can be up to 60 bytes in length, so you
  may want to go higher than the typical 20 bytes for your MSS if you're
  seeing problems.
 
 
 
 
  On Tue, Oct 23, 2012 at 10:07 AM, Templin, Fred L
  fred.l.temp...@boeing.com wrote:
   Hi Roland,
  
   -Original Message-
   From: Dobbins, Roland [mailto:rdobb...@arbor.net]
   Sent: Monday, October 22, 2012 6:49 PM
   To: NANOG list
   Subject: Re: IP tunnel MTU
  
  
   On Oct 23, 2012, at 5:24 AM, Templin, Fred L wrote:
  
Since tunnels always reduce the effective MTU seen by data packets
  due
   to the encapsulation overhead, the only two ways to accommodate
the tunnel MTU is either through the use of path MTU discovery or
   through fragmentation and reassembly.
  
   Actually, you can set your tunnel MTU manually.
  
   For example, the typical MTU folks set for a GRE tunnel is 1476.
  
   Yes; I was aware of this. But, what I want to get to is
   setting the tunnel MTU to infinity.
  
   This isn't a new issue; it's been around ever since tunneling
  technologies
   have been around, and tons have been written on this topic.  Look at
  your
   various router/switch vendor Web sites, archives of this list and
  others,
   etc.
  
   Sure. I've written a fair amount about it too over the span
   of the last ten years. What is new is that there is now a
   solution near at hand.
  
   So, it's been known about, dealt with, and documented for a long
 time.
  In
   terms of doing something about it, the answer there is a) to allow
 the
   requisite ICMP for PMTU-D to work to/through any networks within your
  span
   of administrative control and b)
  
   That does you no good if there is some other network further
   beyond your span of administrative control that does not allow
   the ICMP PTBs through. And, studies have shown this to be the
   case in a non-trivial number of instances.
  
   b) adjusting your own tunnel MTUs to
   appropriate values based upon experimentation.
  
   Adjust it down

Re: IP tunnel MTU

2012-10-29 Thread Joe Maimon



Templin, Fred L wrote:


Yes; I was aware of this. But, what I want to get to is
setting the tunnel MTU to infinity.



Essentially, it's time the network matured to the point where 
inter-networking actually works (again), seamlessly.


I agree.

Joe



Re: IP tunnel MTU

2012-10-29 Thread Jared Mauch

On Oct 29, 2012, at 3:46 PM, Joe Maimon jmai...@ttec.com wrote:

 
 
 Templin, Fred L wrote:
 
 Yes; I was aware of this. But, what I want to get to is
 setting the tunnel MTU to infinity.
 
 
 Essentially, its time the network matured to the point where inter-networking 
 actually works (again), seamlessly.
 
 I agree.


Certainly fixing all the buggy host stacks, firewall and compliance devices to 
realize that ICMP isn't bad won't be hard.

- Jared


Re: IP tunnel MTU

2012-10-29 Thread Tim Durack
On Mon, Oct 29, 2012 at 4:01 PM, Jared Mauch ja...@puck.nether.net wrote:

 On Oct 29, 2012, at 3:46 PM, Joe Maimon jmai...@ttec.com wrote:



 Templin, Fred L wrote:

 Yes; I was aware of this. But, what I want to get to is
 setting the tunnel MTU to infinity.


 Essentially, its time the network matured to the point where 
 inter-networking actually works (again), seamlessly.

 I agree.


 Certainly fixing all the buggy host stacks, firewall and compliance devices 
 to realize that ICMP isn't bad won't be hard.

 - Jared

Wait till you get started on fixing the security consultants.

-- 
Tim:



Re: IP tunnel MTU

2012-10-29 Thread bmanning
On Mon, Oct 29, 2012 at 03:46:57PM -0400, Joe Maimon wrote:
 
 
 Templin, Fred L wrote:
 
 Yes; I was aware of this. But, what I want to get to is
 setting the tunnel MTU to infinity.
 
 
 Essentially, its time the network matured to the point where 
 inter-networking actually works (again), seamlessly.
 
 I agree.
 
 Joe

	you mean it's safe to turn off the VPNs?

/bill



Re: IP tunnel MTU

2012-10-29 Thread Joe Maimon



Jared Mauch wrote:


On Oct 29, 2012, at 3:46 PM, Joe Maimon jmai...@ttec.com wrote:




Templin, Fred L wrote:


Yes; I was aware of this. But, what I want to get to is
setting the tunnel MTU to infinity.



Essentially, its time the network matured to the point where inter-networking 
actually works (again), seamlessly.

I agree.



Certainly fixing all the buggy host stacks, firewall and compliance devices to 
realize that ICMP isn't bad won't be hard.

- Jared




ICMP is just not the way it is ever going to work.

Joe



Re: IP tunnel MTU

2012-10-29 Thread Joe Maimon



bmann...@vacation.karoshi.com wrote:

On Mon, Oct 29, 2012 at 03:46:57PM -0400, Joe Maimon wrote:



Templin, Fred L wrote:


Yes; I was aware of this. But, what I want to get to is
setting the tunnel MTU to infinity.



Essentially, its time the network matured to the point where
inter-networking actually works (again), seamlessly.

I agree.

Joe


you mean its safe to turn off the VPNs?

/bill




Quite the reverse.

Joe



Re: IP tunnel MTU

2012-10-29 Thread Jared Mauch

On Oct 29, 2012, at 4:43 PM, Joe Maimon jmai...@ttec.com wrote:

 
 
 Jared Mauch wrote:
 
 On Oct 29, 2012, at 3:46 PM, Joe Maimon jmai...@ttec.com wrote:
 
 
 
 Templin, Fred L wrote:
 
 Yes; I was aware of this. But, what I want to get to is
 setting the tunnel MTU to infinity.
 
 
 Essentially, its time the network matured to the point where 
 inter-networking actually works (again), seamlessly.
 
 I agree.
 
 
 Certainly fixing all the buggy host stacks, firewall and compliance devices 
 to realize that ICMP isn't bad won't be hard.
 
 - Jared
 
 
 
 ICMP is just not the way it is ever going to work.

I wish you luck in getting your host IP stacks to work properly without ICMP, 
especially as you deploy IPv6.

- Jared


RE: IP tunnel MTU

2012-10-29 Thread Templin, Fred L
 I wish you luck in getting your host IP stacks to work properly without
 ICMP, especially as you deploy IPv6.

From what I've heard, ICMPv6 is already being filtered, including
PTBs. I have also heard that IPv6 fragments are being dropped
unconditionally along some paths. So, if neither ICMPv6 PTB nor
IPv6 fragmentation works, then the tunnel endpoints have to take
matters into their own hands. That's where SEAL comes in.

Thanks - Fred
fred.l.temp...@boeing.com

 - Jared



Re: IP tunnel MTU

2012-10-29 Thread Joe Maimon



Jared Mauch wrote:





ICMP is just not the way it is ever going to work.


I wish you luck in getting your host IP stacks to work properly without ICMP, 
especially as you deploy IPv6.

- Jared



Precisely the state we are in. Looking for luck.

Joe



Re: IP tunnel MTU

2012-10-29 Thread bmanning
On Mon, Oct 29, 2012 at 04:44:40PM -0400, Joe Maimon wrote:
 
 
 bmann...@vacation.karoshi.com wrote:
 On Mon, Oct 29, 2012 at 03:46:57PM -0400, Joe Maimon wrote:
 
 
 Templin, Fred L wrote:
 
 Yes; I was aware of this. But, what I want to get to is
 setting the tunnel MTU to infinity.
 
 
 Essentially, its time the network matured to the point where
 inter-networking actually works (again), seamlessly.
 
 I agree.
 
 Joe
 
  you mean its safe to turn off the VPNs?
 
 /bill
 
 
 
 Quite the reverse.
 
 Joe

so it's tunnels all the way down...  maybe we should just go back to 
a circuit-oriented network, eh?

/bill



Re: IP tunnel MTU

2012-10-29 Thread Joe Maimon



bmann...@vacation.karoshi.com wrote:



you mean its safe to turn off the VPNs?

/bill




Quite the reverse.

Joe


so its tunnels all the way down...  maybe we should just go back to
a circuit oriented network, eh?

/bill




It's not safe to turn on VPNs.

Joe



Re: IP tunnel MTU

2012-10-29 Thread William Herrin
On Mon, Oct 29, 2012 at 10:54 AM, Ray Soucy r...@maine.edu wrote:
 The core issue here is TCP MSS. PMTUD is a dynamic process for
 adjusting MSS, but requires that ICMP be permitted to negotiate the
 connection.  The realistic alternative, in a world that filters all
 ICMP traffic, is to manually rewrite the MSS.  In IOS this can be
 achieved via ip tcp adjust-mss and on Linux-based systems, netfilter
 can be used to adjust MSS for example.

Longer term, the ideal solution would be a replacement algorithm that
allows TCP to adjust its MSS with or without negative acknowledgement
from intermediate routers. The ICMP-didn't-get-there problem is only
going to get worse and things like private IPs on routers and
encapsulation mechanisms where the intermediate router isn't dealing
with an IP packet directly are as much at fault these days as foolish
firewall admins.

Perhaps my understanding of end-to-end is flawed, but I suspect it
means that an endpoint shouldn't depend on direct communication with
an intermediate system for its successful communication with another
endpoint.

Maybe something as simple as clearing the Don't Fragment flag and
adding a TCP option to report receipt of a fragmented packet, along
with the fragment sizes, back to the sender so he can adjust his MSS to
avoid fragmentation.

Regards,
Bill Herrin





-- 
William D. Herrin  her...@dirtside.com  b...@herrin.us
3005 Crane Dr. .. Web: http://bill.herrin.us/
Falls Church, VA 22042-3004



RE: IP tunnel MTU

2012-10-29 Thread Templin, Fred L
Hi Bill,

 Maybe something as simple as clearing the don't fragment flag and
 adding a TCP option to report receipt of a fragmented packet along
 with the fragment sizes back to the sender so he can adjust his mss to
 avoid fragmentation.

That is in fact what SEAL is doing, but there is no guarantee
that the size of the largest fragment is going to be an accurate
reflection of the true path MTU. RFC1812 made sure of that when
it more or less gave IPv4 routers permission to fragment packets
pretty much any way they want.

Thanks - Fred
fred.l.temp...@boeing.com



Re: IP tunnel MTU

2012-10-29 Thread Chris Woodfield
True, but it could be used as an alternative PMTUD algorithm - raise the 
segment size and wait for the "I got this as fragments" option to show up...

Of course, this only works for IPv4. IPv6 users are SOL if something in the 
middle is dropping ICMPv6.

-C

On Oct 29, 2012, at 4:02 PM, Templin, Fred L wrote:

 Hi Bill,
 
 Maybe something as simple as clearing the don't fragment flag and
 adding a TCP option to report receipt of a fragmented packet along
 with the fragment sizes back to the sender so he can adjust his mss to
 avoid fragmentation.
 
 That is in fact what SEAL is doing, but there is no guarantee
 that the size of the largest fragment is going to be an accurate
 reflection of the true path MTU. RFC1812 made sure of that when
 it more or less gave IPv4 routers permission to fragment packets
 pretty much any way they want.
 
 Thanks - Fred
 fred.l.temp...@boeing.com
 




Re: IP tunnel MTU

2012-10-29 Thread Masataka Ohta
Templin, Fred L wrote:

 I wish you luck in getting your host IP stacks to work properly without
 ICMP, especially as you deploy IPv6.

From what I've heard, ICMPv6 is already being filtered, including
 PTBs.

As v6 PTBs are specified to be generated even in response to
multicast packets, it is no surprise that they are dropped
to prevent ICMP implosions.

But, it is a very serious problem of not only tunnels but
entire IPv6.

That is, if PMTUD is unavailable, IPv6 hosts are prohibited
from sending packets larger than 1280B.

Then, ignoring the prohibition, tunnel end points may send
packets a little larger than 1280B, which means a physical link
MTU of 1500B, or a little smaller than that, is enough for
nested tunnels.

Thus, no new tunneling protocol is necessary.

The harder part of the job is to disable PMTUD on all the
IPv6 implementations.

 I have also heard that IPv6 fragments are also being dropped
 unconditionally along some paths.

Again, it is not a problem of tunnels only.

If that is the operational reality, fragmentation must be
dropped from the IPv6 specification.

Masataka Ohta



RE: IP tunnel MTU

2012-10-23 Thread Templin, Fred L
Hi Roland,

 -Original Message-
 From: Dobbins, Roland [mailto:rdobb...@arbor.net]
 Sent: Monday, October 22, 2012 6:49 PM
 To: NANOG list
 Subject: Re: IP tunnel MTU
 
 
 On Oct 23, 2012, at 5:24 AM, Templin, Fred L wrote:
 
  Since tunnels always reduce the effective MTU seen by data packets due
 to the encapsulation overhead, the only two ways to accommodate
  the tunnel MTU are either through the use of path MTU discovery or
 through fragmentation and reassembly.
 
 Actually, you can set your tunnel MTU manually.
 
 For example, the typical MTU folks set for a GRE tunnel is 1476.

Yes; I was aware of this. But, what I want to get to is
setting the tunnel MTU to infinity.

 This isn't a new issue; it's been around ever since tunneling technologies
 have been around, and tons have been written on this topic.  Look at your
 various router/switch vendor Web sites, archives of this list and others,
 etc.

Sure. I've written a fair amount about it too over the span
of the last ten years. What is new is that there is now a
solution near at hand.
 
 So, it's been known about, dealt with, and documented for a long time.  In
 terms of doing something about it, the answer there is a) to allow the
 requisite ICMP for PMTU-D to work to/through any networks within your span
 of administrative control and b)

That does you no good if there is some other network further
beyond your span of administrative control that does not allow
the ICMP PTBs through. And, studies have shown this to be the
case in a non-trivial number of instances.

 b) adjusting your own tunnel MTUs to
 appropriate values based upon experimentation.

Adjust it down to what? 1280? Then, if your tunnel with the
adjusted MTU enters another tunnel with its own adjusted MTU
there is an MTU underflow that might not get reported if the
ICMP PTB messages are lost. An alternative is to use IP
fragmentation, but recent studies have shown that more and
more operators are unconditionally dropping IPv6 fragments
and IPv4 fragmentation is not an option due to wrapping IDs
at high data rates.
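The "wrapping IDs" objection can be checked on the back of an envelope: the IPv4 Identification field is 16 bits, and an ID must not be reused between the same address pair while earlier fragments bearing it can still be alive in the network. A small sketch, assuming full-size 1500-byte packets:

```python
# Back-of-the-envelope check on the IPv4 ID wrap rate. The ID field is
# 16 bits, so a single sender exhausts it quickly at modern line rates;
# figures assume 1500-byte packets between one address pair.

ID_SPACE = 2 ** 16          # 65536 distinct IPv4 Identification values
PACKET_BITS = 1500 * 8

def id_wrap_seconds(link_bps):
    """Seconds for one sender to exhaust the 16-bit ID space."""
    packets_per_second = link_bps / PACKET_BITS
    return ID_SPACE / packets_per_second

for rate, label in [(100e6, "100 Mb/s"), (1e9, "1 Gb/s"), (10e9, "10 Gb/s")]:
    print(f"{label}: IDs wrap every {id_wrap_seconds(rate):.2f} s")
```

At 1 Gb/s the ID space wraps in well under a second, far shorter than typical reassembly timeouts, so a misassembled fragment collision becomes a realistic event.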

Nested tunnels-within-tunnels occur in operational scenarios
more and more, and adjusting the MTU for only one tunnel in
the nesting does you no good if there are other tunnels that
adjust their own MTUs.
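The nesting arithmetic is easy to sketch: each encapsulation layer subtracts its own overhead, so a 1500-byte physical MTU can underflow the 1280-byte IPv6 minimum after a handful of nested tunnels. The overhead figures below are illustrative (basic headers, no options):

```python
# Illustration of how nested tunnels eat into the MTU. Overheads are
# for basic headers with no options: IPv4-in-IPv4 (20), GRE over IPv4
# (20 + 4), IPv6-in-IPv6 (40), GRE over IPv6 (40 + 4).

OVERHEAD = {"ipip": 20, "gre4": 24, "ip6ip6": 40, "gre6": 44}

def effective_mtu(physical_mtu, layers):
    """MTU seen by the innermost packet after nested encapsulations."""
    for layer in layers:
        physical_mtu -= OVERHEAD[layer]
    return physical_mtu

for n in (1, 3, 6):
    mtu = effective_mtu(1500, ["ip6ip6"] * n)
    status = "OK" if mtu >= 1280 else "below the IPv6 minimum of 1280"
    print(f"{n} nested IPv6-in-IPv6 tunnels: inner MTU {mtu} ({status})")
```

Six nested IPv6-in-IPv6 layers already leave only 1260 bytes, and if the PTBs reporting the underflow are lost, no endpoint ever learns it.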

 Enterprise endpoint networks are notorious for blocking *all* ICMP (as
 well as TCP/53 DNS) at their edges due to 'security' misinformation
 propagated by Confused Information Systems Security Professionals and
 their ilk.  Be sure that your own network policies aren't part of the
 problem affecting your userbase, as well as anyone else with a need to
 communicate with properties on your network via tunnels.

Again, all an operator can control is that which is within their
own administrative domain. That does no good for ICMPs that are
lost beyond their administrative domain.

Thanks - Fred
fred.l.temp...@boeing.com

 ---
 Roland Dobbins rdobb...@arbor.net // http://www.arbornetworks.com
 
 Luck is the residue of opportunity and design.
 
  -- John Milton
 




Re: IP tunnel MTU

2012-10-22 Thread Dobbins, Roland

On Oct 23, 2012, at 5:24 AM, Templin, Fred L wrote:

 Since tunnels always reduce the effective MTU seen by data packets due to the 
 encapsulation overhead, the only two ways to accommodate
 the tunnel MTU are either through the use of path MTU discovery or through 
 fragmentation and reassembly.

Actually, you can set your tunnel MTU manually.

For example, the typical MTU folks set for a GRE tunnel is 1476.
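Where the customary 1476 comes from: on a 1500-byte Ethernet link, a GRE-in-IPv4 tunnel spends 20 bytes on the outer IPv4 header and 4 bytes on the basic GRE header, leaving 1476 for the passenger packet. A one-liner makes the derivation explicit:

```python
# Derivation of the customary GRE tunnel MTU on Ethernet: outer IPv4
# header (20 bytes) plus the basic GRE header (4 bytes) come out of the
# 1500-byte link MTU. GRE options such as keys or checksums cost a
# further 4 bytes each.

ETHERNET_MTU = 1500
OUTER_IPV4 = 20
GRE_BASE = 4

gre_tunnel_mtu = ETHERNET_MTU - OUTER_IPV4 - GRE_BASE
print(gre_tunnel_mtu)   # 1476
```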

This isn't a new issue; it's been around ever since tunneling technologies have 
been around, and tons have been written on this topic.  Look at your various 
router/switch vendor Web sites, archives of this list and others, etc.

So, it's been known about, dealt with, and documented for a long time.  In 
terms of doing something about it, the answer there is a) to allow the 
requisite ICMP for PMTU-D to work to/through any networks within your span of 
administrative control and b) adjusting your own tunnel MTUs to appropriate 
values based upon experimentation.

Enterprise endpoint networks are notorious for blocking *all* ICMP (as well as 
TCP/53 DNS) at their edges due to 'security' misinformation propagated by 
Confused Information Systems Security Professionals and their ilk.  Be sure 
that your own network policies aren't part of the problem affecting your 
userbase, as well as anyone else with a need to communicate with properties on 
your network via tunnels.

---
Roland Dobbins rdobb...@arbor.net // http://www.arbornetworks.com

  Luck is the residue of opportunity and design.

   -- John Milton