RE: DHCPv6 relay with PD

2016-06-08 Thread Templin, Fred L
Hi Nick,

More on this: please see Section 3.7 on the AERO Routing System (2 pages).
It describes how the DHCPv6 relay can inject delegated prefixes into the
routing system without imparting unacceptable churn, while scaling to many
millions of delegated prefixes. There is a terminology gap to overcome: an
"AERO Server" actually implements both a DHCPv6 server and relay, while an
"AERO Relay" is a simple BGP router and does not implement any DHCPv6
functions.

The section is only two pages long. Let me know if you have any questions or
comments.
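
For illustration, here is a rough Python sketch of the Server-side behavior
described above: when the combined DHCPv6 server/relay completes a prefix
delegation it installs a local route toward the requesting client, and only
the covering aggregate is ever carried by the BGP Relays, so per-client churn
stays out of BGP. The function names, the aggregate prefix, and the iproute2
invocation are illustrative assumptions rather than text from the draft.

import ipaddress
import subprocess

# Assumed covering aggregate that the BGP "Relays" carry; individual
# delegations never leave this Server, so BGP sees no per-client churn.
AGGREGATE = ipaddress.ip_network("2001:db8::/32")

def on_prefix_delegated(prefix: str, client_nexthop: str, ifname: str) -> None:
    """Hypothetical hook run when the DHCPv6 server/relay completes a PD:
    install a local route for the delegated prefix toward the client."""
    pfx = ipaddress.ip_network(prefix)
    if not pfx.subnet_of(AGGREGATE):
        raise ValueError("delegated prefix is outside the advertised aggregate")
    # iproute2 for brevity; a real implementation would talk Netlink directly.
    subprocess.run(["ip", "-6", "route", "replace", str(pfx),
                    "via", client_nexthop, "dev", ifname], check=True)

def on_prefix_released(prefix: str) -> None:
    """Remove the local route when the delegation is released or expires."""
    subprocess.run(["ip", "-6", "route", "del", prefix], check=True)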

Fred

> -Original Message-
> From: ipv6-ops-bounces+fred.l.templin=boeing@lists.cluenet.de 
> [mailto:ipv6-ops-
> bounces+fred.l.templin=boeing@lists.cluenet.de] On Behalf Of Templin, 
> Fred L
> Sent: Wednesday, June 08, 2016 2:35 PM
> To: Nick Hilliard <n...@foobar.org>
> Cc: ipv6-ops@lists.cluenet.de
> Subject: RE: DHCPv6 relay with PD
> 
> Hi Nick,
> 
> > -Original Message-
> > From: ipv6-ops-bounces+fred.l.templin=boeing@lists.cluenet.de 
> > [mailto:ipv6-ops-
> > bounces+fred.l.templin=boeing@lists.cluenet.de] On Behalf Of Nick 
> > Hilliard
> > Sent: Wednesday, June 08, 2016 2:13 PM
> > To: Templin, Fred L <fred.l.temp...@boeing.com>
> > Cc: ipv6-ops@lists.cluenet.de
> > Subject: Re: DHCPv6 relay with PD
> >
> > Templin, Fred L wrote:
> > > Folks, for real – read AERO. It works. I apologize if that offends anyone.
> >
> > Not at all.  It's just that I'm confused about why we would need to
> > resort to a tunneling protocol in order to make basic ipv6 functionality
> > work.
> >
> > Would it not be better to try to make ipv6 work without resorting to
> > tunnels?
> 
> Mobile clients that can change their point of attachment to the network, and
> that may be many hops away from the DHCPv6 relay, are the primary use case
> that AERO is addressing. But I think the way AERO manages the routing system
> applies even when tunnels aren't needed and the clients are on the same link
> as the relays.
> 
> Thanks - Fred
> fred.l.temp...@boeing.com
> 
> > Nick



RE: DHCPv6 relay with PD

2016-06-08 Thread Templin, Fred L
Hi Nick,

> -Original Message-
> From: ipv6-ops-bounces+fred.l.templin=boeing@lists.cluenet.de 
> [mailto:ipv6-ops-
> bounces+fred.l.templin=boeing@lists.cluenet.de] On Behalf Of Nick Hilliard
> Sent: Wednesday, June 08, 2016 2:13 PM
> To: Templin, Fred L <fred.l.temp...@boeing.com>
> Cc: ipv6-ops@lists.cluenet.de
> Subject: Re: DHCPv6 relay with PD
> 
> Templin, Fred L wrote:
> > Folks, for real – read AERO. It works. I apologize if that offends anyone.
> 
> Not at all.  It's just that I'm confused about why we would need to
> resort to a tunneling protocol in order to make basic ipv6 functionality
> work.
> 
> Would it not be better to try to make ipv6 work without resorting to
> tunnels?

Mobile clients that can change their point of attachment to the network, and
that may be many hops away from the DHCPv6 relay, are the primary use case
that AERO is addressing. But I think the way AERO manages the routing system
applies even when tunnels aren't needed and the clients are on the same link
as the relays.

Thanks - Fred
fred.l.temp...@boeing.com

> Nick



RE: DHCPv6 relay with PD

2016-06-08 Thread Templin, Fred L
Hi,

> -Original Message-
> From: ipv6-ops-bounces+fred.l.templin=boeing@lists.cluenet.de 
> [mailto:ipv6-ops-
> bounces+fred.l.templin=boeing@lists.cluenet.de] On Behalf Of Erik Kline
> Sent: Wednesday, June 08, 2016 11:37 AM
> To: Ole Troan 
> Cc: IPv6 Ops list ; Mikael Abrahamsson 
> 
> Subject: Re: DHCPv6 relay with PD
> 
> On 9 June 2016 at 03:16, Ole Troan  wrote:
> > Mikael,
> >
> >>> We also tried (and failed) to come up with a secure mechanism for the
> >>> requesting router to advertise its delegated prefix to first-hop routers.
> >>>
> >>> Less astonished? ;-)
> >>
> >> Well, I guess I shouldn't be astonished. I've even seen vendors implement
> >> the DHCPv6-PD server on the router itself, and fail to install a route
> >> for the delegated prefix.
> >>
> >> So basically, regarding how to actually implement PD in a network (from an
> >> IETF point of view), everybody just gave up, declared the problem
> >> unsolvable, and went back to sleep?
> >
> > It shouldn't be the IETF's job to tell people how to run their networks.
> > The IETF provides the building blocks.
> 
> But this sounds like what's missing is operational guidance on what
> collections of blocks have been known to work.

AERO provides operational guidance on collections of blocks that work:

https://datatracker.ietf.org/doc/draft-templin-aerolink/

Thanks - Fred



RE: MTU = 1280 everywhere? / QUIC

2014-11-11 Thread Templin, Fred L
Hi, the idea of setting a fixed 1280 MTU everywhere and for all time is silly;
the maximum MTU for IPv4 is 64KB, and the maximum MTU for IPv6 is 4GB
(with jumbograms).

One item of follow-up:

 Also, fragments are evil and there is no real reason to have any
 fragments at all.

IPv4 fragmentation works at slow speeds, but is dangerous at line rates.
IPv6 fragmentation works at line rates, but is a pain point that should be
avoided and/or tuned out when possible. Neither is evil in and of
itself, however.

Thanks - Fred
fred.l.temp...@boeing.com

 -Original Message-
 From: ipv6-ops-bounces+fred.l.templin=boeing@lists.cluenet.de 
 [mailto:ipv6-ops-
 bounces+fred.l.templin=boeing@lists.cluenet.de] On Behalf Of Jeroen Massar
 Sent: Tuesday, November 11, 2014 2:06 AM
 To: Vincent Bernat
 Cc: IPv6 Ops list
 Subject: Re: MTU = 1280 everywhere? / QUIC
 
 On 2014-11-11 10:55, Vincent Bernat wrote:
   ❦ 11 novembre 2014 10:42 +0100, Jeroen Massar jer...@massar.ch :
 
  From:
  https://docs.google.com/document/d/1RNHkx_VvKWyWg6Lr8SZ-saqsQx7rFV-ev2jRFUoVD34/mobilebasic
  UDP PACKET FRAGMENTATION but IPv6 does not fragment...
 
  IPv6 routers don't fragment but IPv6 hosts still do.
 
 Correct. But that means if you are sending 1350 bytes on a 1280 link you
 are sending two packets, not one.
 
 As they do cool stuff like FEC in QUIC, they assume lossy networks (good
 thing they think that way), but that also means that you will be sending
 more data (due to FEC) and also assume you are losing packets.
 
 Hence, if your FEC protocol assumes that 1 packet is lost while actually
 only half the packet was, you got more loss than you are anticipating.
 
 Knowing what the MTU is on the link is thus a smart thing.
 
 Hence, why PMTUD is important.
 
 
 Also, fragments are evil and there is no real reason to have any
 fragments at all.
 
 Greets,
  Jeroen



RE: SI6 Networks' IPv6 Toolkit v1.5.2 released!

2014-01-31 Thread Templin, Fred L
Hi Fernando,

I don't know if you are looking to add to your toolkit from outside
sources, but Sascha Hlusiak has created a tool called 'isatapd' that
sends RS messages to an ISATAP router and processes RA messages that
come back:

http://www.saschahlusiak.de/linux/isatap.htm

Does this look like something you might want to add to the toolkit?

Thanks - Fred
fred.l.temp...@boeing.com

 -Original Message-
 From: ipv6-ops-bounces+fred.l.templin=boeing@lists.cluenet.de 
 [mailto:ipv6-ops-
 bounces+fred.l.templin=boeing@lists.cluenet.de] On Behalf Of Fernando Gont
 Sent: Friday, January 31, 2014 8:03 AM
 To: ipv6-ops@lists.cluenet.de
 Subject: SI6 Networks' IPv6 Toolkit v1.5.2 released!
 
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1
 
 Folks,
 
 [I had forgotten to send a heads-up to this list -- hopefully some of
 you will find this useful]
 
 This is not meant to be a big release, but it does fix some issues
 present in previous versions, and adds some new features (please find
 the changelog below).
 
 So if you're using the ipv6toolkit, please upgrade to version 1.5.2.
 
 Tarballs (plain one, and gpg-signed with my key below) can be found
 at: http://www.si6networks.com/tools/ipv6toolkit.
 
 * Tools:
 
 If you want to find out which tools the ipv6toolkit comprises, just
 do a man 7 ipv6toolkit.
 
 
 * Platforms:
 
 We currently support these platforms: FreeBSD, NetBSD, OpenBSD, Debian
 GNU/Linux, Debian GNU/kfreebsd, Gentoo Linux, Ubuntu, and Mac OS.
 
 Some of these platforms now feature the ipv6toolkit in their package
 system -- credits for that can be found below. :-)
 
 
 = CREDITS ==
 CONTRIBUTORS
 - 
 
 ** Contributors **
 
 The following people sent patches that were incorporated into this
 release of the toolkit:
 
 Octavio Alvarez alvar...@alvarezp.com
 Alexander Bluhm bl...@openbsd.org
 Alistair Crooks a...@pkgsrc.org
 Declan A Rieb   dar...@sandia.gov
 
 
 ** Package maintainers **
 
 Availability of packages for different operating systems makes it
 easier for users to install and update the toolkit, and for the toolkit
 to integrate better with the operating systems.
 
 These are the maintainers for each of the different packages:
 
   + Debian
 
 Octavio Alvarez alvar...@alvarezp.com, sponsored by Luciano Bello
 luci...@debian.org
 
   + FreeBSD
 
 Hiroki Sato h...@freebsd.org
 
   + Gentoo Linux
 
 Robin H. Johnson robb...@gentoo.org
 
   + Mac OS
 
 Declan A Rieb dar...@sandia.gov tests the toolkit on multiple Mac
 OS versions, to ensure clean compiles on such platforms.
 
   + NetBSD (pkgsrc framework)
 
 Alistair Crooks a...@pkgsrc.org
 
   + OpenBSD
 
 Alexander Bluhm bl...@openbsd.org
 
 
 ** Troubleshooting/Debugging **
 
 Spotting bugs in networking tools can be tricky, since at times they
 only show up in specific network scenarios.
 
 The following individuals provided great help in identifying bugs in
 the toolkit (thus leading to fixes and improvements):
 
 Stephane Bortzmeyer steph...@bortzmeyer.org
 Marc Heuse m...@mh-sec.de
 Erik Muller er...@buh.org
 Declan A Rieb dar...@sandia.gov
 Tim tim-secur...@sentinelchicken.org
 = CREDITS =
 
 
 = CHANGELOG =
 SI6 Networks IPv6 Toolkit v1.5.2
 
* All: Add support for GNU Debian/kfreebsd
  The toolkit would not build on GNU Debian/kfreebsd before this
  release.
 
* tcp6: Add support for TCP/IPv6 probes
  tcp6 can now send TCP/IPv6 packets (--probe-mode option), and
  read the TCP response packets, if any. This can be leveraged for
  port scans, and miscellaneous measurements.
 
 SI6 Networks IPv6 Toolkit v1.5.1
* Fix Mac OS breakage
  libipv6.h had incorrect definitions for struct tcp_hdr.
 
 SI6 Networks IPv6 Toolkit v1.5
 
* All: Improved the next-hop determination
  Since the toolkit employs libpcap (as there is no portable way to
  forge IPv6 addresses and do other tricks), it was relying on the
  user specifying a network interface (-i was mandatory for all
  tools) and that routers would send Router Advertisements on the
  local links. This not only was rather inconvenient for users
  (specifying a network interface was not warranted), but also meant
  that in setups where RAs were not available (e.g., manual
  configuration), the tools would fail. The toolkit now employs
  routing sockets (in BSDs) or Netlink (in Linux), and only uses
  sending RAs as a fall-back in case of failure (IPv6 not
  configured on the local host).
 
* All: Improved source address selection
  This is closely related to the previous bullet.
 
* All: More code moved to libipv6
 More and more code was moved to libipv6 and removed from the
  individual tool source files. As with some of the above, this was
  painful and time-consuming, but was necessary -- and in the long
  run it will make code maintenance easier.
 

RE: Question about IPAM tools for v6

2014-01-31 Thread Templin, Fred L
 Not if you route a /64 to each host (the way 3GPP/LTE does for mobiles).  :-)

A /64 for each mobile is what I would expect. It is then up to the
mobile to manage the /64 responsibly by either black-holing the
portions of the /64 it is not using or by assigning the /64 to a
link other than the service provider wireless access link (and
then managing the neighbor cache (NC) appropriately).
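
For illustration, a minimal sketch (my own, with placeholder prefixes) of one
way a Linux-based mobile could manage the /64 responsibly: keep the portions
it actually uses and blackhole the rest, so unused space never loops back
toward the access link.

import ipaddress
import subprocess

def blackhole_unused(delegated: str, in_use: list[str]) -> None:
    """Blackhole every part of the delegated prefix not covered by an in-use prefix."""
    used = [ipaddress.ip_network(p) for p in in_use]
    remainder = [ipaddress.ip_network(delegated)]
    for u in used:
        remainder = [piece
                     for net in remainder
                     for piece in (net.address_exclude(u) if u.subnet_of(net) else [net])]
    for net in remainder:
        subprocess.run(["ip", "-6", "route", "replace", "blackhole", str(net)],
                       check=True)

if __name__ == "__main__":
    # Example (needs root): the mobile only uses one /80 out of its /64.
    blackhole_unused("2001:db8:0:1::/64", ["2001:db8:0:1::/80"])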

Thanks - Fred
fred.l.temp...@boeing.com


RE: I can fetch the header of websites via IPv6 but not the webpage, why?

2014-01-21 Thread Templin, Fred L
Hi,

 -Original Message-
 From: ipv6-ops-bounces+fred.l.templin=boeing@lists.cluenet.de 
 [mailto:ipv6-ops-
 bounces+fred.l.templin=boeing@lists.cluenet.de] On Behalf Of Richard 
 Hartmann
 Sent: Tuesday, January 21, 2014 12:48 PM
 To: Tore Anderson
 Cc: Ez mail; ipv6-ops@lists.cluenet.de
 Subject: Re: I can fetch the header of websites via IPv6 but not the webpage, 
 why?
 
 On Mon, Jan 20, 2014 at 11:59 AM, Tore Anderson t...@fud.no wrote:
 
 
  As Erik mentions, lowering the TCP MSS will likely work around the
  problem. You can probably do this by having the RAs your router emits to
  the LAN advertise an MTU of 1452 to match your tunnel (which in turn
  should make your desktop default to a TCP MSS of 1392), and/or have your
  router rewrite (clamp) the MSS value in TCP packets it forwards
  to/from the tunnel to 1392.
 
 Unless a party has one single IPv6-enabled machine, clamping MSS on
 the gateway is probably preferable.

If you clamp the MSS to a smaller size but DO NOT advertise a small
MTU on the LAN, hosts that use RFC4821 can at a later time probe for
packet sizes that are larger than the MSS and advance the MSS size
if the probe succeeds. So, clamp the MSS but leave the MTU of the
LAN the same as that of the native link.
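
As a back-of-the-envelope check (my own illustration), the clamped MSS
follows directly from the tunnel MTU: subtract the 40-byte IPv6 header and
the 20-byte TCP header, which is how the 1452-byte tunnel MTU quoted above
yields the 1392-byte MSS.

IPV6_HEADER = 40   # bytes
TCP_HEADER = 20    # bytes, without options

def clamped_mss(tunnel_mtu: int) -> int:
    """TCP MSS to clamp to so that segments fit the tunnel without fragmentation."""
    return tunnel_mtu - IPV6_HEADER - TCP_HEADER

assert clamped_mss(1452) == 1392   # the tunnel example quoted above
assert clamped_mss(1480) == 1420   # a typical 6rd tunnel MTU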

Thanks - Fred
fred.l.temp...@boeing.com
 
  Or, even better, get rid of the tunneling crap and get native IPv6. This
  is a very common problem for IPv6 tunnels. As a web site operator I
  would actually prefer it if people stayed IPv4-only until their ISP
  could provide them with properly supported IPv6 connectivity. Oh well...
 
 Most people don't have that liberty as of right now; increasing
 adoption is arguably better, especially considering that a lot of
 people developing software need to fix part of the ecosystem.
 
 
 
 Richard


RE: MTU handling in 6RD deployments

2014-01-17 Thread Templin, Fred L
Hi Mark,

 -Original Message-
 From: Mark Townsley [mailto:m...@townsley.net]
 Sent: Friday, January 17, 2014 12:41 AM
 To: Mikael Abrahamsson
 Cc: Templin, Fred L; ipv6-ops@lists.cluenet.de
 Subject: Re: MTU handling in 6RD deployments
 
 
 On Jan 17, 2014, at 9:24 AM, Mikael Abrahamsson wrote:
 
  On Thu, 16 Jan 2014, Templin, Fred L wrote:
 
  The key is that we want to probe the path between the BR and CE (in both 
  directions) *before*
 allowing regular data packets to flow. We want to know ahead of time whether 
 to allow large packets
 into the tunnel or whether we need to shut the MTU down to 1480 (or 1472 or 
 something) and clamp the
 MSS. Because, once we restrict the tunnel MTU hosts will be stuck with a 
 degenerate MTU indefinitely
 or at least for a long time.
 
  This method makes some sense, but since network conditions can change, I 
  would like to see periodic
 re-checks of the tunnel still working with the packet sizes, perhaps pinging 
 itself over the tunnel
 once per minute with the larger packet size if larger packet size is in use.
 
 Section 8 of RFC 5969 could be relevant here.

In that section, I see:

   The link-local
   address of a 6rd virtual interface performing the 6rd encapsulation
   would, if needed, be formed as described in Section 3.7 of [RFC4213].
   However, no communication using link-local addresses will occur.

So, if we were to construct the pings from the IPv6 level we would
want to use link-local source and destination addresses. But, that
raises a question that would need to be addressed - should the pings
be constructed at the IPv6 level, the IPv4 level, or some mid-level
like SEAL?

One other thing about this is that we are specifically not testing
to determine an exact path MTU. We are only trying to answer the
binary question of whether or not the tunnel can pass a 1500 byte
IPv6 packet.

Thanks - Fred
fred.l.temp...@boeing.com

 - Mark
 
 
  --
  Mikael Abrahamssonemail: swm...@swm.pp.se



RE: MTU handling in 6RD deployments

2014-01-17 Thread Templin, Fred L
Hi Mark,

 -Original Message-
 From: ipv6-ops-bounces+fred.l.templin=boeing@lists.cluenet.de 
 [mailto:ipv6-ops-
 bounces+fred.l.templin=boeing@lists.cluenet.de] On Behalf Of Templin, 
 Fred L
 Sent: Friday, January 17, 2014 7:57 AM
 To: Mark Townsley; Mikael Abrahamsson
 Cc: ipv6-ops@lists.cluenet.de
 Subject: RE: MTU handling in 6RD deployments
 
 Hi Mark,
 
  -Original Message-
  From: Mark Townsley [mailto:m...@townsley.net]
  Sent: Friday, January 17, 2014 12:41 AM
  To: Mikael Abrahamsson
  Cc: Templin, Fred L; ipv6-ops@lists.cluenet.de
  Subject: Re: MTU handling in 6RD deployments
 
 
  On Jan 17, 2014, at 9:24 AM, Mikael Abrahamsson wrote:
 
   On Thu, 16 Jan 2014, Templin, Fred L wrote:
  
   The key is that we want to probe the path between the BR and CE (in both 
   directions) *before*
  allowing regular data packets to flow. We want to know ahead of time 
  whether to allow large packets
  into the tunnel or whether we need to shut the MTU down to 1480 (or 1472 or 
  something) and clamp the
  MSS. Because, once we restrict the tunnel MTU hosts will be stuck with a 
  degenerate MTU indefinitely
  or at least for a long time.
  
   This method makes some sense, but since network conditions can change, I 
   would like to see
 periodic
  re-checks of the tunnel still working with the packet sizes, perhaps 
  pinging itself over the tunnel
  once per minute with the larger packet size if larger packet size is in use.
 
  Section 8 of RFC 5969 could be relevant here.
 
 In that section, I see:
 
The link-local
address of a 6rd virtual interface performing the 6rd encapsulation
would, if needed, be formed as described in Section 3.7 of [RFC4213].
However, no communication using link-local addresses will occur.

Sorry, I was looking at the wrong section. I see now that Section 8
is talking about a method for a CE to send an ordinary data packet
that loops back via the BR. That method is fine, but it is no more
immune to someone abusing the mechanism than would be sending a ping
(or some other NUD message). By using a ping, the BR can impose
rate-limiting on its ping responses whereas with a looped-back
data packet the BR really can't do rate limiting.

Also, Section 8 of RFC5969 only talks about the CE testing the forward
path to the BR. Unless the BR also tests the reverse path to the CE it
has no way of knowing whether the CE can accept large packets. 

Thanks - Fred
fred.l.temp...@boeing.com

 So, if we were to construct the pings from the IPv6 level we would
 want to use link-local source and destination addresses. But, that
 raises a question that would need to be addressed - should the pings
 be constructed at the IPv6 level, the IPv4 level, or some mid-level
 like SEAL?
 
 One other thing about this is that we are specifically not testing
 to determine an exact path MTU. We are only trying to answer the
 binary question of whether or not the tunnel can pass a 1500 byte
 IPv6 packet.
 
 Thanks - Fred
 fred.l.temp...@boeing.com
 
  - Mark
 
  
   --
   Mikael Abrahamssonemail: swm...@swm.pp.se



RE: MTU handling in 6RD deployments

2014-01-17 Thread Templin, Fred L
Hi Mikael,

 -Original Message-
 From: Mikael Abrahamsson [mailto:swm...@swm.pp.se]
 Sent: Friday, January 17, 2014 8:15 AM
 To: Templin, Fred L
 Cc: Mark Townsley; ipv6-ops@lists.cluenet.de
 Subject: RE: MTU handling in 6RD deployments
 
 On Fri, 17 Jan 2014, Templin, Fred L wrote:
 
  Sorry, I was looking at the wrong section. I see now that Section 8 is
  talking about a method for a CE to send an ordinary data packet that
  loops back via the BR. That method is fine, but it is no more immune to
  someone abusing the mechanism than would be sending a ping (or some
  other NUD message). By using a ping, the BR can impose rate-limiting on
  its ping responses whereas with a looped-back data packet the BR really
  can't do rate limiting.
 
 You don't ping the BR, you ping yourself via the BR. The BR only forwards
 the packet.
 
  Also, Section 8 of RFC5969 only talks about the CE testing the forward
  path to the BR. Unless the BR also tests the reverse path to the CE it
  has no way of knowing whether the CE can accept large packets.
 
 You misread the text.

I don't see where it says that the BR should also ping the
CE and cache a boolean ACCEPTS_BIG_PACKETS for this CE. If the BR
doesn't do that, it needs to set its MTU to the CE to 1480 (or 1472
or something).

Thanks - Fred
fred.l.temp...@boeing.com

 --
 Mikael Abrahamssonemail: swm...@swm.pp.se


RE: MTU handling in 6RD deployments

2014-01-17 Thread Templin, Fred L
 cache a boolean ACCEPTS_BIG_PACKETS for this CE.

BTW, the reason I am saying that the only thing we are trying
to determine is whether or not the CE-BR path can pass a 1500
byte packet is that 1500 bytes is the de facto Internet cell
size that most end systems expect to see without getting an
ICMP PTB back.

So, if we can give the hosts at least 1500, then if they want
to try for a larger size they should use RFC4821. This makes
things much easier than trying to probe the CE-BR path for
an exact size.

Thanks - Fred
fred.l.temp...@boeing.com



RE: MTU handling in 6RD deployments

2014-01-17 Thread Templin, Fred L
Hi,

  You don't ping the BR, you ping yourself via the BR. The BR only forwards 
  the packet.
 
 Precisely. The whole idea is to stay on the data plane.

I do not work for a network equipment manufacturer, so I'll take
your word that remaining in the data plane is critical for 6rd BRs
and that high data rate loopbacks are not a problem. So, a looped
back MTU test tests both the forward and reverse path MTUs between
the CE and BR. This is important to the CE: if it tested only the
forward path to the BR, it would not know whether the reverse path
MTU is big enough, and allowing an IPv6 destination outside of the
6rd site to discover a too-large MSS could then result in
communication failures.

In terms of the BR's knowledge of the path MTU to the CE, if we
can assume that the BR will receive the necessary ICMPs from the
6rd site then it can passively rely on translating ICMPv4 PTB
messages coming from the 6rd site into corresponding ICMPv6 PTB
messages to send back to the remote IPv6 correspondent. So, the
BR should be able to set an infinite IPv6 MTU on its tunnel
interface and passively translate any PTB messages it receives.
That, plus the fact that the two IPv6 hosts have to agree on an
MSS excuses the BR from having to do any active probing itself.
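
To make the passive translation concrete, a small sketch (my own; the
constants are just the standard header sizes): an ICMPv4 "fragmentation
needed" arriving from inside the 6rd domain maps to an ICMPv6 Packet Too Big
whose MTU is the reported IPv4 MTU minus the 20-byte encapsulation, floored
at the IPv6 minimum of 1280.

IPV4_ENCAPS = 20     # the 6rd/6in4 encapsulation header
IPV6_MIN_MTU = 1280

def ptb_mtu_v4_to_v6(icmpv4_mtu: int) -> int:
    """MTU to report in the ICMPv6 Packet Too Big that the BR sends back to
    the remote IPv6 host, derived from an ICMPv4 'fragmentation needed'
    received on the IPv4 side of the tunnel."""
    return max(icmpv4_mtu - IPV4_ENCAPS, IPV6_MIN_MTU)

# A 1400-byte IPv4 bottleneck inside the 6rd domain means the remote IPv6
# host should keep its packets to 1380 bytes or less.
assert ptb_mtu_v4_to_v6(1400) == 1380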

So, take what is already in RFC5969, and add that a successful
test of a 1500 byte probe allows the CE to set an infinite IPv6
MTU with the understanding that IPv6 hosts that want to use
sizes larger than 1500 are expected to use RFC4821.

BTW, by infinite I mean 4GB minus the encapsulation overhead.

Thanks - Fred
fred.l.temp...@boeing.com


RE: MTU handling in 6RD deployments

2014-01-17 Thread Templin, Fred L
 BTW, by infinite I mean 4GB minus the encapsulation overhead.

Umm, sorry; that is only for tunnels over IPv6. For tunnels over
IPv4, infinite means 64KB minus the overhead.

Thanks - Fred
fred.l.temp...@boeing.com


RE: MTU handling in 6RD deployments

2014-01-16 Thread Templin, Fred L
Here's another idea on 6RD MTU. When a 6RD CE router first comes up,
have it ping the BR with a 1520 byte ping. If it gets a reply, don't
advertise an MTU in RA options and set the MTU to the BR to infinity.
If it doesn't get a reply, advertise an MTU of 1480 (or maybe 1472).
No fragmentation and reassembly are permitted.

In the reverse direction, when a 6RD BR forwards a packet to a CE
router that it hasn't ping'd before (or hasn't ping'd recently),
have it ping the CE with a 1520 byte ping. If it gets a reply, set
the MTU to the CE to infinity. If it doesn't get a reply, set the
MTU to 1480 (or maybe 1472). Again, no fragmentation and reassembly.

The only state in the BR then is an MTU value for each CE that it
talks to - in the same way ordinary IPv4 nodes maintain a path MTU
cache for the destinations they talk to. 
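
A rough Python sketch of that logic (the use of the system ping utility, the
probe timeout, and the cache lifetime are my own illustrative choices, not
part of any spec):

import subprocess
import time

INFINITY = 65535 - 20      # "infinite" for an IPv4 tunnel: 64KB minus the overhead
FALLBACK_MTU = 1480        # 1500 minus the 20-byte encapsulation
PROBE_PAYLOAD = 1520 - 28  # a 1520-byte IPv4 ping: payload + 20 (IPv4) + 8 (ICMP)
CACHE_LIFETIME = 600       # seconds before a peer is re-probed

_mtu_cache: dict[str, tuple[int, float]] = {}

def probe_peer(peer_ipv4: str) -> bool:
    """True if a 1520-byte, don't-fragment ping to the peer gets a reply."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", "-M", "do", "-s", str(PROBE_PAYLOAD), peer_ipv4],
        capture_output=True)
    return result.returncode == 0

def tunnel_mtu_for(peer_ipv4: str) -> int:
    """Per-peer tunnel MTU, cached like an ordinary path MTU cache."""
    now = time.monotonic()
    cached = _mtu_cache.get(peer_ipv4)
    if cached and now - cached[1] < CACHE_LIFETIME:
        return cached[0]
    mtu = INFINITY if probe_peer(peer_ipv4) else FALLBACK_MTU
    _mtu_cache[peer_ipv4] = (mtu, now)
    return mtu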

Thanks - Fred
fred.l.temp...@boeing.com 


RE: MTU handling in 6RD deployments

2014-01-10 Thread Templin, Fred L
Hi Mikael,

 -Original Message-
 From: Mikael Abrahamsson [mailto:swm...@swm.pp.se]
 Sent: Thursday, January 09, 2014 11:11 PM
 To: Templin, Fred L
 Cc: IPv6 Ops list
 Subject: RE: MTU handling in 6RD deployments
 
 On Thu, 9 Jan 2014, Templin, Fred L wrote:
 
  I don't doubt that your experience is valid for the environment you are
  working in. What I am saying is that there may be many environments
  where setting IPv4 link MTUs to 1520+ is a viable alternative and then
  the hosts can see a full 1500+ MTU w/o ICMPs. SEAL detects when such
  favorable conditions exist and uses limited fragmentation/reassembly
  only when they don't. Or, if fragmentation/reassembly is deemed
  unacceptable for the environment, then clamp the MSS.
 
 6RD relays can be made cheap because they are stateless. 6RD
 implementation in hosts can be made cheap, because it's easy. SEAL isn't
 stateless (obviously, since it can do re-assembly), thus increasing cost
 and complexity both in host and relay.

I understand. But, SEAL is not heavy-duty and steady state fragmentation
and reassembly is not a desired end condition. Instead, it is a sign that
something is out of tune and needs to be tuned properly. Or, if it can't
be tuned, then fall back to MSS clamping and you are no worse off than
without SEAL.

 So while it might have a technical fit, it isn't really an operational or
 monetary fit right this minute. 6RD is widely implemented today, by the
 time any other mechanism is implemented, the use-case for IPv6 tunneled in
 IPv4 might be much less interesting, hopefully more are moving towards
 IPv4 over native IPv6 for new implementations.

There is an alpha implementation available at:

  linkupnetworks.com/seal/sealv2-1.0.tgz

And, I don't know whether any of us can say what the timeframe is for
all native IPv6 everywhere given that we are close to 20yrs in and still
no end in sight for IPv4. Also, SEAL works for all tunnel combinations
of IPvX-over-IPvY and is not specific to 6rd. So, implementation should
be of interest in the general sense for the longer term.

Thanks - Fred
fred.l.temp...@boeing.com
 
 --
 Mikael Abrahamssonemail: swm...@swm.pp.se


RE: MTU handling in 6RD deployments

2014-01-09 Thread Templin, Fred L
Hi Ragnar,

What is the MTU as seen by the IPv6 hosts - 1480? Something less?
Would it not be better if they could see 1500+?

Thanks - Fred
fred.l.temp...@boeing.com

 -Original Message-
 From: ipv6-ops-bounces+fred.l.templin=boeing@lists.cluenet.de 
 [mailto:ipv6-ops-
 bounces+fred.l.templin=boeing@lists.cluenet.de] On Behalf Of Anfinsen, 
 Ragnar
 Sent: Thursday, January 09, 2014 5:35 AM
 To: IPv6 Ops list
 Subject: RE: MTU handling in 6RD deployments
 
 Hi all.
 
 We have now changed and upgraded our BR to handle MTU in a sensible manner.
 
 We are not able to set the IPv4 MTU to more than 1500 due to limitations in 
 our Cisco Cat4500
 platform. So we have to reduce the MTU to 1480 somehow.
 
 Our other problem is that, by mistake, our RA MTU size is set to 1500, and 
 not 1480, as it should be.
 This will be fixed in the next firmware we push out.
 
 Due to the problem with the MTU size, we had a lot of ICMPv6 PTB packets
 being returned from the Cisco 1K unit (BR). This in turn meant that the
 rate-limiting feature in the 1K dropped a lot of the PTB messages going
 back to the sender, making the IPv6 experience sluggish. A typical problem
 was that pictures and such timed out before they were loaded.
 
 To fix this issue, we first of all upgraded our 1K's to the latest IOS (Cisco 
 IOS XE Software, Version
 03.11.00.S) which has the MSS-Clamping feature. This feature was added by 
 Cisco in November. Then we
 added the command ipv6 tcp adjust-mss 1420 to the 6rd tunnel interface.
 
 With this command set, all ICMPv6 PTB packets have disappeared, and the whole
 IPv6 experience has become a whole lot better, with snappy loading of pages
 and pictures. So for TCP everything is good. As soon as we have fixed our
 firmware, this problem should be gone altogether.
 
 Thanks to Tore Anderson for pointing this out to us.
 
 Hopefully this is useful for someone.
 
 Best Regards
 Ragnar Anfinsen
 
 Chief Architect CPE
 IPv6/IPv4 Architect
 Infrastructure
 Technology
 Altibox AS
 
 Phone: +47 51 90 80 00
 Phone direct: +47 51 90 82 35
 Mobile +47 93 48 82 35
 E-mail: ragnar.anfin...@altibox.no
 Skype: ragnar_anfinsen
 www.altibox.no
 
 
 



RE: MTU handling in 6RD deployments

2014-01-09 Thread Templin, Fred L
Hi spz,

 -Original Message-
 From: S.P.Zeidler [mailto:s...@serpens.de]
 Sent: Thursday, January 09, 2014 8:22 AM
 To: Templin, Fred L
 Cc: IPv6 Ops list
 Subject: Re: MTU handling in 6RD deployments
 
 Thus wrote Templin, Fred L (fred.l.temp...@boeing.com):
 
  What is the MTU as seen by the IPv6 hosts - 1480? Something less?
  Would it not be better if they could see 1500+?
 
 Is this about the let's improve a case of flu (router generates too many
 Packet Too Big ICMP) with bubonic plague (let the router do both packet
 fragmentation (wasn't that explicitly forbidden in IPv6?) and packet
 reassembly) idea?

Fragmentation and reassembly would not happen at the IPv6 level; they
would happen at a sub-layer below IPv6 and above IPv4. So, there is no
violation of the IPv6 standard since the tunnel endpoint is acting as
a host when it encapsulates IPv6 packets.

But, in some environments we might not want the 6rd BRs to suffer from
sustained fragmentation and reassembly so a responsible network operator
would fix their IPv4 link MTUs to 1520+. If they can't do that and the
load on the 6rd BR appears too great, then MSS clamping and a degenerate
IPv6 MTU reported to the IPv6 hosts is the only option.
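
To see the arithmetic behind the 1520+ recommendation, a small sketch (my
own; it counts only the 20-byte IPv4 encapsulation and ignores the few extra
bytes a SEAL header itself would add, which shift the threshold up slightly):

IPV4_HDR = 20          # bytes of encapsulation for 6rd/6in4

def midlayer_fragments(inner_ipv6_len: int, ipv4_link_mtu: int,
                       encaps_overhead: int = IPV4_HDR) -> int:
    """Number of mid-layer fragments needed to carry one inner IPv6 packet.

    The fragmentation happens below IPv6 (the inner packet is untouched)
    and above IPv4 (each fragment travels as a whole IPv4 packet).
    """
    room = ipv4_link_mtu - encaps_overhead
    if room <= 0:
        raise ValueError("link MTU smaller than the encapsulation overhead")
    return -(-inner_ipv6_len // room)   # ceiling division

# With ordinary 1500-byte IPv4 links a 1500-byte IPv6 packet needs two pieces;
# raise the IPv4 link MTU to 1520+ and it travels whole, with no fragmentation.
assert midlayer_fragments(1500, 1500) == 2
assert midlayer_fragments(1500, 1520) == 1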

This is not a one size fits all solution for all 6rd domains; some
might be better able to manage their IPv4 link MTUs and/or accept
steady-state fragmentation than others.

Thanks - Fred
fred.l.temp...@boeing.com
 
 regards,
   spz
 --
 s...@serpens.de (S.P.Zeidler)


RE: MTU handling in 6RD deployments

2014-01-09 Thread Templin, Fred L
Hi Ragnar,

 -Original Message-
 From: Anfinsen, Ragnar [mailto:ragnar.anfin...@altibox.no]
 Sent: Thursday, January 09, 2014 11:36 AM
 To: Templin, Fred L; S.P.Zeidler
 Cc: IPv6 Ops list
 Subject: Re: MTU handling in 6RD deployments
 
 On 09.01.14 17:36, Templin, Fred L fred.l.temp...@boeing.com wrote:
 
 
 But, in some environments we might not want the 6rd BRs to suffer from
 sustained fragmentation and reassembly so a responsible network operator
 would fix their IPv4 link MTUs to 1520+. If they can't do that and the
 load on the 6rd BR appears too great, then MSS clamping and a degenerate
 IPv6 MTU reported to the IPv6 hosts is the only option.
 
 The problem with your statement

Where do you see a problem with my statement - it agrees with what
you said below:

 is that many L3 access networks do not
 support an MTU greater than 1500 on the access port. And if the RA MTU is set
 to 1480, you would not see any problems at all. However, there are some
 retail routers which do not set the MTU to 1480 when using a 6rd tunnel.
 In these cases adjusting the MSS is a good and efficient way of correcting
 that problem. Our experience so far is that MSS-clamping does not have any
 additional CPU load compared to not doing it.

I don't doubt that your experience is valid for the environment you
are working in. What I am saying is that there may be many environments
where setting IPv4 link MTUs to 1520+ is a viable alternative and then
the hosts can see a full 1500+ MTU w/o ICMPs. SEAL detects when such
favorable conditions exist and uses limited fragmentation/reassembly
only when they don't. Or, if fragmentation/reassembly is deemed
unacceptable for the environment, then clamp the MSS.

Thanks - Fred
fred.l.temp...@boeing.com
 
 /Ragnar
 



RE: MTU handling in 6RD deployments

2014-01-07 Thread Templin, Fred L
Hi again,

 Second (and more importantly) reassembly is not needed
 for packets of any size if the path can pass a 1500 byte ping packet.

I should have qualified this by saying that the mechanism still
works even if the BR responds to pings subject to rate limiting.

Thanks - Fred
fred.l.temp...@boeing.com



RE: Caching learned MSS/MTU values

2013-10-18 Thread Templin, Fred L
Hi,

 -Original Message-
 From: ipv6-ops-bounces+fred.l.templin=boeing@lists.cluenet.de
 [mailto:ipv6-ops-bounces+fred.l.templin=boeing@lists.cluenet.de] On
 Behalf Of Hannes Frederic Sowa
 Sent: Friday, October 18, 2013 12:31 AM
 To: Jason Fesler
 Cc: IPv6 operators forum
 Subject: Re: Caching learned MSS/MTU values
 
 On Thu, Oct 17, 2013 at 09:05:24AM -0700, Jason Fesler wrote:
  I'm once again considering trying to improve on the test-ipv6.com
 PMTUD
  failure detection. Due to limitations on the client side I can't use
 raw
  sockets to generate test packets. The client is JavaScript and runs
 in a
  browser; all I can do is try fetching urls from multiple locations,
 each
  with a different MTU.
 
  I know that the various operating systems tend to cache any PMTUD
 issues
  that they can detect; future connections to that destination will use
  smaller packets accordingly. What I can not see to find is an
 adequate
  description of what granularity this gets cached with. /128? /64?
  Also, in
   the absence of Packet Too Big messages, what does each OS do?
 
 Linux, too, does cache on /128 basis. In the absence of PTB the
 connection
 will get stuck. ;)

Right, and we are observing non-negligible cases where PTBs are either
not delivered or lost somewhere along the way. That is why there is a
growing push for wider deployment of RFC4821 for end systems, and why
I am investing my time in developing SEAL for tunnels.

Thanks - Fred
fred.l.temp...@boeing.com

 
 Greetings,
 
   Hannes


RE: Caching learned MSS/MTU values

2013-10-18 Thread Templin, Fred L
Hi Hannes,

 -Original Message-
 From: Hannes Frederic Sowa [mailto:han...@stressinduktion.org]
 Sent: Friday, October 18, 2013 9:24 AM
 To: Templin, Fred L
 Cc: Jason Fesler; IPv6 operators forum
 Subject: Re: Caching learned MSS/MTU values
 
 On Fri, Oct 18, 2013 at 03:17:28PM +, Templin, Fred L wrote:
  Hi,
 
   -Original Message-
   From: ipv6-ops-bounces+fred.l.templin=boeing@lists.cluenet.de
   [mailto:ipv6-ops-
 bounces+fred.l.templin=boeing@lists.cluenet.de] On
   Behalf Of Hannes Frederic Sowa
   Sent: Friday, October 18, 2013 12:31 AM
   To: Jason Fesler
   Cc: IPv6 operators forum
   Subject: Re: Caching learned MSS/MTU values
  
   On Thu, Oct 17, 2013 at 09:05:24AM -0700, Jason Fesler wrote:
I'm once again considering trying to improve on the test-ipv6.com
   PMTUD
failure detection. Due to limitations on the client side I can't
 use
   raw
sockets to generate test packets. The client is JavaScript and
 runs
   in a
browser; all I can do is try fetching urls from multiple
 locations,
   each
with a different MTU.
   
I know that the various operating systems tend to cache any PMTUD
   issues
that they can detect; future connections to that destination will
 use
smaller packets accordingly. What I can not see to find is an
   adequate
description of what granularity this gets cached with. /128? /64?
    Also, in
 the absence of Packet Too Big messages, what does each OS do?
  
   Linux, too, does cache on /128 basis. In the absence of PTB the
   connection
   will get stuck. ;)
 
  Right, and we are observing non-negligible cases where PTBs are
 either
  not delivered or lost somewhere along the way. That is why there is a
  growing push for wider deployment of RFC4821 for end systems, and why
  I am investing my time in developing SEAL for tunnels.
 
  There is basic support for MTU probing for TCP. It is currently
  deactivated by default: cat /proc/sys/net/ipv4/tcp_mtu_probing = 0
  
  Guess it has not seen the testing it needs to be activated by
  default.

Yes, I had heard that there was an off-by-default Linux implementation
of RFC4821. I also heard that it was not yet fully compliant with the
spec, but that was a while ago and it may have improved since.
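
For anyone who wants to experiment, a minimal sketch of checking and enabling
that sysctl from Python (assumes Linux and, for the write, root privileges):

from pathlib import Path

SYSCTL = Path("/proc/sys/net/ipv4/tcp_mtu_probing")

def mtu_probing_mode() -> int:
    """0 = off, 1 = probe only after a black hole is suspected, 2 = always probe."""
    return int(SYSCTL.read_text().strip())

def enable_mtu_probing(mode: int = 1) -> None:
    """Turn on RFC4821-style probing for TCP (the setting also covers TCP over IPv6)."""
    SYSCTL.write_text(f"{mode}\n")

if __name__ == "__main__":
    print("tcp_mtu_probing =", mtu_probing_mode())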

 I still have to take a closer look at SEAL. Thanks for the reminder. ;)

Sure. I have recently published an alpha linux implementation of SEAL:

http://www.ietf.org/mail-archive/web/ipv6/current/msg19114.html

It is still in early phases and does not yet fully implement the spec
but does implement the core RFC4821 path MTU probing and fragmentation
requirements for several different varieties of tunnels. The code is
also quite ugly, and I would welcome any help on cleaning it up and/or
implementing more features in the spec.

Thanks - Fred
fred.l.temp...@boeing.com

 Greetings,
 
   Hannes



RE: Caching learned MSS/MTU values

2013-10-18 Thread Templin, Fred L
Hi Hannes,

 Oh, that is interesting. I'll have a look at the weekend.

OK. I had to roll another version to make some minor
changes - see:

http://linkupnetworks.com/seal/sealv2-0.2.tgz
http://www.ietf.org/id/draft-templin-intarea-seal-64.txt

I will let it rest for now, so this would be the version to
start looking at. Let me know if there are any questions or
comments.

Thanks - Fred
fred.l.temp...@boeing.com


RE: Google's unusual traffic notification

2013-07-25 Thread Templin, Fred L
Hi John,

If you suspect an ISATAP problem, I would like to understand it better, because
I am not aware of any outstanding issues. Also, please refer to RFC 6964,
"Operational Guidance for IPv6 Deployment in IPv4 Sites Using the Intra-Site
Automatic Tunnel Addressing Protocol (ISATAP)".

Thanks - Fred


From: Brzozowski, John Jason [mailto:j...@jjmb.com]
Sent: Wednesday, July 24, 2013 6:17 PM
To: Templin, Fred L
Cc: Tore Anderson; ipv6-ops@lists.cluenet.de
Subject: RE: Google's unusual traffic notification


My case was ISATAP related. Perhaps specific to my deployment.
On Jul 24, 2013 1:52 PM, Templin, Fred L
fred.l.temp...@boeing.com wrote:
Hi John - are saying that you are suspecting an ISATAP problem?

Thanks - Fred

From: ipv6-ops-bounces+fred.l.templin=boeing@lists.cluenet.de
[mailto:ipv6-ops-bounces+fred.l.templin=boeing@lists.cluenet.de] On Behalf Of Brzozowski, John Jason
Sent: Wednesday, July 24, 2013 10:27 AM
To: Tore Anderson
Cc: ipv6-ops@lists.cluenet.de
Subject: Re: Google's unusual traffic notification
Subject: Re: Google's unusual traffic notification

We have seen this in the past from corporate desktop blocks used for ISATAP.  I 
found this to be strange.  Note I have not seen this for some time.

John