> i haven't seen links that changes MTU (*1) dynamically based on
> dynamic changes from outside (*2).  in this case (*1) is "IPv6 MTU
> on top of tunnel" and (*2) is "IPv4 routing changes".
> maybe my experience is limited, but anyway, i have never seen one.
I'm having a hard time parsing this regular expression. Looking at the
definitions in RFC 2460 and RFC 1981 I see it clearly:

   link       - a communication facility or medium over which nodes can
                communicate at the link layer, i.e., the layer
                immediately below IPv6.  Examples are Ethernets (simple
                or bridged); PPP links; X.25, Frame Relay, or ATM
                networks; and internet (or higher) layer "tunnels",
                such as tunnels over IPv4 or IPv6 itself.

   link MTU   - the maximum transmission unit, i.e., maximum packet
                size in octets, that can be conveyed in one piece over
                a link.

When the link is a tunnel, the link MTU is defined to effectively vary
based on the path MTU of that tunnel, since different size packets can
be conveyed in one piece over the tunnel at different points in time.

> if link MTUs are not stable enough, there will be more ICMP too big
> than we desire.  Please provide an analysis.

I don't think this is the case. The time limit before a path MTU
increase is attempted (10 minutes by default in RFC 1981) limits this,
whether the potential for increase is for a tunnel link (due to IPv4
routing changes) or due to IPv6 routing using a different set of links.

When the path MTU decreases (for a tunnel link, or due to IPv6 routing
using a different set of links) a single ICMP too big is sufficient.
(Of course, if there is a window full of large packets you might see
more than one, subject to ICMP error rate limiting.)

The point is that I don't see why there should be anything new here
for tunneling. The tradeoffs for path MTU discovery are:
 - the downside of ICMP error packets
 - the downside of additional round-trip times due to data packets
   being lost when a packet is too big
 - the benefit of making the retransmission unit the same as the loss
   unit, while being able to send larger packets

If this tradeoff means PMTUD makes sense when there is no tunneling,
then I don't see why the same conclusion doesn't hold for the case of
nested PMTUD using tunnels.
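To make concrete what "the link MTU effectively varies with the path
MTU of the tunnel" amounts to at a tunnel endpoint, here is a minimal
sketch. The function name, the use of a plain 20-octet IPv4
encapsulation overhead, and the clamping policy are illustrative
assumptions of mine, not taken from any particular implementation:

```python
IPV6_MIN_MTU = 1280       # RFC 2460: minimum IPv6 link MTU
IPV4_ENCAP_OVERHEAD = 20  # assumed: basic IPv4 header added by the tunnel

def tunnel_link_mtu(ipv4_path_mtu: int) -> int:
    """IPv6 link MTU presented by an IPv6-over-IPv4 tunnel (sketch).

    The tunnel conveys IPv6 packets inside IPv4, so the usable IPv6
    MTU is the IPv4 path MTU minus the encapsulation overhead, but
    never below the IPv6 minimum of 1280 octets (below that, the
    tunnel would have to rely on IPv4 fragmentation instead).
    """
    return max(ipv4_path_mtu - IPV4_ENCAP_OVERHEAD, IPV6_MIN_MTU)

# When IPv4 routing changes shrink the IPv4 path MTU, the tunnel's
# IPv6 link MTU shrinks with it, and a single ICMP too big to the
# IPv6 sender is then sufficient.
print(tunnel_link_mtu(1500))  # 1480: typical Ethernet IPv4 path
print(tunnel_link_mtu(1280))  # 1280: clamped at the IPv6 minimum
```

The same arithmetic applies when an ICMPv4 "fragmentation needed"
arrives at the tunnel ingress: the endpoint recomputes the tunnel's
IPv6 link MTU from the new IPv4 path MTU and reports it upward.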
So I'm looking forward to your performance analysis.

> with the cost of complexity in tunnel endpoint implementations (needs
> to maintain IPv4 path MTU and reflect it to IPv6 tunnel link MTU).

Yes. And there is additional complexity to implement PMTUD in the
non-tunneling case. Thus I think your arguments about performance and
implementation complexity can also be used to argue that we shouldn't
do PMTUD at all and instead always send 1280-byte packets.

FWIW, the Solaris implementation doesn't keep soft state in the
tunneling pseudo-device driver. It is all reflected in the conceptual
neighbor cache for the "upper IP layer" when an ICMP too big arrives
from the "lower IP layer".

> i would really like to know how many of existing implementations follow
> this part of RFC2983.

Solaris does. It might also make sense to ask what folks do that have
IPv4 in IPv4 IPsec tunnels. I *think* (but am far from certain) that
the disadvantage of doubling the number of packets, due to the packets
with the IP+ESP headers being too big, means that it is common for
such implementations to reflect the PMTU up through the tunneling
pseudo-interface.

  Erik
--------------------------------------------------------------------
IETF IPng Working Group Mailing List
IPng Home Page:                      http://playground.sun.com/ipng
FTP archive:                      ftp://playground.sun.com/pub/ipng
Direct all administrative requests to [EMAIL PROTECTED]
--------------------------------------------------------------------