* Hannes Frederic Sowa
> Tunnels should actually work fine and icmp rate limiting should take
> place per destination (or on a /64 boundary). Either someone messed up
> their filters or we have a software bug (maybe we should just introduce
> a netfilter target which does mid-path fragmentation of IPv6 packets :P ).
The key word here is "should" ;-) In my experience, the reality is more grim. DIY-tunnelers are greatly over-represented among the users who report IPv6-related problems. It would appear that DIY tunneling has a special attraction to people in that dangerous middle ground of cluefulness: fully capable of tinkering around and changing all sorts of stuff, yet lacking the clue to fully understand what they're doing, why their stuff doesn't end up working very well, or how to get it working again. (I'm not saying that *every* DIY-tunneler fits that description, just that they're over-represented.)

Centrally managed tunnels like 6RD fare better. Presumably the providers responsible for those know that relying on PMTUD to work flawlessly 100% of the time for 100% of the destinations on the internet is a non-starter, and they implement tricks like RA MTU options and/or TCP MSS clamping to give the subscriber a better chance of having a good user experience (a rough sketch of what those tricks might look like follows below my signature). This isn't anything new with IPv6, BTW; clamping the TCP MSS was commonplace in IPv4 PPPoE deployments too, for exactly the same reasons.

Fortunately, the DIY-tunnelers constitute an insignificant number of users, so it's okay to simply tell them «you broke it so you get to keep the pieces». Had there been significantly more of them, we would probably have felt forced to jack down the MTU/MSS on the server end. That would have been a real shame, because it would have impacted all users, including those with native connectivity.

Also worth noting: even if PMTUD did work 100% of the time, it's still not without cost. The time to first byte of a download gets a penalty equal to the RTT between the server and the tunnel ingress server/router, because the server's first full-sized packet is dropped at the tunnel ingress and can only be retransmitted once the ICMPv6 Packet Too Big message has made its way back. So you end up with a worse user experience than with IPv4, even though the network latencies/paths are identical.

Tore
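P.S. For illustration, on a Linux-based tunnel router the two tricks mentioned above could look something like the following. This is only a minimal sketch on my part; the interface name, prefix, and the MTU value of 1480 (1500 minus the 20-byte IPv4 encapsulation overhead of a 6in4/6RD tunnel) are placeholders, not taken from any particular deployment.

MSS clamping with netfilter's TCPMSS target:

  # Rewrite the MSS option in forwarded TCP SYNs so that it never
  # exceeds what the path MTU of the outgoing interface allows:
  ip6tables -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN \
            -j TCPMSS --clamp-mss-to-pmtu

Advertising a reduced MTU in the RA MTU option, e.g. in radvd.conf:

  interface eth0
  {
      AdvSendAdvert on;
      # Tell hosts on the LAN to use a 1480-byte MTU up front, so
      # they never emit packets too big for the tunnel:
      AdvLinkMTU 1480;
      prefix 2001:db8::/64
      {
      };
  };

The two are complementary: MSS clamping only helps TCP, while the RA MTU option covers all traffic from hosts that honour it.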