On Wednesday, Sep 17, 2003, at 22:59 Europe/Amsterdam, Keith Moore wrote:

a protocol that depends on the random kindness of remote routers won't fly.

I have no problem with trying to choose a signaling protocol that won't
look like an attractive target to ignorant sysadmins, or with trying
to pick a signaling protocol that is easy to distinguish from traffic
that sysadmins will be tempted to filter.

That's only a tiny part of the problem.


Marcelo Bagnulo proposed a multi6 solution that was based on catching traffic that can't be forwarded. But we came to the conclusion that this won't fly because there is always the possibility that the last router in the path isn't aware of a failure situation. For nice-to-have functionality such as MTU optimization this shouldn't be a huge problem, as you can fall back to some kind of conservative behavior when you don't get the desired feedback. For multihoming this is very different, because the whole idea is to provide _better_ reachability. Having to depend on a router somewhere that isn't under the control of either endpoint to do something that routers often don't do today is asking for trouble. In a private network this _might_ possibly work. On the Internet, it's useless.
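To make the asymmetry concrete, here is a minimal Python sketch. The network layer is simulated and all names and values are my own illustration, not taken from Marcelo's proposal: MTU optimization can shrug off missing feedback by falling back to a conservative default, while multihoming cannot, because the feedback itself is the reachability mechanism.

import time

# Illustrative values only: the IPv6 minimum link MTU as the
# conservative fallback, and an arbitrary feedback timeout.
CONSERVATIVE_MTU = 1280
FEEDBACK_TIMEOUT = 2.0

def wait_for_router_feedback(timeout):
    """Hypothetical stand-in for "some router on the path tells us the
    packet could not be forwarded". It simulates the common case on the
    public Internet: silence."""
    time.sleep(timeout)
    return None

def negotiate_mtu(optimistic_mtu=9000):
    # MTU optimization degrades gracefully: if the hoped-for feedback
    # never arrives, fall back to a conservative value and still work.
    feedback = wait_for_router_feedback(FEEDBACK_TIMEOUT)
    return optimistic_mtu if feedback else CONSERVATIVE_MTU

def rehome_after_failure(alternate_locators):
    # Multihoming cannot degrade the same way: the feedback *is* the
    # feature. If the router nearest the failure never reports it,
    # being "conservative" does not restore reachability.
    feedback = wait_for_router_feedback(FEEDBACK_TIMEOUT)
    if feedback is None:
        return None          # nothing useful to fall back to
    return alternate_locators[0]

if __name__ == "__main__":
    print("MTU without feedback:", negotiate_mtu())                          # 1280
    print("Rehoming without feedback:", rehome_after_failure(["2001:db8::2"]))  # None

The point is purely the control flow: the second function has no sensible else branch when the routers stay silent.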

But vague assertions of the
form "sysadmins will filter X" are unsupportable.  If we pay them too
much heed we'll end up trying to design a protocol to meet an
unrealistic set of conditions, and either we'll never finish or we'll
put the robustness in the wrong place.

Again, I agree in principle. But this can never be an excuse for lazy protocol design or implementation. Didn't we learn anything from the PMTUD disaster? Classic path MTU discovery depends on ICMP "packet too big" messages coming back from routers, and in practice enough of those get filtered that connections simply black-hole.


