Excerpts from Christian Vogt at 20:36:11 +0200 on Fri 28 Nov 2008:
> Chris -
>
> I agree with you that the overhead created by a host-based solution
> needs to be carefully looked at. Yet I don't think that high overhead
> is inevitable for host-based solutions. As you say, it depends on
> protocol design. Take, e.g., the overhead that is needed for
> connectivity verification: Whereas host-based solutions cannot
> aggregate explicit probing messages as efficiently as network-based
> solutions, host-based solutions can potentially use implicit probing
> more efficiently than network-based solutions because hosts have
> session state that may include useful connectivity information. So the
> overhead-efficiency of a host-based solution depends to a significant
> extent on how effectively the solution exploits such session state.
We need real numbers. First, probing in general has serious scaling
problems, and we need to quantify its overhead relative to actual data
packets. Second, you say endpoints have session state that may include
useful connectivity information, but they only get that state from
traffic flowing (e.g., a TCP keepalive), and points further into the
network see even more traffic: they would observe that TCP keepalive
and many others, so they wouldn't need such state. Endpoints have an
advantage when flows are asymmetric, but lose it when they have to
check inactive locators for possible liveness.

In any case, one of the fundamental goals should be that applications
are not led down a garden path by either lower layers in the stack or
the network. Whatever we come up with needs to provide robustness in
the face of network changes without overwhelming the network.

_______________________________________________
rrg mailing list
[email protected]
https://www.irtf.org/mailman/listinfo/rrg
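As a strawman for the "real numbers" question, here is a toy back-of-envelope model of per-host explicit-probing overhead, including a term for how much implicit probing (liveness inferred from session state on bidirectional flows) can shave off. Every parameter below is an assumption chosen for illustration, not a measurement:

```python
# Toy model of connectivity-verification overhead for a host-based scheme.
# All parameters are illustrative assumptions, not measured values.

def probe_overhead_ratio(peers, probe_interval_s, probe_bytes,
                         data_rate_Bps, implicit_fraction=0.0):
    """Explicit-probing bandwidth as a fraction of data bandwidth.

    `implicit_fraction` is the share of peers whose liveness the host can
    infer from existing session traffic, so no explicit probe is needed.
    """
    explicit_peers = peers * (1.0 - implicit_fraction)
    probe_rate_Bps = explicit_peers * probe_bytes / probe_interval_s
    return probe_rate_Bps / data_rate_Bps

# Assumptions: 50 active peers, one 60-byte probe every 10 s per peer,
# 10 KB/s of aggregate data traffic.
naive = probe_overhead_ratio(peers=50, probe_interval_s=10,
                             probe_bytes=60, data_rate_Bps=10_000)
# Same, but 80% of peers covered by implicit probing via session state.
with_state = probe_overhead_ratio(peers=50, probe_interval_s=10,
                                  probe_bytes=60, data_rate_Bps=10_000,
                                  implicit_fraction=0.8)
print(f"all-explicit probing: {naive:.1%} of data traffic")      # 3.0%
print(f"with session state:   {with_state:.1%} of data traffic")  # 0.6%
```

The interesting part is not the absolute numbers but what the model leaves out: probing inactive locators (where implicit_fraction is necessarily zero) and asymmetric flows, which is exactly where the host-based and network-based approaches diverge.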
