On 2021-7-20, at 1:19, Roberto Peon <fenix=40fb....@dmarc.ietf.org> wrote:
> 
> If we have to send data along a path in order to discover properties about 
> that path, then sending less data on the path means discovering less about 
> that path.
> 
> The ideal would be to send *enough* data on any one path to maintain an 
> understanding of its characteristics (including variance), and no more than 
> that, and then to schedule the rest of the data to whichever path(s) are best 
> at the moment.

^^^ This.
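
FWIW, here's a rough sketch in Python of that scheduling idea - the Path fields, 
window sizes and "enough traffic" floor are placeholders I made up for 
illustration, not values from any particular multipath stack:

    # Sketch: keep every path minimally exercised so its estimates stay fresh,
    # then put the bulk of the data on whichever path currently looks best.
    from dataclasses import dataclass

    @dataclass
    class Path:
        name: str
        smoothed_rtt: float      # seconds, from recent ACKs
        bytes_in_flight: int = 0
        cwnd: int = 14600        # congestion window in bytes (assumed)
        probe_floor: int = 2920  # "enough" in-flight bytes to keep measuring (assumed)

        def can_send(self, size: int) -> bool:
            return self.bytes_in_flight + size <= self.cwnd

        def send(self, size: int) -> None:
            self.bytes_in_flight += size

    def schedule(paths: list[Path], packet_sizes: list[int]) -> dict[str, int]:
        """Returns bytes scheduled per path name."""
        sent = {p.name: 0 for p in paths}
        queue = list(packet_sizes)
        # Step 1: give each path just enough traffic to keep its estimates fresh.
        for p in paths:
            while queue and p.bytes_in_flight < p.probe_floor and p.can_send(queue[0]):
                p.send(queue[0])
                sent[p.name] += queue.pop(0)
        # Step 2: everything else goes to the best path (lowest RTT here) with room.
        while queue:
            candidates = [p for p in paths if p.can_send(queue[0])]
            if not candidates:
                break  # all paths congestion-limited; wait for ACKs
            best = min(candidates, key=lambda p: p.smoothed_rtt)
            best.send(queue[0])
            sent[best.name] += queue.pop(0)
        return sent

A real scheduler would of course weigh more than smoothed RTT (loss, capacity, 
variance), but the split between "keep measuring" traffic and bulk traffic is 
the point.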

Because the Internet has no explicit network-to-endpoint signaling, an endpoint 
must build its understanding of a path's properties by exercising it, and 
specifically by exercising it to a degree that causes queues to form (to obtain 
"under load" RTTs; see bufferbloat) and congestion loss to occur (to obtain an 
understanding of the available path capacity). Some people have called this 
"putting pressure on a path".

There has been a long-standing assumption that if you exercised a path in the 
(recent) past, its properties probably haven't changed much by the time you want 
to start exercising it again. This is why heuristics like caching path 
properties (RTTs, etc.) are often of benefit - often, but not always, and maybe 
never in some scenarios (e.g., overcommitted CGNs).
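
The caching part is simple enough to sketch - the 10-minute TTL below is an 
arbitrary number picked for illustration, and per the above, behind something 
like an overcommitted CGN even a fresh entry may be worthless:

    import time
    from typing import Optional

    class PathPropertyCache:
        """Remember per-destination path properties, but only for a while."""

        def __init__(self, ttl_seconds: float = 600.0):
            self.ttl = ttl_seconds
            self._entries: dict[str, tuple[float, dict]] = {}

        def store(self, destination: str, properties: dict) -> None:
            self._entries[destination] = (time.monotonic(), properties)

        def lookup(self, destination: str) -> Optional[dict]:
            entry = self._entries.get(destination)
            if entry is None:
                return None
            stored_at, properties = entry
            if time.monotonic() - stored_at > self.ttl:
                del self._entries[destination]  # too stale to trust; re-measure
                return None
            return properties

    # e.g.: cache.store("192.0.2.1", {"srtt": 0.032, "cwnd": 29200})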

There has been some work on this in the past for MPTCP. For example, on mobile 
devices - which most often have multiple possible paths to a destination, via 
WiFi and cellular - exercising multiple paths comes with a distinct increase in 
energy usage. So you need a heuristic to determine whether the potential benefit 
of going multipath is worth the energy cost of probing multiple paths before 
you do so.
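
Such a heuristic can be as crude as the sketch below; the inputs and thresholds 
are invented for illustration and are not what MPTCP implementations actually 
use:

    def should_probe_second_path(transfer_bytes: int,
                                 primary_rate_estimate: float,  # bytes/s on the current path
                                 battery_fraction: float,       # 0.0 .. 1.0
                                 min_transfer_bytes: int = 5_000_000,
                                 min_battery: float = 0.2,
                                 min_expected_seconds: float = 2.0) -> bool:
        """Decide whether bringing up (and probing) a second path is worth the energy."""
        # A short transfer finishes before a second path could help, so probing
        # the other radio would mostly burn energy for nothing.
        if transfer_bytes < min_transfer_bytes:
            return False
        # Don't spend scarce battery on speculative probing.
        if battery_fraction < min_battery:
            return False
        # If the primary path already finishes quickly, the extra benefit is marginal.
        expected_seconds = transfer_bytes / max(primary_rate_estimate, 1.0)
        return expected_seconds > min_expected_seconds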

Thanks,
Lars
