Hi, Yunfei,

On Fri, Sep 17, 2021 at 1:39 AM Yunfei Ma <yfmas...@gmail.com> wrote:

> Hi Robin,
>
> Thanks for the question and sorry for the late reply. With regard to the
> experiment, the goal was to achieve high bandwidth, low latency, and high
> reliability for a video stream at the same time. Intuitively, adding more
> paths should give you an immediate improvement. But, as you pointed out,
> HoL blocking might undo the benefit. So what we found was a way to
> overcome this, and the technique was to use QoE feedback in conjunction
> with scheduling techniques. What the results tell us is that if you have a
> stream that is time-critical and bandwidth-intensive, then with the
> user-space nature of QUIC you now have a way forward using multipath.
>
> But please keep in mind that you can choose to use whatever scheduling
> algorithm you like with the draft. In the experiment, we are trying to push
> the limit, but it is indeed one of the many possibilities. If you want to
> send streams that do not require very high reliability or high bandwidth,
> you can definitely use the fixed-path-per-stream strategy. Recently, we
> have also done testing with fixed-path-per-stream with
> draft-liu-multipath-quic for certain scenarios. Here are some observations:
> if you have a large number of streams, having an assigned path for each can
> actually give you more combined bandwidth utilization, but not every stream
> benefits, and some could suffer because of a bad path assignment. If you
> really care about long-tail performance, then I would recommend
> sending a stream on multiple paths.
>
> How to use multipath, and which scheduling mode to use, depends on what
> you are trying to optimize. I hope the draft can provide a starting point,
> and please feel free to go beyond it.
>

This is an important point that we shouldn't lose sight of. I think we're
tripping over an assumption about how smart "QUIC with multiple paths"
should be about scheduling packets on each path, versus how much
applications should be involved in that scheduling.

Applications using MP-TCP, especially for uses like bandwidth bonding,
could reasonably let MP-TCP "do the right thing" - if you've got paths,
use them - and MP-TCP can manage multiple paths with different
characteristics fairly well if the goal is to use all the available
bandwidth.

Applications using QUIC with multiple paths have a lot more possibilities.
For instance, they can use reliable streams and unreliable datagrams
simultaneously across multiple paths.
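
To make that concrete, here is the kind of per-frame policy an endpoint
could let an application ask for. This is a minimal Python sketch; the Path
fields and both helper functions are invented for illustration and don't
correspond to any real implementation.

    from dataclasses import dataclass

    @dataclass
    class Path:
        name: str
        rtt_ms: float      # smoothed RTT observed on this path
        loss_rate: float   # observed loss rate on this path
        validated: bool    # path has passed QUIC path validation

    def path_for_stream_data(paths):
        # Stream data is retransmitted anyway, so prefer the lowest-loss
        # validated path to limit retransmissions and HoL blocking.
        return min((p for p in paths if p.validated), key=lambda p: p.loss_rate)

    def path_for_datagram(paths):
        # DATAGRAM frames are never retransmitted, so latency matters more
        # than loss: prefer the lowest-RTT validated path.
        return min((p for p in paths if p.validated), key=lambda p: p.rtt_ms)

    wifi = Path("wifi", rtt_ms=18.0, loss_rate=0.02, validated=True)
    lte = Path("lte", rtt_ms=45.0, loss_rate=0.001, validated=True)
    # With these numbers: stream data goes to lte, datagrams go to wifi.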

ISTM that we should be clear about what QUIC implementations will be
responsible for, and what decisions applications will be responsible for.

I have been maintaining a collection of various goals for path selection
(most recently, in
https://datatracker.ietf.org/doc/html/draft-dawkins-quic-multipath-selection-01#section-3).
It's not a short list.

If we're also including user preferences, that would mean knowing the
difference between cellular connections that are metered, cellular
connections that are unmetered but being throttled, and cellular
connections that are unmetered and unthrottled, and then cross-matching
them with wifi connections that are likely to work better than cellular
vs. wifi connections that are only intermittently performing well. That's
not a small amount of complexity.
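
Just to give a feel for how much policy is hiding in that cross-matching,
here is a toy scoring function. The attributes, weights, and example
numbers are all invented; a real policy would come from user preferences
and the application rather than from constants in the transport.

    def path_preference(kind, metered, throttled, recent_goodput_mbps,
                        goodput_variance):
        # Toy scoring: higher means "prefer this path".
        score = recent_goodput_mbps - 2.0 * goodput_variance
        if kind == "cellular" and metered:
            score -= 50.0   # user pays per byte: strongly avoid
        elif kind == "cellular" and throttled:
            score -= 10.0   # unmetered but capped: mildly avoid
        return score

    # A wifi path that is only intermittently good (high variance) can
    # still lose to a steady, unmetered, unthrottled cellular path:
    # path_preference("wifi", False, False, 40.0, 25.0)     -> -10.0
    # path_preference("cellular", False, False, 15.0, 1.0)  ->  13.0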

Maybe that complexity should be managed by applications until we have a
better understanding of what applications are actually doing?

Best,

Spencer


> Cheers,
> Yunfei
>
>
>
>
>> Thanks for your extended explanations on Multipath HoL-blocking and
>> especially:
>>
>> > I think the stream dependencies you mentioned here are a great point. In
>> our implementation, we introduced stream-priority-based reinjection, which
>> tries to address such dependencies (there is a figure in the material that
>> Yanmei sent). But we haven't tried the case where each stream is limited to
>> a single path. In our case, streams are distributed over multiple paths. I would
>> definitely want to hear more about the application you are dealing with,
>> and maybe for wired transport, such a design is needed.
>>
>> This is exactly what I was trying to explore in my previous mail. You're
>> basically intentionally causing (or perhaps risking?) HOL blocking because
>> you split a single stream over multiple paths.
>> As noted by Christian with the 'equal cost multipath', this can have
>> bandwidth usage benefits, but only if paths are usable/similar. If not, HOL
>> blocking might undo all the benefits you get from this setup (and using a
>> single path per stream would be better).
>> So my question was: where is the inflection point where you might decide
>> to switch modes? At which parameters is one better than the other?
>> I'd hoped you would have experimented with the fixed-path-per-stream
>> setup to get some insight into this.
>>
>> In my mind, the idea of doing a purely transport-level multipath
>> scheduler (i.e., without taking into account application layer streams /
>> data dependencies / etc.)
>> has historically made some sense for TCP / for completely separated
>> stacks, as the transport didn't have that type of information available.
>> It is however utterly strange to me that this approach would continue for
>> QUIC (at least in endpoint multipath, not things like in-network
>> aggregators that have been discussed),
>> where we have clear splits between streams and (hopefully) already some
>> type of prioritization information for each stream.
>> For QUIC, I'd expect one-path-per-stream to be the default, with
>> multiple-paths-per-stream as an edge case if you have a single,
>> high-traffic stream (which I do assume is your situation with a video
>> stream).
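
A minimal sketch of that default, purely for illustration (the names are
invented and this mirrors no actual stack): pin each stream to one path
when it is created, using stream priority to claim the better paths, and
only consider splitting a stream when no single path can carry it.

    def assign_paths(streams, paths):
        # streams: list of (stream_id, priority), lower value = more urgent
        # paths:   list of (path_id, est_bandwidth_mbps)
        ranked_paths = sorted(paths, key=lambda p: p[1], reverse=True)
        assignment = {}
        for i, (stream_id, _prio) in enumerate(sorted(streams, key=lambda s: s[1])):
            path_id, _bw = ranked_paths[i % len(ranked_paths)]
            assignment[stream_id] = path_id
        return assignment

    def needs_splitting(stream_rate_mbps, paths):
        # The edge case above: a single stream (e.g. a high-bitrate video
        # stream) whose rate exceeds every individual path on its own.
        return all(stream_rate_mbps > bw for _pid, bw in paths)
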
>>
>> With best regards,
>> Robin
>>
>>
>>
>>
>> On Tue, 20 Jul 2021 at 09:15, Lars Eggert <l...@eggert.org> wrote:
>>
>>> On 2021-7-20, at 1:19, Roberto Peon <fenix=40fb....@dmarc.ietf.org>
>>> wrote:
>>> >
>>> > If we have to send data along a path in order to discover properties
>>> about that path, then sending less data on the path means discovering less
>>> about that path.
>>> >
>>> > The ideal would be to send *enough* data on any one path to maintain
>>> an understanding of its characteristics (including variance), and no more
>>> than that, and then to schedule the rest of the data to whichever path(s)
>>> are best at the moment.
>>>
>>> ^^^ This.
>>>
>>> Because the Internet has no explicit network-to-endpoint signaling, an
>>> endpoint must build its understanding of the properties of a path by
>>> exercising it, and specifically exercising it to a degree that causes
>>> queues to form (to obtain "under load" RTTs, see bufferbloat) and
>>> congestion loss to happen (to obtain an understanding of available path
>>> capacity). Some people have called this "putting pressure on a path".
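
A toy version of Roberto's "enough and no more" idea under that
constraint, with all names and numbers invented: reserve a small per-path
allowance so every path keeps being exercised, and give the remainder to
whichever path currently looks best.

    def schedule_bytes(total_bytes, path_scores, probe_bytes=4096):
        # path_scores: dict of path_id -> current estimate of how good the
        # path is (e.g. delivery rate). probe_bytes is the minimum kept on
        # every path so its RTT/loss/capacity estimates stay fresh.
        plan = {}
        remaining = total_bytes
        for path_id in path_scores:
            share = min(probe_bytes, remaining)
            plan[path_id] = share
            remaining -= share
        if remaining > 0:
            best = max(path_scores, key=path_scores.get)
            plan[best] += remaining
        return plan

    # schedule_bytes(100_000, {"wifi": 0.9, "lte": 0.4})
    # -> {"wifi": 95904, "lte": 4096}
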
>>>
>>> There has been a long-standing assumption that if you exercised a path
>>> in the (recent) past you can probably assume that the properties haven't
>>> changed much if you want to start exercising it again. This is why
>>> heuristics like caching path properties (RTTs, etc.) are often of benefit -
>>> often, but not always, and maybe never in some scenarios (e.g.,
>>> overcommitted CGNs).
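
A cached-path-properties heuristic along those lines might look like the
following sketch; the TTL, field names, and staleness rule are
placeholders, not a recommendation.

    import time

    class PathPropertyCache:
        def __init__(self, ttl_seconds=60.0):
            self.ttl = ttl_seconds
            self.entries = {}   # path_id -> (timestamp, properties dict)

        def store(self, path_id, properties):
            self.entries[path_id] = (time.monotonic(), properties)

        def lookup(self, path_id):
            entry = self.entries.get(path_id)
            if entry is None:
                return None
            stored_at, properties = entry
            if time.monotonic() - stored_at > self.ttl:
                # Too old to trust (and, per the caveat above, possibly
                # never trustworthy behind an overcommitted CGN).
                return None
            return properties
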
>>>
>>> There has been some work on this in the past for MPTCP. For example, on
>>> mobile devices - which most often have multiple possible paths to a
>>> destination via WiFi and cellular - exercising multiple paths comes with a
>>> distinct increase in energy usage. So you need a heuristic to determine if
>>> the potential benefit of going multipath is worth the energy cost of
>>> probing multiple paths before you do so.
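
A toy version of such a heuristic, with placeholder thresholds:

    def worth_probing_second_path(current_path_est_mbps, required_mbps,
                                  battery_fraction, min_battery=0.2):
        # Only wake the second radio if the path we already have cannot
        # meet the application's requirement and the battery can afford
        # the extra energy of probing another path.
        if battery_fraction < min_battery:
            return False
        return current_path_est_mbps < required_mbps
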
>>>
>>> Thanks,
>>> Lars
>>>
>>>
>>
>> --
>>
>> dr. Robin Marx
>> Postdoc researcher - Web protocols
>> Expertise centre for Digital Media
>>
>> *Cellphone *+32(0)497 72 86 94
>>
>> www.uhasselt.be
>> Universiteit Hasselt - Campus Diepenbeek
>> Agoralaan Gebouw D - B-3590 Diepenbeek
>> Kantoor EDM-2.05
>>
>>
>>
