Hello David and others,

ATSSS supports low-latency applications in the following way (this is a quote 
from 23.700-93), as part of its “Priority-based” steering mode:

“RTT Threshold: Supported by using the RTT estimation mechanism defined in 
draft-ietf-quic-recovery, with which the QUIC protocol can estimate the RTT of 
a QUIC connection. Example of ATSSS rule using this steering mode: "Send the 
traffic of App1 to the access with RTT < 100ms; if both accesses have RTT < 
100ms, send it to non-3GPP access". … As long as the selected access has RTT < 
100ms, the traffic can remain on this access.”

I hope this clarifies that ATSSS supports what was called the Interactive 
service/Siri service in Christoph’s presentation.

And as Mirja explained earlier, it is the application that requests what kind 
of session it would like to have.

Best regards
Hannu



From: QUIC <[email protected]> On Behalf Of Florin Baboescu
Sent: Saturday, October 24, 2020 5:54 AM
To: David Schinazi <[email protected]>; Mirja Kuehlewind 
<[email protected]>
Cc: [email protected]
Subject: RE: More context on ATSSS use case

Hi David,

I see that you are repeating a statement with which I definitely cannot agree: 
“I'm noticing a pattern where no one is able to explain how this will improve 
the end-user experience though, so I'm going to assume that this is beneficial 
for carriers and not end-users.” So I’ll give it a try. At first I thought it 
was not necessary, as there were already some great presentations by 
Christoph, Olivier and the folks from Alibaba which should have provided you 
with very good reasoning on the benefits for the end user. I am not going to 
go through what they presented again.

We also had one slide in our presentation, which may have been overlooked, 
detailing at least three elements through which a multipath access solution 
may improve the overall Quality of Experience for the end user:

  1.  Increased capacity
  2.  Increased coverage
  3.  Increased reliability

Let’s assume for simplicity a user who is charged for the amount of data 
he/she uses over a cellular network (licensed spectrum), while the data 
exchanged over WiFi is not charged. While the user is under good WiFi 
coverage, all traffic is routed over WiFi and no data traffic goes over 
cellular. However, when the user is in an area of limited coverage, or the 
level of interference reaches a certain threshold, the quality of the 
communication over the WiFi access degrades. As a result the achievable 
throughput over WiFi may fall below a certain threshold, and the WiFi access 
may no longer be able to sustain the throughput the user requires. The user 
may either switch over to cellular (paying a higher penalty) or use both 
accesses, WiFi and cellular. When both accesses are used, all the traffic up 
to a maximum threshold goes over the WiFi access, while all the leftover 
traffic goes over the cellular access.
In total for this example there are the following cases:

User entirely under the WiFi coverage

User entirely under the cellular coverage (no WiFi coverage)

User under both WiFi and cellular coverage

This solution essentially increases the coverage area for the user, 
complementing the use of WiFi with cellular in zones of poor or no WiFi 
coverage. Without it the user would have been left without data access in 
areas of no WiFi coverage, or with a high error rate and limited throughput in 
areas of poor WiFi coverage or high interference. The solution also increases 
reliability, allowing the user to back up the primary access (in this case 
WiFi) with a secondary access (cellular).
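
To make the splitting behaviour above concrete, here is a purely illustrative 
Python sketch of the overflow rule (WiFi carries traffic up to a cap, cellular 
carries the leftover); the names and numbers are invented for the example:

def split_load(offered_mbps, wifi_cap_mbps):
    """Return (wifi_share, cellular_share) in Mbps."""
    wifi_share = min(offered_mbps, wifi_cap_mbps)   # fill WiFi first
    cellular_share = offered_mbps - wifi_share      # overflow goes to cellular
    return wifi_share, cellular_share

# Good WiFi coverage: everything stays on WiFi.
print(split_load(20.0, 50.0))   # (20.0, 0.0)
# Degraded WiFi (cap drops to 8 Mbps): 8 Mbps stays on WiFi, 12 Mbps overflows.
print(split_load(20.0, 8.0))    # (8.0, 12.0)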

On a side note I will also try to answer a different question: is the 
bandwidth aggregation solution always useful? Based on contributions from 
various companies in 3GPP, it was observed that there is no benefit for the 
user in doing bandwidth aggregation once the throughput ratio between the two 
accesses exceeds somewhere between (3-5):1.
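
As a rough sketch of that rule of thumb (the exact cut-off and the names are 
my own, purely for illustration):

def aggregation_useful(fast_mbps, slow_mbps, max_ratio=4.0):
    # Once the faster access is roughly 3-5x faster than the slower one,
    # the slower access adds little capacity and mostly adds scheduling cost.
    if slow_mbps <= 0:
        return False
    return (fast_mbps / slow_mbps) <= max_ratio

print(aggregation_useful(100.0, 50.0))   # True: 2:1, aggregation still helps
print(aggregation_useful(100.0, 10.0))   # False: 10:1, little benefit left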

Another interesting use case addresses one of the limitations of WiFi (before 
WiFi 6, which uses OFDMA-based access). As many of you know, in WiFi a user 
can transmit only after it detects that no one else is transmitting at the 
same time. Because of this, when the number of users served by the same access 
point increases, the quality of the access decreases, as all the users compete 
for the same medium. In this case the end user may use the WiFi access for all 
the downlink traffic while using the cellular access for the uplink traffic. 
This use case improves both downlink and uplink capacity for the end user, as 
well as reliability.
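
A minimal, purely illustrative sketch of that per-direction steering (the 
names are invented):

def access_for_packet(direction, wifi_contended):
    """direction is 'uplink' or 'downlink'."""
    # Move uplink off WiFi when the medium is contended, so the device does
    # not have to wait for an idle channel before transmitting; downlink
    # keeps using the WiFi capacity.
    if direction == "uplink" and wifi_contended:
        return "cellular"
    return "wifi"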

These are just a few examples which try to show the benefits of bringing a 
multipath solution into the toolbox, for the end user as well as for network 
elements/functions. I hope this brings some more clarity.
Regards,
-Florin



From: QUIC [mailto:[email protected]] On Behalf Of David Schinazi
Sent: Friday, October 23, 2020 6:12 PM
To: Mirja Kuehlewind <[email protected]>
Cc: [email protected]
Subject: Re: More context on ATSSS use case

Hi Mirja,

I understand how in some scenarios this could increase throughput.
However, can you clarify how this could improve latency?

I'm noticing a pattern where no one is able to explain how this will
improve the end-user experience though, so I'm going to assume
that this is beneficial for carriers and not end-users. Unfortunately
I don't have the time to go to 3GPP and do this research myself.

David

On Fri, Oct 23, 2020 at 6:07 PM Mirja Kuehlewind 
<[email protected]> wrote:
Hi David,

This depends on the actual use case. Using multipath in a MASQUE-like proxy 
setup covers multiple scenarios: in the hybrid access scenario it’s 
throughput, in other cases it can be latency, or a cheaper data subscription. 
That’s what I tried to explain below.

However, the whole point of ATSSS, as well as of the other use cases, is to 
provide the (mobile) operator’s customer/the user better performance than what 
you have right now when using only a single path, by actually making use of 
currently unused resources. We can argue about the best way to achieve that, 
but you probably need to go to 3GPP and have that discussion there. I was 
mainly trying to explain what ATSSS is, what the motivation is, and what the 
requirements are.

Mirja



From: David Schinazi <[email protected]>
Date: Friday, 23. October 2020 at 23:08
To: Mirja Kuehlewind <[email protected]>
Cc: "[email protected]" <[email protected]>
Subject: Re: More context on ATSSS use case

Hi Mirja,

Can you clarify what you mean by "optimize resource usage and
therefore also the performance for the user"?
1) What does it mean in networking terms (latency, throughput, etc.)?
2) What does it mean in end-user terms (video loads faster, etc.)?

Thanks,
David

On Fri, Oct 23, 2020 at 12:45 PM Mirja Kuehlewind 
<[email protected]> wrote:
Hi all,

Based on the discussion yesterday I would like to provide some more context 
for the ATSSS use case, and some notes that probably also apply to other 
proxy-based use cases.

First of all, I would like to clearly note that it's the client (UE) that has 
to request ATSSS support (a Multi-Access (MA) PDU session) when connecting to 
the mobile network, and it's also the client that starts the QUIC connection 
to the proxy (hosted in the UPF). Further, for each connection that the client 
starts to some target content server, it can again decide whether or not to 
use the ATSSS setup (by otherwise connecting to the server over a single-PDU, 
mobile-network-only session). That means the endpoint can locally decide if it 
wants to use only the mobile link for certain connections instead of any kind 
of ATSSS service. However, that decision will likely not only depend on the 
application characteristics but also on e.g. the data subscription, user 
preferences, or device status.
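
Just to illustrate the kind of local, per-connection decision I mean, here is 
a toy Python sketch; the inputs and the policy are made up and not part of any 
ATSSS specification:

def use_atsss_for_connection(latency_sensitive, cellular_is_metered,
                             battery_low):
    # The alternative is a plain single-PDU, mobile-network-only session.
    if battery_low:
        return False               # keep only one radio active
    if latency_sensitive or cellular_is_metered:
        return True                # let the scheduler steer per flow to the
                                   # faster or cheaper access
    return False                   # otherwise a mobile-only session is enough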

And that brings me to another point: the right scheduling for the use of 
multiple paths does not only depend on the application characteristics. It 
also depends on the network conditions of each link, which to some extent can 
be measured in the transport if traffic is sent on both/all links, as well as 
on other factors such as user tariff, remaining data volume, or battery 
status. Yes, this doesn't make the problem easier, but we also don't need to 
solve this problem in a general way. For each of the proxy-based use cases 
presented yesterday there is a specific network setup with specific 
characteristics and goals. And often the two links have quite different but 
known characteristics, which makes the decision easier.

For the hybrid access case, you have one DSL and one mobile link, and 
multipath is used for bandwidth aggregation. This setup is usually deployed 
when the physical line that is serving the DSL doesn't provide sufficient 
bandwidth and in certain areas upgrading those links would be very costly. In 
this case the scheduling is clear: you always fill up the DSL first and only 
use the mobile link when the DSL capacity is exhausted; this can happen for 
e.g. high-quality video streaming. In that case the mobile link usually has a 
higher latency and you might need to wait a few more seconds before your video 
starts, but I guess that's better than watching the video in low quality.
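
For what it's worth, a minimal sketch of what that "fill DSL first" scheduling 
could look like at the packet level; the Path structure and its fields are 
invented for illustration:

from dataclasses import dataclass

@dataclass
class Path:
    name: str
    cwnd_bytes: int        # congestion window for this path
    bytes_in_flight: int

    def has_room(self, pkt_len):
        return self.bytes_in_flight + pkt_len <= self.cwnd_bytes

def pick_path(dsl, mobile, pkt_len):
    # Always prefer DSL; only spill onto the mobile link once DSL is full.
    if dsl.has_room(pkt_len):
        return dsl
    if mobile.has_room(pkt_len):
        return mobile
    return None  # both paths are congestion-limited, wait for ACKs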

For ATSSS you always have one 3GPP mobile link and one non-3GPP link, usually 
wifi. And as I said in the chat yesterday, ATSSS will probably get deployed 
first with managed wifi networks, such as those already offered today by 
mobile operators in certain countries. ATSSS also provides a small number of 
so-called "steering modes" which impact the scheduling used, as presented by 
Spencer yesterday. These modes are provided by the network to the client (on 
the UE) as well as to the proxy (hosted in the UPF), and both tunnel endpoints 
decide independently which scheduling to use.

There are different scenarios for these different steering modes; however, 
it's rather a small set of options. When selecting these modes the network is 
able to take additional factors into account, such as subscriber data, 
operator configuration, or information provided by the application server, 
e.g. for cases where there is actually an SLA in place between the content 
provider and the network operator.

By default the scheduling could always prefer one link and only switch over 
when the performance is not sufficient anymore, e.g. when the selected network 
gets loaded. While you can measure the network characteristics, and ATSSS will 
also rely on measured characteristics when deciding which path to use, the 
operator of the mobile and wifi networks might actually have some additional 
knowledge about the current network load (number of connected users, total 
traffic volume). Further, both the UE and the UPF in the mobile network might 
actually have a better view of what's happening on the local link than the far 
end where the content server sits, e.g. knowing that a user is moving out of 
coverage. As such the network could for example provide a priority for one 
path when signaling the steering mode, and may also indicate certain threshold 
values that could be used to make a switching decision. However, for most 
flows it might be even simpler than that, and probably some kind of default 
mode will be used, e.g. based on lowest delay, assuming that delay increases 
when one link gets congested.
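
To make the signaling idea a bit more tangible, here is a toy sketch of how a 
network-provided priority plus threshold could drive the local switching 
decision; the structure and field names are made up, not taken from the ATSSS 
specification:

from dataclasses import dataclass

@dataclass
class SteeringMode:
    preferred: str           # e.g. "wifi"
    fallback: str            # e.g. "3gpp"
    rtt_threshold_ms: float  # threshold value signaled by the network

def select_path(mode, measured_rtt_ms):
    # Stay on the preferred path while it meets the signaled threshold;
    # otherwise fall back, assuming delay rises when the link gets congested.
    if measured_rtt_ms[mode.preferred] <= mode.rtt_threshold_ms:
        return mode.preferred
    return mode.fallback

# Example: the network prefers wifi with a 50 ms threshold.
mode = SteeringMode(preferred="wifi", fallback="3gpp", rtt_threshold_ms=50.0)
print(select_path(mode, {"wifi": 30.0, "3gpp": 45.0}))   # wifi
print(select_path(mode, {"wifi": 80.0, "3gpp": 45.0}))   # 3gpp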

Another scenario is that a user might choose a cheaper tariff where as much as 
possible of the downlink traffic is off-loaded to wifi. This needs to be 
implemented based on the scheduling in the UPF sitting in the mobile network. 
Further, as the steering modes are provided at a per-flow level, another 
example scenario is that bandwidth aggregation is requested for a certain 
traffic flow based on an existing SLA.

Please note that in any of these setups there are multiple e2e connections 
that use the same QUIC tunnel and, as just noted, each flow can have a 
different steering mode assigned. This is why simultaneous use of both paths 
is especially important for proxy-based use cases.
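
As a toy illustration of that per-flow point (flow and mode names are 
invented): several end-to-end flows share one proxied QUIC tunnel, but each 
carries its own steering mode, so the tunnel endpoints must be able to use 
both paths at the same time:

flow_steering_modes = {
    "video-flow (SLA)": "bandwidth-aggregation",  # needs both paths at once
    "voice-flow":       "smallest-delay",         # picks whichever path is faster
    "background-flow":  "wifi-preferred",         # stays on wifi while usable
}

def tunnel_needs_both_paths(modes):
    # A single flow requesting aggregation is enough to require that the
    # tunnel can send on wifi and 3GPP simultaneously.
    return any(m == "bandwidth-aggregation" for m in modes.values())

print(tunnel_needs_both_paths(flow_steering_modes))   # True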

All these scenarios benefit from knowledge about the local network conditions 
to optimize resource usage and therefore also the performance for the user.

Hope that helps,
Mirja
