On Mon, 26 Sep 2022, Eugene Y Chang wrote:

On Sep 26, 2022, at 11:01 AM, Sebastian Moeller <[email protected]> wrote:

Hi Eugene,


On Sep 26, 2022, at 22:54, Eugene Y Chang via Starlink <[email protected]> wrote:

Ok, we are getting into the details. I agree.

Every node in the path has to implement this to be effective.

Amazingly, the biggest bang for the buck comes from fixing those nodes that actually contain a network path's bottleneck. Often these are pretty stable. So yes, for fully guaranteed service quality all nodes would need to participate, but for improving things noticeably it is sufficient to improve the usual bottlenecks; e.g., for many internet access links the home gateway is a decent point to implement better buffer management. (In short, the problem is over-sized and under-managed buffers, and one of the best solutions is better/smarter buffer management.)
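To put rough numbers on that (invented figures, purely for illustration), here is a quick Python sketch of the queueing delay an unmanaged buffer adds at a bottleneck, next to the ~5 ms standing-queue target that CoDel-style AQMs aim for:

# Queueing delay added by a full, unmanaged buffer at the bottleneck link.
def queueing_delay_ms(backlog_bytes: int, link_rate_bps: int) -> float:
    return backlog_bytes * 8 / link_rate_bps * 1000

backlog = 1 * 1024 * 1024   # e.g. a 1 MiB device buffer filled during congestion (invented)
uplink  = 20_000_000        # e.g. a 20 Mbit/s uplink (invented)

print(f"unmanaged FIFO: ~{queueing_delay_ms(backlog, uplink):.0f} ms of added delay")
print("CoDel/cake aim: ~5 ms of standing queue at the same link rate")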


This is not completely true. Say the bottleneck is at node N. During the period of congestion, the upstream node N-1 will have to buffer. When node N recovers, the backlog at N-1 will keep blocking traffic until it drains, and so on up the path. Making node N better will reduce the extent of the backup at N-1, but N-1 should implement the better code too.

only if node N and node N-1 handle the same traffic with the same link speeds. In practice this is almost never the case.

Until you get to gigabit last-mile links, the last mile is almost always the bottleneck from both sides, so implementing cake on the home router makes a huge improvement (and if you can get it on the last-mile ISP router, even better). Once you get into the Internet fabric, bottlenecks are fairly rare; they do happen, but ISPs carefully watch for them and add additional paths and/or increase bandwidth to avoid them.
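To illustrate with made-up numbers why the queue piles up at the slow hop and not upstream: backlog only grows where arrivals exceed the egress rate, so a fast core hop feeding a slow access link barely queues at all.

# Toy model (invented numbers): backlog growth at a node is arrival rate
# minus egress rate, floored at zero.
def backlog_growth_mbps(arrival_mbps: float, egress_mbps: float) -> float:
    return max(0.0, arrival_mbps - egress_mbps)

offered_load = 60      # Mbit/s heading toward one subscriber
core_hop     = 10_000  # Mbit/s egress at an upstream node N-1
last_mile    = 50      # Mbit/s egress at the access link, node N

print("queue growth at N-1:", backlog_growth_mbps(offered_load, core_hop), "Mbit/s")   # 0.0
print("queue growth at N  :", backlog_growth_mbps(offered_load, last_mile), "Mbit/s")  # 10.0
# Only node N queues here, which is why smarter buffer management at the
# last-mile gear (or a home router shaping just below that rate) captures
# most of the benefit.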

David Lang


In fact, every node in the path has to have the same prioritization or the 
scheme becomes ineffective.

        Yes and no. One of the clearest winners has been flow queueing, IMHO not because it is the optimal capacity-sharing scheme, but because it is the least pessimal one, allowing all flows (or none) to make forward progress. You can interpret that as a scheme in which flows below their capacity share are prioritized, but I am not sure that is the best way to look at these things.
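One way to picture that "least pessimal" sharing is max-min fairness, which flow queueing approximates: flows asking for less than an equal share get everything they ask for, and only the heavy flows are throttled. A rough Python sketch with invented demands:

# Max-min fair allocation sketch (illustrative only): flows below the fair
# share keep their full demand; the remainder is split among the heavy flows.
def max_min_fair(capacity: float, demands: dict) -> dict:
    alloc, remaining, left = {}, dict(demands), capacity
    while remaining:
        share = left / len(remaining)
        bounded = {f: d for f, d in remaining.items() if d <= share}
        if not bounded:                        # every remaining flow wants >= share
            return alloc | {f: share for f in remaining}
        for f, d in bounded.items():
            alloc[f] = d
            left -= d
            del remaining[f]
    return alloc

# 100 Mbit/s link: a video call and a DNS lookup ask for little,
# two bulk downloads ask for "everything" (all numbers invented).
print(max_min_fair(100, {"video_call": 3, "dns": 0.1, "bulk_a": 1000, "bulk_b": 1000}))
# -> video_call and dns get exactly what they asked for; the bulk flows split the rest.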

The hardest part is getting competing ISPs to implement and coordinate. Fixing bufferbloat across handoffs between ISPs will be hard. The only way to fix this is to get the unwashed public to care. Then they can say “we don’t care about the technical issues, just fix it.” Until then …..




Regards
        Sebastian



Gene
----------------------------------------------
Eugene Chang
IEEE Senior Life Member
[email protected]
781-799-0233 (in Honolulu)



On Sep 26, 2022, at 10:48 AM, David Lang <[email protected]> wrote:

software updates can do far more than just improve recovery.

In practice, large data transfers are less sensitive to latency than smaller data transfers (e.g. downloading a CD image vs a video conference), so software can ensure better fairness by preventing a bulk transfer from hurting the more latency-sensitive transfers.

(the example below is not completely accurate, but I think it gets the point 
across)

When buffers become excessively large, you have the situation where a video call generates a small amount of data at a regular interval, but a bulk data transfer is able to dump a huge amount of data into the buffer instantly.

If you just do FIFO, then you get a small chunk of video call, then several seconds' worth of CD transfer, followed by the next small chunk of the video call.

But the software can prevent the one app from hogging so much of the connection and let the chunk of video call in sooner, avoiding the impact to the real-time traffic. Historically this has required the admin to classify all traffic and configure equipment to implement different treatment based on the classification (and this requires trust in the classification process). The bufferbloat team has developed options (fq_codel and cake) that can ensure fairness between applications/servers with little or no configuration, and no trust in other systems to properly classify their traffic.
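A toy Python sketch of that difference (invented packet sizes and rates, and nothing like the real fq_codel/cake algorithms, but it shows the shape of the win):

# A bulk flow has already dumped a burst into the buffer; then one
# video-call packet arrives. Compare its wait under FIFO vs. simple
# per-flow round-robin (flow isolation, greatly simplified).
LINK_BPS = 20_000_000            # 20 Mbit/s bottleneck (invented)
PKT_BITS = 1500 * 8              # one full-size packet

queue = ["bulk"] * 400 + ["call"]   # ~600 kB of bulk data queued ahead

def fifo_wait_ms(q):
    # FIFO: the call packet waits behind everything queued before it.
    return q.index("call") * PKT_BITS / LINK_BPS * 1000

def fq_wait_ms():
    # Round-robin between the two flows: the call packet goes out after at
    # most one bulk packet already on the wire.
    return 1 * PKT_BITS / LINK_BPS * 1000

print(f"FIFO: video-call packet waits ~{fifo_wait_ms(queue):.0f} ms")   # ~240 ms
print(f"FQ  : video-call packet waits ~{fq_wait_ms():.1f} ms")          # ~0.6 ms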

The one thing that cake needs to work really well is to know what the available data rate is. With Starlink, this changes frequently, and cake integrated into the Starlink dish/router software would be far better than anything that can be done externally, as the rate changes could be fed directly into the settings (currently they can only be indirectly detected).
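The external workaround today looks roughly like the sketch below (the tc/cake "bandwidth" knob is real; estimate_capacity_mbit() is a hypothetical stand-in for whatever rate signal the dish could expose directly, and "eth0" plus an already-installed cake qdisc are assumptions):

# Periodically re-tell an already-installed cake qdisc what the link can
# currently carry, shaping slightly below it so the queue forms where cake
# can manage it. Assumes cake is already the root qdisc on the WAN interface.
import subprocess
import time

WAN_IF = "eth0"                  # assumed interface name

def estimate_capacity_mbit() -> float:
    # Hypothetical placeholder: in practice this would come from active or
    # passive measurement; inside the dish/router firmware the real rate
    # would simply be known.
    return 40.0                  # invented figure

def apply_shaper_rate(mbit: float) -> None:
    rate = max(1.0, mbit * 0.9)  # shape ~10% below the estimate
    subprocess.run(
        ["tc", "qdisc", "change", "dev", WAN_IF, "root",
         "cake", "bandwidth", f"{rate:.0f}Mbit"],
        check=True,
    )

if __name__ == "__main__":
    while True:
        apply_shaper_rate(estimate_capacity_mbit())
        time.sleep(5)            # Starlink's capacity can shift faster than this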

David Lang


On Mon, 26 Sep 2022, Eugene Y Chang via Starlink wrote:

You already know this. Bufferbloat is a symptom and not the cause. Bufferbloat 
grows when there are (1) periods of low or no bandwidth or (2) periods of 
insufficient bandwidth (aka network congestion).

If I understand this correctly, just a software update cannot make bufferbloat 
go away. It might improve the speed of recovery (e.g. throw away all time-sensitive UDP messages).

Gene
----------------------------------------------
Eugene Chang
IEEE Senior Life Member
[email protected]
781-799-0233 (in Honolulu)



On Sep 26, 2022, at 10:04 AM, Bruce Perens <[email protected]> wrote:

Please help to explain. Here's a draft to start with:

Starlink Performance Not Sufficient for Military Applications, Say Scientists

The problem is not availability: Starlink works where nothing but another satellite network would. It's not bandwidth, although others have questions about sustaining bandwidth as the customer base grows. It's latency and jitter. As load increases, latency, the time it takes for a packet to get through, increases more than it should. The scientists who have fought bufferbloat, a major cause of latency on the internet, know why. SpaceX needs to upgrade their system to use the scientists' Open Source modifications to Linux to fight bufferbloat, and thus reduce latency. This is mostly just using a newer version, but there are some tunable parameters. Jitter, a variation in the time it takes to get a packet through the network during a connection, is inevitable in satellite networks, but it will be improved by making use of the bufferbloat-fighting software, and probably by the addition of more satellites.

"We've done all of the work; SpaceX just needs to adopt it by upgrading their software," said scientist Dave Taht. Jim Gettys, Taht's collaborator and creator of the X Window System, chimed in: <fill in here please>
Open Source luminary Bruce Perens said: "Sometimes Starlink's latency and jitter make it inadequate to remote-control my ham radio station. But the military is experimenting with remote control of vehicles on the battlefield and other applications that can be demonstrated, but won't happen at scale without adoption of bufferbloat-fighting strategies."

On Mon, Sep 26, 2022 at 12:59 PM Eugene Chang <[email protected]> wrote:
The key issue is that most people don’t understand why latency matters. They don’t see it or feel its impact.

First, we have to help people see the symptoms of latency and how it impacts something they care about.
- Gamers care, but most people may think gaming is frivolous.
- Musicians care, but that is mostly for a hobby.
- Businesses should care because of productivity, but they don’t know how to “see” the impact.

Second, there needs to be an “OMG, I have been seeing the effects of latency all this time and never knew it! I was being shafted” moment. Once you have this awakening, you can get all the press you want for free.

Most of the time when business apps are developed, “we” hide the impact of poor performance (aka latency), or the developers shy away from the discussion because they don’t have a way to fix the latency. Maybe businesses don’t care because any employees affected are just considered poor performers. (In bad economic times, the poor performers are just laid off.) For employees, if they happen to be at a location with bad latency, they don’t know that latency is hurting them. Unfair, but most people don’t know the issue is latency.

Talking and explaining why latency is bad is not as effective as showing why latency is bad. Showing has to be done with something that has a personal impact.

Gene
-----------------------------------
Eugene Chang
[email protected]
+1-781-799-0233 (in Honolulu)





On Sep 26, 2022, at 6:32 AM, Bruce Perens via Starlink <[email protected]> wrote:

If you want to get attention, you can get it for free. I can place articles with various press if there is something interesting to say. I did this all through the evangelism of Open Source. All we need to do is write, sign, and publish a statement. What they actually write is less relevant if they publish a link to our statement.

Right now I am concerned that the Starlink latency and jitter are going to be a problem even for remote-controlling my ham station. The US Military is interested in doing much more, which they have demonstrated, but I don't see that happening at scale without some technical work on the network. Being able to say this isn't ready for the government's application would be an attention-getter.

  Thanks

  Bruce

On Mon, Sep 26, 2022 at 9:21 AM Dave Taht via Starlink <[email protected]> wrote:
These days, if you want attention, you gotta buy it. A $50k half-page
ad in the WaPo or NYT riffing off of "It's the latency, Stupid!",
signed by the kinds of luminaries we got for the FCC wifi fight, would
go a long way towards shifting the tide.

On Mon, Sep 26, 2022 at 8:29 AM Dave Taht <[email protected]> wrote:

On Mon, Sep 26, 2022 at 8:20 AM Livingood, Jason <[email protected]> wrote:

Awareness & understanding of latency & its impact on QoE are nearly nonexistent among reporters. IMO maybe there should be some kind of background briefing for reporters - maybe like a simple YouTube video explainer that is short & high level & visual? Otherwise reporters will just continue to focus on what they know...

That's a great idea. I have visions of crashing the Washington correspondents' dinner, but perhaps there is some set of gatherings journalists regularly attend?


On 9/21/22, 14:35, "Starlink on behalf of Dave Taht via Starlink" <[email protected] on behalf of [email protected]> wrote:

  I still find it remarkable that reporters are missing the
  meaning of the huge latencies for Starlink under load.




--
FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
Dave Täht CEO, TekLibre, LLC



--
FQ World Domination pending: https://blog.cerowrt.org/post/state_of_fq_codel/
Dave Täht CEO, TekLibre, LLC


--
Bruce Perens K6BP



--
Bruce Perens K6BP

