On Thursday, June 11, 2020 2:56pm, "David Lang" <da...@lang.hm> said:



> We will see, but since the answer to satellite-satellite communication being 
> the
> bottleneck is to launch more satellites, this boils down to investment vs
> service quality. Since they are making a big deal about the latency, I expect
> them to work to keep it acceptable.


We'll see. I should have mentioned that the AT&T network actually had adequate 
capacity, as did Comcast's network when it was bloated like crazy (as Jim 
Gettys will verify).
 
As I have said way too often - the problem isn't throughput-related, and can't 
be measured by achieved throughput, nor can it be controlled by adding capacity 
alone.
 
The problem is the lack of congestion signalling that can stop the *source* 
from sending more than its share.
 
That's all that matters. I see bufferbloat in 10-100 GigE datacenter networks 
quite frequently (esp. Arista switches!). Some think that "fat pipes" solve 
this problem. They don't. Some think priority eliminates the problem. It 
doesn't, unless there is congestion signalling in operation.
 
Yes, using a single end-to-end TCP connection you can get 100% throughput on an 
unloaded network. The problem isn't the hardware at all. It's the switching 
logic that just builds up queues till they are intolerably long, at which point 
the queues cannot drain, so they stay full as long as the load remains.
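
To make that concrete, here's a tiny toy simulation (nothing anyone shipped, 
just numbers I'm making up: a 1000 pkt/sec bottleneck with a 2000-packet 
"never lose a packet" buffer, offered 1500 pkts/sec). The queue grows, hits 
the buffer limit, and the standing delay then just sits there for as long as 
the load persists:

SERVICE_RATE = 1000   # pkts/sec the bottleneck can drain (assumed)
BUFFER_PKTS  = 2000   # buffer sized to hold 2 seconds at the bottleneck rate
OFFERED_RATE = 1500   # pkts/sec arriving while the load persists (assumed)
TICK = 0.01           # simulate in 10 ms steps

queue = 0.0
for step in range(1000):                                   # 10 simulated seconds
    queue = min(queue + OFFERED_RATE * TICK, BUFFER_PKTS)  # arrivals, tail-drop at the cap
    queue = max(queue - SERVICE_RATE * TICK, 0.0)          # what the bottleneck drains
    if step % 100 == 99:                                   # report once per simulated second
        print(f"t={(step + 1) * TICK:4.1f}s  queue={queue:6.0f} pkts  "
              f"standing delay={queue / SERVICE_RATE:4.2f}s")

Run it and the delay climbs to roughly 2 seconds and stays pinned there; the 
queue never drains, because nothing tells the senders to back off.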
 
In the iPhone case, when a page didn't download in a flash, what did users do? 
Well, they clicked on the link again. Underneath it all, all the packets that 
were stuffed into the pipe toward that user remained queued, and a whole lot 
more got shoved in. And the user kept hitting the button. If the queue holds 2 
seconds of data at the bottleneck rate, it stays full as long as users keep 
clicking on the link.
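
To put rough numbers on it (numbers I'm assuming for illustration: a 2 MB page, 
1500-byte packets, a 10 Mbit/s bottleneck, not anything measured back then):

PAGE_BYTES     = 2_000_000    # response pushed toward the phone per click (assumed)
PKT_BYTES      = 1500
BOTTLENECK_BPS = 10_000_000   # bottleneck rate (assumed)

pkts_per_click = PAGE_BYTES / PKT_BYTES
secs_per_click = PAGE_BYTES * 8 / BOTTLENECK_BPS   # time the bottleneck needs to drain one page

for clicks in (1, 2, 3):
    print(f"{clicks} click(s): ~{clicks * pkts_per_click:.0f} packets queued, "
          f"~{clicks * secs_per_click:.1f}s of data stuffed behind the bottleneck")

One request packet triggers well over a thousand response packets; a single 
click is already most of that 2-second queue, and every impatient re-click 
piles another pageful on top of it.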
 
You REALLY must think about this scenario, and get it in your mind that 
throughput doesn't eliminate congestion, especially when computers can do a lot 
of work on your behalf every time you ask them.
 
One request packet - thousands of response packets, and no one telling the 
sources that they should slow down.
 
For all of this, there is a known fix: don't queue more than 2 x RTT x 
"bottleneck rate" worth of packets in any switch anywhere. That's been in a 
best-practice RFC forever, and it is almost always ignored. Cake and other 
algorithms do even better, by queuing less than that in any bottleneck-adjacent 
queue.
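
As a back-of-the-envelope sketch of that rule (with example numbers I'm 
picking, 100 ms RTT and a 50 Mbit/s bottleneck, not figures from this thread):

RTT_S          = 0.100          # round-trip time in seconds (assumed)
BOTTLENECK_BPS = 50_000_000     # bottleneck rate in bits/sec (assumed)
MTU_BYTES      = 1500

cap_bytes   = 2 * RTT_S * BOTTLENECK_BPS / 8    # 2 x RTT x bottleneck rate
cap_pkts    = cap_bytes / MTU_BYTES
worst_delay = cap_bytes * 8 / BOTTLENECK_BPS    # queueing delay if that cap ever fills

print(f"queue cap: {cap_bytes / 1e6:.2f} MB (~{cap_pkts:.0f} MTU-sized packets)")
print(f"worst-case queueing delay at the cap: {worst_delay * 1000:.0f} ms")

That bounds the standing delay to a couple of RTTs instead of seconds; Cake and 
fq_codel go further still, holding per-flow sojourn times down to a few 
milliseconds.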
 
But instead, the known fix (known ever since the first screwed-up Frame Relay 
hops were set to never lose a packet) is deliberately ignored by hardware 
know-it-alls.
 