On 21. aug. 2014, at 10:30, David Lang <da...@lang.hm> wrote:

> On Thu, 21 Aug 2014, Michael Welzl wrote:
> 
>> On 21. aug. 2014, at 08:52, Eggert, Lars wrote:
>> 
>>> On 2014-8-21, at 0:05, Jim Gettys <j...@freedesktop.org> wrote:
>>>> And what kinds of AP's?  All the 1G guarantees you is that your 
>>>> bottleneck is in the wifi hop, and they can suffer as badly as anything 
>>>> else (particularly consumer home routers).
>>>> The reason why 802.11 works ok at IETF and NANOG is that:
>>>> o) they use Cisco enterprise AP's, which are not badly over buffered.
>> 
>> I'd like to better understand this particular bloat problem:
>> 
>> 100s of senders try to send at the same time. They can't all do that, so 
>> their cards retry a fixed number of times (10 or something, I don't 
>> remember, probably configurable), for which they need to have a buffer.
>> 
>> Say, the buffer is too big. Say, we make it smaller. Then an 802.11 sender 
>> trying to get its time slot in a crowded network will have to drop a packet, 
>> requiring the TCP sender to retransmit the packet instead. The TCP sender 
>> will think it's congestion (not entirely wrong) and reduce its window (not 
>> entirely wrong either). How appropriate TCP's cwnd reduction is probably 
>> depends on how "true" the notion of congestion is ... i.e. if I can buffer 
>> only one packet and just don't get to send it, or it gets a CRC error 
>> ("collides" in the air), then that can be seen as a pure matter of luck. 
>> Then I provoke a sender reaction that's like the old story of TCP 
>> mis-interpreting random losses as a sign of congestion. I think in most 
>> practical systems this old story is now a myth because wireless equipment 
>> will try to buffer data for a relatively long time instead of exhibiting 
>> sporadic random drops to upper layers. That is, in principle, a good thing - 
>> but buffering too much has of course all the problems that we know. Not an 
>> easy trade-off at all I think.
> 
> in this case the loss is a direct sign of congestion.

"this case" - I talk about different buffer lengths. E.g., take the minimal 
buffer that would just function, and set retransmissions to 0. Then, a packet 
loss is a pretty random matter - just because you and I contended, doesn't mean 
that the net is truly "overloaded" ?   So my point is that the buffer creates a 
continuum from "random loss" to "actual congestion" - we want loss to mean 
"actual congestion", but how large should it be to meaningfully convey that?


> remember that TCP was developed back in the days of 10base2 networks where 
> everyone on the network was sharing a wire and it was very possible for 
> multiple senders to start transmitting on the wire at the same time, just 
> like with radio.

Cable or wireless: is one such occurrence "congestion"? I.e., is halving the 
cwnd really the right response to that sort of "congestion" (contention, 
really)?


> A large part of the problem with high-density wifi is that it just wasn't 
> designed for that sort of environment, and there are a lot of things that it 
> does that work great for low-density, weak signal environments, but just make 
> the problem worse for high-density environments
> 
> batching packets together
> slowing down the transmit speed if you aren't getting through

Well... this *should* only happen when there's actual physical signal-quality 
degradation, not just collisions. At least minstrel does quite a good job of 
ensuring that, most of the time.
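
Roughly the idea, as I understand it (a toy sketch, not the real minstrel - 
the real thing also keeps retry chains and does lookaround sampling): track a 
per-rate EWMA of delivery probability and pick the rate with the best 
expected throughput. Collisions hurt all rates about equally, so they 
shouldn't drag the chosen rate down; genuine signal degradation hits the high 
rates specifically.

    RATES_MBPS = [6, 12, 24, 54]

    class ToyMinstrel:
        def __init__(self, ewma=0.25):
            self.prob = {r: 1.0 for r in RATES_MBPS}   # per-rate EWMA
            self.ewma = ewma

        def update(self, rate, success):
            # success is 1.0 for a delivered frame, 0.0 for a failure
            old = self.prob[rate]
            self.prob[rate] = (1 - self.ewma) * old + self.ewma * success

        def pick(self):
            # expected throughput = rate * delivery probability
            return max(RATES_MBPS, key=lambda r: r * self.prob[r])

    m = ToyMinstrel()
    m.update(54, 0.0)   # one collision at 54 Mbit/s...
    m.update(54, 1.0)   # ...then a success: 54 stays attractive
    print(m.pick())     # -> 54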


> retries of packets that the OS has given up on (including the user has closed 
> the app that sent them)
> 
> Ideally we want the wifi layer to be just like the wired layer, buffer only 
> what's needed to get it on the air without 'dead air' (where the driver is 
> waiting for the OS to give it more data), at that point, we can do the 
> retries from the OS as appropriate.
> 
>> I have two questions: 1) is my characterization roughly correct? 2) have 
>> people investigated the downsides (negative effect on TCP) of buffering *too 
>> little* in wireless equipment? (I suspect so?)  Finding where "too little" 
>> begins could give us a better idea of what the ideal buffer length should 
>> really be.
> 
> too little buffering will reduce the throughput as a result of unused airtime.

So the buffering needed is a function of, at least: 1) the incoming traffic 
rate; 2) the number of retries, times some function of the MAC behavior and 
of how many other senders are trying to transmit.
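
A back-of-envelope instance of that function (all numbers are made-up 
assumptions, just for illustration): while one frame burns through its 
retries, everything arriving behind it has to sit in the buffer.

    arrival_pps = 2000          # incoming packet rate
    t_attempt_s = 2e-3          # avg airtime + backoff per attempt
    retries     = 10            # worst case per frame

    # worst-case head-of-line blocking while one frame retries
    head_of_line_s = (retries + 1) * t_attempt_s
    backlog_pkts = arrival_pps * head_of_line_s
    print(backlog_pkts)         # ~44 packets behind one bad frame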


> But at the low data rates involved, the system would have to be extremely 
> busy for the dead air to add up to a significant amount of time, even if 
> only one packet at a time is buffered.



> You are also conflating the effect of the driver/hardware buffering with it 
> doing retries.

because of the "function" i wrote above: the more you retry, the more you need 
to buffer when traffic continuously arrives because you're stuck trying to send 
a frame again.
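
The same coupling as a standard queueing-style stability condition (my 
framing, made-up numbers): with arrival rate lam and a mean on-air service 
time that grows with the collision probability p_c and the retry cap R, 
utilization has to stay below 1, or no finite buffer is "enough".

    def mean_service_time(t_attempt, p_c, R):
        # expected attempts per frame, capped at R+1:
        # sum of p_c^k for k = 0..R (truncated geometric series)
        return t_attempt * sum(p_c**k for k in range(R + 1))

    lam = 2000                                   # packets/s arriving
    rho = lam * mean_service_time(2e-3, 0.5, 10) # utilization
    print(rho)    # ~8 here: > 1, so the queue grows without bound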

What am I getting wrong? This seems to be just the conversation I was hoping 
to have (so thanks!) - I'd like to figure out whether there's a fault in my 
logic.

Cheers,
Michael
