> >> but if the connection from the laptop to the AP is 54M and the
> >> connection from the AP to the Internet is 1G, you are not going to
> >> have a lot of buffering taking place. You will have no buffering on
> >> the uplink side, and while you will have some buffering on the
> >> downlink side
I think the point being made here was that the FTTH homes were talking to
DSL hosts via P2P a lot.
- Jonathan Morton
On Jan 9, 2013 6:54 AM, "David Lang" wrote:
> On Tue, 8 Jan 2013, Mark Allman wrote:
>
> Did any of their 90 homes contain laptops connected over WiFi?
>>>
>>> Almost certainly,
On Tue, 8 Jan 2013, Mark Allman wrote:
Did any of their 90 homes contain laptops connected over WiFi?
Almost certainly,
Yeah - they nearly for sure did. (See the note I sent to bloat@ this
morning.)
but if the connection from the laptop to the AP is 54M and the
connection from the AP to the Internet is 1G, you are not going to
have a lot of buffering taking place.
> > Note the paper does not work in units of *connections* in section 2, but
> > rather in terms of *RTT samples*. So, nearly 5% of the RTT samples add
> > >= 400msec to the base delay measured for the given remote (in the
> > "residential" case).
>
> Hmm, yes, I was wondering about this and was
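The per-sample accounting described above (added delay = an RTT sample minus the base delay measured for the same remote) can be sketched roughly as follows; the function names are mine, not from the paper or the thread:

```python
def added_delays(rtt_samples):
    """Added queueing delay per RTT sample: each sample minus the base
    (minimum observed) RTT for the same remote host."""
    base = min(rtt_samples)
    return [s - base for s in rtt_samples]

def fraction_at_least(rtt_samples, threshold_s):
    """Fraction of RTT samples whose added delay meets the threshold,
    e.g. 0.4 for the >= 400 msec figure discussed in the thread."""
    added = added_delays(rtt_samples)
    return sum(1 for d in added if d >= threshold_s) / len(added)
```

This is why working in units of RTT samples rather than connections matters: one long-lived connection can contribute many high-delay samples.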
> > Did any of their 90 homes contain laptops connected over WiFi?
>
> Almost certainly,
Yeah - they nearly for sure did. (See the note I sent to bloat@ this
morning.)
> but if the connection from the laptop to the AP is 54M and the
> connection from the AP to the Internet is 1G, you are not going to
> have a lot of buffering taking place.
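The asymmetry above can be made concrete with a back-of-the-envelope calculation; the buffer size is illustrative, not a measured value from the thread:

```python
def drain_time_s(buffer_bytes, link_bps):
    """Worst-case queueing delay added by a full buffer: the time the
    bottleneck link needs to drain it."""
    return buffer_bytes * 8 / link_bps

# An illustrative 256 KiB buffer in front of a 54 Mbps WiFi link adds
# ~39 ms of delay when full; the same buffer in front of a 1 Gbps
# uplink drains in ~2 ms, so the queue builds at the slower hop.
wifi_delay = drain_time_s(256 * 1024, 54e6)
uplink_delay = drain_time_s(256 * 1024, 1e9)
```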
On Tue, 8 Jan 2013, Hal Murray wrote:
Aside from their dataset having absolutely no reflection on the reality of
the 99.999% of home users running at speeds two or three or *more* orders of
magnitude below that speed, it seems like a nice paper.
Did any of their 90 homes contain laptops connected over WiFi?
On 8 Jan, 2013, at 9:03 pm, Hal Murray wrote:
> Any ideas on what happened at 120 seconds? Is that a pattern I should
> recognize?
That looks to me like the link changed to a slower speed for a few seconds.
That can happen pretty much at random in a wireless environment, possibly in
response
> Aside from their dataset having absolutely no reflection on the reality of
> the 99.999% of home users running at speeds two or three or *more* orders of
> magnitude below that speed, it seems like a nice paper.
Did any of their 90 homes contain laptops connected over WiFi?
> Here is a plot
Hey, guys, chill.
I'm sorry if my first comment on the paper's dataset sounded overly
sarcastic. I was equally sincere in calling it a "good paper", as the
analysis of the dataset seemed largely sound at first glance... but I
have to think about it for a while longer, and hopefully
suggest
David-
I completely agree with your "measure it" notion. That is one of the
points of my paper.
> First, it's important to measure the "right thing" - which in this
> case is "how much queueing *delay* builds up in the bottleneck link
> under load"
That said, as is often the case there is no "
Re: "the only thing that counts is peak throughput" - it's a pretty cynical
stance to say "I'm a professional engineer, but the marketing guys don't have a
clue, so I'm not going to build a usable system".
It's even worse when fellow engineers *disparage* or downplay the work of
engineers who
[This mail won't go to "end2end-interest" because I am blocked from posting
there, but I leave the address on so that I don't narrow the "reply-to" list
for those replying to me. I receive but can not send there.]
Looking at your graph, Ingemar, the problem is in the extreme cases, which are
> graphs, ~5% of connections to "residential" hosts exhibit added delays
> of >=400 milliseconds, a delay that is certainly noticeable and would
> make interactive applications (gaming, voip etc) pretty much unusable.
Note the paper does not work in units of *connections* in section 2, but
rather in terms of *RTT samples*.
Let me make a few general comments here ...
(0) The goal is to bring *some* *data* to the conversation. To
understand the size and scope of bufferbloat problem it seems to me
we need data.
(1) My goal is to make some observations of the queuing (/delay
variation) in the non-FTTH por
OK...
Likely means that AQM is not turned on in the eNodeB; can't be 100% sure,
but it seems so.
At least one company I know of offers AQM in eNodeB. However, one problem seems
to be that the only thing that counts is peak throughput; you have probably
seen these "up to X Mbps" slogans too
Stephen Hemminger writes:
> The tone of the paper is a bit of "if academics don't analyze it to
> death it must not exist". The facts are interesting, but the
> interpretation ignores the human element. If humans perceive delay,
> "Daddy the Internet is slow", then they will change their behavior
Hello Ingemar,
Thanks for your feedback and your own graph.
This is testing the LTE downlink, not the uplink. It was a TCP download.
There was zero packet loss on the ICMP pings. I did not measure the
TCP flow itself but I suspect packet loss was minimal if not also
zero.
Best,
Keith
On Tue, J
Hi
Interesting graph, thanks for sharing it.
It is likely that the delay is only limited by TCP's maximum congestion window;
for instance at T=70 the throughput is ~15Mbps and the RTT ~0.8s, giving a
congestion window of 1.5e7*0.8/8 = 1500000 bytes; recalculations at other time
instants seem to give
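The window arithmetic above follows from the steady-state relation throughput = cwnd / RTT, so cwnd = throughput × RTT. A quick restatement (a sketch of the calculation, not code from the thread):

```python
def implied_cwnd_bytes(throughput_bps, rtt_s):
    """Congestion window implied by sustained throughput over a given RTT:
    throughput = cwnd / RTT, so cwnd = throughput * RTT, converted to bytes."""
    return throughput_bps * rtt_s / 8

# ~15 Mbps sustained across an inflated 0.8 s RTT implies roughly 1.5 MB
# outstanding in the network, i.e. a very large standing queue.
cwnd = implied_cwnd_bytes(15e6, 0.8)
```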
I'm sorry to report that the problem is not (in practice) better on
LTE, even though the standard may support features that could be used
to mitigate the problem.
Here is a plot (also at http://web.mit.edu/keithw/www/verizondown.png)
from a computer tethered to a Samsung Galaxy Nexus running Android