I know that latency is the real driver of user experience. But my speed
tests were averages over a long time and many requests. So even though Tor
gives me the faster connection on paper, it feels much slower.

Tor:
request > ..... noooothiiiing for a minute > ........ then the page loads in ten
seconds with all images (and the bandwidth meter shows a speed above 140 kB/s).

JAP:
request > .... a few seconds to the first response > slower, but continuous page
loading for more than a minute > done (and the maximum speed was never better
than a few kB/s)

The second behaviour is much better, because you feel the responsiveness, and
in some cases you can start reading the page while it is still loading.

What I am asking is: why is Tor so different here? Is there a reason in Tor's
design, or is it simply not well optimized? Because in total there IS enough
bandwidth on the nodes (the overall speed is sufficient).

Reason 1, latency)
I understand that JAP (with most of its servers in Germany) will be more
responsive than Tor, whose nodes are spread around the whole world, so one
request crosses several continents. But: there are 3 nodes, me and the target
server (4 links in the chain in total). Let's assume that every node is on a
different continent and on a bad line (~400 ms latency). My latency (I am
3 ms from the Czech internet backbone) to the USA is around 140 ms. So in total:

me -- 400 + 140 + 400 -- node1 --- 400 + 140 + 400 --- node2 --- 940 ---
node3 --- 940 --- target

==== 3760 ms one-way, which is ~7.5 seconds until the first response in the
WORST case!! Don't tell me that 940 ms is a typical latency between nodes...
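
A minimal sketch of that arithmetic (Python; all the numbers are my
worst-case assumptions from above, not measurements):

    # Worst-case latency estimate for a 3-node circuit (assumed numbers).
    hop_ms = 400 + 140 + 400        # one link in the worst case: 940 ms
    links = 4                       # me -> node1 -> node2 -> node3 -> target
    one_way_ms = links * hop_ms     # 3760 ms one-way
    round_trip_ms = 2 * one_way_ms  # ~7520 ms until the first response
    print(one_way_ms, round_trip_ms / 1000.0)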

Reason 2, node overall throughput)
Slow responsiveness could be caused by slow node connections. But overall
speed is not the problem (not for me), because there IS enough connectivity -
it is probably just badly spread over time. I am running an exit node (
https://torstatus.blutmagie.de/router_detail.php?FP=9a8eb14286dea095815b702f6d6f7d1d6d051da7)
and I can see that my bandwidth is not used in the best way. Sometimes the
node is idle and sometimes it hits the speed limits. In idle periods
everything is probably fine - the node handles every request without any
speed limitation of its own. But what happens during a short-time overload?
My guess is that in this case the node serves some users faster than others.
I think so because it would explain why I often get ZERO throughput through
Tor for tens of seconds. My guess is that some nodes simply don't give my
data a slot, because somebody else is transferring something big that "kills"
the bandwidth for a moment. And because there are three nodes in the chain,
there is a high probability that I "hit" this problem on more than one node -
and in the end I get very slow responsiveness.
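
A toy estimate of how this would compound over a circuit (the saturation
fractions are hypothetical, just to show the effect of three nodes in a row):

    # If each relay on a 3-hop circuit is momentarily saturated a fraction p
    # of the time (hypothetical values), the chance that at least one relay
    # on my circuit is saturated at a given moment is 1 - (1 - p)**3.
    for p in (0.1, 0.2, 0.3):
        print(p, round(1 - (1 - p) ** 3, 2))
    # 0.1 -> 0.27, 0.2 -> 0.49, 0.3 -> 0.66

So even moderately busy relays would make a stall somewhere on the circuit
quite likely.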

The high throughput at the end of a request (as I wrote at the beginning) is
also evidence - when I can use that much bandwidth for a few seconds, somebody
else must be waiting. And since there are tens of users on one node, that
supports my idea.
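
And a rough back-of-the-envelope for the "silence, then burst" pattern from
the top of my mail, assuming a relay that drains its queue strictly in order
(the 8 MB bulk transfer and the 1 MB page are made-up sizes; the 140 kB/s is
what my bandwidth meter showed):

    # A big transfer queued ahead of my page blocks it completely for a
    # while, and then my data comes out at full link speed (assumed sizes).
    link_kBps = 140.0
    bulk_kB = 8 * 1024.0            # somebody else's 8 MB download (assumption)
    page_kB = 1 * 1024.0            # my 1 MB page with images (assumption)
    stall_s = bulk_kB / link_kBps   # ~59 s of nothing for me
    burst_s = page_kB / link_kBps   # then ~7 s at full speed
    print(round(stall_s), round(burst_s))

Which looks very much like the minute of nothing followed by a ten-second
burst at 140 kB/s that I described above.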

Is there anybody who knows whether this is possible?

Thanks,
Marek

On Fri, Feb 13, 2009 at 4:31 AM, coderman <coder...@gmail.com> wrote:

> latency is also an important component to measure.  especially when
> content, like html pages, contains many nested elements or links which
> require additional connections to other sites.  the latencies are
> additive and some elements of an HTML document are not rendered until
> a sufficient number of bytes / linked entities are loaded.
>
> best regards,
>
