[tor-relays] Guard flag flapping

2015-08-07 Thread Green Dream
I have two relays on the same Gb/s connection. I followed the optimization
tips offered in another thread, and think I have things running reasonably
well. What I don't understand is why the Guard flag keeps flapping back and
forth on both relays.

https://atlas.torproject.org/#details/89B9AE4C778DE44AFFAB791093E19979616E69C4

https://atlas.torproject.org/#details/11A579CC6CEE644A390704977B70CBA8D8347783

If you look at the 1 week graph of weights, you'll see what I mean.

I haven't been able to find any documentation or explanation for this
behavior. Anyone have any thoughts on why this might be happening, or any
pointers on how to further investigate?


[tor-relays] Guard flag flapping

2015-08-07 Thread starlight . 2015q3
Both relays are showing low BWauth-measured
bandwidth and are below the 2000 threshold
for the Guard flag.

Recently BWauths were offline and the
consensus algorithm reverted to self-
measure.  During that period the relays
were above the 2000 threshold and
were assigned Guard.

But even the self-measured value was very low
for a gigabit link in a tier-1 network
(Qwest).

Some more tuning work is probably needed
to get performance up higher.  Possibly
network issues are at work.  If a low-end
consumer-grade router or firewall is in
the picture, it could be the cause of
the problem.
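
To check the measured values and flags yourself,
the Onionoo "details" document for a relay shows
both.  Rough sketch (untested; the fingerprint is
taken from your first Atlas link above, and python
is only used to pretty-print the JSON):

   # consensus_weight is the value derived from the BWauth votes;
   # "Guard" appears in the flags list only while the flag is assigned
   curl -s "https://onionoo.torproject.org/details?lookup=89B9AE4C778DE44AFFAB791093E19979616E69C4" \
     | python -m json.tool | grep -E '"consensus_weight"|"Guard"'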



[tor-relays] Guard flag flapping

2015-08-07 Thread starlight . 2015q3
You might start by running a speed test
via the speedtest-cli Python script to see
how the network performance looks:

   https://pypi.python.org/pypi/speedtest-cli/
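
Quick usage sketch (assumes pip is available):

   pip install speedtest-cli
   speedtest-cli --simple   # prints just ping, download and upload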



[tor-relays] Guard flag flapping

2015-08-07 Thread starlight . 2015q3
First, I am assuming you are running bare-metal on
a system and not in a virtualized server--everything
below is premised on that.  Do not expect a virtual
server or Linux container to perform well as a high-
capacity Tor relay.  It's possible to configure a
high-performance VM, but this is an esoteric art
and one is better off renting a small dedicated
physical server than going that route.

Your story of a relay setup that should measure
fast by all apparent metrics but is given terrible
rankings by BWauths is common this year.

The BWauth scripts are known to be buggy, though
they have supposedly been improved very recently.
'longclaw' just came back online with the "latest"
code, but after starting out with a failure to
measure 2000 relays two days ago, it's still
running 1000 shy of the full population:

https://consensus-health.torproject.org/#bwauthstatus

Scroll down a little and you will see 'longclaw'
is unique in voting 976 relays not-guard and 1709
relays not-fast.  That seems a more serious issue
than cold start glitching IMO, and is not
impressive if that is what it really is.

A fifth BWauth is said to be arriving soon,
which should help.

Your relays are currently measured as follows:

greendream848
longclaw-w Bandwidth=1694 Measured=986
gabelmoo-w Bandwidth=1694 Measured=347
maatuska-w Bandwidth=1694 Measured=874
moria1  -w Bandwidth=1694 Measured=1550

spacequeen974
longclaw-w Bandwidth=1698 Measured=493
gabelmoo-w Bandwidth=1698 Measured=970
maatuska-w Bandwidth=1698 Measured=1930
moria1  -w Bandwidth=1698 Measured=2130

You can see future and past reports of these in

https://collector.torproject.org/recent/relay-descriptors/votes/
https://collector.torproject.org/archive/relay-descriptors/votes/

where

longclaw is 23D15D9. . .
gabelmoo is ED03BB6. . .
maatuska is 49015F7. . .
moria1   is D586D18. . .
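
If you pull down one of those vote files, the
relay's entry and the bandwidth line the authority
voted can be grepped out with something like this
(untested; <vote-file> is whichever vote you
downloaded):

   # the "r " line identifies the relay; the "w " line a few lines
   # below it carries the Bandwidth= / Measured= values quoted above
   grep -A 5 'greendream848' <vote-file> | grep -E '^(r|w) '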

That the measurements are all in the same ballpark
does indicate that some subtle issue with the
network and/or equipment may be at work and the
BWauths may not be at fault.  But many have
complained that nothing they do seems to work.

If the firewall is performing stateful packet
inspection or any kind of DPI (deep packet inspection)
disable that for all incoming and outgoing Tor
traffic.  It's all encrypted anyway so there's
no point, and DPI can drag down performance
big-time.  The directory traffic is unencrypted
but I've never heard of a firewall with
stateful rules for the Tor directory protocol.
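
If netfilter connection tracking is running on the
relay box itself, you can also exempt the Tor
ports from it.  Rough sketch only--9001/9030 are
placeholders, substitute the ORPort/DirPort from
your torrc:

   # raw-table NOTRACK rules skip connection tracking for relay traffic
   iptables -t raw -A PREROUTING -p tcp --dport 9001 -j NOTRACK
   iptables -t raw -A OUTPUT     -p tcp --sport 9001 -j NOTRACK
   iptables -t raw -A PREROUTING -p tcp --dport 9030 -j NOTRACK
   iptables -t raw -A OUTPUT     -p tcp --sport 9030 -j NOTRACK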

If you can put the system directly on the public
IP address with no firewall or local-rack router, I
recommend doing this.  Just make sure iptables rules
are set to protect login and other non-Tor access.
Either that, or disable iptables and strip the
server down so that nothing but the 'tor' process
and 'ssh' are running, and configure 'ssh' to
accept only public-key authentication (be sure to
set up and test key auth before applying the
setting).  Check that listeners are minimized with

   lsof -Pn | fgrep LISTEN

The email daemon should stay up to handle alarms;
just be sure it listens only on 127.0.0.1.  Likewise
anything else that is absolutely necessary.  Use
*Port and *Policy settings in torrc to lock down
control and socks access to the daemon.
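
For example (illustrative torrc lines, not your
actual config):

   SocksPort 0               # no local SOCKS clients on a dedicated relay
   ControlPort 9051          # binds to 127.0.0.1 by default
   CookieAuthentication 1    # require the control auth cookie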

One notable sysctl that matters for high-capacity
relays is

   net.netfilter.nf_conntrack_checksum = 0

though having this enabled would not cause the
current poor measurements.

You should change this setting:

   net.ipv4.tcp_no_metrics_save = 1

turning this off was a workaround for a very old
kernel bug that has long since been fixed everywhere.
Turning it on improves performance.

You might try

   net.ipv4.tcp_wmem = 4096  234375  4194304
   net.ipv4.tcp_rmem = 4096  375000  4194304

which will cause the congestion window to
get to full size a bit quicker, and these

   net.core.somaxconn = 1024
   net.core.netdev_max_backlog = 524288
   net.ipv4.tcp_slow_start_after_idle = 0
   net.ipv4.tcp_keepalive_time = 600

which increase various limits for fast networks
with lots of connections.

Make sure these default values are active and
have not been changed to something non-default by
/etc/sysctl.conf:

net.ipv4.tcp_moderate_rcvbuf = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_sack = 1
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_congestion_control = cubic
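
A quick way to check them all at once:

   sysctl net.ipv4.tcp_moderate_rcvbuf net.ipv4.tcp_timestamps \
          net.ipv4.tcp_window_scaling net.ipv4.tcp_sack \
          net.ipv4.tcp_syncookies net.ipv4.tcp_congestion_control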

And try adding

   TXQUEUELEN=10

to the

   /etc/sysconfig/network-scripts/ifcfg-ethX

for the interface(s) where tor runs.  It can be
activated manually with

   ip link set qlen 10 dev ethX
   ip link show dev ethX

Finally, make sure the kernel is recent enough to
include the Google-advocated connection-start
congestion-window increase:

https://lwn.net/Articles/427104/

http://samsaffron.com/archive/2012/03/01/why-upgrading-your-linux-kernel-will-make-your-customers-much-happier

http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=442b9635c569fef038d5367a7acd906db4677ae1
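
The change landed in Linux 2.6.39, so checking the
kernel version is enough; on an older kernel the
initial window can be approximated per route with
iproute2 (sketch only--192.0.2.1 is a placeholder
gateway, check 'ip route show' first):

   uname -r
   ip route change default via 192.0.2.1 dev ethX initcwnd 10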

If you end up implementing any of the above and it
works, please describe the results on tor-relays.

[tor-relays] Guard flag flapping

2015-08-07 Thread starlight . 2015q3
Ah, I forgot to calculate the default TCP
windows for your link speed rather than
mine.

So that's

125000000 bytes/sec (1 gigabit/sec)

* 25 milliseconds or 3125000 (for read)

* 40 milliseconds or 5000000 (for write)

   net.core.rmem_max = 16777216
   net.core.wmem_max = 16777216
   net.ipv4.tcp_wmem = 4096  3125000 16777216
   net.ipv4.tcp_rmem = 4096  5000000  16777216

The 25 and 40 milliseconds are
my guesses at roughly the faster and
more typical ping times within the
domestic US Internet.

This just gets each connection started; the settings

   net.ipv4.tcp_moderate_rcvbuf = 1
   net.ipv4.tcp_timestamps = 1

cause the IP stack to very quickly
establish precise values for the transmit
and receive TCP windows.
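
The same arithmetic in shell, if you want to redo
it for a different link speed or RTT:

   # bandwidth-delay product = bytes/sec * RTT in seconds
   echo $(( 125000000 * 25 / 1000 ))   # 3125000
   echo $(( 125000000 * 40 / 1000 ))   # 5000000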



[tor-relays] Guard flag flapping

2015-08-07 Thread starlight . 2015q3
The comments were backward, but the sysctls are correct:

* 25 milliseconds or 3125000 (for write)

* 40 milliseconds or 5000000 (for read)



[tor-relays] Guard flag flapping

2015-08-07 Thread starlight . 2015q3
One more erratum:

The sense of this is negative, so
the current setting you have is
correct:

   net.ipv4.tcp_no_metrics_save = 0

The double-negative got me.



[tor-relays] Guard flag flapping

2015-08-07 Thread starlight . 2015q3
>I had already run tests with both speedtest-cli
>and iperf3. This server consistently achieves 200
>to 300 Mb/s in both directions, with both relays
>still running, and on some runs is hitting over
>800 Mb/s.

Final caveats:

If the server is on a shared gigabit link,
performance may not improve.  The difference
between a dedicated gigabit and a shared
gigabit is immense.  One expects an ISP
to load shared gigabit links as heavily
as possible--i.e., up to the point
where customers complain and terminate
service.  True dedicated gigabit links
are expensive.

If it's running in a VPS that's showing
a gigabit link speed, the number is fiction.



[tor-relays] Guard flag flapping

2015-08-08 Thread starlight . 2015q3
>. . .have physical access to the . . . ONT.
>

Doubt this is the case, but on the off chance that
the ONT is configured to flow IP traffic over a
coaxial cable attached to a CPE router (customer
premises equipment), the configuration should be
changed to deliver data over Gigabit Ethernet
directly to a good-quality GBE switch provided by
you.  New eight-port Netgears are surprisingly
good managed switches for around $100.  For under
$600 you can purchase used 24-or-48-port
Nortel/Avaya 5510/5520 switches on eBay.  Though
old, these enterprise-grade switches have
astounding performance characteristics thanks to
their Broadcom switch-fabric silicon.  They were
around $7000 new when they were current-gen.  All
of the above switches have VLAN support, so one
switch can operate as multiple isolated networks.

I run a Verizon FiOS 75/75 connection, and the
default setup is for Internet traffic to pass to a
Verizon-supplied Actiontec router over a coax cable.

Fortunately I hung out with the installer and
found this out up front--and had it switched to
GBE from the outset.

For any VZ customers reading this, note that you
can connect the Actiontec to your network so it
can obtain TV programming information.  The coax
will continue to carry video and phone data which
are on separate fiber optic wavelengths/channels.



[tor-relays] Guard flag flapping

2015-08-08 Thread starlight . 2015q3
>The problem is likely that your ISP is routing
>some traffic via an overloaded peering point.

Running

   traceroute -w 1 -q 5 -A <target IP> 1400

may provide more detail on this issue.
The -A option causes 'traceroute' to show
Autonomous System numbers with each
route hop, where ASNs are Internet
routing domains associated with different
network providers.  One can readily see
where big increases in latency are occurring
along the path and what peerings are
associated with them.  Note that some big
increases are justified--such as with
a hop over a transatlantic cable.
Setting -q 5 (or bigger) causes traceroute
to attempt each hop that many times.
If you see asterisk characters, that's
packet loss and may indicate congestion
points.

Setting the packet size to 1400 will
increase the probability that network
congestion will impact the results.

Try pulling up

   https://torstatus.blutmagie.de/

which shows IPs together with other relay
attributes, and experiment with tracing to various
major relays.
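
For example, against one of the fast exits from
that list:

   traceroute -w 1 -q 5 -A 176.126.252.11 1400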



[tor-relays] Guard flag flapping

2015-08-08 Thread starlight . 2015q3
[apologies to all for thread-breaking, am 
 really going on hiatus but the horror-show
 performance GBE topic was too darn
 interesting--last post I promise!]

>I would call it a dedicated gigabit link. This is
>probably up for debate.  The provider's overall
>capacity is very likely not [number of customers]
>x [1 Gb/s] but I've never witnessed signs of
>throttling or over-subscription.

Pulling a whois on the IP 184.100.166.110
and doing a quick Google search turned up that you
are probably a bleeding-edge subscriber
to this new Qwest service:

  https://www.centurylink.com/fiber/

Running a filter on Blutmagie

   hostname contains quest

shows that your relays are, by a huge margin,
the fastest of about a dozen.

My advice is that this Qwest service is third-rate;
rather than bleeding for the length of the
contract, run for the hills!

If you are still in the 1-month cancellation
period (relays are about a month old), terminate
the service immediately and place an order for
Verizon FiOS.

FiOS costs more, but sells real bandwidth instead
of imaginary bandwidth.  You can run a relay on a
75/75 MBit link (like mine) and obtain a respectable
bandwidth ranking.  FiOS goes up to 1/2 Gbit/s for
about $300-400 per month, but the bandwidth is real
and the network is good-to-excellent.

Either that or switch to leasing a colo server in
Germany or some other country where bandwidth is
extremely cheap, or perhaps finding a cheap
bandwidth colo in the US.

Qwest GBE looks like a turkey.



Re: [tor-relays] Guard flag flapping

2015-08-07 Thread Green Dream
Thanks for the reply.

I had already run tests with both speedtest-cli and iperf3. This server
consistently achieves 200 to 300 Mb/s in both directions, with both relays
still running, and on some runs is hitting over 800 Mb/s.

The BWauth and self-measured bandwidths make no sense to me. Watching arm,
the averages are always in the X Mb/s range. I've watched these relays
serve 10 - 15 Mb/s each, 20 - 30 Mb/s in parallel, during busy times. Right
now one is running 1.6 Mb/s average and the other at 2.5 Mb/s, having
started these two arm instances about 2 hours ago. I don't find these
numbers to be very impressive given the capacity of the connection, but
they're still several orders of magnitude better than the measured
bandwidth. I don't understand the discrepancy.

I'm not using the ISP-provided router. It's not a consumer-grade router
either. I hesitate to list the specific model here, but according to its
specifications it shouldn't have a problem with the load, and indeed it
doesn't appear to be struggling at all.

In terms of optimizing the server, I've followed Moritz's guide. It doesn't
appear to be dropping connections. At one point I had 5000+ established
connections. The CPU and RAM are getting a work-out for sure, but neither
is maxed.

If there's a bottleneck on my side, I'm not sure where it is. What else
should I be checking? And why is actual performance in Mb/s so much higher
than the measured bandwidth? By the way, where are you finding the
historical BWauth and self-measured bandwidths?


Re: [tor-relays] Guard flag flapping

2015-08-07 Thread Green Dream
P.S. Here's some additional data from the server. I just ran these
commands, with the two relays still running.

$ speedtest-cli
Retrieving speedtest.net configuration...
Retrieving speedtest.net server list...
Selecting best server based on latency...
Hosted by City of Sandy-SandyNet Fiber (Sandy, OR) [1.91 km]: 15.696 ms
Testing download speed
Download: 719.17 Mbit/s
Testing upload speed..
Upload: 166.69 Mbit/s

$ sudo sysctl -p
net.ipv4.ip_forward = 1
net.ipv4.ip_local_port_range = 1 61000
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_tw_recycle = 1
fs.file-max = 64000

$ sudo su debian-tor --shell /bin/bash --command "ulimit -Sn"
64000
$ sudo su debian-tor --shell /bin/bash --command "ulimit -Hn"
64000

$ ss -s
Total: 2314 (kernel 0)
TCP:   2262 (estab 2195, closed 10, orphaned 38, synrecv 0, timewait 9/0),
ports 0

Transport  Total  IP     IPv6
*          0      -      -
RAW        0      0      0
UDP        6      4      2
TCP        2252   2250   2
INET       2258   2254   4
FRAG       0      0      0

$ uptime
 15:06:26 up 7 days, 20:57,  2 users,  load average: 0.50, 0.49, 0.59


Re: [tor-relays] Guard flag flapping

2015-08-07 Thread Green Dream
Thank you for the thoughtful replies. To clear up a few points:

- This is a dedicated bare-metal server -- not a VPS, VM or container. I
have physical access to the server, router and ONT.

- I would call it a dedicated gigabit link. This is probably up for debate.
The provider's overall capacity is very likely not [number of customers] x
[1 Gb/s] but I've never witnessed signs of throttling or over-subscription.
If I plug a laptop directly into the router, I get a 800 - 900 Mb/s
bidirectional speed test every time (via dslreports.com/speedtest).

If I fire up the Tor Browser Bundle and use the 'EntryNodes' config line to
force traffic through my own relays, performance is fine. It's not great
mind you, but it's no worse than going through randomly selected guards. In
fact, depending on the other middle and exit relays, I'd say my relays work
quite well as entries.

I've learned from this thread that the Guard flag flapping is a direct
result of the low measured bandwidth. I still have no idea why the measured
bandwidth is so (terribly, comically) low. The fact that it's so wrong is
somewhat telling. Actual performance on the network is like 1000 times
better than what the measured bandwidth says! Something feels very broken
here.

Is it possible for an operator of one of the BWauth nodes to look into
this? What happens if you set my relays as 'EntryNodes' and run real-world
tests?

I will try the other optimizations mentioned here as well, at a slow pace,
so I can understand any changes that may occur.


Re: [tor-relays] Guard flag flapping

2015-08-08 Thread torrry
 Original Message 
From: Green Dream 
Apparently from: tor-relays-boun...@lists.torproject.org
To: tor-relays@lists.torproject.org
Subject: Re: [tor-relays] Guard flag flapping
Date: Fri, 7 Aug 2015 21:49:16 -0700
 
> I've learned from this thread that the Guard flag flapping is a direct result 
> of the low measured bandwidth. I still have no idea why the measured 
> bandwidth is so (terribly, comically) low. The fact that it's so wrong is 
> somewhat telling. Actual performance on the network is like 1000 times better 
> than what the measured bandwidth says! Something feels very broken here.

I ran some tests against your node. While performance is generally very good, 
it has very low performance connecting to some exit nodes. The problem is 
likely that your ISP is routing some traffic via an overloaded peering point. 
Some ISPs have been known to do that on purpose, e.g. to make connections to 
Netflix or YouTube extremely slow while claiming not to do throttling.

The bandwidth auths probably downrate the measurement results of your server 
severely because of those slow connections.


Re: [tor-relays] Guard flag flapping

2015-08-08 Thread torrry
 Original Message 
From: starlight.201...@binnacle.cx
Apparently from: tor-relays-boun...@lists.torproject.org
To: tor-relays@lists.torproject.org
Subject: [tor-relays]  Guard flag flapping
Date: Sat, 08 Aug 2015 14:39:48 -0400

> My advice is that this QWest service is third-rate
> and rather than bleeding for the length of the
> contract, run for the hills!
> 
> If you are still in the 1-month cancellation
> period (relays are about a month old), terminate
> the service immediately and place an order for
> Verizon FiOS.

Verizon is one of those ISPs that have played routing games, slowing down 
Netflix and other traffic that happens to use the same routing:

http://www.extremetech.com/computing/186576-verizon-caught-throttling-netflix-traffic-even-after-its-pays-for-more-bandwidth


Re: [tor-relays] Guard flag flapping

2015-08-08 Thread Green Dream
On Sat, Aug 8, 2015 at 1:41 AM,  wrote:

>
> I ran some tests against your node. While performance is generally very
> good, it has very low performance connecting to some exit nodes.


Thanks for running the tests. Which exit nodes led to poor performance? I
would like to try to reproduce any performance problems.

How would you measure performance between my node and a given exit without
being influenced by the properties of the middle relay? You can only set me
as an entrynode, and you can't pick a specific middle, so how would you
know that the low performance was my node and not the random middle relay?

> The problem is likely that your ISP is routing some traffic via an
> overloaded peering point.


This is certainly possible! It's the most compelling theory for me at the
moment, although I'm not convinced.


> Some ISPs have been know to do that on purpose, e.g. to make connections
> to Netflix or Youtube extremely slow while claiming not to do throttling.


Fortunately those practices are mostly coming to an end in the US, with the
FCC's "Open Internet" rules adopted earlier this year. There is a way to
file complaints about it, but I'd need more data.

> The bandwidth auths probably downrate the measurement results of your
> server severely because of those slow connections.


Probably? How can we investigate further?

As I write this, one of the relays has the Guard flag again and is averaging
around 2 Mb/s. That still stinks, but the measured bandwidth shows only
15.0 Kb/s! Something is still ridiculously off. I'll continue digging. I
appreciate all the help thus far.


Re: [tor-relays] Guard flag flapping

2015-08-09 Thread torrry
> Thanks for running the tests. Which exit nodes led to poor performance? I 
> would like to try to reproduce any performance problems.

I did not record the nodes (they were in Europe). A simple test you could run 
on your server is fetching directory info from nodes that have directory 
functionality enabled.

wget http://<IP>:<DirPort>/tor/server/all

e.g.: wget http://176.126.252.11:443/tor/server/all

You can get a bandwidth-sorted list of nodes at:
https://torstatus.blutmagie.de/

There is a column that has the directory port.

> How would you measure performance between my node and a given exit without 
> being influenced by the properties of the middle relay? You can only set me 
> as an entrynode, and you can't pick a specific middle, 

You are wrong. :-) 

You can build arbitrary circuits by hand. There are libraries like Stem, 
Txtorcon, and TorCtl, and there is a text-based, SMTP-like protocol that you can 
use directly:

https://gitweb.torproject.org/torspec.git/tree/control-spec.txt

> so how would you know that the low performance was my node and not the random 
> middle relay?

Your node would be a non-random middle relay.
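
For example, straight over the control port with no library at all (rough
sketch; assumes ControlPort 9051 with password auth, and <guard_fp> / <exit_fp>
are placeholders--the middle fingerprint is one of the relays from the Atlas
links at the top of this thread):

   # ask tor to build an explicit 3-hop circuit; streams can then be
   # attached to it with ATTACHSTREAM (see the control spec)
   (
     echo 'AUTHENTICATE "mypassword"'
     echo 'EXTENDCIRCUIT 0 $<guard_fp>,$89B9AE4C778DE44AFFAB791093E19979616E69C4,$<exit_fp>'
     sleep 3
     echo 'GETINFO circuit-status'
     echo 'QUIT'
   ) | nc 127.0.0.1 9051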

> >  The bandwidth auths probably downrate the measurement results of your 
> > server severely because of those slow connections.
>  
> Probably? How can we investigate further?

AFAIK, the raw bandwidth auth measurements are not published, only the total 
result.


Re: [tor-relays] Guard flag flapping

2015-08-09 Thread Green Dream
> A simple test you could run on your server is fetching directory info
> from nodes that have directory functionality enabled.

Thanks for the idea. blutmagie offers a CSV list of its current result set,
so this ended up being quite easy to automate.

I fetched a copy of the CSV to the server:

  wget https://torstatus.blutmagie.de/query_export.php/Tor_query_EXPORT.csv

Then I picked out the columns I cared about, included only Exits with a
Dirport, then sorted by the bandwidth column, and grabbed the fastest 50:

   awk -F \, '{if ($10 == "1" && $8 != "None") print $3, $5, $8}' \
       Tor_query_EXPORT.csv | sort -nr | head -50 > top-50-exits-with-dirport.txt

That file now looks like:

34994 37.130.227.133 80
33134 176.126.252.11 443
30736 176.126.252.12 21
30720 77.247.181.164 80
26958 77.247.181.166 80


So we now have the bandwidth, IP, and dirport of the fastest exits. With
this list in hand, I just needed to form a proper URL, wget each one, and
grep out the transfer speed:

   for URL in $(awk '{print "http://" $2 ":" $3 "/tor/server/all"}' \
       top-50-exits-with-dirport.txt); do
     printf "$URL " && wget $URL -O /dev/null 2>&1 | grep -o "[0-9.]\+ [KM]*B/s"
   done

The output ends up looking like this (only displaying the first 10 for
brevity):

http://37.130.227.133:80/tor/server/all 1.17 MB/s
http://176.126.252.11:443/tor/server/all 4.54 MB/s
http://176.126.252.12:21/tor/server/all 666 KB/s
http://77.247.181.164:80/tor/server/all 111 KB/s
http://77.247.181.166:80/tor/server/all 330 KB/s
http://195.154.56.44:80/tor/server/all 3.65 MB/s
http://77.109.141.138:80/tor/server/all 2.20 MB/s
http://96.44.189.100:80/tor/server/all 13.4 MB/s
http://197.231.221.211:1080/tor/server/all 347 KB/s
http://89.234.157.254:80/tor/server/all 295 KB/s

I'm not seeing anything immediately, although I need to run it on a larger
set. There's no smoking gun so far though. Some of the speeds are a bit
slow, but nothing low enough to explain the extremely low measured
bandwidth these relays are getting. I think I'll clean this up a bit, put
it into an actual script, and try running it on another server on a
different AS for comparison.
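
Here's roughly what the scripted version might look like (same commands as
above, just glued together; untested as a whole):

   #!/bin/bash
   # fetch the Blutmagie export, keep the 50 fastest exits with a DirPort,
   # then time a directory fetch from each one
   csv=Tor_query_EXPORT.csv
   list=top-50-exits-with-dirport.txt

   wget -q -O "$csv" https://torstatus.blutmagie.de/query_export.php/Tor_query_EXPORT.csv

   awk -F , '{if ($10 == "1" && $8 != "None") print $3, $5, $8}' "$csv" \
       | sort -nr | head -50 > "$list"

   # bw column is only used for the sort; read consumes it anyway
   while read -r bw ip port; do
       url="http://$ip:$port/tor/server/all"
       speed=$(wget "$url" -O /dev/null 2>&1 | grep -o '[0-9.]\+ [KM]*B/s' | tail -1)
       printf '%s %s\n' "$url" "${speed:-failed}"
   done < "$list"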


Re: [tor-relays] Guard flag flapping

2015-08-09 Thread torrry
> So we now have the bandwidth, IP, and dirport of the fastest exits. With this 
> list in hand, I just needed to form a proper URL, wget each one, and grep out 
> the transfer speed:
>  
> http://37.130.227.133:80/tor/server/all 1.17 MB/s
> http://176.126.252.11:443/tor/server/all 4.54 MB/s
> http://176.126.252.12:21/tor/server/all 666 KB/s
> http://77.247.181.164:80/tor/server/all 111 KB/s
> http://77.247.181.166:80/tor/server/all 330 KB/s
> http://195.154.56.44:80/tor/server/all 3.65 MB/s
> http://77.109.141.138:80/tor/server/all 2.20 MB/s
> http://96.44.189.100:80/tor/server/all 13.4 MB/s
> http://197.231.221.211:1080/tor/server/all 347 KB/s
> http://89.234.157.254:80/tor/server/all 295 KB/s
>  
> I'm not seeing anything immediately, although I need to run it on a larger 
> set. There's no smoking gun so far though. Some of the speeds are a bit slow, 
> but nothing low enough to explain the extremely low measured bandwidth these 
> relays are getting.

The current BW auth measurement results are around 1.0 MBit/s for greendream848. 
I had a couple of measurements in the 300-500 KBit/s range. So, if the auths 
weight heavily towards low individual measurements, things might make sense.

Maybe one of the BW auth guys can comment on how the total measurement result 
is cooked up!?


Re: [tor-relays] Guard flag flapping

2015-08-09 Thread Roger Dingledine
On Sun, Aug 09, 2015 at 12:52:21PM -0700, Green Dream wrote:
>  Some of the speeds are a bit
> slow, but nothing low enough to explain the extremely low measured
> bandwidth these relays are getting.

Note that the bandwidth weights in the consensus are unitless: they
are simply weights, and they only matter relative to the other weights.
Thinking of them as an attempt at an estimate of the bandwidth of your
relay will lead to confusion and unhappiness. :)

> I think I'll clean this up a bit, put
> it into an actual script

Great!

--Roger



Re: [tor-relays] Guard flag flapping

2015-08-10 Thread Jannis Wiese
Hi Roger,

> On 10.08.2015, at 00:24, Roger Dingledine  wrote:
> Note that the bandwidth weights in the consensus are unitless: [...]

So Atlas is misleading; when you hover your mouse over the Consensus Weight, it 
says:

> Weight assigned to this relay by the directory authorities that clients use 
> in their path selection algorithm. The unit is arbitrary; currently it's 
> kilobytes per second, but that might change in the future.

Cheers,
Jannis


Re: [tor-relays] Guard flag flapping

2015-08-10 Thread nusenu

> Maybe one of the BW auth guys can comment on how the total
> measurement result is cooked up!?

https://gitweb.torproject.org/torspec.git/tree/dir-spec.txt#n2028

https://lists.torproject.org/pipermail/tor-dev/2015-August/009255.html