Re: [tor-relays] Metrics

2023-09-11 Thread mailinglistreader

So you don't have to dig through the logs:
(as root or sudo)
~# cat /var/lib/tor/pt_state/obfs4_bridgeline.txt
~# cat /var/lib/tor/fingerprint

or with multiple instances:
~# cat /var/lib/tor-instances/NN/pt_state/obfs4_bridgeline.txt

Or when running obfs4 in docker:
docker exec `docker ps -aqf "name=obfs4"` get-bridge-line
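If you want to assemble the complete bridge line in one step, here is a minimal
Python sketch (same file paths as above; it assumes obfs4_bridgeline.txt still
contains the usual <IP ADDRESS>/<PORT>/<FINGERPRINT> placeholder template, and
the address and port values below are hypothetical and must be replaced with
your own):

#!/usr/bin/env python3
# Minimal sketch: build a complete obfs4 bridge line from the files above.
ADDRESS = "203.0.113.1"   # hypothetical public IP of the bridge
OBFS4_PORT = "9002"       # hypothetical obfs4 port (not the ORPort)

with open("/var/lib/tor/fingerprint") as f:
    fingerprint = f.read().split()[1]   # file format: "<nickname> <fingerprint>"

with open("/var/lib/tor/pt_state/obfs4_bridgeline.txt") as f:
    template = [l.strip() for l in f if l.startswith("Bridge obfs4")][0]

print(template
      .replace("<IP ADDRESS>", ADDRESS)
      .replace("<PORT>", OBFS4_PORT)
      .replace("<FINGERPRINT>", fingerprint))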


Re: [tor-relays] Metrics

2023-09-07 Thread lists
So you don't have to dig through the logs:
(as root or sudo)
~# cat /var/lib/tor/pt_state/obfs4_bridgeline.txt
~# cat /var/lib/tor/fingerprint

or with multiple instances:
~# cat /var/lib/tor-instances/NN/pt_state/obfs4_bridgeline.txt

-- 
╰_╯ Ciao Marco!

Debian GNU/Linux

It's free software and it gives you freedom!



Re: [tor-relays] Metrics

2023-09-07 Thread Anonforpeace via tor-relays
Ok good I have those in my bridge.

Sent from Proton Mail mobile

 Original Message 
On Sep 7, 2023, 9:22 AM, telekobold wrote:

> On 07.09.23 12:43, Anonforpeace via tor-relays wrote:
> > What is the "complete" bridge line?
>
> Bridge obfs4 <IP ADDRESS>:<PORT> <FINGERPRINT> cert=<CERTIFICATE> iat-mode=0
>
> where <PORT> is the obfs4 port, not the ORPort. (When using IPv6, <IP ADDRESS>
> must be in [].)
>
> See also https://community.torproject.org/relay/setup/bridge/post-install/


Re: [tor-relays] Metrics

2023-09-07 Thread telekobold




On 07.09.23 12:43, Anonforpeace via tor-relays wrote:

What is the "complete" bridge line?


Bridge obfs4 <IP ADDRESS>:<PORT> <FINGERPRINT> cert=<CERTIFICATE> iat-mode=0

where <PORT> is the obfs4 port, not the ORPort. (When using IPv6, <IP ADDRESS> must be in [].)


See also https://community.torproject.org/relay/setup/bridge/post-install/


Re: [tor-relays] Metrics

2023-09-07 Thread Anonforpeace via tor-relays
What is the "complete" bridge line?

Sent from Proton Mail mobile

 Original Message 
On Sep 7, 2023, 6:28 AM, telekobold wrote:

> Hi gus,
>
> On 06.09.23 21:27, gus wrote:
> > If you add just IP:ORPort (**ORPort** and not the OBFS4 Port) you have a
> > "vanilla" Tor bridge: a bridge that doesn't obfuscate your Tor traffic.
> > So it may not work in countries/ISPs doing DPI.
> > To use your own obfs4 bridge, you need to build the "complete bridge line"[1].
> >
> > cheers,
> > Gus
> > [1] https://gitlab.torproject.org/tpo/web/manual/-/issues/130
>
> thank you for the clarification! To be honest, I indeed confused "ORPort"
> and "obfs4port" for a moment.
>
> Kind regards
> telekobold


Re: [tor-relays] Metrics

2023-09-07 Thread telekobold

Hi gus,

On 06.09.23 21:27, gus wrote:


If you add just IP:ORPort (**ORPort** and not the OBFS4 Port) you have a
"vanilla" Tor bridge: a bridge that doesn't obfuscate your Tor traffic.
So it may not work in countries/ISPs doing DPI.
To use your own obfs4 bridge, you need to build the "complete bridge line"[1].

cheers,
Gus
[1] https://gitlab.torproject.org/tpo/web/manual/-/issues/130


thank you for the clarification! To be honest, I indeed confused 
"ORPort" and "obfs4port" for a moment.


Kind regards
telekobold


Re: [tor-relays] Metrics

2023-09-06 Thread gus
Hi,

On Wed, Sep 06, 2023 at 09:11:02PM +0200, telekobold wrote:
> Hi,
> 
> On 06.09.23 09:25, gus wrote:
> 
> > Have you tried to connect to your own bridge and see if it works?
> > Here is how you build your obfs4 bridge line (note: it's your bridge
> > fingerprint and not your hashed bridge fingerprint):
> > https://community.torproject.org/relay/setup/bridge/post-install/
> 
> there seems to be a mismatch between the description linked above and the
> Tor browser UI to manually add a Tor bridge: If one starts the Tor browser,
> click on "Configure Tor connections" and then on "Add a Bridge Manually"
> (seems to be the only possibility to test your own Bridge directly in the
> Tor browser), there is only the option to provide the bridge's IP address
> and the obfs4 port, but not, as mentioned in the description linked above
> the fingerprint and the obfs4 certificate. When I try to add the fingerprint
> and the obfs4 certificate of my bridges, no connection is established.

Yes, there is a mismatch in Tor Browser UI. See these tickets:

https://gitlab.torproject.org/tpo/applications/tor-browser/-/issues/40552

https://gitlab.torproject.org/tpo/applications/tor-browser/-/issues/41913

> So, where is the advantage on additionally providing the fingerprint and the
> obfs4 certificate when connecting to Tor (I can imagine that it has
> something to do with authenticity)? And how can one do that using the Tor
> software respectively the Tor browser bundle?

If you add just IP:ORPort (**ORPort** and not the OBFS4 Port) you have a
"vanilla" Tor bridge: a bridge that doesn't obfuscate your Tor traffic.
So it may not work in countries/ISPs doing DPI.
To use your own obfs4 bridge, you need to build the "complete bridge line"[1]. 

cheers,
Gus
[1] https://gitlab.torproject.org/tpo/web/manual/-/issues/130
-- 
The Tor Project
Community Team Lead




Re: [tor-relays] Metrics

2023-09-06 Thread telekobold

Hi,

On 06.09.23 09:25, gus wrote:


Have you tried to connect to your own bridge and see if it works?
Here is how you build your obfs4 bridge line (note: it's your bridge
fingerprint and not your hashed bridge fingerprint):
https://community.torproject.org/relay/setup/bridge/post-install/


there seems to be a mismatch between the description linked above and 
the Tor browser UI to manually add a Tor bridge: If one starts the Tor 
browser, click on "Configure Tor connections" and then on "Add a Bridge 
Manually" (seems to be the only possibility to test your own Bridge 
directly in the Tor browser), there is only the option to provide the 
bridge's IP address and the obfs4 port, but not, as mentioned in the 
description linked above the fingerprint and the obfs4 certificate. When 
I try to add the fingerprint and the obfs4 certificate of my bridges, no 
connection is established.


So, where is the advantage on additionally providing the fingerprint and 
the obfs4 certificate when connecting to Tor (I can imagine that it has 
something to do with authenticity)? And how can one do that using the 
Tor software respectively the Tor browser bundle?


Kind regards
telekobold


Re: [tor-relays] Metrics

2023-09-06 Thread gus
Hi,

There are some issues[1][2] with the status indicator on Metrics for bridges.

That said, I tested your bridge with bridgestrap[3], and it tells me:

Bridge ED3B1CBDEFAB89B6546B77984076969DDD19DDB7 advertises:

* obfs4: dysfunctional
  Error: timed out waiting for bridge descriptor
  Last tested: 2023-09-05 16:00:16.040172317 + UTC (15h18m32.726072356s ago)

Have you tried to connect to your own bridge and see if it works?
Here is how you build your obfs4 bridge line (note: it's your bridge
fingerprint and not your hashed bridge fingerprint):
https://community.torproject.org/relay/setup/bridge/post-install/

Which obfs4 port are you using? Can you check if it's externally reachable?
Here is how you can test it: https://bridges.torproject.org/scan/

cheers,
Gus

[1] https://gitlab.torproject.org/tpo/anti-censorship/team/-/issues/112
[2] Blocking ORPort 
https://gitlab.torproject.org/tpo/anti-censorship/team/-/issues/129
[3] 
https://bridges.torproject.org/status?id=ED3B1CBDEFAB89B6546B77984076969DDD19DDB7
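If you want to poll that bridgestrap status from a script, here is a minimal
sketch (it uses the status URL from [3]; the response is the same plain-text
report quoted above):

#!/usr/bin/env python3
# Minimal sketch: fetch the bridgestrap test result for one bridge fingerprint.
import requests

FINGERPRINT = "ED3B1CBDEFAB89B6546B77984076969DDD19DDB7"
resp = requests.get("https://bridges.torproject.org/status",
                    params={"id": FINGERPRINT}, timeout=30)
resp.raise_for_status()
print(resp.text)   # e.g. "obfs4: functional" or "dysfunctional", plus details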

On Wed, Sep 06, 2023 at 02:27:07AM +, BridgeOverStyx via tor-relays wrote:
> My bridge styxVortex is up and running. I know this because the Nyx monitor 
> shows activity. However, a search of metrics.torproject.org shows it down. It 
> has been in this state for at least a month. Do you have any suggestions of 
> what could be the possible cause of this?
> 
> I am using pfblockerng on my network, but the machine that is running Tor 
> bridge is not filtered by it. I do have a couple of TOR feed enabled in 
> pfblockerng but only incoming traffic is filtered.
> 
> I have no idea how the bridge stats are passed to metrics.torproject.org so 
> it is very challenging for me to tamp down on the cause.
> Any suggestion, at this point, will be helpful.
> 
> Sent with [Proton Mail](https://proton.me/) secure email.



-- 
The Tor Project
Community Team Lead




[tor-relays] Metrics

2023-09-06 Thread BridgeOverStyx via tor-relays
My bridge styxVortex is up and running. I know this because the Nyx monitor 
shows activity. However, a search of metrics.torproject.org shows it down. It 
has been in this state for at least a month. Do you have any suggestions of 
what could be the possible cause of this?

I am using pfBlockerNG on my network, but the machine that is running the Tor 
bridge is not filtered by it. I do have a couple of Tor feeds enabled in 
pfBlockerNG, but only incoming traffic is filtered.

I have no idea how the bridge stats are passed to metrics.torproject.org, so it 
is very challenging for me to track down the cause.
Any suggestion at this point will be helpful.

Sent with [Proton Mail](https://proton.me/) secure email.


Re: [tor-relays] Metrics falsely showing my relay as offline

2022-08-19 Thread Eddie

On 8/19/2022 12:21 AM, Georg Koppen wrote:

Eddie:
I have 1 relay (40D13096BBD11AF198CE61DEE4EAECCE5472F2E7) that 
according to the metrics is always bouncing between online and 
offline, sometimes multiple times per day.  The logs show it running 
the whole time and when the metrics also show it running, the uptime 
continues to increase correctly.  This hasn't always been the case, 
as the history shows this didn't happen prior to around June based on 
the guard flag usage.  This relay is hosted at AWS if that makes any 
difference.


What are you looking at when you say "according to the metrics"? Are 
you constantly watching relay-search or is it something else we could 
look at to figure out what is going on.


Yep, metrics.torproject.org searching on a partial nickname: OhNoAnother

Should I care about this, because the relay is running correctly. 
It's just that it never gets the stable (long running) and guard 
flags any more because of this yo-yo effect, which I'm sure is 
affecting how the relay is allocated circuits.


Right. I wonder whether some directory authorities have issues 
reaching your relay sometimes resulting in the loss of flags you are 
seeing, but I have not looked at the votes for your relay since June.


That's why I added the additional information about the AWS hosting, 
which I didn't last time, in case that had a bearing.



Georg

I do have another relay (E823B5F000835A669E902EBAE5ECCB9A324F46C9) 
that sometimes exhibits the same issues, but to a much lesser 
extent.  Like it will show offline (maybe) once a month for a very 
short period and any flags lost are quickly regained.



Re: [tor-relays] Metrics falsely showing my relay as offline

2022-08-19 Thread Georg Koppen

Eddie:
I have 1 relay (40D13096BBD11AF198CE61DEE4EAECCE5472F2E7) that according 
to the metrics is always bouncing between online and offline, sometimes 
multiple times per day.  The logs show it running the whole time and 
when the metrics also show it running, the uptime continues to increase 
correctly.  This hasn't always been the case, as the history shows this 
didn't happen prior to around June based on the guard flag usage.  This 
relay is hosted at AWS if that makes any difference.


What are you looking at when you say "according to the metrics"? Are you 
constantly watching relay-search or is it something else we could look 
at to figure out what is going on.


Should I care about this, because the relay is running correctly. It's 
just that it never gets the stable (long running) and guard flags any 
more because of this yo-yo effect, which I'm sure is affecting how the 
relay is allocated circuits.


Right. I wonder whether some directory authorities have issues reaching 
your relay sometimes resulting in the loss of flags you are seeing, but 
I have not looked at the votes for your relay since June.


Georg

I do have another relay (E823B5F000835A669E902EBAE5ECCB9A324F46C9) that 
sometimes exhibits the same issues, but to a much lesser extent.  Like 
it will show offline (maybe) once a month for a very short period and 
any flags lost are quickly regained.



[tor-relays] Metrics falsely showing my relay as offline

2022-08-18 Thread Eddie
I have 1 relay (40D13096BBD11AF198CE61DEE4EAECCE5472F2E7) that according 
to the metrics is always bouncing between online and offline, sometimes 
multiple times per day.  The logs show it running the whole time and 
when the metrics also show it running, the uptime continues to increase 
correctly.  This hasn't always been the case, as the history shows this 
didn't happen prior to around June based on the guard flag usage.  This 
relay is hosted at AWS if that makes any difference.


Should I care about this, because the relay is running correctly. It's 
just that it never gets the stable (long running) and guard flags any 
more because of this yo-yo effect, which I'm sure is affecting how the 
relay is allocated circuits.


I do have another relay (E823B5F000835A669E902EBAE5ECCB9A324F46C9) that 
sometimes exhibits the same issues, but to a much lesser extent.  Like 
it will show offline (maybe) once a month for a very short period and 
any flags lost are quickly regained.



Re: [tor-relays] Metrics shows my relay down. But it's not.

2022-06-25 Thread lists
On Freitag, 24. Juni 2022 21:11:30 CEST Eddie wrote:
> The metrics is showing one of my relays
> (40D13096BBD11AF198CE61DEE4EAECCE5472F2E7) as down for around the last 3
> hours.  Logging in to it, I see everything running normally.
> 
> This server has also lost a bunch of flags for no apparent reason, so
> I'm not sure if they're connected.

Yeah, it's unfortunately normal; my relays¹ have been changing from red to green 
for weeks. ;-)
That's because the Tor network and also the dir auths² are under DDoS. We just 
talked about it in the meeting; see the meeting pad³.

¹https://metrics.torproject.org/rs.html#search/ForPrivacyNET
²https://gitlab.torproject.org/tpo/core/tor/-/issues/40622
³https://pad.riseup.net/p/tor-relay-meetup-june-2022-keep

Our XMR .onion (hidden service) nodes look bad too:
http://xmrguide25ibknxgaray5rqksrclddxqku3ggdcnzg4ogdi5qkdkd2yd.onion/remote_nodes

-- 
╰_╯ Ciao Marco!

Debian GNU/Linux

It's free software and it gives you freedom!



[tor-relays] Metrics shows my relay down. But it's not.

2022-06-25 Thread Eddie
The metrics is showing one of my relays 
(40D13096BBD11AF198CE61DEE4EAECCE5472F2E7) as down for around the last 3 
hours.  Logging in to it, I see everything running normally.


This server has also lost a bunch of flags for no apparent reason, so 
I'm not sure if they're connected.


Cheers.


[tor-relays] metrics: add `IPv6 Exit Address` in addition to `Exit Address`

2021-11-09 Thread s7r

Hello,

Currently we have Exit Address for relays that use a different IPv4 
address for `Exit` connections vs their ORPort IPv4 address.


Similarly, one relay *could* listen on a certain IPv6 address ORPort but 
use a different IPv6 address for v6 exiting. Wouldn't it be useful to 
add this data too?


This would tell us how many relays use one static IPv6 address for 
ORPort connections and temporary IPv6 addresses for exiting (using IPv6 
privacy extensions).


By looking at https://metrics.torproject.org/relays-ipv6.html one could 
assume that almost 50% of the IPv6 enabled relays (IPv6 ORPort) are exits.




Re: [tor-relays] metrics

2021-02-21 Thread David Goulet
On 20 Feb (11:52:33), Manager wrote:
>Hello,
> 
>im trying to enable prometheus metrics, and... something goes wrong:
> 
>torrc:
>MetricsPort 9166
>MetricsPortPolicy accept *
> 
>after tor restart in logs:
>Tor[15368]: Opening Metrics listener on 127.0.0.1:9166
>Tor[15368]: Could not bind to 127.0.0.1:9166: Address already in use. Is
>Tor already running?
> 
>-- before restart, no one listen on this port, as `ss | grep :9166` can
>say.
> 
>there is also backtrace in logs:
>Tor[15368]: connection_finished_flushing(): Bug: got unexpected conn type
>20. (on Tor 0.4.5.6 )
>Tor[15368]: tor_bug_occurred_(): Bug:
>../src/core/mainloop/connection.c:5192: connection_finished_flushing: This
>line should not have been reached. (Future instances of this warning will
>be silenced.) (on Tor 0.4.5.6 )

This was reported 3 days ago:

https://gitlab.torproject.org/tpo/core/tor/-/issues/40295

We pushed a fix upstream, and it will be in the next tor stable release,
0.4.5.7. As for the timeline of that release, it's unclear, but I will make a
point to the network team to make it sooner than usual because this problem is
effectively making the MetricsPort unusable :S.

Sorry about this. Thanks for the report!!!

David
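For anyone setting this up once the fix is out: with MetricsPort bound
correctly, the port speaks plain HTTP in Prometheus text format. A minimal
scrape sketch, assuming the torrc above and that the metrics are exposed under
the /metrics path:

#!/usr/bin/env python3
# Minimal sketch: scrape tor's MetricsPort (torrc: MetricsPort 9166,
# MetricsPortPolicy accept *). Assumes the metrics are served at /metrics.
import urllib.request

with urllib.request.urlopen("http://127.0.0.1:9166/metrics", timeout=10) as resp:
    for line in resp.read().decode().splitlines():
        if not line.startswith("#"):   # skip HELP/TYPE comment lines
            print(line)                # metric lines in Prometheus text format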




[tor-relays] metrics

2021-02-20 Thread Manager

  
  
Hello,

I'm trying to enable Prometheus metrics, and something goes wrong.

torrc:
MetricsPort 9166
MetricsPortPolicy accept *

after tor restart, in the logs:
Tor[15368]: Opening Metrics listener on 127.0.0.1:9166
Tor[15368]: Could not bind to 127.0.0.1:9166: Address already in use. Is Tor already running?

-- before the restart, nothing was listening on this port, as `ss | grep :9166` shows.

there is also a backtrace in the logs:
  Tor[15368]: connection_finished_flushing(): Bug: got unexpected
  conn type 20. (on Tor 0.4.5.6 )
  Tor[15368]: tor_bug_occurred_(): Bug:
  ../src/core/mainloop/connection.c:5192:
  connection_finished_flushing: This line should not have been
  reached. (Future instances of this warning will be silenced.) (on
  Tor 0.4.5.6 )
  Tor[15368]: Bug: Tor 0.4.5.6: Line unexpectedly reached at
  connection_finished_flushing at
  ../src/core/mainloop/connection.c:5192. Stack trace: (on Tor
  0.4.5.6 )
  Tor[15368]: Bug: /usr/bin/tor(log_backtrace_impl+0x58)
  [0x557b52648428] (on Tor 0.4.5.6 )
  Tor[15368]: Bug: /usr/bin/tor(tor_bug_occurred_+0x16a)
  [0x557b526537aa] (on Tor 0.4.5.6 )
  Tor[15368]: Bug: /usr/bin/tor(+0x189355) [0x557b526e3355] (on
  Tor 0.4.5.6 )
  Tor[15368]: Bug: /usr/bin/tor(connection_handle_write+0x4c8)
  [0x557b526ed3b8] (on Tor 0.4.5.6 )
  Tor[15368]: Bug: /usr/bin/tor(+0x69a2e) [0x557b525c3a2e] (on
  Tor 0.4.5.6 )
  Tor[15368]: Bug:
  /usr/lib/x86_64-linux-gnu/libevent-2.0.so.5(event_base_loop+0x6a0)
  [0x7ff21468a5a0] (on Tor 0.4.5.6 )
  Tor[15368]: Bug: /usr/bin/tor(do_main_loop+0x105)
  [0x557b525c4f15] (on Tor 0.4.5.6 )
  Tor[15368]: Bug: /usr/bin/tor(tor_run_main+0x9bd)
  [0x557b525bf41d] (on Tor 0.4.5.6 )
  Tor[15368]: Bug: /usr/bin/tor(tor_main+0x3a) [0x557b525bd22a]
  (on Tor 0.4.5.6 )
  Tor[15368]: Bug: /usr/bin/tor(main+0x19) [0x557b525bcda9] (on
  Tor 0.4.5.6 )
  Tor[15368]: Bug:
  /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf1)
  [0x7ff212ede2e1] (on Tor 0.4.5.6 )
  Tor[15368]: Bug: /usr/bin/tor(_start+0x2a) [0x557b525bcdfa]
  (on Tor 0.4.5.6 )
  Tor[15368]: conn_write_callback(): Bug: unhandled error on write
  for Metrics connection (fd 2496); removing (on Tor 0.4.5.6 )
  Tor[15368]: tor_bug_occurred_(): Bug:
  ../src/core/mainloop/mainloop.c:932: conn_write_callback: This
  line should not have been reached. (Future instances of this
  warning will be silenced.) (on Tor 0.4.5.6 )
  Tor[15368]: Bug: Tor 0.4.5.6: Line unexpectedly reached at
  conn_write_callback at ../src/core/mainloop/mainloop.c:932. Stack
  trace: (on Tor 0.4.5.6 )
  Tor[15368]: Bug: /usr/bin/tor(log_backtrace_impl+0x58)
  [0x557b52648428] (on Tor 0.4.5.6 )
  Tor[15368]: Bug: /usr/bin/tor(tor_bug_occurred_+0x16a)
  [0x557b526537aa] (on Tor 0.4.5.6 )
  Tor[15368]: Bug: /usr/bin/tor(+0x69b9f) [0x557b525c3b9f] (on
  Tor 0.4.5.6 )
  Tor[15368]: Bug:
  /usr/lib/x86_64-linux-gnu/libevent-2.0.so.5(event_base_loop+0x6a0)
  [0x7ff21468a5a0] (on Tor 0.4.5.6 )
  Tor[15368]: Bug: /usr/bin/tor(do_main_loop+0x105)
  [0x557b525c4f15] (on Tor 0.4.5.6 )
  Tor[15368]: Bug: /usr/bin/tor(tor_run_main+0x9bd)
  [0x557b525bf41d] (on Tor 0.4.5.6 )
  Tor[15368]: Bug: /usr/bin/tor(tor_main+0x3a) [0x557b525bd22a]
  (on Tor 0.4.5.6 )
  Tor[15368]: Bug: /usr/bin/tor(main+0x19) [0x557b525bcda9] (on
  Tor 0.4.5.6 )
  Tor[15368]: Bug:
  /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xf1)
  [0x7ff212ede2e1] (on Tor 0.4.5.6 )
  Tor[15368]: Bug: /usr/bin/tor(_start+0x2a) [0x557b525bcdfa]
  (on Tor 0.4.5.6 )
  
  

  



Re: [tor-relays] Metrics Error: staledesc

2021-01-28 Thread Roger Dingledine
On Thu, Jan 28, 2021 at 07:00:45PM +0100, li...@for-privacy.net wrote:
> Metrics showed my relay offline. But my Tor daemon is running normally.
> Then I saw _many_ relays suddenly have flag: staledesc
> ?
> 
> https://metrics.torproject.org/rs.html#search/flag:staledesc

Yep. The reason that happens is that the directory authorities are
receiving too many dirport connections from exit relays, but the exit
relays use a dirport connection to post their own descriptor.

So if we don't handle all of the dirport attempts, then we end up not
receiving some of the descriptor publish attempts.

I'm thinking that this part will still work out though, for two reasons.

One is that if *any* of the dir auths receive the descriptor, then they
will mention it in their next vote, and the other dir auths will learn
about it from that vote and ask for a copy.

And two is that relays watch to see if they are still listed in the
consensus, and if they're not then they try more often to upload a
new descriptor.

So yes, we are making an effort to make sure there is at least one dir
auth that will be good at receiving descriptor publishes.

Some small fraction of relays are expected to get the StaleDesc flag in
normal network operation, because there is an unfortunate interaction
between how relays publish a new descriptor "every 18 hours or when
something important changes", but dir auths ignore new descriptors if
they are too close in time or other characteristics to one that they
already have. So for example there is a known bad interaction where you
restart your relay, and the relay publishes a new descriptor because
it doesn't know that it just published one earlier, but then the dir
auths discard that new descriptor because they already have the old one,
and then your relay waits 18 hours to create a new one.

For much more backstory, see
https://gitlab.torproject.org/tpo/core/tor/-/issues/1810
https://gitlab.torproject.org/tpo/core/tor/-/issues/2479
https://gitlab.torproject.org/tpo/core/tor/-/issues/3327
https://gitweb.torproject.org/torspec.git/tree/proposals/293-know-when-to-publish.txt

But I guess the other way to look at it is: the StaleDesc flag is a
*feature*, to let your relay know that it has fallen into this edge case
so it can take steps to recover.

> https://metrics.torproject.org/rs.html#details/5D84900DBE6D6365684A9675B81A68ACE9577A68

This relay looks genuinely down.

--Roger



Re: [tor-relays] Metrics Error: staledesc

2021-01-28 Thread niftybunny
“Normal”: half of my relays are also shown as offline, and 1/3 of my relays are 
not even known by https://consensus-health.torproject.org/consensus-health.html 
right now.

Just wait a little bit until everything is back to normal.

Whatever the new normal is nowadays.

> On 28. Jan 2021, at 19:00, li...@for-privacy.net wrote:
> 
> Metrics showed my relay offline. But my Tor daemon is running normally.
> Then I saw _many_ relays suddenly have flag: staledesc
> ?
> 
> https://metrics.torproject.org/rs.html#search/flag:staledesc
> https://metrics.torproject.org/rs.html#details/5D84900DBE6D6365684A9675B81A68ACE9577A68
> 
> 
> --
> ╰_╯ Ciao Marco!
> 
> Debian GNU/Linux
> 
> It's free software and it gives you freedom!


[tor-relays] Metrics Error: staledesc

2021-01-28 Thread lists

Metrics showed my relay offline. But my Tor daemon is running normally.
Then I saw _many_ relays suddenly have flag: staledesc
?

https://metrics.torproject.org/rs.html#search/flag:staledesc
https://metrics.torproject.org/rs.html#details/5D84900DBE6D6365684A9675B81A68ACE9577A68


--
╰_╯ Ciao Marco!

Debian GNU/Linux

It's free software and it gives you freedom!


[tor-relays] Metrics site down

2020-05-21 Thread mnlph74
As of this writing, I noticed that the metrics site is currently down. Can 
someone confirm? Thanks
https://metrics.torproject.org/

Sent with ProtonMail Secure Email.



Re: [tor-relays] [metrics-team] New Fallbacks from June 2019

2019-08-04 Thread teor
> On 5 Aug 2019, at 03:28, Toralf Förster  wrote:
> 
>> On 7/2/19 1:33 PM, teor wrote:
>> Dear Relay Operators,
>> 
>> The FallbackDir flags on Consensus Health [2] and Relay Search [3]
>> might take a week or two to update.
>> 
>> [3]: For example, this relay is a new fallback, but its flag isn't
>> shown yet:
>> https://metrics.torproject.org/rs.html#details/1211AC1BBB8A1AF7CBA86BCE8689AA3146B86423
> 
> I'm just curious b/c that relay doesn't show the FallbackDir-Flag even after 
> a month.

We had an in-person meeting early July, so maybe people forgot to do the 
updates.

I opened a ticket for relay search:
https://trac.torproject.org/projects/tor/ticket/31332

Stem also hasn't updated yet, and Stem's list is used for consensus-health:
https://trac.torproject.org/projects/tor/ticket/31315

T


Re: [tor-relays] Metrics show Relay down

2019-01-03 Thread teor

> On 26 Dec 2018, at 07:11, Darek Kramin  wrote:
> 
> Looks problem is solved. Online now
> 
> On Tue, Dec 25, 2018, 21:13 Darek Kramin wrote:
> hi,
> 
> I did started 2 days ago tor relay. when I set daily accounting was ok and 
> now with weekly set of GB relay is listed down. It is a glitch or my 
> misconfiguration.
> Relay is named dasBoot at IP 46.175.238.8

Accounting causes your relay to hibernate at a random time each interval.

If you don't want that to happen:
1. set the accounting limit very high
2. wait one interval for Tor to get good bandwidth estimates

T





Re: [tor-relays] Metrics show Relay down

2018-12-25 Thread Darek Kramin
Looks like the problem is solved. Online now

On Tue, Dec 25, 2018, 21:13 Darek Kramin wrote:
> hi,
>
> I did started 2 days ago tor relay. when I set daily accounting was ok and
> now with weekly set of GB relay is listed down. It is a glitch or my
> misconfiguration.
> Relay is named dasBoot at IP 46.175.238.8
>
> brgds
> Darek
>
> --
>
> Cpt D.Kramin
> +48 505 135145
>


[tor-relays] Metrics show Relay down

2018-12-25 Thread Darek Kramin
hi,

I started a Tor relay 2 days ago. When I set daily accounting it was OK, and
now with a weekly GB limit the relay is listed as down. Is it a glitch or my
misconfiguration?
The relay is named dasBoot at IP 46.175.238.8

brgds
darek
-- 

Cpt D.Kramin
+48 505 135145


Re: [tor-relays] Metrics not functioning?

2018-04-14 Thread Matthew Finkel
On Sat, Apr 14, 2018 at 10:30:54AM +1000, teor wrote:
> 
> > On 14 Apr 2018, at 09:48, Matthew Glennon  wrote:
> > 
> > Are the right people aware that Metrics has been like this for most of the 
> > day? No matter what relay you look for it claims old data.
> > https://metrics.torproject.org/rs.html#details/9695DFC35FFEB861329B9F1AB04C46397020CE31
> 
> Yes, but it's their weekend, so it might take a while.

And that issue is resolved now (but I didn't do it).


Re: [tor-relays] Metrics not functioning?

2018-04-13 Thread teor

> On 14 Apr 2018, at 09:48, Matthew Glennon  wrote:
> 
> Are the right people aware that Metrics has been like this for most of the 
> day? No matter what relay you look for it claims old data.
> https://metrics.torproject.org/rs.html#details/9695DFC35FFEB861329B9F1AB04C46397020CE31

Yes, but it's their weekend, so it might take a while.

T


[tor-relays] Metrics not functioning?

2018-04-13 Thread Matthew Glennon
Are the right people aware that Metrics has been like this for most of the
day? No matter what relay you look for it claims old data.
https://metrics.torproject.org/rs.html#details/9695DFC35FFEB861329B9F1AB04C46397020CE31
-- 
Matthew Glennon
matthew@glennon.online
PGP Signing Available Upon Request
https://keybase.io/crazysane


Re: [tor-relays] [metrics-team] Atlas is now Relay Search!

2017-11-14 Thread David Goulet
On 14 Nov (13:24:21), Iain R. Learmonth wrote:
> Hi David,
> 
> On 14/11/17 13:01, David Goulet wrote:
> > Quick question for you. Atlas used to have the search box at all time in the
> > corner which for me was very useful because I could do many search without 
> > an
> > extra click to go back one level down like the new site has.
> > 
> > How crazy would it be to bring it back? Always hovering in the top corner? 
> > :)
> > Maybe a ticket would be a better way to ask?
> 
> Please do file a ticket.

Cheers!

https://trac.torproject.org/projects/tor/ticket/24274

> 
> Thanks,
> Iain.
> 


-- 
MxBkRXCYwsjs9XYQ2CdV6AR4pWxGtfzRvkWje9ebIvM=




Re: [tor-relays] [metrics-team] Atlas is now Relay Search!

2017-11-14 Thread Roman Mamedov
On Tue, 14 Nov 2017 13:22:00 +
nusenu  wrote:

> > Quick question for you. Atlas used to have the search box at all time in the
> > corner which for me was very useful because I could do many search without 
> > an
> > extra click
> 
> +1

Here's another variation on the Atlas theme that I found some time ago:
https://onionite.now.sh/
it still has the search box.

-- 
With respect,
Roman


Re: [tor-relays] [metrics-team] Atlas is now Relay Search!

2017-11-14 Thread Iain R. Learmonth
Hi David,

On 14/11/17 13:01, David Goulet wrote:
> Quick question for you. Atlas used to have the search box at all time in the
> corner which for me was very useful because I could do many search without an
> extra click to go back one level down like the new site has.
> 
> How crazy would it be to bring it back? Always hovering in the top corner? :)
> Maybe a ticket would be a better way to ask?

Please do file a ticket.

Thanks,
Iain.






Re: [tor-relays] [metrics-team] Atlas is now Relay Search!

2017-11-14 Thread nusenu


David Goulet:
> Quick question for you. Atlas used to have the search box at all time in the
> corner which for me was very useful because I could do many search without an
> extra click

+1

-- 
https://mastodon.social/@nusenu
twitter: @nusenu_





Re: [tor-relays] [metrics-team] Atlas is now Relay Search!

2017-11-14 Thread David Goulet
On 14 Nov (12:52:27), Iain R. Learmonth wrote:
> Hi All,
> 
> You may notice that Atlas has a new look, and is no longer called Atlas.
> For now no URLs have changed but this is part of work to merge this tool
> into the Tor Metrics website.

Hi Iain!

Great stuff! Thanks for this, and for letting us know; I would have been very
confused, as I use Atlas all the time :).

Quick question for you. Atlas used to have the search box in the corner at all
times, which for me was very useful because I could do many searches without
the extra click to go back one level that the new site requires.

How crazy would it be to bring it back? Always hovering in the top corner? :)
Maybe a ticket would be a better way to ask?

Big thanks!
David

> 
> The style is determined by the Tor Metrics Style, and modifications have
> been made to fit this.
> 
> The decision was made to deploy these changes before the actual
> integration into metrics.torproject.org to allow for other issues to be
> worked on. This was a big change and it was tricky maintaining two
> branches of the codebase.
> 
> Issues should still be reported on the Metrics/Atlas component in the
> Tor trac if they arise. When we come to full integration, URLs will
> change but there will be a period where we maintain redirects to prevent
> any URLs from breaking while waiting for being updated.
> 
> Thanks,
> Iain.
> 






-- 
M72MGWsMq9KJ+hYLXg8sXrwfexA4QUqnNwWVOMxVBvM=




Re: [tor-relays] Metrics for assessing EFF's Tor relay challenge?

2014-04-08 Thread Kostas Jakeliunas
On Wed, Apr 9, 2014 at 4:18 AM, Kostas Jakeliunas wrote:

> On Wed, Apr 9, 2014 at 4:06 AM, Lukas Erlacher  wrote:
>
>> Hi Kostas,
>>
>> right now, we're coding challenger against what exists in debian wheezy,
>> which means version 0.1.2 of the requests lib using the python-requests
>> package you mentioned, where response.json is correct, and not
>> response.json() to get json content from the response.
>>
>> I'd recommend that if you want to make your own "grab stuff from onionoo"
>> script suite, to work with onion-py[1] . It's very new, very spiffy and
>> uses python 3 and the newest requests lib. (full disclosure: It's my baby
>> and I'm desperately looking for testers/users, but that should be obvious
>> to anyone who read this thread.)
>> Alternatively, convince the right people (presumably Karsten and arma)
>> that challenger should switch to a more sustainable runtime than "what we
>> can get from wheezy's repositories". ;-)
>>
>
> A-ha! :) That makes sense. (fwiw, i used pip under virtualenv in wheezy;
> requests lib version ancient indeed; such is life. fwiw, convincing wheezy
> cavepeople to use what you suggest makes sense. It's a false dichotomy
> between 'ensures dependences vs. breaks dependencies.')
>
> So
>
>   - the timeout stuff might be useful to everyone involved; it's rough
>   - the 'fix' might be useful for people using old 'requests'
>

Actually, I might have that one kind of backwards. So timeout stuff for
everyone (who wants to use things from the
'luk3duk3-onionoo-integration'[2] branch), the 'fix' for *certain* people
(for example, for those using pip.)


>- your onion-py sounds nice
>
> g'day
>
>
>> Cheers,
>> Luke
>>
>> [1] https://github.com/duk3luk3/onion-py
>
>
[2]:
https://github.com/kloesing/challenger/commits/luk3duk3-onionoo-integration


Re: [tor-relays] Metrics for assessing EFF's Tor relay challenge?

2014-04-08 Thread Kostas Jakeliunas
On Wed, Apr 9, 2014 at 4:06 AM, Lukas Erlacher  wrote:

> Hi Kostas,
>
> right now, we're coding challenger against what exists in debian wheezy,
> which means version 0.1.2 of the requests lib using the python-requests
> package you mentioned, where response.json is correct, and not
> response.json() to get json content from the response.
>
> I'd recommend that if you want to make your own "grab stuff from onionoo"
> script suite, to work with onion-py[1] . It's very new, very spiffy and
> uses python 3 and the newest requests lib. (full disclosure: It's my baby
> and I'm desperately looking for testers/users, but that should be obvious
> to anyone who read this thread.)
> Alternatively, convince the right people (presumably Karsten and arma)
> that challenger should switch to a more sustainable runtime than "what we
> can get from wheezy's repositories". ;-)
>

A-ha! :) That makes sense. (fwiw, i used pip under virtualenv in wheezy;
requests lib version ancient indeed; such is life. fwiw, convincing wheezy
cavepeople to use what you suggest makes sense. It's a false dichotomy
between 'ensures dependences vs. breaks dependencies.')

So

  - the timeout stuff might be useful to everyone involved; it's rough
  - the 'fix' might be useful for people using old 'requests'
  - your onion-py sounds nice

g'day


> Cheers,
> Luke
>
> [1] https://github.com/duk3luk3/onion-py
>


Re: [tor-relays] Metrics for assessing EFF's Tor relay challenge?

2014-04-08 Thread Lukas Erlacher
Hi Kostas,

right now, we're coding challenger against what exists in debian wheezy, which 
means version 0.1.2 of the requests lib using the python-requests package you 
mentioned, where response.json is correct, and not response.json() to get json 
content from the response.
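A tiny sketch of what that difference looks like in practice (the callable()
check is just one way to paper over both requests versions; the Onionoo query
itself is only there as an example):

import requests

resp = requests.get("https://onionoo.torproject.org/details",
                    params={"limit": 1}, timeout=30)
# requests 0.x: resp.json is a property holding the decoded document.
# requests >= 1.0: resp.json() is a method.
data = resp.json() if callable(resp.json) else resp.json
print(type(data))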

I'd recommend that if you want to make your own "grab stuff from onionoo" 
script suite, to work with onion-py[1] . It's very new, very spiffy and uses 
python 3 and the newest requests lib. (full disclosure: It's my baby and I'm 
desperately looking for testers/users, but that should be obvious to anyone who 
read this thread.)
Alternatively, convince the right people (presumably Karsten and arma) that 
challenger should switch to a more sustainable runtime than "what we can get 
from wheezy's repositories". ;-)

Cheers,
Luke

[1] https://github.com/duk3luk3/onion-py





Re: [tor-relays] Metrics for assessing EFF's Tor relay challenge?

2014-04-08 Thread Kostas Jakeliunas
On Tue, Apr 8, 2014 at 12:59 PM, Karsten Loesing wrote:

> On 05/04/14 17:46, Lukas Erlacher wrote:
> > Hello Nikita, Karsten,
> >
> > On 04/05/2014 05:03 PM, Nikita Borisov wrote:
> >> On Sat, Apr 5, 2014 at 3:58 PM, Karsten Loesing
> >>  wrote:
> >>> Installing packages using Python-specific package managers is
> >>> going to make our sysadmins sad, so we should have a very good
> >>> reason for wanting such a package.  In general, we don't need
> >>> the latest and greatest package.  Unless we do.
> >> What about virtualenv? Part of the premise behind it is that you
> >> can configure appropriate packages as a developer / operator
> >> without having to bother sysadmins and making them worried about
> >> system-wide effects.
> >>
> >> - Nikita
> >
> > I was going to mention virtualenv as well, but I have to admit that
> > I find it weird and scary, especially since I haven't found good
> > documentation for it. If there is somebody who is familiar with
> > virtualenv that would probably be the best solution.
>
> I'm afraid I don't know enough about Python or virtualenv.  So far, it
> was almost zero effort for our sysadmins to install a package from the
> repositories and keep that up-to-date.  I'd like to stick with the
> apt-get approach and save the virtualenv approach for situations when
> we really need a package that is not contained in the repositories.
>
> Thanks for the suggestion, though!
>
> > On 04/05/2014 04:58 PM, Karsten Loesing wrote:
> >> My hope with challenger is that it's written quickly, working
> >> quietly for a year, and then disappearing without anybody
> >> noticing.  I'd rather not want to maintain yet another thing.
> >> So, maybe Weather is a better candidate for using onion-py than
> >> challenger.
> >
> > Yes, I understand.
> >
> >> Yeah, I think we'll want to define a maximum lifetime of cache
> >> entries, or the poor cache will explode pretty soon.
> >
> > What usage patterns do we have to expect? Do we want to hit onionoo
> > to check if the cache is still valid for every request, or should
> > we do "hard caching" for several minutes? The best UX solution
> > would be to have a background task that keeps the cache current so
> > user requests can be delivered without hitting onionoo at all.
>
> That's a fine question.  I can see various caching approaches here.
> But I just realize that this is premature optimization.  Let's first
> build the thing and download whatever we need and whenever we need it.
>  And once we know what caching needs we have, let's build the cache.
>
> > In other words, unless we do something intelligent with the cache,
> > the cache is not actually going to be very useful.
>
> Valid point. :)
>
> >> Great, your help would be much appreciated!  Want to send me a
> >> pull request whenever you have something to merge?
> >
> > Will do.
>
> Great.  Thanks!
>

Hi Karsten and others,

I got to run the challenger script by chance[1], and spotted a small
mistake that was preventing Lukas' onion.py downloader code from working.
Ended up forking and creating a separate branch:

https://github.com/wfn/challenger/commits/wfn_fix_luk3s_download

Relevant commits:

  - 38d88bcb1136f97881f81152d3d883c4e9480188[2] (enables downloader)
  - 39c800643c040474402fc62d2a2db75c25889dfc[3] (this is the one with the
small thingie-fix)

(It was a very small thing with the way the 'requests' module
handles/provides json documents.)

I was doing this to be able to give Roger the 'combined-*.json' files for
currently vulnerable (re: openssl) relays (he wanted to see which part of
the combined weight fraction they comprise, etc.)

Fingerprints for those relays are here, fwiw:
http://ravinesmp.com/volatile/challenger-stuff/vuln_fingerprints.txt (the
original link that Roger gave me was http://fpaste.org/92688/ )
(count: 1024.)

If you download these fingerprints, you can just run `python challenge.py
-f vuln_fingerprints.txt`

(for anyone using virtualenv, you might need to `pip install requests`, and
then things should work. For anyone who's just cloned the thing, everything
should probably work after simply installing the 'requests' python module,
if it's not there. I see that 'python-requests' is available in the repos.)

I guess the code hasn't been tested with that many fingerprints
before. Good news: it works (where 'works' means 'I opened the resulting
files and they contained all those fingerprints, and/or they contained lots
of numbers.') Kinda-bad news: Onionoo doesn't seem to share the enthusiasm;
it hiccups and spits 502 Proxy Error some time after the lookups for the
first document (combined bandwidth) are made.

My cheap quick hack was to insert time.sleep() here and there:

  - 7425ef6fc00dedf3b2b7f2649e832fb4c93909ae[4]

(cheap hack is cheap, but it worked. Note: takes time to download
everything. Didn't time it yet - sorry.)
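Roughly, the hack amounts to something like this minimal sketch (the pacing
value is hypothetical; the actual change is in [4]):

import time
import requests

ONIONOO = "https://onionoo.torproject.org"

def fetch_documents(fingerprints, doc="bandwidth", pause=2.0):
    # Pace the Onionoo lookups so the service doesn't answer with
    # 502 Proxy Error on large fingerprint lists.
    results = {}
    for fp in fingerprints:
        resp = requests.get("%s/%s" % (ONIONOO, doc),
                            params={"lookup": fp}, timeout=60)
        resp.raise_for_status()
        results[fp] = resp.json()
        time.sleep(pause)
    return results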

For anyone interested, these are the resulting 'combined-*.json' files from
all those fingerprints:

  -
http://ravinesmp.com

Re: [tor-relays] Metrics for assessing EFF's Tor relay challenge?

2014-04-08 Thread Karsten Loesing
On 05/04/14 17:46, Lukas Erlacher wrote:
> Hello Nikita, Karsten,
> 
> On 04/05/2014 05:03 PM, Nikita Borisov wrote:
>> On Sat, Apr 5, 2014 at 3:58 PM, Karsten Loesing
>>  wrote:
>>> Installing packages using Python-specific package managers is
>>> going to make our sysadmins sad, so we should have a very good
>>> reason for wanting such a package.  In general, we don't need
>>> the latest and greatest package.  Unless we do.
>> What about virtualenv? Part of the premise behind it is that you
>> can configure appropriate packages as a developer / operator
>> without having to bother sysadmins and making them worried about
>> system-wide effects.
>> 
>> - Nikita
> 
> I was going to mention virtualenv as well, but I have to admit that
> I find it weird and scary, especially since I haven't found good
> documentation for it. If there is somebody who is familiar with
> virtualenv that would probably be the best solution.

I'm afraid I don't know enough about Python or virtualenv.  So far, it
was almost zero effort for our sysadmins to install a package from the
repositories and keep that up-to-date.  I'd like to stick with the
apt-get approach and save the virtualenv approach for situations when
we really need a package that is not contained in the repositories.

Thanks for the suggestion, though!

> On 04/05/2014 04:58 PM, Karsten Loesing wrote:
>> My hope with challenger is that it's written quickly, working
>> quietly for a year, and then disappearing without anybody
>> noticing.  I'd rather not want to maintain yet another thing.
>> So, maybe Weather is a better candidate for using onion-py than
>> challenger.
> 
> Yes, I understand.
> 
>> Yeah, I think we'll want to define a maximum lifetime of cache 
>> entries, or the poor cache will explode pretty soon.
> 
> What usage patterns do we have to expect? Do we want to hit onionoo
> to check if the cache is still valid for every request, or should
> we do "hard caching" for several minutes? The best UX solution
> would be to have a background task that keeps the cache current so
> user requests can be delivered without hitting onionoo at all.

That's a fine question.  I can see various caching approaches here.
But I just realize that this is premature optimization.  Let's first
build the thing and download whatever we need and whenever we need it.
 And once we know what caching needs we have, let's build the cache.

> In other words, unless we do something intelligent with the cache,
> the cache is not actually going to be very useful.

Valid point. :)

>> Great, your help would be much appreciated!  Want to send me a
>> pull request whenever you have something to merge?
> 
> Will do.

Great.  Thanks!

All the best,
Karsten



Re: [tor-relays] Metrics for assessing EFF's Tor relay challenge?

2014-04-07 Thread Christian
On 07.04.2014 10:43, Karsten Loesing wrote:
> On 06/04/14 21:29, Christian wrote:
>> On 04.04.2014 19:13, Karsten Loesing wrote:
>>> Christian, Lukas, everyone,
>>>
>>> I learned today that we should have something working in a week or two.
>>>  That's why I started hacking on this today and produced some code:
>>>
>>> https://github.com/kloesing/challenger
>>>
>>> Here are a few things I could use help with:
>>>
>>>  - Anybody want to help turning this script into a web app, possibly
>>> using Flask?  See the first next step in README.md.
>>>
>>>  - Lukas, you announced OnionPy on tor-dev@ the other day.  Want to look
>>> into the "Add local cache for ..." bullet points under "Next steps"?  Is
>>> this something OnionPy could support?  Want to write the glue code?
>>>
>>>  - Christian, want to help write the graphing code that visualizes the
>>> `combined-*.json` files produced by that tool?  The README.md suggests a
>>> few possible graphs.
>>>
>>
>> Sure,
>> should I create a new repo for the website with graphing code or work
>> directly in the kloesing/challenger repository?
> 
> My hope is that we can turn my script into a Flask web app which serves
> JSON data which is then graphed by your JavaScript that is embedded into
> the HTML.  So it probably makes sense to have everything in a single
> repository.  I'd say feel free to clone kloesing/challenger and send me
> pull requests.  And feel free to create new directories as needed, we
> can still move around things later.
> 

I send you a pull request with the first working version:
https://github.com/kloesing/challenger/pull/2 .
The ui is temporary but it works so far.

> All the best,
> Karsten
> 



Re: [tor-relays] Metrics for assessing EFF's Tor relay challenge?

2014-04-07 Thread Karsten Loesing
On 06/04/14 21:29, Christian wrote:
> On 04.04.2014 19:13, Karsten Loesing wrote:
>> Christian, Lukas, everyone,
>>
>> I learned today that we should have something working in a week or two.
>>  That's why I started hacking on this today and produced some code:
>>
>> https://github.com/kloesing/challenger
>>
>> Here are a few things I could use help with:
>>
>>  - Anybody want to help turning this script into a web app, possibly
>> using Flask?  See the first next step in README.md.
>>
>>  - Lukas, you announced OnionPy on tor-dev@ the other day.  Want to look
>> into the "Add local cache for ..." bullet points under "Next steps"?  Is
>> this something OnionPy could support?  Want to write the glue code?
>>
>>  - Christian, want to help write the graphing code that visualizes the
>> `combined-*.json` files produced by that tool?  The README.md suggests a
>> few possible graphs.
>>
> 
> Sure,
> should I create a new repo for the website with graphing code or work
> directly in the kloesing/challenger repository?

My hope is that we can turn my script into a Flask web app which serves
JSON data which is then graphed by your JavaScript that is embedded into
the HTML.  So it probably makes sense to have everything in a single
repository.  I'd say feel free to clone kloesing/challenger and send me
pull requests.  And feel free to create new directories as needed, we
can still move around things later.
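For illustration, a minimal sketch of that shape (the file name and route are
placeholders, not what challenger actually ships):

from flask import Flask, jsonify
import json

app = Flask(__name__)

@app.route("/combined-bandwidth.json")
def combined_bandwidth():
    # Serve a combined-*.json document produced by the script so the
    # JavaScript front end can fetch and graph it.
    with open("combined-bandwidth.json") as f:
        return jsonify(json.load(f))

if __name__ == "__main__":
    app.run(port=5000)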

All the best,
Karsten



Re: [tor-relays] Metrics for assessing EFF's Tor relay challenge?

2014-04-06 Thread Christian
On 04.04.2014 19:13, Karsten Loesing wrote:
> Christian, Lukas, everyone,
> 
> I learned today that we should have something working in a week or two.
>  That's why I started hacking on this today and produced some code:
> 
> https://github.com/kloesing/challenger
> 
> Here are a few things I could use help with:
> 
>  - Anybody want to help turning this script into a web app, possibly
> using Flask?  See the first next step in README.md.
> 
>  - Lukas, you announced OnionPy on tor-dev@ the other day.  Want to look
> into the "Add local cache for ..." bullet points under "Next steps"?  Is
> this something OnionPy could support?  Want to write the glue code?
> 
>  - Christian, want to help write the graphing code that visualizes the
> `combined-*.json` files produced by that tool?  The README.md suggests a
> few possible graphs.
> 

Sure,
should I create a new repo for the website with graphing code or work
directly in the kloesing/challenger repository?


Re: [tor-relays] Metrics for assessing EFF's Tor relay challenge?

2014-04-05 Thread Lukas Erlacher
On 04/05/2014 04:58 PM, Karsten Loesing wrote:
> Great, your help would be much appreciated!  Want to send me a pull
> request whenever you have something to merge?
>
>
Alright, so I wrote a few lines and sent you a pull request. Could you please 
check if that downloads the data you expect?
And when we know what exactly we want to cache and how, I'll add the logic for 
that.

Cheers,
Luke





Re: [tor-relays] Metrics for assessing EFF's Tor relay challenge?

2014-04-05 Thread Lukas Erlacher
Hello Nikita, Karsten,

On 04/05/2014 05:03 PM, Nikita Borisov wrote:
> On Sat, Apr 5, 2014 at 3:58 PM, Karsten Loesing  
> wrote:
>> Installing packages using Python-specific package managers is going to
>> make our sysadmins sad, so we should have a very good reason for
>> wanting such a package.  In general, we don't need the latest and
>> greatest package.  Unless we do.
> What about virtualenv? Part of the premise behind it is that you can
> configure appropriate packages as a developer / operator without
> having to bother sysadmins and making them worried about system-wide
> effects.
>
> - Nikita

I was going to mention virtualenv as well, but I have to admit that I find it 
weird and scary, especially since I haven't found good documentation for it. If 
there is somebody who is familiar with virtualenv that would probably be the 
best solution.

On 04/05/2014 04:58 PM, Karsten Loesing wrote:
> My hope with challenger is that it's written quickly, working quietly
> for a year, and then disappearing without anybody noticing.  I'd
> rather not want to maintain yet another thing.  So, maybe Weather is a
> better candidate for using onion-py than challenger.

Yes, I understand.
> Yeah, I think we'll want to define a maximum lifetime of cache
> entries, or the poor cache will explode pretty soon.

What usage patterns do we have to expect? Do we want to hit onionoo to check if 
the cache is still valid for every request, or should we do "hard caching" for 
several minutes? The best UX solution would be to have a background task that 
keeps the cache current so user requests can be delivered without hitting 
onionoo at all.
In other words, unless we do something intelligent with the cache, the cache is 
not actually going to be very useful.

> Great, your help would be much appreciated!  Want to send me a pull
> request whenever you have something to merge?

Will do.

Cheers,
Luke



signature.asc
Description: OpenPGP digital signature
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


Re: [tor-relays] Metrics for assessing EFF's Tor relay challenge?

2014-04-05 Thread Karsten Loesing
On 05/04/14 16:42, Nikita Borisov wrote:
> On Sat, Apr 5, 2014 at 8:58 AM, Karsten Loesing  
> wrote:
>> Right now, the script sums up all graphs contained in Onionoo's
>> bandwidth, clients, uptime, and weights documents.  It also limits the
>> range of the new graphs to max(first) to max(last) of given input graphs.
>>
>> For example, assume we want to know the total bandwidth provided by the
>> following 2 relays participating in the relay challenge:
>>
>> datetime:  0, 1, 2, 3, 4, 5, ...
>>
>> relay 1: [5, 4, 5, 6]
>> relay 2:  [4, 3, 5, 4]
>>
>> combined:[8, 9, 9, 6]
>>
>> This is not perfect for various reasons, but it's the best I came up
>> with yesterday.  Also, as we all know, perfect is the enemy of good.
>>
>> (If you're curious, reason #1: the graph goes down at the end, and we
>> can't say whether it's because relay 2 disappeared or did not report
>> data yet; reason #2: we're weighting both relays' B/s equally, though
>> relay 1 might have been online 24/7 and relay 2 only long enough that
>> Onionoo doesn't put in null; there may be more reasons.)
> 
> For the relay challenge, wouldn't you want to include the entire
> period that data is available for (i.e., min(first) to max(last))?
> Otherwise, if you are looking at a month's worth of data and a new
> relay arrives on the last day, your graph would only contain that day.

Very good point!

The reason why I didn't include everything from min(first) to max(last)
is that any graph covers the last $time_period of the relay or bridge
being online and reporting data.  So, the "3_days" graph of a specific
relay could show a 3-day period weeks ago, and we wouldn't want to merge
that with other 3-day periods which are more recent.  Of course, you're
right that a new relay covering only a few hours in their "3_days" graph
would reduce our combined graph to just that.  Oops.

So, I guess what we want to do is include everything from $(now - 3
days) to $now in the combined graph.  Will fix.

> Also, I think you would want to do datetime.strptime(max(first), ...)
> here: 
> https://github.com/kloesing/challenger/blob/master/challenge.py#L177-L178
> Otherwise you're just taking the last relay's first and last values as
> the new_first and new_last.

Another very good point.  Will fix.
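
Roughly, the fixed range computation would look something like this (untested,
and assuming the usual Onionoo "YYYY-MM-DD HH:MM:SS" timestamps):

  from datetime import datetime

  FMT = "%Y-%m-%d %H:%M:%S"

  def combined_range(firsts, lasts):
      # Parse every relay's 'first'/'last' timestamp and take the maximum
      # of each, instead of keeping whatever the last relay happened to report.
      new_first = max(datetime.strptime(f, FMT) for f in firsts)
      new_last = max(datetime.strptime(t, FMT) for t in lasts)
      return new_first, new_last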

Thanks for the review!

All the best,
Karsten

___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


Re: [tor-relays] Metrics for assessing EFF's Tor relay challenge?

2014-04-05 Thread Karsten Loesing
On 05/04/14 12:19, Lukas Erlacher wrote:
> Hi Karsten,
> 
> On 04/05/2014 09:58 AM, Karsten Loesing wrote:
>> On second thought, and after sleeping over this, I'm less
>> convinced that we should use an external library for the caching.
>> We should rather start with a simple dict in memory and flush it
>> based on some simple rules. That would allow us to tweak the
>> caching specifically for our use case. And it would mean avoiding
>> a dependency. We can think about moving to onion-py at a later
>> point. That gives you the opportunity to unspaghettize your code,
>> and once that is done we'll have a better idea what caching needs
>> we have for the challenger tool to decide whether to move to
>> onion-py or not. Would you still want to help write the simple
>> caching code for challenger?
> 
> I cleaned up the caching code and added a simple in-memory dict
> caching provider that has no further dependencies to onion-py. (it
> also has no provisions for eviction/flushing at all, but I will add
> that next. Right now everything is cached forever, but of course a
> new response from OnionOO replaces an old one.)

Yeah, I think we'll want to define a maximum lifetime of cache
entries, or the poor cache will explode pretty soon.
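
Something as simple as a timestamped dict might already do (the one-hour
lifetime below is an arbitrary guess, to be tuned):

  import time

  CACHE_MAX_AGE = 3600  # seconds; arbitrary, to be tuned
  _cache = {}

  def cached_fetch(key, fetch_func):
      # Return a cached value if it is younger than CACHE_MAX_AGE,
      # otherwise call fetch_func(), store the result, and return it.
      now = time.time()
      entry = _cache.get(key)
      if entry is not None and now - entry[0] < CACHE_MAX_AGE:
          return entry[1]
      value = fetch_func()
      _cache[key] = (now, value)
      return value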

> I can write the OnionOO API code and caching code for challenger,
> if I can use Python 3 and the requests library. (See below)

Great, your help would be much appreciated!  Want to send me a pull
request whenever you have something to merge?

See my response regarding Python 3 below.

> Of course I'd really like to actually have a user for onion-py,
> since it would help getting the necessary feedback and polish to
> push the library to version 1.0, but I understand if that isn't
> appropriate for this project.

My hope with challenger is that it's written quickly, working quietly
for a year, and then disappearing without anybody noticing.  I'd
rather not want to maintain yet another thing.  So, maybe Weather is a
better candidate for using onion-py than challenger.

>>> I don't really understand what the code does. What is meant by 
>>> "combining" documents? What exactly are we trying to measure?
>>> Once I know that and have thought of a sensible way to
>>> integrate it into onion-py I'm confident I can in fact write
>>> that glue code :)
>> Right now, the script sums up all graphs contained in Onionoo's 
>> bandwidth, clients, uptime, and weights documents.  It also
>> limits the range of the new graphs to max(first) to max(last) of
>> given input graphs.
>> 
>> For example, assume we want to know the total bandwidth provided
>> by the following 2 relays participating in the relay challenge:
>> 
>> datetime:  0, 1, 2, 3, 4, 5, ...
>> 
>> relay 1: [5, 4, 5, 6] relay 2:  [4, 3, 5, 4]
>> 
>> combined:[8, 9, 9, 6]
>> 
>> This is not perfect for various reasons, but it's the best I came
>> up with yesterday.  Also, as we all know, perfect is the enemy of
>> good.
>> 
>> (If you're curious, reason #1: the graph goes down at the end,
>> and we can't say whether it's because relay 2 disappeared or did
>> not report data yet; reason #2: we're weighting both relays' B/s
>> equally, though relay 1 might have been online 24/7 and relay 2
>> only long enough that Onionoo doesn't put in null; there may be
>> more reasons.)
> 
> Ah, I see! :) So for scalar attributes of relays (such as
> consensus_weight_fraction) it's just a sum, and for histories it's
> the graphs combined as you just outlined. That makes sense, thank
> you!

Right.  Though details documents are not included, so just graphs, no
scalar attributes.

>> I'm also not sure about Python 3.  Whatever we write needs to run
>> on Debian Wheezy with whatever libraries are present there.  If
>> they're all Python 3, great.  If not, can't do.
> 
> I would strongly prefer to use Python 3. I understand wanting to
> use debian stable (I use it myself), but Python 3 is 6 years old
> and Python 2 is completely dead and its use for new projects is not
> recommended. The only mandatory dependency for onion-py, and for
> me, is requests (I really dislike using urllib* directly - if you
> want to know why, check
> https://gist.github.com/kennethreitz/973705), and the
> python3-requests package in Wheezy is from 2012, and there is no
> python3-flask. :-(
> 
> Is there anything standing against using pip (python3-pip package)
> to install requests and flask from pypi?

If there's a way to build it only with packages coming out of Wheezy's
apt-get, our sysadmins will like us more, and that's a good thing.

Installing packages using Python-specific package managers is going to
make our sysadmins sad, so we should have a very good reason for
wanting such a package.  In general, we don't need the latest and
greatest package.  Unless we do.

All the best,
Karsten

___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


Re: [tor-relays] Metrics for assessing EFF's Tor relay challenge?

2014-04-05 Thread Nikita Borisov
On Sat, Apr 5, 2014 at 3:58 PM, Karsten Loesing  wrote:
> Installing packages using Python-specific package managers is going to
> make our sysadmins sad, so we should have a very good reason for
> wanting such a package.  In general, we don't need the latest and
> greatest package.  Unless we do.

What about virtualenv? Part of the premise behind it is that you can
configure appropriate packages as a developer / operator without
having to bother sysadmins and making them worried about system-wide
effects.

- Nikita
-- 
Nikita Borisov - http://hatswitch.org/~nikita/
Associate Professor, Electrical and Computer Engineering
Tel: +1 (217) 244-5385, Office: 460 CSL
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


Re: [tor-relays] Metrics for assessing EFF's Tor relay challenge?

2014-04-05 Thread Nikita Borisov
On Sat, Apr 5, 2014 at 8:58 AM, Karsten Loesing  wrote:
> Right now, the script sums up all graphs contained in Onionoo's
> bandwidth, clients, uptime, and weights documents.  It also limits the
> range of the new graphs to max(first) to max(last) of given input graphs.
>
> For example, assume we want to know the total bandwidth provided by the
> following 2 relays participating in the relay challenge:
>
> datetime:  0, 1, 2, 3, 4, 5, ...
>
> relay 1: [5, 4, 5, 6]
> relay 2:  [4, 3, 5, 4]
>
> combined:[8, 9, 9, 6]
>
> This is not perfect for various reasons, but it's the best I came up
> with yesterday.  Also, as we all know, perfect is the enemy of good.
>
> (If you're curious, reason #1: the graph goes down at the end, and we
> can't say whether it's because relay 2 disappeared or did not report
> data yet; reason #2: we're weighting both relays' B/s equally, though
> relay 1 might have been online 24/7 and relay 2 only long enough that
> Onionoo doesn't put in null; there may be more reasons.)

For the relay challenge, wouldn't you want to include the entire
period that data is available for (i.e., min(first) to max(last))?
Otherwise, if you are looking at a month's worth of data and a new
relay arrives on the last day, your graph would only contain that day.

Also, I think you would want to do datetime.strptime(max(first), ...)
here: https://github.com/kloesing/challenger/blob/master/challenge.py#L177-L178
Otherwise you're just taking the last relay's first and last values as
the new_first and new_last.

Cheers,
- Nikita
-- 
Nikita Borisov - http://hatswitch.org/~nikita/
Associate Professor, Electrical and Computer Engineering
Tel: +1 (217) 244-5385, Office: 460 CSL
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


Re: [tor-relays] Metrics for assessing EFF's Tor relay challenge?

2014-04-05 Thread Lukas Erlacher
Hi Karsten,

On 04/05/2014 09:58 AM, Karsten Loesing wrote:
> On second thought, and after sleeping over this, I'm less convinced that we 
> should use an external library for the caching. We should rather start with a 
> simple dict in memory and flush it based on some simple rules. That would 
> allow us to tweak the caching specifically for our use case. And it would 
> mean avoiding a dependency. We can think about moving to onion-py at a later 
> point. That gives you the opportunity to unspaghettize your code, and once 
> that is done we'll have a better idea what caching needs we have for the 
> challenger tool to decide whether to move to onion-py or not. Would you still 
> want to help write the simple caching code for challenger? 
I cleaned up the caching code and added a simple in-memory dict caching 
provider that adds no further dependencies to onion-py. (It also has no 
provisions for eviction/flushing at all, but I will add that next. Right now 
everything is cached forever, but of course a new response from OnionOO 
replaces an old one.)

I can write the OnionOO API code and caching code for challenger, if I can use 
Python 3 and the requests library. (See below)
Of course I'd really like to actually have a user for onion-py, since it would 
help getting the necessary feedback and polish to push the library to version 
1.0, but I understand if that isn't appropriate for this project.
>>  I don't really understand what the code does. What is meant by
>> "combining" documents? What exactly are we trying to measure? Once I
>> know that and have thought of a sensible way to integrate it into
>> onion-py I'm confident I can in fact write that glue code :)
> Right now, the script sums up all graphs contained in Onionoo's
> bandwidth, clients, uptime, and weights documents.  It also limits the
> range of the new graphs to max(first) to max(last) of given input graphs.
>
> For example, assume we want to know the total bandwidth provided by the
> following 2 relays participating in the relay challenge:
>
> datetime:  0, 1, 2, 3, 4, 5, ...
>
> relay 1: [5, 4, 5, 6]
> relay 2:  [4, 3, 5, 4]
>
> combined:[8, 9, 9, 6]
>
> This is not perfect for various reasons, but it's the best I came up
> with yesterday.  Also, as we all know, perfect is the enemy of good.
>
> (If you're curious, reason #1: the graph goes down at the end, and we
> can't say whether it's because relay 2 disappeared or did not report
> data yet; reason #2: we're weighting both relays' B/s equally, though
> relay 1 might have been online 24/7 and relay 2 only long enough that
> Onionoo doesn't put in null; there may be more reasons.)
Ah, I see! :) So for scalar attributes of relays (such as 
consensus_weight_fraction) it's just a sum, and for histories it's the graphs 
combined as you just outlined. That makes sense, thank you!
> I'm also not sure about Python 3.  Whatever we write needs to run on
> Debian Wheezy with whatever libraries are present there.  If they're all
> Python 3, great.  If not, can't do.

I would strongly prefer to use Python 3. I understand wanting to use debian 
stable (I use it myself), but Python 3 is 6 years old and Python 2 is 
completely dead and its use for new projects is not recommended.
The only mandatory dependency for onion-py, and for me, is requests (I really 
dislike using urllib* directly - if you want to know why, check 
https://gist.github.com/kennethreitz/973705), and the python3-requests package 
in Wheezy is from 2012, and there is no python3-flask. :-(

Is there anything standing against using pip (python3-pip package) to install 
requests and flask from pypi?
>
> Thanks for your feedback!
>
> All the best,
> Karsten
Cheers,
Luke



signature.asc
Description: OpenPGP digital signature
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


Re: [tor-relays] Metrics for assessing EFF's Tor relay challenge?

2014-04-05 Thread Karsten Loesing
On 04/04/14 21:24, Lukas Erlacher wrote:
> Hello everyone (reply all ftw),

Hi Lukas,

> On 04/04/2014 07:13 PM, Karsten Loesing wrote:
>> Christian, Lukas, everyone,
>> 
>> I learned today that we should have something working in a week or
>> two. That's why I started hacking on this today and produced some
>> code:
>> 
>> https://github.com/kloesing/challenger
>> 
>> Here are a few things I could use help with:
>> 
>> - Anybody want to help turning this script into a web app,
>> possibly using Flask?  See the first next step in README.md.
>
> I might be able to do that, but currently I don't have enough free
> time to make a commitment.

Okay.  Maybe I'll give it a try by stealing heavily from Sathya's
Compass code.  Unless somebody else wants to give this a try?

>> - Lukas, you announced OnionPy on tor-dev@ the other day.  Want to
>> look into the "Add local cache for ..." bullet points under "Next
>> steps"?  Is this something OnionPy could support?  Want to write
>> the glue code?
>
> onion-py already supports transparent caching using memcached. I use
> a (hopefully) unique serialisation of the query as the key (see
> serializer functions here:
> https://github.com/duk3luk3/onion-py/blob/master/onion_py/manager.py#L7)
> and have a bit of spaghetti code to check for available cached data
> and the 304 response status from onionoo
> (https://github.com/duk3luk3/onion-py/blob/master/onion_py/manager.py#L97).

On second thought, and after sleeping over this, I'm less convinced that
we should use an external library for the caching.  We should rather
start with a simple dict in memory and flush it based on some simple
rules.  That would allow us to tweak the caching specifically for our
use case.  And it would mean avoiding a dependency.

We can think about moving to onion-py at a later point.  That gives you
the opportunity to unspaghettize your code, and once that is done we'll
have a better idea what caching needs we have for the challenger tool to
decide whether to move to onion-py or not.

Would you still want to help write the simple caching code for challenger?

>  I don't really understand what the code does. What is meant by
> "combining" documents? What exactly are we trying to measure? Once I
> know that and have thought of a sensible way to integrate it into
> onion-py I'm confident I can in fact write that glue code :)

Right now, the script sums up all graphs contained in Onionoo's
bandwidth, clients, uptime, and weights documents.  It also limits the
range of the new graphs to max(first) to max(last) of given input graphs.

For example, assume we want to know the total bandwidth provided by the
following 2 relays participating in the relay challenge:

datetime:  0, 1, 2, 3, 4, 5, ...

relay 1: [5, 4, 5, 6]
relay 2:  [4, 3, 5, 4]

combined:[8, 9, 9, 6]

This is not perfect for various reasons, but it's the best I came up
with yesterday.  Also, as we all know, perfect is the enemy of good.

(If you're curious, reason #1: the graph goes down at the end, and we
can't say whether it's because relay 2 disappeared or did not report
data yet; reason #2: we're weighting both relays' B/s equally, though
relay 1 might have been online 24/7 and relay 2 only long enough that
Onionoo doesn't put in null; there may be more reasons.)
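
The summing step itself is trivial; here is a bare-bones sketch, leaving out
the first/last alignment and interval handling the real script needs to do:

  def combine_histories(histories):
      # histories: one list of values per relay, already aligned to the
      # same datetime axis; None means "no data for that interval".
      combined = []
      for values in zip(*histories):
          present = [v for v in values if v is not None]
          combined.append(sum(present) if present else None)
      return combined

  # combine_histories([[5, 4, 5, 6, None], [None, 4, 3, 5, 4]])
  # -> [5, 8, 8, 11, 4]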

> Cutting off the rest of the quote tree here (is that a polite thing
> to do on mailing lists? Sorry if not.), I just have two more comments
> towards Roger's thoughts:
> 
> 1. Groups of relays taking the challenge together could just form
> relay families and we could count relay families in aggregate. (I'm
> already thinking about relay families a lot because gamambel wants me
> to overhaul the torservers exit-funding scripts to use relay
> families.)

Relay families are a difficult topic.  I remember spending a day or two
figuring out how to group by family in Compass a while back.  There must
be some notes or thoughts on Trac if you're curious.

Regarding these graphs, I'm not sure what we would gain from grouping
new relays by family.  My current plan is to provide only graphs that
have a single graph line for all relays and bridges participating in the
challenge.  So, "total bytes read", "total bytes written", "total number
of new relays and bridges", "total consensus weight fraction added",
"total advertised bandwidth added", etc.  I don't think we should add
categories by family or any other criteria.  KISS.

> 2. If you want to do something with consensus weight, why
> not compare against all other new relays based on the first_seen
> property? ("new" can be adjusted until sufficiently pretty graphs
> emerge; and we'd need to periodically (every 4 or 12 or 24 hours?)
> fetch the consensus_weights from onionoo)

I'm not sure what you mean.  We do have consensus weight fractions in
(combined) weights documents.  I'm also planning to add absolute
consensus weights to those documents in the future.

By "fetching something periodically from Onionoo", do you mean 

Re: [tor-relays] Metrics for assessing EFF's Tor relay challenge?

2014-04-04 Thread Lukas Erlacher
Hello everyone (reply all ftw),

On 04/04/2014 07:13 PM, Karsten Loesing wrote:
> Christian, Lukas, everyone,
>
> I learned today that we should have something working in a week or two.
>  That's why I started hacking on this today and produced some code:
>
> https://github.com/kloesing/challenger
>
> Here are a few things I could use help with:
>
>  - Anybody want to help turning this script into a web app, possibly
> using Flask?  See the first next step in README.md.
I might be able to do that, but currently I don't have enough free time to make 
a commitment.
>  - Lukas, you announced OnionPy on tor-dev@ the other day.  Want to look
> into the "Add local cache for ..." bullet points under "Next steps"?  Is
> this something OnionPy could support?  Want to write the glue code?
onion-py already supports transparent caching using memcached. I use a 
(hopefully) unique serialisation of the query as the key (see serializer 
functions here: 
https://github.com/duk3luk3/onion-py/blob/master/onion_py/manager.py#L7) and 
have a bit of spaghetti code to check for available cached data and the 304 
response status from onionoo 
(https://github.com/duk3luk3/onion-py/blob/master/onion_py/manager.py#L97).
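
Stripped down, the idea is roughly the following (this is not the actual
onion-py code, just the shape of the 304 handling, using requests):

  import requests

  def fetch_with_cache(url, cache):
      # cache maps url -> (last_modified_header, parsed_json)
      headers = {}
      if url in cache and cache[url][0]:
          headers["If-Modified-Since"] = cache[url][0]
      r = requests.get(url, headers=headers)
      if r.status_code == 304:
          return cache[url][1]  # Onionoo says nothing changed, reuse the cache
      r.raise_for_status()
      cache[url] = (r.headers.get("Last-Modified", ""), r.json())
      return cache[url][1]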

I don't really understand what the code does. What is meant by "combining" 
documents? What exactly are we trying to measure? Once I know that and have 
thought of a sensible way to integrate it into onion-py I'm confident I can 
in fact write that glue code :)

Cutting off the rest of the quote tree here (is that a polite thing to do on 
mailing lists? Sorry if not.), I just have two more comments towards Roger's 
thoughts:

1. Groups of relays taking the challenge together could just form relay 
families and we could count relay families in aggregate. (I'm already thinking 
about relay families a lot because gamambel wants me to overhaul the torservers 
exit-funding scripts to use relay families.)
2. If you want to do something with consensus weight, why not compare against 
all other new relays based on the first_seen property? ("new" can be adjusted 
until sufficiently pretty graphs emerge; and we'd need to periodically (every 4 
or 12 or 24 hours?) fetch the consensus_weights from onionoo)

Cheers,
Luke

PS: If you'd like me to support different backends for the caching in onion-py, 
I'm open to integrating anything that has a python 3 library.



signature.asc
Description: OpenPGP digital signature
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


Re: [tor-relays] Metrics for assessing EFF's Tor relay challenge?

2014-04-04 Thread Karsten Loesing
Christian, Lukas, everyone,

I learned today that we should have something working in a week or two.
 That's why I started hacking on this today and produced some code:

https://github.com/kloesing/challenger

Here are a few things I could use help with:

 - Anybody want to help turning this script into a web app, possibly
using Flask?  See the first next step in README.md.

 - Lukas, you announced OnionPy on tor-dev@ the other day.  Want to look
into the "Add local cache for ..." bullet points under "Next steps"?  Is
this something OnionPy could support?  Want to write the glue code?

 - Christian, want to help write the graphing code that visualizes the
`combined-*.json` files produced by that tool?  The README.md suggests a
few possible graphs.

Thanks in advance!  You're all helping grow the Tor network!

Also replying to Christian's mail inline.

On 28/03/14 09:07, Christian wrote:
> On 27.03.2014 16:25, Karsten Loesing wrote:
>> On 27/03/14 11:57, Roger Dingledine wrote:
>>> Hi Christian, other tor relay fans,
>>>
>>> I'm looking for some volunteers, hopefully including Christian, to work
>>> on metrics and visualization of impact from new relays.
>>>
>>> We're working with EFF to do another "Tor relay challenge" [*], to both
>>> help raise awareness of the value of Tor, and encourage many people to
>>> run relays -- probably non-exit relays for the most part, since that's
>>> the easiest for normal volunteers to step up and do.
>>>
>>> You can read about the first round from several years ago here:
>>> https://www.eff.org/torchallenge
>>>
>>> To make it succeed, the challenge for us here is to figure out what to
>>> measure to track progress, and then measure it and graph it for everybody.
>>>
>>> I'm figuring that like last time, EFF will collect a list of fingerprints
>>> of relays that signed up "because of the challenge".
>>>
>>> One of the main pushes we're aiming for this year is longevity: it's
>>> easy to sign up a relay for two weeks and then stop. We want to emphasize
>>> consistency and encourage having the relays up for many months.
> 
> Do you want the challenge application to simply provide some graphs or
> give some sort of interactive dashboard (client-side JavaScript)?

You asked Roger, and I'm not Roger, but I'd say let's start with some
graphs.  We can always make it more interactive later.  Though I doubt
it will be necessary.

>> Before going through your list of things we'd want to track below, let's
>> first talk about our options to turn a list of fingerprints into fancy
>> graphs:
>>
>>  1. Write a new metrics-web module and put graphs on the metrics
>> website.  This means parsing relay descriptors and storing certain
>> per-relay statistics for all relays.  That gives us maximum flexibility
>> in the kinds of statistics, but is also most expensive in terms of
>> developer hours.  I don't want to do this.
>>
>>  2. Extend Globe to show details pages for multiple relays.  This
>> requires us to move to the server-based Globe-node, because the poor
>> browser shouldn't download graph data for all relays, but the server
>> should return a single graph for all relays.  It's also unclear if the
>> new graphs will be of general interest for Globe users, and if the rest
>> of the Globe details will be confusing to people interested in the relay
>> challenge.  Probably not a great idea, but I'm not sure.
>>
> 
> I agree that Globe isn't the best place to display the challenge graphs.
> Currently the only focus for Globe is to provide data for single relays
> and bridges.
> Imo it would be better if the challenge participants list adds links to
> atlas, blutmagie and globe.

Agreed!

>>  3. Extend Onionoo to return aggregate graph data for a given set of
>> fingerprints.  Seems useful.  But has the big disadvantage that Onionoo
>> would suddenly have to create responses dynamically.  I'm worried about
>> creating a new performance bottleneck there, and this is certainly not
>> possible with poor overloaded yatei.
>>
>>  4. Write a new little tool that fetches Onionoo documents once (or
>> twice) per day for all relays participating in the relay challenge and
>> that produces graph data.  That new tool could probably re-use some
>> Compass code for the backend and some Globe code for the frontend.
>> Graphs could be integrated directly into EFF's website.  This is
>> currently my favorite approach.
>>
> 
> I like this idea.

Glad to hear!  I slightly moved away from the "fetches once or twice per
day" idea to a more elaborate approach.  But the general idea is still
the same.

>> Note for 2--4: Onionoo currently only gives out data for relays that
>> have been running in the past 7 days.  I'd have to extend it to give out
>> all data for a list of fingerprints, regardless of when relays were
>> running the last time.  That's 2--3 days of coding and testing for me.
>> It's also potentially creating a bottleneck, so we should first have a
>> replacement for yatei.
>>
>>> So what are the

Re: [tor-relays] Metrics for assessing EFF's Tor relay challenge?

2014-03-28 Thread Runa A. Sandvik
On Fri, Mar 28, 2014 at 5:45 AM, Karsten Loesing  wrote:
> On 27/03/14 19:51, Runa A. Sandvik wrote:
>> On Thu, Mar 27, 2014 at 3:25 PM, Karsten Loesing  
>> wrote:
>>> Before going through your list of things we'd want to track below, let's
>>> first talk about our options to turn a list of fingerprints into fancy
>>> graphs:
>>
>> Would it be possible to also have a "Top 10 countries with the most
>> Tor relays" graph?
>
> Hi Runa!

Hi Karsten! :)

> Hmm hmm hmm---yes!  Onionoo's details documents contain country
> information, and it shouldn't be too hard to combine them with uptime or
> bandwidth information to make per-country graphs.
>
> (Wow, your question made me rethink how we resolve relay/bridge IP
> addresses to country codes for statistics.  I was always thinking that
> we need to remember the full history of country codes that a
> relay/bridge IP address was resolved to, because a relay/bridge could be
> moved to another country, or a new IP-to-country database might change
> its mind about which country it is in.  But that doesn't really matter
> for statistics where we're mostly interested in the big picture.  We can
> probably just use whatever country code we learned last and apply that
> to the full history of the relay/bridge.  Guess I should resume working
> on per-country graphs for the metrics website soon, for both relays and
> bridges.  Thanks!)

Great! I look forward to seeing the stats for this.

-- 
Runa A. Sandvik
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


Re: [tor-relays] Metrics for assessing EFF's Tor relay challenge?

2014-03-27 Thread Karsten Loesing
On 27/03/14 19:51, Runa A. Sandvik wrote:
> On Thu, Mar 27, 2014 at 3:25 PM, Karsten Loesing  
> wrote:
>> Before going through your list of things we'd want to track below, let's
>> first talk about our options to turn a list of fingerprints into fancy
>> graphs:
> 
> Would it be possible to also have a "Top 10 countries with the most
> Tor relays" graph?

Hi Runa!

Hmm hmm hmm---yes!  Onionoo's details documents contain country
information, and it shouldn't be too hard to combine them with uptime or
bandwidth information to make per-country graphs.
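
A quick sketch of the counting part, assuming Onionoo's details documents keep
exposing a "country" field and the "running"/"fields" query parameters (no
caching and no weighting by bandwidth here):

  from collections import Counter
  import requests

  def top_relay_countries(n=10):
      # Count currently running relays per country code.
      r = requests.get("https://onionoo.torproject.org/details",
                       params={"running": "true", "fields": "country"})
      r.raise_for_status()
      counts = Counter(relay.get("country", "??")
                       for relay in r.json()["relays"])
      return counts.most_common(n)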

(Wow, your question made me rethink how we resolve relay/bridge IP
addresses to country codes for statistics.  I was always thinking that
we need to remember the full history of country codes that a
relay/bridge IP address was resolved to, because a relay/bridge could be
moved to another country, or a new IP-to-country database might change
its mind about which country it is in.  But that doesn't really matter
for statistics where we're mostly interested in the big picture.  We can
probably just use whatever country code we learned last and apply that
to the full history of the relay/bridge.  Guess I should resume working
on per-country graphs for the metrics website soon, for both relays and
bridges.  Thanks!)

(Disclaimer: it's pre-second coffee time!)

All the best,
Karsten

___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


Re: [tor-relays] Metrics for assessing EFF's Tor relay challenge?

2014-03-27 Thread Runa A. Sandvik
On Thu, Mar 27, 2014 at 3:25 PM, Karsten Loesing  wrote:
> Before going through your list of things we'd want to track below, let's
> first talk about our options to turn a list of fingerprints into fancy
> graphs:

Would it be possible to also have a "Top 10 countries with the most
Tor relays" graph?

-- 
Runa A. Sandvik
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


Re: [tor-relays] Metrics for assessing EFF's Tor relay challenge?

2014-03-27 Thread Karsten Loesing
On 27/03/14 11:57, Roger Dingledine wrote:
> Hi Christian, other tor relay fans,
> 
> I'm looking for some volunteers, hopefully including Christian, to work
> on metrics and visualization of impact from new relays.
> 
> We're working with EFF to do another "Tor relay challenge" [*], to both
> help raise awareness of the value of Tor, and encourage many people to
> run relays -- probably non-exit relays for the most part, since that's
> the easiest for normal volunteers to step up and do.
> 
> You can read about the first round from several years ago here:
> https://www.eff.org/torchallenge
> 
> To make it succeed, the challenge for us here is to figure out what to
> measure to track progress, and then measure it and graph it for everybody.
> 
> I'm figuring that like last time, EFF will collect a list of fingerprints
> of relays that signed up "because of the challenge".
> 
> One of the main pushes we're aiming for this year is longevity: it's
> easy to sign up a relay for two weeks and then stop. We want to emphasize
> consistency and encourage having the relays up for many months.

Before going through your list of things we'd want to track below, let's
first talk about our options to turn a list of fingerprints into fancy
graphs:

 1. Write a new metrics-web module and put graphs on the metrics
website.  This means parsing relay descriptors and storing certain
per-relay statistics for all relays.  That gives us maximum flexibility
in the kinds of statistics, but is also most expensive in terms of
developer hours.  I don't want to do this.

 2. Extend Globe to show details pages for multiple relays.  This
requires us to move to the server-based Globe-node, because the poor
browser shouldn't download graph data for all relays, but the server
should return a single graph for all relays.  It's also unclear if the
new graphs will be of general interest for Globe users, and if the rest
of the Globe details will be confusing to people interested in the relay
challenge.  Probably not a great idea, but I'm not sure.

 3. Extend Onionoo to return aggregate graph data for a given set of
fingerprints.  Seems useful.  But has the big disadvantage that Onionoo
would suddenly have to create responses dynamically.  I'm worried about
creating a new performance bottleneck there, and this is certainly not
possible with poor overloaded yatei.

 4. Write a new little tool that fetches Onionoo documents once (or
twice) per day for all relays participating in the relay challenge and
that produces graph data.  That new tool could probably re-use some
Compass code for the backend and some Globe code for the frontend.
Graphs could be integrated directly into EFF's website.  This is
currently my favorite approach.

Note for 2--4: Onionoo currently only gives out data for relays that
have been running in the past 7 days.  I'd have to extend it to give out
all data for a list of fingerprints, regardless of when relays were
running the last time.  That's 2--3 days of coding and testing for me.
It's also potentially creating a bottleneck, so we should first have a
replacement for yatei.

> So what are the things we'd want to track?
> 
> - Number of relays signed up that are Running, over time.

We can do something here with Onionoo's new uptime documents.

> - Total bandwidth history of these running relays, over time.

We can sum up data from bandwidth documents for this.

> - Maybe a graph showing the total number of bytes ever contributed
>   by these relays? That would impress people perhaps.

Sure, same data as above.

> - Total consensus weight of these running relays, over time.

We only have total consensus weight *fraction*, but yes.

> - Something emphasizing duration -- e.g. the total consensus weight of
>   the subset of the relays that have been in the consensus for 90% of
>   the past month, 2 months, 6 months, etc. Are there better ideas here
>   I hope? We'll want to be cognizant that if we're in the first week
>   of the challenge, the 2 month graph will be empty and thus look sad.

Not sure what the 90% part is for, but yes, graphs with total consensus
weight fraction are doable.

Regarding the sad-looking 2 month graph, we can easily define the date
when the challenge starts and not show graphs until they make sense.
Note that the current intervals for most data are 1 week, 1 month, 3
months, 1 year, and 5 years.

> - Something comparing the above numbers to the total numbers. Given how
>   huge some of the relays are lately, it would be easily to visualize
>   the new contribution as a tiny irrelevant fraction, which could be
>   disheartening to new relay operators even if their relays will actually
>   become a big deal with some patience. What are some strategies for
>   making this work right? E.g. a layer graph showing y layered on top of
>   x where y is the new contribution, rather than a percentage-of-total
>   graph that shows approximately 0%.

Absolute contributions to consensus weight are not available

[tor-relays] Metrics for assessing EFF's Tor relay challenge?

2014-03-27 Thread Roger Dingledine
Hi Christian, other tor relay fans,

I'm looking for some volunteers, hopefully including Christian, to work
on metrics and visualization of impact from new relays.

We're working with EFF to do another "Tor relay challenge" [*], to both
help raise awareness of the value of Tor, and encourage many people to
run relays -- probably non-exit relays for the most part, since that's
the easiest for normal volunteers to step up and do.

You can read about the first round from several years ago here:
https://www.eff.org/torchallenge

To make it succeed, the challenge for us here is to figure out what to
measure to track progress, and then measure it and graph it for everybody.

I'm figuring that like last time, EFF will collect a list of fingerprints
of relays that signed up "because of the challenge".

One of the main pushes we're aiming for this year is longevity: it's
easy to sign up a relay for two weeks and then stop. We want to emphasize
consistency and encourage having the relays up for many months.

So what are the things we'd want to track?

- Number of relays signed up that are Running, over time.
- Total bandwidth history of these running relays, over time.
- Maybe a graph showing the total number of bytes ever contributed
  by these relays? That would impress people perhaps.
- Total consensus weight of these running relays, over time.
- Something emphasizing duration -- e.g. the total consensus weight of
  the subset of the relays that have been in the consensus for 90% of
  the past month, 2 months, 6 months, etc. Are there better ideas here
  I hope? We'll want to be cognizant that if we're in the first week
  of the challenge, the 2 month graph will be empty and thus look sad.
- Something comparing the above numbers to the total numbers. Given how
  huge some of the relays are lately, it would be easy to visualize
  the new contribution as a tiny irrelevant fraction, which could be
  disheartening to new relay operators even if their relays will actually
  become a big deal with some patience. What are some strategies for
  making this work right? E.g. a layer graph showing y layered on top of
  x where y is the new contribution, rather than a percentage-of-total
  graph that shows approximately 0%.

We could also imagine more niche categories. For example, if we're hoping
to get people to sign up relays at universities, we could imagine that
the folks running the challenge give us a list of fingerprints of relays
that self-identify as being at universities, and then we do up the same
set of graphs with that subset of relays.

So, Christian, others, how much of this is possible as-is or with some
limited tweaking, with Globe and related scripts? I am hoping the answer
is most of it. :) I also cc Karsten because a lot of this overlaps with
the metrics scripts, but I am expecting Karsten to push back against
the idea of integrating these measurements more with the metrics project.

Any other ideas for what to measure to help people know whether their
contribution is being worthwhile?

[*] Please don't take this mail as any official announcement, or timeline,
or any of that. At this point we need to collect people to help make
this happen, not collect news stories.

Thanks!
--Roger

___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


Re: [tor-relays] Metrics / Router Details

2012-01-14 Thread Karsten Loesing
On 1/14/12 8:27 AM, Sebastian Urbach wrote:
> Hi,
> 
> Im receiving the following error message when i try to view my router
> details on metrics:
> 
> Proxy Error
> 
> The proxy server received an invalid response from an upstream server.
> The proxy server could not handle the request GET /routerdetail.html.
> 
> Reason: Error reading from remote server
> 
> Url for this message is:
> 
> https://metrics.torproject.org/routerdetail.html?fingerprint=0aff5440ae93f2ed679b20e543081710312b7333

Should be fixed now.

The problem was that we ran SELECT MAX() on the newly partitioned
database table, an operation that isn't well supported in PostgreSQL 8.4.

Changed to a SELECT MAX() on another, non-partitioned table.  That
should do the trick.

Best,
Karsten
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


[tor-relays] Metrics / Router Details

2012-01-13 Thread Sebastian Urbach
Hi,

Im receiving the following error message when i try to view my router
details on metrics:

Proxy Error

The proxy server received an invalid response from an upstream server.
The proxy server could not handle the request GET /routerdetail.html.

Reason: Error reading from remote server

Url for this message is:

https://metrics.torproject.org/routerdetail.html?fingerprint=0aff5440ae93f2ed679b20e543081710312b7333

-- 
Mit freundlichen Grüßen / Yours sincerely

Sebastian Urbach


Religion is something left over from the infancy of
our intelligence, it will fade away as we adopt
reason and science as our guidelines.


Bertrand Arthur William Russell (1872-1970),
British philosopher, logician, mathematician,
historian, and social critic.


signature.asc
Description: PGP signature
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


Re: [tor-relays] Metrics portal problems?

2011-10-19 Thread Karsten Loesing
On 10/19/11 10:55 PM, Rick Huebner wrote:
> Is the metrics portal at https://metrics.torproject.org/ partly down, or
> undergoing maintenance or something?  I see that some of the stats don't
> seem to have been updated in the last week or so.  Many of the graphs
> are truncated, e.g.
> https://metrics.torproject.org/users.html?graph=direct-users&start=2011-10-9&end=2011-10-19&country=all&dpi=72#direct-users,
> and the per-relay stats on the
> https://metrics.torproject.org/networkstatus.html page all seem to stop
> with the 2011-10-12 13:00:00.0 consensus.

This is a maintenance operation taking much longer than anticipated.  We
had to rebuild some database tables with aggregated values for almost 1
year of data.  We started from oldest to newest and have made it from
December 2010 to July 2011.  I now changed this to rebuild aggregates
from newest to oldest, so that the most recent data will be available on
the website in the next few hours.  Sorry for the confusion.

Best,
Karsten
___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


[tor-relays] Metrics portal problems?

2011-10-19 Thread Rick Huebner
Is the metrics portal at https://metrics.torproject.org/ partly down, or 
undergoing maintenance or something?  I see that some of the stats don't 
seem to have been updated in the last week or so.  Many of the graphs 
are truncated, e.g. 
https://metrics.torproject.org/users.html?graph=direct-users&start=2011-10-9&end=2011-10-19&country=all&dpi=72#direct-users, 
and the per-relay stats on the 
https://metrics.torproject.org/networkstatus.html page all seem to stop 
with the 2011-10-12 13:00:00.0 consensus.

___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays