[tor-relays] Relay Hibernation and Wake Up Questions

2014-04-04 Thread Nik
Hello,

I'm somewhat new to running relays and have a couple of separate but
related questions about hibernation and the wake-up process.

1)  I have AccountingMax and AccountingStart set (to '1850 GB' and
'month 3 15:00', respectively).  Yesterday, the last day of the
accounting period, my relay went into hibernation at around 7:00.  I
started running this relay just last month, however, and it had only
used approximately 440 GB of the AccountingMax quota.  Tor then tried to
wake up at 15:00.  Is it expected behavior with Accounting{Max,Start} to
hibernate on the last/first day of the period even if you're under quota?
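For reference, the torrc lines in question would look like this (values taken from the figures above):

```
## Accounting as described: the period starts on the 3rd of each month
## at 15:00, with a 1850 GB traffic quota.
AccountingStart month 3 15:00
AccountingMax 1850 GB
```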

2)  I have the relay configured to run on port 443.  When Tor woke up,
it was unable to bind to 443, and so the relay stayed down.  Again, is
this known/expected behavior that if a relay is set to run on a
privileged port it needs human/root intervention to re-bind after
hibernation?
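One possible workaround, sketched below (assuming a Tor version with ORPort flags, 0.2.3.x or later, and a firewall that can redirect ports -- I haven't tested this on your exact setup): advertise 443 but actually bind to an unprivileged port, so no root is needed to re-bind after hibernation.

```
## Sketch: advertise port 443 to clients but listen locally on 9001.
ORPort 443 NoListen
ORPort 9001 NoAdvertise
## Then redirect 443 -> 9001 externally, e.g. with iptables:
## iptables -t nat -A PREROUTING -p tcp --dport 443 -j REDIRECT --to-port 9001
```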

By the way, I'm running Debian stable and using the system packages, so
I have (I'm assuming) an old-ish version of Tor (0.2.3.25).

Sorry if this has been answered elsewhere before.

Thanks,
Nik



___
tor-relays mailing list
tor-relays@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-relays


Re: [tor-relays] Metrics for assessing EFF's Tor relay challenge?

2014-04-04 Thread Karsten Loesing
Christian, Lukas, everyone,

I learned today that we should have something working in a week or two.
That's why I started hacking on this today and produced some code:

https://github.com/kloesing/challenger

Here are a few things I could use help with:

 - Anybody want to help turning this script into a web app, possibly
using Flask?  See the first next step in README.md.

 - Lukas, you announced OnionPy on tor-dev@ the other day.  Want to look
into the "Add local cache for ..." bullet points under "Next steps"?  Is
this something OnionPy could support?  Want to write the glue code?

 - Christian, want to help write the graphing code that visualizes the
`combined-*.json` files produced by that tool?  The README.md suggests a
few possible graphs.
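For anyone wondering what fetching and "combining" Onionoo documents might look like, here is a minimal sketch in Python. The /details endpoint and its JSON field names match Onionoo's public API, but the combine_bandwidth helper and its choice of fields are my own illustration, not necessarily what challenger actually does:

```python
import json
import urllib.request

ONIONOO = "https://onionoo.torproject.org/details?lookup="

def fetch_details(fingerprints):
    """Fetch one Onionoo details document per relay fingerprint."""
    docs = []
    for fp in fingerprints:
        with urllib.request.urlopen(ONIONOO + fp) as resp:
            docs.append(json.load(resp))
    return docs

def combine_bandwidth(docs):
    """Sum advertised bandwidth over all relays found in the documents."""
    total = 0
    for doc in docs:
        for relay in doc.get("relays", []):
            total += relay.get("advertised_bandwidth", 0)
    return total
```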

Thanks in advance!  You're all helping grow the Tor network!

Also replying to Christian's mail inline.

On 28/03/14 09:07, Christian wrote:
> On 27.03.2014 16:25, Karsten Loesing wrote:
>> On 27/03/14 11:57, Roger Dingledine wrote:
>>> Hi Christian, other tor relay fans,
>>>
>>> I'm looking for some volunteers, hopefully including Christian, to work
>>> on metrics and visualization of impact from new relays.
>>>
>>> We're working with EFF to do another "Tor relay challenge" [*], to both
>>> help raise awareness of the value of Tor, and encourage many people to
>>> run relays -- probably non-exit relays for the most part, since that's
>>> the easiest for normal volunteers to step up and do.
>>>
>>> You can read about the first round from several years ago here:
>>> https://www.eff.org/torchallenge
>>>
>>> To make it succeed, the challenge for us here is to figure out what to
>>> measure to track progress, and then measure it and graph it for everybody.
>>>
>>> I'm figuring that like last time, EFF will collect a list of fingerprints
>>> of relays that signed up "because of the challenge".
>>>
>>> One of the main pushes we're aiming for this year is longevity: it's
>>> easy to sign up a relay for two weeks and then stop. We want to emphasize
>>> consistency and encourage having the relays up for many months.
> 
> Do you want the challenge application to simply provide some graphs or
> give some sort of interactive dashboard (clientside JavaScript)?

You asked Roger, and I'm not Roger, but I'd say let's start with some
graphs.  We can always make it more interactive later.  Though I doubt
it will be necessary.

>> Before going through your list of things we'd want to track below, let's
>> first talk about our options to turn a list of fingerprints into fancy
>> graphs:
>>
>>  1. Write a new metrics-web module and put graphs on the metrics
>> website.  This means parsing relay descriptors and storing certain
>> per-relay statistics for all relays.  That gives us maximum flexibility
>> in the kinds of statistics, but is also most expensive in terms of
>> developer hours.  I don't want to do this.
>>
>>  2. Extend Globe to show details pages for multiple relays.  This
>> requires us to move to the server-based Globe-node, because the poor
>> browser shouldn't download graph data for all relays, but the server
>> should return a single graph for all relays.  It's also unclear if the
>> new graphs will be of general interest for Globe users, and if the rest
>> of the Globe details will be confusing to people interested in the relay
>> challenge.  Probably not a great idea, but I'm not sure.
>>
> 
> I agree that Globe isn't the best place to display the challenge graphs.
> Currently the only focus for Globe is to provide data for single relays
> and bridges.
> Imo it would be better if the challenge participants list adds links to
> atlas, blutmagie and globe.

Agreed!

>>  3. Extend Onionoo to return aggregate graph data for a given set of
>> fingerprints.  Seems useful.  But has the big disadvantage that Onionoo
>> would suddenly have to create responses dynamically.  I'm worried about
>> creating a new performance bottleneck there, and this is certainly not
>> possible with poor overloaded yatei.
>>
>>  4. Write a new little tool that fetches Onionoo documents once (or
>> twice) per day for all relays participating in the relay challenge and
>> that produces graph data.  That new tool could probably re-use some
>> Compass code for the backend and some Globe code for the frontend.
>> Graphs could be integrated directly into EFF's website.  This is
>> currently my favorite approach.
>>
> 
> I like this idea.

Glad to hear!  I slightly moved away from the "fetches once or twice per
day" idea to a more elaborate approach.  But the general idea is still
the same.

>> Note for 2--4: Onionoo currently only gives out data for relays that
>> have been running in the past 7 days.  I'd have to extend it to give out
>> all data for a list of fingerprints, regardless of when relays were
>> running the last time.  That's 2--3 days of coding and testing for me.
>> It's also potentially creating a bottleneck, so we should first have a
>> replacement for yatei.
>>
>>> So what are the

Re: [tor-relays] Metrics for assessing EFF's Tor relay challenge?

2014-04-04 Thread Lukas Erlacher
Hello everyone (reply all ftw),

On 04/04/2014 07:13 PM, Karsten Loesing wrote:
> Christian, Lukas, everyone,
>
> I learned today that we should have something working in a week or two.
>  That's why I started hacking on this today and produced some code:
>
> https://github.com/kloesing/challenger
>
> Here are a few things I could use help with:
>
>  - Anybody want to help turning this script into a web app, possibly
> using Flask?  See the first next step in README.md.
I might be able to do that, but currently I don't have enough free time to make 
a commitment.
>  - Lukas, you announced OnionPy on tor-dev@ the other day.  Want to look
> into the "Add local cache for ..." bullet points under "Next steps"?  Is
> this something OnionPy could support?  Want to write the glue code?
onion-py already supports transparent caching using memcached. I use a 
(hopefully) unique serialisation of the query as the key (see serializer 
functions here: 
https://github.com/duk3luk3/onion-py/blob/master/onion_py/manager.py#L7) and 
have a bit of spaghetti code to check for available cached data and the 304 
response status from onionoo 
(https://github.com/duk3luk3/onion-py/blob/master/onion_py/manager.py#L97).
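The idea can be sketched as follows; the function name and key format here are illustrative, not onion-py's actual API:

```python
def cache_key(doc_type, **params):
    """Build a deterministic memcached key from an Onionoo query."""
    parts = ["onionoo", doc_type]
    # Sort parameters so equivalent queries map to the same key.
    for name in sorted(params):
        parts.append("%s=%s" % (name, params[name]))
    return ":".join(parts)
```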

I don't really understand what the code does yet. What is meant by "combining" 
documents? What exactly are we trying to measure? Once I know that and have 
thought of a sensible way to integrate it into onion-py, I'm confident I can 
in fact write that glue code :)

Cutting off the rest of the quote tree here (is that a polite thing to do on 
mailing lists? Sorry if not.), I just have two more comments towards Roger's 
thoughts:

1. Groups of relays taking the challenge together could just form relay 
families and we could count relay families in aggregate. (I'm already thinking 
about relay families a lot because gamambel wants me to overhaul the torservers 
exit-funding scripts to use relay families.)
2. If you want to do something with consensus weight, why not compare against 
all other new relays based on the first_seen property? ("new" can be adjusted 
until sufficiently pretty graphs emerge; and we'd need to periodically (every 4 
or 12 or 24 hours?) fetch the consensus_weights from onionoo)
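A rough sketch of that comparison (assuming Onionoo-style relay dicts with "first_seen" and "consensus_weight" fields; the cutoff date and median statistic are illustrative choices):

```python
from datetime import datetime

def new_relays(relays, since):
    """Keep only relays whose first_seen is on or after `since`."""
    out = []
    for r in relays:
        first_seen = datetime.strptime(r["first_seen"], "%Y-%m-%d %H:%M:%S")
        if first_seen >= since:
            out.append(r)
    return out

def median_consensus_weight(relays):
    """Median consensus weight, as a baseline to compare challenge relays against."""
    weights = sorted(r.get("consensus_weight", 0) for r in relays)
    if not weights:
        return 0
    return weights[len(weights) // 2]
```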

Cheers,
Luke

PS: If you'd like me to support different backends for the caching in onion-py, 
I'm open to integrating anything that has a python 3 library.





[tor-relays] Rapid multiple connections from same relay or client on data port

2014-04-04 Thread Tora Tora Tora
I am running the latest 0.2.5.3-alpha Tor build. This time I am observing
multiple connections from the same address (not sure if it is a client or
relay) established on a data port within one minute. The latest flood of
connections comes from One World Labs, who claim to be a computer
security company that also searches for leaked/stolen company
information on the "dark Internet" or something along those lines.

It seems to me that, since circuits are built through randomly chosen
relays, the likelihood of the same relay having multiple connections to
my single relay within such a short period of time is low. I think
someone already pointed out earlier that some clients used to open a
number of circuits before they needed them. I suppose if such a "broken"
client chooses my relay as an entry point, it might start many circuits
quickly. Then again, the 0.2.5.3 release notes claimed improvements in
DoS protection.

From a practical standpoint, is there a rule of thumb for when I should
consider rapid multiple connections from the same address to my relay's
directory/data ports a DoS attack and take countermeasures?
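Not an official recommendation, but one common blunt instrument is a per-address connection cap at the firewall. A sketch for Linux (port and threshold are illustrative; adjust to your ORPort/DirPort and normal traffic):

```
## Cap concurrent connections per source address to the ORPort (9001
## here) at 8, dropping new SYNs above that:
## iptables -A INPUT -p tcp --syn --dport 9001 \
##   -m connlimit --connlimit-above 8 -j DROP
```

Be careful with low limits: many legitimate clients behind one NAT can look like a single flooding address.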