Re: [tor-dev] moved from [Tor-censorship-events] Improving the censorship event detector.

2015-08-21 Thread l.m
Hi George,

You sell yourself short. It was a good first attempt. Now I should
clarify: the last time I spoke to Karsten about this, they indicated
that the measurement team has other priorities (not obvious from the
outdated roadmap). Karsten estimated it would be a year or more before
a replacement is expected.

I'm just an anon to them so I cannot change these things. I hope that
clarifies your question of interest.

On the other hand, my interest in the censorship detector started as
an improvement to metrics-lib and Onionoo. In its basic form the fork
takes the data, recognizes patterns using applied linguistics, and
performs some actions. Getting the data for analysis of censorship is
in some ways a simplification. However, progress will be slower than
you might like because the effort here will be split between this and
the fork of metrics-lib.

I really do appreciate your interest (and that of Joss) so I'd like to
keep this discussion going. 

In the paper by Joss Wright et al., events besides just censorship
were found to be useful as indicators of an environment where
censoring services leads to an increase in Tor use. This sounds like
the database you mention. If such a database included events like
China's attack on GitHub, Turkey blocking Twitter, or various other
social-political indicators, this would make for a concrete
improvement from the perspective of public-research stakeholders. I
was also inspired by a recent paper that showed how linguistics can be
applied to sample social-political discourse to predict events. In the
absence of data for a country and service, if social indicators show
dissatisfaction with a policy to block the service, you can consider
this an entry to the database. Over time this sampling would lead to
differing discourses which could be used not just to predict anomalies
but to help identify why people use Tor, and what motivates the
censor. The only downside here is that I'm not fluent in multiple
spoken languages, so there may be some loss of context if the data
source is chosen arbitrarily.

When it comes to distinguishing reachability and interference, a
client may try to use Tor at a laundry center in an otherwise
`democratic` and `free` country. That location is independently
controlled by the owner, and if they decide to block Tor, that's OK.
That shouldn't be included. This type of event is unlikely to
influence results terribly anyway. I do wish the OONI Project could
help more here.

That just leaves the Tor Project developer stakeholder. I think I will
leave this stakeholder to its own devices. It's questionable to ask
someone who's being censored to run any test without some assurance of
their safety.

That's all from me for now. 
Danke
--leeroy
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Number of directory connections

2015-08-21 Thread l.m
Oh I see, so they happened before. I wasn't sure about that. In that
case the last consensus stored locally must have been many days old.
If that's the case, you would bootstrap from the dirauths, then use
your guard for tunneling later directory requests.

--leeroy



Re: [tor-dev] Number of directory connections

2015-08-21 Thread l.m
Hi,

UseEntryGuardsAsDirGuards defaults to 1 in torrc.

So if you did not change this default, you will use entry guards for
tunneling directory connections.
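For reference, the relevant torrc lines might look like this (a
minimal sketch showing the defaults as of tor 0.2.6; both options
default to 1, so listing them is redundant unless you want to change
them):

```
# Use entry guards for ordinary circuits (default)
UseEntryGuards 1
# Also use the entry guards as directory guards (default),
# so directory requests are tunneled through them
UseEntryGuardsAsDirGuards 1
```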

--leeroy

On 8/21/2015 at 7:46 AM, tordev...@safe-mail.net wrote:
Original Message
From: Mike Perry
Subject: [tor-dev] Proposal: Padding for netflow record resolution reduction
Date: Thu, 20 Aug 2015 21:12:54 -0700

 Tor clients currently maintain one TLS connection to their Guard
 node to carry actual application traffic, and make up to 3 additional
 connections to other nodes to retrieve directory information.

That doesn't seem to be the case. Tor from TBB 5.0 startup (directory
info was a couple of days old):

[GUARD] xxx.xxx.xxx.xxx:443  received: 17967   sent: 12865
[dizum] 194.109.206.212:443  received: 564549  sent: 19078
[Faravahar] 154.35.175.225:443   received: 253909  sent: 55616
[maatuska]  171.25.193.9:80  received: 132954  sent: 28519
[tor26] 86.59.21.38:443  received: 370630  sent: 80132

There were no other connection attempts.


Re: [tor-dev] [RFC] On new guard algorithms and data structures

2015-08-21 Thread l.m
Hi,

I'm curious what analysis has been done against a gateway adversary,
in particular dealing with the effectiveness of entry guards against
such an adversary. Part of me thinks it doesn't work at all for this
case, only because I've been studying such an adversary at the
AS-level, and what I see over time is disturbing. Any pointer to
related material?

thanks
--leeroy



Re: [tor-dev] moved from [Tor-censorship-events] Improving the censorship event detector.

2015-08-20 Thread l.m
Hi Joss,

Thank you for the fine paper. I look forward to reading it. Karsten
would be keen on it too (and maybe also your offer) if you haven't
already forwarded it to them. My interest in fixing it is (mostly)
recreational. I have some thoughts on how to proceed, but I'm not a
representative of tor project.

Regards
--leeroy

Hi,

These are well identified issues. We've been working here on a way to
improve the current filtering detection approach, and several of the
points above are things that we're actively hoping to work into our
approach. Differentiating 'filtering' from 'other events that affect
Tor usage' is tricky, and will most likely have to rely on other
measurements from outside Tor. We're currently looking at ways to
construct models of 'normal' behaviour in a way that incorporates
multiple sources of data.

We have a paper up on arXiv that might be of interest. I'd be
interested to be in touch with anyone who's actively working on this.
(We have code, and would be very happy to work on getting it into
production.) I've shared the paper with a few people directly, but not
here on the list.

arXiv link: http://arxiv.org/abs/1507.05819

We were looking at any anomalies, not only pure Tor-based filtering
events. For the broader analysis, significant shifts in Tor usage are
very interesting. It's therefore useful to detect a range of unusual
behaviours occurring around Tor, and have a set of criteria within
that to allow differentiating 'hard' filtering events from softer
anomalies occurring due to other factors.

Joss
-- 
Dr. Joss Wright | Research Fellow 
Oxford Internet Institute, University of Oxford
http://www.oii.ox.ac.uk/people/?id=176



[tor-dev] moved from [Tor-censorship-events] Improving the censorship event detector.

2015-08-20 Thread l.m
Hi,

As some of you may be aware, the mailing list for censorship events
was recently put on hold indefinitely. This appears to be due to the
detector producing too many false positives in its current
implementation. It also raises the question of the purpose of such a
mailing list. Who are the stakeholders? What do they gain from an
improvement?

I've read some of the documentation about this. As far as I can tell,
at a minimum an `improvement` in the event detector would be to:

- reduce false positives
- distinguish between Tor network reachability and Tor network
interference
- enable/promote client participation through the submission of
results from an ephemeral test (itself having provably correct and
valid properties)

In order to be of use to researchers it needs greater analysis
capability. Is it enough to say censorship is detected? By that point
the analysis is less interesting--because the discourse which itself
led to the Tor use is probably evident (or it becomes harder to find).
On the other hand, if a researcher is aware of some emerging trend,
they may predict the censorship event by predicting the use of Tor.
This may also be of use in the analysis of other events.

- should detect more than just censorship
- accept input from researchers
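As a rough sketch of the detection side (an illustrative baseline
method with made-up numbers and thresholds, not the current detector's
algorithm), anomalies in daily user counts could be flagged against a
trailing window:

```python
# Minimal sketch: flag days whose user count deviates strongly from a
# trailing-window baseline. Window size and threshold are illustrative
# assumptions, not parameters of the real detector.
from statistics import mean, stdev

def flag_anomalies(daily_counts, window=7, threshold=3.0):
    """Return indices of days whose count is more than `threshold`
    standard deviations from the mean of the preceding `window` days."""
    anomalies = []
    for i in range(window, len(daily_counts)):
        baseline = daily_counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(daily_counts[i] - mu) > threshold * sigma:
            anomalies.append(i)
    return anomalies

# Example: a sudden drop on the last day, as a blocking event might cause
counts = [1000, 1020, 990, 1010, 1005, 995, 1015, 1008, 200]
print(flag_anomalies(counts))  # → [8]
```

A real detector would also need the `accept input from researchers`
item above: external events (blocks, protests, outages) fed in to
separate censorship from other anomalies.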

From the tech reports it looks like Philipp has a plan for an
implementation of the tests noted above. It's only the format of the
results submission which is unknown.

- provide client test results to Tor Project developers
- make decision-related data available

Regards
--leeroy


Re: [tor-dev] [RFC] On new guard algorithms and data structures

2015-08-20 Thread l.m

 a) The network is not hostile and allows access just fine, but...

This came up before, didn't it? Nick mentioned that the question of
`network down` isn't the easiest to answer portably. Supposing such a
network could have its properties (like routes) enumerated, this might
provide another solution to (a). If the network changes in some
measurable way without also being immediately preceded by a bootstrap
(route shows next hop unreachable), then consider the network down and
schedule attempts to reestablish communication.

A key problem will be distinguishing this type of network from a
hostile network where access is just cut off, most likely to force a
bootstrap or guard-rotation-like activity. A warning would help here,
to indicate that the network was bootstrapped but for some past
interval(s) appeared down (and whether the client should check the
firewall or gateway, or try a bridge). A client should probably be
`aware` of some level of network access; otherwise all solutions are
naive.

 b) ...

Retrying guards is the crux of the problem. If you blindly retry
guards, even to prevent rotation, you eventually come to a hard place
where this will backfire badly, even if it works sometimes. Although I
don't think the client should rely on the OS (which may be
compromised).

--leeroy



Re: [tor-dev] [RFC] On new guard algorithms and data structures

2015-08-20 Thread l.m
 Thanks for the input!

Hey, no problem. Thank you for working on this too.

 Can you suggest a retry amount and time interval?

If the adversary is at the gateway and can do filtering, they pretty
much want some rotation, whatever the reason may be (to make you
choose a guard you've already chosen, or some other which may be
adversarial). In the case you describe, I would minimize the retry
count and maximize the interval size. Sorry for being unspecific. The
reason is that if your client has selected a guard but not connected,
the adversary may become aware of this selection. They can then
fingerprint your use of the guards when you change locations. It acts
as a foothold to launch further attacks. It means that even if the
adversary doesn't initially succeed against you, they can always
resume their efforts later.

A simplification might be to have the client explicitly state location
changes--more than just detecting IP changes as tor does. When you
start TBB you make a choice, or set it in the torrc. That's easier
than network awareness for tor. You're either at a trusted location or
an unknown one. If the location is trusted, then less skepticism is
needed when forced to choose a new guard, when deciding whether to
retry connecting, and how often.

If the location is untrusted, you expect some degree of third-party
interference is likely. You expect that rotation is unlikely to be
benign (you may already have a compromised guard). The guard which was
just chosen should be treated with skepticism, network interruption
and outage are likely suspicious, and any friendly guards can be used
to identify you if you change location while this gateway is still
used. Here you find the airport example. Are long-lived guards and the
default path selection implementation as secure here? Some analysis is
in order. Maybe short-lived (and not persistent) guards, and tuned
path selection, are as good as long-lived guards at the trusted
location? The whole question of whether the entry-guard concept can
work effectively in an untrusted location is being raised here.

It might be better to just default-drop guards between untrusted
location, while persisting guards at explicitly trusted locations.

Some symptoms of this adversary are: unable to bootstrap from
dirauths; guards which were working have now become unlisted/down;
client behaviors symptomatic of censorship which persist between
locations unless guards are dropped; traffic flows begin to favor a
particular guard over time after multiple rotations; multiple guards
become unreachable at the same time when another guard is chosen.

--leeroy


Re: [tor-dev] Get Stem and zoossh to talk to each other

2015-08-16 Thread l.m
Hi Philipp,

First, thank you for the input. I will certainly review your
discussions with other measurement team members. I'm sorry I wasn't
able to attend.

On the subject of databases and why they're a kludge. Databases
represent relationships between data as joins. Joins are a construct
which must be maintained by the database, and which must persist or be
enforced by integrity constraints. A database may be useful to store
data in its final form, and to represent relationships between such
entities. It requires computation in an interpreted language, and
joins are not represented using formal math. (In a manner of speaking,
database theory does encompass some abstract math objects in the form
of sets.) Storing data and representing known relationships is what a
database is designed for. Analyzing data and finding dynamic
relationships is something a database will never do well--it's outside
the intended use. Formal (mathematical) methods for representing
semantics can always be proved correct using rigorous methods, and
will always be faster. Imagine if Tor's path selection algorithm were
implemented as a database. It would work, but the math-derived
implementation will also be vastly superior.

Allow me to clarify further. The formal language described here is
used to derive subset languages. In a manner of speaking, the base
language is a representation of Tor's network communication. By adding
additional grammar to this language, a researcher can formally define
the semantic relationships that hold particular interest or meaning.
One researcher who is only interested in Onionoo-like applications
(which is me in this case, not Karsten) would create a grammar
describing such content. Another who is interested in a particular
class of analysis might have another grammar. Right now my objective
in the forks is to make this possible (it isn't currently).

The advantage is it's easy to maintain for researchers, easy to
maintain for developers, easy to create proofs on the system, easy to
implement formal validation methods (which you may really want for
some important classes of research).

So there's really not a language to learn per se. It's a formal method
of making all that Tor-network gibberish make sense. Once you've
described the semantic meaning it's *all* automatic. Want that
semantic relationship to build a shiny viz in R--automatic. Want those
semantics to trigger an email for censorship--automatic. Would you
rather have a report and a graph describing nodes involved in a
potential attack--automatic. Would you like to create JSON
representations of related entities--automatic.
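As a toy illustration of the idea (entirely hypothetical; this is not
the fork's actual design, and the line formats below only loosely
mimic Tor server descriptors), a "grammar" can be a set of rules
mapping line patterns to semantic actions:

```python
import re

# Toy rules: each maps a line pattern to a semantic handler.
# The patterns are illustrative, not the real descriptor grammar.
RULES = [
    (re.compile(r"^router (\S+) (\S+) (\d+)"),
     lambda m, out: out.update(nickname=m.group(1), address=m.group(2))),
    (re.compile(r"^bandwidth (\d+) (\d+) (\d+)"),
     lambda m, out: out.update(observed_bw=int(m.group(3)))),
]

def interpret(text):
    """Apply each matching rule to each line; return the semantics."""
    out = {}
    for line in text.splitlines():
        for pattern, action in RULES:
            m = pattern.match(line)
            if m:
                action(m, out)
    return out

sample = "router demo 198.51.100.7 9001\nbandwidth 1024 2048 512\n"
print(interpret(sample))
# → {'nickname': 'demo', 'address': '198.51.100.7', 'observed_bw': 512}
```

Swapping in a different rule set (a different "subset language") is
what would let one researcher target Onionoo-like content and another
target their own class of analysis, without touching the interpreter.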

Strangely, in the history of analysis at Tor Project, no one has tried
this, and it is not implemented in any reusable/presentable form. I
very much doubt a potential sponsor would be willing to sponsor work
on metrics-lib, because it's basically useless for analysis (same as
the others I've mentioned). A researcher has to do too much work to
perform analysis to see Tor Project as having contributed to making it
easy.

I hope that clears things up about having to learn a language.
Although that's also possible, the techniques are not being used here
to create a programming language. The techniques are being used to
perform linguistics on Tor data. It's possible, however, to extend
this work to define a language for programming, but that's not the
primary objective. (An implementation such as I describe would make
that possible in a formal way--which is good, of course.)

Regards
--leeroy



Re: [tor-dev] Get Stem and zoossh to talk to each other

2015-07-31 Thread l.m
Hi Philipp,

I know I've already mentioned some thoughts on this subject. I would
be interested in your thoughts on the types of challenging questions
such a hypothetical DSL might answer. I've already put some effort
into this (forking metrics-lib), but I'm still new to working with Tor
network data. There's around a terabyte of it, and I can't possibly
imagine every interesting scenario at this point. Right now I'm trying
to be as flexible (and general) as possible in the implementation.
Besides what you've already mentioned, what other types of uses do you
envision? I'm interested in being able to answer questions which can
only be answered by looking at macroscopic details over time--things
like how to draw interesting facts from performance data, and how to
improve collection (signalling, messaging, new metrics, etc.) towards
making attacks more visible.

Areas I'm fuzzy on include torflow data, mostly because up to a couple
of weeks ago I didn't know there *was* a spec (and instead treated it
as a black box).

If there are common and challenging questions that are more specific
than just 'dive in and explore', please do be creative.

Thanks
--leeroy



Re: [tor-dev] Is anyone using tor-fw-helper? (Was Re: BOINC-based Tor wrapper)

2015-07-23 Thread l.m
It's probably for the best. The implementation of UPnP and NAT-PMP is
frequently done incorrectly. Many implementations simply break the
firewall's security or leak identifying information by enabling the
feature. I once saw a case which opened port 0 every time UPnP was
used. Not closed, or stealth, but open. Encouraging running a relay is
all good, but doing so without being able to account for
implementations which introduce security problems is risky.

--leeroy

On 7/23/2015 at 2:26 PM, Jacob Appelbaum  wrote:

 Also - does this mean that after many many years... that this new
 version of tor-fw-helper be enabled by default at build time? Pretty
 please? :-)

 Unlikely, AFAIK the general plan was to have it as a separate
 package.


That is really a major bummer if so - we should be shipping this code
and enabling it by default. If a user wants to run a relay, they
should only have to express that intent with a single button.

All the best,
Jake


[tor-dev] Commit broken code to otherwise working master

2015-07-20 Thread l.m
Hi,

Is it normal for a core developer to want to commit broken code to
master? I mean, if the code is known to be completely broken, wouldn't
it be better to fix it before committing? Master is a basis for
working code, isn't it?

--leeroy


Re: [tor-dev] tor#16518: Read-Only Filesystem Error opening tor lockfile in 0.2.6.9 but not 0.2.5.12

2015-07-08 Thread l.m
Even on a read-only filesystem, tor will attempt to fix folder
permissions using chmod. I find it unusual that I don't see this in
your logs.

--leeroy


Re: [tor-dev] tor#16518: Read-Only Filesystem Error opening tor lockfile in 0.2.6.9 but not 0.2.5.12

2015-07-07 Thread l.m
Sounds like access control gone wrong: an older version works but a
newer version fails, and permissions on the filesystem look fine from
the mount output. So do you use access control--AppArmor, SELinux,
grsecurity, fsprotect, bilibop, etc.? In particular, the tor package
mentioned in your ticket includes an AppArmor profile and suggests
apparmor-utils.

Also try checking your system logs.
--leeroy



Re: [tor-dev] onionoo: new field: measured flag (#16020)

2015-07-06 Thread l.m
Hi nusenu,

Since you posted to tor-dev I guess you're asking for community input
too. About your use cases: Onionoo is for obtaining data about running
relays, not Tor network health or BWAuth activity. You can answer this
question by looking at the latest consensus data from CollecTor and
counting measured and unmeasured relays. So this use case is out of
the scope of Onionoo. Your second use case doesn't help the operator
debug.
What does not being measured mean in the context posed by the ticket
(whichever, boolean or int)?

a) the relay was not measured
b) the relay was supposed to be measured but didn't respond
c) the relay was supposed to be measured but the measurement was
blocked due to network interference
d) the relay was measured but the threshold number of measurements was
not obtained
... and so on. Not being measured doesn't help with debugging. The
only thing which matters is being `measured`, in which case a relay
has consensus weight.

The only data which can be relied upon, and which is also a property
of a running relay that has been measured is consensus weight. You can
track this using the Weights document.
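As an illustration (with made-up data; the JSON below is a simplified
stand-in shaped like part of an Onionoo details reply, not the full
document spec), consensus weight can be read out of such a document
like this:

```python
import json

# Simplified stand-in for an Onionoo details reply (not real data).
sample_reply = """
{
  "relays": [
    {"nickname": "demoRelay",
     "fingerprint": "0000000000000000000000000000000000000000",
     "running": true,
     "consensus_weight": 5400}
  ]
}
"""

def consensus_weights(reply_text):
    """Map relay nickname -> consensus weight from a details document."""
    doc = json.loads(reply_text)
    return {r["nickname"]: r.get("consensus_weight", 0)
            for r in doc["relays"]}

print(consensus_weights(sample_reply))  # → {'demoRelay': 5400}
```

The same extraction applied across the history in the Weights document
is what would let an operator track how their relay's weight changes
after being measured.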

I hope that helps
--leeroy


Re: [tor-dev] Bi-directional families in Onionoo and consensus weight as measure of relayed bandwidth

2015-07-02 Thread l.m
The major problem with ticket 16276 is that it isn't a fix (as you
seek here). It just moves the current implementation into the details
document rather than being done in the node index. I don't think you
*can* fix it as you seek. Bi-directionality isn't an enforceable
property. The spec makes no guarantee. The internet makes no
guarantee. You might as well remove the family property entirely as
try to do what you suggest.

What you propose isn't possible given the properties of Tor's network.
The best you can do is take a measurement and hope it applies to all
views of the network. I made some comments alluding to this in 16276.
I would happily work on the ticket if it actually presented a
solution.

Comments appreciated.
--leeroy


Re: [tor-dev] Bi-directional families in Onionoo and consensus weight as measure of relayed bandwidth

2015-07-02 Thread l.m

 One proposal I've liked is to socially discourage asymmetrical
 families by giving them bad badges on Roster. If A says B is part of
 their family but B doesn't reciprocate, A gets a penalty to their
 bandwidth points.

 Maybe don't go as far as penalizing relay operators for attempting to
 configure a relay family and not succeeding at it. Keeping family
 configurations updated is not exactly trivial. And if the effect is
 that relay operators stop configuring families at all, that's not
 what we wanted, either.

 It would be good to point out configuration problems with family
 settings and help operators debug them easily.

If my understanding of this is correct, doesn't this also have
problems with proof-of-operator? That is, exactly as in the current
case, there's no inherently reliable way to prove that if A declares B
and B doesn't declare A, there *should* be a bi-directional relation.
Other properties of a node/relay only go so far as proof of relation.
The existence of a bi-directional relation is only taken as a hint for
path selection: given that X and Y are chosen and have a
bi-directional relation, discard one based on which was chosen first.

 I started implementing something here and will report back on the
 ticket as soon as I'm more convinced that it works.

No hints?

Regards
--leeroy


Re: [tor-dev] Bi-directional families in Onionoo and consensus weight as measure of relayed bandwidth

2015-07-02 Thread l.m
So I guess I should go back to the original issue posted in this
thread. It hasn't been addressed whether the (bi-directional family)
concern is actually about data from Onionoo or about operators that
just don't declare families. The view from Onionoo is based on
consensus, taking into consideration caching and other network
reliability factors affecting a view of Tor. The view from tor itself
is far more difficult to prove. Currently tor will automatically
reject implied family relations based on IP block similarity.

Regards
--leeroy


Re: [tor-dev] How bad is not having 'enable-ec_nistp_64_gcc_128' really? (OpenBSD)

2015-06-22 Thread l.m
Hi,

Last I heard, the NIST groups are rubbish; you're better off without
them for security. Am I wrong?

--leeroy



Re: [tor-dev] onionoo: bug in family set detection?

2015-06-02 Thread l.m
Hello,

DirAuths can cache multiple versions of the descriptor and serve what
appears to be the newest in a given consensus interval. This is
coupled with routers publishing descriptors at least every 18 hours,
but potentially sooner. What you describe doesn't appear to be a bug
in Onionoo, because a consensus exists for which the family is
declared as you describe in the last known good descriptor.

At the moment you describe, Atlas doesn't show:

[1] last published 2015-06-02 17:44:27 (and [2] may still have entered
hibernation)
[2] last published 2015-06-02 09:40:45 (and [1] may not be running, or
may have a last known good descriptor at the DirAuths)

This doesn't take into account the possibility of a router hibernating
(info not always available) at times between consensuses taken or
valid. A router can be running but unavailable due to accounting. This
looks like a result of cached descriptors or router accounting. The
data provided to Onionoo from metrics-lib appears accurate as far as
network status. At least that's how it looks from a preliminary glance
at the data and spec--although please do your own verification.

--leeroy



Re: [tor-dev] valid MyFamily syntax variations ('$FP=nick' ?)

2015-05-31 Thread l.m
Hi nusenu,

The spec isn't done :P Seriously though, no, it's not a bug. If you
check nodelist [0] you'll see that this type of hex-encoded nickname
is normal when generating a descriptor. If you check CollecTor history
for the node you mention [1] you'll see the result of building a
descriptor. Metrics-lib parses this out in ServerDescriptorImpl [2],
which then shows up in Atlas via Onionoo in DetailsStatus [3] during
NodeDetailsStatusUpdater [4].

I hope that clears things up. (I know I skipped a full stack trace,
but it's C and Java--cut me some slack.)
--leeroy

[0] https://gitweb.torproject.org/tor.git/tree/src/or/nodelist.c#n499
[1]
https://collector.torproject.org/recent/relay-descriptors/server-descriptors/2015-05-31-20-05-49-server-descriptors
[2]
https://gitweb.torproject.org/metrics-lib.git/tree/src/org/torproject/descriptor/impl/ServerDescriptorImpl.java#n341
[3]
https://gitweb.torproject.org/onionoo.git/tree/src/main/java/org/torproject/onionoo/docs/DetailsStatus.java
[4]
https://gitweb.torproject.org/onionoo.git/tree/src/main/java/org/torproject/onionoo/updater/NodeDetailsStatusUpdater.java



Re: [tor-dev] How to connect test tor network from remote host.

2015-05-30 Thread l.m
Hi,

 yes i can, here is nmap output
 *
 Starting Nmap 6.47 ( http://nmap.org ) at 2015-05-30 19:05 CEST
 Nmap scan report for 10.0.2.11
 Host is up (0.00070s latency).
 PORT STATE SERVICE
 7000/tcp open  afs3-fileserver
 7001/tcp open  afs3-callback
 7002/tcp open  afs3-prserver

Did you get the same result testing ports 5000, 5001, 5002? When the
bootstrap fails using the TBB launcher you see the 'Tor Network
Settings' dialog and an error message. You will see the 'Copy Tor Log
to Clipboard' button has an . What are the contents of the error
message/tor log?

--leeroy


Re: [tor-dev] How to connect test tor network from remote host.

2015-05-30 Thread l.m
Hello,

 I set up a test Tor network on my laptop using chutney with the
 basic-min configuration, and I also configured tor-browser with this
 test network to browse the internet.

 But now I want to bootstrap the test Tor network inside a Virtual
 Machine.

 (Virtual Machine 1 = (3 AUTHORITY + 1 RELAY) IP= 10.0.2.11
 (Virtual Machine 2 = (tor-browser) IP= 10.0.2.9
 But this time tor-browser is not bootstrapping. I can ping from (11
 to 9)

On VM2, if you run a nmap scan of VM1, such as: nmap 10.0.2.11 -p
7000,7001,7002

Does it show the ports as open?

--leeroy


[tor-dev] Is it okay to use Debian Jessie for development?

2015-05-28 Thread l.m
Hello,

I probably should have asked this sooner. How quickly does tor project
upgrade to the latest Debian stable on development machines [0] ?
Thanks in advance.

--leeroy

[0] https://db.torproject.org/machines.cgi


[tor-dev] shipping with fallbackdir sources

2015-05-22 Thread l.m
Hi, a couple questions about fallback directories.

On 4/17/15, Peter Palfrader  wrote:
 We want them to have been around and using their current key,
 address, and port for a while now (120 days), and have been running,
 a guard, and a v2 directory mirror for most of that time.

In the script (and proposal 206) a candidate for fallback uses the
ORPort. When is the HTTP directory connection used? Is it just for
backwards compatibility?

Does the directory address being different not matter? Can a candidate
only be or-address+orport?

If the document is signed, why is bootstrapping using the
or-address+or-port for candidates a key consideration? Aren't clients
going to switch to guards and tunneled dir-connections anyway (unless
configured otherwise)? In the worst case the client would still use
bridges if Tor is blocked--and it doesn't look like the fallback dirs
are being considered for this case anyway.

Is there more documentation somewhere?

Is there more documentation somewhere?

Thank-you

--leeroy


Re: [tor-dev] Listen to Tor

2015-05-22 Thread l.m
Client perspective--Maybe listen to controller events? Integrate exit
map for audible notification of impending doom.
Exit perspective--Crying kittens, non-stop

On 5/22/2015 at 6:33 PM, Kenneth Freeman  wrote:
On 05/22/2015 04:27 PM, l.m wrote:

 So...wouldn't the torified traffic sound like...white noise? I can
 fall asleep to that.

In and of itself a sufficient condition.


Re: [tor-dev] Listen to Tor

2015-05-22 Thread l.m

So...wouldn't the torified traffic sound like...white noise? I can
fall asleep to that.

On 5/22/2015 at 6:09 PM, Kenneth Freeman wrote:
On 05/21/2015 07:29 AM, Michael Rogers wrote:

 Hi Kenneth,
 
 What a cool idea! I played around with sonification of network traffic
 once upon a time, using kismet, tcpdump and fluidsynth glued together
 with a bit of perl. You can listen to the results here:
 
 http://sonification.eu/

Seriously cool.

 To avoid the privacy issues with monitoring exit node traffic, perhaps
 you could run this on the client's LAN, producing two pieces of music,
 one for unanonymised traffic and the other for the same traffic passed
 through Tor? Then we'd know what privacy sounds like. :-)

I hadn't considered the identified vis-à-vis anonymous traffic
differential before. Interesting!

In a sense I'd like to do with Tor what the same sense & sensibility has
already done with Wi-Fi with digital hearing aids: code transformation.

http://www.newscientist.com/article/mg22429952.300-the-man-who-can-hear-wifi-wherever-he-walks.html#.VV-hR6G8o4Q

I haven't any skill set for coding whatsoever, but a DJ at KRBX Radio
Boise has expressed some interest (I wrote the Wikipedia article on the
Treefort Music Fest), and I've spoken on the air about Tor, so an
acoustic rendition of Tor is not beyond reason.


Re: [tor-dev] onionoo resource requirements

2015-05-02 Thread l.m
Hi Luke,

Django (and by implication, python) are an accepted technology at tor,
but as much as I wish it would be different, the tor web infrastructure
is still based on python 2.7 (basically, you can only depend on whatever
is in wheezy and wheezy-backports if you want something to run on tor's
infrastructure). Of course if you don't intend for your project to ever
replace tor's own onionoo deployment, that doesn't matter.

Thanks for pointing that out. I think that won't be a big problem as
this isn't intended to be a replacement. If it ends up being a fruitful
experiment then it's a success. It's a success if it demonstrates some
improvement over the currently deployed design. If I stay away from
python3 then the main difference is the use of
postgresql+pgbouncer/pgpool. My instincts are telling me that python3 is
needed for aiohttp to demonstrate that asynchronous io, lightweight
concurrency, and various database optimizations can yield improvements.
If the results end up meriting reproduction then virtual environments
can be used for testing without breaking existing infrastructure.
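The lightweight-concurrency claim above is easy to demonstrate with stdlib asyncio alone, no aiohttp required (though this does need python3, per the constraint discussed). This is only an illustration of the concurrency model, not the proposed onionoo design:

```python
import asyncio
import time

async def fake_request(i):
    # Simulate an I/O-bound request (a database or HTTP call) with a
    # non-blocking sleep; the event loop runs other tasks meanwhile.
    await asyncio.sleep(0.1)
    return i

async def main(n):
    # All n "requests" share one thread. Total wall time stays near
    # 0.1s instead of n * 0.1s, because the waits overlap.
    return await asyncio.gather(*(fake_request(i) for i in range(n)))

start = time.monotonic()
results = asyncio.run(main(50))
elapsed = time.monotonic() - start
print(len(results), round(elapsed, 1))
```

Fifty sequential blocking calls would take ~5 seconds; here they complete together in roughly the time of one.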

PS: I'm also going to take this opportunity to plug my onionoo client
library that you can use to check that your onionoo clone performs to
spec ;-) https://github.com/duk3luk3/onion-py

I saw that. I'll definitely keep it in mind for comparison. Thanks
again.

--leeroy 


Re: [tor-dev] onionoo resource requirements

2015-04-27 Thread l.m
Hi Karsten,

Not sure what frameworks you have in mind. But I'm happy to hear more
about frameworks that would make Onionoo easier to extend and not
perform worse (or even better) than now. If you have something in
mind, please say so.

Thanks for the clarification. I'm not against the choice of Java, nor
am I claiming there are better choices. I have fond memories of Java.
In particular I've been working a lot with Django recently. I didn't
want to redo work that may have already been done. I was thinking of
some recreational uses of a server. I started looking at the onionoo
documentation and my curiosity was piqued, precisely because the first
thing I thought of was reusing a cloned server for, well, an
onionoo-clone.

The JSON formatted files could be used as fixtures for setup. The two
apps could be run separately as you've already mentioned.

The other development specifics are:
nginx-gunicorn(greenlets/aiohttp)
postgresql-pgbouncer

Is it an experiment worth pursuing? Your thoughts are appreciated.
Thanks in advance.

--leeroy


Re: [tor-dev] onionoo resource requirements

2015-04-25 Thread l.m
Hi,

Actually I've been meaning to ask a question related to this. I've
been wondering if, during the development of Onionoo, you considered
any other frameworks? I'm not familiar with the history of Onionoo so
I don't know if you made the choice based on some constraint. I read
the design doc which made me curious.

--leeroy


Re: [tor-dev] Where can I find info for TunnelDirConns 0|1 ?

2015-03-07 Thread l.m
On 3/7/2015 at 1:49 AM, HOANG NGUYEN PHONG wrote:
 Dear all,
 I read a discussion about "How can Tor use a one hop circuit to a
 directory server during initial bootstrap?" here. However, why can I
 not find TunnelDirConns 0|1 in torproject.org/docs/tor-manual.html.en?
 Is the feature already removed or replaced with another name? Next,
 may I ask about what Weasel mentioned in his answer to the question:
 "This so-called tunneled connection doesn't provide anonymity. It only
 provides confidentiality, i.e. nobody listening on your network can
 know exactly what you fetched." However, in Tor directory protocol 3,
 there is a sentence "all directory information is uploaded and
 downloaded with HTTP", which means no encryption for what we fetch, so
 where is the confidentiality?
 Best Regards.

Hi,

The option was removed in 0.2.5.x in response to ticket 10849. All
directory connections are tunneled by default using the directory
ORPort. If you're bootstrapping for the first time it won't matter,
because the directory authorities are well known. If those are blocked,
you would need to use bridges.
--leeroy


Re: [tor-dev] bittorrent based pluggable transport

2015-03-04 Thread l.m
 It's a mistake to say that if something doesn't
 work in China (or any other single concrete 
 threat environment), then it's useless.

Out of respect for the work you've done I'm not going to assume you're
deliberately taking my words out of context.

I'm concerned that this PT exchanges one threat for another and is
thought to be a good fit for integration with Tor. It's one thing to
use Google/Azure/etc where there are legitimate uses. It's another to
trade the threat of secure encrypted traffic (with a crypto-secure PRNG
in most PT cases) for an option that utilizes insecure obfuscation of
file transfer together with a server that utilizes (presumably) secure
communication. In one you increase the vulnerability surface of the
censored user; in the other the only threat is this unknown
communication, which can be easily blocked. I digress.

Allow me to attack the problem head on then. What do I know about
bittorrent? Not much. I know, for any user of bittorrent, the infohash
is easily derivable and so is the peer list. So, if you don't
participate in the swarm, the intersection of peer lists makes it less
difficult to find the needle in the haystack that is the PT-server.
Just look at the unique peers across multiple users of the PT, who each
create unique torrent swarm fingerprints corresponding to the infohash
of the files shared. So you, the PT-server, must participate in the
swarm.
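The intersection attack sketched above, where the PT-server is the one peer common to the swarms of many unrelated PT users, reduces to a plain set intersection. All infohashes and peer addresses below are made up for illustration:

```python
# Toy model: each PT user joins a different torrent swarm (distinct
# infohash, distinct peers), but every swarm contains the PT-server.
# Intersecting the observed peer lists isolates it.
swarms = {
    "user_a_infohash": {"1.2.3.4", "5.6.7.8", "9.9.9.9"},
    "user_b_infohash": {"2.3.4.5", "6.7.8.9", "9.9.9.9"},
    "user_c_infohash": {"3.4.5.6", "7.8.9.1", "9.9.9.9"},
}

def common_peers(peer_lists):
    """Return peers present in every observed swarm."""
    it = iter(peer_lists)
    common = set(next(it))
    for peers in it:
        common &= peers
    return common

suspects = common_peers(swarms.values())
print(suspects)  # the lone common peer is the PT-server candidate
```

The adversary needs only the (public) peer lists, which is why the PT-server cannot afford to be the single non-swarm peer each client talks to.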

Suppose then that you, the PT-server, do participate in the swarm.
Long transfers with peers who provide hash-failing pieces break the BT
spec. The adversary just needs to force peer list rotation. How can
this be done? Well, the adversary knows the infohash and the peer list
to expect. So, flip a bit, as you put it. Only do it for all peers who
cross the country's firewall. If the client is indeed running a
bittorrent client, sit back and watch the churn. Only something stands
out: there's a peer, you, the PT-server, who is ignoring the ban
fingerprint. This can be done in either direction of piece sharing.
Because you, the PT-user, differ from the spec, you stand out.

Another case. The adversary can monitor the bitfield of the peer
connected to the PT-server. When the torrent is complete the client
will disconnect from all peers and take the seed role. Only there's a
problem. They're still transferring data with the PT-server as if they
were a leech. It's not enough to change torrent swarms because it
would be immediately apparent that they re-establish communication
with the PT-server, crossing swarms.
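The seed-role anomaly above comes down to a simple check: a peer whose bitfield says the torrent is complete should not keep receiving pieces. The field names in this sketch are hypothetical, not any real client's data model:

```python
def looks_like_covert_channel(peer):
    """peer: dict with 'bitfield' (one bool per piece) and a counter of
    bytes downloaded after completion. A finished torrent that keeps
    downloading from one peer breaks the leech-to-seed state machine."""
    complete = all(peer["bitfield"])
    return complete and peer["bytes_downloaded_since_complete"] > 0

# Hypothetical observations: a well-behaved seed vs. a PT user still
# exchanging "pieces" with the PT-server after completion.
honest_seed = {"bitfield": [True] * 8, "bytes_downloaded_since_complete": 0}
pt_user = {"bitfield": [True] * 8, "bytes_downloaded_since_complete": 4096}
print(looks_like_covert_channel(honest_seed), looks_like_covert_channel(pt_user))
```

A firewall that tracks per-peer bitfields gets this signal passively, without ever touching the traffic.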

A final thought. It's one thing for an adversary to have no way to
attack a communication other than blocking it entirely. This would be
the case with crypto-secure communications. Bittorrent doesn't fall
into this category, especially when facing a state-level adversary. So
your PT communication would need to be crypto-secure (I'm not saying
it's not). The caveat is that if one were to pack encrypted data within
BT-spec obfuscation, that obfuscation had better not ever fail. If it
did, the user of the PT can be proven to be hiding data via
steganography in hash-failing pieces (as you've mentioned). This can
provide justification for an accusation of a state offense. This would
be different from packing data where no hash fail is apparent, as in
regular steganography minus bittorrent. Video streaming or audio
streaming combined with data hiding, and without any checksum, is a
different beast than video transfer over BT.

tl;dr -- It's a novel idea to prevent detection of the PT-server by
tunneling in some other traffic.
--leeroy


Re: [tor-dev] Proposal 242: Better performance and usability for the MyFamily option

2015-03-02 Thread l.m
Hi,

If I understand the factors correctly, as things stand currently, these
are the pros and cons of family use with respect to the *security* of
Tor.

Pros
1 - Prevents information disclosure in case of using related relays too
much (relay configuration or seizure of hardware).

Cons
2 - It's not used by operators with malicious intent.
3 - Reduces diversity in choosing non-malicious relays, assuming all
relays in a family have similar performance/bandwidth. If the metrics
vary widely in the first place there's the chance it won't matter.
4 - Allows use of nickname and fingerprint.
5 - Can be used arbitrarily by unrelated nodes to influence path
selection.
6 - Clients already disobey family under non-deterministic
circumstances (not reliably reproduced, but I have measured it).

The proposed changes, in absence of any errata, are an improvement for
enforcing a bidirectional relationship. For this reason it mitigates
(4) and (5). If arbitrary nodes cannot simply join a family it also has
less of an impact on (3) than when the tickets were originally filed.

Towards mitigating (3) it might be worth considering the AS of the
related relays. I know this increases computation cost, so it's more of
a thought than even a suggestion. If the AS differs across related
relays, you might consider this (honest?) operator a safe choice for
not setting the family. That is, the family might be better based on
similarity of AS, supposing that similarity would make it easier to
compromise usage data. Another thought is to consider families as a
single node for the purpose of computing network diversity. If a
situation occurs where diversity is low it would be useful to not
consider families, or to reconsider the so-called safe families. You
might call this a discussion towards relaxing the family definition
(slightly) in favor of increasing diversity, while staying within the
changes of Proposal 242.
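One reading of the AS suggestion above: treat an operator's relays in distinct ASes as already diverse, and flag only the families whose relays share an AS as the case where the family should count as a single node for diversity. A toy grouping, with invented fingerprints, operators, and AS numbers:

```python
from collections import defaultdict

# Hypothetical relay records; 'asn' is the autonomous system number.
relays = [
    {"fp": "AAAA", "operator": "op1", "asn": 64500},
    {"fp": "BBBB", "operator": "op1", "asn": 64501},  # op1 spans two ASes
    {"fp": "CCCC", "operator": "op2", "asn": 64502},
    {"fp": "DDDD", "operator": "op2", "asn": 64502},  # op2 shares one AS
]

def families_needing_declaration(relays):
    """Operators whose relays share an AS: the risky case where the
    family should be treated as a single node for diversity purposes."""
    by_op = defaultdict(list)
    for r in relays:
        by_op[r["operator"]].append(r["asn"])
    return {op for op, asns in by_op.items() if len(set(asns)) < len(asns)}

print(families_needing_declaration(relays))
```

Here op1's relays sit in different ASes and could be considered safe, while op2's co-located relays are the family that matters for diversity.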
--leeroy


Re: [tor-dev] bittorrent based pluggable transport

2015-03-02 Thread l.m
Hi,

I'm wondering about a particular case--let me explain. From your
threat model you assume that the adversary has suspicions about
encrypted traffic and may block them without strong justification. You
also take as given that the adversary may be state-level. From the
adversary objective this is because the adversary wants to know who
and what this communication is about. In the limitations you state
that the adversary (counter-intuitively) has strong socio-economic
reasons to not block bittorent. It does not follow... In China it's
not uncommon to hijack torrent sites or ban them entirely. They
perform mitm even for encrypted sites like github. They have a
one-strike policy that they don't normally enforce regarding file
sharing. The golden shield is sophisticated enough to correlate the
use of a bridge across multiple users. Which means you need strength
in numbers. Then again, outside China, bittorrent is commonly
subjected to traffic shaping. I'm unclear about how this helps the
censored user. Under such circumstance wouldn't it be possible to have
a common peer show up in multiple unique torrent swarms?
--leeroy


Re: [tor-dev] [Proposal 241] Resisting guard-turnover attacks [DRAFT]

2015-02-02 Thread l.m

Nick Mathewson wrote:
If the number of guards we have *tried* to connect to in
the last PERIOD days is greater than 
CANDIDATE_THRESHOLD, do not attempt to connect
to any other guards; only attempt the ones we have 
previously *tried* to connect to.

Torrc allows the use of multiple guards with NumEntryGuards > 1. So if
an adversary hasn't yet managed to compromise the guard this might lead
to a DOS. The adversary may be interested in getting you to connect to
*one* malicious guard at a time. It might be worth considering
performing a reachability test using a built circuit. I mention built
because Tor wouldn't know if the connectable guard is adversary
controlled. It might also be worth considering guards that successfully
connect after a failure as suspect for a time. That is, they shouldn't
be optimistically added to the list of guards unless absolutely needed.
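The quoted CANDIDATE_THRESHOLD rule, plus the suggestion above to treat guards that reappear after failure as suspect for a time, could be tracked roughly as follows. The class, the threshold value, and the quarantine period are all invented for this sketch, not Tor's actual guard code:

```python
import time

CANDIDATE_THRESHOLD = 20      # max distinct guards tried in PERIOD
SUSPECT_QUARANTINE = 3600.0   # seconds a recently-failed guard stays suspect

class GuardTracker:
    def __init__(self):
        self.tried = set()    # guards attempted within the period
        self.failed_at = {}   # guard -> time of last failure

    def may_try(self, guard, now=None):
        now = time.time() if now is None else now
        # Once too many guards have been tried, never expand the set.
        if guard not in self.tried and len(self.tried) >= CANDIDATE_THRESHOLD:
            return False
        # A guard that recently failed is not optimistically re-added.
        last_fail = self.failed_at.get(guard)
        if last_fail is not None and now - last_fail < SUSPECT_QUARANTINE:
            return False
        return True

    def record_attempt(self, guard, ok, now=None):
        now = time.time() if now is None else now
        self.tried.add(guard)
        if not ok:
            self.failed_at[guard] = now

t = GuardTracker()
for i in range(CANDIDATE_THRESHOLD):
    t.record_attempt("guard%d" % i, ok=False, now=0.0)
print(t.may_try("guard_new", now=0.0))   # candidate set exhausted
print(t.may_try("guard0", now=10.0))     # recent failure: still suspect
print(t.may_try("guard0", now=7200.0))   # quarantine elapsed
```

The quarantine check is the "suspect for a time" idea: a guard that comes back after failing must wait out the window before being trusted again.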

We need to make this play nicely with mobility.  When
a user has three guards on port 9001 and they move
to a firewall that only allows 80/443, we'd prefer that 
they not simply grind to a halt.  If nodes are configured
to stop when too many of their guards have gone away,
this will confuse them.

If 80/443 are the least restricted types of connection maybe it would
be a good idea to try n of those before stopping. If they work but
other guards fail it might be a symptom of a fascist firewall to warn
about.

If people need to turn FascistFirewall on and off, great.
But if they just clear their state file as a workaround,
that's not so good.

If FascistFirewall is 1 with some ports specified, and the state isn't
empty, don't consider the disjoint guards as contributing to thresholds?

If we could tie guard choice to location, that would
help a great deal, but we'd need to answer the question,
Where am I on the network, which is not so easy to
do passively if you're behind a NAT.

If network location were the combination of
physicaladdr+dnsname+subnet would it be so bad if you're behind a NAT
and dnsname were unavailable?

--leeroy


[tor-dev] Website Fingerprinting Defense via Traffic Splitting

2015-01-22 Thread l.m
Daniel Forster wrote:
 Hello Guys,

 it would be great if I could get a few opinions regarding my
 upcoming master thesis topic.

 My supervisor is Andriy Panchenko (you may know some of his work
 from Mike Perry's critique on website fingerprinting attacks).
 As a defense, we'd like to experiment with traffic splitting (like
 conflux- split traffic over multiple entry guards, but already
 merging at the middle relay) and padding.

 I know that the no. of entry guards got decreased from three to one.
 May it be worth the research or is the approach heading in a not so
 great direction w.r.t. the Tor Project's only one entry node
 decision? Or, actually, what do you think in general..?
I think it will be interesting to see how a client of Tor can be
fingerprinted by the guards chosen, in particular if the circuit length
tends to be three and you perform a merge at the middle node. By
watching the incoming n-tuple of guards, having chosen in advance the
role of middle hop, can clients be identified through correlation with
exit traffic? I'm aware that the choice of guards can already make a
client fingerprintable--but how much more so in this case? This might
not be the adversary you're intending to address but it is still a
consequence. Unless I'm reading your proposal incorrectly.
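A back-of-the-envelope calculation shows why the guard n-tuple is so identifying at a merge point: the number of possible guard sets dwarfs any plausible client population. The figures below are illustrative only:

```python
from math import comb

# With g total guards and k guards per client, there are C(g, k)
# possible guard sets. Even for small k this vastly exceeds the
# client population, so the tuple a colluding middle relay observes
# is close to a unique client identifier.
g, k, clients = 2000, 3, 2_000_000
tuples = comb(g, k)
print(tuples, tuples > clients)
```

With ~1.3 billion possible 3-tuples against a couple million clients, almost every observed guard combination belongs to exactly one client.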

How might the possible threat be addressed? Perhaps a more robust
implementation of network coding and a revisit of circuit length. I'm
just throwing out thoughts. I too am interested in the application of
network coding to the goals of Tor. I'll be eagerly awaiting your
results. Good luck and thanks.

-- leeroy
