Re: [tor-dev] Interoperation with libp2p

2021-12-07 Thread Jeff Burdges


> On 7 Dec 2021, at 19:26, Jeff Burdges  wrote:
> I advise against allowing any libp2p cruft into tor itself though.

Among the many reasons: I'd expect libp2p to be a nightmare of downgrade 
attacks, given the amount of badly rolled stuff they must still support, like 
their dangerous key exchange SECIO built on the legacy curve secp256k1, but 
it goes deeper than that.

Jeff


Re: [tor-dev] Interoperation with libp2p

2021-12-07 Thread Jeff Burdges

I work on a project that selected libp2p, but I only write cryptographic code, 
not networking code.  I'd caution against using libp2p for anything serious.

Protocol Labs always took a pretty sophomoric approach:  libp2p managed to be 
better than Ethereum's devp2p, but ignored almost everyone else working in that 
space.  IPFS might still be inferior to Tahoe-LAFS in real terms, especially 
due to lacking erasure coding.

At some point Protocol Labs spun off libp2p, and by then its core devs 
recognized many of the underlying mistakes.  It also benefits from considerable 
interest but I think our stronger networking people remain unimpressed. 

It's always possible to learn from their mistakes of course, but I suspect Tor 
people learned most of those lessons from I2P's efforts.


Now libp2p doing their own scheme for sending their stuff over Tor’s existing 
streams makes sense.  Maybe someone would even pay Tor folk a support contract 
for the assistance designing that?

We've a relatively low bar for grants up to 30k EUR, and more carefully 
evaluate ones up to 100k EUR, so if any Tor people want to submit a grant for 
improving rust-libp2p's Tor usage, then I'll ask for it to be supported:  
  https://github.com/w3f/General-Grants-Program/
  https://github.com/libp2p/rust-libp2p

I advise against allowing any libp2p cruft into tor itself though.


> On 10 Nov 2021, at 16:26, Mike Mestnik 
>  wrote:
> https://gitlab.torproject.org/tpo/core/torspec/-/issues/64



[tor-dev] staking donations

2021-03-25 Thread Jeff Burdges

I realize this would be the wrong part of the Tor project to which to suggest 
this, but I know the people on this list and I do not know the people elsewhere 
in Tor, and there is some small technical content here, so..

There are now a bunch of proof-of-stake cybercoins aka crypto-currencies, like 
polkadot by my employer, cosmos, cardano, etc.  As a rule, these have the 
property that people who own the currency for one reason or another can lock 
their currency to help deSybil some special nodes, usually called validators.  
This earns them some rewards because doing so usually makes them liable for 
misbehavior of the validator.

This is very different from bitcoin, where five guys in China and one guy in 
Iceland take all the rewards, and those six guys spend a lot of real money doing 
so.  Now quite a lot of people are claiming these rewards, and the real work 
they are doing is running one server, or even merely saying “I know that dude 
and he seems okay”. 

This means a lot of people suddenly have an income stream on which they need to 
pay taxes, but for which the taxes are complex.  At least a few of these people 
do not yet stake because they do not yet know how to handle the taxes, like 
maybe they need time to set up a company or whatever. 

At least a few of these should have a somewhat flexible key infrastructure, like 
the one I had us build in Polkadot; in particular, users can simply point their 
rewards at an address other than the one from which they stake.

If you’re a non-profit that can avoid the tax complexities, then it’d be a good 
time to accept donations in these new crypto-currencies, because aside from 
regular donations some people might just skip thinking about their own taxes for 
a while by pointing their rewards at someone like Tor who’d spend the money 
usefully, at least temporarily. 

I realize there might be other complexities of course, but if this were 
interesting then I could likely convince someone around here to implement some 
partial rewards division thing, which might enable people to contribute longer 
term.

Best,
Jeff




Re: [tor-dev] Building a privacy-preserving "contact tracing" app

2020-04-24 Thread Jeff Burdges

>> The French state is making a glosing about the "privacy-preserving", 
>> "anonymous" contact tracing app they are developing with Inria (national 
>> informatics research agency). You can check about the protocol proposal, 
>> ROBERT, here: https://github.com/ROBERT-proximity-tracing/documents (in 
>> English!)
>> 
>> As you would expect, the proposal is not privacy-preserving unless you 
>> believe the State would never ever misbehave, e.g. link IP address to 
>> identity with the help of ISPs etc. There is some relevant criticism here: 
>> https://github.com/ROBERT-proximity-tracing/documents/issues/6

ROBERT should not be considered privacy preserving because uninfected users 
reveal everything.  Yes, the IP address makes this much worse, but metadata 
like timing, and the time field in the exposure status request (ESR) in section 
7 on page 11 of 
https://github.com/ROBERT-proximity-tracing/documents/blob/master/ROBERT-specification-EN-v1_0.pdf
leak information as well.

Aside from privacy, I think bandwidth also dictates that uninfected users 
should not reveal anything in their queries, like DP-3T, but unlike ROBERT.

>> I'd like to propose a really private "contact tracing" counter-proposal, 
>> which would use Tor's onion services for sender-receiver anonymity. Not that 
>> I am a proponent of the idea, but we need to come up with alternatives in 
>> the debate.
> 
> There are a few decent privacy-preserving contact protocols.

I think this is being overly generous.  ;)  There are less horrible designs 
like DP-3T that reveal infected user information, which sounds unfortunate but 
is legal for reportable diseases, while revealing nothing about uninfected 
users.  Implicitly, such schemes assume there are extremely few infected users, 
because contact tracing becomes far less helpful as the infected user population 
grows, which makes for quite an interesting assumption.

At least one Swiss group that advocates for contact tracing thinks 25 new cases 
per day sounds small enough given Switzerland’s density, but 25 new cases per 
day was clearly not small enough for Singapore’s contact tracing effort, so 
they eventually needed a lockdown.  There are also countries like the U.S. and 
U.K. where the media ignore such subtleties and where contact tracing could 
simply become some justification for increased economic activity.

You might agree with revealing reportable disease data for public health, and 
parameterise a contact tracing app around the public health assumptions, only 
to discover governments use it for economic reasons that harm public health, 
and then make your privacy preserving aspects into the scapegoat.

> I'm sure we'd love to help. But maybe the Tor network can't scale to hundreds 
> of millions of people using an app?

Ignoring the nasty political realities, there are cute mixnet tricks for 
contact tracing apps:

All users create and broadcast tiny single-use reply blocks (SURBs) over 
Bluetooth LE.  There are no SURB designs that fit into one single Bluetooth LE 
announcement, but you could manage with two using erasure coding, à la 
https://github.com/TracingWithPrivacy/paper/issues/10 which sounds acceptable 
given the MAC address rotates more slowly anyways.  We’ve no MAC tags inside 
these SURBs, so imagine them living partway between Sphinx and a voting 
mixnet, but both non-verifiable and verifiable mixnet designs work.

Infected users reveal the SURBs they received over Bluetooth LE to a health 
authority, who sends each SURB into the mixnet.  After many hops, the SURB 
arrives at some mailbox maintained by a user.  We'd expose the sender to the 
receiver if we allowed the receiver to unwind their SURB, so instead we simply 
drop the message and notify the receiver that they received one warning message.

We’re not really more privacy preserving than schemes in which uninfected users 
download all ephemeral identifiers for all infected users.  We do avoid 
revealing infected user data however and use almost no bandwidth.  We’d need 
some cover traffic that might generate false positives depending upon the SURB 
size.  Anonymity degrades under conditions where contact tracing actually 
works, meaning a health authority could discover a user’s mailbox by listening 
to their bluetooth LE beacon and sending a false message, but mixing parameters 
like one batch per day reduce this risk.
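
For concreteness, here is a minimal Python sketch of that delivery flow.  It is
a toy under loud assumptions: the hop names, mailbox id, and pre-shared symmetric
keys are hypothetical, a real SURB derives per-hop keys with Sphinx-style DH
rather than pre-sharing them, and padding, replay protection, and cover traffic
are all ignored.

    import hashlib

    def xor_stream(key: bytes, data: bytes) -> bytes:
        # Toy layer cipher: XOR with a SHAKE-256 keystream.  A real SURB uses
        # a Sphinx-style header with fixed-length, MAC-free wide-block layers.
        pad = hashlib.shake_256(key).digest(len(data))
        return bytes(a ^ b for a, b in zip(data, pad))

    # Assumption for brevity: the receiver pre-shares a symmetric key with
    # each mix.  Sphinx instead derives these from an ECDH element.
    MIX_KEYS = {"mix1": b"toy key for mix1", "mix2": b"toy key for mix2"}
    MAILBOX_ID = b"mailbox-42"

    def build_surb():
        """Receiver builds the reply block it broadcasts over Bluetooth LE."""
        inner = (b"DELIVER " + MAILBOX_ID).ljust(56, b"\0")
        blob = xor_stream(MIX_KEYS["mix2"], inner)
        blob = xor_stream(MIX_KEYS["mix1"], b"mix2".ljust(8, b"\0") + blob)
        return ("mix1", blob)       # only the first hop is visible to the sender

    def mix_process(name: str, blob: bytes, mailboxes: dict):
        """One mix peels its layer and either forwards or notifies a mailbox."""
        plain = xor_stream(MIX_KEYS[name], blob)
        if plain.startswith(b"DELIVER "):
            mailbox = plain[8:].rstrip(b"\0")
            # Drop the payload entirely; only count a warning, as described above.
            mailboxes[mailbox] = mailboxes.get(mailbox, 0) + 1
            return None
        next_hop = plain[:8].rstrip(b"\0").decode()
        return next_hop, plain[8:]

    # The health authority learned this SURB from an infected user's phone and
    # injects it.  The authority sees only the first hop; only the final hop
    # learns the mailbox, and no single party sees both endpoints.
    mailboxes = {}
    hop, blob = build_surb()
    while True:
        result = mix_process(hop, blob, mailboxes)
        if result is None:
            break
        hop, blob = result

    print(mailboxes)                # {b'mailbox-42': 1}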

Jeff






Re: [tor-dev] Improving onion service availability during DoS using anonymous credentials

2020-03-23 Thread Jeff Burdges
There is another component of the design space:

Do you want credentials to be movable from one introduction point to another?

If so, you can do this or not with both blind signatures and OPRFs by enabling 
or restricting their malleability properties, but probably not with anything 
symmetric.  If tokens are movable, then this encourages users to use multiple 
introduction points, but doing so sounds unlikely, but worse gives DoS 
attackers parallel access to introduction points.  I suppose no for this 
reason, but maybe it’s worth considering for the future..


> On 23 Mar 2020, at 14:23, George Kadianakis  wrote:
> - Discrete-logarithm-based credentials based on blind signatures:
> 
>This is a class of anon credential schemes that allow us to separate the
>verifier from the issuer. In particular this means that we can have the
>service issue the tokens, but the introduction point being the verifier.
> 
>They are usually based on blind signatures like in the case of Microsoft's
>U-Prove system [UUU].

We should mention that Fuchsbauer et al. recently addressed the forgeability 
problem for blind Schnorr signatures in https://eprint.iacr.org/2019/877.pdf 
which should improve performance, but this still costs more round trips than 
slower blind signature variants.  I think the attacks were never relevant for 
DoS protection anyways though.

You need 64 bytes for the Schnorr signature plus whatever you require to 
identify the signing key, so 80-96 bytes.
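
For intuition, here is a toy Python sketch of blind Schnorr issuance and the
resulting two-element signature (two 32-byte values in a real Ed25519-style
instantiation, hence the 64 bytes).  The schoolbook multiplicative group, tiny
parameters, and hash are stand-ins for illustration only, not anything a Tor
proposal specifies.

    import hashlib, secrets

    # Toy group: order-q subgroup of Z_p^* with p = 2q + 1.  Real deployments
    # would use Ed25519/Ristretto; the signature is then (R', s'), 32 + 32 bytes.
    q = 1019
    p = 2 * q + 1
    g = 4                      # generator of the order-q subgroup

    def H(*parts) -> int:
        h = hashlib.sha256()
        for part in parts:
            h.update(str(part).encode())
        return int.from_bytes(h.digest(), "big") % q

    # Signer (the onion service) key pair.
    x = secrets.randbelow(q)
    X = pow(g, x, p)

    def blind_sign(message: bytes):
        # Round 1: signer commits.
        k = secrets.randbelow(q)
        R = pow(g, k, p)
        # Round 2: user blinds the commitment and the challenge.
        alpha, beta = secrets.randbelow(q), secrets.randbelow(q)
        R_blind = (R * pow(g, alpha, p) * pow(X, beta, p)) % p
        c_blind = H(R_blind, message)
        c = (c_blind + beta) % q           # challenge the signer actually sees
        # Round 3: signer responds without learning message or R_blind.
        s = (k + c * x) % q
        # User unblinds; (R_blind, s_blind) is an ordinary Schnorr signature.
        s_blind = (s + alpha) % q
        return R_blind, s_blind

    def verify(message: bytes, R_blind: int, s_blind: int) -> bool:
        c_blind = H(R_blind, message)
        return pow(g, s_blind, p) == (R_blind * pow(X, c_blind, p)) % p

    token = secrets.token_bytes(16)        # the ticket the client later redeems
    sig = blind_sign(token)
    assert verify(token, *sig)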

> - Discrete-logarithm-based credentials based on OPRF:
> 
>Another approach here is to use OPRF constructions based on the discrete
>logarithm problem to create an anonymous credential scheme like in the case
>of PrivacyPass [PPP]. The downside, IIUC, is that in PrivacyPass you can't
>have a different issuer and verifier so it's the onion service that needs
>to do the token verification restricting the damage soaking potential.

Issuer and verifier must share secret key material, which is not exactly the 
same thing as being the same party.  With blind signatures you only need to 
share some special public key material.

I believe redemption could cost 64-96 bytes, so a 32-byte curve point, a 16-32 
byte identifier for the issuing key, and a 16-32 byte seed that gets hashed to 
the curve.

Jeff







Re: [tor-dev] Proposal for PoW DoS defenses during introduction (was Re: Proposal 305: ESTABLISH_INTRO Cell DoS Defense Extension)

2019-06-20 Thread Jeff Burdges

> On 2019-06-20 00:19, Watson Ladd wrote:
>> 
>> Privacy Pass has already been integrated into Tor Browser. Perhaps
>> work could be done to use it here?
> 
> As I said above, any oblivious PRF scheme like Privacy Pass violates privacy 
> *if* you can supply different keys to different users.  We cannot derive the 
> OPRF key from the HS key, so this requires some messy solution like 
> certificate transparency or more likely zero-knowledge proofs.

Actually there is a method to use oblivious PRFs without sharing secrets, which 
then makes the HS key itself usable:  Just check the oblivious PRF token at the 
HS itself, not at the introducer.  If the token checks out then the HS responds 
quickly, but if not then it responds after some delay.  Introducers do nothing 
different in this design, but introduce cells can contain more data.


> If otoh you use blind signatures then the blind signing key can be derived 
> from the HS key, which avoids this complexity.  We’ve new complexity in that 
> blind signatures using an Edwards curve really suck, but we should be fine so 
> long as only the soundness is weak, not the blindness.  I have not refreshed 
> my memory on this point yet.

There is a tricky one-more-forgery attack on Schnorr blind signatures, but not 
afaik any key recovery attack:
https://www.iacr.org/archive/crypto2002/24420288/24420288.pdf

As a defence, one could do blind signatures in G^3 requiring three scalar 
multiplications per signature from both signer and client, but limiting the 
forgery count to 1 per 63 signatures, which sounds acceptable.
https://www.math.uni-frankfurt.de/~dmst/teaching/SS2012/Vorlesung/EBS5.pdf

We’d need to work out if using some key derived from the HS key works however 
because we must avoid creating a signing oracle for HS keys too.


So.. Do you want the filter at the introducer or at the HS itself?

Jeff






Re: [tor-dev] Proposal for PoW DoS defenses during introduction (was Re: Proposal 305: ESTABLISH_INTRO Cell DoS Defense Extension)

2019-06-20 Thread Jeff Burdges


On 2019-06-20 00:19, Watson Ladd wrote:
> 
> Privacy Pass has already been integrated into Tor Browser. Perhaps
> work could be done to use it here?


As I said above, any oblivious PRF scheme like Privacy Pass violates privacy 
*if* you can supply different keys to different users.  We cannot derive the 
OPRF key from the HS key, so this requires some messy solution like certificate 
transparency or more likely zero-knowledge proofs.

If otoh you use blind signatures then the blind signing key can be derived from 
the HS key, which avoids this complexity.  We’ve new complexity in that blind 
signatures using an Edwards curve really suck, but we should be fine so long as 
only the soundness is weak, not the blindness.  I have not refreshed my memory 
on this point yet.


On 20 Jun 2019, at 15:41, Chelsea Holland Komlo  wrote:
> An approach akin to Privacy Pass could be an option to avoid serving
> challenges to clients with each request (see reference to anonymous
> tokens above), but it cannot be a drop in fix, of course. Furthermore,
> an acceptable POW or POS scheme still needs to be selected, the
> tradeoffs of which we are currently discussing.

Why?  Just rate limit connections by adding artificial latency.

If an HS operator turns on the DoS defences, then they are responsible for 
judging the client’s behaviour and notifying their Tor to issue blind signed 
tokens.  At that point the HS tor invites the client’s tor to submit blinded 
tokens, which the HS tor signs and sends back, and the client’s tor unblinds 
and uses.  It’s only three round trips I think.

If the HS never notifies tor to issue tokens, then the HS just behaves 
sluggishly, but a correct configuration gives an operator complete control over 
issuing tokens.

Jeff







Re: [tor-dev] Proposal for PoW DoS defenses during introduction (was Re: Proposal 305: ESTABLISH_INTRO Cell DoS Defense Extension)

2019-06-16 Thread Jeff Burdges

As a rule, proof-of-work does not really deliver the security properties people 
envision.  We’ve no really canonical anti-Sybil criteria in this case, but 
maybe some mixed approach works:

First, introduction points have a default mode in which they rate limit new 
connections and impose some artificial latency.  Second, an onion service can 
issue rerandomizable certificates, blind signatures, or oblivious PRFs that 
provide faster and non-rate-limited access through specific introduction 
points.

Coconut would suffice for the rerandomizable certificates of course, but sounds 
like overkill, and slow.

We should consider an oblivious PRF for this use case too:

It’s easy to make an oblivious PRF from the batched DLEQ proof implemented in 
  https://github.com/w3f/schnorrkel/blob/master/src/vrf.rs
I suppose Tor might adapt this to not use Ristretto, or maybe choose an Ed25519 
to Ristretto map, but regardless the whole scheme is not too much more complex 
than a Schnorr signature.

We require the oblivious PRF secret key be known by both the introduction point 
for verification and the onion service for issuing.  In this design, we do not 
share the oblivious PRF key among different introduction points, because 
introduction points cannot share a common double-redemption database anyways.
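
For intuition, here is a toy Python sketch of the blind / evaluate / unblind
round trip such an OPRF needs.  It uses the quadratic-residue subgroup of a tiny
safe prime purely for illustration; a real deployment would use Ristretto points
plus the (batched) DLEQ proof mentioned above, and PrivacyPass additionally adds
a per-token MAC at redemption, all omitted here.

    import hashlib, secrets

    # Toy prime-order group: quadratic residues mod a safe prime p = 2q + 1.
    q = 1019
    p = 2 * q + 1

    def hash_to_group(data: bytes) -> int:
        # Square to land in the order-q subgroup without a known discrete log.
        x = int.from_bytes(hashlib.sha256(data).digest(), "big") % p or 2
        return pow(x, 2, p)

    k = secrets.randbelow(q - 1) + 1   # OPRF key shared by the onion service
                                       # (issuing) and the introduction point
                                       # (verifying) in the design above

    def client_blind(token: bytes):
        r = secrets.randbelow(q - 1) + 1
        return r, pow(hash_to_group(token), r, p)    # send the blinded element

    def server_evaluate(blinded: int) -> int:
        return pow(blinded, k, p)                    # server never sees token

    def client_unblind(r: int, evaluated: int) -> int:
        r_inv = pow(r, -1, q)                        # invert blinding mod q
        return pow(evaluated, r_inv, p)              # equals H(token)^k

    def verifier_check(token: bytes, prf_out: int) -> bool:
        return prf_out == pow(hash_to_group(token), k, p)

    token = secrets.token_bytes(16)
    r, blinded = client_blind(token)
    prf_out = client_unblind(r, server_evaluate(blinded))
    assert verifier_check(token, prf_out)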

I’m worried about different oblivious PRF keys being used to tag different 
users though.  There are complex mechanisms to prevent this using curves 
created with Cocks-Pinch, but..

We could simply employ blind signatures however, which avoids sharing any 
secrets, and thus permits binding the key uniquely to the hidden service.  As a 
rule, blind signatures require either slow cryptography like pairings or RSA, 
or else issuing takes several round trips and has weak soundness.  I think 
weak soundness sounds workable here, although I’m no longer sure about all the 
issues with such a scheme.

Best,
Jeff

p.s.  We’re hiring in security https://web3.bamboohr.com/jobs/view.php?id=38 
and several research areas like cryptography 
https://web3.bamboohr.com/jobs/view.php?id=29






Re: [tor-dev] New revision: Proposal 295: Using ADL for relay cryptography (solving the crypto-tagging attack)

2019-04-08 Thread Jeff Burdges

If I understand, proposal 295 looks similar to either BEAR or LION, the 
relatives of LIONESS.  I vaguely recall both BEAR and LION being "broken" in 
some setting, although I cannot cite the paper.  Anyone?

I suppose the BEAR and LION break originates from using them for 
authentication, while proposal 295’s separate SVer function fixes this?

Jeff









Re: [tor-dev] archive.is alternative for CFC addon

2016-10-09 Thread Jeff Burdges
On Sat, 2016-10-01 at 14:35 +0200, ban...@openmailbox.org wrote:
> Since there were plans to use this service to circumvent Cloudflare 
> CAPTCHAs and now its behind Cloudflare itself (it requires users to 
> execute JS to access content) what alternative is planned for the 
> upcoming CFC addon?

There is a previous exchange on this list that looks relevant : 

On Mon, 2016-05-16 at 12:59 -0700, David Fifield wrote:
> On Fri, Apr 01, 2016 at 06:06:18PM +, Yawning Angel wrote:
> > I'll probably add support for other (user-configurable?) cached 
> > content providers when I have time.  The archive.is person doesn't 
> > seem to want to respond to e-mail, so asking them to optionally 
> > not set X-F-F, seems like it'll go absolutely nowhere.
> 
> This is some kind of meta-archive service. Their about page lists many
> web archives (some of the specialized):
> http://timetravel.mementoweb.org/about/
> http://www.mementoweb.org/guide/quick-intro/






Re: [tor-dev] prop224: Ditching key blinding for shorter onion addresses

2016-09-29 Thread Jeff Burdges
On Wed, 2016-09-28 at 19:45 -0400, Jesse V wrote:
> I am curious, what is your issue with the subdomains? Are you
> referring to enumerating all subdomains, or simply being able
> to confirm that a particular subdomain exists? 

Yes, confirmation of subdomains can become a problem in some contexts
where GNS gets discussed as a possible solution.  

If I know that blabla.onion exists as a website, then it's good that I
can learn that www.blabla.onion exists as a website.  

If otoh I know that blabla.zkey is a GNS record representing Bob's
contact list, then it's bad that I can learn that alice.blabla.zkey
exists.  

Jeff

p.s.  In my message, I suggested roughly :  Let P = p G be an elliptic
curve point so that P.onion is a hidden service with abbreviated URL x.
If y is a domain name element string, then y.P.onion and y.x point to
Q.onion where Q = H(y,P) * P for some hash function H mapping into the
scalars.  And q = H(y,P) p is the private key for that HS record.  Now
this Q.onion HS record could simply forward users to another HS record
with a private key not controlled by p.  This means the owner of x and p
can make the HSDirs forward requests in a verifiable way.  The downside
is more HS records.  This could help create a diversity of naming
solutions whose TLDs (x above) are controlled by different authorities.
It's not helpful if you want x to be controlled in some distributed way
though.  In fact, I suppose most Tor name service proposals would be
distributed, giving my idea only very limited usefulness. 
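
For concreteness, here is a toy Python sketch of that derivation, with a toy
multiplicative group standing in for the Ed25519 points onion services use and
exponentiation standing in for scalar multiplication; the label, parameters,
and hash are purely illustrative.

    import hashlib, secrets

    # Toy group: scalars mod q, "points" are powers of g mod p = 2q + 1.
    q = 1019
    p = 2 * q + 1
    g = 4

    def H(label: str, point: int) -> int:
        digest = hashlib.sha256(f"{label}|{point}".encode()).digest()
        return int.from_bytes(digest, "big") % q

    # Parent hidden service: private scalar p_priv, public point P (P.onion).
    p_priv = secrets.randbelow(q - 1) + 1
    P = pow(g, p_priv, p)

    label = "www"                    # the subdomain element y
    h = H(label, P)
    Q = pow(P, h, p)                 # anyone knowing y and P can locate Q.onion
    q_priv = (h * p_priv) % q        # only the owner of p_priv gets the key

    # The derived key pair is consistent, so the owner of p can publish and
    # sign the forwarding HS record for y.x, while outsiders can only find it.
    assert Q == pow(g, q_priv, p)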






Re: [tor-dev] prop224: Ditching key blinding for shorter onion addresses

2016-09-27 Thread Jeff Burdges

I'll add a note on GNS to your wiki George, but..

On Sat, 2016-08-20 at 16:48 +0300, George Kadianakis wrote:
> We really need to start serious work in this area ASAP! Maybe let's
> start by
> making a wiki page that lists the various potential solutions (GNS,
> Namecoin,
> Blockstack, OnioNS, etc.)?

There were a couple reasons I stopped the work on integrating
GNS with Tor, which Christian asked me to do :  First, I did not like
that users could confirm that a particular subdomain exists if they know
the base domain's public key.  Second, I disliked the absence of the
collaborative random number generator protections you guys added to Tor.

Now my first concern is not an issue in the context of a "name system"
for servers; indeed, confirmability is clearly desirable there.  It's
just not desirable if you start talking about using the name system for
more social applications, which people do for GNS.

Also, my second concern is not an issue if you envision the system being
backed only by Tor HS records, not GNS records.  In that scenario, the
cost you pay is (1) you need a forwarding record for Tor HSs, and (2)
sites with subdomains need to continually re-upload those records as the
collaborative random number changes. 

This does not give you global names obviously, but it does give you GNS
style non-global names in the threat model desired by proposal 224.
In effect, you'd use the existing HSDirs for non-global name
links, instead of some new PIR scheme like U+039B and others proposed.

Now non-global names are not necessarily useful unless people can
socially construct naming hierarchies around them, but that's doable.
And they can refer to each other.  etc.

Anyways, I think adding forwarding records and the signing key
derivation trick from GNS to Tor might give the Tor project a way to let
different naming schemes develop organically.  And not be overly
responsible for issues arising from Zooko's triangle. 

tl;dr  I'm not saying GNS itself is the way to go, but GNS's subdomain
crypto trick along with Tor's existing HSDir structure might improve
things.

Jeff









Re: [tor-dev] Post-quantum proposals #269 and #270

2016-08-05 Thread Jeff Burdges

I suspect the two known families you "do not want to rule out" are SIDH
schemes and LWE schemes with no ring structure, like Frodo.  At present
SIDH is too slow and LWE keys are too big, but both could improve
dramatically over the next several years. 

Jeff







Re: [tor-dev] [proposal] Post-Quantum Secure Hybrid Handshake Based on NewHope

2016-05-16 Thread Jeff Burdges

Just a couple of questions :

Is SIDH costing 100 times the CPU such a big deal, assuming it's running
on another thread?  Can it be abused for DOS attacks for example?  Is
that CPU time needed for symmetric crypto?  etc.  If so, is it worth
restricting to your guard node? 

Is New Hope's 3+ times the bandwidth a big deal?  I suppose circuit
building does not occupy much bandwidth, so no.  


On Thu, 2016-05-12 at 12:33 +, Yawning Angel wrote:
> We pre-build circuits, but the telescoping extension process and
> opportunistic data both mean that circuits see "traffic"
> near-immediately in most cases (everyone but the exit will see the
> traffic of handshaking to further hops, the exit sees opportunistic
> data in some cases).

Ok.  I suppose that leaks a node's position in the circuit regardless,
but perhaps that's not a concern.  And I donno anything about
opportunistic data.  


> I don't think SIDH is really something to worry about now anyway...

If you like, I could ask Luca de Feo if he imagines it getting much
faster, but I suspect his answer would be only a smallish factor, like
another doubling or so. 

Assuming we stick to schemes with truly hybrid anonymity, then I suspect
the anonymity cost of early adoption is that later parameter tweaks leak
info about a user's tor version.  We can always ask the MS SIDH folk,
Luca, etc. what parameters could be tweaked in SIDH to get some idea. 

Jeff

p.s.  If taken outside Tor's context, I would disagree with your
statement on SIDH : 

I donno NTRU well enough to comment on even how different the underlying
reconciliation is from New Hope, but there might be an argument that
most advances big enough to actually break New Hope would break NTRU and
NTRU' too, so maybe one Ring-LWE scheme suffices.  SIDH is an entirely
different beast though. 

I've warm fuzzy feelings about the "evaluate on two points trick" used
by Luca de Feo, et al., and by this SIDH, to fix previous attempts.  It
could always go down in mathematical flames, but it makes the scheme
obnoxiously rigid, leaving jack for homomorphic properties, and could
prove remarkably robust as a trapdoor. 

By comparison, there are going to be more papers on Ring-LWE because
academic cryptographers will enjoy playing with its homomorphic
properties.  Yet, one could imagine the link between Ring-LWE and
dihedral HSP becoming more dangerous "looking", not necessarily
producing a viable quantum attack, but maybe prompting deeper
disagreements about parameter choices. 

In other words, I'd expect our future trust in Ring-LWE and SIDH to
evolve in different ways.  And counting papers will not be informative. 

Imho, almost anyone protecting user-to-user communications should hybrid
ECDH, Ring-LWE, and SIDH all together, as users have CPU cycles to burn.
Tor is user-to-volunteer-server though, so the economics are different. 






Re: [tor-dev] [proposal] Post-Quantum Secure Hybrid Handshake Based on NewHope

2016-05-12 Thread Jeff Burdges
On Thu, 2016-05-12 at 15:54 +0200, Peter Schwabe wrote:
> Can you describe a pre-quantum attacker who breaks the non-modified
> key
> exchange and does not, with essentially the same resources, break the
> modified key exchange? I'm not opposed to your idea, but it adds a bit
> of complexity and I would like to understand what precisely the
> benefit
> is.

Assuming I understand what Yawning wrote :

It's about metadata leakage, not actual breaks.

If Tor were randomly selecting amongst multiple post-quantum algorithms,
then a malicious node potentially learns more information about the
user's tor by observing the type of the subsequent node's handshake. 

In particular, if there is a proliferation of post-quantum choices, then
it sounds very slightly more dangerous to allow users to configure what
post-quantum algorithms they use without Yawning's change. 

Jeff

p.s.  As an extreme example, there is my up-thread comment refuting the
idea of using Sphinx-like packets with Ring-LWE.  

I asked : Why can't we send two polynomials (a,A) and mutate them
together with a second Ring-LWE-like operation for each hop?  It's
linear bandwidth in the number of hops as opposed to quadratic
bandwidth, which saves 2-4k upstream in Tor's case and maybe keeps nodes
from knowing quite as much about their position. 

Answer : If you do that, it forces the whole protocol's anonymity to
rest on the Ring-LWE assumption, so it's no longer a hybrid protocol for
anonymity, even though cryptographically it remains hybrid.  







Re: [tor-dev] [proposal] Post-Quantum Secure Hybrid Handshake Based on NewHope

2016-05-12 Thread Jeff Burdges
On Thu, 2016-05-12 at 11:17 +, Yawning Angel wrote:
> Well, if we move the handshake identifier inside the AE(AD) envelope,
> we can also add padding to normalize the handshake length at minimal
> extra CPU cost by adding a length field and some padding inside as
> well.
> 
> It would remove some of the advantages of using algorithms with
> shorter
> keys (since it would result in more traffic on the wire than otherwise
> would have been), but handshakes will be indistinguishable to anyone
> but space aliens and the final destinations...

Is that even beneficial though?  

If we choose our post-quantum algorithm randomly from New Hope and SIDH,
and add random delays, then maybe an adversary has less information
about when a circuit build is progressing to the next hop, or when it's
actually being used? 

Is there some long delay between circuit build and first use that makes
anything done to obscure build useless? 

Jeff





Re: [tor-dev] [proposal] Post-Quantum Secure Hybrid Handshake Based on NewHope

2016-05-12 Thread Jeff Burdges

On Thu, 2016-05-12 at 05:29 +, Yawning Angel wrote:
> and move the handshake
> identifier into the encrypted envelope) so that only the recipient
> can see which algorithm we're using as well (So: Bad guys must have
> a quantum computer and calculate `z` to figure out which post quantum
> algorithm we are using).

This sounds like a win.

We still do not know if/when quantum computers will become practical.
It was only just last year that 15 was finally factored "without
cheating" : http://www.scottaaronson.com/blog/?p=2673

We do know that advancements against public key crypto systems will
occur, so wrapping up the more unknown system more tightly sounds wise.


In the shorter term, SIDH would take only one extra cell, maybe none if
tweaked downward, as compared to the four of New Hope, and whatever NTRU
needs.  This variation might be good or bad for anonymity, but it sounds
better if fewer nodes can compare the numbers of packets with the
algorithms used.

Jeff






Re: [tor-dev] [proposal] Post-Quantum Secure Hybrid Handshake Based on NewHope

2016-05-08 Thread Jeff Burdges
On Sun, 2016-05-08 at 13:15 +, isis wrote:
> Also, deriving `a` "somehow" from the shared X25519 secret is a bit
> scary
> (c.f. the §3 "Backdoors" part of the NewHope paper,

Oh wow.  That one is nasty. 

>  or Yawning's PoC of a
> backdoored NewHope handshake [0]).
> 
> [0]:
> https://git.schwanenlied.me/yawning/newhope/src/nobus/newhope_nobus.go

I see.  The point is that being ambiguous about the security
requirements of the seed for a lets you sneak in a bad usage of it
elsewhere. 

In some cases, I suppose both sides contributing to a might help them
know the other side is not backdoored, but that's not so relevant for
Tor. 

Jeff





Re: [tor-dev] [proposal] Post-Quantum Secure Hybrid Handshake Based on NewHope

2016-05-07 Thread Jeff Burdges

On Sat, 2016-05-07 at 22:01 +, Yawning Angel wrote:
> how an adversary will be limited to just this information, and not
> things that enable a strong attack on it's own like packet timing
> escapes me

Yes, it's clear that an adversary who can get CPU timing can get packet
timing.  

It's not clear if some adversary might prefer information about the seed
to simplify their larger infrastructure, like say by not needing to
worry about clock skew on their exit nodes, or even choosing to
compromise exit nodes soon after the fact. 

> Hmm?  The timing information that's available to a local attacker
> would be the total time taken for `a` generation.

Really?  I know nothing about the limits of timing attacks.  I just
naively imagined they learn from the timing of CPU work vs memory writes
or something. 

Jeff






Re: [tor-dev] [proposal] Post-Quantum Secure Hybrid Handshake Based on NewHope

2016-05-07 Thread Jeff Burdges
On Sat, 2016-05-07 at 13:14 -0700, Watson Ladd wrote:
> I'm not sure I understand the concern here. An attacker sees that we
> got unlucky: that doesn't help them with recovering SEED under mild
> assumptions we need anyway about SHAKE indistinguishability.

We're assuming the adversary controls a node in your circuit and hence
sees your seed later.  You get unlucky like over 400 times, so, if they
can record enough of the failure pattern, then their node can recognize
you from your seed. 

Jeff






Re: [tor-dev] [proposal] Post-Quantum Secure Hybrid Handshake Based on NewHope

2016-05-07 Thread Jeff Burdges
On Sat, 2016-05-07 at 19:41 +, lukep wrote:
> It's hard to guarantee that any fixed, finite amount of SHAKE
> output will be sufficient for any rejection sampling method
> like gen_a.
 
Isn't some small multiple usually enough?  I think 1024 is large enough
to tend towards the expected 42%ish failures. 

Also, can't one simply start the sampling over from the beginning if one
runs out? 

I've no idea if maybe an arithmetic coding scheme would be more
efficient.

> Or let a be a system-wide parameter changing say on a daily basis?

I mentioned using the Tor collaborative random number generator for a in
my other message, but only as a feint to get to the meat of my argument
that Isis and Peter's proposal sounds optimal.  I think rotating a
network-wide a would get messy and dangerous in practice. 

If bandwidth is an issue, then a could be derived from the ECDH
handshake, thereby making it zero cost. 

Jeff





Re: [tor-dev] [proposal] Post-Quantum Secure Hybrid Handshake Based on NewHope

2016-05-07 Thread Jeff Burdges

Just a brief aside about post-quantum handshake approaches that
seemingly do not work. 

I suppose Tor folk remember the article "Using Sphinx to Improve
Onion Routing Circuit Construction" by Aniket Kate and Ian Goldberg.
As key sizes are a main obstacle to a post-quantum key exchange,
one might explore using a Sphinx-like key mutation trick to save
bandwidth.  

I doubt SIDH could support anything like the key mutation trick in
Sphinx because the public key is much too far removed from the private
key. 

There are key mutation tricks for Ring-LWE key exchanges though.  As an
example, the article "Lattice Based Mix Network for Location Privacy in
Mobile System" by Kunwar Singh, Pandu Rangan, and A. K. Banerjee
describes a primitive similar to universal reencryption. 

It's likely that key mutation requires fixing the polynomial a in
advance.  If a must change, then maybe it could be seeded by Tor's
collaborative random number generator, so that's actually okay. 

Now, a Sphinx-like route building packet could consist of :
   (1) a polynomial  u_i = s_i a + e_i,
along with an onion encrypted packet that gives each server
   (2) maybe their reconciliation data r_i, and
   (3) a transformation x_i : u_i -> u_{i+1} = s_{i+1} a + e_{i+1},
where i is the node's index along the path.

Any proposal for this transformation x_i needs a proof of security.
About the best you're going to do here is reducing its security to
existing Ring-LWE assumptions.  If say x_i means add s' a + e' so that
s_{i+1} = s_i + s' and e_{i+1} = e_i + e', then you're depending upon
the Ring-LWE assumptions to know that s' a + e' looks like a random
polynomial. 
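
To make the mutation concrete, here is a toy Python sketch of the arithmetic
in Z_q[X]/(X^n + 1) with tiny, purely illustrative parameters; it only shows
that adding s' a + e' yields another Ring-LWE-shaped sample under the summed
secrets, and says nothing about whether revealing the transformation is safe.

    import secrets

    # Toy Ring-LWE arithmetic; real parameters would be e.g. n = 1024,
    # q = 12289 with a proper error distribution.
    n, q = 8, 12289

    def add(f, g):
        return [(x + y) % q for x, y in zip(f, g)]

    def mul(f, g):
        # Schoolbook multiplication reduced mod X^n + 1 (so X^n = -1).
        out = [0] * n
        for i, fi in enumerate(f):
            for j, gj in enumerate(g):
                k = i + j
                if k < n:
                    out[k] = (out[k] + fi * gj) % q
                else:
                    out[k - n] = (out[k - n] - fi * gj) % q
        return out

    def small():
        # Stand-in secret/error sampler with coefficients in {-1, 0, 1}.
        return [secrets.choice([-1, 0, 1]) % q for _ in range(n)]

    a = [secrets.randbelow(q) for _ in range(n)]   # public polynomial

    s_i, e_i = small(), small()
    u_i = add(mul(s_i, a), e_i)                    # u_i = s_i a + e_i

    # Hop i applies the onion-encrypted transformation x_i: add s' a + e'.
    s_new, e_new = small(), small()
    u_next = add(u_i, add(mul(s_new, a), e_new))

    # The mutated element is again a Ring-LWE sample under the summed secrets.
    assert u_next == add(mul(add(s_i, s_new), a), add(e_i, e_new))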

As a result, your hybrid protocol is unlikely to provably offer stronger
_anonymity_ properties than a pure Ring-LWE key exchange, even if its
_cryptography_ is as strong as the stronger of Ring-LWE and ECDH.  

I could say more about why, say, the choice of s' and e' might leak
information about s_i and e_i, but I wanted to keep this brief.  And the
essence of the observation is that any sort of Sphinx-like key
mutation trick requires assumptions not required in a group. 

I found this an interesting apparent limit on making hybrids more
efficient than what Isis and Peter have proposed.  

Best,
Jeff






Re: [tor-dev] [proposal] Post-Quantum Secure Hybrid Handshake Based on NewHope

2016-05-06 Thread Jeff Burdges
On Fri, 2016-05-06 at 19:17 +, isis wrote:

>   --- Description of the Newhope internal functions ---
> 
>   gen_a(SEED seed) receives as input a 32-byte (public) seed.  It expands
>   this seed through SHAKE-128 from the FIPS202 standard.  The output of
>   SHAKE-128 is considered a sequence of 16-bit little-endian integers.  This
>   sequence is used to initialize the coefficients of the returned polynomial
>   from the least significant (coefficient of X^0) to the most significant
>   (coefficient of X^1023) coefficient.  For each of the 16-bit integers,
>   first eliminate the highest two bits (to make it a 14-bit integer) and
>   then use it as the next coefficient if it is smaller than q=12289.
>   Note that the amount of output required from SHAKE to initialize all 1024
>   coefficients of the polynomial varies depending on the input seed.
>   Note further that this function does not process any secret data and thus
>   does not need any timing-attack protection.
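
For concreteness, the quoted sampling procedure amounts to something like the
following Python sketch; it is only an illustration of the described rejection
sampling, not the reference code, and the growth-on-retry detail is my own
simplification for handling the variable SHAKE output length.

    import hashlib

    Q, N = 12289, 1024

    def gen_a(seed: bytes):
        """Expand a 32-byte public seed into the polynomial a, as the quoted
        text describes: read 16-bit little-endian words from SHAKE-128, drop
        the top two bits, accept values below q.  No secret data involved."""
        assert len(seed) == 32
        coeffs, out_len = [], 4096       # ~2732 bytes expected; grow if unlucky
        while len(coeffs) < N:
            stream = hashlib.shake_128(seed).digest(out_len)  # XOF prefix property
            coeffs = []
            for i in range(0, out_len, 2):
                val = int.from_bytes(stream[i:i + 2], "little") & 0x3FFF
                if val < Q:
                    coeffs.append(val)
                    if len(coeffs) == N:
                        break
            out_len *= 2                 # rare: not enough accepted samples yet
        return coeffs

    a = gen_a(b"\x00" * 32)
    print(len(a), max(a) < Q)            # 1024 True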

Aren't the seed and polynomial a actually secret for negotiation with
any node after your guard?  

An adversary who can do a timing attack on a user's tor process would
gain some deanonymizing information from knowing which elements of a get
skipped.  I suppose this adversary has already nailed the user via a
correlation attack, but maybe it's worth rewording at least.  

And maybe an adversary could employ different attack infrastructure if
they can learn some skipped elements of a. 

Best,
Jeff






Re: [tor-dev] Quantum-safe Hybrid handshake for Tor

2016-04-29 Thread Jeff Burdges


On Sun, 2016-04-03 at 15:36 +, Yawning Angel wrote:
> http://cacr.uwaterloo.ca/techreports/2014/cacr2014-20.pdf
> 
> Is "optimized" in that, it is C with performance critical parts in
> assembly (Table 3 is presumably the source of the ~200 ms figure from
> the wikipedia article).  As i said, i just took the performance figures
> at face value.
> 
> I'm sure it'll go faster with time, but like you, I'm probably not going
> to trust SIDH for a decade or so.

There is a new SIDH library from MS Research : 
https://eprint.iacr.org/2016/413.pdf
https://research.microsoft.com/en-us/projects/sidh/



On Tue, 2016-04-26 at 15:05 +, isis wrote:
> It's not my paper, so I probably shouldn't give too much away, but…
> 
> Essentially, there are two different optimisations being discussed: one which
> allows faster signature times via batching, which can optionally also be used
> to decrease the size of the signatures (although assuming you're sending
> several signatures in succession to the same party).  That optimisation is
> maybe useful for something like PQ Bitcoin; probably not so much for Tor.

It's maybe worth keeping this sort of tool in mind for tools like co-signing.







Re: [tor-dev] Quantum-safe Hybrid handshake for Tor

2016-04-22 Thread Jeff Burdges
On Fri, 2016-04-22 at 11:10 +, Yawning Angel wrote:
> On Fri, 22 Apr 2016 11:41:30 +0200
> Jeff Burdges  wrote:
> > I'd imagine everyone in this thread knows this, but New Hope requires
> > that "both parties use fresh secrets for each instantiation".  
> 
> Yep.  Alice can cache the public 'a' parameter, but everything else
> needs to be fresh, or really really bad things happen.

I'd assume that 'a' could be generated by both parties from a seed
parameter declared by Alice?  I haven't noticed it being secretly
tweaked by Alice.

If 'a' must be sent, then 'a' would double the key size from one side,
no?  In that case, one should consider whether reversing the Alice and
Bob roles of the client and server would help by either (a) adjusting
the traffic flow to mask circuit building, or (b) allowing one to stash
'a' in the descriptor.  I donno if there are any concerns like the
client needing to influence 'a'. 

> > There is some chance SIDH might wind up being preferable for key
> > exchanges with long term key material. 
> 
> Maybe.  Some people have hinted at an improved version of SPHINCS256
> being in the works as well. 

Ain't clear how that'd work.  

There are homomorphic MAC-like constructions, like accumulators, which
might let you reuse your Merkle tree.  I thought these usually needed
assumptions that Shor's algorithm nukes; one example is  f(x,y) = x^7 + 3
y^7 mod N  with N=pq secret.  

I suppose Nyberg's accumulator scheme is post-quantum, but I thought it's
huge too.  I'm not completely sure if just an accumulator helps anyways.

Jeff






Re: [tor-dev] Quantum-safe Hybrid handshake for Tor

2016-04-22 Thread Jeff Burdges

I'd imagine everyone in this thread knows this, but New Hope requires
that "both parties use fresh secrets for each instantiation".  

I suppose any key exchanges designed around this meshes well enough with
ntor, so that's okay.  It leaves you relying on ECDH for the key
exchange with long term key material though. 

I have not read the papers on doing Ring-LWE key exchanges with long
term key material, but presumably they increase the key size. 


On Wed, 2016-04-20 at 19:00 +, Yawning Angel wrote:
> And my gut feeling is RingLWE will have performant, well defined
> implementations well before SIDH is a realistic option.

This is undoubtedly true because too few people are looking into SIDH. 

I've been chatting with Luca about writing a "more production ready"
implementation, like optimizing the GF(p^2) field operations and things.
If that happens, maybe it'll spur more interest. 

There is some chance SIDH might wind up being preferable for key
exchanges with long term key material. 

Jeff






[tor-dev] Yawning's CFC, web caching, and PETs

2016-04-03 Thread Jeff Burdges

Should we try to organize some public chat about web caching at PETs or
HotPETs this summer? 

By that I mean, a discussion with anonymity researchers on security and
anonymity concerns around making tools like Yawning's CFC a long-term
solution to the CloudFlare problem? 

Aside from our not knowing if CloudFlare will become more accommodating,
a trustworthy web cache would enable more serious efforts towards
alpha-mixing, either in Tor itself, or with mixnets on the side of Tor.
And archival tools make the web better in numerous ways, like by making
it harder to remove anything. 

There are interesting problems in this space, like :  Big scary adversary
issues.  Archiving TLS sessions along with HTML transformations so that
subsequent clients can verify the original site's certificate.  How best
to distribute the cache. 

Jeff 






Re: [tor-dev] Quantum-safe Hybrid handshake for Tor

2016-04-03 Thread Jeff Burdges

On Sat, 2016-04-02 at 18:48 -0400, Jesse V wrote:
> I just wanted to resurrect this old thread to point out that
> supersingular isogeny key exchange (SIDH) is the isogeny scheme that
> that you're referring to. Using a clever compression algorithm, SIDH
> only needs to exchange 3072 bits (384 bytes) at a 128-bit quantum
> security level. This beats SPHINCS by a mile and unlike NTRUEncrypt,
> fits nicely into Tor's current cell size. I don't know about key
> sizes, though. If I recall correctly, SIDH's paper also references
> the "A quantum-safe circuit-extension handshake for Tor" paper that
> lead to this proposal.

I should read up on this compression business since I'd no idea they
were so small.  At first blush, these SIDH schemes must communicate
curve parameters of the curve the isogeny maps to and two curve points
to help the other party compute the isogeny on their prime's subgroup,
so maybe 3-4 times the size of a curve point, but the curve is far
larger than any used with normal ECDH too.  

Warning : The signature schemes based on SIDH work by introducing
another virtual party employing a third prime.  And another more recent
scheme needs two additional parties/primes!  A priori, this doubles or
triples the key material size, although it's maybe not so bad in
practice.  Also, these signature schemes have some unusual properties. 

> Again, I have very little understanding of post-quantum crypto and
> I'm just starting to understand ECC, but after looking over
> https://en.wikipedia.org/wiki/Supersingular_isogeny_key_exchange 
> and skimming the SIDH paper, I'm rather impressed. SIDH doesn't
> seem to be patented, it's reasonably fast, it uses the smallest
> bandwidth, and it offers perfect forward secrecy. It seems to me
> that SIDH actually has more potential for making it into Tor than
> any other post-quantum cryptosystem.

It'll be years before anyone trusts SIDH because it's the youngest.  And
Ring-LWE has a much larger community doing optimizations, etc. 

I like SIDH myself.  I delved into it to see if it offered the blinding
operation needed for the Sphinx mixnet packet format.  It seemingly does
not.  And maybe no post-quantum system can do so. 

All these post-quantum public key systems work by burying the key
exchange inside a computation that usually goes nowhere, fails, etc.*
In SIDH, it's replacing the kernel of the isogeny, which one can move
between curves, with two curve points that let the other party evaluate
your isogeny on their subgroup.  As the isogenies themselves form only a
groupoid, algorithms like Shor observe almost exclusively a failed
product, so the QFT rarely yields anything. 

As usual, there are deep mathematical questions here like : Has one
really hidden the kernel by revealing only the isogeny on a special
subgroup?  Are there parameterizations of appropriate isogenies in ways
that make the QFT dangerous again?  

As an aside, there are new quantum query attacks on symmetric crypto
like AEZ in: http://arxiv.org/abs/1602.05973 
We believe a quantum query attack against symmetric crypto sounds
unrealistic of course: http://www.scottaaronson.com/blog/?p=2673 
A quantum query attack is completely realistic against a public key
system though, so one should expect renewed effort to break the
post-quantum systems by inventing new QFT techniques.


On Sun, 2016-04-03 at 06:52 +, Yawning Angel wrote:
> Your definition of "reasonably fast" doesn't match mine.  The number 
> for SIDH (key exchange, when the thread was going off on a tangent 
> about signatures) is ~200ms.  

What code were you running?  I think the existing SIDH implementations
should not be considered optimized.  Sage is even used in : 
https://github.com/defeo/ss-isogeny-software  
I've no idea about performance myself, but obviously the curves used in
SIDH are huge, and the operations are generic over curves.  And existing
signature schemes might be extra slow due to this virtual third or
fourth party.  I know folks like Luca De Feo have ideas for optimizing
operations that must be generic over curves though.  


Around signatures specifically, there are deterministically stateful
hash-based, or partially hash-based, schemes that might still be
useful :  One might for example pre-compute a unique EdDSA key for each
consensus during the next several months and build a Merkle tree over
the hashes of their public keys.  Any given consensus entry is
vulnerable to a quantum attack immediately after the key gets used, but
not the whole Merkle tree of EdDSA keys.  A signature costs O(log m)
where m is the number of consensuses covered by a single key.  It's
maybe harder to attack such a scheme while keeping your quantum computer
secret. **
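
For concreteness, here is a toy Python sketch of the Merkle-tree part of that
idea: only the root would be pinned long-term, and each per-consensus signature
would be accompanied by an O(log m) authentication path.  The per-consensus
"public keys" below are stand-in random byte strings rather than real EdDSA
keys, and m is kept a power of two; real use would pad.

    import hashlib, secrets

    def H(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    m = 8                                   # number of consensuses covered
    leaf_keys = [secrets.token_bytes(32) for _ in range(m)]

    def build_tree(leaves):
        levels = [[H(leaf) for leaf in leaves]]
        while len(levels[-1]) > 1:
            prev = levels[-1]
            levels.append([H(prev[i] + prev[i + 1])
                           for i in range(0, len(prev), 2)])
        return levels                       # levels[-1][0] is the root

    def auth_path(levels, index):
        path = []
        for level in levels[:-1]:
            path.append(level[index ^ 1])   # sibling at this level
            index //= 2
        return path                         # O(log m) hashes per consensus

    def verify(root, key, index, path):
        node = H(key)
        for sibling in path:
            node = H(node + sibling) if index % 2 == 0 else H(sibling + node)
            index //= 2
        return node == root

    levels = build_tree(leaf_keys)
    root = levels[-1][0]
    i = 5                                   # consensus number being signed
    assert verify(root, leaf_keys[i], i, auth_path(levels, i))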

Jeff

*  I'd be dubious that any non-abelian "group-based" scheme would remain
post-quantum indefinitely, specifically because they lack this "usually
just fails" property.  It's maybe related to the issues with blinding
operations and the difficulties in making si

Re: [tor-dev] Request for feedback/victims: cfc-0.0.2

2016-04-01 Thread Jeff Burdges

Are there any more sites where CloudFlare appears on archive.is?

https://www.aei.org/publication/gen-michael-hayden-on-apple-the-fbi-and-data-encryption/
​https://archive.is/7u5P8

It's some particularly harsh CloudFlare configuration perhaps? 

Jeff





Re: [tor-dev] Request for feedback/victims: cfc-0.0.2

2016-03-30 Thread Jeff Burdges

I'm impressed with how much nicer the web gets with this. Thank you
Yawning!  :) 

On Sun, 2016-03-27 at 06:12 +, Yawning Angel wrote:

>* (QoL) Skip useless landing pages (github.com/twitter.com will be
>  auto-redirected to the "search" pages).

Ahh that's why that happened.  lol

>* (Privacy) Kill twitter's outbound link tracking (t.co URLs) by
>  rewriting the DOM to go to the actual URL when possible.  Since
>  DOM changes made from content scripts are isolated from page
>  scripts, this shouldn't substantially alter behavior.

Nice!

> TODO:
> 
>  * Try to figure out a way to mitigate the ability for archive.is to
>track you.  The IFRAME based approach might work here, needs more
>investigation.

Interesting point.

>  * Handle custom CloudFlare captcha pages (In general my philosophy is
>to minimize false positives, over avoiding false negatives).
>Looking at the regexes in dcf's post, disabling the title check may
>be all that's needed.

I've noticed some hiccups with medium on the auto mode, like say
https://medium.com/@octskyward/the-resolution-of-the-bitcoin-experiment-dabb30201f7
It sometimes works if you hit refresh though.

>  * Look into adding a "contact site owner" button as suggested by Jeff
>Burdges et al (Difficult?).

Just noticed this minimalist whois client in node.js : 
https://github.com/carlospaulino/node-whoisclient/blob/master/index.js 

>  * Support a user specified "always use archive.is for these sites"
>list.
> 
>  * UI improvements.

A task bar icon might find several uses:
- A "View this page through archive.is" button for when CFC misses a
CAPTCHA, or even if the CAPTCHA is not CloudFlare.
- A "contact site button" that worked even after passing to archive.is.
- A "Give me the CAPTCHA" button for those who configure CFC to
automatically load archive.is.  

I'm using another browser profile for this last point currently.  In
fact, it fit perfectly into my existing pattern of browser profiles.
Yet, browser profiles are not user-friendly, especially in TBB, so this
would benefit people who do not use profiles. 

Wonderful extension!
Jeff






Re: [tor-dev] Request for feedback/victims: cfc

2016-03-24 Thread Jeff Burdges
On Wed, 2016-03-23 at 14:09 -0400, Paul Syverson wrote:
> On Wed, Mar 23, 2016 at 12:33:15PM -0400, Adam Shostack wrote:
> > Random thought: rather than "unreachable from Tor", "unreachable when
> > using the internet safely."  This is really about people wanting
> > security, and these companies not wanting to grapple with what their
> > customers want.
> 
> Yes! Not random at all. When trying to succincly contrast current means
> to access and use registered-domain sites vs. onionsites I not infrequently
> slip into calling them the insecure web and the secure web respectively.

Yes, that sounds reasonable.  There would be a bunch of linguistic
decisions like that.  I suggested waiting until Kate finishes her
CloudFlare FAQ specifically because she would already be making many
relevant such decisions. 

I think the main technical question is : How hard is it to safely use
whois from JavaScript? 

Jeff





Re: [tor-dev] Request for feedback/victims: cfc

2016-03-23 Thread Jeff Burdges

Thank you, Yawning!  This looks great.  :)


I think Kate was planning on writing up an official position of the Tor
project on the CloudFlare situation.  Amongst other things, it's
expected to contain several strong arguments for convincing sites that
the CAPTCHA does them no good and to make their CloudFlare configuration
more Tor friendly.  Or simply use another CDN like Akamai.

After that appears, one could add a mailto: link alongside the cfc
button, so that users could easily start a dialog with the site where
they encounter a CloudFlare CAPTCHA. 

A mailto: link can have email header and body information like
mailto:..@..?subject=Unreachable from Tor due to CloudFlare
CAPTCHA&body=..  
And the body could contain some text derived from whatever Kate writes.

In principle, the extension could determine the site's contact
information from whois and use it as the mailto: link's destination : 
 https://stackoverflow.com/questions/8435678/whois-with-javascript 
If that's annoying, then simply placing a unix command like  "whois
[site] | grep Email" into the body along with some explanation should
suffice. 
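
To make that concrete, here is a rough sketch of generating such a link.
It is Python purely for brevity (cfc itself would do this in JavaScript),
and the whois parsing, subject, and body text are placeholders rather
than Kate's wording :

    # Sketch: build a mailto: URL that pre-fills a "please unblock Tor" mail.
    # The whois parsing, subject, and body below are illustrative placeholders.
    import subprocess
    from urllib.parse import quote

    def whois_email(domain):
        # Shell out to the system whois client and grab the first contact email.
        out = subprocess.run(["whois", domain], capture_output=True, text=True).stdout
        for line in out.splitlines():
            if "Email" in line and "@" in line:
                return line.split()[-1]
        return None

    def complaint_mailto(domain):
        to = whois_email(domain) or ""
        subject = "Unreachable from Tor due to CloudFlare CAPTCHA"
        body = ("Hello,\n\nI tried to visit %s over Tor and was blocked by a "
                "CloudFlare CAPTCHA.  Please consider a more Tor friendly "
                "configuration.\n" % domain)
        return "mailto:%s?subject=%s&body=%s" % (to, quote(subject), quote(body))

    print(complaint_mailto("example.com"))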

It's easy enough to do all this with a shell script of course, but if
cfc moves towards many people using it then maybe encouraging people to
email sites will help. 

Jeff




On Wed, 2016-03-23 at 11:00 +, Yawning Angel wrote:
> [I hate replying to myself.]
> 
> On Wed, 23 Mar 2016 09:15:36 +
> Yawning Angel  wrote:
> > My "proof of concept" tech demo is what I consider good enough for
> > use by brave people that aren't me, so I have put up an XPI package
> > at: https://people.torproject.org/~yawning/volatile/cfc-20160323/
> 
> I noticed some dumb bugs and UI issues in the version I pushed so I
> changed a lot of things and uploaded a new version that should be
> better behaved.  In particular:
> 
>  * It is now Content Script based, and does IPC so it may survive the
>transition to sandboxed/multiprocess firefox better.
> 
>  * It will always inject a button into the DOM instead of trying to
>display browser UI stuff (content scripts are supposed to have
>isolation...).
> 
>* The UI selection pref is removed.
> 
>* The ask on captcha option for behavior is removed, since a button
>  always will be there to bypass it.
> 
>  * Loading lots of pages that end up displaying street signs *should*
>now behave correctly.
> 
> The old release is under `./old` for posterity.
> 
> Sorry for the inconvenience,
> 
> ___
> tor-dev mailing list
> tor-dev@lists.torproject.org
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev



signature.asc
Description: This is a digitally signed message part
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Fwd: Downloadable content: Fonts!

2016-02-25 Thread Jeff Burdges

I think this strays a bit far afield from tor-dev, but..

If an academic group was interested in basically redesigning the web to
be more sane, then Servo might be a good place to start.  

There are a whole bunch of things one could do, like forcing much more
to be cacheable by using content based addressing, restricting cross
site/origin communication to single use blind signed tokens and/or
user approval, restricting the role of javascript, embedding a better
PKI, etc.  All the stuff that TBB cannot do because it'd break too
many sites.

In short, one could attempt to build a better freenet using grants for
"security" work.  And the long game would be to guilt the browser
makers and web standards people into tightening things up.





On Fri, 2016-02-19 at 20:57 +0100, Jeff Burdges wrote:
> On Fri, 2016-02-19 at 16:21 +, Spencer wrote:
> > At what point do the efforts to patch Firefox out weigh the efforts
> > to build a browser from scratch?
> 
> Browsers are extremely complicated.  
> 
> If you want to explore Mozilla's efforts to build a more modern
> browser, then I suggest you look over and build Servo: 
> 
> https://github.com/servo/servo
> https://github.com/servo/servo/wiki/Design
> https://servo.org/ 
> 
> It's cool to imagine free software and privacy communities turning
> Servo into a viable browser that caters to their interests.  Afaik,
> Servo is the only realistic option for minimizing C code in the
> browser
> too.  In reality, Servo fails to render much of the web correctly
> because it's a messy problem. 
> 
> Jeff
> 
> ___
> tor-dev mailing list
> tor-dev@lists.torproject.org
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev

signature.asc
Description: This is a digitally signed message part
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


[tor-dev] Post-quantum symmetric crypto

2016-02-22 Thread Jeff Burdges

Symmetric crypto might start worrying more about being post-quantum
soon : http://arxiv.org/abs/1602.05973

Best,
Jeff



signature.asc
Description: This is a digitally signed message part
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Fwd: Downloadable content: Fonts!

2016-02-19 Thread Jeff Burdges
On Fri, 2016-02-19 at 16:21 +, Spencer wrote:
> At what point do the efforts to patch Firefox out weigh the efforts
> to build a browser from scratch?

Browsers are extremely complicated.  

If you want to explore Mozilla's efforts to build a more modern
browser, then I suggest you look over and build Servo: 

https://github.com/servo/servo
https://github.com/servo/servo/wiki/Design
https://servo.org/ 

It's cool to imagine free software and privacy communities turning
Servo into a viable browser that caters to their interests.  Afaik,
Servo is the only realistic option for minimizing C code in the browser
too.  In reality, Servo fails to render much of the web correctly
because it's a messy problem. 

Jeff



signature.asc
Description: This is a digitally signed message part
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Quantum-safe Hybrid handshake for Tor

2016-02-03 Thread Jeff Burdges
On Fri, 2016-01-01 at 11:14 +, Yawning Angel wrote:

> On Thu, 31 Dec 2015 20:51:43 +
> isis  wrote:
> [snip]
> > I feel like there needs to be some new terminology here.  It's
> > certainly not post-quantum secure, but "quantum-safe" doesn't seem
> > right either, because it's exactly the point at which the adversary
> > gains appropriate quantum computational capabilities that it become
> > *unsafe*.  If I may, I suggest calling it "pre-quantum secure". :)
> 
> Post-quantum forward-secrecy is what I've been using to describe this
> property.

Isn't that using "forward security" to denote a weakening when it
usually denotes a strengthening? 

> I personally don't think that any of the PQ signature schemes are
> usable
> for us right now, because the smallest key size for an algorithm that
> isn't known to be broken is ~1 KiB (SPHINCS256), and we probably
> can't
> afford to bloat our descriptors/micro-descriptors that much.

Did you mean to talk about the 41ish kb signature here?

I donno that you'll ever beat that 1kb key size with a post-quantum
system.  There is a lattice based signature scheme and an isogeny based
scheme that'll both beat SPHINCS on signature sizes, but I think not so
much on key size. 

Jeff

p.s.  I'd imagine that key size might come from the public key itself
proving that it's a SPHINCS public key or doing a simple initial
signature or something.  If you didn't care during storage that the key
is really a key, or what its good for, then a 256 bit fingerprint of a
SPHINCS public key would be as good as a SPHINCS public key itself,
right?  It's dubious that Tor, or anyone really, could use fingerprints
in such a context-free way though.  




signature.asc
Description: This is a digitally signed message part
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Bitcoin-paid hidden meek relays?

2015-12-11 Thread Jeff Burdges

Appears Isis has interesting work that addresses the bridge problem
much more directly than anything in this thread. 


On Fri, 2015-12-11 at 15:52 +0100, Henry de Valence wrote:

> > Taler is an electronic payment system that was built with the goal
> > of supporting taxation.  With Taler, the receiver of any form of
> > payment is known, and the payment information comes attached with
> > some data about what the payment was made for ... governments can
> > use this data to tax businesses and individuals ... making tax
> > evasion and black markets less viable.
> 
> For a user in a country where Tor is blocked, funding Tor bridges is,
> by definition, a black market in that country.
> 
> Could you explain how you see this feature of Taler fitting with the
> threat model bridges are meant to address?  Which governments should
> get
> detailed data on donations to bridges, and *to whom* is "the receiver
> of
> any form of payment" known?


Any anonymity system provides anonymity only within a particular
anonymity set.  A priori, a blind signing based system like Taler
anonymizes the transactions between two anonymity sets, the customers
and the merchants.  Its mint always knows the total amount that each
customer spends and the total income that each merchant receives,  but
not the specific transactions. 

In Taler, we ensure that a customer and a mint can collaborate to
deanonymize the merchant side of a particular transaction, so the
merchants are no longer an anonymity set.  A merchant and mint cannot
 collaborate to deanonymize a customer side of a particular transaction
though, so the customers remain an anonymity set. 

It follows that Taler cannot protect the identity of merchants from the
country where the Taler mint is based.  In the bridges case, a bridge
user and the mint can collaborate to expose the operator of a
particular bridge, which seems harmless, or even beneficial, and
achievable via the CDN anyways. 

Jeff

p.s. Aside from anonymity sets, one might worry about pseudo-anonymity
of membership in the set of customers or merchants.  In the bridge
case, if say China hacked this meek mint, then they could learn
whatever the mint knows about its customers, but not what bridge that
customer paid for.  A Taler mint funding bridge operators should
ideally pass any user details it must retain through a data diode. 



signature.asc
Description: This is a digitally signed message part
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Bitcoin-paid hidden meek relays?

2015-12-10 Thread Jeff Burdges
On Thu, 2015-12-10 at 13:22 +0200, George Kadianakis wrote:
>   The next idea would be to rate limit based on a different scarce
> resource
>   (e.g. Bitcoin, passport, etc.). All of them kind of suck for
> different
>   reasons, but maybe some of them are fine for most threat models.

After we get Taler running then Taler becomes a payment option too :
https://taler.net/
Although that does not solve the suckage around using money in general.

Alternatively, one could build a Taler mint that uses an identifying
document like a passport to open an account, but thereafter issues
users a constant stream of anonymous tokens with which they can obtain
new meek addresses.  Ain't clear if that's really such a great idea
either though as countries do not really run short of passports.

Jeff


signature.asc
Description: This is a digitally signed message part
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


[tor-dev] W3C Payment Groups

2015-11-16 Thread Jeff Burdges

I've been talked into formally participating in the W3C Payment
Interest/Working Group as part of working on GNU Taler; well, my
employer INRIA is a member.

Drop me a line if there is anything I can do/say/watch to help keep
that potential standard tor friendly.

Best,
Jeff

p.s.  There are related groups on privacy and credentials that might be
of more interest to Tor Browser which I'm not involved in, btw.

p.s.2  Just fyi, Taler is a transaction system based on RSA blind
signing.  We haven't finished the demo for Taler yet, probably by 32c3
though.  Feel free to look at these if you want to know more : 
http://taler.net
http://grothoff.org/christian/taler-draft.pdf


burdges-pub-0.gpg
Description: application/pgp-keys


signature.asc
Description: This is a digitally signed message part
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


[tor-dev] UniformDH question

2015-10-08 Thread Jeff Burdges

What is the advantage of using X or p-X in UniformDH in obfsproxy?

https://gitweb.torproject.org/pluggable-transports/obfsproxy.git/tree/doc/obfs3/obfs3-protocol-spec.txt#n65

Isn't just X itself dense pretty quickly anyways? 
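
For reference, my reading of the construction at that link is roughly
the following sketch; p and g are assumed to be the 1536-bit MODP
group 5 parameters from RFC 3526, with the prime omitted here for
brevity :

    # Sketch of obfs3's UniformDH public key, per the spec linked above:
    # pick an even private exponent, then transmit either X or p - X with
    # equal probability so the wire bytes look like a uniform 1536-bit string.
    import os

    def uniformdh_keypair(p, g=2):
        x = int.from_bytes(os.urandom(192), "big") & ~1   # even 1536-bit key
        X = pow(g, x, p)
        wire = X if os.urandom(1)[0] & 1 else p - X
        return x, X, wire.to_bytes(192, "big")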

Jeff



signature.asc
Description: This is a digitally signed message part
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Special-use-TLD support

2015-10-06 Thread Jeff Burdges

Just an update on this :

If anyone wants this in the short-term, then it should be done the way
OnioNS does it, like Roger suggests.

In the longer term, there are now a handful of parties interested in
building a "libnss2" that provides an asynchronous name interface to :
- help resolve the disaster that arises from DNSSEC TLSA records
arriving slower than the regular DNS records, 
- move NSS configuration into user space (DJB & others want this), and
- improve support for the capabilities of GNS and Namecoin. 

If you consider what that API might look like, then you realize it's
potentially not so Tor friendly : Imagine running Tor on an external
device, but to do name resolution the way the user wants, Tor must talk
to an NSS daemon on the user's machine, and that daemon must understand
that Tor requests should only go over Tor.  Ick!

So rather than a proposal for Tor, what we need to do is write an API
proposal for a local name resolution system that solves the issues with
DNSSEC, and does other things, and does not cause problems for Tor
users.

Oh, there is already some asynchronous DNS library in the GNU world,
but it's probably not what anyone wants. 





On Mon, 2015-09-28 at 16:26 -0400, Roger Dingledine wrote:
> On Mon, Sep 28, 2015 at 03:20:47PM +0200, Jeff Burdges wrote:
> > I proposed that Tor implement NameService rules using UNIX domain
> > sockets, or ports, since that's how GNUNet works, but maybe Tor
> > should
> > instead launch a helper application it communicates with via stdin
> > and
> > stdout.  I donno if that'll work well on Windows however.
> 
> If you're to be running a second program that does the "resolves",
> then
> I think you should really think about adding a third program that
> talks
> to Tor on the control port and does all of these rewrites via the
> control
> protocol without needing any further Tor modifications. (If you
> wanted,
> you could make these second and third programs be just one program.)
> 
> This is I believe how Jesse's "OnioNS" tool works at present: you
> connect
> to the control port (e.g. via a Stem script), tell Tor that you want
> to
> decide what to do with each new stream (set __LeaveStreamsUnattached
> to
> 1), and then you let Tor pick (attachstream to 0) for all the streams
> except the special ones. When you see a new special stream, you do
> the
> lookup or resolve or whatever on your side, then REDIRECTSTREAM the
> stream to become the new address, then yield control of the stream
> back
> to Tor so Tor picks a circuit for it.
> 
> The main downside here is that you need to run a new Tor controller.
> But
> if you're already needing to run a separate program, you should be
> all set.
> 
> What am I missing?
> 
> --Roger
> 
> ___
> tor-dev mailing list
> tor-dev@lists.torproject.org
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev

signature.asc
Description: This is a digitally signed message part
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Anycast Exits (related : Special-use-TLD support)

2015-09-30 Thread Jeff Burdges
On Wed, 2015-09-30 at 15:39 +0200, Tim Wilson-Brown - teor wrote:

> >  First, Tor adds the line "ACE <protocol> <ip>:<port>" to the
> >  node's full descriptor.
> >  Second, Tor allows connections to ip:port as if the torrc contains :
> >ExitPolicy allow <ip>:<port>
> >  As ExitPolicyRejectPrivate defaults to 1, these policies should be
> >  allowed even if the ip lies in a range usually restricted.  
> >  In particular localhost and 127.0.0.1 are potentially allowed.
> Tor exit policies don’t contain hostnames like “localhost", did you
> mean 127.0.0.0/8 and ::1?
> 
> I am concerned about the security considerations of opening up local
> addresses, as local processes often trust connections from the local
> machine. Perhaps we could clarify it to say that only the specific
> port on 127.0.0.0/8 and ::1 is allowed?

Yes, that's the effect of the ExitPolicy line described.  We should not
disable ExitPolicyRejectPrivate, merely ensure that the new exit policy
be processed before it.  I'll add some language to clarify, slightly. 
 I'm futzing around to make sure that just an ExitPolicy line does this
already too.

Jeff



signature.asc
Description: This is a digitally signed message part
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


[tor-dev] Anycast Exits (related : Special-use-TLD support)

2015-09-30 Thread Jeff Burdges

I have attached below the second half of the Special-Use TLD proposal
that discusses how a local name service tool contacts a peer-to-peer
application running on an exit node.

There is nothing specific here to providing name services; any
peer-to-peer application might potentially want an anycast style
gateway from Tor to its own network.

At the same time, this proposal is *very* hackish since Tor seems
almost able to provide the same functionality with judiciously chosen
ExitPolicy lines, and a bunch of work on the application's side.  And
maybe that's really the right way to do it in the end.

There is no discussion here of dealing with bad exit gateways for
protocols that Tor does not even know about, presumably that requires
some thought as well.  I donno if Tor combats exits doing highly
targeted DNS manipulation all that well either though.

Jeff









Filename: xxx-anycast-exit.txt
Title: Anycast Exit
Author: Jeffrey Burdges
Created: 28 September 2015
Status: ?
Implemented-In: ?

Abstract

  Provide an anycast operation to help bootstrap Tor aware peer-to-peer
  applications.

Overview

  Peer-to-peer protocols must define a method by which new peers locate
  the existing swarm, but available techniques remain rather messy.
  We propose that Tor provide an "anycast" facility by which peer-to-peer
  applications built on top of Tor can easily find their peers using the
  full aka useless descriptors.

Server Side

  We propose an AnycastExit Tor configuration option 

AnycastExit <protocol> <ip>:<port>

  Here protocol must be a string consisting of letters, numbers, and 
  underscores. 

  There are two changes to Tor's behavior resulting from this option :  

  First, Tor adds the line "ACE <protocol> <ip>:<port>" to the node's
  full descriptor.  

  Second, Tor allows connections to ip:port as if the torrc contains :
ExitPolicy allow <ip>:<port>
  As ExitPolicyRejectPrivate defaults to 1, these policies should be
  allowed even if the ip lies in a range usually restricted.  
  In particular localhost and 127.0.0.1 are potentially allowed.

Client Side

  Users enable anycast usage by adding the configuration line 

FetchUselessDescriptors 1

  Software queries the Anycast lines in the full descriptor by sending
  the Tor control port the line :

GETINFO anycast/<protocol>/<index>

  This query returns 
250-anycast/<protocol>/<index>="<ip>.<nodeid>.exit:<port>"
  where <nodeid> is a node identity for a node whose full descriptor
  contains the line "ACE <protocol> <ip>:<port>".

  After receiving such a query for anycast nodes supporting <protocol>,
  Tor builds, and later maintains, a list of nodes whose full descriptor
  contains an "ACE <protocol> .." line in lexicographic order according
  to <nodeid>.  Tor returns the <index>th node from this list.
  Also, if <index> exceeds the number of nodes with ACE <protocol>
  lines, then an error is returned.

  Clients contact the anycast server <ip>.<nodeid>.exit on port <port>.
  As AllowDotExit defaults to off, applications should use the Tor
  control port to request a circuit to that particular exit using
  MapAddress :
MapAddress <ip>=<ip>.<nodeid>.exit
  After this, the peer-to-peer application can connect to <ip> over
  the Tor socks port.

  MapAddress usually produces a four hop circuit, but many peer-to-peer
  applications, including name service providers, can accept the small 
  additional latency.

Future Work

  Tor directory authorities could aggregate the lists of anycast supporting
  nodes, so that clients do not need to download the full descriptors. 

  AnycastExit could support UNIX domain sockets.

Hackish Alternative

  In principle, the ExitPolicy line produced by AnycastExit might suffice
  if both doing so bypasses ExitPolicyRejectPrivate, and the port could
  identify the protocol. 

  Additionally, an application could parse the cached-descriptors* files
  themselves to locate exits with the desired exit policies.

Acknowledgments

  Based on discussions with George Kadianakis, Christian Grothoff.  
  Indirectly based on discussions between Christian Grothoff and 
  Jacob Appelbaum about accessing the GNU Name System over Tor.


signature.asc
Description: This is a digitally signed message part
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Special-use-TLD support

2015-09-29 Thread Jeff Burdges
On Tue, 2015-09-29 at 00:59 +, Jeremy Rand wrote:

> Do I infer correctly that the main intention of this is to decrease
> the possibility of attack by a Sybil attack on the Namecoin network,
> by making the Namecoin peer selection process have similar properties
> to Tor relay selection (which is relatively Sybil-resistant)?  (And I
> guess this would also eliminate issues where a Tor client connects to
> a Namecoin peer who also happens to be his/her guard node.)  If so, I
> think I cautiously agree that this may be a good idea.  (I haven't
> carefully considered the prospect, so there may be problems
> introduced
> that I haven't thought about -- but from first glance it sounds like
> an improvement over what Namecoin does now, at least in this
> respect.)

I have not thought specifically about Namecoin's threat model.  If the
DNS providing peer runs on the exit node then it reduces the threat
model to the same threat model as DNS.

> The issue I do see is that SPV validation doesn't work well unless
> you
> ask multiple peers to make sure that you're getting the chain with
> the
> most PoW.  So I gather that this would require connecting to Namecoin
> peers running on multiple exit nodes.  I don't think that's
> problematic, but it would have to be taken into account.

This is no different from validation for existing DNS results.  Tor
attempts to prevent this by building a list of bad exits, but it's
challenging to catch an exit that attacks only one website.

You could check multiple peers but that costs you some anonymity.  If
you use many .bit names, this might expose the fact that you use
Namecoin to your guard.  

There are many Tor programs like Ricochet and Pond, and many websites,
that should be detectable by a sufficiently dedicated guard, so that's
not a compelling reason not to check multiple exits, but it requires
consideration.

One could maybe design the Namecoin shim to obtain general-but-relevant
information from multiple exits running the Namecoin client, but only
obtain the actual result from one exit.  Or maybe that's reinventing
the SPV client.

Jeff



signature.asc
Description: This is a digitally signed message part
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Special-use-TLD support

2015-09-28 Thread Jeff Burdges
On Mon, 2015-09-28 at 16:26 -0400, Roger Dingledine wrote:
> On Mon, Sep 28, 2015 at 03:20:47PM +0200, Jeff Burdges wrote:
> > I proposed that Tor implement NameService rules using UNIX domain
> > sockets, or ports, since that's how GNUNet works, but maybe Tor
> > should
> > instead launch a helper application it communicates with via stdin
> > and
> > stdout.  I donno if that'll work well on Windows however.
> 
> If you're to be running a second program that does the "resolves",
> then
> I think you should really think about adding a third program that
> talks
> to Tor on the control port and does all of these rewrites via the
> control
> protocol without needing any further Tor modifications. (If you
> wanted,
> you could make these second and third programs be just one program.)
> 
> This is I believe how Jesse's "OnioNS" tool works at present: you
> connect
> to the control port (e.g. via a Stem script), tell Tor that you want
> to
> decide what to do with each new stream (set __LeaveStreamsUnattached
> to
> 1), and then you let Tor pick (attachstream to 0) for all the streams
> except the special ones. When you see a new special stream, you do
> the
> lookup or resolve or whatever on your side, then REDIRECTSTREAM the
> stream to become the new address, then yield control of the stream
> back
> to Tor so Tor picks a circuit for it.
> 
> The main downside here is that you need to run a new Tor controller.
> But
> if you're already needing to run a separate program, you should be
> all set.
> 
> What am I missing?

Very interesting.  Yes, this sounds reasonable in the short run.  In
the longer run, there are several people with an interest in
externalizing Tor's DNS handling, which changes things.  I'll check out
OnioNS and discuss this with people at the meeting.  
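
For my own notes, a minimal sketch of that flow with Stem; the ".bit"
check and the resolver are placeholders for whatever external name
service one wires in, and error handling is omitted :

    # Sketch of the controller approach Roger describes: leave new streams
    # unattached, rewrite the special ones with REDIRECTSTREAM, then hand
    # every stream back to Tor (circuit id 0) so it picks a circuit.
    from stem.control import Controller, EventType

    def resolve_via_namecoin(name):
        return "example.onion"          # placeholder resolver

    with Controller.from_port(port=9051) as ctrl:
        ctrl.authenticate()
        ctrl.set_conf("__LeaveStreamsUnattached", "1")

        def on_stream(event):
            if event.status != "NEW":
                return
            if event.target_address.endswith(".bit"):
                newaddr = resolve_via_namecoin(event.target_address)
                ctrl.msg("REDIRECTSTREAM %s %s" % (event.id, newaddr))
            ctrl.msg("ATTACHSTREAM %s 0" % event.id)

        ctrl.add_event_listener(on_stream, EventType.STREAM)
        input("Press enter to quit\n")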


In the meantime, I updated the previous proposal based on comments
here.  Also, I removed the NameSubstitution idea when I remembered
MapAddress.





Filename: xxx-special-use-tld-support.txt
Title: Special-Use TLD Support
Author: Jeffrey Burdges
Created: ?? Sept 2015
Status: Draft
Implemented-In: ?

Abstract

  Support Special-Use TLDs in Tor via external Domain Name System (DNS)
  suppliers, such as the GNU Name System and Namecoin.

Background

  Special-use TLD supplier software integrates with the host operating
  system's DNS layer so that other software resolves the special-use TLD
  identically to standard DNS TLDs.  On Linux for example, a Special-Use
  TLD package could create a plugin for the Name Service Switch (NSS)
  subsystem of the GNU C Library.  

  Tor cannot safely use the local system's own DNS for name resolution,
  as doing so risks deanonymizing a user through their DNS queries.  
  Instead Tor does DNS resolution at a circuit's exit relay.  It follows
  that Tor users cannot currently use special-use TLD packages in a safe
  manner.  

  In addition, there are projects to add public key material to DNS, like
  TLSA records and DNSSEC, that necessarily go beyond NSS.  

Design

  We denote by N an abstract name service supplier package.
  There are two steps required to integrate N safely with Tor :  

  Of course, N must be modified so as to (a) employ Tor for its own
  traffic and (b) to use Tor in a safe way.  We deem this step outside
  the scope of the present document since it concerns modifications to N
  that depend upon N's design.  We caution however that peer-to-peer 
  technologies are famous for sharing unwanted information and producing
  excessively distinctive traffic profiles, making (b) problematic.
  Another proposal seeks to provide rudimentary tools to assist with (a).

  We shall instead focus on modifying Tor to route some-but-not-all DNS
  queries to N.  For this, we propose a NameService configuration option
  that tells Tor where to obtain the DNS record lying under some specific
  TLD.

  Anytime Tor resolves a DNS name ending in a Special-Use TLD appearing
  in a NameService configuration line, Tor makes an RPC request for
  the name record using the given UNIX domain socket or address and port.

  We should allow CNAME records to refer to .onion domains, and to 
  regular DNS names, but care must be taken in handling CNAME records
  that refer to Special-Use TLDs handled by NameService lines.
  Tor should reject CNAME records that refer to .exit domains.

Configuration

  We propose the following Tor configuration option :

NameService [.]<dnspath> <socketspec>
  [noncannonical] [timeout=num]
  [-- service specific options]

  We require that <socketspec> be either the path to a UNIX domain socket
  or an address of the form IP:port.  We also require that each <dnspath>
  be a string conforming to RFC 952 and RFC 1123 sec. 2.1. 
  In other words, a dnspath consists of a series of labels separated by
  periods . with each label of up to 63 characters consisting of the
  letters a-z in a case insensitive manner, the digits 0-9, and the
  hyphen -, but hyphens may not appear at the beginning or end of labels.

Re: [tor-dev] Special-use-TLD support

2015-09-28 Thread Jeff Burdges
On Sun, 2015-09-27 at 19:47 +0200, Jeff Burdges wrote:
...
> Configuration
> 
...
> NameService [.]dnspath socketspec
>   [noncannonical] [timeout=num]
>   [-- service specific options]
> 
>   We require that socketspec be either the path to a UNIX domain
> socket 
>   or an address of the form IP:port.  We also require that that each
>   *dnspath be a string conforming to RFC 952 and RFC 1123 sec. 2.1.
...

I asked Yawning today if this part should (a) use a socket to a process
that already exists, or (b) exec a helper program that communicates
over a pipe tied to its stdio.  He mentioned the PT spec does both,
which I interpreted as going either way at the time.  In fact, the PT
spec literally requires both stdio and a socket, which sounds overly
complex for this. 

Anyways, if we wanted to to exec a helper program, the configuration
might look something like :
NameService [.]dnspath [opts ..] exec helper_prog [opts ..]
We could simply speak to the helper over a pipe tied to its stdio.

Jeff


signature.asc
Description: This is a digitally signed message part
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Special-use-TLD support

2015-09-28 Thread Jeff Burdges

Special-use TLDs is the official name, according to someone or other attached 
to DNS.  I'd rate that good enough.


Afaik, there is actually no coherent name for these naming systems meant
to compete with DNS.  Arguably, the most logical approach would be to
genericize the DNS trademark and accept the resulting ambiguity, but
that's not realistic.  In places, I've used terms like name service,
and similar, because it's largely unambiguous how Tor would use a
naming scheme.  I'd personally consider calling Namecoin or GNS simply
"name systems" to be an exaggeration though, maybe "server name
consistency system" or something.  I don't think this really impacts
the spec since the DNS terminology is well defined.





On Mon, 2015-09-28 at 15:32 -0300, hellekin wrote:
> On 09/27/2015 02:47 PM, Jeff Burdges wrote:
> > 
> > This is the first of two torspec proposals to help Tor 
> > work with Sepcial-Use TLDs, like the GNU Name system or
> > NameCoin.  The second part will be an anycast facility.   - Jeff
> > 
> 
> Jeff, I'd be careful using DNS (as in Ze DNS) vocabulary in
> specifications that are not concerned with, well, the Domain Name
> System.  The DNS should be considered like SMTP or HTTP: its own
> protocol with its own rules, yada yada.
> 
> A TLD (Top-Level Domain) therefore is only TLD when it's used with
> the
> DNS root servers.  You cannot talk about "alternate Domain Name
> Systems
> (DNS) providers", since DNS is unique, global, and served by the
> official DNS root servers.  However, you can certainly mention
> "alternate global name systems", and choke a suit or three by being
> legitimately precise to the point you might be considered arrogant
> (but
> thoughtful) in doing so.
> 
> I'll wait for the next version of this draft and a bit more available
> time for further comments.  Glad you're joining the club of polishing
> text to speak to genuine volunteers in making the Internet cool
> again.
>  ;o)
> 
> ==
> hk
> 
> ___
> tor-dev mailing list
> tor-dev@lists.torproject.org
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev

signature.asc
Description: This is a digitally signed message part
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Special-use-TLD support

2015-09-28 Thread Jeff Burdges
On Mon, 2015-09-28 at 00:05 +0200, Tom van der Woerdt wrote:

> Questions :
>  * are those directives handled on the relay or the client? If relay,
> how will the client know which node to talk to?

They route name resolution requests on the client to another piece of
software on the client.  That piece of software is responsible for
using Tor correctly, usually by being a thin shim that contacts a real
client running on a volunteer exit node.

>  * please don't add support for .exit here, external parties should
> never be able to lead users to that (and having cnames point at them
> would break that)

Yes .exit is banned from CNAME records for exactly this reason.

>  * what happens if two directives compete for the same TLD?
> Especially if these are handled at the relay...

NameService lines should explicitly specify the TLDs to which they
refer.  If Namecoin wants to manage .coin but the torrc only gives it
.bit then it only gets .bit.

Jeff

signature.asc
Description: This is a digitally signed message part
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Special-use-TLD support

2015-09-28 Thread Jeff Burdges
On Sun, 2015-09-27 at 22:31 +, Jeremy Rand wrote:
> On 09/27/2015 05:47 PM, Jeff Burdges wrote:
> > 
> > This is the first of two torspec proposals to help Tor work with
> > Sepcial-Use TLDs, like the GNU Name system or NameCoin.  The second
> > part will be an anycast facility.   - Jeff
> 
> Hi Jeff,
> 
> Thanks for working on this; Namecoin is definitely interested in this
> effort.  I have one comment.  SPV-based Namecoin clients will, under
> some circumstances, generate network traffic to other Namecoin P2P
> nodes containing names being looked up.  To avoid linkability, stream
> isolation should be used so that different Namecoin lookups go over
> different Tor circuits if the lookups correspond to TCP streams that
> go over different Tor circuits.  (Also, the choice of Namecoin nodes
> to peer with should be different for each identity.)  Therefore, it
> seems to me that there should be a mechanism for Tor to provide
> stream
> isolation information to the naming systems that it calls, along with
> "new identity" commands.
> 
> The above issue doesn't affect full Namecoin clients, or SPV Namecoin
> clients that download the full unspent domain name set.  I don't know
> enough about the GNU Name System to know how this issue affects it,
> if
> at all.
> 
> Thoughts on this?

Yes.  I distrust running p2p applications not specifically designed for
Tor over Tor.  The GNU Name System will therefore run the DHT process
on volunteer Tor exit nodes, much like how DNS queries are handled by
exit nodes.  

Imho, Namecoin should similarly develop a Tor Namecoin shim client that
contacts special SPV Namecoin clients running on volunteer exit nodes. 
 I'm working on a second torspec proposal that adds an AnycastExit
option to simplify this. 

In the long term, there are obviously concerns about bad exit nodes,
especially if there are only like two exits supporting Namecoin or
GNS, but currently so few people use GNS or Namecoin that we can
probably ignore this. 

> Also, trivial spelling nitpick: "Namecoin" is typically spelled with 
> a lowercase "c", like "Bitcoin".

Thanks!

Jeff




signature.asc
Description: This is a digitally signed message part
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Special-use-TLD support

2015-09-28 Thread Jeff Burdges
On Sun, 2015-09-27 at 23:32 +0200, Tim Wilson-Brown - teor wrote:
> I have some questions about how NameSubstitution rules work in some
> edge cases:

In truth, I originally wrote the NameSubstitution rules bit for the
.gnu TLD.  In the end, Christian explained why that doesn't work,
mostly that the .gnu TLD should never query the network. 

I left NameSubstitution in as a discussion point, but it wouldn't
surprise me if NameSubstitution didn't quite suffice for any real
purposes.

It's probably best if one instead writes a simple tool called from a
NameService rule that provides NameSubstitution like functionality.

> Are multiple NameSubstitution rules applied in the order they are
> listed?
> 
> For example:
> NameSubstitution .com .net
> NameSubstitution .example.net .example.org
> 
> What does foo.example.com get transformed into?

In principle, one could apply the most specific (longest) rule, but..

My prejudice is that disjointness should be enforced for anything in
the torrc.  Otherwise, one must worry more about attackers modifying
torrc files. 

> Are trailing periods significant?

I believe they do not make sense.  DNS names may not end in a period,
so this is covered by the references I gave, not sure if I speced it
correctly though.

> Are leading periods significant?

I doubt the leading periods matter, but they make rules marginally
easier to read.  

> Are duplicate rules significant?

No.


> Is there a length limit for the final query?
> (DNS names are limited to 255 characters.)

> For example:
> NameSubstitution .a .<254 characters>
> 
> What does <253 characters>.a get transformed into?

Originally, I'd meant to propose 510 characters since I'd envisioned
blahblah.gnu being translated into blahblah.hash.zkey where .zkey gets
processed by GNS.  There is no need for that now, so I'm ambivalent. 


As I said, we should probably drop the NameSubstitution rules in favor
of an external application that one calls via a NameService rule, but
this brings up a larger question :

I proposed that Tor implement NameService rules using UNIX domain
sockets, or ports, since that's how GNUNet works, but maybe Tor should
instead launch a helper application it communicates with via stdin and
stdout.  I donno if that'll work well on Windows however.

Jeff


signature.asc
Description: This is a digitally signed message part
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


[tor-dev] Special-use-TLD support

2015-09-27 Thread Jeff Burdges

This is the first of two torspec proposals to help Tor work with Special-Use 
TLDs, like the GNU Name System or Namecoin.  The second part will be an anycast 
facility.   - Jeff





Filename: xxx-special-use-tld-support.txt
Title: Special-Use TLD Support
Author: Jeffrey Burdges
Created: 20 Sept 2015
Status: Draft
Implemented-In: ?

Abstract

  Support Special-Use TLDs in Tor via external Domain Name System (DNS) 
  suppliers, such as the GNU Name System and Namecoin.

Background

  Special-use TLD supplier software integrates with the host operating
  system's DNS layer so that other software resolves the special-use TLD
  identically to standard DNS TLDs.  On Linux for example, a Special-Use
  TLD package could create a plugin for the Name Service Switch (NSS)
  subsystem of the GNU C Library.  

  Tor cannot safely use the local system's own DNS for name resolution,
  as doing so risks deanonymizing a user through their DNS queries.  
  Instead Tor does DNS resolution at a circuit's exit relay.  It follows
  that Tor users cannot currently use special-use TLD packages in a safe
  manner.  

  In addition, there are projects to add public key material to DNS, like
  TLSA records and DNSSEC, that necessarily go beyond NSS.  

Design

  We denote by N an abstract name service supplier package.
  There are two steps required to integrate N safely with Tor :  

  Of course, N must be modified so as to (a) employ Tor for its own
  traffic and (b) to use Tor in a safe way.  We deem this step outside
  the scope of the present document since it concerns modifications to N
  that depend upon N's design.  We caution however that peer-to-peer 
  technologies are famous for sharing unwanted information and producing
  excessively distinctive traffic profiles, making (b) problematic.
  Another proposal seeks to provide rudimentary tools to assist with (a).

  We shall instead focus on modifying Tor to route some-but-not-all DNS
  queries to N.  For this, we propose a NameService configuration option
  that tells Tor where to obtain the DNS record lying under some specific
  TLD.

  Anytime Tor resolves a DNS name ending in a Special-Use TLD appearing
  in a NameService configuration line, Tor makes an RPC request for
  the name record using the given UNIX domain socket or address and port.

  We should allow CNAME records to refer to .onion domains, and to 
  regular DNS names, but care must be taken in handling CNAME records
  that refer to Special-Use TLDs handled by NameService lines.
  Tor should reject CNAME records that refer to .exit domains.

Configuration

  We propose two Tor configuration options :

NameSubstitution [.]source_dnspath [.]target_dnspath
NameService [.]dnspath socketspec
  [noncannonical] [timeout=num]
  [-- service specific options]

  We require that socketspec be either the path to a UNIX domain socket 
  or an address of the form IP:port.  We also require that that each
  *dnspath be a string conforming to RFC 952 and RFC 1123 sec. 2.1.
  In other words, a dnsspec consists of a series of labels separated by
  periods . with each label of up to 63 characters consisting of the 
  letters a-z in a case insensitive mannor, the digits 0-9, and the
  hyphen -, but hyphens may not appear at the beginning or end of labels.

  NameSubstitution rules are applied only to DNS query strings provided
  by the user, not CNAME results.  If a trailing substring of a query
  matches source_dnspath then it is replaced by target_dnspath.

  NameService rules route matching queries to the appropriate name service
  supplier software.  If a trailing substring of a query matches dnspath,
  then a query is sent to the socketspec using the RPC protocol described
  below.  Of course, NameService rules are applied only after all the
  NameSubstitution rules. 

  There is no way to know in advance if N handles caching itself, much 
  less if it handles caching in a way suitable for Tor.  
  Ideally, we should demand that N return an appropriate expiration
  time, which Tor can respect without harming safety or performance.  
  If this proves problematic, then configuration options could be added
  to adjust Tor's caching behavior.

  Seconds is the unit for the timeout option, which defaults to 60 and
  applies only to the name service supplier lookup.  Tor DNS queries, 
  or attempts to contact .onion addresses, that result from CNAME records
  should be given the full timeout allotted to standard Tor DNS queries,
  .onion lookups, etc.

  Any text following -- is passed verbatim to the name service supplier
  as service specific options, according to the RPC protocol described 
  below.
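
  As a purely hypothetical example (neither the option nor the addresses
  and flags shown exist today), a torrc using this might contain :

    NameService .bit 127.0.0.1:9152 timeout=30 -- --spv
    NameService .gnu /var/run/gnunet/nss.sock timeout=30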

Control Protocol

  An equivalent of NameService and NameSubstitution should be added to
  the Tor control protocol, so that multiple users using the same Tor
  daemon can have different name resolution rules. 

RPC protocol

  We require an RPC format that communicates two values,
first any service speci

Re: [tor-dev] Proposal: Single onion services

2015-09-08 Thread Jeff Burdges

As an aside, we chatted briefly about the naming options for single
onion services or whatever at CCC camp.  Amongst those present, there
was no strong love for any existing naming proposal.  An interesting
naming idea surfaced however : 

We do not want people using these anyways unless they know what they're
doing.  So why not use a scary opaque acronym to tempt anyone interested
to actually read the documentation? 

An obvious choice is Directly Peered Rendezvous Services, or DPR
Services for short, with the config option name DPRService. 

It flows relatively well in conversation and contains no grammatical
mess.  "Single onion services" is grammatically amiss.  I suppose
single-layer onion service might work, but that's still confusing
logically.  Importantly, it does not encourage improper usage like
Fast-but-not-hidden services might. 

I suppose one might use Direct Peered Rendezvous Service too, not 100%
sure on the optimal form for typical English grammar there.

Best,
Jeff

p.s.  At camp, we first discussed the name Direct Point Rendezvous
Services, but the acronym would be the same.   

;)



p.s.  I'm thinking of this now partially because the discussion between
Yawning, Mike, and David seems to rest partially on such services being
handled like introduction points or like rendezvous points.  I'd just
assumed they needed to be handled like rendezvous points to defend
against attacks like Yawning mentioned. 




signature.asc
Description: This is a digitally signed message part
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Hash Visualizations to Protect Against Onion Phishing

2015-08-20 Thread Jeff Burdges

A per browser salt is a wonderful idea.  It's basically impossible to
fake even small key poems or whatever if you cannot guess their salt.  

Just some thoughts :

- The salt should be a text field users can interact with easily.  It
could be displayed prominently in the extension's config, or even
with the key poem or whatever.  

- Initially (no pre-existing salt) the salt should be set to contain a
few dictionary words, maybe using a call to key poem routines.  Using
dictionary words makes it easier for users to copy the salt between
machines.

- Ideally, the initial salt should be set using machine specific
information, like CPU ids or default mac address or whatever, instead of
a temporary random number, so a clean reinstall of the operating system
should produce the same salt. 

- Upgrades should attempt to preserve the salt.  Tails should attempt to
persist the salt too, but ideally TBB should produce the same salt when
run under Tails on the same machine without persistent storage.  If
machine identifiers cannot be used, then maybe Tails could set the salt
when the boot image is created.

- Documentation should recommend that users who fear targeted attacks
set a stronger salt and advise that all users share the same salt across
all their machines.  There could be buttons to create a random strong
salt and reset the salt to the machine's default.  

- Ideally, the documentation should explain that if you want to compare
representations with another person then you should use a temporary
salt, so as not to reveal your usual salt.
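
To make the salt idea concrete, a minimal sketch (Python rather than
extension JavaScript; the word list, word count, and scrypt parameters
are arbitrary illustrative choices, not a proposal) :

    # Salted "key poem" sketch: scrypt the onion address together with a
    # per-browser salt, then map the digest to a few dictionary words.
    import hashlib

    WORDS = ["apple", "breeze", "copper", "dune", "ember", "fjord",
             "glacier", "harbor", "iris", "juniper", "kelp", "lantern"]

    def key_poem(onion_address, salt, n_words=4):
        digest = hashlib.scrypt(onion_address.encode(), salt=salt.encode(),
                                n=2**14, r=8, p=1, dklen=2 * n_words)
        # Two bytes per word, so a larger real word list still fits.
        return " ".join(WORDS[int.from_bytes(digest[2*i:2*i+2], "big") % len(WORDS)]
                        for i in range(n_words))

    print(key_poem("abcdefghijklmnop.onion", salt="glacier harbor kelp"))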


On Thu, 2015-08-20 at 15:47 +, Yawning Angel wrote:
> It was a hypothetical example.  If we're willing to go with the visual
> equivalent of key poems (which is what my suggestion roughly
> corresponds to) with a per-client secret to prevent brute forcing, then
> there's no reason why we couldn't let the user choose a visual
> representation they're most comfortable with.

Yeah, if there are multiple representations then users could simply
select the ones they like.  If someone wants to do a research project on
visually recognizing key material then they can add an option to the
extension.  I'd expect card hands, mahjong tiles, etc. suck for most
users, btw.

I wonder if memes like lolcats might make a good compromise for the
different memorability constraints.  It could be as simple as an image
database with associated keywords, a dictionary, and word transition
probabilities.

If we've a per browser salt, then one could simply select a unix fortune
cookie or something similarly entertaining but low entropy.

> > Perhaps a notification "You've never visited this site before" that
> > pushes down from the top like some other notifications might go a long
> > way?
> 
> People would likely complain about storing "did access foo.onion in the
> past" type information to disk.  I could argue for/against "well, use a
> per-client keyed bloom filter, false positive rate", but depending
> on the adversary model, people will probably (rightfully) be uneasy at
> the thought of persisting even that.
> 
> The moment people are willing to store "I accessed this onion in the
> past", I'm inclined to think "this is functionally equivalent to the
> user bookmarking said onion".

Yes exactly.  In fact, if you've added the bookmark star to FireFox's
toolbar then it changes color when you visit a bookmarked page, so you
could already do this by bookmarking the site's root and briefly
returning to it to check.  

Another idea : If the user has added the bookmark star to FireFox's
toolbar, and bookmarked anyplace on the site but not the exact page,
then the bookmark star changes to another color to indicate the site has
been visited before, and lets users quickly find all the bookmarks on
that site.

Jeff





signature.asc
Description: This is a digitally signed message part
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Hash Visualizations to Protect Against Onion Phishing

2015-08-20 Thread Jeff Burdges

I first learned about key poems here : 
https://moderncrypto.org/mail-archive/messaging/2014/000125.html
If one wanted a more language agnostic system, then one could use a
sequence of icons, but that's probably larger than doing a handful of
languages.

I once encountered an article claiming that SSH randomart doesn't work
well.  I'm not sure about that article's location or correctness however.
Random art might work if you used icons, ala pink panda riding a blue
horse eating a green lion, but this grows harder to implement.  In fact,
algorithms that invent or merge human faces might work for people who
remember faces well. 

If we believe key poems might work, then we could build a firefox
extension that does them, and make the code quite readable.  Anyone with
more interest in doing the human research could pick up where we leave
it.

Jeff

p.s.  I briefly mentioned this general application of key poems in the
Future Onion Addresses and Human Factors thread too.  I don't recall
that thread pursuing the matter though.





On Thu, 2015-08-20 at 16:49 +0300, George Kadianakis wrote:
> Hello,
> 
> this mail lays down an idea for a TBB UI feature that will make it slightly
> harder to launch phishing attacks against hidden services. The idea is based 
> on
> hash visualizations like randomart [0] and key poems:
> 
> ---
>   |   o=.   |
>   |o  o++E  |
>   |   + . Ooo.  |
>   |+ O B..  |
>   | = *S.   |
>   |  o  |
>   | |
>   | |
>   | |
>     ---
> 
> The idea came up during a discussion with Jeff Burdges in CCC camp.
> This is a heavily experimental idea and there are various unresolved research
> and UX issues here, but we hope that we will motivate further study.
> 
> The aim is to make it harder to phish people who click untrusted onion links.
> Think of when you click an onion link on a forum, or when a friend sends you 
> an
> onion URL.
> 
> The problem is that many people don't verify the whole onion address, they 
> just
> trust the onion link or verify the first few characters. This is bad since an
> attacker can create a hidden service with a similar onion address very
> easily. There are currently ready-made scripts that people have been using to
> launch impersonation attacks against real hidden services.
> 
> The suggested improvement here is for TBB to provide additional visual
> fingerprints that people can use to verify the authenticity of a hidden 
> service.
> So when TBB connects to a hidden service, it uses the onion address to 
> generate
> a randomart or key poem and makes them available for the user to examine.
> 
> Here is an experimental mockup just to illustrate the idea:
> 
> https://people.torproject.org/~asn/tbb_randomart/randomart_mockup.png
> 
> The idea is that you hash (or scrypt!) the onion address, and then you make a
> visualization using the hash output. This forces the phishing attacker to
> generate a similar onion address _and_ similar hash visualizations. We assume
> that this will be harder to do than simply faking a similar onion address, 
> which
> increases the startup cost for such an attacker.
> 
> This is the basic concept. Now, here are some thoughts:
> 
> - What hash visualizations can we use here?
> 
>   The SSH randomart is an obvious idea, and we can even have it colored since 
> we
>   are not in a terminal.
> 
>   Then we have the key poem idea, which generates a poem from a key. I don't
>   think this is used by a deployed system currently.
> 
>   Then we could imagine music-based hash fingerprints, where the onion address
>   corresponds to a small tune that is played when you visit an onion.
> 
>   Then there are even more crazy ideas like the "Dynamic Security Skins"
>   paper [1]. So for example, TBB could generate a unique UI theme for each
>   hidden service.
>   
> - Of course, none of the above ideas is perfect. Actually most of them suck.
> 
>   For example, when it comes to randomart, many people are colorblind to some
>   degree and most people are not good at recognizing subtle color differences.
> 
>   Furthermore, given a randomart, it's easy [2] to generate similar
>   randomarts. However we hope that most of those similar randomarts will not
>   correspond to a public key that is similar to the original one. 
> 
>   When it comes to key poems, given even a moderately sized dictionary leads 
> to
>   pretty big poems. 

Re: [tor-dev] Future Onion Addresses and Human Factors

2015-08-09 Thread Jeff Burdges
On Sun, 2015-08-09 at 07:26 +, Jeremy Rand wrote:
> > Isn't the 51% attack down to a 20ish% attack now?
> 
> The estimate I did was based on Namecoin hashrate, not Bitcoin
> hashrate.  I assume that's the distinction you're referring to, though
> you're not really making it clear.

No.  I haven't kept up to date on blockchain technologies as they never
looked particularly great to me, but..

There was a succession of research results that lowered the 51% attack on
bitcoin into the 30s % range and eventually into the 20s % range.  

I donno if OnioNS is susceptible to these attacks, as its threat model
is slightly different.

> I think you will find that a number of users are unlikely to
> exclusively use bookmarks and not use web links.  

There is no need for a domain on links within a single site.  It's true
that cross site links are common enough that phishing attacks can trick
users into typing their password into a facebookfakeblah.onion url.

Jeff




signature.asc
Description: This is a digitally signed message part
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Future Onion Addresses and Human Factors

2015-08-08 Thread Jeff Burdges

> I did a
> rough calculation about a year ago of how much it would cost to buy
> ASIC miners that could 51%-attack Namecoin, and it came out to just
> under a billion USD.  

Isn't the 51% attack down to a 20ish% attack now?  

> Of course, a real-world attacker would (in my
> estimate) probably be more likely to try to compromise existing miners
> (via either technical attacks, extortion/blackmail/bribery, or legal
> pressure).  

Isn't 50ish% controlled by one organization already?  Is it not a
particularly tight-knit organization or something?

Isn't the real world attack that you simply isolate a namecoin user from
the wider namecoin network?  That's cheap for state level attackers.  

I'd imagine OnioNS should have a massive advantage here because Tor has
pinned directory authorities, who presumably help OnioNS accurately
identify honest quorum servers. 

> An end user will be much more likely to notice when a
> Namecoin or OnioNS name changes, compared to when a .onion name
> changes.  So this isn't really a clear win for .onion -- it's a
> tradeoff, and which is more "secure" depends on which end users we're
> talking about, and what threat model we're dealing with.  

This is false.  Users must enter the .onion address from somewhere.  

If they go through a search engine, then yes the .onion address itself
is hard to remember, especially if they visit many sites.  Key poems
address this.  

If however they employ bookmarks, copy from a file, etc., and something
like proposal 244 gets adopted, then an attacker must hack the user's
machine, hack the server, or break a curve25519 public key.

Yes, a search engine covering .onion addresses should ask users to
bookmark desirable results, as opposed to revisiting the search engine,
mostly for the protection of the search engine.




signature.asc
Description: This is a digitally signed message part
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Future Onion Addresses and Human Factors

2015-08-08 Thread Jeff Burdges



> On Sat, Aug 08, 2015 at 11:36:35AM +, Alec Muffett wrote: 
> > 4) from Proposal 244, the next generation addresses will probably be
> > about this long:
> > 
> > a1uik0w1gmfq3i5ievxdm9ceu27e88g6o7pe0rffdw9jmntwkdsd.onion
> > 
> > 5) taking a cue from World War Two cryptography, breaking this into
> > banks of five characters which provide the eyeball a point upon
> > which to rest, might help:
> > 
> > a1uik-0w1gm-fq3i5-ievxd-m9ceu-27e88-g6o7p-e0rff-dw9jm-ntwkd-sd.onion

We could make the .onion URL human-recognizable, but not actually
human-typeable.  I like key poems : 
https://moderncrypto.org/mail-archive/messaging/2014/000125.html

In fact, there is no need to conceptualize this as a URL except for
convenience.  We could provide a .onion key poem library so that TBB,
etc. can display them when you mouse over the URL bar.
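
For concreteness, a rough Python sketch of how such a library might render a
key as words.  Everything here is illustrative: the placeholder wordlist, the
11-bits-per-word split, and the handling of the leftover bits are my
assumptions, not the scheme from the thread linked above.

    import hashlib

    # Hypothetical 2048-entry wordlist; any fixed, versioned list would do.
    WORDLIST = [f"word{i:04d}" for i in range(2048)]

    def key_poem(pubkey: bytes) -> str:
        """Render a 32-byte key as words, 11 bits per word."""
        assert len(pubkey) == 32
        bits = int.from_bytes(pubkey, "big")
        words = []
        for i in range(23):                  # 23 * 11 = 253 of the 256 bits
            shift = 256 - 11 * (i + 1)
            words.append(WORDLIST[(bits >> shift) & 0x7FF])
        # The remaining 3 bits, plus a checksum, would go into a final word
        # in a real scheme; omitted to keep the sketch short.
        return " ".join(words)

    if __name__ == "__main__":
        print(key_poem(hashlib.sha256(b"example onion key").digest()))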


> > 8) being inconsistent (meaning: “we extract the second label and
> > expunge anything which is not a base32 character”, ie: that
> > with-hyphens and without-hyphens) may help or hinder, we’re not
> > really sure; it would permit mining addresses like:
> > 
> > agdjd-recognisable-word-kjhsdhkjdshhlsdblahblah.onion #
> > illustration purposes only
> > 
> > …which *looks* great, but might encourage people to skimp on
> > comparing [large chunks of] the whole thing and thereby enable point
> > (2) style passing-off.

There are a couple of choices for mappings for the non-essential characters
in base32 encodings.  I believe the usual one was designed to make
spelling fuck impossible or some stupidity like that.  I think the one
GNUnet uses was selected to provide as much flexibility as possible.
It's no worse for typo-squatting if u and v map to the same value and a
few similar things.
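
Purely as illustration, a decoder can fold the confusable characters together
before looking anything up.  The alphabet and alias table below are
assumptions in the spirit of Crockford-style base32, not GNUnet's actual
tables.

    # Crockford-style base32 alphabet (no i, l, o, u in canonical output).
    ALPHABET = "0123456789abcdefghjkmnpqrstvwxyz"

    # Decoding aliases: visually confusable characters share a value.
    ALIASES = {"o": "0", "i": "1", "l": "1", "u": "v"}

    def decode_forgiving(text: str) -> bytes:
        text = text.lower().replace("-", "")            # ignore grouping hyphens
        value, nbits = 0, 0
        for ch in text:
            ch = ALIASES.get(ch, ch)
            value = (value << 5) | ALPHABET.index(ch)   # ValueError on junk
            nbits += 5
        nbytes = nbits // 8
        return (value >> (nbits - 8 * nbytes)).to_bytes(nbytes, "big")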


> > 9) appending a credit-card-like “you typed this properly” extra few
> > characters checksum over the length might be helpful (10..15 bits?)
> > - ideally this might help round-up the count of characters to a full
> > field, eg: XXX in this?
> > 
> > 
> > a1uik-0w1gm-fq3i5-ievxd-m9ceu-27e88-g6o7p-e0rff-dw9jm-ntwkd-sdXXX.onion

Yes, checksums help enormously.
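
As a toy example, here is the banks-of-five formatting with a 15-bit
truncated-hash checksum appended, in Python.  The checksum construction and
the 3-character length are my own picks for illustration, not from any
proposal.

    import hashlib

    B32 = "abcdefghijklmnopqrstuvwxyz234567"   # the .onion base32 alphabet

    def checksum(body: str) -> str:
        """15-bit checksum = top bits of SHA-256, 5 bits per character."""
        bits = int.from_bytes(hashlib.sha256(body.encode()).digest()[:4], "big")
        return "".join(B32[(bits >> (27 - 5 * i)) & 0x1F] for i in range(3))

    def format_addr(addr: str) -> str:
        body = addr.removesuffix(".onion")              # Python 3.9+
        full = body + checksum(body)
        return "-".join(full[i:i + 5] for i in range(0, len(full), 5)) + ".onion"

    def verify(typed: str) -> bool:
        full = typed.removesuffix(".onion").replace("-", "")
        body, check = full[:-3], full[-3:]
        return check == checksum(body)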

Interestingly, there are situations where you're entering a password,
but the machine should add a checksum while remaining human readable.  

In those situations, there is a clever suggestion by Christian Grothoff:
have an algorithm tweak the human-readable passphrase until some hash is
zero.  
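
I read that suggestion roughly as follows; this is my own reconstruction, not
Grothoff's actual construction.  The tool tweaks the phrase, here by appending
a counter word, until the hash of the whole phrase has a fixed number of
leading zero bits, and a verifier rejects any phrase whose hash does not.

    import hashlib
    from itertools import count

    def hash_ok(phrase: str, zero_bits: int = 12) -> bool:
        h = int.from_bytes(hashlib.sha256(phrase.encode()).digest(), "big")
        return h >> (256 - zero_bits) == 0

    def add_implicit_checksum(passphrase: str, zero_bits: int = 12) -> str:
        """Tweak the phrase until its hash starts with `zero_bits` zeros;
        roughly 2**zero_bits attempts on average."""
        for n in count():
            candidate = f"{passphrase} {n}"
            if hash_ok(candidate, zero_bits):
                return candidate

    # A verifier just calls hash_ok(), so typos are caught without an
    # explicit, human-visible checksum field.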

Pond's PANDA key exchange is an example of when one should do this,
although Pond does not.

I doubt that's relevant for the .onion URL itself, but maybe worth
considering for vanity key poems or something. 


On Sat, 2015-08-08 at 09:05 -0400, Roger Dingledine wrote:
> I'm a fan:
> https://trac.torproject.org/projects/tor/ticket/15622
> 
> Though I fear my point in the ticket about the Host: header might be
> a good one.

A priori, "pet names" sounds vaguely like the GNU Name System (GNS),
meaning short names consistent for the user, but not globally unique. 

In GNS, there is a .short.gnu domain so that after you visit facebook's
blah.zkey then facebook.short.gnu becomes a meaningful domain.  I'd
worry however that, if your anti-facebook friend Jake sets his preferred
short name to facebook, and you visit his zone first, then he gets
facebook.short.gnu on your machine.  TOFU world problems.  ;)


On Sat, 2015-08-08 at 08:44 -0400, Paul Syverson wrote:
> One is to produce human meaningful names in association with onion
> addresses. Coincidentally Jesse has just announce to this same list a
> beta-test version of OnionNS that he has been working on for the Tor
> Summer of Privacy. See his message or
> 
> https://github.com/Jesse-V/OnioNS-literature

OnioNS has a relatively weak adversary model, like Namecoin, right?
It's certainly plausible that's good enough for most users, including
all financial users, but maybe not everyone.  

There are several approaches to improving upon that adversary model :

- Pinning in the Name System - If a name X ever points to a hidden
service key, then X cannot be registered to point to a different hidden
service key for 5 years.  Alternatively, if our name system involves
another private key, then X cannot be registered under another private
key for 5 years.  (A rough sketch in code follows this list.)

- Pinning/TOFU in the browser - If my browser ever visits X then it
locks either the .onion key and/or the TLS key permanently.
Alternatively, pin both but allow only one at a time to change.  Sounds bad for
Tails, etc. though.

- Awareness - Just yell and scream about how OnioNS, Namecoin, etc. are
nowhere near as secure as a .onion address.  And especially tell anyone
doing journalism, activism, etc. to use full .onion addresses.

- Key Poems maybe?
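
A rough sketch of the name-pinning rule from the first item above, in Python.
The 5-year window comes from the text; the in-memory registry and the refusal
semantics are just illustration.

    import time

    PIN_SECONDS = 5 * 365 * 24 * 3600          # "5 years" from the rule above

    class PinningRegistry:
        """Toy registry: once a name points at a key, re-registering it
        under a different key is refused until the pin expires."""

        def __init__(self):
            self._pins = {}                    # name -> (key, pinned_at)

        def register(self, name: str, key: bytes, now: float = None) -> bool:
            now = time.time() if now is None else now
            pinned = self._pins.get(name)
            if pinned is not None:
                old_key, pinned_at = pinned
                if old_key != key and now - pinned_at < PIN_SECONDS:
                    return False               # pinned to a different key
            self._pins[name] = (key, now)
            return True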

Jeff



signature.asc
Description: This is a digitally signed message part
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Linting the ' iff ' ism...

2015-07-16 Thread Jeff Burdges
On Thu, 2015-07-16 at 14:37 -0400, grarpamp wrote:
> On Thu, Jul 16, 2015 at 1:22 PM, Tom van der Woerdt  wrote:
> > https://en.wikipedia.org/wiki/If_and_only_if
> 
> Aha, documentation, use presumed consistent, carry on, thanks.

Actually the "use it freely" reference there describes iff as a
borderline case of w.l.o.g., s.t., etc. and suggests that iff be
explicitly defined when used in non-mathematical writing.

I avoid it even in mathematical articles myself on the grounds that it's
poor erasure coding and easy to misread.  There is a tangible
advantage in writing faster on a blackboard that disappears in print.

Jeff




signature.asc
Description: This is a digitally signed message part
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] (Draft) Proposal 224: Next-Generation Hidden Services in Tor

2015-05-28 Thread Jeff Burdges

On 28 May 2015, at 12:59, Michael Rogers  wrote:
> 
> I wasn't thinking about the sizes of the sets so much as the probability
> of overlap. If the client picks n HSDirs or IPs from the 1:00 consensus
> and the service picks n HSDirs or IPs from the 2:00 consensus, and the
> set of candidates is fairly stable between consensuses, and the ordering
> is consistent, we can adjust n to get an acceptable probability of
> overlap. But if the client and service (or client and IP) are picking a
> single RP, there's no slack - they have to pick exactly the same one.

Yes.  If I recall, 224 picks HSDirs by selecting node ids nearest to various 
hashes, so that missing HSDirs elsewhere cause no problems.  We could lower the 
failure probability by dividing the candidate IPs into slices by typical availability in 
consensuses, while retaining this property, like you just proposed doing to 
rate IPs by bandwidth.  Aren’t availability computations done anyways for 
granting nodes additional flags?
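
To make the missing-relay property concrete, a minimal sketch of "nearest node
IDs to a hash" selection.  Prop 224 actually walks a sorted hash ring; plain
nearest-by-distance below is a simplification, and the digest sizes are
arbitrary.

    import hashlib

    def closest_relays(relay_ids, target: bytes, k: int = 3):
        """Pick the k relays whose identity digests lie nearest the target."""
        t = int.from_bytes(target, "big")
        return sorted(relay_ids,
                      key=lambda rid: abs(int.from_bytes(rid, "big") - t))[:k]

    relays = [hashlib.sha1(f"relay{i}".encode()).digest() for i in range(100)]
    target = hashlib.sha256(b"blinded key || period || shared random").digest()[:20]
    picked = closest_relays(relays, target)
    # If one of `picked` drops out of the consensus, the same computation
    # simply returns the next-nearest relay, so both sides still mostly agree.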

In any case, I suggested this as a way to save half a hop thereby allowing the 
HS to partially pin its second hop.  I certainly do not know if the threat of 
clients repeatedly dropping circuits to expose an HS’s guard actually warrants 
the amount of work this approach entails.  If so, then maybe it’d warrant some 
failure probability too, especially if a retry doesn’t cost much or risk 
exposing anything.  I don’t know if HS’s partially pinning their second hop 
creates a traffic pattern that exposes them more to a GPA either.

In any case, there is a simpler approach :  The client sends (IP, t, x, y) to 
the HS where t is the client’s consensus, x is a random number, and y = 
hash(x++c++RP++HS) where again c is the global random number in 224.  An HS 
would refuse connections if IP is very far from y, or y is not derived 
correctly.  If IP is near y but not the closest match, and the closest match 
has existed for a while, then the HS would merely log the suspicious.  If 
hash() is hard to reverse, this proves that y is fairly random, so the HS can 
have a bit more trust in the IP being selected randomly.  I suppose that'd 
justify the HS changing its second hop less often.
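
A sketch of the hidden-service-side check, following the formula exactly as
written above.  SHA-256 standing in for hash(), the ring-distance metric, and
the slack threshold are my assumptions.

    import hashlib

    def ring_distance(a: bytes, b: bytes) -> int:
        return abs(int.from_bytes(a, "big") - int.from_bytes(b, "big"))

    def hs_accepts(ip_id, x, y, c, rp_id, hs_id, candidates, slack) -> bool:
        """Refuse if y was not derived correctly or if the offered IP lies far
        from y; a real implementation would also log the 'near but not
        closest' case rather than silently accept it."""
        if hashlib.sha256(x + c + rp_id + hs_id).digest() != y:
            return False
        best = min(ring_distance(cand, y) for cand in candidates)
        return ring_distance(ip_id, y) <= best + slack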

Of course, one could always just make the IP the client’s third hop, analogous 
to when using an exit node, thus giving the HS a full 4 hops to control.  I do 
not actually understand why that’s not the situation anyways.  

Jeff



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] (Draft) Proposal 224: Next-Generation Hidden Services in Tor

2015-05-28 Thread Jeff Burdges

On 28 May 2015, at 11:45, Michael Rogers  wrote:

> On 12/05/15 20:41, Jeff Burdges wrote:
>> Alright, what if we collaboratively select the RP as follows :
>> 
>> Drop the HSDirs and select IPs the way 224 selects HSDirs, like John
>> Brooks suggests. Protect HSs from malicious IPs by partially pinning
>> their second hop, ala (2).
>> 
>> An HS opening a circuit to an IP shares with it a new random number
>> y. I donno if y could be a hash of an existing shared random value,
>> maybe, maybe not.
>> 
>> A client contacts a hidden service as follows :
>> - Select an IP for the HS and open a circuit to it.
>> - Send HS a random number w, encrypted so the IP cannot see it.
>> - Send IP a ReqRand packet identifying the HS connection.
>> 
>> An IP responds to ReqRand packet as follows :
>> - We define a function h_c(x,y) = hash(x++y++c), or maybe some
>> hmac-like construction, where c is a value dependent upon the current
>> consensus.
>> - Generate z = h_c(x,y) where x is a new random number.
>> - Send the client z and send the HS both x and z.
>> 
>> An HS verifies that z = h_c(x,y).
>> 
>> Finally, the client and HS determine the RP to build the circuit
>> using h_c(z,w) similarly to how 224 selects HSDirs.
> 
> One small problem with this suggestion (hopefully fixable) is that
> there's no single "current consensus" that the client and IP are
> guaranteed to share:
> 
> https://lists.torproject.org/pipermail/tor-dev/2014-September/007571.html

If I understand, you’re saying the set of candidate RPs is larger than the set 
of candidate IPs which is larger than the set of candidate HSDirs, so 
disagreements about the consensus matter progressively more.  Alright, fair 
enough.

An IP is quite long lived already, yes?  There is no route for the HS to tell 
the client its consensus time, but the client could share its consensus time, 
assuming that information is not sensitive elsewhere, so the HS could exclude 
nodes not considered by the client.  It's quite complicated though, so maybe 
not the best approach.

Jeff



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] (Draft) Proposal 224: Next-Generation Hidden Services in Tor

2015-05-12 Thread Jeff Burdges

On 12 May 2015, at 10:39, Michael Rogers  wrote:
> Something like this was suggested last May, and a concern was raised
> about a malicious IP repeatedly killing the long-term circuit in order
> to cause the HS to rebuild it. If the HS were ever to rebuild the
> circuit through a malicious middle node, the adversary would learn the
> identity of the HS's guard.
> 
> I don't know whether that's a serious enough threat to outweigh the
> benefits of this idea, but I thought it should be mentioned.

Just to clarify :

In any HS redesign, the issue is that a malicious IP could always tear down a circuit 
to force selecting a new middle node.  If that’s done enough, then the middle 
node could be pushed into a desired set of malicious middle nodes.  A malicious IP 
is potentially prevented from doing this in 224 because the HS could choose 
another IP to publish to the HSDirs if circuits to an IP keep collapsing.  There 
is no way for the HS to choose another IP in John Brooks' proposal though.

As I understand it, an IP is the fourth hop from the HS so the IP won’t see the 
middle node directly, but the malicious IP and malicious middle node set can do 
a correlation attack.  It’s an advanced attack, but very doable.  In 
particular, the malicious IP and middle node set need not coordinate in real 
time.  I donno if anything prevents a large malicious node set from surveying 
many HS guards.

Two alternatives come to mind :

(1)  A HS could simply possess more IPs and drop a suspicious one or two.  I 
donno if this places an undue burden on the network.  Note the spec/code to drop 
a suspicious IP is almost as complex as the code to replace a suspicious IP.

(2)  A HS could trust its IP less by using a longer circuit and partially 
pinning the second hop (first middle node) similarly to how it pins guards now. 
 Again this places slightly more burden on the network, but not necessarily 
much.  This might be quite simple to do.


Also, what prevents a malicious client and malicious middle node set from doing 
the same correlation attack only over the rendezvous circuit rather than the IP 
circuit?  Is there anything besides partially pinning the second hop ala (2) 
that’d achieve that?  Except, the rendezvous circuit carries heavy traffic, 
unlike the introduction circuit, so we’re presumably less willing to lengthen 
it.


Alright, what if we collaboratively select the RP as follows :

Drop the HSDirs and select IPs the way 224 selects HSDirs, like John Brooks 
suggests.  Protect HSs from malicious IPs by partially pinning their second 
hop, ala (2).

An HS opening a circuit to an IP shares with it a new random number y.  I donno 
if y could be a hash of an existing shared random value, maybe, maybe not.

A client contacts a hidden service as follows :
- Select an IP for the HS and open a circuit to it.
- Send HS a random number w, encrypted so the IP cannot see it.
- Send IP a ReqRand packet identifying the HS connection.

An IP responds to ReqRand packet as follows :
- We define a function h_c(x,y) = hash(x++y++c), or maybe some hmac-like 
construction, where c is a value dependent upon the current consensus.
- Generate z = h_c(x,y) where x is a new random number.
- Send the client z and send the HS both x and z.

An HS verifies that z = h_c(x,y).

Finally, the client and HS determine the RP to build the circuit using h_c(z,w) 
similarly to how 224 selects HSDirs.
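
Pulling the steps together, a rough end-to-end sketch in Python.  SHA-256
stands in for hash(), a nearest-node-id rule stands in for the 224 HSDir
selection, and all byte lengths are arbitrary.

    import hashlib, os

    def h_c(a: bytes, b: bytes, c: bytes) -> bytes:
        return hashlib.sha256(a + b + c).digest()    # h_c(x, y) = hash(x++y++c)

    def pick_rp(index: bytes, relays):
        t = int.from_bytes(index, "big")
        return min(relays, key=lambda r: abs(int.from_bytes(r, "big") - t))

    c = hashlib.sha256(b"consensus-dependent value").digest()
    relays = [hashlib.sha256(f"relay{i}".encode()).digest() for i in range(50)]

    y = os.urandom(32)            # HS -> IP when the intro circuit is built
    w = os.urandom(32)            # client -> HS, encrypted past the IP
    x = os.urandom(32)            # IP's fresh randomness
    z = h_c(x, y, c)              # IP -> client, and (x, z) -> HS

    assert z == h_c(x, y, c)      # the HS's verification step
    assert pick_rp(h_c(z, w, c), relays) == pick_rp(h_c(z, w, c), relays)
    # Client and HS compute the same h_c(z, w) and hence the same RP.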

In this way, both client and HS are confident that the RP was selected 
randomly, buying us an extra hop on the rendezvous circuit that the HS can use 
to partially pin its second hop on RP circuits.  In other words, the HS can 
select its third hop more like it’d currently select its middle node.

There are attacks on this scheme if the IP can (a) break the encryption used 
for w and (b) very quickly attack the hash function used in h_c, but that’s 
probably fine.

Jeff

p.s.  I donno if all this hop pinning creates observable traffic 
characteristics that lead to other attacks.



signature.asc
Description: Message signed with OpenPGP using GPGMail
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] (Draft) Proposal 224: Next-Generation Hidden Services in Tor

2015-04-26 Thread Jeff Burdges

Interesting idea!  

I was toying with another unrelated idea which seems worth asking about now :

Can we shorten the circuits used for hidden services any?   At present, both 
the introduction point (IP) connection and the rendezvous point (RP) connection 
have seven (interior) hops, right?
  
I donno if there is a reason to keep the IP circuit at 7 hops.  Could we drop 
it to 6 hops by making the IP be the hidden server’s third hop?  Is there a 
name for the third hop from one side in a hidden service connection?  
quasi/interior-exit maybe?

We could presumably drop the RP connection, which is the actual circuit 
carrying traffic, from 7 hops to 6 hops by making the RP be the hidden 
service's client’s third hop (interior-exit), right?  

Could we shorten the RP connection down to 5 hops?  Idea: the hidden service's 
client and the IP engage in shared random number generation using commit and 
reveal.  I’m not quite familiar enough with the IP connection, but maybe the 
hidden server itself could be involved too, even if not through commit and 
reveal.  

In any case, we select the RP using this collaboratively generated random 
number.  Now this RP could be the third hop from both the hidden service's 
server and its client, because the hidden service's client and the IP generated this 
number together, and the hidden service selected the RP. 
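
For the commit-and-reveal step itself, something like the following; the hash,
the salt sizes, and the combining rule are illustrative only.

    import hashlib, os

    def commit(value: bytes, salt: bytes) -> bytes:
        return hashlib.sha256(salt + value).digest()

    # Each side picks randomness and sends a commitment first, so neither
    # can choose its value after seeing the other's.
    r_client, s_client = os.urandom(32), os.urandom(16)
    r_ip, s_ip = os.urandom(32), os.urandom(16)
    c_client, c_ip = commit(r_client, s_client), commit(r_ip, s_ip)

    # Reveal phase: check each opening against the earlier commitment.
    assert commit(r_client, s_client) == c_client
    assert commit(r_ip, s_ip) == c_ip

    shared = hashlib.sha256(r_client + r_ip).digest()
    # `shared` then indexes into the consensus to pick the RP, e.g. with a
    # nearest-node-id rule.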

I have not done any research to figure out if shortening hidden service 
connections to 5 or 6 hops either improves performance or costs much anonymity, 
but the collaborative random number generation trick for shortening the circuit 
seemed worth considering.

Jeff





On 26 Apr 2015, at 18:14, John Brooks  wrote:

> It occurred to me that with proposal 224, there’s no longer a clear reason
> to use both HSDirs and introduction points. I think we could select the IP
> in the same way that we plan to select HSDirs, and bypass needing
> descriptors entirely.
> 
> Imagine that we select a set of IPs for a service using the HSDir process in
> section 2.2 of the proposal. The service connects to each and establishes an
> introduction circuit, identified by the blinded signing key, and using an
> equivalent to the descriptor-signing key (per IP) for online crypto.
> 
> The client can calculate the current blinded public key for the service and
> derive the list of IPs as it would have done for HSDirs. We likely need an
> extra step for the client to request the “auth-key” and “enc-key” on this IP
> before building an INTRODUCE1 cell, but that seems straightforward.
> 
> The IPs end up being no stronger as an adversary than HSDirs would have
> been, with the exception that an IP also has an established long-term
> circuit to the service. Crucially, because the IP only sees the blinded key,
> it can’t build a valid INTRODUCE1 without external knowledge of the master
> key.
> 
> The benefits here are substantial. Services touch fewer relays and don’t
> need to periodically post descriptors. Client connections are much faster.
> The set of relays that can observe popularity is reduced. It’s more
> difficult to become the IP of a targeted service.
> 
> One notable loss is that we won’t be able to use legacy relays as IP, which
> the current proposal tries to do. Another difference is that we’ll select
> IPs uniformly, instead of by bandwidth weight - I think this doesn’t create
> new problems, because being a HSDir is just as intensive.
> 
> Could that work? Is it worth pursuing?
> 
> - John
> ___
> tor-dev mailing list
> tor-dev@lists.torproject.org
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev

___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Brainstorming ideas for controller features for improved testing; want feedback

2015-03-20 Thread Jeff Burdges

On 20 Mar 2015, at 12:33, Jeff Burdges  wrote:

> I could imagine an “onion token” variant of ephemeral hidden services in 
> which the person who initiates the connection does not know what they’re 
> connecting to, like sending a message to a mailbox. Example :
> 
> Alice wants Bob to send her a message asynchronously by anonymously dropping 
> it into a numbered mailbox system, but Alice only wants to check one mailbox 
> for all her contacts, so she does not want Bob to be able to reveal her 
> mailbox. 
> 
> Rough outline : 
> - Alice gives Bob a "token" that contains a bunch of pre-encrypted tor 
> extends, data, etc. frames and some additional data such as symmetric keys.  
> Alice goes offline. 

Actually Alice building this token would presumably involve the mailbox system 
too since it’s operating as a hidden service itself. 

> - Bob sends Alice’s mailbox a message by building a circuit to a specified 
> machine, encrypting each of the frames supplied by Alice for all of his 
> circuit except the endpoint because Alice already did that encryption, and 
> sending them.  
> - These frames continue building a circuit from that endpoint to wherever 
> Alice wants it to go. 
> - Bob encrypts his data frames using first the additional data supplied by 
> Alice so that they can traverse this longer circuit that he only understands, 
> and then encrypts those for the portion of the circuit he understands. 
> - Alice logs back in, contacts the mailbox hidden service, and retrieves her 
> messages, including Bob’s message. 
> 
> Optional : 
> - Amongst the frames Bob needs to use to set up the circuit might be one that 
> causes re-encryption so that even if an adversary hacked both Bob and the 
> mailbox system they cannot search the mailbox system for Bob’s message. 
> 
> Of course “onion tokens” would not live forever since Alice’s token fails to 
> describe a valid circuit if any server she selected goes down, but maybe it'd 
> provide a nice short-term asynchronous delivery option for IM systems like 
> Ricochet. 
>   https://github.com/ricochet-im/ricochet
> 
> Best,
> Jeff

___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Brainstorming ideas for controller features for improved testing; want feedback

2015-03-20 Thread Jeff Burdges

I could imagine an “onion token” variant of ephemeral hidden services in which 
the person who initiates the connection does not know what they’re connecting 
to, like sending a message to a mailbox.  Example :

Alice wants Bob to send her a message asynchronously by anonymously dropping it 
into a numbered mailbox system, but Alice only wants to check one mailbox for 
all her contacts, so she does not want Bob to be able to reveal her mailbox. 

Rough outline : 
- Alice gives Bob a "token" that contains a bunch of pre-encrypted tor extends, 
data, etc. frames and some additional data such as symmetric keys.  Alice goes 
offline. 
- Bob sends Alice’s mailbox a message by building a circuit to a specified 
machine, encrypting each of the frames supplied by Alice for all of his circuit 
except the endpoint because Alice already did that encryption, and sending 
them.  
- These frames continue building a circuit from that endpoint to wherever Alice 
wants it to go. 
- Bob encrypts his data frames using first the additional data supplied by 
Alice so that they can traverse this longer circuit that he only understands, 
and then encrypts those for the portion of the circuit he understands. 
- Alice logs back in, contacts the mailbox hidden service, and retrieves her 
messages, including Bob’s message. 

Optional : 
- Amongst the frames Bob needs to use to set up the circuit might be one that 
causes re-encryption so that even if an adversary hacked both Bob and the 
mailbox system they cannot search the mailbox system for Bob’s message. 

Of course “onion tokens” would not live forever since Alice’s token fails to 
describe a valid circuit if any server she selected goes down, but maybe it'd 
provide a nice short-term asynchronous delivery option for IM systems like 
Ricochet. 
https://github.com/ricochet-im/ricochet
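
To make the outline above slightly more concrete, a toy Python sketch of what
Alice hands Bob and of Bob's wrapping step.  The "cipher" is a throwaway
SHA-256 keystream for illustration only, nothing like Tor's actual relay
crypto, and all structures and sizes are my assumptions.

    import hashlib, os
    from dataclasses import dataclass
    from typing import List

    def keystream_xor(key: bytes, data: bytes) -> bytes:
        """Toy stream 'cipher' for illustration only, NOT real onion crypto."""
        out, counter = bytearray(), 0
        while len(out) < len(data):
            out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
            counter += 1
        return bytes(a ^ b for a, b in zip(data, out))

    @dataclass
    class OnionToken:
        """What Alice hands Bob: frames she already onion-encrypted for the
        part of the circuit only she knows, plus keys for Bob's own data."""
        prebuilt_frames: List[bytes]       # extend/relay cells for Alice's half
        payload_keys: List[bytes]          # Bob encrypts his message under these

    def bob_sends(token: OnionToken, bob_hop_keys: List[bytes], message: bytes):
        # Wrap the message for Alice's (unknown to Bob) half of the circuit ...
        for k in reversed(token.payload_keys):
            message = keystream_xor(k, message)
        # ... then wrap everything, prebuilt frames included, for Bob's hops.
        wrapped = []
        for cell in token.prebuilt_frames + [message]:
            for k in reversed(bob_hop_keys):
                cell = keystream_xor(k, cell)
            wrapped.append(cell)
        return wrapped                      # handed to Bob's first hop

    token = OnionToken([os.urandom(498) for _ in range(3)],
                       [os.urandom(32) for _ in range(3)])
    cells = bob_sends(token, [os.urandom(32) for _ in range(3)], b"hello alice")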

Best,
Jeff





On 20 Mar 2015, at 11:55, Nick Mathewson  wrote:

> Hi!  I've got an end-of-month deliverable to flesh out as many good
> ideas here as I can, and I'd appreciate feedback on what kind of
> features it would be good to add to the controller protocol in order
> to better support testing.
> 
> More ideas would be most welcome.
> 
> Yes, some of these ideas are probably foolish or pointless or
> half-baked or useless or even dangerous; this is a brainstorming
> exercise, not a declaration of intent.  The goal right now is to
> generate a lot of ideas and thoughts now, and to make decisions about
> what to build later.
> 
> 
> 
> 
> IDEAS
> =
> 
> 
> 1. Step-by-step hidden service connections
> 
>   Add the ability to create connections to hidden services step by
>   step, to best
> 
>   What's necessary here is commands to:
>  * Establish a rendezvous point on a given circuit.
>  * Construct and send an introduce2 cell on a given circuit.
>  * Realize that a rendezvous circuit has been constructed.
> 
> 
> 2. Send a single cell on a circuit
> 
>   (TESTING ONLY)
> 
>   For fuzzing and low-level testing purposes, it would be handy to be
>   able to send a single cell on a tor circuit.
> 
>   This might be better to expose via a low-level modular API than via
>   the control port.
> 
> 3. Intercept cell by cell on a circuit
> 
>   (TESTING ONLY)
> 
>   For fuzzing, testing, and debugging purposes, it might be handy for
>   a controller to be able to observe data cell by cell on a circuit of
>   interest.
> 
>   This might be better to expose via a low-level modular API than via
>   the control port.
> 
> 4. Send a single cell on a connection.
> 
>   (TESTING ONLY)
> 
>   As 2, but for connections.  Note that we might even, for testing,
>   expose this at a sub-cell level.
> 
> 5. Intercept all cells on a connection
> 
>   (TESTING ONLY)
> 
>   As 2, but for connections.
> 
> 6. Plug-in to handle a relay or other command.
> 
>   Right now, all Tor's features need to be baked into Tor; it's not
>   easy to write extensions.  We could change that by having the
>   controller able to intercept particular relay or extension commands
>   and act accordingly.  This could be used for prototyping new
>   features, etc.
> 
> 7. Force a given protocol on a given connection
> 
>   We could add a feature to restrict what protocols can be negotiated
>   on a given connection we create.  This could help us better test our
>   protocols for interoperability.
> 
> 8. Examine fine-grained connection detail.
> 
>   There are many data available for a given connection (such as
>   fine-grained TLS information) that are not currently exposed on the
>   GETINFO interface.  We could make most of this available for testing,
>   pending security analysis.
> 
> 9. Examine cache in detail
> 
>   In the past we've seen crazy issues with our descriptor caching
>   code.  It might be good to expose for testing information about
>   where exactly descriptors are stored, what attributes are set on
>   them, and so on.  We could also expose events for cache compaction
>   and discarded expired

Re: [tor-dev] RFC: Tor Messenger Alpha

2014-12-22 Thread Jeff Burdges

This little session at 31c3 may be of interest to anyone working on Tor 
Messenger.  -Jeff

> From: Ximin Luo 
> Subject: [messaging] Session at 31C3
> Date: 22 December 2014 at 7:13:22 EST
> To: Messaging 
> 
> For those of you going to 31C3, we are going to meet up and find a side room 
> to have a discussion in:
> 
> https://events.ccc.de/congress/2014/wiki/Session:Messaging
> 
> We'll be focusing on technical points and specific areas of work to be doing 
> next year. I'll add potential agenda items over the next few weeks; please 
> feel free to make suggestions.
> 
> The suggested time slot is preliminary; please let me know off-thread if 
> you'd prefer a different slot.




___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] high latency hidden services

2014-12-09 Thread Jeff Burdges


I’m interested in helping out with this, mostly because we’ll want it for Pond 
: https://pond.imperialviolet.org/

I’ve read the alpha-mixing paper, but not the others, so I’ll check ’em out. 

Jeff



On 9 Dec 2014, at 16:40, Michael Rogers  wrote:

> Signed PGP part
> On 25/11/14 12:45, George Kadianakis wrote:
> > Yes, integrating low-latency with high-latency anonymity is a very
> > interesting problem. Unfortunately, I haven't had any time to
> > think about it.
> >
> > For people who want to think about it there is the "Blending
> > different latency traffic with alpha-mixing" paper. Roger mentioned
> > that one of the big challenges of making the paper usable with Tor,
> > is switching from the message-based approach to stream-based.
> >
> > Other potential papers are "Stop-and-Go-MIX" by Kesdogan et al.
> > and "Garbled Routing (GR): A generic framework towards unification
> > of anonymous communication systems" by Madani et al. But I haven't
> > looked into them at all...
> 
> Two of these papers were also mentioned in the guardian-dev thread, so
> I guess we're thinking along similar lines.
> 
> Alpha mixes and stop-and-go mixes are message-oriented, which as you
> said raises the question of how to integrate them into Tor. Judging by
> the abstract of the garbled routing paper (paywalled), it's a hybrid
> design combining message-oriented and circuit-oriented features. I
> think there might also be scope for circuit-oriented designs with
> higher latency than Tor currently provides, which might fit more
> easily into the Tor architecture than message-oriented or hybrid designs.
> 
> A circuit-oriented design would aim to prevent an observer from
> matching the circuits entering a relay with the circuits leaving the
> relay. In other words it would prevent traffic confirmation at each
> hop, and thus also end-to-end.
> 
> At least four characteristics can be used to match circuits entering
> and leaving a relay: start time, end time, total traffic volume and
> traffic timing. The design would need to provide ways to mix a circuit
> with other circuits with respect to each characteristic.
> 
> The current architecture allows start and end times to be mixed by
> pausing at each hop while building or tearing down a circuit. However,
> each hop of a given circuit must start earlier and end later than the
> next hop.
> 
> Traffic volumes can also be mixed by discarding padding at each hop,
> but each hop must carry at least as much traffic as the next hop (or
> vice versa for traffic travelling back towards the initiator). This is
> analogous to the problem of messages shrinking at each hop of a
> cypherpunk mix network, as padding is removed but not added.
> 
> There's currently no way to conceal traffic timing - each relay
> forwards cells as soon as it can.
> 
> Here's a crude sketch of a design that allows all four characteristics
> to be mixed, with fewer constraints than the current architecture.
> Each hop of a circuit must start earlier than the next hop, but it can
> end earlier or later, carry more or less traffic, and have different
> traffic timing.
> 
> The basic idea is that the initiator chooses a traffic pattern for
> each direction of each hop. The traffic pattern is described by a
> distribution of inter-cell delays. Each relay sends the specified
> traffic pattern regardless of whether it has any data to send, and
> regardless of what happens at other hops.
> 
> Whenever a relay forwards a data cell along a circuit, it picks a
> delay from the specified distribution, adds it to the current time,
> and writes the result on the circuit's queue. When the scheduler
> round-robins over circuits, it skips any circuits with future times
> written on them. If a circuit's time has come, the relay sends the
> first queued data cell if there is one; if not, it sends a single-hop
> padding cell.
> 
> Flow control works end-to-end in the same way as any other Tor
> circuit: single-hop padding cells aren't included in the circuit's
> flow control window.
> 
> When tearing down the circuit, the initiator tells each relay how long
> to continue sending the specified traffic pattern in each direction.
> Thus each hop may stop sending traffic before or after the next hop.
> 
> Even this crude design has multiple parameters, so its anonymity
> properties may not be easy to reason about. Even if we restrict
> traffic patterns to a single-parameter distribution such as the
> exponential, we also have to consider the pause time at each hop while
> building circuits and the 'hangover time' at each hop while tearing
> them down. But I think we can mine the mix literature for some ideas
> to apply - and probably some attacks against this first attempt at a
> design as well.
> 
> Cheers,
> Michael
> 
> ___
> tor-dev mailing list
> tor-dev@lists.torproject.org
> https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev

__