Re: [tor-dev] Even more notes on relay-crypto constructions

2012-10-09 Thread Robert Ransom
On 10/9/12, Nick Mathewson ni...@alum.mit.edu wrote:
 On Tue, Oct 9, 2012 at 12:31 PM, Robert Ransom rransom.8...@gmail.com
 wrote:
  [...]
   AES-CTR + HMAC-SHA512/256.

   AES-CTR + Poly1305.  Poly1305 requires nonces, but we can use a
   counter for those.

 Poly1305AES requires nonces.  Poly1305 itself requires
 (computationally-indistinguishable-from-) independent keys for each
 message.

 Right; I meant to say the output of a stream cipher used as a PRF, but
 what I meant isn't what I said.  Should've proofread more carefully.

  Please actually read the paper (see also
 http://cr.yp.to/papers.html#pema section 2.5 for how DJB uses Poly1305
 now).

 I read everything I cited.  If there is something I didn't understand,
 or something I missed, or something I got wrong, or something I said
 wrong, that doesn't mean I didn't read the paper.

 I am not going to be able to draw all the right inferences from every
 paper on my own, though.  And I am *definitely*, *frequently*, going
 to read papers, come up with questions, and post those questions here
 sometimes even when the paper, properly understood, would answer my
 questions.  If I were writing for publication, I'd want to keep all my
 ideas secret until I could answer all my questions and make sure all
 my answers were right, but I'm not writing for publication -- I am
 writing to get feedback from other people and learn things.

You have been talking about using Poly1305 truncated to 64 bits for
weeks.  It is truly not difficult to find Theorem 3.3 in the Poly1305
paper and figure out that the polynomial-evaluation part of Poly1305
truncated to its first (least significant) 64 bits has differential
probabilities of at most 2^(-34) for Tor-cell-size messages (and thus
Poly1305 has a probability of forgery of at most 2^(-34) when
truncated to its first 64 bits).
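
For concreteness, here is that arithmetic as a sketch (assuming the
8*ceil(L/16)/2^106 differential-probability bound of Theorem 3.3 for full
128-bit tags; truncating to 64 bits costs a factor of at most 2^64):

    from math import ceil, log2

    L = 512                                      # roughly a Tor cell, in bytes
    full_tag_bound = 8 * ceil(L / 16) / 2**106   # Theorem 3.3, 128-bit tag

    # Keeping only the first 64 tag bits multiplies the differential
    # probability by at most 2^(128-64) = 2^64.
    truncated_bound = full_tag_bound * 2**64

    print(log2(full_tag_bound))    # -98.0
    print(log2(truncated_bound))   # -34.0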



 This entire category of designs still has the problems that it had
 before: it leaks the length of the circuit to the last node in the
 circuit, and consumes a fairly large portion of each cell (for one
 truncated mac per hop).  Further, it's going to be a bit fiddly
 (though not impossible) to get it to work with rendezvous circuits.

 Explicitly leaking the circuit length is very very bad.

 Are there any non-obvious reasons why?  Does it lead to any better
 attacks than the obvious ones?

* Cannibalized circuits are one hop longer than non-cannibalized
circuits; knowing that a particular circuit was cannibalized leaks
information about the client's previous exit-selection behaviour.

* Some users might want to use alternate path-selection algorithms
(e.g. http://freehaven.net/anonbib/#ccs2011-trust ); they might not
want to leak the fact that they are using such algorithms to the
non-first relays in their entry-guard chains.


 The MAC-based designs do not mention how to prevent end-to-end tagging
 in the exit-to-client direction.  I suspect they won't try to prevent
 it at all.

 That's correct.  Why would it be an attack for the exit to send a
 covert signal to the client?  The exit already has valid means to send
 overt signals to the client, and the client Tor is presumed not to
 want to break its own anonymity.

* Mallory's malicious entry and exit relays suspect (based on timing
correlation) that they control both ends of a circuit, and want to
confirm that.  The exit tags a cell it sends to the client by XORing
it with a secret random constant; the entry XORs (what it believes is)
that cell with the same constant.  If the circuit survives, Mallory's
relays know that they control both ends.

* Mallory doesn't want to pay the bandwidth costs for non-compromised
traffic, so his/her/its entry and exit relays tag *every* circuit's
first exit-to-client cell with the same secret random constant.  (You
added a crapload of bugs to 0.2.3.x based on Mike Perry's claim that
someone is likely to actually perform an ‘attack’ of this form.)
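
A toy illustration of the first attack above (hypothetical Python, not Tor
code; it only shows why XOR-malleable relay crypto lets the tag pass through
the honest middle hops unnoticed):

    import os

    CELL_LEN = 509
    tag = os.urandom(CELL_LEN)    # secret constant shared by Mallory's relays

    def xor_tag(cell: bytes) -> bytes:
        # the exit applies the tag; the entry applies it again to remove it.
        # Every stream-cipher layer in between commutes with this XOR.
        return bytes(a ^ b for a, b in zip(cell, tag))

    cell = os.urandom(CELL_LEN)
    assert xor_tag(xor_tag(cell)) == cell   # circuit survives => confirmed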



 I've heard some suggestions that we should look into BEAR or LION
 instead, but I'm leery of the idea.  They are faster, requiring one
 fewer application of their primitives, but they are PRPs, not SPRPs --
 meaning I think that they don't give you Ind-CCA2 [Bear-Lion, section
 6].  I don't know a way to exploit this in the Tor network, but
 deviating from an ideal block cipher seems to me like one of those
 ideas that's practically begging for disaster.

 The notion of ‘Ind-CCA2’ is defined for public-key encryption systems;
 it doesn't make sense for permutations.

 Even LIONESS is not an ‘ideal block cipher’ -- that term has an actual
 definition, and it is a much stronger assumption than Tor's relay
 crypto needs.  (http://cs.nyu.edu/~dodis/ps/ic-ro.pdf and
 http://cs.nyu.edu/~dodis/ps/ext-cipher.pdf contain some potentially
 useful, potentially interesting information.)

 Keep in mind that Tor's current relay crypto breaks *completely* if
 the circuit-extension handshake ever produces the same session key
 twice, and some parts of Tor's protocols

Re: [tor-dev] resistance to rubberhose and UDP questions

2012-10-06 Thread Robert Ransom
On 10/6/12, Mike Perry mikepe...@torproject.org wrote:

 Yet still, as Roger and Robert point out, there are some serious
 questions about the viability of decentralized directory/consensus
 systems. Or, at least questions that sexified attack papers can make
 seem serious. (For example: I don't believe TorSK was actually broken
 beyond Tor's current properties...).

Torsk relied on a trusted party to sign relay descriptors.  Its goal
was to reduce the (asymptotic) total amount of directory
communication, not to remove the need for directory authorities.


Robert Ransom


Re: [tor-dev] Another key exchange algorithm for extending circuits: alternative to ntor?

2012-08-11 Thread Robert Ransom
On 8/10/12, Robert Ransom rransom.8...@gmail.com wrote:
 On 8/8/12, Nick Mathewson ni...@freehaven.net wrote:

 http://www.infsec.cs.uni-saarland.de/~mohammadi/owake.html

 Also, where does this paper specify that the participants must check
 that public-key group elements are not equal to the identity element?
 That's rather important, as Tor's relay protocol is likely to break if
 an attacker can force a server to open additional circuits to an
 attacker using the same key material that a legitimate client's
 circuit has.

It occurred to me later that the server would know that H(g^(x_1*b +
x_2*y), g^y) is ‘fresh’, and that the client would know that the
server would not use H(g^(x_1*b + x_2*y), g^(x_1), g^(x_2), g^y) as a
key with a party that presented some public key other than (g^(x_1),
g^(x_2)) to it, so I checked the paper for *that* defense against
tampering and found it in Figure 4.  (That is a critical detail of
this protocol, and not necessary to protect honest clients against key
reuse in ntor, so it should have been included in the specifications
of the protocol in Figures 3 and 5 and the first two paragraphs of
section 3.1.  Hopefully the authors will fix that too when they revise
their paper.)

I don't see any way to attack ‘Ace’ with the client and server
ephemeral public keys included in the data passed to the KDF.
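
To make that detail concrete, a toy sketch of the fixed key derivation (a
multiplicative group mod a prime stands in for the elliptic-curve group; all
names are illustrative):

    import hashlib, os

    p, g = 2**127 - 1, 7                     # toy group parameters

    def rand_exp() -> int:
        return int.from_bytes(os.urandom(16), 'big') % (p - 2) + 1

    b = rand_exp(); B = pow(g, b, p)         # server long-term keypair
    y = rand_exp(); Y = pow(g, y, p)         # server ephemeral
    x1, x2 = rand_exp(), rand_exp()          # client ephemerals
    X1, X2 = pow(g, x1, p), pow(g, x2, p)

    ss_client = (pow(B, x1, p) * pow(Y, x2, p)) % p    # g^(x1*b + x2*y)
    ss_server = (pow(X1, b, p) * pow(X2, y, p)) % p
    assert ss_client == ss_server

    # binding X1, X2, and Y into the KDF input is the defense discussed above
    key = hashlib.sha256(b''.join(v.to_bytes(16, 'big')
                                  for v in (ss_client, X1, X2, Y))).digest()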


Robert Ransom


Re: [tor-dev] Another key exchange algorithm for extending circuits: alternative to ntor?

2012-08-10 Thread Robert Ransom
On 8/8/12, Nick Mathewson ni...@freehaven.net wrote:

 http://www.infsec.cs.uni-saarland.de/~mohammadi/owake.html

Also, where does this paper specify that the participants must check
that public-key group elements are not equal to the identity element?
That's rather important, as Tor's relay protocol is likely to break if
an attacker can force a server to open additional circuits to an
attacker using the same key material that a legitimate client's
circuit has.


Robert Ransom


Re: [tor-dev] Another key exchange algorithm for extending circuits: alternative to ntor?

2012-08-09 Thread Robert Ransom
On 8/9/12, aniket kate aniketpk...@gmail.com wrote:
 Date: Thu, 9 Aug 2012 00:22:59 +
 From: Robert Ransom rransom.8...@gmail.com

 On 8/8/12, Nick Mathewson ni...@freehaven.net wrote:

 Michael Backes, Aniket Kate, and Esfandiar Mohammadi have a paper in
 submission called, An Efficient Key-Exchange for Onion Routing.
 It's meant to be more CPU-efficient than the proposed ntor
 handshake.  With permission from Esfandiar, I'm sending a link to the
 paper here for discussion.

 http://www.infsec.cs.uni-saarland.de/~mohammadi/owake.html

 What do people think?

 * If you finish my implementation of the Ed25519 group operations
 (which you would need in order to implement this protocol), you can
 use them to implement a signature-based protocol (specified as
 A-DHKE-1 in http://eprint.iacr.org/1999/012), which requires only one
 precomputed and one on-line exponentiation per protocol run on the
 server when implemented with a slightly modified version of Ed25519.
 (The client's performance is much less important than the server's.)

 I went through the A-DHKE-1 description (page 36 of Eprint 1999/012). I
 find that A-DHKE-1 also requires one online signature generation on
 the server side along with one online exponentiation. Therefore,
 A-DHKE-1 is computationally more expensive than the discussed protocol,
 and probably even than the ntor protocol, depending on the employed
 signature scheme.

For a short-term keypair, Ed25519 session secret keys can be generated
by applying a PRF to a counter; the corresponding public keys can be
computed offline.  This leaves only a few hash computations and a
multiplication in the exponent field to be done online for the
signature generation; neither of these is as expensive as EC point
multiplication.

The server's Diffie-Hellman keypair can be reused for more than one
protocol run (keeping it for up to 5 minutes is very unlikely to
reduce forward secrecy) if either (a) the server performs replay
detection for client keys or (b) the protocol includes the signature
system's session key in the material fed to the KDF (along with the DH
shared secret).

So, A-DHKE-1 can indeed be performed with one offline exponentiation
(for the Ed25519 session key) and one online exponentiation (to
compute the DH shared secret) on the server side.
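
A sketch of that division of labor (the mod-p exponentiation is only a
stand-in for the Ed25519 group operation; the PRF choice and all names are
illustrative):

    import hashlib, hmac

    p, g = 2**127 - 1, 7
    def scalarmult_base(k: int) -> int:      # stand-in for the EC operation
        return pow(g, k, p)

    master = b'K' * 32                       # server's secret PRF key (example)

    def session_secret(counter: int) -> int:
        d = hmac.new(master, counter.to_bytes(8, 'big'),
                     hashlib.sha256).digest()
        return int.from_bytes(d, 'big') % (p - 1)

    # offline: batch-precompute session keypairs from a PRF and a counter
    pool = [(session_secret(i), scalarmult_base(session_secret(i)))
            for i in range(64)]

    # online: take a ready keypair; signing then costs only hashing plus
    # arithmetic in the scalar field, with no new point multiplication
    sk, pk = pool.pop()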


Robert Ransom


Re: [tor-dev] Another key exchange algorithm for extending circuits: alternative to ntor?

2012-08-09 Thread Robert Ransom
On 8/9/12, Watson Ladd watsonbl...@gmail.com wrote:
 On Wed, Aug 8, 2012 at 8:22 PM, Robert Ransom rransom.8...@gmail.com
 wrote:
 On 8/8/12, Nick Mathewson ni...@freehaven.net wrote:

 Michael Backes, Aniket Kate, and Esfandiar Mohammadi have a paper in
 submission called, An Efficient Key-Exchange for Onion Routing.
 It's meant to be more CPU-efficient than the proposed ntor
 handshake.  With permission from Esfandiar, I'm sending a link to the
 paper here for discussion.

 http://www.infsec.cs.uni-saarland.de/~mohammadi/owake.html

 What do people think?

 * This paper has Yet Another ‘proof of security’ which says nothing
 about the protocol's security over any single group or over any
 infinite family of groups in which (as in Curve25519) the Decision
 Diffie-Hellman problem is (believed to be) hard.

 Do you think a DDH oracle cracks CDH in Curve25519? If no the theorem
 says something.

Do you think a DDH oracle for Curve25519 can be implemented efficiently?


Robert Ransom


Re: [tor-dev] Another key exchange algorithm for extending circuits: alternative to ntor?

2012-08-09 Thread Robert Ransom
On 8/8/12, Nick Mathewson ni...@freehaven.net wrote:

 Michael Backes, Aniket Kate, and Esfandiar Mohammadi have a paper in
 submission called, An Efficient Key-Exchange for Onion Routing.
 It's meant to be more CPU-efficient than the proposed ntor
 handshake.  With permission from Esfandiar, I'm sending a link to the
 paper here for discussion.

 http://www.infsec.cs.uni-saarland.de/~mohammadi/owake.html

 What do people think?

Ohhh-kay, after trying to make sense out of the details of their
security claims, I *hope* that they need to re-read and revise the
first few paragraphs of section 3.2.  (Perhaps while they're at it
they can replace the mentions of ‘ppt’ algorithms and attackers
throughout their paper with a useful claim about execution time.)


Robert Ransom


Re: [tor-dev] Is it possible to run private Exit nodes?

2012-08-08 Thread Robert Ransom
On 8/8/12, SiNA Rabbani s...@redteam.io wrote:
 I have been running private bridges for my VIP contacts for a long time.
 I use PublishServerDescriptor 0 to keep my bridges private.

 Is it possible to also run a private Exit node?

Yes, but (a) anyone who notices that it exists can use it, and (b) it
would be very risky for anyone to make their Tor client able to use a
private exit node (it could identify them as someone who knows that
that exit node exists, even if they aren't currently trying to use
it).


 What would happen, if I hard coded an exit into my torrc that is not
 published (if possible at all)?

Nothing.  You also need to feed its descriptor to Tor using the control port.
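
For example, a controller could do so roughly like this (a sketch only;
POSTDESCRIPTOR is documented in control-spec.txt, while the no-password
AUTHENTICATE and the file name here are illustrative assumptions):

    import socket

    s = socket.create_connection(('127.0.0.1', 9051))
    s.sendall(b'AUTHENTICATE ""\r\n')            # assumes no auth configured
    with open('exit-descriptor.txt', 'rb') as f:
        desc = f.read()
    # multi-line control command: descriptor body, terminated by a lone '.'
    s.sendall(b'+POSTDESCRIPTOR purpose=general\r\n' + desc + b'\r\n.\r\n')
    print(s.recv(4096).decode())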


Robert Ransom


Re: [tor-dev] Fwd: [tor-relays] tcmalloc in FreeBSD

2012-08-08 Thread Robert Ransom
On 8/8/12, Nick Mathewson ni...@alum.mit.edu wrote:
 On Wed, Aug 8, 2012 at 4:26 PM, Jordi Espasa Clofent
 jespa...@minibofh.org wrote:
 What line does the build process use when linking Tor?


 Hi again Nick,

 I have no idea, but I guess I could do the following:

 1. Stop the tor service
 2. deinstall the present port
 3. Check the tcmalloc option is enabled
 4. install the port and redirect all the output to a log file

 $ /usr/local/etc/rc.d/tor stop
 $ cd /usr/ports/security/tor
 $ make deinstall
 $ make showconfig
 $ make install > /tmp/tor_freebsd_port.log

 and send you the log.

 What do you think?

 Sounds pretty involved!  Is there really no way to recompile software
 from a freebsd port without uninstalling it?

Remove (or move) the port's ‘work’ directory and run ‘make’.

(You can't easily *install* an upgraded or recompiled FreeBSD port or
package without removing the old one, though.)


Robert Ransom


Re: [tor-dev] Another key exchange algorithm for extending circuits: alternative to ntor?

2012-08-08 Thread Robert Ransom
On 8/8/12, Nick Mathewson ni...@freehaven.net wrote:

 Michael Backes, Aniket Kate, and Esfandiar Mohammadi have a paper in
 submission called, An Efficient Key-Exchange for Onion Routing.
 It's meant to be more CPU-efficient than the proposed ntor
 handshake.  With permission from Esfandiar, I'm sending a link to the
 paper here for discussion.

 http://www.infsec.cs.uni-saarland.de/~mohammadi/owake.html

 What do people think?

* This paper has Yet Another ‘proof of security’ which says nothing
about the protocol's security over any single group or over any
infinite family of groups in which (as in Curve25519) the Decision
Diffie-Hellman problem is (believed to be) hard.

* The protocol requires that EC points be either transmitted in or
converted from and to a form in which point addition is efficient.
(ntor does not require point addition, so it can be implemented
initially using curve25519-donna.)

* If you finish my implementation of the Ed25519 group operations
(which you would need in order to implement this protocol), you can
use them to implement a signature-based protocol (specified as
A-DHKE-1 in http://eprint.iacr.org/1999/012), which requires only one
precomputed and one on-line exponentiation per protocol run on the
server when implemented with a slightly modified version of Ed25519.
(The client's performance is much less important than the server's.)


Robert Ransom


Re: [tor-dev] Brainstorming about steganographic transports

2012-07-26 Thread Robert Ransom
On 7/26/12, David Fifield da...@bamsoftware.com wrote:

 We can use appid-like signatures to make steganographic channels, if we
 assume that the signatures are a realistic reflection of actual use of
 the protocols. But: this relies critically on the accuracy of the model.
 (Specifically, does it match the censor's model? If he uses simple
 regular expressions for blocking, then we win; if not, then we probably
 lose.)

Not quite.  If the language your syntactic model was based on is
accepted by the particular regular expressions that the censor is
currently using, you win (until They change to new regexps).
Otherwise, you lose.

For example, https://code.google.com/p/appid/source/browse/trunk/apps/irc
accepts “UseR :BOGUS line containing only a username with too many
spaces\n\n\n\n\n\r”, but no real IRC client will generate “UseR” (or
the other protocol violation on that line).  If They are using appid
with that particular protocol-recognition file, you win; if They
validate IRC using better regexps, you lose.
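
As an illustration (both patterns below are hypothetical, written in the
spirit of appid's signatures rather than copied from them):

    import re

    lax_irc = re.compile(rb'(?im)^user +\S')           # loose syntactic model
    strict_irc = re.compile(rb'^USER \S+ \S+ \S+ :')   # closer to real clients

    bogus = b'UseR :BOGUS line containing only a username\n'
    assert lax_irc.search(bogus)         # a censor using the lax model: win
    assert not strict_irc.search(bogus)  # a censor validating properly: lose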


Robert Ransom


Re: [tor-dev] First-time tails/tor user feedback

2012-04-21 Thread Robert Ransom
On 2012-04-21, Andrew Lewman and...@torproject.is wrote:

 # Third issue: green onion

 3 of 8 people saw the green onion appear in the menu bar up top. These
 three people hovered over it and saw the 'Connected to the Tor Network'
 message. No one knew to double-click on it to get a menu of other things
 to do. No one knew to right-click on it to get the drop-down menu.

What should they have wanted to do with Vidalia?

 They
 were presented with the default check.torproject.org 'congratulations'
 page and then sat there.

 # Fourth issue: check.tpo is not helpful

 8 of 8 people saw the default check.torproject.org site telling them
 'congratulations. Your browser is configured to use tor.' 7 of 8 people
 asked 'where is my browser?' The one who didn't ask this question was
 already a firefox user and recognized the interface. 0 of 8 understood
 what the IP address message meant. Comments ranged from 'is that
 different than my current IP address?' to 'what's an ip address?'

 As an aside, when showing someone TBB on their own laptop, they saw the
 check.tpo site, and then went to Safari and started it up. When asked
 why they did this, the answer was 'safari is my browser. this says your
 browser is configured to use tor.'

That is exactly why I suggested the phrase “Congratulations. *This*
browser is configured to use Tor.” (emphasis added) on
https://bugs.torproject.org/2289 .  But when I explained on IRC that
there is a big difference between “this browser” and “your browser”,
no one believed that users would interpret them differently.



Robert Ransom


Re: [tor-dev] Proposal 198: Restore semantics of TLS ClientHello

2012-03-26 Thread Robert Ransom
   which is never supported. Clients MUST advertise support for at least
   one of TLS_DHE_RSA_WITH_AES_256_CBC_SHA or
   TLS_DHE_RSA_WITH_AES_128_CBC_SHA.

I'm no longer comfortable with 128-bit symmetric keys.  An attacker
with many messages encrypted with a 128-bit symmetric cipher can
attempt a brute-force search on many messages at once, and is likely
to succeed in finding keys for some messages.  (See
http://cr.yp.to/papers.html#bruteforce .)
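
The back-of-the-envelope version of that attack's cost (a sketch of the
argument in the cited paper):

    k, t = 128, 40   # key bits; log2 of the number of targets (2^40 messages)

    # With suitable precomputation, one trial decryption can be checked
    # against all 2^t targets at once, so finding *some* key costs ~2^(k-t).
    print(f'effective strength of the batch: 2^{k - t} work')   # 2^88
    print(f'same attack against 256-bit keys: 2^{256 - t} work')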


   The server MUST choose a ciphersuite with ephemeral keys for forward
   secrecy; MUST NOT choose a weak or null ciphersuite; and SHOULD NOT
   choose any cipher other than AES or 3DES.

 Discussion and consequences:


   Currently, OpenSSL 1.0.0 (in its default configuration) supports every
   cipher that we would need in order to give the same list as Firefox
   versions 8 through 11 give in their default configuration, with the
   exception of the FIPS ciphersuite above.  Therefore, we will be able
   to fake the new ciphersuite list correctly in all of our bundles that
   include OpenSSL, and on every version of Unix that keeps up-to-date.

   However, versions of Tor compiled to use older versions of OpenSSL, or
   versions of OpenSSL with some ciphersuites disabled, will no
   longer give the same ciphersuite lists as other versions of Tor.  On
   these platforms, Tor clients will no longer impersonate Firefox.
   Users who need to do so will have to download one of our bundles, or
   use a (non-system) OpenSSL.

s/(non-system)/non-system/



   The proposed spec change above tries to future-proof ourselves by not
   declaring that we support every declared cipher, in case we someday
   need to handle a new Firefox version.  If a new Firefox version
   comes out that uses ciphers not supported by OpenSSL 1.0.0, we will
   need to define whether clients may advertise its ciphers without
   supporting them; but existing servers will continue working whether
   we decide yes or no.

Why standardize on OpenSSL 1.0.0, rather than OpenSSL 1.0.1?



   The restriction to servers SHOULD only pick AES or 3DES is meant to
   reflect our current behavior, not to represent a permanent refusal to
   support other ciphers.  We can revisit it later as appropriate, if for
   some bizarre reason Camellia or Seed or Aria becomes a better bet than
   AES.

 Open questions:

   Should the client drop connections if the server chooses a bad
   cipher, or a suite without forward secrecy?

   Can we get OpenSSL to support the dubious FIPS suite excluded above,
   in order to remove a distinguishing opportunity?  It is not so simple
   as just editing the SSL_CIPHER list in s3_lib.c, since the nonstandard
   SSL_RSA_FIPS_WITH_3DES_EDE_CBC_SHA cipher is (IIUC) defined to use the
   TLS1 KDF, while declaring itself to be an SSL cipher (!).

Would that FIPS ciphersuite provide forward secrecy?  If not, then
there is no point in having clients or servers implement it.


   Can we do anything to eventually allow the IE7+[**] cipher list as
   well?  IE does not support TLS_DHE_RSA_WITH_AES_{256,128}_SHA or
   SSL_DHE_RSA_WITH_3DES_EDE_CBC_SHA, and so wouldn't work with current
   Tor servers, which _only_ support those.  It looks like the only
   forward-secure ciphersuites that IE7+ *does* support are ECDHE ones,
   and DHE+DSS ones.  So if we want this flexibility, we could mandate
   server-side ECDHE, or somehow get DHE+DSS support (which would play
   havoc with our current certificate generation code IIUC), or say that
   it is sometimes acceptable to have a non-forward-secure link
   protocol[***].  None of these answers seems like a great one.  Is one
   best?  Are there other options?

The certificate-chain validation code and the v3 handshake protocol
would be a bigger issue with DSS or ECDSA ciphersuites.


   [**] Actually, I think it's the Windows SChannel cipher list we
   should be looking at here.
   [***] If we did _that_, we'd want to specify that CREATE_FAST could
   never be used on a non-forward-secure link.  Even so, I don't like the
   implications of leaking cell types and circuit IDs to a future
   compromise.

A relay whose link protocol implementations can't provide forward
secrecy to its clients cannot be used as an entry guard -- it would be
overloaded with CREATE cells very quickly.


Robert Ransom


Re: [tor-dev] Proposal 198: Restore semantics of TLS ClientHello

2012-03-26 Thread Robert Ransom
On 2012-03-26, Nick Mathewson ni...@alum.mit.edu wrote:
 On Mon, Mar 26, 2012 at 3:17 AM, Robert Ransom rransom.8...@gmail.com
 wrote:
  [...]
(OpenSSL before 1.0.0 did not support ECDHE ciphersuites; OpenSSL
before 1.0.0e or so had some security issues with them.)

 Can Tor detect that it is running with a version of OpenSSL with those
 security issues and refuse to support the broken ciphersuites?

 We can detect if the version number is for a broken version, but I
 don't know a good way to detect if the version number is old but the
 issues are fixed (for example, if it's one of those Fedora versions
 that lock the openssl version to something older so that they don't
 run into spurious ABI incompatibility).

 I need to find out more about what the security issues actually were:
 when I took a quick look, the only one I saw was a problem with doing
 multithreaded access to SSL data structures when using ECC.  That
 wouldn't be a problem for us, but if there are other issues, we should
 know about them.

The only security issue that I knew affected ECDHE in old versions of
OpenSSL was http://eprint.iacr.org/2011/633 .  The paper indicates
that that bug was never in any OpenSSL 1.0.0 release.


   Otherwise, the ClientHello has these semantics: The inclusion of any
   cipher supported by OpenSSL 1.0.0 means that the client supports it,
   with the exception of
   SSL_RSA_FIPS_WITH_3DES_EDE_CBC_SHA
   which is never supported. Clients MUST advertise support for at least
   one of TLS_DHE_RSA_WITH_AES_256_CBC_SHA or
   TLS_DHE_RSA_WITH_AES_128_CBC_SHA.

 I'm no longer comfortable with 128-bit symmetric keys.  An attacker
 with many messages encrypted with a 128-bit symmetric cipher can
 attempt a brute-force search on many messages at once, and is likely
 to succeed in finding keys for some messages.  (See
 http://cr.yp.to/papers.html#bruteforce .)

 Hm. We'd need to check whether all the servers today support an AES256
 ciphersuite.  Also, wasn't there some dodgy issue in the AES256 key
 schedule?  Or is that basically irrelevant?

I am not aware of any additional bugs in AES-256 that are as severe as
the small keyspace of AES-128.

I am not aware of any bugs (other than very serious side-channel leaks
in most implementations) in AES-256 when used with keys generated by
an acceptable key-derivation function or random number generator.


   Can we get OpenSSL to support the dubious FIPS suite excluded above,
   in order to remove a distinguishing opportunity?  It is not so simple
   as just editing the SSL_CIPHER list in s3_lib.c, since the nonstandard
   SSL_RSA_FIPS_WITH_3DES_EDE_CBC_SHA cipher is (IIUC) defined to use the
   TLS1 KDF, while declaring itself to be an SSL cipher (!).

 Would that FIPS ciphersuite provide forward secrecy?  If not, then
 there is no point in having clients or servers implement it.

 The idea would be that, so long as we advertise ciphers we can't
 support, an MITM adversary could make a Tor detector by forging
 ServerHello responses to choose the FIPS suite, and then seeing
 whether the client can finish the handshake to the point where they
 realize that the ServerHello was forged.

 This is probably not the best MITM Tor-detection attack, but it might
 be nice to stomp them as we find them.

Does OpenSSL validate the certificate chain at all before Tor allows
it to complete the TLS handshake?  If not, They can MITM a user's
connection by sending a ServerHello with an invalid certificate chain
(e.g. one in which a certificate is not signed correctly), and see
whether the client completes the TLS handshake like Tor or closes the
connection like a normal client.


   [**] Actually, I think it's the Windows SChannel cipher list we
   should be looking at here.
   [***] If we did _that_, we'd want to specify that CREATE_FAST could
   never be used on a non-forward-secure link.  Even so, I don't like the
   implications of leaking cell types and circuit IDs to a future
   compromise.

 A relay whose link protocol implementations can't provide forward
 secrecy to its clients cannot be used as an entry guard -- it would be
 overloaded with CREATE cells very quickly.

 Why is that?  It shouldn't be facing more than 2x the number of create
 cells that a relay faces, and with the ntor handshake, create cell
 processing ought to get much faster.

Clients often produce rapid bursts of circuit creation.  If bursts of
CREATE cells from two or three clients hit an entry guard at the same
time, the guard could be overloaded.

I expect that this link-protocol change will be deployed before a new
circuit-extension protocol is deployed.  I expect that the ntor
handshake will not be deployed.


Robert Ransom


Re: [tor-dev] Proposal: Integration of BridgeFinder and BridgeFinderHelper

2012-03-26 Thread Robert Ransom
On 2012-03-22, Mike Perry mikepe...@torproject.org wrote:
 Thus spake Robert Ransom (rransom.8...@gmail.com):

 [ snip ]

 Ok, attempt #2. This time I tried to get at the core of your concerns
 about attacker controlled input by requring some form of authentication
 on all bridge information that is to be automatically configured.

I rewrote most of the ‘Security Concerns’ section for
BridgeFinder/Helper.  Please merge:
  https://git.torproject.org/rransom/torspec.git bridgefinder2



 Security Concerns: BridgeFinder and BridgeFinderHelper

  1. Do not allow attacks on your IPC channel by malicious local 'live data'

 The biggest risk is that unexpected applications will be
 manipulated into posting malformed data to the BridgeFinder's IPC
 channel as if it were from BridgeFinderHelper. The best way to
 defend against this is to require a handshake to properly
 complete before accepting input. If the handshake fails at any
 point, the IPC channel MUST be abandoned and closed. Do *not*
 continue scanning for good input after any bad input has been
 encountered; that practice may allow cross-protocol attacks by
 malicious JavaScript running in the user's non-Tor web browser.

 Additionally, it is wise to establish a shared secret between
 BridgeFinder and BridgeFinderHelper, using an environment
 variable if possible.  For a good example of how to use such a
 shared secret properly for authentication, see Tor ticket #5185
 and/or the SAFECOOKIE Tor control-port authentication method.
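
A sketch of the flavor of nonce-based challenge-response meant here (modeled
loosely on SAFECOOKIE; the labels and framing are illustrative, not the real
control-port protocol):

    import hashlib, hmac, os

    secret = os.urandom(32)    # shared via an environment variable at startup

    def proof(direction: bytes, n1: bytes, n2: bytes) -> bytes:
        return hmac.new(secret, direction + n1 + n2, hashlib.sha256).digest()

    helper_nonce, finder_nonce = os.urandom(32), os.urandom(32)
    finder_proof = proof(b'finder-to-helper', helper_nonce, finder_nonce)

    # the helper verifies in constant time before accepting any input
    assert hmac.compare_digest(
        finder_proof, proof(b'finder-to-helper', helper_nonce, finder_nonce))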


  2. Do not allow attacks against the Controller

 Care has to be taken before converting BridgeFinderHelper data into
 Bridge lines, especially for cases where the BridgeFinderHelper data
 is fed directly to the control port after passing through
 BridgeFinder.

 Specifically, the input MUST be subjected to a character whitelist
 and should also be validated against a regular expression to
 verify format, and if any unexpected or poorly-formed data is
 encountered, the IPC channel MUST be closed.

 Malicious control-port commands can completely destroy a user's
 anonymity.  BridgeFinder is responsible for preventing strings
 which could plausibly cause execution of arbitrary control-port
 commands from reaching the Controller.
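
A sketch of that whitelist-then-validate step (the patterns are illustrative
and deliberately strict, accepting only a bare 'IP:port fingerprint' form):

    import re

    ALLOWED = re.compile(r'^[A-Za-z0-9 .:]+$')           # character whitelist
    BRIDGE = re.compile(r'^(\d{1,3}(?:\.\d{1,3}){3}):(\d{1,5})'
                        r'(?: ([A-F0-9]{40}))?$')        # IPv4:port [fpr]

    def parse_bridge_line(line: str):
        if not ALLOWED.match(line) or not BRIDGE.match(line):
            raise ValueError('bad input; abandon and close the IPC channel')
        return BRIDGE.match(line).groups()

    parse_bridge_line('198.51.100.1:443 '
                      '0123456789ABCDEF0123456789ABCDEF01234567')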


  3. Provide information about bridge sources to users

 BridgeFinder MUST provide complete information about how each
 bridge was obtained (who provided the bridge data, where the
 party which provided the data intended that it be sent to users,
 and what activities BridgeFinder extracted the data from) to
 users so that they can make an informed decision about whether to
 trust the bridge.

 BridgeFinder MUST authenticate, for every piece of discovered
 bridge data, the party which provided the bridge address, the
 party which prepared the bridge data in BridgeFinder's input
 format, and the time, location, and manner in which the latter
 party intended that the bridge data be distributed.  (Use of an
 interactive authentication protocol is not sufficient to
 authenticate the intended location and manner of distribution of
 the bridge data; those facts must be explicitly authenticated.)

 These requirements are intended to prevent or mitigate several
 serious attacks, including the following:

 * A malicious bridge can 'tag' its client's circuits so that a
   malicious exit node can easily recognize them, thereby
   associating the client with some or all of its anonymous or
   pseudonymous activities.  (This attack may be mitigated by new
   cryptographic protocols in a near-future version of Tor.)

 * A malicious bridge can attempt to limit its client's knowledge
   of the Tor network, thereby biasing the client's path selection
   toward attacker-controlled relays.

 * A piece of bridge data containing the address of a malicious
   bridge may be copied to distribution channels other than those
   through which it was intended to be distributed, in order to
   expose more clients to a particular malicious bridge.

 * Pieces of bridge data containing the addresses of non-malicious
   bridges may be copied to other-than-intended distribution
   channels, in order to cause a particular client to attempt to
   connect to a known, unusual set of bridges, thus allowing a
   malicious ISP to monitor the client's movements to other
   network and/or physical locations.

 BridgeFinder MUST warn users about the above attacks, and warn
 that other attacks may also be possible if users accept
 improperly distributed bridge data.


  4. Exercise care with what is written to disk

 BridgeFinder developers must be aware of the following attacks,
 and ensure that their software does not expose users to any of
 them:

 * An attacker could plant

Re: [tor-dev] Implement JSONP interface for check.torproject.org

2012-03-26 Thread Robert Ransom
Oh, I forgot to mention one requirement:  check.torproject.org must be
usable by people who have turned off JavaScript in their browser
(whether TBB or not).  That rules out XmlHttpRequest.


Robert Ransom


Re: [tor-dev] Implement JSONP interface for check.torproject.org

2012-03-23 Thread Robert Ransom
On 2012-03-23, Arturo Filastò a...@baculo.org wrote:

 Since I noticed that check.tpo was removed from the front page I was
 thinking it would be a good idea to bring back up the topic of migrating
 check.torproject.org to a JSONP based system.

JSONP gives the party which is expected to provide a piece of data the
ability to run arbitrary JavaScript code in the security context of
the website which requested the data.  The Tor Project should never
put itself in a position to have that level of control over other
parties' websites.


 Such a system would also allow to have the JSONP check nodes distributed
 across multiple machines (avoiding the single point of failure that check
 currently is) and the client side software could be embedded inside of
 TBB directly.

 People could further promote the usage of Tor by placing an Anonymity
 badge on their website.

 A person wishing to setup such a node needs to simply install TorBel
 and a python based web app that runs this JSONP system.

 My threat model for this is very lax, so I don't see any purpose in
 bad actors telling a client who is not using Tor that he is using it.
 If check.tpo tells the user he is not using Tor, it already means that
 TBB has failed; the purpose of it is just to provide visual feedback to
 the user that all went well.

check.torproject.org is the only service which can warn Tor users that
a security upgrade is available for the Tor Browser Bundle.

It is also accessed by every Tor Browser Bundle as the first page
shown after the user uses the ‘New Identity’ Torbutton command; any
party which can impersonate check.torproject.org can plant
user-tracking cookies in every TBB user's browser.

check.torproject.org cannot ever be run by untrusted parties, and
cannot ever use a JSONP service provided by untrusted parties.


 If check is moved to git and you think it is a good idea I can start
 working on this.

It is a more horrible idea now than it was the first time you proposed
it.


Robert Ransom


Re: [tor-dev] Proposal 193: Safe cookie authentication

2012-03-22 Thread Robert Ransom
On 2012-03-16, Sebastian Hahn hahn@web.de wrote:

 On Feb 10, 2012, at 12:02 AM, Robert Ransom wrote:
 The sole exception to ‘non-safe cookie authentication must die’ is
 when a controller knows that it is connected to a server process with
 equal or greater access to the same filesystem it has access to.  In
 practice, this means ‘only if you're completely sure that Tor is
 running in the same user account as the controller, and you're
 completely sure that you're connected to Tor’, and no controller is
 sure of either of those.

 Why is it so hard to do this?

I am not aware of any sane way for a program to determine which user
ID is on the other end of a TCP socket, even over the loopback
interface.  (Scraping the output of netstat or sockstat or lsof is
insane.)

 Can't we tell controllers to do a
 check of permissions, and only if they can't be sure refuse to use the
 requested path by default unless a config whitelist or user prompt
 allows it? I think that's a lot easier to implement for controllers, and
 I just don't really see the huge threat here. If you have malicious
 system-wide software on your host, you lost anyway.

* Not every program which can receive connections on the loopback
interface should be allowed to read every 32-byte file which I can
access.  (Such programs might not have access to any part of my
filesystem.)

* If Tor were intended to have read access to every file in my user
account, the Debian package would configure it to keep running as root
(even after startup).

* If an attacker compromises the Tor process after it has dropped
privileges, Tor can fool a controller into opening the wrong file by
dropping a symlink in the whitelisted location for the system-wide
cookie file.  There is no good way to avoid following a symlink when
opening a file.  (O_NOFOLLOW isn't a good way -- it still follows
parent-directory symlinks, it may not be available on all OSes, and it
is not likely to be available in all programming languages.)  fstat
(to check ownership and permissions after opening a cookie file) is
difficult enough to use that someone will not use it, even if their
controller can correctly guess what ownership and permissions the
cookie file should have.

* A user who configures a controller to connect to a remote Tor
instance's control port knows that he/she/it is allowing attackers on
the LAN to control the Tor instance.  He/she/it is unlikely to know
that attackers on the LAN can also read 32-byte files from his/her/its
client system's disk.

* I will have very sensitive 32-byte files in my Unixoid VFS tree Real
Soon Now.  Perhaps other people will, too.

* A subtle complex flaky kludge which most controller implementors
will not realize is necessary is not a valid substitute for a simple
new cookie-based authentication protocol that avoids filesystem
permission-check hacks entirely.


Robert Ransom


Re: [tor-dev] Proposal: Integration of BridgeFinder and BridgeFinderHelper

2012-03-21 Thread Robert Ransom
On 2012-03-22, Mike Perry mikepe...@torproject.org wrote:
 Thus spake Robert Ransom (rransom.8...@gmail.com):

 [ snip ]

 I've updated the proposal to address your concerns at
 mikeperry/bridgefinder2.

 I've relaxed some of the requirements a little, but I think I still
 properly cover everything you mentioned.

Yes.

 Here's the updated proposal inline for more comment:


   4. Exercise care with disk activity

  If transport plugins or definition/configuration files are to be
  downloaded, the BridgeFinder MUST ensure that they are only written to
  a known, controlled subdirectory of the Tor Browser Bundle, and with
  predictable extensions and properly applied permissions.

In particular, on platforms and filesystems which have an ‘execute
bit’ (primarily non-FAT filesystems on a Unixoid OS), the execute bit
MUST NOT be set on files which are not intended to be executed
directly by the operating system.  (This *should* be obvious, but I'm
afraid that it isn't.)
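
A sketch of file creation that respects this (POSIX-flavored; the helper name
is illustrative):

    import os

    def write_downloaded_file(path: str, data: bytes) -> None:
        # O_CREAT|O_EXCL fails if the path already exists (even as a
        # pre-planted symlink); mode 0o600 keeps all execute bits clear
        fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL, 0o600)
        try:
            os.write(fd, data)
        finally:
            os.close(fd)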

  In particular, BridgeFinder MUST NOT create files with (entirely or
  partially) attacker-controlled contents or files with
  attacker-controlled names or file extensions.

Some reasons for this restriction are:

* An attacker can plant illegal data (e.g. pictures of naked ankles)
on a user's computer.

* An attacker can plant data which exploits bugs in code which a
file-manager application will apply to the contents of files in any
directory which the user browses to.

* An attacker could plant malicious software in a subdirectory of the
Tor Browser Bundle, and then persuade users to go run it.

If a user asks a BridgeFinder to store not-yet-authenticated data to
disk, I recommend that BridgeFinder ‘grizzle’ the data first.  (See
http://www.cl.cam.ac.uk/~rja14/Papers/grizzle.pdf , and note that the
nonce and integrity check are *very* important here.)


   5. Exercise care when operating from within Tor Browser

  Any BridgeFinderHelper operating from within Tor Browser MUST NOT
  use the same passive side-channel and/or steganographic techniques
  employed by the Non-Tor BridgeFinderHelper, as these types of
  techniques can be (ab)used by malicious exit nodes to deanonymize
  users by feeding them specific, malicious bridges.

I was worried about malicious content, not necessarily malicious exit
nodes or servers.  (For example, They send e-mail containing one piece
 of BridgeFinderHelper information to a dissident whom They want to
locate, and spray the other pieces of information for Their malicious
bridge all over.)

  Any bridge discovery performed from within Tor Browser MUST be active
  in nature (with bridge sources fully controlled by BridgeFinderHelper)
  and MUST be authenticated (via TLS+cert pinning and/or HMAC).

Public-key signatures are better than either of those authentication methods.

  Further, a BridgeFinder or BridgeFinderHelper MAY make its own
  connections through Tor for the purpose of finding new bridge
  addresses (or updating previously acquired addresses), but MUST use
  Tor's stream isolation feature to separate BridgeFinder streams from
  the user's anonymous/pseudonymous activities.



Robert Ransom


Re: [tor-dev] Proposal: Integration of BridgeFinder and BridgeFinderHelper

2012-03-21 Thread Robert Ransom
On 2012-03-22, Mike Perry mikepe...@torproject.org wrote:
 Thus spake Robert Ransom (rransom.8...@gmail.com):

 [ snip ]

 I've updated the proposal to address your concerns at
 mikeperry/bridgefinder2.

 I've relaxed some of the requirements a little, but I think I still
 properly cover everything you mentioned.

I missed something: You need to use a protocol name other than
“POSTMESSAGE” for the protocol which will be spoken over the
POSTMESSAGE transport layer.

If you aren't feeling creative, ‘pwgen -0A’ might help.  Or look
through a list of names of potential bikeshed colors.


Robert Ransom


Re: [tor-dev] Analysis of the Relative Severity of Tagging Attacks

2012-03-12 Thread Robert Ransom
On 2012-03-12, Watson Ladd watsonbl...@gmail.com wrote:
 On Sun, Mar 11, 2012 at 10:45 PM, Robert Ransom rransom.8...@gmail.com
 wrote:

 (The BEAR/LION key would likely be different for each cell that a
 relay processes.)
 Different how: if we simply increment the key we still remain open to
 replay attacks.

The paper proves that BEAR and LION are 'secure' if the two (three?)
parts of the key are 'independent'.  Choosing the subkeys
independently is too expensive for Tor, but the standard way to
generate 'indistinguishable-from-independent' secrets is to feed your
key to a stream cipher (also known as a 'keystream generator').
Incrementing that stream cipher's key after processing each cell would
indeed prevent replay attacks (unless the stream cipher is something
really horrible like RC4), but it's probably easier to just take the
next 2n (3n?) bytes of keystream.
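
One way to realize that (a sketch: SHAKE-256 plays the stream cipher/PRF, and
the subkey width n is illustrative):

    import hashlib

    n = 16    # bytes per BEAR/LION subkey

    def subkeys(circuit_key: bytes, cell_index: int, parts: int = 3):
        # fresh, indistinguishable-from-independent subkeys for every cell
        stream = hashlib.shake_256(
            circuit_key + cell_index.to_bytes(8, 'big')).digest(parts * n)
        return [stream[i * n:(i + 1) * n] for i in range(parts)]

    k1, k2, k3 = subkeys(b'\x00' * 32, cell_index=7)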

 Losing semantic security is a Bad Thing. I'll freely admit there are
 issues with incorporating a leak of circuit length into the protocol,
 as well as possibly (depending on details of TLS) leaking what lengths
 end where to a global adversary.

 An end-to-end MAC inside the BEAR/LION wrapper should provide all the
 security properties we need (note that the MAC key would also need to
 be different for each cell).
 So we need to include nonces with each cell, which we need to do anyway.

No -- each cell needs a different nonce.  Hopefully the nonce won't
need to be sent with every cell.

(End-to-end out-of-order delivery, non-reliable delivery, and
variable-sized relay cells are unlikely to happen soon, even after a
UDP-based link protocol is added to Tor, because they make end-to-end
tagging much easier.)


Robert Ransom


Re: [tor-dev] Analysis of the Relative Severity of Tagging Attacks

2012-03-12 Thread Robert Ransom
On 2012-03-12, Watson Ladd watsonbl...@gmail.com wrote:
 On Mon, Mar 12, 2012 at 9:04 AM, Robert Ransom rransom.8...@gmail.com
 wrote:
 On 2012-03-12, Watson Ladd watsonbl...@gmail.com wrote:
 On Sun, Mar 11, 2012 at 10:45 PM, Robert Ransom rransom.8...@gmail.com
 wrote:

 (The BEAR/LION key would likely be different for each cell that a
 relay processes.)
 Different how: if we simply increment the key we still remain open to
 replay attacks.

 The paper proves that BEAR and LION are 'secure' if the two (three?)
 parts of the key are 'independent'.  Choosing the subkeys
 independently is too expensive for Tor, but the standard way to
 generate 'indistinguishable-from-independent' secrets is to feed your
 key to a stream cipher (also known as a 'keystream generator').
 Incrementing that stream cipher's key after processing each cell would
 indeed prevent replay attacks (unless the stream cipher is something
 really horrible like RC4), but it's probably easier to just take the
 next 2n (3n?) bytes of keystream.

 As I understand the tagging attacks of our favorite scavenger they
 repeat a cell, turning it and all following cells in a circuit into
 gibberish. This causes the circuit to close. I don't understand how
 changing keys after each cell affects this attack: we still get
 gibberish when a cell is repeated, precisely because the key changes.

No, They tag a cell by changing a few bits of it.  Because Tor uses
AES128-CTR alone for its relay protocol, the cell reaches the other
end of the circuit with that bitwise difference intact; an honest
relay would reject and ignore the cell (thus causing all further cells
on the circuit to fall into the bitbucket with it -- see tor-spec.txt
section 6.1), but a malicious relay can recognize and remove the tag.


 Losing semantic security is a Bad Thing. I'll freely admit there are
 issues with incorporating a leak of circuit length into the protocol,
 as well as possibly (depending on details of TLS) leaking what lengths
 end where to a global adversary.

 An end-to-end MAC inside the BEAR/LION wrapper should provide all the
 security properties we need (note that the MAC key would also need to
 be different for each cell).
 So we need to include nonces with each cell, which we need to do anyway.

 No -- each cell needs a different nonce.  Hopefully the nonce won't
 need to be sent with every cell.
 We can of course not send the nonce with each cell, incrementing on
 successful arrival.
 But why does the MAC key need to be different for each cell? MACs take
 nonces to prevent replay attacks.

For the same reasons that DJB switched to generating a new Poly1305
key (i.e. a new pair (r, s)) for each secretbox operation, rather than
taking the trouble to keep one secret r around and generate a new s by
applying a secret PRF to a non-secret nonce (as Poly1305-AES did):

* Generating and using 32 extra bytes of stream-cipher output with the
message's nonce is cheaper than generating and using 16 bytes of
stream-cipher output with a fixed nonce (for r), then generating and
using 16 extra bytes of stream-cipher output with the message's nonce
(for s), with a typical good stream cipher like Salsa20.

* If an attacker obtains mathematically useful information about r,
the attacker can modify every message which uses that same value of r.
 This becomes much less problematic when each r is used for only one
message (more precisely, when the attacker cannot use information
about the value of r used for one message to obtain information about
the value of r used for any other message).
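
A sketch of per-message (r, s) generation in that style (SHAKE-256 standing
in for Salsa20; the clamp constant is the one from the Poly1305 definition):

    import hashlib

    def poly1305_pair(key: bytes, nonce: int):
        block = hashlib.shake_256(key + nonce.to_bytes(8, 'big')).digest(32)
        r = int.from_bytes(block[:16], 'little')
        r &= 0x0ffffffc0ffffffc0ffffffc0fffffff     # clamp r
        s = int.from_bytes(block[16:], 'little')
        return r, s                                 # fresh pair per message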


 Anyway, you probably have something much more final figured out, which
 I should wait to poke holes in when you propose it.

It's not final yet.


Robert Ransom


Re: [tor-dev] Proposal 195: TLS certificate normalization for Tor 0.2.4.x

2012-03-09 Thread Robert Ransom

 1.4. Self-signed certificates with better DNs

When we generate our own certificates, we currently set no DN fields
other than the commonName.  This behavior isn't terribly common:
users of self-signed certs usually/often set other fields too.
[TODO: find out frequency.]

Unfortunately, it appears that no particular other set of fields or
way of filling them out _is_ universal for self-signed certificates,
    or even particularly common.  The most common schemas seem to be for
things most censors wouldn't mind blocking, like embedded devices.
Even the default openssl schema, though common, doesn't appear to
represent a terribly large fraction of self-signed websites.  [TODO:
get numbers here.]

So the best we can do here is probably to reproduce the process that
results in self-signed certificates originally: let the bridge and relay
    operators pick the DN fields themselves.  This is an annoying
interface issue, and wants a better solution.

 1.5. Better commonName values

Our current certificates set the commonName to a randomly generated
field like www.rmf4h4h.net.  This is also a weird behavior: nearly
all TLS certs used for web purposes will have a hostname that
resolves to their IP.

The simplest way to get a plausible commonName here would be to do a
reverse lookup on our IP and try to find a good hostname.  It's not
clear whether this would actually work out in practice, or whether
we'd just get dynamic-IP-pool hostnames everywhere blocked when they
appear in certificates.

What if a bridge's IP address and reverse-DNS hostname change?

How does this interact with the v3 link protocol signaling mechanism?

How will a bridge's client be told what hostname to specify in its
server name indication field?


Robert Ransom


Re: [tor-dev] Proposal 195: TLS certificate normalization for Tor 0.2.4.x

2012-03-09 Thread Robert Ransom
On 2012-03-10, George Kadianakis desnac...@riseup.net wrote:
 Nick Mathewson ni...@freehaven.net writes:

 Filename: 195-TLS-normalization-for-024.txt
 Title: TLS certificate normalization for Tor 0.2.4.x
 Author: Jacob Appelbaum, Gladys Shufflebottom, Nick Mathewson, Tim Wilde
 Created: 6-Mar-2012
 Status: Draft
 Target: 0.2.4.x

 snip

 2. TLS handshake issues

 2.1. Session ID.

    Currently we do not send an SSL session ID, as we do not support
    session resumption.  However, Apache (and likely other major SSL
    servers) do have this support, and do send a 32 byte SSLv3/TLSv1
    session ID in their Server Hello cleartext.  We should do the same to
    avoid an easy fingerprinting opportunity.  It may be necessary to lie
    to OpenSSL to claim that we are tracking session IDs to cause it to
    generate them for us.

(We should not actually support session resumption.)


 This is a nice idea, but it opens us to the obvious active attack of
 Them checking if a host *actually* supports session resumption or if
 it's faking it.

 What is the reason we don't like session resumption? Does it still
 makes sense to keep it disabled even after #4436 is implemented?

Session resumption requires keeping some key material around after a
TLS connection is closed, thereby possibly denting Tor's link-protocol
forward secrecy if a bridge/relay is compromised soon after a
connection ends.

OpenSSL provides an implementation of session resumption, with the
code quality you should expect to find in a rarely-used piece of
OpenSSL.  There have been several OpenSSL security-fix releases due to
code-exec bugs in the session-resumption code.


Robert Ransom


Re: [tor-dev] Mnemonic 80-bit phrases (proposal)

2012-02-28 Thread Robert Ransom
On 2012-02-28, Sai t...@saizai.com wrote:
 Hello all.

 We've written up our proposal for mnemonic .onion URLs.

 See
 https://docs.google.com/document/d/sT5CulCVl0X5JeOv4W_wC_A/edit?disco=AERhFsE
 for details; please read the full intro for explanations and caveats,
 as some are important.

I'm not going to follow that link.  (Tor specification-change
proposals are sent to the tor-dev mailing list in their entirety and
copied into a Git repository for archival, not left on an
easily-changed web page.)


 tl;dr: It's a system that would have all three properties of being
 secure, distributed, and human-meaningful… but would *not* also have

We do not care whether names are ‘human-meaningful’.  (“Tor” is not a
human-meaningful name.)

We would like a naming system which provides *memorable* names, if
that is possible.  (I've never seen a distributed naming system which
provides secure and memorable names.)

But we care even more about other usability properties of a naming
system, such as how easily users can type a name given a copy of it on
paper, how easily users can transfer a name to a friend over the
telephone, and how easily users can compare two names maliciously
crafted by an attacker with plausible computational power to be
similar (whether in written form or in spoken form).

 choice of name (though it has the required *canonicality* of names),

By proposing to add a new naming system for Tor's existing hidden
service protocol, you are already assuming and claiming that hidden
service names do not need to be canonical.  Why do you think
‘canonicality’ is required?

 and has a somewhat absurdist definition of 'meaningful'. :-P

Then your system's names are unlikely to be memorable.


 Please feel free to put comments there or on list.

 Right now we're at the stage just before implementation; namely, we
 haven't yet collated the necessary dictionaries, but we have a
 reasonably good idea of how the system would work, including needed
 constraints on the dictionaries. If you have suggestions or comments,
 now is a good time to talk about them, so that if any of it affects
 the dictionary collation step we don't waste work.

The dictionaries required by a dictionary-based naming system strongly
influence whether the resulting names will be memorable.  The
usability tests which will prove that your scheme does not provide
sufficient usability benefit to justify shipping many large
dictionaries with Tor cannot begin until after you have collected the
dictionaries.


Robert Ransom


Re: [tor-dev] Pluggable Transport through SOCKS proxy

2012-02-28 Thread Robert Ransom
On 2012-02-29, Arturo Filastò hell...@torproject.org wrote:
 On Feb 28, 2012, at 5:38 PM, Robert Ransom wrote:

 On 2012-02-29, Arturo Filastò hell...@torproject.org wrote:

  When Tor is configured to use both a Pluggable Transport proxy and a
  SOCKS proxy, it should delegate the proxying to the pluggable
  transport proxy.

  This can be achieved by setting the environment variables for the
  SOCKS proxy to the values specified inside of the torrc.

  When the pluggable transport proxy starts, it will first read the
  environment variables, and if it detects that it should be using a
  SOCKS proxy, it will make all its traffic go through it. Once the
  pluggable transport proxy has successfully established a connection
  to the SOCKS proxy, it should notify Tor of its success or failure.
  When both the SOCKS and the PluggableTransport directives are set,
  Tor should set the environment variable, start the pluggable
  transport proxy, and wait for it to report back on the SOCKS proxy
  status. If the pluggable transport reports back a failure, or does
  not report back at all (maybe because it is an outdated version),
  Tor should notify the user of the failure and exit with an error.

 That's not very nice.  At a minimum, Vidalia users will never be able
 to use the GUI to recover from setting such a configuration.  (Users
 can put Tor into such a configuration using the GUI, by configuring
 Tor to use a proxy while a managed transport which does not support
 one is specified in the torrc.)


 Specifications: Tor Pluggable Transport communication

  When Tor detects a SOCKS proxy directive and a Pluggable Transport
  proxy directive it sets the environment variable:

TOR_PT_PROXY -- This is the address of the proxy to be used by
the pluggable transport proxy. It is in the format:
proxy_type://user_name?:password?@ip:port
ex. socks5://tor:test1234@198.51.100.1:8000
socks4a://198.51.100.2:8001

 What does Tor send if a SOCKS username or password contains ':', '@', or
 '\0'?


 Well, I think no username or password should contain those characters,
 especially the last. If that is not the case, I am sure an approach to
 deal with this problem has been found by many others in the past. I
 would just take a look at their solution.

The SOCKS 5 spec permits all of those characters.  (I know this
because I found a bug related to embedded NULs in Tor's SOCKS 5 server
and stream-isolation code.)  If we design our spec to not allow
managed transports to support special characters, someone somewhere
someday will block all Tor managed transports in one fell swoop by
forcing all traffic through a SOCKS proxy that requires one of those
characters in its passwords.

(Since this is a discussion about a proposal, not a final spec, “I
would just take a look at their solution.” is a valid answer for now.
But the spec we implement *must* specify an actual solution to this
problem.)
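
(One concrete possibility, sketched here only as an illustration and
not as any spec's chosen answer: percent-encode the username and
password, RFC 3986-style, before composing TOR_PT_PROXY.  Taking an
explicit length even lets an embedded NUL survive as %00.  The helper
below is hypothetical:

  #include <stdlib.h>

  /* Sketch: percent-encode a SOCKS username or password so that ':',
   * '@', and NUL bytes can ride inside a TOR_PT_PROXY URI. */
  static char *percent_encode(const unsigned char *in, size_t len)
  {
    static const char hex[] = "0123456789ABCDEF";
    char *out = malloc(3 * len + 1);   /* worst case: every byte escaped */
    size_t i, j = 0;
    if (!out)
      return NULL;
    for (i = 0; i < len; i++) {
      unsigned char c = in[i];
      if ((c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z') ||
          (c >= '0' && c <= '9') ||
          c == '-' || c == '.' || c == '_' || c == '~') {
        out[j++] = (char)c;            /* RFC 3986 unreserved byte */
      } else {
        out[j++] = '%';                /* everything else is escaped */
        out[j++] = hex[c >> 4];
        out[j++] = hex[c & 0x0f];
      }
    }
    out[j] = '\0';
    return out;
  }

The receiving transport would percent-decode before handing the
credentials to its SOCKS client.)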


 How does Tor specify an HTTP proxy?


HTTPProxy host[:port]
Tor will make all its directory requests through this host:port
 (or host:80 if port is not specified), rather than connecting directly to
 any directory
servers.

I know how the user specifies an HTTP proxy to Tor.  How does Tor
specify an HTTP proxy to a managed transport?


 How does Tor specify an HTTP/HTTPS proxy (i.e. an HTTP proxy which
 supports the CONNECT method)?


HTTPSProxy host[:port]
Tor will make all its OR (SSL) connections through this host:port
 (or host:443 if port is not specified), via HTTP CONNECT rather than
 connecting
directly to servers. You may want to set FascistFirewall to
 restrict the set of ports you might try to connect to, if your HTTPS proxy
 only allows
connecting to certain ports.


 How does Tor pass proxy settings to a managed transport after it has
 started?  (If it can't, then you'll have to either (a) break all OR
 connections through that transport by stopping and restarting it or
 (b) remember to not use that instance of the transport again, and
 launch and start using another instance of the same transport for new
 OR connections with the same managed transport specified.  (a) is
 easier to implement, but not nice.)


 via the environment variable TOR_PT_PROXY. This means of communication
 is documented inside of proposal 180.

Tor cannot change a managed transport's environment variables after
the managed transport has been started.

  If the pluggable transport proxy detects that the TOR_PT_PROXY
  environment variable is set, it attempts to connect to it. On success
  it will write to stdout (as specified in 180-pluggable-transport.txt)
  PROXY true. On failure it should write PROXY-ERROR errormessage.

 What kinds of failures lead to a PROXY-ERROR response?


 That the proxy server is unreachable, or that the authentication has
 failed, for example.

A managed transport should not attempt to connect to the network
before it finishes printing

Re: [tor-dev] Proposal 193: Safe cookie authentication

2012-02-09 Thread Robert Ransom
I've pushed a revised protocol change to branch safecookie of
git.tpo/rransom/torspec.git, and a (messy, needs rebase,
untested) implementation to branch safecookie-023 of
git.tpo/rransom/tor.git.

Now, the client and server nonces are fed to the same HMAC
invocation, so that the client can believe (modulo Merkle-Damgard
and general iterative hash function ‘features’) that the server
knows the cookie (rather than just HMAC(constant, cookie)).
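
Concretely, the server-side computation is along these lines (an
untested sketch, not the branch itself: the constant string, the nonce
order, and the 32-byte cookie size are all assumptions here, and
OpenSSL's one-shot HMAC() stands in for whatever Tor uses internally):

  #include <assert.h>
  #include <string.h>
  #include <openssl/evp.h>
  #include <openssl/hmac.h>

  /* Sketch: key = HMAC(constant, cookie), so the raw cookie is never
   * used directly as an HMAC key; both nonces then go into a single
   * outer HMAC invocation. */
  static void
  server_to_controller_hash(const unsigned char cookie[32],
                            const unsigned char *client_nonce, size_t clen,
                            const unsigned char *server_nonce, size_t slen,
                            unsigned char out[32])
  {
    static const char tweak[] =
      "Tor server-to-controller cookie authenticator";   /* assumed */
    unsigned char key[32], msg[512];
    unsigned int key_len = sizeof(key), out_len = 32;

    /* Inner HMAC: the 'tweaked message-digest' of the cookie. */
    HMAC(EVP_sha256(), tweak, (int)strlen(tweak), cookie, 32,
         key, &key_len);

    /* Outer HMAC: client and server nonces in the same invocation. */
    assert(clen + slen <= sizeof(msg));
    memcpy(msg, client_nonce, clen);
    memcpy(msg + clen, server_nonce, slen);
    HMAC(EVP_sha256(), key, (int)key_len, msg, clen + slen,
         out, &out_len);
  }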

Almost all controllers must drop almost all support for non-safe
cookie authentication ASAP, because a compromised system-wide Tor
process could drop a symlink to /home/rransom/.ed25519-secret-key
where it was supposed to put a cookie file.

The sole exception to ‘non-safe cookie authentication must die’ is
when a controller knows that it is connected to a server process with
equal or greater access to the same filesystem it has access to.  In
practice, this means ‘only if you're completely sure that Tor is
running in the same user account as the controller, and you're
completely sure that you're connected to Tor’, and no controller is
sure of either of those.


Robert Ransom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Proposal xxx: Safe cookie authentication

2012-02-07 Thread Robert Ransom
On 2012-02-07, Nick Mathewson ni...@alum.mit.edu wrote:
 On Sun, Feb 5, 2012 at 7:46 AM, Robert Ransom rransom.8...@gmail.com
 wrote:
 See attached, because GMail would wrap lines if I sent it inline.

 Added as proposal 193.

Remember to push it.

 This seems like a general case of A and B prove to each other that
 they both know some secret S without revealing S.  Are there existing
 protocols for that with security proofs?  It seems like something
 that's probably been done before.

Yes.  I believe this is an existing protocol, except for the extra
(inner) HMAC (see next chunk of reply).

 I wonder, have you got the HMAC arguments reversed in some places?
 When you do HMAC(string, cookiestring), you seem to be using the
 secret thing as the message, and the not-secret thing as the key.

I am, but that HMAC is meant only as a ‘tweaked message-digest
function’, so that we never ever compute
HMAC(potentially_secret_cookie_string, something_else).  (It's
remotely possible that someone could have a 32-byte HMAC-SHA256 key
stored as a binary file; I want to keep the server from abusing such a
key.)

 This would be a little easier to read if the function
 HMAC(HMAC(x,y),z) were given a name.

 Part of me wants to incorporate both the ClientChallengeString and
 ServerChallengeString in both of the authenticator values, just on the
 theory that authenticating more fields of the protocol is probably
 smarter.

I'll think about this further.

 I'd note that this doesn't actually prevent information leakage
 entirely.  Instead of making you reveal some secret 32-byte file S,
 the attacker now makes you reveal HMAC(HMAC(k,S),c), where k is
 constant and the attacker controls c.   That's fine if S has plenty of
 entropy, but not so good if (say) S has 20 bytes of predictable data
 and 12 bytes of a user-generated password.  Then again, I'm not so
 sure a zero-knowledge protocol is really warranted here.

The server reveals its string first, thereby proving knowledge of the
secret (unless the client e.g. reuses a challenge, in which case it
deserves to lose) or access to an oracle for the server-to-controller
PoK.  (If the server has access to an oracle, it can already
brute-force a low-entropy secret.  An honest server's secret is not
low-entropy, so we don't have to worry about a client using this
attack.)

This is also another reason that I used the weird HMAC-of-HMAC
construction for both proofs -- no one has an excuse for using a
protocol which this authentication protocol could be used to attack.

 I am leery of adding this to 0.2.3.x (which is in feature-freeze),
 much less backporting it, but I'm having a hard time coming up with a
 way to do this entirely in the controller, so I guess we could call it
 a security fix rather than a feature if we can't think of another
 way to kludge around the problem.

The best that a controller can do without this protocol is to refuse
to use the cookie path Tor specifies in its response to a PROTOCOLINFO
command unless the controller's user has whitelisted that cookie path.
I don't know whether that would be acceptable to controller authors
and users.


Robert Ransom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Proposal xxx: Safe cookie authentication

2012-02-06 Thread Robert Ransom
See branch safecookie of
https://gitweb.torproject.org/rransom/torspec.git for a revised ‘safe
cookie authentication’ protocol (in spec-patch form); see branch
safecookie-023 of https://gitweb.torproject.org/rransom/tor.git for a
completely untested implementation on Tor 0.2.3.x.  This needs testing
and a backport, and a few Trac tickets.


Robert Ransom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


[tor-dev] Proposal xxx: Safe cookie authentication

2012-02-05 Thread Robert Ransom
See attached, because GMail would wrap lines if I sent it inline.


Robert Ransom
Filename: xxx-safe-cookie-authentication.txt
Title: Safe cookie authentication for Tor controllers
Author: Robert Ransom
Created: 2012-02-04
Status: Open

Overview:

  Not long ago, all Tor controllers which automatically attempted
  'cookie authentication' were vulnerable to an information-disclosure
  attack.  (See https://bugs.torproject.org/4303 for slightly more
  information.)

  Now, some Tor controllers which automatically attempt cookie
  authentication are only vulnerable to an information-disclosure
  attack on any 32-byte files they can read.  But the Ed25519
  signature scheme (among other cryptosystems) has 32-byte secret
  keys, and we would like to not worry about Tor controllers leaking
  our secret keys to whatever can listen on what the controller thinks
  is Tor's control port.

  Additionally, we would like to not have to remodel Tor's innards and
  rewrite all of our Tor controllers to use TLS on Tor's control port
  this week (or deal with the many design issues which that would
  raise).

Design:

From af6bf472d59162428a1d7f1d77e6e77bda827414 Mon Sep 17 00:00:00 2001
From: Robert Ransom rransom.8...@gmail.com
Date: Sun, 5 Feb 2012 04:02:23 -0800
Subject: [PATCH] Add SAFECOOKIE control-port authentication method

---
 control-spec.txt |   59 ++---
 1 files changed, 51 insertions(+), 8 deletions(-)

diff --git a/control-spec.txt b/control-spec.txt
index 66088f7..3651c86 100644
--- a/control-spec.txt
+++ b/control-spec.txt
@@ -323,11 +323,12 @@
   For information on how the implementation securely stores authentication
   information on disk, see section 5.1.
 
-  Before the client has authenticated, no command other than PROTOCOLINFO,
-  AUTHENTICATE, or QUIT is valid.  If the controller sends any other command,
-  or sends a malformed command, or sends an unsuccessful AUTHENTICATE
-  command, or sends PROTOCOLINFO more than once, Tor sends an error reply and
-  closes the connection.
+  Before the client has authenticated, no command other than
+  PROTOCOLINFO, AUTHCHALLENGE, AUTHENTICATE, or QUIT is valid.  If the
+  controller sends any other command, or sends a malformed command, or
+  sends an unsuccessful AUTHENTICATE command, or sends PROTOCOLINFO or
+  AUTHCHALLENGE more than once, Tor sends an error reply and closes
+  the connection.
 
   To prevent some cross-protocol attacks, the AUTHENTICATE command is still
   required even if all authentication methods in Tor are disabled.  In this
@@ -949,6 +950,7 @@
   NULL   / ; No authentication is required
   HASHEDPASSWORD / ; A controller must supply the original password
   COOKIE / ; A controller must supply the contents of a cookie
+  SAFECOOKIE   ; A controller must prove knowledge of a cookie
 
  AuthCookieFile = QuotedString
  TorVersion = QuotedString
@@ -970,9 +972,9 @@
   methods that Tor currently accepts.
 
   AuthCookieFile specifies the absolute path and filename of the
-  authentication cookie that Tor is expecting and is provided iff
-  the METHODS field contains the method COOKIE.  Controllers MUST handle
-  escape sequences inside this string.
+  authentication cookie that Tor is expecting and is provided iff the
+  METHODS field contains the method COOKIE and/or SAFECOOKIE.
+  Controllers MUST handle escape sequences inside this string.
 
   The VERSION line contains the Tor version.
 
@@ -1033,6 +1035,47 @@
 
   [TAKEOWNERSHIP was added in Tor 0.2.2.28-beta.]
 
+3.24. AUTHCHALLENGE
+
+  The syntax is:
+AUTHCHALLENGE SP AUTHMETHOD=SAFECOOKIE
+SP COOKIEFILE= AuthCookieFile
+SP CLIENTCHALLENGE= 2*HEXDIG / QuotedString
+CRLF
+
+  The server will reject this command with error code 512, then close
+  the connection, if Tor is not using the file specified in the
+  AuthCookieFile argument as a controller authentication cookie file.
+
+  If the server accepts the command, the server reply format is:
+250-AUTHCHALLENGE
+SP CLIENTRESPONSE= 64*64HEXDIG
+SP SERVERCHALLENGE= 2*HEXDIG
+CRLF
+
+  The CLIENTCHALLENGE, CLIENTRESPONSE, and SERVERCHALLENGE values are
+  encoded/decoded in the same way as the argument passed to the
+  AUTHENTICATE command.
+
+  The CLIENTRESPONSE value is computed as:
+HMAC-SHA256(HMAC-SHA256(Tor server-to-controller cookie authenticator,
+CookieString),
+ClientChallengeString)
+  (with the HMAC key as its first argument)
+
+  After a controller sends a successful AUTHCHALLENGE command, the
+  next command sent on the connection must be an AUTHENTICATE command,
+  and the only authentication string which that AUTHENTICATE command
+  will accept is:
+HMAC-SHA256(HMAC-SHA256(Tor controller-to-server cookie authenticator,
+CookieString

Re: [tor-dev] Proposal xxx: Safe cookie authentication

2012-02-05 Thread Robert Ransom
On 2012-02-05, Damian Johnson atag...@gmail.com wrote:
 Unlike other commands besides AUTHENTICATE

 AUTHENTICATE and PROTOCOLINFO

 HMAC-SHA256(Tor controller-to-server cookie authenticator, CookieString)

 I'm more than a little green with HMAC. Does this mean that the hmac
 key is that static string, so it would be implemented like...

 import hmac
 cookie_file = open("/path/to/cookie")
 h = hmac.new("Tor controller-to-server cookie authenticator",
 cookie_file.read())

 # that second wrapper, where it looks like the above is the key
 h = hmac.new(h.hexdigest(), server_challenge_response)

Yes.  (See the line below that, which tells you which argument is the key.)

 # send to the controller
 send_to_controller(h.hexdigest())

This seems backwards.

 Also, is HMAC-SHA256 some special hmac implementation that I need to
 look up? Is it part of the builtin python lib?

Some versions of Python include SHA256 and a generic HMAC
implementation (which can be used with SHA256) in their standard
library.

 Speaking as someone who will need to implement the controller side of
 this I'm not really sure what I'm supposed to do with this. Some
 points of clarification that are needed:

 1. Is CLIENTCHALLENGE just any arbitrary client provided string used
 as a salt for the hash?

It is a nonce, used to prove that the CLIENTRESPONSE value is ‘fresh’.

 2. The CLIENTRESPONSE is something that I validate then discard, right?

Yes.

 3. What happens if a user issues a AUTHCHALLENGE, PROTOCOLINFO, then
 AUTHENTICATE? What about PROTOCOLINFO, AUTHCHALLENGE, AUTHENTICATE?

The former is an error; the latter is expected behaviour.

The safe cookie authentication protocol is only needed for controllers
which look at Tor's response to the PROTOCOLINFO command to decide
where to look for a cookie file.

 Personally I don't see the reason for the last handshake. The
 controller is proving that it should have access by providing the
 cookie contents. Providing both the cookie contents and
 SERVERCHALLENGE proves that we sent and received the AUTHCHALLENGE
 which isn't terribly interesting.

In the safe cookie authentication protocol, the controller never sends
the cookie itself.  That is the entire point of the protocol.

 If we only included the AUTHCHALLENGE message and response then this
 would not require a new authentication method so controllers could opt
 into the extra cookie validation. That said, if your intent is to
 force controllers to do the SAFECOOKIE handshake then this makes
 sense.

The old cookie authentication protocol exposes the *controller* to an
attack by (what it thinks is) Tor.  Controllers which use PROTOCOLINFO
to determine which cookie file to use should be updated to remove
support for the old COOKIE protocol.  Controllers which only look for
cookie files at paths whitelisted by their users can safely continue
to use COOKIE.


Robert Ransom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Proposal 190: Password-based Bridge Client Authorization

2012-01-17 Thread Robert Ransom
On 2012-01-17, Ian Goldberg i...@cs.uwaterloo.ca wrote:
 On Tue, Jan 17, 2012 at 08:43:00PM +0200, George Kadianakis wrote:
 [0]: Did the Telex people clean up the patch, generalize it, and post
 it in openssl-dev? Having configurable {Server,Client}Hello.Random in
 a future version of OpenSSL would be neat.

 At USENIX Security, Adam opined that openssl's callback mechanism should
 be able to do this with no patches to the source.  (I think there was
 one part of Telex that would still need patches to openssl, but I don't
 think that was it.)  You basically request a callback right after the
 clienthello.random is generated, and in the callback, overwrite the
 value.  Or something like that; I don't remember exactly.

In a Telex TLS connection, the client's DH secret key is derived from
the ECDH shared secret between the client's Telex ECDH key and the
Telex server's ECDH key.  (This has the unfortunate side effect that a
client attempting to find Telex servers gives up forward secrecy for
its TLS connections.)  This may be the part of Telex which requires an
OpenSSL patch.


Robert Ransom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Proposal 190: Password-based Bridge Client Authorization

2011-11-04 Thread Robert Ransom
 resolve the requirements of the
'Motivation' section.

Furthermore, an adversary who compromises a bridge, steals the
shared secret and attempts to replay it to other bridges of the
same bridge operator will fail since each shared secret has a
digest of the bridge's identity key baked in it.

Where do passwords come from?

In my opinion, each Tor bridge configured to require a password should
generate its own password, as a sufficiently long random string.  80
bits of entropy should be far more than enough for a bridge password.
In this case, different bridges should never have the same password.
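
A sketch of what that generation could look like, assuming a POSIX
/dev/urandom (10 random bytes are exactly 80 bits, and base32 keeps
the result easy to retype):

  #include <stdio.h>
  #include <stdint.h>

  /* Sketch: print an 80-bit random bridge password as 16 base32
   * characters drawn from the OS RNG. */
  int main(void)
  {
    static const char b32[] = "abcdefghijklmnopqrstuvwxyz234567";
    unsigned char buf[10];              /* 10 bytes = 80 bits */
    FILE *f = fopen("/dev/urandom", "rb");
    uint32_t acc = 0;
    int bits = 0, i;

    if (!f || fread(buf, 1, sizeof(buf), f) != sizeof(buf))
      return 1;
    fclose(f);
    for (i = 0; i < 10; i++) {
      acc = (acc << 8) | buf[i];        /* accumulate bits MSB-first */
      bits += 8;
      while (bits >= 5) {
        bits -= 5;
        putchar(b32[(acc >> bits) & 31]);
      }
    }
    putchar('\n');
    return 0;
  }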


The bridge's identity key digest also serves as a salt to counter
rainbow table precomputation attacks.

Precomputation should not be useful if each password contains 80 bits
of entropy.  The bridge's identity key digest is not used in the
protocol specified above; only the identity key itself.


 4. Tor implementation

The Tor implementation of the above scheme uses SHA256 as the hash
function 'H'.

SHA256 also makes HASH_LEN equal to 32.

 5. Discussion

 5.1. Do we need more authorization schemes?

Probably yes.

The centuries-old problem with passwords is that humans can't get
their passwords right.

Passwords used for this purpose should be provided to clients as part
of a Bridge torrc line, in either written or electronic form.  The
user will not retype them every time he/she/it starts Tor.


To avoid problems associated with the human condition, schemes
based on public key cryptography and certificates can be used. A
public and well tested protocol that can be used as the basis of a
future authorization scheme is the SSH publickey authorization
protocol.

Secret keys for DSA (with a fixed group) and EC-based signature
schemes can be short enough to be fairly easy to transport.  Secret
keys for RSA are a PITA to transport, unless you either (a) specify a
deterministic key-generation procedure, or (b) make the public key
available to all clients somehow, and provide enough information to
clients intended to access a bridge that the client can factor the
modulus efficiently.


 5.2. What should actually happen when a bridge rejects an AUTHORIZE
  cell?

When a bridge detects a badly formed or malicious AUTHORIZE cell,
it should assume that the other side is an adversary scanning for
bridges. The bridge should then act accordingly to avoid detection.

This proposal does not try to specify how a bridge can avoid
detection by an adversary.


Robert Ransom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Proposal 189: AUTHORIZE and AUTHORIZED cells

2011-11-04 Thread Robert Ransom
-something
s/In/At/

 4.3. AUTHORIZED seems useless. Why not use VPADDING instead?

As noted in proposal 187, the Tor protocol uses VPADDING cells for
padding; any other use of VPADDING makes the Tor protocol kludgy.

In the future, and in the example case of a v3 handshake, a client
can optimistically send a VERSIONS cell along with the final
AUTHORIZE cell of an authorization protocol. That allows the
bridge, in the case of successful authorization, to also process
the VERSIONS cell and begin the v3 handshake promptly.


Robert Ransom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Survey on Tor Trac usage and how you manage your tasks

2011-10-19 Thread Robert Ransom
On 2011-10-19, Karsten Loesing karsten.loes...@gmx.net wrote:

 3. Delete all obsolete versions from the list, and try harder to add new
 versions.  Erinn and I tried to delete old versions and found that it's
 safe to do so.  The deleted version string in a ticket will remain the
 same, but one cannot create new tickets using the deleted version or
 change the version field of existing tickets to it.

If a vandal changes the version field of an existing ticket, we will
be unable to undo that change.

I would still prefer that we make the ‘Version’ field a plain
text-entry field.  I think we still won't add new versions to the list
reliably, and I assume you don't plan to add the many different TBB
version numbers to the list.


Robert Ransom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Tor download link

2011-10-10 Thread Robert Ransom
On 2011-10-10, Marco Bonetti si...@slackware.it wrote:
 Hello all,
 I have noticed that the distribution directory:
 https://www.torproject.org/dist/ only contains source code for the latest
 stable and unstable version.
 This somehow breaks package distribution mechanism like SlackBuilds.org
 which do not host source code by themselves. I have already submitted an up
 to date SlackBuild: as soon it will be accepted, new Slackware users will be
 able again to build Tor again so it's not a ground breaking issue but
 nevertheless a bit annoying.
 So, is there a directory which hosts older but still accepted as valid
 version of Tor which I did not see or is a torproject.org policy to just
 keep latest versions online?

Non-current Tor packages are somewhere on
https://archive.torproject.org/ .  Current Tor packages may also be
there.


Robert Ransom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Survey on Tor Trac usage and how you manage your tasks

2011-09-06 Thread Robert Ransom
On 2011-09-06, Karsten Loesing karsten.loes...@gmx.net wrote:

 1.1 Which of the reports (stored ticket queries) do you use most often?

 Across all the replies, 1 person uses 5 reports, 2 persons use 3
 reports, 1 person uses 2 reports, 4 persons use 1 report, and 5
 persons use no reports at all.  Only 1 report is viewed by more than
 one person.

 The following reports are viewed: 7, 8, 12 (mentioned twice), 14, 22, 23
 (mentioned twice), 27, 28, 34, 35, 36, 38, 39, 40 (mentioned twice).
 Hence, the following reports are not viewed: 1, 2, 3, 4, 5, 6, 10, 11,
 15, 16, 17, 18, 19, 20, 21, 24, 29, 30, 31, 32, 33, 37.

 [Suggestion: Backup and delete all reports with a component name in
 them.  The current list of Available Reports list is mostly useless for
 newcomers who don't care much about components, but who are interested
 in finding something to work on.  Developers and volunteers can bookmark
 custom queries or put links to them on a wiki page belonging to a
 component.  For example, report 12 Tor: Active Tickets by Milestone is
 https://trac.torproject.org/projects/tor/query?status=!closed&group=milestone&component=Tor+Relay&component=Tor+Client&component=Tor+Bridge&component=Tor+Hidden+Services&component=Tor+bundles%2Finstallation&component=Tor+Directory+Authority&order=priority&col=id&col=summary&col=component&col=status&col=type&col=priority&col=milestone&col=version&col=keywords
 ]

This is unnecessary.  New developers need to find the 'Custom Query'
page anyway; leaving reports on the 'Available Reports' page that no
one reported using in this survey will not make that any harder.
Making the 'Search the Tor bug tracker' link on the wiki main page
bold might help.

Also, at least one of the reports on that page
(https://trac.torproject.org/projects/tor/report/24 (Archived
Mixminion-* Tasks)) is for a query that we can no longer perform using
the 'Custom Query' page; many of the rest would be difficult to
recreate as a custom query.

 1.7 What are typical search terms that you use when using the search
 features?

 3 persons search Trac using key words and 1 person types in ticket
 numbers in the search field.  The rest doesn't use the search feature.

I type ticket numbers into my browser's search field, too.  I don't
consider typing a Trac link target specifier into the search field to
be searching.

 The Version field (3, 3, 5) is not used by many components and is
 considered not very useful, because bug reporters get versions wrong in
 most cases anyway.  Also, current versions of products are never in the
 list.

 [Suggestion: Delete all obsolete versions from the list, and try harder
 to add new versions.]

This field might receive more useful input from users if it were an
ordinary text field.  There are already far too many possible values
for this field to be useful in searches; allowing arbitrary strings
here cannot make it less useful.

 Single tor component: I don't know what confuses others, but IMO the
 proliferation of components that are all tor doesn't help me, and
 makes stuff slightly harder.

If these components were merged, I would have much more trouble
digging through a custom query to find a particular ticket.


Robert Ransom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Survey on Tor Trac usage and how you manage your tasks

2011-09-05 Thread Robert Ransom
 where they will
not be readily accessible to web browsers and search engines, then
purged from our Trac installation.

The rest of the pages in the Trac wiki should be moved to a wiki which
allows a user who edits a page concurrently with another user to merge
his/her/its changes into the wiki page.  Perhaps we should just use
Git and give up on browser-based wikis.


Robert Ransom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Improving Private Browsing Mode/Tor Browser

2011-06-23 Thread Robert Ransom
On Thu, 23 Jun 2011 10:10:35 -0700
Mike Perry mikepe...@fscked.org wrote:

 Thus spake Georg Koppen (g.kop...@jondos.de):
 
   If you maintain two long sessions within the same Tor Browser Bundle
   instance, you're screwed -- not because the exit nodes might be
   watching you, but because the web sites' logs can be correlated, and
   the *sequence* of exit nodes that your Tor client chose is very likely
   to be unique.
 
 I'm actually not sure I get what Robert meant by this statement. In
 the absence of linked identifiers, the sequence of exit nodes should
 not be visible to the adversary. It may be unique, but what allows the
 adversary to link it to actually track the user? Reducing the
 linkability that allows the adversary to track this sequence is what
 the blog post is about...

By session, I meant a sequence of browsing actions that one web site
can link.  (For example, a session in which the user is authenticated
to a web application.)  If the user performs two or more distinct
sessions within the same TBB instance, the browsing actions within
those sessions will use very similar sequences of exit nodes.


 Or are we assuming that the predominant use case is for a user to
 continually navigate only by following links for the duration of their
 session (thus being tracked by referer across circuits and exits), as
 opposed to entering new urls frequently?
 
 I rarely follow a chain of links for very long. I'd say my mean
 link-following browsing session lifetime is waay, waay below the Tor
 circuit lifetime of 10min. Unless I fall into a wikipedia hole and
 don't stop until I hit philosophy... But that is all the same site,
 which can link me with temporary cache or session cookies.

The issue is that two different sites can use the sequences of exit
nodes to link a session on one site with a concurrent session on
another.


Robert Ransom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Improving Private Browsing Mode/Tor Browser

2011-06-23 Thread Robert Ransom
On Thu, 23 Jun 2011 11:19:45 -0700
Mike Perry mikepe...@fscked.org wrote:

 So perhaps Torbutton controlled per-tab proxy username+password is the
 best option? Oh man am I dreading doing that... (The demons laugh
 again.)

If you do this, you will need to give the user some indication of each
tab's ‘compartment’, and some way to move tabs between compartments.

Coloring each tab to indicate its compartment may fail for anomalous
trichromats like me and *will* fail for more thoroughly colorblind
users.  Putting a number or symbol in each tab will confuse most users.

I suggest one compartment per browser window.  (Of course, you can and
should leave more detailed hooks in the browser's source if possible,
in case someone wants to experiment with a different scheme.)


Robert Ransom
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Tor and BGP integration

2011-06-09 Thread Robert Ransom
On Thu, 9 Jun 2011 21:34:17 +
Jacob Appelbaum ja...@appelbaum.net wrote:

 On Thu, Jun 9, 2011 at 8:40 PM, grarpamp grarp...@gmail.com wrote:
 
  Some thoughts from a quasi network operator...
 
  Perhaps a tracking reason not to do this...
 
  Normally exit traffic is free to travel the globe across jurisdictions
  on its way to its final destination (ie: webserver). Doing this
  forces that traffic to sink at the exit jurisdiction... removing
  that part of its independence.
 
 
 No, it does not change anything except adding more exiting bandwidth to the
 network. People who otherwise would run a middle node are willing to endure
 Tor connections *to their own netblocks* from their own Tor nodes. That will
 only improve things and it does not aid in tracking and Tor will still use
 three hop circuits...

No.

Three hops are enough for normal Tor circuits because in a three-hop
circuit, although the second hop knows some information about the
client (one of its guard nodes) and the third hop knows the
destination, no single hop has useful information about both.  When a
client's choice of exit node leaks useful information about its
intended destination (as it does when using an ‘exit enclave’, and
would when using an exit node that exits to a small number of
destinations), the middle hop learns something about both the client
and the destination, and three hops no longer suffice.


Robert Ransom


signature.asc
Description: PGP signature
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] memcmp() co. timing info disclosures?

2011-05-06 Thread Robert Ransom
On Fri, 6 May 2011 23:14:58 -0400
Nick Mathewson ni...@freehaven.net wrote:

 On Fri, May 6, 2011 at 10:40 PM, Marsh Ray ma...@extendedsubset.com wrote:
 [...]
  but the problem in general is worrisome and we
  should indeed replace (nearly) all of our memcmps with
  data-independent variants.
 
  Maybe some of the str*cmps too? I grep 681 of them.
 
 Yeah; most of these are not parsing anything secret, but we should
 audit all of them to look for worrisome cases.  It's even less clear
 what data-independence means for strcmp: possibly it means that the
 run-time should depend only on the length of the shorter string, or
 the longer string, or on one of the arguments arbitrarily designated
 as the non-secret string, or such.  All of these could be reasonable
 under some circumstances, but we should figure out what we actually
 mean.
 
  We should also look for other cases where any data or padding might be
  checked, decompressed, or otherwise operated on without being as obvious as
  calling memcmp. Lots of error conditions can disclose timing information.
 
 Yeah.  This is going to be a reasonably big job.
 
  (Pedantic nit-pick: we should be saying data-independent, not
  constant-time.  We want a memcmp(a,b,c) that takes the same number
  of cycles for a given value of c no matter what a and b are.  That's
  data-independence.  A constant-time version would be one that took the
  same number of cycles no matter what c is.)
 
  That's a good point. In most of the code I glanced at, the length was fixed
  at compile-time. I suppose a proper constant-time function would have to
  take as much time as a 2GB comparison (on 32-bit) :-).
 
  int mem_neq(const void *m1, const void *m2, size_t n)
  {
   const uint8_t *b1 = m1, *b2 = m2;
   uint8_t diff = 0;
   while (n--)
     diff |= *b1++ ^ *b2++;
   return diff != 0;
  }
  #define mem_eq(m1, m2, n) (!mem_neq((m1), (m2),(n)))
 
  Looks good to me.
 
  What if n is 0? Is 'equals' or 'neq' a more conservative default?
 
 If n is 0, then equals is the answer: all empty strings are equal, right? :)
 
  Would it make sense to die in a well-defined way if m1 or m2 is NULL?
 
 Adding a tor_assert(m1 && m2) would be fine.
 
  Also, if the MSB of n is set it's an invalid condition, the kind that could
  result from a conversion from a signed value.
 
 Adding a tor_assert(n < SIZE_T_CEILING) is our usual way of handling this.
 
 Also, as I said on the bug, doing a memcmp in constant time is harder
 than doing eq/neq.  I *think* that in all of the cases where we care
 about memcmp returning a tristate -1/0/1 result, we don't need
 data-independence... but in case we *do* need one, we'll have to do
 some malarkey like
 
 int memcmp(const void *m1, const void *m2, size_t n)
 {
   /* XXX I don't know if this is even right; I haven't tested it at all */
   const uint8_t *b1 = m1, *b2 = m2;
   int retval = 0;
 
   while (n--) {
     const uint8_t v1 = b1[n], v2 = b2[n];
     int diff = (int)v1 - (int)v2;
     retval = (v1 == v2) * retval + diff;
   }
 
   return retval;
 }

GCC is likely to turn (v1 == v2) into a backdoor.  Also, we would need
to make sure sign extension is constant-time; it *probably* is on IA-32
and AMD64, but we may need to disassemble the compiler's output to
verify that on ARM.

Other than that, it looks correct.  We *can* fix the dependence on ==
and make the multiply unnecessary at the same time, though.
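
Something along these lines, perhaps (as untested as the version
above; the idea is to build an all-ones/all-zeros mask with plain
arithmetic, so there is no boolean for the compiler to 'optimize'
into a branch):

  #include <stdint.h>
  #include <stddef.h>

  /* Untested sketch: memcmp with the select-mask derived arithmetically
   * instead of via == and a multiply. */
  int memcmp_ct(const void *m1, const void *m2, size_t n)
  {
    const uint8_t *b1 = m1, *b2 = m2;
    uint32_t retval = 0;

    while (n--) {
      uint32_t diff = (uint32_t)((int32_t)b1[n] - (int32_t)b2[n]);
      /* neq is 1 if the bytes differ, else 0: for nonzero diff, either
       * diff or -diff has its top bit set. */
      uint32_t neq = (diff | (0u - diff)) >> 31;
      uint32_t mask = 0u - neq;        /* all-ones iff the bytes differ */
      /* Keep the old value where the bytes match, take diff where they
       * differ; scanning from the end makes the first difference win. */
      retval = (retval & ~mask) | (diff & mask);
    }

    return (int32_t)retval;            /* <0, 0, or >0, like memcmp */
  }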


I've attached my optimized constant-time comparison functions for
16-byte and 32-byte values to this message.  They're packaged in the
format for a submission to SUPERCOP and/or NaCl, but for some reason I
never actually submitted them.
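
For reference, the crypto_verify interface that the package format
targets follows the well-known NaCl pattern below; this is a
from-memory sketch, not the attachment's contents.  It returns 0 for
equal and -1 otherwise, with no data-dependent branches:

  int crypto_verify_32(const unsigned char *x, const unsigned char *y)
  {
    unsigned int d = 0;
    int i;
    for (i = 0; i < 32; i++)
      d |= x[i] ^ y[i];
    /* d is in [0,255]; (d - 1) >> 8 has its low bit set iff d == 0. */
    return (1 & ((d - 1) >> 8)) - 1;
  }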


Robert Ransom


rransom-crypto_verify-2010-04-13-01.tar.xz
Description: application/xz


signature.asc
Description: PGP signature
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev


Re: [tor-dev] Embedding Parrot in Tor as GSoC Project

2011-04-05 Thread Robert Ransom
On Tue, 5 Apr 2011 00:56:44 -0700
Robert Ransom rransom.8...@gmail.com wrote:

 On Mon, 4 Apr 2011 21:51:57 -0700
 Jonathan \Duke\ Leto jonat...@leto.net wrote:
 
  Parrot recently added a GSoC proposal idea to embed Parrot into Tor.
  It would be great to get the feedback from Tor developers.
  
  Also, this could be under the Parrot or the Tor GSoC project,
  whichever makes the most sense.
  
  The proposal idea could use more detail, but the high-level view that
  I imagine is:
  
  1) Allow Parrot to talk to libtor (this will use NCI - Native Call
  Interface) via PIR
  2) Ability to create Parrot interpreter objects from within Tor via C
  3) Write glue code for a High Level Language (HLL) to talk to libtor
 
 I've never heard of ‘libtor’.  What's that?

And why do you think we should want to embed Parrot into Tor?


Robert Ransom


signature.asc
Description: PGP signature
___
tor-dev mailing list
tor-dev@lists.torproject.org
https://lists.torproject.org/cgi-bin/mailman/listinfo/tor-dev