Re: [tor-dev] Questions about "Tor Messenger CONIKS integration"

2016-04-20 Thread Go
Hi,

For the first question: I understand that the private indices obfuscate the
usernames. But when computing an index i for a username u, the CONIKS
server will see u in plaintext rather than a hashed or encrypted form of u
(correct me if I'm wrong). In this case, a CONIKS server controlled by an
attacker would be able to collect the usernames of newly registered users,
right?
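
To make the question concrete: here is a toy sketch (a keyed hash stands in
for the VUF/VRF that CONIKS actually uses; the key and name below are made
up) of the two views I have in mind.

#include <stdio.h>
#include <string.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>

int main(void) {
    const unsigned char vuf_key[32] = {0};       /* stand-in for the server's VUF key */
    const char *username = "alice@example.org";  /* hypothetical new registration */

    /* View 1: the server necessarily sees the plaintext name at registration. */
    printf("received registration for: %s\n", username);

    /* View 2: what lands in the Merkle tree is only the derived index, which
     * someone who later reads the directory cannot invert, and cannot even
     * test against a name without guessing that name first. */
    unsigned char index[EVP_MAX_MD_SIZE];
    unsigned int index_len = 0;
    HMAC(EVP_sha256(), vuf_key, (int)sizeof(vuf_key),
         (const unsigned char *)username, strlen(username), index, &index_len);

    printf("stored index: ");
    for (unsigned int i = 0; i < index_len; i++)
        printf("%02x", index[i]);
    printf("\n");
    return 0;
}

My worry is only about the first of these two views, i.e. what a malicious
server learns at registration time, not about what ends up stored in the tree.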

Thanks!

On Wed, Apr 20, 2016 at 2:53 PM, Marcela S. Melara 
wrote:

> Hi,
>
> I think Ismail was trying to answer your first question when he described
> the private indices in the CONIKS key directories. What these private
> indices do, in other words, is obfuscate the usernames in the directory, so
> an attacker who breaks into the server cannot see the usernames registered
> at the compromised key server.
>
> As for your second question, we haven't fully fleshed out the mechanism
> you found. But if you want to use Tor Messenger for your Twitter account,
> you will have to register your legitimate Twitter name with the key server.
> Our idea is that you will receive some kind of email with a confirmation
> link to prove that you own the Twitter account. This by no means implies
> that Tor Messenger now has access to your full account. But Tor Messenger
> does need to confirm that you own the Twitter name you're registering, to
> prevent an attacker from trying to impersonate you.
>
> It's also important to note that CONIKS uses additional crypto mechanisms
> to ensure that all data (including the public keys) associated with names
> registered with CONIKS key servers isn't stored in the clear.
>
> I hope this helps!
> Best,
> Marcela
>
> On Apr 20, 2016, at 14:28, Go  wrote:
>
> Hi,
>
> Thanks for your quick reply. I still have a few questions:
>
> 1. If a CONIKS server has been compromised, and I happen to register with
> this server, I guess the server can see my username in this case, right?
> 2. I found the ticket
> https://trac.torproject.org/projects/tor/ticket/17961. The answer to the
> second question says "We can ask for a proof of ownership of the name...".
> So when does CONIKS need to do proof of account ownership? Could anyone
> please give me some concrete scenarios? My concern is that in order to do
> proof of ownership, we have to hand out our real accounts to CONIKS.
>
> Sorry for being paranoid.
>
> Thanks!
>
> On Tue, Apr 19, 2016 at 4:57 PM, Ismail Khoffi 
> wrote:
>
>> Hi there,
>>
>> I don't know much about the concrete plans for the Tor Messenger and
>> CONIKS, but I'm quite familiar with the original CONIKS design. First of
>> all: I'm sure no one would force you to give your "real" identity; you
>> could for instance use a large identity provider that is rather difficult
>> to compromise, at least for non-state actors (for example gmail and the
>> pseudonym simplesmtptest123 ;-)). Maybe, for the Tor Messenger integration,
>> there will be (or people might choose) other identity providers (with a
>> stronger focus on privacy and more freedom to choose pseudonyms instead of
>> real names).
>>
>> If an identity provider (one of the several "CONIKS servers") is
>> compromised, the attacker is able to read the provider's local directory
>> (containing the public keys of already registered users), but he would
>> basically just see a more or less random-looking Merkle tree. Theoretically,
>> the attacker would still need to know all the users' real names beforehand
>> to (for instance) query for their public keys. (This is achieved using the
>> following "crypto tricks": identities are stored at a private "index" in
>> the tree, computed using a verifiable unpredictable function from a
>> cryptographic commitment/hash of the username instead of from the username
>> itself.) Of course, one would also need to make sure that the stored
>> public-key material (in the leaf nodes) is pruned of user-identifying data
>> (like an identity in GPG); otherwise the attacker could guess the
>> identities from that information.
>> Also, in general, the attacker won't be able to see that you used Tor
>> Messenger from the mere fact that you use a certain identity provider, even
>> if he could still recover your username from the directory.
>>
>> Hope that helps?
>> Ismail
>>
>>
>> On 19 Apr 2016, at 21:28, Go  wrote:
>>
>> Hi,
>>
>> CONIKS seems to be a very useful system. Just curious: do Tor Messenger
>> users need to hand out their real identities (Facebook account, Twitter
>> account, etc.) to CONIKS servers? If so, it seems dangerous to put all the
>> identities in a centralized service. If the CONIKS servers have been
>> compromised, will the attacker be able to figure out the social networking
>> profiles of Tor Messenger users?
>>
>>
>> Thanks!

Re: [tor-dev] Quantum-safe Hybrid handshake for Tor

2016-04-20 Thread Yawning Angel
On Wed, 20 Apr 2016 18:30:14 + (UTC)
lukep  wrote:
> Beware that the definition of newhope has changed! The authors have
> published a new version of this paper and some of the numbers are
> different. The parameter for the binomial distribution has changed
> from 12 to 16, the probability of failure has changed from 2^-110 to
> 2^-64, the core hardness of the attack has increased from 186 to 206
> bits on a quantum computer, and the timings have increased slightly
> too.

I track the paper and reference code in the implementation I maintain.
FWIW, the performance hasn't changed noticeably, unless there's
something newer than 20160328.

> I'm not sure that the newhope algorithm has settled down yet. There's
> also a new paper on IACR called "How (not) to instantiate ring-LWE"
> which has some ideas on how to choose the error distribution - this
> might mean that newhope has to change again??

Most of the changes since the paper was released have been minor.
The last major algorithmic change I'm aware of was 20151209, which
altered the reconciliation mechanism (I don't particularly count the
March changes to the on-the-wire encoding format as major; it's just
a more compact way to send the same things).

Kind of a moot point, since by the time any of this actually gets used
in core tor, things will have settled.  And my gut feeling is that
Ring-LWE will have performant, well-defined implementations well before
SIDH is a realistic option.

Regards,

-- 
Yawning Angel




Re: [tor-dev] Questions about "Tor Messenger CONIKS integration"

2016-04-20 Thread Marcela S. Melara
Hi,

I think Ismail was trying to answer your first question when he described the 
private indices in the CONIKS key directories. What these private indices do, 
in other words, is obfuscate the usernames in the directory, so an attacker who 
breaks into the server cannot see the usernames registered at the compromised 
key server.

As for your second question, we haven't fully fleshed out the mechanism you 
found. But if you want to use Tor Messenger for your Twitter account, you will 
have to register your legitimate Twitter name with the key server. Our idea is 
that you will receive some kind of email with a confirmation link to prove that 
you own the Twitter account. This by no means implies that Tor Messenger now 
has access to your full account. But Tor Messenger does need to confirm that 
you own the Twitter name you're registering, to prevent an attacker from trying 
to impersonate you.

It's also important to note that CONIKS uses additional crypto mechanisms to 
ensure that all data (including the public keys) associated with names 
registered with CONIKS key servers isn't stored in the clear.
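
As a rough illustration only (this is a generic salted-commitment sketch, not
the actual CONIKS encoding or code, and the names below are made up), the idea
is that only a commitment over the name-to-key binding has to be visible in
the directory:

#include <stdio.h>
#include <string.h>
#include <openssl/rand.h>
#include <openssl/sha.h>

int main(void) {
    const char *username = "alice@example.org";                /* hypothetical */
    const char *key_data = "ed25519 public key bytes go here"; /* placeholder  */

    unsigned char salt[32];
    if (RAND_bytes(salt, sizeof(salt)) != 1)
        return 1;                                /* fresh random salt */

    /* commit = SHA-256(salt || username || key data); only this digest needs
     * to appear in the tree leaf, and the salt is disclosed to the owner so
     * they can verify their own entry. */
    SHA256_CTX ctx;
    unsigned char commit[SHA256_DIGEST_LENGTH];
    SHA256_Init(&ctx);
    SHA256_Update(&ctx, salt, sizeof(salt));
    SHA256_Update(&ctx, username, strlen(username));
    SHA256_Update(&ctx, key_data, strlen(key_data));
    SHA256_Final(commit, &ctx);

    for (int i = 0; i < SHA256_DIGEST_LENGTH; i++)
        printf("%02x", commit[i]);
    printf("\n");
    return 0;
}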

I hope this helps!
Best,
Marcela

> On Apr 20, 2016, at 14:28, Go  wrote:
> 
> Hi,
> 
> Thanks for your quick reply. I still have a few questions:
> 
> 1. If a CONIKS server has been compromised, and I happen to register with 
> this server, I guess the server can see my username in this case, right? 
> 2. I found the ticket https://trac.torproject.org/projects/tor/ticket/17961. 
> The answer to the second question says "We can ask for a proof of ownership 
> of the name...". So when does CONIKS need to do proof of account ownership? 
> Could anyone please give me some concrete scenarios? My concern is that in 
> order to do proof of ownership, we have to hand out our real accounts to 
> CONIKS. 
> 
> Sorry for being paranoid. 
> 
> Thanks!
> 
>> On Tue, Apr 19, 2016 at 4:57 PM, Ismail Khoffi  
>> wrote:
>> Hi there,
>> 
>> I don't know much about the concrete plans for the Tor Messenger and 
>> CONIKS, but I'm quite familiar with the original CONIKS design. First of all: 
>> I'm sure no one would force you to give your "real" identity; you could for 
>> instance use a large identity provider that is rather difficult to 
>> compromise, at least for non-state actors (for example gmail and the 
>> pseudonym simplesmtptest123 ;-)). Maybe, for the Tor Messenger integration, 
>> there will be (or people might choose) other identity providers (with a 
>> stronger focus on privacy and more freedom to choose pseudonyms instead of 
>> real names). 
>> 
>> If an identity provider (one of the several "CONIKS servers") is 
>> compromised, the attacker is able to read the provider's local directory 
>> (containing the public keys of already registered users), but he would 
>> basically just see a more or less random-looking Merkle tree. Theoretically, 
>> the attacker would still need to know all the users' real names beforehand to 
>> (for instance) query for their public keys. (This is achieved using the 
>> following "crypto tricks": identities are stored at a private "index" in the 
>> tree, computed using a verifiable unpredictable function from a cryptographic 
>> commitment/hash of the username instead of from the username itself.) Of 
>> course, one would also need to make sure that the stored public-key material 
>> (in the leaf nodes) is pruned of user-identifying data (like an identity in 
>> GPG); otherwise the attacker could guess the identities from that information. 
>> Also, in general, the attacker won't be able to see that you used Tor 
>> Messenger from the mere fact that you use a certain identity provider, even 
>> if he could still recover your username from the directory.
>> 
>> Hope that helps?
>> Ismail
>> 
>> 
>>> On 19 Apr 2016, at 21:28, Go  wrote:
>>> 
>>> Hi, 
>>> 
>>> CONIKS seems to be a very useful system. Just curious: do Tor Messenger 
>>> users need to hand out their real identities (Facebook account, Twitter 
>>> account, etc.) to CONIKS servers? If so, it seems dangerous to put all the 
>>> identities in a centralized service. If the CONIKS servers have been 
>>> compromised, will the attacker be able to figure out the social networking 
>>> profiles of Tor Messenger users? 
>>> 
>>> 
>>> Thanks!

Re: [tor-dev] Quantum-safe Hybrid handshake for Tor

2016-04-20 Thread lukep
Yawning Angel  writes:

> 
> On Sat, 2 Apr 2016 18:48:24 -0400
> Jesse V  wrote:
> > Again, I have very little understanding of post-quantum crypto and I'm
> > just starting to understand ECC, but after looking over
> > https://en.wikipedia.org/wiki/Supersingular_isogeny_key_exchange and
> > skimming the SIDH paper, I'm rather impressed. SIDH doesn't seem to be
> > patented, it's reasonably fast, it uses the smallest bandwidth, and it
> > offers perfect forward secrecy. It seems to me that SIDH actually has
> > more potential for making it into Tor than any other post-quantum
> > cryptosystem.
> 
> Your definition of "reasonably fast" doesn't match mine.  The number
> for SIDH (key exchange, when the thread was going off on a tangent
> about signatures) is ~200ms.
> 
> A portable newhope (Ring-LWE) implementation[0] on my laptop can do one
> side of the exchange in ~190 usec.  Saving a few cells is not a good
> reason to use a key exchange mechanism that is 1000x slower
> (NTRUEncrypt is also fast enough to be competitive).
> 
> nb: Numbers are rough, and I don't have SIDH code to benchmark.
> newhope in particular vectorizes really well and the AVX2 code is even
> faster.
> 

Beware that the definition of newhope has changed! The authors have
published a new version of this paper and some of the numbers are different.
The parameter for the binomial distribution has changed from 12 to 16, the
probability of failure has changed from 2^-110 to 2^-64, the core hardness
of the attack has increased from 186 to 206 bits on a quantum computer, and
the timings have increased slightly too.
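
For context, the parameter being discussed is the k of the centered binomial
noise distribution psi_k that newhope samples from; a toy sampler (illustrative
only, not the reference implementation) looks like this:

#include <stdint.h>
#include <stdio.h>

/* Count set bits in x (portable popcount). */
static int popcount32(uint32_t x) {
    int c = 0;
    while (x) { c += x & 1; x >>= 1; }
    return c;
}

/* Sample one coefficient from psi_k, given 2*k uniformly random bits. */
static int sample_psi_k(uint64_t randbits, int k) {
    uint32_t a = (uint32_t)(randbits & ((1ULL << k) - 1));        /* first k bits  */
    uint32_t b = (uint32_t)((randbits >> k) & ((1ULL << k) - 1)); /* second k bits */
    return popcount32(a) - popcount32(b);  /* value in [-k, k], mean 0 */
}

int main(void) {
    uint64_t demo_bits = 0x9e3779b97f4a7c15ULL;  /* fixed bits, just for the demo */
    printf("psi_12 sample: %d\n", sample_psi_k(demo_bits, 12));
    printf("psi_16 sample: %d\n", sample_psi_k(demo_bits, 16));
    return 0;
}

Going from k=12 to k=16 just means drawing 32 instead of 24 random bits per
coefficient, which matches the slightly higher timings.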

I'm not sure that the newhope algorithm has settled down yet. There's also a
new paper on IACR called "How (not) to instantiate ring-LWE" which has some
ideas on how to choose the error distribution - this might mean that newhope
has to change again??


-- lukep



Re: [tor-dev] Questions about "Tor Messenger CONIKS integration"

2016-04-20 Thread Go
Hi,

Thanks for your quick reply. I still have a few questions:

1. If a CONIKS server has been compromised, and I happen to register with
this server, I guess the server can see my username in this case, right?
2. I found the ticket https://trac.torproject.org/projects/tor/ticket/17961.
The answer to the second question says "We can ask for a proof of ownership
of the name...". So when does CONIKS need to do proof of account ownership?
Could anyone please give me some concrete scenarios? My concern is that in
order to do proof of ownership, we have to hand out our real accounts to
CONIKS.

Sorry for being paranoid.

Thanks!

On Tue, Apr 19, 2016 at 4:57 PM, Ismail Khoffi 
wrote:

> Hi there,
>
> I don't know much about the concrete plans for the Tor Messenger and
> CONIKS, but I'm quite familiar with the original CONIKS design. First of
> all: I'm sure no one would force you to give your "real" identity; you
> could for instance use a large identity provider that is rather difficult
> to compromise, at least for non-state actors (for example gmail and the
> pseudonym simplesmtptest123 ;-)). Maybe, for the Tor Messenger integration,
> there will be (or people might choose) other identity providers (with a
> stronger focus on privacy and more freedom to choose pseudonyms instead of
> real names).
>
> If an identity provider (one of the several "CONIKS servers") is
> compromised, the attacker is able to read the provider's local directory
> (containing the public keys of already registered users), but he would
> basically just see a more or less random-looking Merkle tree. Theoretically,
> the attacker would still need to know all the users' real names beforehand
> to (for instance) query for their public keys. (This is achieved using the
> following "crypto tricks": identities are stored at a private "index" in
> the tree, computed using a verifiable unpredictable function from a
> cryptographic commitment/hash of the username instead of from the username
> itself.) Of course, one would also need to make sure that the stored
> public-key material (in the leaf nodes) is pruned of user-identifying data
> (like an identity in GPG); otherwise the attacker could guess the
> identities from that information.
> Also, in general, the attacker won't be able to see that you used Tor
> Messenger from the mere fact that you use a certain identity provider, even
> if he could still recover your username from the directory.
>
> Hope that helps?
> Ismail
>
>
> On 19 Apr 2016, at 21:28, Go  wrote:
>
> Hi,
>
> CONIKS seems to be a very useful system. Just curious: do Tor Messenger
> users need to hand out their real identities (Facebook account, Twitter
> account, etc.) to CONIKS servers? If so, it seems dangerous to put all the
> identities in a centralized service. If the CONIKS servers have been
> compromised, will the attacker be able to figure out the social networking
> profiles of Tor Messenger users?
>
>
> Thanks!


Re: [tor-dev] Latest state of the guard algorithm proposal (prop259) (April 2016)

2016-04-20 Thread Fan Jiang
Hi,

> Hello Fan and team,
>
> I think I'm not a big fan of the pending_guard and pending_dir_guard
> concept. To me it seems like a quick hack that tries to address fundamental
> issues with our algorithm that appeared when we tried to adapt the
> proposal to
> the tor codebase.
>
>
Yeah, agreed: this pending_guard hack was trying to work around an
implementation problem, and we need to redesign it. I haven't got any good
ideas about this yet, so it would be nice if you already have some thoughts.


> I think one of the main issues with the current algorithm structure is that
> _one run of the algorithm_ can be asked to _set up multiple circuits_, and
> each of those circuits has different requirements for guards. That is, since
> we do filtering on START based on the requirements of circuit #1, any other
> circuits that appear before END is called also have to adapt to the
> requirements of circuit #1. This is obvious in the code, since we use
> guard_selection->for_directory throughout the whole algorithm run, even
> though for_directory was just the restriction of circuit #1.
>
> Specifically about the pending_guard trick, I feel that it interacts in
> unpredictable ways with other features of the algorithm. For example,
> consider how it interacts with the primary guards heuristic. It could be
> time for the algorithm to reenter the primary guards state and retry the top
> guards in the list, but because of the pending_guard thing we actually
> return the 15th guard to the circuit.
>
> IMO we should revisit the algorithm so that one run of the algorithm can
> accommodate multiple circuits by design and without the need for hacks.
> Here is an idea in that direction:
>
>   I think one very important change that we can do to simplify things is to
>   remove the need to filter guards based on whether they are dirguards, fast,
>   or stable. My suggestion here would be to *only* consider guards that are
>   dirguards _and_ fast _and_ stable. This way, any circuit that appears will
>   be happy with all the guards in our list, and there is no need to do the
>   pending_dir_guard trick. See [0] on why I think that's safe to do.
>
>   This is easy to do in the current codebase. You just need to call
>   entry_is_live() with need_uptime, need_capacity and for_directory all
>   enabled (instead of the flags being 0).
>
>   If you do the above, your sampled guard set will be able to accommodate any
>   circuit that comes its way, and that should simplify the logic considerably.
>
>
Sounds great, that simplifies the logic a lot. I've made the change, so there
is no more pending_dir_guard.
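
Concretely, the filter I apply now is roughly the following (a toy,
self-contained sketch; the struct and function names are made up, not the
actual entry_guard_t fields or prop259 code):

#include <stdio.h>

typedef struct {
    const char *nickname;
    int is_fast;
    int is_stable;
    int is_v2dir;   /* i.e. usable as a directory guard */
} guard_info_t;

/* Admit a guard only if every kind of circuit would be happy with it. */
static int usable_for_all_circuits(const guard_info_t *g)
{
    return g->is_fast && g->is_stable && g->is_v2dir;
}

int main(void)
{
    guard_info_t candidates[] = {
        { "guardA", 1, 1, 1 },
        { "guardB", 1, 0, 1 },  /* not Stable: excluded */
        { "guardC", 1, 1, 0 },  /* not V2Dir: excluded  */
    };
    size_t n = sizeof(candidates) / sizeof(candidates[0]);
    for (size_t i = 0; i < n; i++) {
        if (usable_for_all_circuits(&candidates[i]))
            printf("sampled: %s\n", candidates[i].nickname);
    }
    return 0;
}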

> Let me know if the above does not make sense.
>
> Here are some more comments:
>
> - So the above idea addresses a large part of the filtering logic that
>   happens on START. The rest of the filtering logic has to do with
>   ClientUsesIPv6, ReachableAddresses, etc. I think it's fine to conduct that
>   filtering on START as well.
>
> - I tried to run the branch as of bb3237d, but it segfaulted. Here is where
>   it crashed:
>
>  #1  0x5567eb25 in guards_update_state (next=0x559c3f40,
> next@entry=0x559c35e8, guards=guards@entry=0x559c4620,
>   config_name=config_name@entry=0x5570c47e "UsedGuard") at
> src/or/prop259.c:1137
>  1137   !strchr(e->chosen_by_version, ' ')) {
>
>   Let me know if you need more info here.
>
Never saw this before, will look into it.

> - There is a memleak on 'extended' in filter_set().
>
>   In general, I feel that the logic in that function is a bit weird. The
>   function is called filter_set(), but it can actually end up adding guards
>   to the list.
>   Maybe it can be renamed?
>
>
Splitting it up into filter_set & expand_set would probably make this clearer.
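
Roughly this shape (hypothetical names and types, just to sketch the split;
a real version would of course also skip guards already in the set):

#include <stddef.h>

typedef struct {
    const char *nickname;
    int usable;   /* passes the liveness/flag filter */
} guard_t;

typedef struct {
    guard_t *guards[64];
    size_t n;
} guard_set_t;

/* Drop guards that are no longer usable; never adds anything. */
void filter_set(guard_set_t *set)
{
    size_t kept = 0;
    for (size_t i = 0; i < set->n; i++) {
        if (set->guards[i]->usable)
            set->guards[kept++] = set->guards[i];
    }
    set->n = kept;
}

/* Add usable guards from the remaining pool until the set reaches target;
 * never removes anything. */
void expand_set(guard_set_t *set, guard_t **pool, size_t pool_n, size_t target)
{
    for (size_t i = 0; i < pool_n && set->n < target && set->n < 64; i++) {
        if (pool[i]->usable)
            set->guards[set->n++] = pool[i];
    }
}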

> - What's up with the each_remaining_by_bandwidth() function name?
>
>
I guess it should be iterate_remaining_guards_by_bandwidth.

> ---
>
>
> [0]: I think that's OK to do and here is why:
>
> All Guards are Fast.
> About 95% of Guards are Stable (this will become 100% with #18624)
> About 80% of Guards are V2Dir/dirguards (this will become 100% with #12538)
>
>  #12538 got merged in 0.2.8, so if prop259 gets merged in 0.2.9, by the
>  time prop259 becomes active, almost all guards will be dirguards.
>
>  So I think it's fine to only consider guards that are dirguards && fast &&
>  stable now, since by the time prop259 gets deployed that will be the case
>  for almost 100% of guards.
>



-- 



Fan Jiang 蒋帆
Amateur Code Chef
Thoughtworks, Inc.
mobile +86-150-9189-3714
skype f...@torchz.net