Re: [tor-dev] Temporary hidden services

2018-10-22 Thread Michael Rogers
On 19/10/2018 16:05, Leif Ryge wrote:
> On Wed, Oct 17, 2018 at 07:27:32PM +0100, Michael Rogers wrote:
> [...] 
>> If we decided not to use the key blinding trick, and just allowed both
>> parties to know the private key, do you see any attacks?
> [...]
> 
> If I'm understanding your proposal correctly, I believe it would leave
> you vulnerable to a Key Compromise Impersonation (KCI) attack.
> 
> Imagine the scenario where Alice and Bob exchange the information to
> establish their temporary rendezvous onion which they both know the
> private key to, and they agree that Bob will be the client and Alice
> will be the server.
> 
> But, before Bob connects to it, the adversary somehow obtains a copy of
> everything Bob knows (but they don't have the ability to modify data or
> software on his computer - they just got a copy of his secrets somehow).
> 
> Obviously the adversary could then impersonate Bob to Alice, because
> they know everything that Bob knows. But, perhaps less obviously, in the
> case that Bob is supposed to connect to Alice's temporary onion which
> Bob (and now the adversary) know the key to, the adversary can also
> now impersonate Alice to Bob (by overwriting Alice's descriptors and
> taking over her temporary onion service).
> 
> In this scenario, if Bob hadn't known the private key for Alice's
> temporary onion that he is connecting to, impersonating Alice to Bob
> would not have been possible.
> 
> And of course, if the adversary can successfully impersonate both
> parties to each other at this phase, they can provide their own long-term
> keys to each of them, and establish a long-term bidirectional MITM -
> which, I think, would otherwise not be possible even in the event that
> one party's long-term key was compromised.
> 
> Bob losing control of the key material before using it (without his
> computer being otherwise compromised) admittedly seems like an unlikely
> scenario, but you asked for "any attacks", so I think KCI is one (if
> I'm understanding your proposal correctly).

Hi Leif,

Thanks for pointing this out - I'd heard about this kind of attack but
I'd forgotten to consider it.

We're planning to do a key exchange at the application layer after
making the hidden service connection, so I don't think the adversary's
ability to impersonate Alice's hidden service to Bob would necessarily
lead to application-layer impersonation on its own. But if you hadn't
raised this we might have designed the application-layer key exchange in
a way that was vulnerable to KCI as well, so thanks!
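
In case it helps to make that concrete, here's a rough sketch of the
sort of thing I have in mind - not our actual design, all names
invented, and it assumes each party already knows (or can verify out of
band) the other's long-term public key. The point is just that an
attacker who only controls the temporary onion service can't produce
the long-term signatures:

    import os, hashlib
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    def transcript_hash(pub_a, pub_b, nonce_a, nonce_b):
        # Bind both long-term public keys and both fresh nonces into one digest.
        return hashlib.sha256(pub_a + pub_b + nonce_a + nonce_b).digest()

    # Long-term identity keys (hypothetical names).
    alice_id, bob_id = Ed25519PrivateKey.generate(), Ed25519PrivateKey.generate()
    raw = lambda k: k.public_key().public_bytes(Encoding.Raw, PublicFormat.Raw)
    alice_pub, bob_pub = raw(alice_id), raw(bob_id)

    # Fresh nonces exchanged over the (possibly impersonated) onion connection.
    nonce_a, nonce_b = os.urandom(32), os.urandom(32)
    th = transcript_hash(alice_pub, bob_pub, nonce_a, nonce_b)

    # Each side signs the transcript with its long-term key; the other side
    # verifies against the long-term public key it already expects.  Stealing
    # the temporary onion service's key material doesn't help the attacker
    # forge either signature.
    sig_a, sig_b = alice_id.sign(th), bob_id.sign(th)
    alice_id.public_key().verify(sig_a, th)   # checked by Bob
    bob_id.public_key().verify(sig_b, th)     # checked by Alice

The real exchange would obviously also need to bind in the session keys
and handle replay, but that's the shape of it.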

Cheers,
Michael




Re: [tor-dev] Temporary hidden services

2018-10-22 Thread Michael Rogers
On 19/10/2018 14:01, George Kadianakis wrote:
> Michael Rogers  writes:
>> A given user's temporary hidden service addresses would all be related
>> to each other in the sense of being derived from the same root Ed25519
>> key pair. If I understand right, the security proof for the key blinding
>> scheme says the blinded keys are unlinkable from the point of view of
>> someone who doesn't know the root public key (and obviously that's a
>> property the original use of key blinding requires). I don't think the
>> proof says whether the keys are unlinkable from the point of view of
>> someone who does know the root public key, but doesn't know the blinding
>> factors (which would apply to the link-reading adversary in this case,
>> and also to each contact who received a link). It seems like common sense
>> that you can't use the root key (and one blinding factor, in the case of
>> a contact) to find or distinguish other blinded keys without knowing the
>> corresponding blinding factors. But what seems like common sense to me
>> doesn't count for much in crypto...
>>
> 
> Hm, where did you get this about the security proof? The only security
> proof I know of is https://www-users.cs.umn.edu/~hoppernj/basic-proof.pdf
> and I don't see that assumption anywhere in there, but it's also been a
> long while since I read it.

I may have misunderstood the paper, but I was talking about the
unlinkability property defined in section 4.1.

If I understand right, the proof says that descriptors created with a
given identity key are unlinkable to each other, in the sense that an
adversary who's allowed to query for descriptors created with the
identity key can't tell whether one of the descriptors has been replaced
with one created with a different identity key.

It seems to follow that the blinded keys used to sign the descriptors*
are unlinkable, in the sense that an adversary who's allowed to query
for blinded keys derived from the identity key can't tell whether one of
the blinded keys has been replaced with one derived from a different
identity key - otherwise the adversary could use that ability to
distinguish the corresponding descriptors.
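
(To make the game concrete, here's a toy sketch of how I'm picturing
it: modular exponentiation in a made-up group rather than real Ed25519,
a random blinding factor rather than one derived from the time period,
and invented names throughout. The part in question is whether the
adversary is also handed the identity public key A, as it is here.)

    import secrets

    P = 2**127 - 1   # toy prime modulus - NOT the Ed25519 group
    G = 3            # toy generator, standing in for the base point

    def keygen():
        a = secrets.randbelow(P - 2) + 1
        return a, pow(G, a, P)              # (private scalar, public element)

    def blind(pub):
        # In Tor the blinding factor is derived from the identity key and the
        # time period; a random factor is enough to show the shape of the game.
        h = secrets.randbelow(P - 2) + 1
        return pow(pub, h, P)

    def unlinkability_game(adversary, samples=8):
        (_, A), (_, B) = keygen(), keygen()
        bit = secrets.randbelow(2)
        keys = [blind(A) for _ in range(samples - 1)]
        keys.append(blind(B) if bit else blind(A))  # maybe swap the last one
        # The question at issue: does the adversary also get to see A?
        return adversary(A, keys) == bit

    # A coin-flipping adversary wins half the time; unlinkability means
    # nobody can do noticeably better.
    print(unlinkability_game(lambda A, keys: secrets.randbelow(2)))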

What I was trying to say before is that although I don't understand the
proof in section 5.1 of the paper, I *think* it's based on an adversary
who only sees the descriptors and doesn't also know the identity public
key. This is totally reasonable for the original setting, where we're
not aiming to provide unlinkability from the perspective of someone who
knows the identity public key. But it becomes problematic in this new
setting we're discussing, where the adversary is assumed to know the
identity public key and we still want the blinded keys to be unlinkable.

* OK, strictly speaking the blinded keys aren't used to sign the
descriptors directly, they're used to certify descriptor-signing keys -
but the paper argues that the distinction doesn't affect the proof.

> I think in general you are OK here. An informal argument: according to
> rend-spec-v3.txt appendix A.2 the key derivation is as follows:
> 
> derived private key: a' = h a (mod l)
> derived public key: A' = h A = (h a) B
> 
> In your case, the attacker does not know 'h' (the blinding factor),
> whereas in the case of onion service the attacker does not know 'a' or
> 'a*B' (the private/public key). In both cases, the attacker is missing
> knowledge of a secret scalar, so it does not seem to make a difference
> which scalar the attacker does not know.
> 
> Of course, the above is super informal, and I'm not a cryptographer,
> yada yada.

I agree it seems like it should be safe. My point is really just that we
seem to have gone beyond what's covered by the proof, which tends to
make me think I should prefer a solution that I understand a bit better.
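
(For concreteness, here's the same derivation in a toy group -
exponentiation modulo a prime rather than Ed25519, invented names -
just to check that the two ways of computing the derived public key
agree.)

    import secrets

    P = 2**127 - 1                           # toy prime modulus - NOT Ed25519
    G = 3                                    # toy base point B
    ORDER = P - 1                            # toy group order, standing in for l

    a = secrets.randbelow(ORDER - 1) + 1     # identity private key
    A = pow(G, a, P)                         # identity public key A = a*B
    h = secrets.randbelow(ORDER - 1) + 1     # blinding factor

    a_blinded = (h * a) % ORDER              # derived private key a' = h a (mod l)
    A_blinded = pow(A, h, P)                 # derived public key A' = h A

    # Whoever knows a can compute a' and sign; whoever knows A and h can
    # compute A' and check the signature; the two derivations agree:
    assert A_blinded == pow(G, a_blinded, P) # h A == (h a) B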

(At the risk of wasting your time though, I just want to suggest an
interesting parallel. Imagine we're just dealing with a single ordinary
key pair, no blinding involved. The public key X = xB, where x is the
private key and B is the base point. Now obviously we rely on this property:

1. Nobody can find x given X and B

But we don't usually require that:

2. Nobody can tell whether public keys X and Y share the same base point
without knowing x, y, or the base point
3. Nobody can tell whether X has base point B without knowing x

We don't usually care about these properties because the base point is
public knowledge. But in the key blinding setting, the base point is
replaced with the identity public key. As far as I can see, the proof in
the paper covers property 2 but not property 3. I'm certainly not saying
that I know whether property 3 is true - I just want to point out that
it seems to be distinct from properties 1 and 2.)
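
(To spell out the mapping: in the blinding setting the role of the base
point B is played by the identity public key A, the role of the private
key x by the blinding factor h, and the role of the public key X by the
blinded key A' = h A. Property 2 then becomes "two blinded keys from
the same identity key look unrelated to someone who knows neither the
blinding factors nor A", which is what I take the proof to cover, and
property 3 becomes "given A and a blinded key A', nobody can tell
whether A' was derived from A without knowing h", which is the one that
matters in the new setting.)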

>> We're testing a prototype of the UX at the moment.
>>
>> Bringing up the hidden service tends to take around 30 seconds, which is
>> a long time if you make the user sit there and watch a progress wheel,
>> but not too bad if you let them