On 14/09/15 22:44, Trevor Perrin wrote:
> On Sun, Sep 13, 2015 at 8:50 AM, Ximin Luo <[email protected]> wrote:
>> While I was doing an exercise on classifying and enumerating security 
>> properties, I came up with the following one:
>>
>> - (in: w encrypts m to r) if attacker "a" passively compromises w, they are 
>> able/unable to decrypt current (in-transit) and/or future ciphertext (i.e. 
>> "act as r")
>>
>> This is the encryption analog of KCI ("key compromise impersonation") which 
>> applies to authentication
> 
> Or is it the future analog of PFS, applied to post-compromise data
> instead of pre-compromise?

I think it's both. In the specific case I had in mind, the compromise is only 
of the sender, but a tweaked version that applies to both sender and recipient 
would be the "future secrecy" of Axolotl. I was close to saying this in a 
previous follow-up, but I'm more confident about it now.

> Most people think of PFS as applying to (pre-compromise encrypted
> data, confidentiality) and KCI applying to (post-compromise sessions,
> authentication), but the (post-compromise encrypted data,
> confidentiality) case sometimes gets included under "forward security"
> and sometimes doesn't.

Yes, there are several more cases; someone who gets Alice's long-term keys (or 
these plus her current session keys) might be able/unable to:

- decrypt/verify old ciphertexts from self/others in the same/older sessions
- encrypt/authenticate new ciphertexts to self/others in the same/newer sessions
- decrypt/verify new ciphertexts from self/others in the same/newer sessions

Some of these are unavoidable of course, but it's good to enumerate them. 
There's also the question of whether a failed active attack in the past might 
help to compromise forward secrecy in the future (or any of the properties 
above), or whether a compromise now might help a future active attack succeed. 
I get the impression that formal models do account for this, but I'm not sure 
it is adequately communicated in more "common" definitions, for people less 
trained in formal security analysis.

>> Note that the former is not exactly the same as forward secrecy, which is 
>> modelled as a passive compromise on the *decryptor's* side
> 
> There's no consistent definition for "forward secrecy" or "forward
> security" (and "perfect" in this context has always been meaningless).
> 
> If you're talking about "forward-secure public-key encryption", then
> you're correct that it only applies to the recipient's private key,
> but that's because only the recipient *has* a private key.
> 
> In mutually-authenticated key agreement, forward security or secrecy
> generally refers to both parties' long-term keys.
> 
> In one-pass key agreements, works like Gorantla and Halevi/Krawczyk
> have used "sender forward secrecy" or "sender's forward secrecy" to
> distinguish sender from recipient compromise:
> 
> https://eprint.iacr.org/2009/436
> https://eprint.iacr.org/2010/638

I was not talking directly about forward secrecy, but about sender future 
secrecy, though they are all related in different ways. Here is some high-level 
motivation for this:

In a protocol with "ideal security", roughly speaking, Alice should not be able 
to decrypt ciphertext that she sends to Bob - or in other words, Alice should 
not carry a decryption capability that only Bob needs. More generally, "ideal 
security" protocols should give each party only the capabilities it actually 
needs.

While this may seem "too strict", it should (intuitively) automatically give 
maximum protection against *any* pattern of passive memory compromise and 
active communication attacks. This principle ("least authority") can also be 
used to derive motivations for KCI resistance, forward secrecy, and various 
other security properties related to compromise.
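
To make the capability framing concrete, here is a minimal sketch in Python 
(using the "cryptography" package; this is not any of the protocols under 
discussion, and all names are mine): ECIES-style encryption where the sender 
deletes her ephemeral secret after use, so she keeps no way to decrypt her own 
ciphertext - only the holder of the recipient's private key does.

import os
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey, X25519PublicKey)
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def sender_encrypt(recipient_pub: X25519PublicKey, plaintext: bytes):
    eph = X25519PrivateKey.generate()  # fresh, for this one message only
    eph_pub = eph.public_key().public_bytes(
        serialization.Encoding.Raw, serialization.PublicFormat.Raw)
    shared = eph.exchange(recipient_pub)
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"msg-key").derive(shared)
    nonce = os.urandom(12)
    ct = ChaCha20Poly1305(key).encrypt(nonce, plaintext, None)
    # Discard the ephemeral secret and derived key; without the recipient's
    # private key, the sender can no longer recompute `shared`. (del is only
    # illustrative; real code must actually erase key material from memory.)
    del eph, shared, key
    return eph_pub, nonce, ct

def recipient_decrypt(recipient_priv: X25519PrivateKey, eph_pub, nonce, ct):
    shared = recipient_priv.exchange(X25519PublicKey.from_public_bytes(eph_pub))
    key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
               info=b"msg-key").derive(shared)
    return ChaCha20Poly1305(key).decrypt(nonce, ct, None)

# e.g.:
bob = X25519PrivateKey.generate()
assert recipient_decrypt(bob, *sender_encrypt(bob.public_key(), b"hi")) == b"hi"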

For protocols with shared session secrets, this is hard to arrange since the 
shared secret is, by design, used to derive many capabilities. So, a slightly 
weaker version of the above is, "[someone with Alice's long-term key] should 
not be able to decrypt ciphertext that she sends to Bob [in a new session]", 
and likewise for authentication. The AKC paper, linked by Katriel previously, 
has a construction that can be applied to all mutual-authentication protocols 
to achieve this property - that is, even if one's own long-term key is leaked, 
one can still send things secretly to one's peer, but perhaps not receive 
things secretly.
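
This is not the AKC paper's construction, but a toy sketch (confidentiality 
only, ignoring authentication; names are mine) of why such a directional 
property is plausible: derive each direction's key from a DH involving only the 
*receiving* side's long-term key, so leaking Alice's long-term key exposes her 
incoming direction but not her outgoing one.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def kdf(shared: bytes, label: bytes) -> bytes:
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=label).derive(shared)

alice_lt, bob_lt = X25519PrivateKey.generate(), X25519PrivateKey.generate()
alice_eph, bob_eph = X25519PrivateKey.generate(), X25519PrivateKey.generate()

# Alice->Bob key: reconstructing it needs Bob's long-term secret (or Alice's
# ephemeral secret, which she deletes after the handshake).
k_ab = kdf(alice_eph.exchange(bob_lt.public_key()), b"Alice->Bob")
# Bob->Alice key: reconstructing it needs Alice's long-term secret (or Bob's
# ephemeral secret).
k_ba = kdf(bob_eph.exchange(alice_lt.public_key()), b"Bob->Alice")

# An attacker who later obtains only alice_lt (plus the public ephemerals) can
# recompute k_ba but not k_ab: Alice's outgoing direction stays confidential,
# her incoming direction does not.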

I wonder if the stronger version can be achieved, though. That is, whether we 
can give more power to the attacker (i.e. allow them to passively compromise 
*all* of Alice's secrets at a given time, i.e. long-term *and* session secrets) 
and still achieve more security (i.e. protection from KCI / "attacker decrypts 
outgoing ciphertext" *in the same session*). To achieve this, at the very 
least, session secrets cannot be fully shared across all members.
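
A rough sketch of what this would require of Alice's state (hypothetical names, 
not a concrete proposal): her per-message sending key is derived from a fresh 
ephemeral against Bob's current per-session public key, and the ephemeral is 
deleted immediately, so a snapshot of *everything* she still holds - long-term 
key and session state - cannot recover that key.

import os
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey, X25519PublicKey)
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

class SessionSender:
    """Alice's view of one session with Bob (illustrative only)."""

    def __init__(self, long_term_priv: X25519PrivateKey,
                 bob_session_pub: X25519PublicKey):
        # This is everything a passive compromise of Alice would capture.
        self.long_term_priv = long_term_priv
        self.bob_session_pub = bob_session_pub

    def send(self, plaintext: bytes):
        eph = X25519PrivateKey.generate()  # per-message, never stored in self
        key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"send-key").derive(eph.exchange(self.bob_session_pub))
        nonce = os.urandom(12)
        ct = ChaCha20Poly1305(key).encrypt(nonce, plaintext, None)
        header = eph.public_key().public_bytes(
            serialization.Encoding.Raw, serialization.PublicFormat.Raw)
        # Nothing that remains in `self` (her long-term key, Bob's public
        # session key) can re-derive `key`; only Bob's session private key can.
        del eph, key
        return header, nonce, ct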

To take your examples above: in public-key encryption, only the recipient has a 
decryption capability (the private key), so this is fine. In a 
mutually-authenticated key agreement that generates a shared session encryption 
key, effectively *both* parties have a decryption capability *and* an 
encryption capability; here, compromise of one side effectively compromises 
both sides (a small sketch of this follows the quote below). To quote the AKC 
paper: "bilateral protocols can be viewed as 
combining two unilateral protocols: if Alice’s long-term secret key [+ session 
keys, for a stronger version] is compromised, Bob’s half of the bilateral 
guarantees is lost because the adversary can impersonate Alice. But what about 
Alice’s half? Since Bob’s key is not compromised, Alice might expect to obtain 
the guarantees she would have when using an appropriate unilateral protocol."
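
As a tiny illustration of why a shared session key alone cannot give Alice's 
half of those guarantees (illustrative, not any particular protocol): splitting 
the shared secret into "directional" keys by label does not help, since either 
party's copy of the shared secret re-derives both directions.

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def directional_keys(shared_secret: bytes):
    k_ab = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"Alice->Bob").derive(shared_secret)
    k_ba = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"Bob->Alice").derive(shared_secret)
    # Whoever holds shared_secret (either party, or an attacker who compromised
    # either party) gets *both* keys, i.e. encryption and decryption capability
    # in both directions.
    return k_ab, k_ba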

> Stepping back: the terminology is sort of a mess here, and if you want
> to speak about complex case with precision, you probably just need to
> spell out exactly what compromises you're considering and their
> consequences:
>  - compromise of key A enables attack B but not C
>  - compromise of key D enables attack E but not F
>  etc...

Yes, I agree the terminology is a mess. To help manage the complexity of a more 
precise definition, it would be useful to classify these things: which 
properties are analogues of each other, and which are strictly stronger or 
weaker than others; hence my exercise. At the moment, a "fully formal" 
description is hard for me to chew through, and often has an "arbitrary", "why 
not a slightly different model" feel to it.

X

-- 
GPG: 4096R/1318EFAC5FBBDBCE
git://github.com/infinity0/pubkeys.git
