On 3 October 2014 23:03, Nadim Kobeissi <[email protected]> wrote:
>
> I'd like to add, as another person who's been lurking this discussion:
>
> For what it's worth, I think The Simple Thing has substantial value over 
> other potential models due to its reliance on pre-existing architecture and 
> pre-established norms.
>
> The building blocks are already there: the infrastructure for key exchange, 
> the norms for authentication, and so on.


What are these "norms for authentication"? Isn't the underlying
premise that most users can't do authentication currently?

>
> There's no need to set up a new, giant, multi-tiered worldwide network for 
> auditing and keeping various actors in check should they conspire to modify 
> Bob's key. And I think this is more valuable than it seems to many people at 
> first.
>
> Some will say: "it doesn't matter! I have the will, the means and the energy 
> to build my new CT-like system!" And my response would be: Great - why don't 
> you use that energy to improve The Simple Thing, since its components and 
> norms are *already in place* and all we have to do is make sure they're more 
> user-friendly? Work on making out-of-band authentication easier, for example.
>
> I'm doing work on authentication right now (more on this later) and I've 
> already seen some promising work on this list in a variety of creative ways. 
> By working towards a better (simpler?) Simple Thing, you'll be building on 
> top of techniques that are established, on an infrastructure you can already 
> wrangle, on algorithms that are simpler to implement from the programmer's 
> perspective. Ease of use is literally *the only missing piece*, and I think 
> projects like Moxie's work (look at how well Signal/Redphone does per-session 
> authentication, for example) and also Cryptocat/miniLock are showing that we 
> can make this happen.

I've looked, but I can't find how Redphone, Signal, Cryptocat or
miniLock do authentication ... pointers, please?

On the energy needed to build a CT-like system: we're already writing
most of the code for CT, so it, too, is built on things that already
exist...

>
>
>
> NK
>
>
> ------ Original Message ------
> From: "Joseph Bonneau" <[email protected]>
> To:
> Cc: "messaging" <[email protected]>
> Sent: 2014-10-03 5:35:35 PM
> Subject: Re: [messaging] The Simple Thing
>
>> Let me try to summarize this thread (as I understand it) since I've been 
>> lurking and I think some connections between the ideas may be getting missed. 
>> Here's an attempt at outlining how MITM detection would work in two 
>> discussed cases as I understand it:
>>
>> CT-style (I think we should call it CT-style to avoid confusion with 
>> Certificate Transparency proper for TLS certificates)
>> *Alice looks up Bob's key.
>> *The Evil Log inserts a spurious key for Bob. We're assuming (I think almost 
>> all of us are willing to assume this) that log-consistency auditors ensure 
>> the spurious key actually ends up in a single, globally consistent log 
>> forever. Trying to locally fork Alice's view is too risky if some non-zero 
>> proportion of users gossip out of band.
>> *Later on (after up to the MMD) Bob gets a ping from his monitor that "a new 
>> key for Bob has been logged." Bob concludes that the Evil Log is evil. Alice 
>> learns nothing.
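
(Interjecting just to make the flow concrete: here's roughly how I picture
Bob's monitor check, as a Python sketch with completely invented names -
nothing below is a real CT or CT-style API, it's only the "compare what the
log says about me with what I actually published" step.)

    # Hypothetical sketch: Bob's monitor scans log entries naming him and
    # flags any key he never published. All names and structures are invented.
    def check_log_for_bob(log_entries, expected_keys, identity="bob@example.com"):
        """Return log entries claiming a key for Bob that he never published."""
        return [e for e in log_entries
                if e["identity"] == identity and e["key"] not in expected_keys]

    # Example: the Evil Log has inserted a spurious key for Bob.
    log_entries = [
        {"identity": "bob@example.com", "key": "KEY-bob-real"},
        {"identity": "bob@example.com", "key": "KEY-attacker"},  # spurious entry
    ]
    for entry in check_log_for_bob(log_entries, expected_keys={"KEY-bob-real"}):
        print("ALERT: unexpected key logged for Bob:", entry["key"])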
>>
>> The Simple Thing
>> *Alice looks up Bob's key. Two versions seem to have been discussed at 
>> different points:
>>      Version (a): Alice gets it directly from Bob over an untrusted channel.
>>      Version (b): Alice gets it from a semi-trusted key directory/service 
>> provider for Bob's address.
>> *In Version (a), a MITM simply changes Bob's transmitted key. In Version 
>> (b), the Evil Directory signs a spurious key for Bob and returns it to Alice.
>> *Ideally, Alice asks Bob out-of-band if this new key is correct before 
>> sending anything. If she does, Bob detects the attack and warns Alice not to send. 
>> In Version (b) Bob furthermore concludes that the Evil Directory is evil.
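
(Likewise, purely as a sketch with made-up names: Alice's side of The Simple
Thing comes down to comparing a fingerprint of whatever key she retrieved
against one Bob gives her out of band.)

    # Hypothetical sketch of Alice's pre-send check in The Simple Thing.
    # The fingerprint here is just a truncated hash; all names are invented.
    import hashlib

    def fingerprint(key_bytes):
        """Short hex fingerprint of a public key, for out-of-band comparison."""
        return hashlib.sha256(key_bytes).hexdigest()[:16]

    def safe_to_send(retrieved_key, fingerprint_from_bob_out_of_band):
        """Alice only sends if the fingerprints match."""
        return fingerprint(retrieved_key) == fingerprint_from_bob_out_of_band

    # Example: a MITM (or Evil Directory) substituted its own key.
    real_key = b"bob-real-public-key-bytes"
    mitm_key = b"attacker-public-key-bytes"
    print(safe_to_send(mitm_key, fingerprint(real_key)))  # False: Alice refuses to send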
>>
>> The assessment is that CT-style allows only the recipient to detect the 
>> attack, after the fact, and The Simple Thing allows the sender to detect the 
>> attack before sending. To me this wasn't the most intuitive summary: in both 
>> cases it's only the intended recipient (Bob) who can be certain an attack 
>> took place and that the Evil Log or Evil Directory has been evil.
>>
>> The difference is who needs to be "paranoid" (or just perceptive). The 
>> Simple Thing detects attacks if the sender is paranoid and actually insists 
>> on preemptive fingerprint checks and CT-style detects attacks if the 
>> recipient is paranoid and has monitoring alerts set up and actually checks 
>> them.
>>
>> "Being paranoid" means slightly different things of course: setting up 
>> monitoring vs. doing fingerprint checks. Without hard data we can't really 
>> be sure, though intuitively it seems to me that setting up monitoring and 
>> checking against your own recent activity is probably easier. For one thing, 
>> in a CT-style system each key change should only require one check (by Bob) 
>> whereas with The Simple Thing each key change of Bob's requires all of his 
>> paranoid contacts to initiate a fingerprint check.
>>
>> The costs also seem more naturally aligned in CT-style systems: if Bob 
>> changes keys more often he's the one that has to do more checking of reports 
>> from monitors, whereas in The Simple Thing frequent changes by Bob impose a 
>> burden on others.
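
(To put illustrative numbers on that intuition - invented, not measured: if
Bob changes keys K times and has C paranoid contacts, CT-style monitoring
costs him on the order of K checks, while The Simple Thing costs roughly K*C
fingerprint checks spread across his contacts.)

    # Back-of-the-envelope only; K and C are made-up numbers.
    K = 4   # key changes by Bob in some period
    C = 25  # paranoid contacts who insist on fingerprint checks

    ct_style_checks = K          # Bob reviews one monitor report per key change
    simple_thing_checks = K * C  # every paranoid contact re-verifies each change

    print("CT-style checks (all done by Bob):", ct_style_checks)
    print("Simple Thing checks (spread across contacts):", simple_thing_checks)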
>>
>> So CT-style probably has some usability advantages, at the cost of complexity and 
>> extra parties (auditors, monitors) needing to operate.
>>
>> A seemingly obvious point I haven't seen made yet: it's perfectly natural to have 
>> both systems in place. Nothing prevents layering The Simple Thing on top of 
>> a CT-style log. Paranoid Alice can certainly check out of band if she looks 
>> up a new key for Bob in the log and it's different from what she's used 
>> previously. Paranoid Bob can set up monitoring. Now you get detection if 
>> either sender or receiver is paranoid.
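
(For what it's worth, a sketch of what that layered client-side check might
look like - the three helper functions are invented placeholders, not any
real API.)

    # Hypothetical sketch of The Simple Thing layered on a CT-style log.
    def lookup_and_verify(identity, fetch_key_from_log, previously_seen_key,
                          confirm_out_of_band):
        key = fetch_key_from_log(identity)   # CT-style: the key comes from the log
        old = previously_seen_key(identity)
        if old is not None and key != old:
            # The Simple Thing layered on top: paranoid Alice checks out of
            # band before trusting a key change she sees in the log.
            if not confirm_out_of_band(identity, key):
                raise RuntimeError("key change not confirmed out of band; not sending")
        return key

    # Toy usage: the log returns a changed key and the out-of-band check fails.
    try:
        lookup_and_verify(
            "bob@example.com",
            fetch_key_from_log=lambda who: "KEY-attacker",
            previously_seen_key=lambda who: "KEY-bob-real",
            confirm_out_of_band=lambda who, key: False,
        )
    except RuntimeError as err:
        print("blocked:", err)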
>>
>> On Fri, Oct 3, 2014 at 7:54 PM, Tao Effect <[email protected]> wrote:
>>>
>>> Dear elijah,
>>>
>>> On Oct 3, 2014, at 11:43 AM, elijah <[email protected]> wrote:
>>>>
>>>> In the auditing-infrastructure thing, the hope is that user agents will
>>>> be written to smartly and automatically perform the auditing. Yes, it is
>>>> detection after the fact. The prediction is that the number of people
>>>> running an auditing user agent will be greater than the number of
>>>> senders doing fingerprint verification, and that this greater number
>>>> will provide a greater deterrent against bogus key endorsements.
>>>
>>>
>>> In the CT world, auditing and monitoring are two very different things, and 
>>> they must not be confused.
>>>
>>> Auditing does not detect mis-issued certificates/keys/whatever before the 
>>> fact, during the fact, or after the fact [1].
>>>
>>> Kind regards,
>>> Greg Slepak
>>>
>>> [1] 
>>> https://blog.okturtles.com/2014/09/the-trouble-with-certificate-transparency/
>>>
>>> --
>>> Please do not email me anything that you are not comfortable also sharing 
>>> with the NSA.
>>>
>>>
>>>
>>
>
_______________________________________________
Messaging mailing list
[email protected]
https://moderncrypto.org/mailman/listinfo/messaging
