Hi Peter,

On 30/09/13 23:31 PM, Peter Fairbrother wrote:
On 26/09/13 07:52, ianG wrote:
On 26/09/13 02:24 AM, Peter Fairbrother wrote:
On 25/09/13 17:17, ianG wrote:
On 24/09/13 19:23 PM, Kelly John Rose wrote:

I have always held that no encryption is better than bad
encryption; otherwise the end user will feel more secure than they
should, and is more likely to share information or data they should
not over that line.


The trap of a false sense of security is far outweighed by the benefit
of a "good enough" security delivered to more people.

Given that mostly security works (or it should), what's really important
is where that security fails - and "good enough" security can drive out
excellent security.


Indeed it can. So how do we differentiate? Here are two oft-forgotten problems.

Firstly, when systems fail, typically it is the system around the crypto that fails, not the crypto itself. This tells us that (a) the job of the crypto is to help the rest of the system to not fail, and (b) near enough is often good enough, because the metric of importance is to push all likely attacks elsewhere (into the rest of the system).

An alternative treatment is Adi Shamir's 3 laws of security:

http://financialcryptography.com/mt/archives/000147.html

Secondly, when talking about security options, we have to show where the security fails. With history, with evidence -- so we can inform our speculations with facts. If we don't do that, then our speculations become received wisdom, and we end up fielding systems that not only are making things worse, but are also blocking superior systems from emerging.


We can easily have excellent security in TLS (mk 2?) - the crypto part
of TLS can be unbreakable, code to follow (hah!) - but 1024-bit DHE
isn't, say, unbreakable for 10 years, far less for a lifetime.


OK, so TLS. Let's see the failures in TLS? SSL was running export grade for lots and lots of years, and those numbers were chosen to be crackable. Let's see a list of damages, breaches, losses?

Guess what? Practically none! There is no recorded history of breaches in TLS crypto (and I've been asking for a decade, others longer).

So, either there are NO FAILURES from export grade or other weaker systems, *or* everyone is covering them up. Because of some logic (like how much traffic and use), I'm going to plump for NO FAILURES as a reasonable best guess, and hope that someone can prove me wrong.

Therefore, I conclude that perfect security is a crock, and there is plenty of slack to open up and ease up. If we can find a valid reason in the whole system (beyond TLS) to open up or ease up, then we should do it.


We are only talking about security against an NSA-level opponent here.
Is that significant?


It is a significant question. Who are we protecting? If we are talking about online banking, and credit cards, and the like, we are *not* protecting against the NSA.

(Coz they already breached all the banks, ages ago, and they get it all in real time.)

On the other hand, if we are talking about CAs or privacy system operators or jihadist websites, then we are concerned about NSA-level opponents.

Either way, we need to make a decision. Otherwise all the other pronouncements are futile.


Eg, Tor isn't robust against NSA-level opponents. Is OTR?


All good questions. What you have to do is decide your threat model, and protect against that. And not flip across to some hypothetical received wisdom like "MITM is the devil" without a clear knowledge about why you care about that particular devil.


We're talking multiple orders of magnitude here.  The math that counts
is:

    Security = Users * Protection.

No. No. No. Please, no? No. Nonononononono.

It's Σ_i P_i·I_i, where P_i is the protection provided to
information i, and I_i is the importance of keeping information i
protected.


I'm sorry, I don't deal in omniscience. Typically we as suppliers of
some security product have only the faintest idea what our users are up
to.  (Some consider this a good thing, it's a privacy quirk.)


No, and you don't know how important your opponent thinks the
information is either, and therefore what resources he might be willing
or able to spend to get access to it.


Indeed, so many unknowables. Which is why a risk management approach is to decide what you are protecting against and, more importantly, what you are not protecting against.

That results in sharing the responsibility with another layer, another person. E.g., if you're not in the sharing business, you're not in the security business.


- but we can make some crypto which
(we think) is unbreakable.


In that lies the trap. Because we can make a block cipher that is unbreakable, we *think* we can make a system that is unbreakable. No such extension holds. Because we think we can make a system that is unbreakable, we talk as if we can protect the user unbreakably. A joke. Out of this sort of fallacious extension have come non-repudiation, off-the-record, authenticated HTTPS, and other myths.

Indeed, we can't even agree on a stream cipher that is unbreakable, let alone SSL or HTTPS or SSH or Skype or those higher level services.


No matter who or what resources, unbreakable. You can rely on the math.

And it doesn't usually cost any more than we are willing to pay - heck,
the price is usually lost in the noise.

And, the trap springs. In order to make our constructions unbreakable, we do a bit of handwaving, and hand the user a crock full of implausible smelly brown stuff.

Zero crypto (theory) failures.

Ok, real-world systems won't ever meet that standard - but please don't
hobble them with failure before they start trying.


Sure. Just be sure you don't hobble real world systems with perfect security models.

Systems are used by users, not by cryptographers. It is the security people who have the bias from reality, not the users. Users are the reality; what they do with your system is the reality.


With that assumption, the various i's you list become some sort of
average

Do you mean I_i's?


P_i's and I_i's.

Ah, average. Which average might that be? Hmmm, independent
distributions of two variables - are you going to average them, then
multiply the averages?

That approximation doesn't actually work very well, mathematically
speaking - as I'm sure you know.


Indeed. Not only does it model poorly, it provides little basis for estimation at all. Which is why risk management typically doesn't use it.
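To make the averaging objection concrete, here's a toy calculation. This is a minimal Python sketch with made-up numbers of my own (nothing here comes from real data); the point is only that when importance and protection are anti-correlated - the most important information getting the least protection, a common real-world failure - multiplying the averages misses the true sum by a wide margin:

```python
# Toy sketch: why n * avg(P) * avg(I) is a poor stand-in for the
# per-item sum of P_i * I_i when protection and importance are
# correlated. All numbers below are invented for illustration.

def total_protected_value(p, i):
    """Sum over items of protection * importance: sum_i P_i * I_i."""
    return sum(pi * ii for pi, ii in zip(p, i))

def averages_multiplied(p, i):
    """The 'average then multiply' approximation: n * avg(P) * avg(I)."""
    n = len(p)
    return n * (sum(p) / n) * (sum(i) / n)

# Anti-correlated case: the most important information gets the
# least protection.
protection = [0.9, 0.7, 0.5, 0.1]   # P_i in [0, 1]
importance = [1.0, 2.0, 5.0, 10.0]  # I_i, arbitrary units

exact = total_protected_value(protection, importance)   # 5.8
approx = averages_multiplied(protection, importance)    # 9.9
```

Here the multiplied-averages figure overstates the protected value by roughly 70%, which is exactly the kind of error that makes the approximation useless for deciding where the real exposure is.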

This is why the security model that is provided is typically
one-size-fits-all, and the most successful products are typically the
ones with zero configuration and the best fit for the widest market.

I totally agree with zero configuration - and best fit - but you are
missing the main point.

Would 1024-bit DHE give a reasonable expectation of say, ten years
unbreakable by NSA?


Nope.
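For context on that "Nope": by the commonly cited NIST SP 800-57 strength equivalences, 1024-bit finite-field DH is worth only about 80 bits of symmetric-equivalent security, which is not a comfortable margin against a decade of NSA-scale effort. A minimal lookup sketch (the function name and the 112-bit default threshold are my own illustration, not anything from the thread):

```python
# Sketch: approximate symmetric-equivalent strength of finite-field
# DH/DHE modulus sizes, per the NIST SP 800-57 Part 1 equivalences.
DH_STRENGTH_BITS = {
    1024: 80,    # long deprecated; roughly 80-bit security
    2048: 112,
    3072: 128,
    7680: 192,
    15360: 256,
}

def strong_enough(modulus_bits, required_bits=112):
    """Crude check: does this DH modulus meet the required strength?

    Unknown sizes map to 0, i.e. fail the check.
    """
    return DH_STRENGTH_BITS.get(modulus_bits, 0) >= required_bits
```

By this yardstick, 1024-bit DHE fails even the 112-bit floor that NIST set for use beyond 2013, let alone a "ten years against NSA" requirement.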

If not, and Manning or Snowden wanted to use TLS, they would likely be
busted.


Yep, busted.  But TLS is not designed to protect Manning and Snowden.

Incidentally, would OTR pass that test?


Nope, and it famously didn't. Manning was caught with chats over OTR to a trusted accomplice who recorded them and handed them over. As is entirely predictable.

This is why I say, the threat is *always* on the node. From your perspective, we can make the wire protocol so (cryptographically) strong that we can ignore any residual weaknesses, and simply model the threat on the node. That's what we should protect for, and that's what went wrong for Manning (as well as most other examples you can find).


(sorry for the sloppy late reply)

(I'm talking about TLS2, not a BCP - but the BCP is significant)
(how's the noggin? how's Waterlooville?? can I come visit sometime?)


No problem! Hey, I'm unsure about your references, but it's like a 10 hour flight from that location to where I am ;)



iang

_______________________________________________
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography
