Re: [Cryptography] Crypto Standards vs. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-09 Thread Watson Ladd
On Tue, Oct 8, 2013 at 7:38 AM, Jerry Leichter leich...@lrw.com wrote:

 On Oct 8, 2013, at 1:11 AM, Bill Frantz fra...@pwpconsult.com wrote:
  If we can't select ciphersuites that we are sure we will always be
 comfortable with (for at least some foreseeable lifetime) then we urgently
 need the ability to *stop* using them at some point.  The examples of MD5
 and RC4 make that pretty clear.
  Ceasing to use one particular encryption algorithm in something like
 SSL/TLS should be the easiest case--we don't have to worry about old
 signatures/certificates using the outdated algorithm or anything.  And yet
 we can't reliably do even that.
 
  We seriously need to consider what the design lifespan of our crypto
 suites is in real life. That data should be communicated to hardware and
 software designers so they know what kind of update schedule needs to be
 supported. Users of the resulting systems need to know that the crypto
 standards have a limited life so they can include update in their
 installation planning.
 This would make a great April Fool's RFC, to go along with the classic
 "evil bit".  :-(

 There are embedded systems that are impractical to update and have
 expected lifetimes measured in decades.  RFID chips include cryptography,
 are completely un-updatable, and have no real limit on their lifetimes -
 the percentage of the population represented by any given vintage of
 chips will drop continuously, but it will never go to zero.  We are rapidly
 entering a world in which devices with similar characteristics will, in
 sheer numbers, dominate the ecosystem - see the remote-controllable
 Philips Hue light bulbs (
 http://www.amazon.com/dp/B00BSN8DLG/)
 as an early example.  (Oh, and there's been an attack against them:
 http://www.engadget.com/2013/08/14/philips-hue-smart-light-security-issues/.
 The response from Philips to that article says "In developing Hue we have
 used industry standard encryption and authentication techniques ... [O]ur
 main advice to customers is that they take steps to ensure they are
 secured from malicious attacks at a network level."

 The obvious solution: Do it right the first time. Many of the TLS issues
we are dealing with today were known at the time the standard was being
developed. RFID usually isn't that security critical: if a shirt insists
it's an ice cream, a human will usually be around to see that it is a shirt.
AES will last forever, barring cryptanalytic advances. Quantum computers
will doom ECC, but in the meantime we are good.

Cryptography between two parties authenticating and communicating is a
solved problem. What isn't solved, and what lies behind many of these
issues, is 1) getting the standards committees up to speed and 2)
deployment/PKI issues.


 I'm afraid the reality is that we have to design for a world in which some
 devices will be running very old versions of code, speaking only very old
 versions of protocols, pretty much forever.  In such a world, newer devices
 either need to shield their older brethren from the sad realities or
 relegate them to low-risk activities by refusing to engage in high-risk
 transactions with them.  It's by no means clear how one would do this, but
 there really aren't any other realistic alternatives.

Great big warning lights saying "Insecure device! Do not trust!". If Wells
Fargo customers got a "Warning: This site is using outdated security" when
visiting it in any browser, they would fix the F5 terminator that is
currently stopping the rest of us from deploying various TLS extensions.

 -- Jerry


Re: [Cryptography] Crypto Standards vs. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-09 Thread Watson Ladd
On Tue, Oct 8, 2013 at 1:46 PM, Bill Frantz fra...@pwpconsult.com wrote:

 On 10/8/13 at 7:38 AM, leich...@lrw.com (Jerry Leichter) wrote:

 On Oct 8, 2013, at 1:11 AM, Bill Frantz fra...@pwpconsult.com wrote:


 We seriously need to consider what the design lifespan of our crypto suites 
 is in real life. That data should be communicated to hardware and software 
 designers so they know what kind of update schedule needs to be supported. 
 Users of the resulting systems need to know that the crypto standards have 
 a limited life so they can include update in their installation planning.


 This would make a great April Fool's RFC, to go along with the classic evil 
 bit.  :-(


 I think the situation is much more serious than this comment makes it appear. 
 As professionals, we have an obligation to share our knowledge of the limits 
 of our technology with the people who are depending on it. We know that all 
 crypto standards which are 15 years old or older are obsolete, not 
 recommended for current use, or outright dangerous. We don't know of any way 
 to avoid this problem in the future.

15 years ago was 1998. Diffie-Hellman is much, much older and still
works. Kerberos is of similar vintage. Feige-Fiat-Shamir is from 1988,
Schnorr signatures from 1989.

 I think the burden of proof is on the people who suggest that we only have to 
 do it right the next time and things will be perfect. These proofs should 
 address:

 New applications of old attacks.
 The fact that new attacks continue to be discovered.
 The existence of powerful actors subverting standards.
 The lack of a "did it right" example to point to.
As one of the "do it right the first time" people, I'm going to argue
that the experience with TLS shows that extensibility doesn't work.

TLS was designed to support multiple ciphersuites. Unfortunately, this
opened the door to downgrade attacks, and transitioning to protocol
versions that closed it was nontrivial. The ciphersuites it included all
shared certain misfeatures, leading to the current situation.

TLS is difficult to model: the use of key confirmation makes standard
security notions inapplicable. The fact that every ciphersuite is
specified separately, rather than built by generic composition, makes
configuration painful.
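
For contrast, generic composition builds the record protection once from
any cipher plus any MAC, instead of per-suite. A minimal encrypt-then-MAC
sketch in Python (using the pyca/cryptography package and the stdlib hmac
module; illustrative only, not the actual TLS record protocol):

    # Sketch of generic encrypt-then-MAC composition: any CTR-mode cipher
    # plus any MAC gives an authenticated channel, configured in one place.
    import hmac
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def seal(enc_key, mac_key, plaintext):
        nonce = os.urandom(16)  # fresh CTR nonce per message
        enc = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).encryptor()
        ct = nonce + enc.update(plaintext) + enc.finalize()
        tag = hmac.new(mac_key, ct, 'sha256').digest()  # MAC covers nonce + ciphertext
        return ct + tag

    def open_sealed(enc_key, mac_key, sealed):
        ct, tag = sealed[:-32], sealed[-32:]
        if not hmac.compare_digest(hmac.new(mac_key, ct, 'sha256').digest(), tag):
            raise ValueError("MAC failure")  # reject before touching the plaintext
        dec = Cipher(algorithms.AES(enc_key), modes.CTR(ct[:16])).decryptor()
        return dec.update(ct[16:]) + dec.finalize()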

In addition, bugs in widely deployed TLS accelerators mean that the
claimed upgradability doesn't actually exist: implementations can
interoperate without supporting very necessary features. Had the designers
of TLS used a three-pass Diffie-Hellman protocol with encrypt-then-MAC,
rather than the morass they came up with, we wouldn't be in this
situation today. TLS was not exploring new ground: it was well-hoed
turf intellectually, and they still screwed it up.
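
The core of such a handshake is tiny. A sketch of the raw DH exchange with
pyca/cryptography (real deployments would use fixed, vetted groups and
authenticate the shares; parameter generation here is slow and for
illustration only):

    # Sketch of the core Diffie-Hellman exchange (pyca/cryptography).
    from cryptography.hazmat.primitives.asymmetric import dh

    params = dh.generate_parameters(generator=2, key_size=2048)  # slow; demo only
    alice = params.generate_private_key()
    bob = params.generate_private_key()
    shared_a = alice.exchange(bob.public_key())
    shared_b = bob.exchange(alice.public_key())
    assert shared_a == shared_b  # both sides derive the same secret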

Any standard is only an approximation to what is actually implemented.
Features that aren't used are likely to be skipped or implemented
incorrectly.

Protocols involving crypto need to be so damn simple that if it
connects correctly, the chance of a bug is vanishingly small. If we
make a simple protocol, with automated analysis of its security, the
only danger is a primitive failing, in which case we are in trouble
anyway.


 There are embedded systems that are impractical to update and have expected 
 lifetimes measured in decades...

 Many perfectly good PCs will stay on XP forever because even if there were
 the will and staff to upgrade, recent versions of Windows won't run on their
 hardware.
 ...

 I'm afraid the reality is that we have to design for a world in which some 
 devices will be running very old versions of code, speaking only very old 
 versions of protocols, pretty much forever.  In such a world, newer devices 
 either need to shield their older brethren from the sad realities or 
 relegate them to low-risk activities by refusing to engage in high-risk 
 transactions with them.  It's by no means clear how one would do this, but 
 there really aren't any other realistic alternatives.





Re: [Cryptography] AES-256 - More NIST-y? paranoia

2013-10-04 Thread Watson Ladd
On Thu, Oct 3, 2013 at 3:25 PM, leich...@lrw.com wrote:

 On Oct 3, 2013, at 12:21 PM, Jerry Leichter leich...@lrw.com wrote:
  As *practical attacks today*, these are of no interest - related key
 attacks only apply in rather unrealistic scenarios, even a 2^119 strength
 is way beyond any realistic attack, and no one would use a reduced-round
 version of AES-256.
 Expanding a bit on what I said:  Ideally, you'd like a cryptographic
 algorithm to let you build a pair of black boxes.  I put my data and a key
 into my black box, send you the output; you put the received data and the
 same key (or a paired key) into your black box; and out comes the data I
 sent you, fully secure and authenticated.  Unfortunately, we have no clue
 how to build such black boxes.  Even if the black boxes implement just the
 secrecy transformation for a stream of blocks (i.e., they are symmetric
 block ciphers), if there's a related key attack, I'm in danger if I haven't
 chosen my keys carefully enough.

This is complete and utter bullshit if you can count, or can make big enough
random numbers if you cannot. Read "Cryptography in NaCl" or
Rogaway's analysis of authenticated encryption modes in standards if you
don't believe this is a solved problem in theory, or heck, even the GCM or
CCM standards. Or Rogaway's OCB paper.
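
The counting really is the whole trick: independent, uniformly random keys
take related-key scenarios off the table entirely. A sketch with PyNaCl
(the Python NaCl binding, assumed installed):

    # Sketch: one independent, uniformly random key per session, NaCl-style.
    from nacl.secret import SecretBox
    from nacl.utils import random as randombytes

    key = randombytes(SecretBox.KEY_SIZE)  # 32 uniformly random bytes
    box = SecretBox(key)
    ct = box.encrypt(b"attack at dawn")    # random nonce chosen and prepended
    assert box.decrypt(ct) == b"attack at dawn"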


 No protocol anyone is likely to use is subject to a related key attack,
 but it's one of those flaws that mean we haven't really gotten where we
 should.  Also, any flaw is a hint that there might be other, more dangerous
 flaws elsewhere.

PRP security does not imply security in the related-key model. It also
doesn't imply sPRP security. But you don't need it.
Now, if you are making a claim about block cipher constructions, go show me
why this matters by publishing an attack or some theoretical analysis about
related keys leading to good attacks in a stronger setting.

 If you think in these terms about asymmetric crypto, the situation is
 much, much worse.  It turns out that you have to be really careful about
 what you shove into those boxes, or you open yourself up to all kinds of
 attacks.  The classic paper on this subject is
 http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=4568385,
 the text for which appears to be available only for a fee.


 -- Jerry



Sincerely,
Watson Ladd


Re: [Cryptography] encoding formats should not be committee'ized

2013-10-04 Thread Watson Ladd
On Thu, Oct 3, 2013 at 1:35 PM, Lodewijk andré de la porte
l...@odewijk.nl wrote:

 IMO readability is very hard to measure. Likely things being where you
 expect them to be, with minimal confusing characters but clear anchoring
 so you can start reading from anywhere.

 If someone could write a generative meta-language we can then ask people
 to do text comprehension tasks on the packed data. The relative speeds of
 completing those tasks should provide a measure of readability.

 I don't like anyone arguing about differences in readability without such
 empirical data. (it's all pretty similar unless you design against it I
 guess)

 XML is actually surprisingly readable. JSON is a lot more minimal. I find
 its restrictions frustrating and prefer using real JAVASCRIPT OBJECT
 NOTATION wherever possible, like INCLUDING FUNCTIONS and INCLUDING 'THIS'
 REFERENCES. Harder on parsers, but why would you write your own anyway? (No,
 your language is not archaic/hipster enough not to have a parser for a
 popular notational format!)

What part of the Chomsky hierarchy do you not understand?
What part of running computations on untrusted data which amount to Turing
machines sounds like a good idea? The trivial DDoS, or the oh-so-amusing
use as part of a distributed computing service?
What dangers of multipass computation on potentially ambiguous data do you
think are worth the extra convenience?
And let's not forget the bugs that context-sensitive grammars invite.
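
The contrast is easy to demonstrate in Python: a regular-enough format like
JSON is parsed without executing anything, while evaluating "real" object
notation hands the sender an interpreter (a sketch; the eval line is
deliberately commented out):

    # Sketch: parsing untrusted input vs. evaluating it.
    import json

    untrusted = '{"user": "alice", "admin": false}'
    data = json.loads(untrusted)  # pure parsing; no attacker code runs

    evil = "__import__('os').system('rm -rf /')"  # also "just notation"
    # eval(evil)  # evaluating attacker-controlled notation executes it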


 I think that's the most useful I have to say on the subject.


Re: [Cryptography] NIST about to weaken SHA3?

2013-09-30 Thread Watson Ladd
On Mon, Sep 30, 2013 at 2:21 PM, James A. Donald jam...@echeque.com wrote:

 On 2013-10-01 00:44, Viktor Dukhovni wrote:

 Should one also accuse ESTREAM of maliciously weakening SALSA?  Or
 might one admit the possibility that winning designs in contests
 are at times quite conservative and that one can reasonably
 standardize less conservative parameters that are more competitive
 in software?


 "Less conservative" means weaker.

 Weaker in ways that the NSA has examined, and the people that chose the
 winning design have not.

This isn't true: Keccak's designers proposed a wide range of capacity
parameters for different environments.


 Why then hold a contest and invite outside scrutiny in the first place?

 This is simply a brand new unexplained secret design emerging from the
 bowels of the NSA, which already gave us a variety of backdoored crypto.

No, it is the Keccak construction with a different rate and capacity.


 The design process, the contest, the public examination, was a lie.

 Therefore, the design is a lie.

I'm sorry, but the tradeoffs in capacity and their implications were part
of the Keccak submission from the beginning. Throughout the process,
commentators questioned the difference between collision security and
preimage security, since it was clear that collisions kill a hash as dead
as preimages do. This was a topic of debate on the SHA-3 list between DJB
and others, because DJB designed CubeHash to have the same tradeoff as the
design NIST is proposing to standardize.
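
The tradeoff is plain arithmetic on the sponge parameters (a sketch; the
numbers follow the flat sponge claim of roughly 2^(c/2) generic security):

    # Sketch: sponge security levels from capacity (1600-bit Keccak state).
    WIDTH = 1600

    def sponge_params(capacity):
        # Flat sponge claim: generic attacks cost ~2^(capacity/2),
        # further capped by the output length for collisions/preimages.
        return {"rate_bits": WIDTH - capacity,
                "security_cap": capacity // 2}

    print(sponge_params(512))  # SHA3-256 as submitted: cap 2^256, rate 1088
    print(sponge_params(256))  # the faster proposal: cap 2^128, rate 1344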






Sincerely,
Watson

Re: [Cryptography] The paranoid approach to crypto-plumbing

2013-09-16 Thread Watson Ladd
On Mon, Sep 16, 2013 at 4:02 PM, Jerry Leichter leich...@lrw.com wrote:
 On Sep 16, 2013, at 6:20 PM, Bill Frantz wrote:
 Joux's paper Multicollisions in iterated hash functions
http://www.iacr.org/archive/crypto2004/31520306/multicollisions.ps
 shows that finding ... r-tuples of messages that all hash to the same
value is not much harder than finding ... pairs of messages.  This has
some surprising implications.  In particular, Joux uses it to show that, if
F(X) and G(X) are cryptographic hash functions, then H(X) = F(X) || G(X)
(|| is concatenation) is about as hard as the harder of F and G - but no
harder.
 This kind of result is why us crypto plumbers should always consult real
cryptographers. :-)
 Yes, this is the kind of thing that makes crypto fun.
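
Concretely, Joux's combiner is just the following (a minimal hashlib
sketch):

    # Minimal sketch of the concatenation combiner H(X) = F(X) || G(X).
    # Per Joux, roughly as strong as the stronger of F and G, but no stronger.
    import hashlib

    def combined_hash(data):
        return hashlib.sha256(data).digest() + hashlib.md5(data).digest()

    digest = combined_hash(b"message")  # 32 + 16 bytes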

 The feeling these days among those who do such work is that unless you're
going to use a specialized combined encryption and authentication mode, you
might as well use counter mode (with, of course, required authentication).
 For the encryption part, counter mode with multiple ciphers and
independent keys has the nice property that it's trivially as strong as the
strongest of the constituents.  (Proof:  If all the ciphers except one are
cracked, the attacker is left with a known-plaintext attack against the
remaining one.  The need for independent keys is clear since if I use two
copies of the same cipher with the same key, I end up sending plaintext!
 You'd need some strong independence statements about the ciphers in the
set if you want to reuse keys.  Deriving them from a common key with a
one-way hash function is probably safe in practice, though you'd now need
some strong statements about the hash function to get any theoretical
result.  Why rely on such things when you
  don't need to?)
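
A sketch of that cascade with independent keys (Python, pyca/cryptography;
the particular ciphers here are illustrative):

    # Sketch: counter-mode cascade with independent keys. If either cipher
    # holds up, the composition holds up.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def cascade_encrypt(k_aes, k_chacha, nonce16, plaintext):
        c1 = Cipher(algorithms.AES(k_aes), modes.CTR(nonce16)).encryptor()
        stage1 = c1.update(plaintext) + c1.finalize()
        # this API takes one 16-byte value combining counter and nonce
        c2 = Cipher(algorithms.ChaCha20(k_chacha, nonce16), mode=None).encryptor()
        return c2.update(stage1) + c2.finalize()

    k1, k2, n = os.urandom(32), os.urandom(32), os.urandom(16)
    ct = cascade_encrypt(k1, k2, n, b"independent keys are essential")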

 It's not immediately clear to me what the right procedure for multiple
authentication is.
 -- Jerry
The right procedure would be to use a universal hash function together with
counter-mode encryption. This has provable security related to the
difficulty of finding linear approximations to the encryption function.
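
For example, Poly1305 is such a universal-hash MAC. A sketch with
pyca/cryptography, using a one-time tag key (real designs like
ChaCha20-Poly1305 derive it from the cipher per message):

    # Sketch: universal-hash authentication (Poly1305) over CTR ciphertext.
    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
    from cryptography.hazmat.primitives.poly1305 import Poly1305

    def seal(enc_key, one_time_key, plaintext):
        # one_time_key: 32 bytes, must never authenticate two messages
        nonce = os.urandom(16)
        enc = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).encryptor()
        ct = enc.update(plaintext) + enc.finalize()
        tag = Poly1305.generate_tag(one_time_key, nonce + ct)
        return nonce + ct + tag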

But I personally don't think this is much use. We have ciphers that have
stood up to lots of analysis. The real problems have been in modes of
operation, key negotiation, and deployment.
Sincerely,
Watson Ladd
