Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-09 Thread Watson Ladd
On Tue, Oct 8, 2013 at 7:38 AM, Jerry Leichter leich...@lrw.com wrote:

 On Oct 8, 2013, at 1:11 AM, Bill Frantz fra...@pwpconsult.com wrote:
  If we can't select ciphersuites that we are sure we will always be
 comfortable with (for at least some forseeable lifetime) then we urgently
 need the ability to *stop* using them at some point.  The examples of MD5
 and RC4 make that pretty clear.
  Ceasing to use one particular encryption algorithm in something like
 SSL/TLS should be the easiest case--we don't have to worry about old
 signatures/certificates using the outdated algorithm or anything.  And yet
 we can't reliably do even that.
 
  We seriously need to consider what the design lifespan of our crypto
 suites is in real life. That data should be communicated to hardware and
 software designers so they know what kind of update schedule needs to be
 supported. Users of the resulting systems need to know that the crypto
 standards have a limited life so they can include update in their
 installation planning.
 This would make a great April Fool's RFC, to go along with the classic
 evil bit.  :-(

 There are embedded systems that are impractical to update and have
 expected lifetimes measured in decades.  RFID chips include cryptography,
 are completely un-updatable, and have no real limit on their lifetimes -
 the percentage of the population represented by any given vintage of
 chips will drop continuously, but it will never go to zero.  We are rapidly
 entering a world in which devices with similar characteristics will, in
 sheer numbers, dominate the ecosystem - see the remote-controllable
 Philips Hue light bulbs (http://www.amazon.com/dp/B00BSN8DLG/) as an early
 example.  (Oh, and there's been an attack against them:
 http://www.engadget.com/2013/08/14/philips-hue-smart-light-security-issues/.
 The response from Philips to that article says "In developing Hue we have
 used industry standard encryption and authentication techniques ... [O]ur
 main advice to customers is that they take steps to ensure they are secured
 from malicious attacks at a network level.")

 The obvious solution: Do it right the first time. Many of the TLS issues
we are dealing with today were known at the time the standard was being
developed. RFID usually isn't that security critical: if a shirt insists
it's an ice cream, a human will usually be around to see that it is a shirt.
AES will last forever, barring cryptanalytic advances. Quantum
computers will doom ECC, but in the meantime we are good.

Cryptography for two parties authenticating and communicating with each
other is a solved problem. What isn't solved, and what lies behind many of
these issues, is 1) getting the standards committees up to speed and 2)
deployment/PKI issues.


 I'm afraid the reality is that we have to design for a world in which some
 devices will be running very old versions of code, speaking only very old
 versions of protocols, pretty much forever.  In such a world, newer devices
 either need to shield their older brethren from the sad realities or
 relegate them to low-risk activities by refusing to engage in high-risk
 transactions with them.  It's by no means clear how one would do this, but
 there really aren't any other realistic alternatives.

Great big warning lights saying "Insecure device! Do not trust!". If Wells
Fargo customers got a "Warning: This site is using outdated security" when
visiting it on all browsers, they would fix that F5 terminator currently
stopping the rest of us from deploying various TLS extensions.

 -- Jerry

 ___
 The cryptography mailing list
 cryptography@metzdowd.com
 http://www.metzdowd.com/mailman/listinfo/cryptography




-- 
Those who would give up Essential Liberty to purchase a little Temporary
Safety deserve neither  Liberty nor Safety.
-- Benjamin Franklin
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Iran and murder

2013-10-09 Thread Phillip Hallam-Baker
On Wed, Oct 9, 2013 at 12:44 AM, Tim Newsham tim.news...@gmail.com wrote:

  We are more vulnerable to widespread acceptance of these bad principles
 than
  almost anyone, ultimately,  But doing all these things has won larger
 budgets
  and temporary successes for specific people and agencies today, whereas
  the costs of all this will land on us all in the future.

 The same could be (and has been) said about offensive cyber warfare.


I said the same thing in the launch issue of cyber-defense. Unfortunately
the editor took it into his head to conflate inventing the HTTP referer
field etc. with rather more, and since they refuse to correct it I can't
point people at the article.


I see cyber-sabotage as being similar to use of chemical or biological
weapons: It is going to be banned because the military consequences fall
far short of being decisive, are unpredictable and the barriers to entry
are low.

STUXNET has been relaunched with different payloads countless times. So we
are throwing stones the other side can throw back with greater force.


We have a big problem in crypto because we cannot now be sure whether the
help received from the US government in the past was well intentioned or
not. And so a great deal of time is being wasted right now (though we will
waste orders of magnitude more of their time).

At the moment we have a bunch of generals and contractors telling us that
we must spend billions on the ability to attack China's power system in
case they attack ours. If we accept that project then we can't share
technology that might help them defend their power system which cripples
our ability to defend our own.

So a purely hypothetical attack promoted for the personal enrichment of a
few makes us less secure, not safer. And the power systems are open to
attack by sufficiently motivated individuals.


The sophistication of STUXNET lay in its ability to discriminate the
intended target from others. The opponents we face simply don't care about
collateral damage. So I am not impressed by people boasting about the
ability of some country (not an ally of my country, BTW) to perform targeted
murder; that boasting overlooks the fact that the other side can and likely
will retaliate with indiscriminate murder in return.

I bet people are less fond of drones when they start to realize other
countries have them as well.


Let's just stick to defense and make the NATO civilian infrastructure secure
against cyber attack, regardless of what making that technology public might
do for those whom some people insist we should consider enemies.

-- 
Website: http://hallambaker.com/
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

[Cryptography] The cost of National Security Letters

2013-10-09 Thread Phillip Hallam-Baker
One of the biggest problems with the current situation is that US
technology companies have no ability to convince others that their
equipment has not been compromised by a government mandated backdoor.

This is imposing a significant and real cost on providers of outsourced Web
Services and is beginning to place costs on manufacturers. International
customers are learning to shop elsewhere for their IT needs.

While moving from the US to the UK might seem to leave the customer equally
vulnerable to warrant-less NSA/GCHQ snooping, there is a very important
difference. A US provider can be silenced using a National Security Letter,
which is an administrative order issued by a government agency without any
court sanction. There is no equivalent capability in UK law.

A UK court can make an intercept order or authorize a search etc. but that
is by definition a Lawful Intercept and that capability exists regardless
of jurisdiction. What is unique in the US at the moment is the National
Security Letter.


-- 
Website: http://hallambaker.com/
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Elliptic curve question

2013-10-09 Thread James A. Donald

On 2013-10-08 03:14, Phillip Hallam-Baker wrote:


Are you planning to publish your signing key or your decryption key?

 Use of a key for one makes the other incompatible.


Incorrect.  One's public key is always an elliptic point, one's private 
key is always a number.


Thus there is no reason in principle why one cannot use the same key (a 
number) for signing the messages you send, and decrypting the messages 
you receive.
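
(For illustration only: a minimal sketch, using the third-party Python
"cryptography" package, of one private scalar serving both ECDSA signing and
ECDH key agreement -- the primitive underneath ECIES-style decryption.
Whether reusing one key for both roles is wise is exactly what is in dispute
here; the sketch only shows that nothing in the math forbids it.)

# Sketch: one EC private scalar used for both signing (ECDSA) and key
# agreement (ECDH, the building block of ECIES-style decryption).
# Assumes the "cryptography" package; shown for illustration only.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

priv = ec.generate_private_key(ec.SECP256R1())  # private key: a number
pub = priv.public_key()                         # public key: a curve point

# 1) Sign and verify with the scalar.
sig = priv.sign(b"message to send", ec.ECDSA(hashes.SHA256()))
pub.verify(sig, b"message to send", ec.ECDSA(hashes.SHA256()))

# 2) Key agreement with the very same scalar (as ECIES decryption would do).
eph = ec.generate_private_key(ec.SECP256R1())   # sender's ephemeral key
shared_receiver = priv.exchange(ec.ECDH(), eph.public_key())
shared_sender = eph.exchange(ec.ECDH(), pub)
assert shared_receiver == shared_sender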



___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-09 Thread Bill Frantz

On 10/8/13 at 7:38 AM, leich...@lrw.com (Jerry Leichter) wrote:


On Oct 8, 2013, at 1:11 AM, Bill Frantz fra...@pwpconsult.com wrote:


We seriously need to consider what the design lifespan of our 
crypto suites is in real life. That data should be 
communicated to hardware and software designers so they know 
what kind of update schedule needs to be supported. Users of 
the resulting systems need to know that the crypto standards 
have a limited life so they can include update in their 
installation planning.



This would make a great April Fool's RFC, to go along with the classic evil 
bit.  :-(


I think the situation is much more serious than this comment 
makes it appear. As professionals, we have an obligation to 
share our knowledge of the limits of our technology with the 
people who are depending on it. We know that all crypto 
standards which are 15 years old or older are obsolete, not 
recommended for current use, or outright dangerous. We don't 
know of any way to avoid this problem in the future.


I think the burden of proof is on the people who suggest that we 
only have to do it right the next time and things will be 
perfect. These proofs should address:


New applications of old attacks.
The fact that new attacks continue to be discovered.
The existence of powerful actors subverting standards.
The lack of a "did it right" example to point to.


There are embedded systems that are impractical to update and 
have expected lifetimes measured in decades...
Many perfectly good PCs will stay on XP forever because even 
if there were the will and staff to upgrade, recent versions of 
Windows won't run on their hardware.

...
I'm afraid the reality is that we have to design for a world in 
which some devices will be running very old versions of code, 
speaking only very old versions of protocols, pretty much 
forever.  In such a world, newer devices either need to shield 
their older brethren from the sad realities or relegate them to 
low-risk activities by refusing to engage in high-risk 
transactions with them.  It's by no means clear how one would 
do this, but there really aren't any other realistic alternatives.


Users of this old equipment will need to make a security/cost 
tradeoff based on their requirements. The ham radio operator who 
is still running Windows 98 doesn't really concern me. (While 
his internet-connected system might be a bot, the bot 
controllers will protect his computer from others, so his radio 
logs and radio firmware update files are probably safe.) I've 
already commented on the risks of sending Mailman passwords in 
the clear. Low value/low risk targets don't need titanium security.


The power plant which can be destroyed by a cyber attack, c.f. 
STUXNET, does concern me. Gas distribution systems do concern 
me. Banking transactions do concern me, particularly business 
accounts. (The recommendations for online business accounts 
include using a dedicated computer -- good advice.)


Perhaps the shortest limit on the lifetime of an embedded system 
is the security protocol, and not the hardware. If so, how do we 
as a society deal with this limit?


Cheers -- Bill

---
Bill Frantz        | gets() remains as a monument | Periwinkle
(408)356-8506      | to C's continuing support of | 16345 Englewood Ave
www.pwpconsult.com | buffer overruns.             | Los Gatos, CA 95032


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] PGP Key Signing parties

2013-10-09 Thread Phillip Hallam-Baker
Does PGP have any particular support for key signing parties built in, or is
this just something that has grown up as a practice of use?

I am looking at different options for building a PKI for securing personal
communications and it seems to me that the Key Party model could be
improved on if there were some tweaks so that key party signing events were
a distinct part of the model.


I am specifically thinking of ways that key signing parties might be made
scalable so that it was possible for hundreds of thousands of people to
participate in an event and there were specific controls to ensure that the
use of the key party key was strictly bounded in space and time.

So, for example, it costs $2K to go to RSA. If there is an associated key
signing event that requires someone to be physically present, then that
is a $2K cost factor that we can leverage right there.

Now we can all imagine ways in which folk on this list could avoid or evade
such controls but they all have costs. I think it rather unlikely that any
of you would want to be attempting to impersonate me at multiple cons.

If there is a CT infrastructure then we can ensure that the use of the key
party key is strictly limited to that one event and that, even if the key is
not somehow destroyed after use, it is not going to be trusted.
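
(A purely hypothetical sketch, in Python, of what an event-bound attestation
record might look like; every field name here is invented for illustration
and nothing like it exists in PGP today.)

# Hypothetical event-bound attestation: the key-party key is honored only for
# signatures made inside the event window and logged in a CT-style log.
# All names and fields are invented for illustration.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class EventAttestation:
    event_id: str            # e.g. "RSA-2014-key-party" (made up)
    signer_fingerprint: str  # fingerprint of the key-party signing key
    not_before: datetime     # start of the event window
    not_after: datetime      # end of the event window
    log_entry: str           # handle of the CT-style log inclusion proof

def attestation_valid(a: EventAttestation, signed_at: datetime) -> bool:
    # Trust the key-party key only inside its time bounds and only if the
    # signature was logged; outside the window the key is simply not trusted.
    return a.not_before <= signed_at <= a.not_after and bool(a.log_entry)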


-- 
Website: http://hallambaker.com/
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] AES-256- More NIST-y? paranoia

2013-10-09 Thread Arnold Reinhold

On Oct 7, 2013, at 12:55 PM, Jerry Leichter wrote:

 On Oct 7, 2013, at 11:45 AM, Arnold Reinhold a...@me.com wrote:
 If we are going to always use a construction like AES(KDF(key)), as Nico 
 suggests, why not go further and use a KDF with variable length output like 
 Keccak to replace the AES key schedule? And instead of making provisions to 
 drop in a different cipher should a weakness be discovered in AES,  make the 
 number of AES (and maybe KDF) rounds a negotiated parameter.  Given that x86 
 and ARM now have AES round instructions, other cipher algorithms are 
 unlikely to catch up in performance in the foreseeable future, even with a 
 higher AES round count. Increasing round count is effortless compared to 
 deploying a new cipher algorithm, even if provision is made in the protocol. 
 Dropping such provisions (at least in new designs) simplifies everything and 
 simplicity is good for security.
 That's a really nice idea.  It has a non-obvious advantage:  Suppose the AES 
 round instructions (or the round key computations instructions) have been 
 spiked to leak information in some non-obvious way - e.g., they cause a 
 power glitch that someone with the knowledge of what to look for can use to 
 read off some of the key bits.  The round key computation instructions 
 obviously have direct access to the actual key, while the round computation 
 instructions have access to the round keys, and with the standard round 
 function, given the round keys it's possible to determine the actual key.
 
 If, on the other hand, you use a cryptographically secure transformation from 
 key to round key, and avoid the built-in round key instructions entirely; and 
 you use CTR mode, so that the round computation instructions never see the 
 actual data; then AES round computation functions have nothing useful to leak 
 (unless they are leaking all their output, which would require a huge data 
 rate and would be easily noticed).  This also means that even if the round 
 instructions are implemented in software which allows for side-channel 
 attacks (i.e., it uses an optimized table instruction against which cache 
 attacks work), there's no useful data to *be* leaked.

At least in the Intel AES instruction set, the encode and decode instructions 
have access to each round key except the first. So they could leak that data, 
and it's at least conceivable that one can recover the first round key from 
later ones (perhaps this has been analyzed?).  Knowing all the round keys of 
course enables one to decode the data.  Still, this greatly increases the 
volume of data that must be leaked, and if any instructions are currently 
spiked, it is most likely the round key generation assist instruction. One 
could include an IV in the initial hash, so no information could be gained 
about the key itself.  This would work with AES(KDF(key+IV)) as well, however. 
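
(To make the shape of the construction concrete: a rough sketch of
AES(KDF(key+IV)) in Python, using SHAKE-256 -- a Keccak-based XOF -- to derive
the working AES key, with CTR mode on the data path. This only illustrates the
outer construction, not replacement of the round-key schedule itself, which
standard libraries don't expose; the "cryptography" package and the sizes are
assumptions.)

# Sketch of AES(KDF(key+IV)): derive the working AES key from the long-term
# key plus a fresh salt with SHAKE-256, then encrypt in CTR mode.  Library
# ("cryptography") and parameter sizes are illustrative assumptions.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt(long_term_key: bytes, plaintext: bytes) -> bytes:
    salt = os.urandom(16)
    xof = hashes.Hash(hashes.SHAKE256(32))       # 32-byte output -> AES-256 key
    xof.update(long_term_key + salt)
    working_key = xof.finalize()
    nonce = os.urandom(16)                       # initial CTR counter block
    enc = Cipher(algorithms.AES(working_key), modes.CTR(nonce)).encryptor()
    return salt + nonce + enc.update(plaintext) + enc.finalize()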

 
 So this is a mode for safely using possibly rigged hardware.  (Of course 
 there are many other ways the hardware could be rigged to work against you.  
 But with their intended use, hardware encryption instructions have a huge 
 target painted on them.)
 
 Of course, Keccak itself, in this mode, would have access to the real key.  
 However, it would at least for now be implemented in software, and it's 
 designed to be implementable without exposing side-channel attacks.
 
 There are two questions that need to be looked at:
 
 1.  Is AES used with (essentially) random round keys secure?  At what level 
 of security?  One would think so, but this needs to be looked at carefully.

The fact that the round keys are simply xor'd with the AES state at the start 
of each round suggests this is likely secure. One would have to examine the KDF to 
make sure there is nothing comparable to the related-key attacks on the AES 
key setup. 

 2.  Is the performance acceptable?

The comparison would be to AES(KDF(key)). And in how many applications is key 
agility critical?

 
 BTW, some of the other SHA-3 proposals use the AES round transformation as a 
 primitive, so could also potentially be used in generating a secure round key 
 schedule.  That might (or might not) put security-critical information back 
 into the hardware instructions.
 
 If Keccak becomes the standard, we can expect to see a hardware Keccak-f 
 implementation (the inner transformation that is the basis of each Keccak 
 round) at some point.  Could that be used in a way that doesn't give it the 
 ability to leak critical information?
-- Jerry
 

Given multi-billion-transistor CPU chips with no means to audit them, it's hard 
to see how they can be fully trusted.

Arnold Reinhold
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Iran and murder

2013-10-09 Thread James A. Donald

On 2013-10-08 02:03, John Kelsey wrote:

Alongside Phillip's comments, I'll just point out that assassination of key 
people is a tactic that the US and Israel probably don't have any particular 
advantages in.  It isn't in our interests to encourage a worldwide tacit 
acceptance of that stuff.


Israel is famous for its competence in that area.


And if the US is famously incompetent, that is probably lack of will,
rather than lack of ability.  Drones give the US technological supremacy in
the selective removal of key people.


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] P=NP on TV

2013-10-09 Thread Ray Dillinger
On 10/07/2013 05:28 PM, David Johnston wrote:

 We are led to believe that if it is shown that P = NP, we suddenly have a 
 break for all sorts of algorithms.
 So if P really does = NP, we can just assume P = NP and the breaks will make 
 themselves evident. They do not. Hence P != NP.

As I see it, it's still possible.  Proving that a solution exists does
not necessarily show you what the solution is or how to find it.  And
just because a solution is subexponential is no reason a priori to
suspect that it's cheaper than some known exponential solution for
any useful range of values.  (An O(n^100) algorithm is polynomial, yet
for inputs of any realistic size it is no more feasible than 2^n brute
force.)

So, to me, this is an example of TV getting it wrong.  If someone
ever proves P=NP, I expect that there will be thunderous excitement
in the math community, leaping hopes in the hearts of investors and
technologists, and then very careful explanations by the few people
who really understand the proof that it doesn't mean we can actually
do anything we couldn't do before.

Bear
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] AES-256- More NIST-y? paranoia

2013-10-09 Thread Jerry Leichter
On Oct 8, 2013, at 6:10 PM, Arnold Reinhold wrote:

 
 On Oct 7, 2013, at 12:55 PM, Jerry Leichter wrote:
 
 On Oct 7, 2013, at 11:45 AM, Arnold Reinhold a...@me.com wrote:
 If we are going to always use a construction like AES(KDF(key)), as Nico 
 suggests, why not go further and use a KDF with variable length output like 
 Keccak to replace the AES key schedule? And instead of making provisions to 
 drop in a different cipher should a weakness be discovered in AES,  make 
 the number of AES (and maybe KDF) rounds a negotiated parameter.  Given 
 that x86 and ARM now have AES round instructions, other cipher algorithms 
 are unlikely to catch up in performance in the foreseeable future, even 
 with a higher AES round count. Increasing round count is effortless 
 compared to deploying a new cipher algorithm, even if provision is made in 
 the protocol. Dropping such provisions (at least in new designs) simplifies 
 everything and simplicity is good for security.
 That's a really nice idea.  It has a non-obvious advantage:  Suppose the AES 
 round instructions (or the round key computations instructions) have been 
 spiked to leak information in some non-obvious way - e.g., they cause a 
 power glitch that someone with the knowledge of what to look for can use to 
 read off some of the key bits.  The round key computation instructions 
 obviously have direct access to the actual key, while the round computation 
 instructions have access to the round keys, and with the standard round 
 function, given the round keys it's possible to determine the actual key.
 
 If, on the other hand, you use a cryptographically secure transformation 
 from key to round key, and avoid the built-in round key instructions 
 entirely; and you use CTR mode, so that the round computation instructions 
 never see the actual data; then AES round computation functions have nothing 
 useful to leak (unless they are leaking all their output, which would 
 require a huge data rate and would be easily noticed).  This also means that 
 even if the round instructions are implemented in software which allows for 
 side-channel attacks (i.e., it uses an optimized table instruction against 
 which cache attacks work), there's no useful data to *be* leaked.
 
 At least in the Intel AES instruction set, the encode and decode instructions 
 have access to each round key except the first. So they could leak that data, 
 and it's at least conceivable that one can recover the first round key from 
 later ones (perhaps this has been analyzed?).  Knowing all the round keys of 
 course enables one to decode the data.  Still, this greatly increases the 
 volume of data that must be leaked, and if any instructions are currently 
 spiked, it is most likely the round key generation assist instruction. One 
 could include an IV in the initial hash, so no information could be gained 
 about the key itself.  This would work with AES(KDF(key+IV)) as well, 
 however. 
 
 
 So this is a mode for safely using possibly rigged hardware.  (Of course 
 there are many other ways the hardware could be rigged to work against you.  
 But with their intended use, hardware encryption instructions have a huge 
 target painted on them.)
 
 Of course, Keccak itself, in this mode, would have access to the real key.  
 However, it would at least for now be implemented in software, and it's 
 designed to be implementable without exposing side-channel attacks.
 
 There are two questions that need to be looked at:
 
 1.  Is AES used with (essentially) random round keys secure?  At what level 
 of security?  One would think so, but this needs to be looked at carefully.
 
 The fact that the round keys are simply xor'd with the AES state at the start 
 of each round suggests this is likely secure. One would have to examine the KDF 
 to make sure there is nothing comparable to the related-key attacks on 
 the AES key setup. 
 
 2.  Is the performance acceptable?
 
 The comparison would be to AES(KDF(key)). And in how many applications is key 
 agility critical?
 
 
 BTW, some of the other SHA-3 proposals use the AES round transformation as a 
 primitive, so could also potentially be used in generating a secure round 
 key schedule.  That might (or might not) put security-critical information 
 back into the hardware instructions.
 
 If Keccak becomes the standard, we can expect to see a hardware Keccak-f 
 implementation (the inner transformation that is the basis of each Keccak 
 round) at some point.  Could that be used in a way that doesn't give it the 
 ability to leak critical information?
   -- Jerry
 
 
 Given multi-billion-transistor CPU chips with no means to audit them, it's 
 hard to see how they can be fully trusted.
 
 Arnold Reinhold

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Iran and murder

2013-10-09 Thread Tim Newsham
 We are more vulnerable to widespread acceptance of these bad principles than
 almost anyone, ultimately,  But doing all these things has won larger budgets
 and temporary successes for specific people and agencies today, whereas
 the costs of all this will land on us all in the future.

The same could be (and has been) said about offensive cyber warfare.

 --John

-- 
Tim Newsham | www.thenewsh.com/~newsham | @newshtwit | thenewsh.blogspot.com
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] ADMIN: Reminders and No General Political Discussion please

2013-10-09 Thread Tamzen Cannoy
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


FYI I'm helping Perry out with Moderator duties. 

I've noticed an upswing in political discussions that are starting to range into 
security issues and away from cryptography.
Consider this a gentle reminder that that's not really the charter of this 
group. I understand how it is impossible to separate the two these days, but 
let's try to lean more toward the technical rather than the political side of 
things.

Here's a basic reminder from Perry of the other rules of the mailing list for 
all the new members.

==

We've got a very large number of participants on this list, and
volume has gone way up at the moment thanks to current
events. To make the experience pleasant for everyone please:

1) Cut down the original you're quoting to only the relevant portions
to minimize the amount of reading the 1600 people who will
be seeing your post will have to do.

2) Do not top post. I've explained why repeatedly.

3) Try to make sure what you are saying is interesting enough and
on topic. Minor asides etc. are not.

The list is moderated for a reason, and if you top post a one liner
followed by a 75 line intact original, be prepared to see a rejection
message.

Tamzen




-BEGIN PGP SIGNATURE-
Version: PGP Universal 3.2.0 (Build 1672)
Charset: us-ascii

wj8DBQFSVb2P5/HCKu9Iqw4RAljLAJ4oh46krUDlyEgV6nTSdvCbc2pL8QCdFiTk
jLViuUIhJse2Si23aDHuK2I=
=EAqu
-END PGP SIGNATURE-
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Elliptic curve question

2013-10-09 Thread Phillip Hallam-Baker
On Tue, Oct 8, 2013 at 4:14 PM, James A. Donald jam...@echeque.com wrote:

  On 2013-10-08 03:14, Phillip Hallam-Baker wrote:


 Are you planning to publish your signing key or your decryption key?

  Use of a key for one makes the other incompatible.


 Incorrect.  One's public key is always an elliptic point, one's private
 key is always a number.

 Thus there is no reason in principle why one cannot use the same key (a
 number) for signing the messages you send, and decrypting the messages you
 receive.


 The original author was proposing to use the same key for encryption and
signature, which is a rather bad idea.



-- 
Website: http://hallambaker.com/
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-09 Thread Watson Ladd
On Tue, Oct 8, 2013 at 1:46 PM, Bill Frantz fra...@pwpconsult.com wrote:

 On 10/8/13 at 7:38 AM, leich...@lrw.com (Jerry Leichter) wrote:

 On Oct 8, 2013, at 1:11 AM, Bill Frantz fra...@pwpconsult.com wrote:


 We seriously need to consider what the design lifespan of our crypto suites 
 is in real life. That data should be communicated to hardware and software 
 designers so they know what kind of update schedule needs to be supported. 
 Users of the resulting systems need to know that the crypto standards have 
 a limited life so they can include update in their installation planning.


 This would make a great April Fool's RFC, to go along with the classic evil 
 bit.  :-(


 I think the situation is much more serious than this comment makes it appear. 
 As professionals, we have an obligation to share our knowledge of the limits 
 of our technology with the people who are depending on it. We know that all 
 crypto standards which are 15 years old or older are obsolete, not 
 recommended for current use, or outright dangerous. We don't know of any way 
 to avoid this problem in the future.

15 years ago was 1998. Diffie-Hellman is much, much older and still
works. Kerberos is of similar vintage. Feige-Fiat-Shamir is from 1988,
Schnorr signatures from 1989.

 I think the burden of proof is on the people who suggest that we only have to 
 do it right the next time and things will be perfect. These proofs should 
 address:

 New applications of old attacks.
 The fact that new attacks continue to be discovered.
 The existence of powerful actors subverting standards.
 The lack of a "did it right" example to point to.
As one of the "do it right the first time" people, I'm going to argue
that the experience with TLS shows that extensibility doesn't work.

TLS was designed to support multiple ciphersuites. Unfortunately this
opened the door to downgrade attacks, and transitioning to protocol
versions that wouldn't do this was nontrivial. All of the included
ciphersuites shared certain misfeatures, leading to the current
situation.

TLS is difficult to model: the use of key confirmation makes standard
security notions inapplicable. The fact that every cipher suite is
indicated separately, rather than built by generic composition, makes
configuration painful.

In addition, bugs in widely deployed TLS accelerators mean that the
claimed upgradability doesn't actually exist. Implementations can work
without supporting very necessary features. Had the designers of TLS
used a three-pass Diffie-Hellman protocol with encrypt-then-MAC,
rather than the morass they came up with, we wouldn't be in this
situation today. TLS was not exploring new ground: it was well-hoed
turf intellectually, and they still screwed it up.
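
(For readers who want the encrypt-then-MAC shape spelled out: a minimal Python
sketch with AES-CTR and HMAC-SHA-256, independent keys for the two jobs, and
MAC verification before any decryption. The algorithm choices and the
"cryptography" package are assumptions; this is not a description of any
particular TLS mode.)

# Sketch of encrypt-then-MAC: encrypt first, MAC the nonce plus ciphertext,
# and verify the MAC before decrypting anything.  Choices are illustrative.
import os
from cryptography.hazmat.primitives import hashes, hmac
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def seal(enc_key: bytes, mac_key: bytes, plaintext: bytes) -> bytes:
    nonce = os.urandom(16)
    enc = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).encryptor()
    ct = enc.update(plaintext) + enc.finalize()
    tag = hmac.HMAC(mac_key, hashes.SHA256())
    tag.update(nonce + ct)
    return nonce + ct + tag.finalize()

def open_sealed(enc_key: bytes, mac_key: bytes, sealed: bytes) -> bytes:
    nonce, ct, tag = sealed[:16], sealed[16:-32], sealed[-32:]
    check = hmac.HMAC(mac_key, hashes.SHA256())
    check.update(nonce + ct)
    check.verify(tag)                 # raises before any decryption happens
    dec = Cipher(algorithms.AES(enc_key), modes.CTR(nonce)).decryptor()
    return dec.update(ct) + dec.finalize()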

Any standard is only an approximation to what is actually implemented.
Features that aren't used are likely to be skipped or implemented
incorrectly.

Protocols involving crypto need to be so damn simple that if it
connects correctly, the chance of a bug is vanishingly small. If we
make a simple protocol, with automated analysis of its security, the
only danger is a primitive failing, in which case we are in trouble
anyway.


 There are embedded systems that are impractical to update and have expected 
 lifetimes measured in decades...

 Many perfectly good PCs will stay on XP forever because even if there were 
 the will and staff to upgrade, recent versions of Windows won't run on their 
 hardware.
 ...

 I'm afraid the reality is that we have to design for a world in which some 
 devices will be running very old versions of code, speaking only very old 
 versions of protocols, pretty much forever.  In such a world, newer devices 
 either need to shield their older brethren from the sad realities or 
 relegate them to low-risk activities by refusing to engage in high-risk 
 transactions with them.  It's by no means clear how one would do this, but 
 there really aren't any other realistic alternatives.



-- 
Those who would give up Essential Liberty to purchase a little
Temporary Safety deserve neither  Liberty nor Safety.
-- Benjamin Franklin
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Crypto Standards v.s. Engineering habits - Was: NIST about to weaken SHA3?

2013-10-09 Thread John Kelsey
On Oct 8, 2013, at 4:46 PM, Bill Frantz fra...@pwpconsult.com wrote:

 I think the situation is much more serious than this comment makes it appear. 
 As professionals, we have an obligation to share our knowledge of the limits 
 of our technology with the people who are depending on it. We know that all 
 crypto standards which are 15 years old or older are obsolete, not 
 recommended for current use, or outright dangerous. We don't know of any way 
 to avoid this problem in the future.

We know how to address one part of this problem--choose only algorithms whose 
design strength is large enough that there's no relatively close-by time 
when the algorithms will need to be swapped out.  That's not all that big a 
problem now--if you use, say, AES256 and SHA512 and ECC over P521, then even in 
the far future, your users need only fear cryptanalysis, not Moore's Law.  
Really, even with 128-bit security level primitives, it will be a very long 
time until brute-force attacks are a concern.  
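
(The arithmetic behind that claim, as a back-of-the-envelope check; the
attacker rate is an arbitrary assumption chosen to be generous.)

# Even an adversary testing 10**18 keys per second needs on the order of
# 10**13 years to sweep a 2**128 keyspace.  Rate chosen purely for illustration.
SECONDS_PER_YEAR = 60 * 60 * 24 * 365
rate = 10**18                                  # keys per second (assumed)
years = 2**128 / rate / SECONDS_PER_YEAR
print(f"{years:.2e} years")                    # ~ 1.1e+13 years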

This is actually one thing we're kind-of on the road to doing right in 
standards now--we're moving away from barely-strong-enough crypto and toward 
crypto that's going to be strong for a long time to come. 

Protocol attacks are harder, because while we can choose a key length, modulus 
size, or sponge capacity to support a known security level, it's not so easy to 
make sure that a protocol doesn't have some kind of attack in it.  

I think we've learned a lot about what can go wrong with protocols, and we can 
design them to be more ironclad than in the past, but we still can't guarantee 
we won't need to upgrade.  But I think this is an area that would be 
interesting to explore--what would need to happen in order to get more ironclad 
protocols?  A couple random thoughts:

a.  Layering secure protocols on top of one another might provide some 
redundancy, so that a flaw in one didn't undermine the security of the whole 
system.  

b.  There are some principles we can apply that will make protocols harder to 
attack, like encrypt-then-MAC (to eliminate reaction attacks), nothing is 
allowed to change its execution path or timing based on the key or 
plaintext, every message includes a sequence number and the hash of the 
previous message (see the sketch below), etc.  This won't eliminate protocol 
attacks, but will make them less common.

c.  We could try to treat at least some kinds of protocols more like crypto 
algorithms, and expect to have them widely vetted before use.  

What else?  
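
(A minimal sketch of the sequence-number-plus-previous-hash idea from point
(b) above; the record layout and the use of SHA-256 are assumptions made for
illustration.)

# Sketch of chained protocol records: each record carries its sequence number
# and the hash of the previous record, so dropping, reordering, or splicing
# messages breaks the chain.  Layout and hash choice are illustrative.
import hashlib

def make_record(seq: int, prev_hash: bytes, payload: bytes) -> bytes:
    return seq.to_bytes(8, "big") + prev_hash + payload

def verify_chain(records) -> bool:
    prev = b"\x00" * 32                        # agreed starting value
    for expected_seq, rec in enumerate(records):
        seq, linked = int.from_bytes(rec[:8], "big"), rec[8:40]
        if seq != expected_seq or linked != prev:
            return False
        prev = hashlib.sha256(rec).digest()
    return True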

 ...
 Perhaps the shortest limit on the lifetime of an embedded system is the 
 security protocol, and not the hardware. If so, how do we as a society deal 
 with this limit?

What we really need is some way to enforce protocol upgrades over time.  
Ideally, there would be some notion that if you support version X of the 
protocol, this meant that you would not support any version lower than, say, 
X-2.  But I'm not sure how practical that is.  
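
(Stated as code, the policy is trivial -- the hard part is deployment. A
sketch with invented names, assuming a two-version window:)

# Sketch of "supporting version X implies refusing anything below X-2".
# The version number and window width are invented for illustration.
HIGHEST_SUPPORTED = 12            # hypothetical protocol version we implement

def acceptable_version(peer_version: int, window: int = 2) -> bool:
    # Refuse peers more than `window` versions behind what we support.
    return HIGHEST_SUPPORTED - window <= peer_version <= HIGHEST_SUPPORTED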

 Cheers -- Bill

--John
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography