[Cryptography] please dont weaken pre-image resistance of SHA3 (Re: NIST about to weaken SHA3?)

2013-10-14 Thread Adam Back

On Tue, Oct 01, 2013 at 12:47:56PM -0400, John Kelsey wrote:

The actual technical question is whether an across the board 128 bit
security level is sufficient for a hash function with a 256 bit output. 
This weakens the proposed SHA3-256 relative to SHA256 in preimage
resistance, where SHA256 is expected to provide 256 bits of preimage
resistance.  If you think that 256 bit hash functions (which are normally
used to achieve a 128 bit security level) should guarantee 256 bits of
preimage resistance, then you should oppose the plan to reduce the
capacity to 256 bits.  


I think hash functions clearly should try to offer full (256-bit) preimage
security, not dumb it down to match 128-bit birthday collision resistance.
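
(To put concrete numbers on those security levels, here is a minimal sketch,
my illustration rather than anything from the SHA3 draft, of the generic
attack costs for an ideal n-bit hash, with the sponge capacity cap applied to
preimage search:)

    # Rough generic-attack costs.  Illustrative only; the precise Keccak/SHA3
    # security claims live in the submission documents, not here.
    def generic_costs(n_bits, capacity=None):
        collision = 2 ** (n_bits // 2)      # birthday bound
        preimage = 2 ** n_bits              # brute force over the output
        if capacity is not None:
            # cap preimage search at the sponge's generic ~2^(c/2) limit
            preimage = min(preimage, 2 ** (capacity // 2))
        return collision, preimage

    print(generic_costs(256))                # ideal 256-bit hash: 2^128 / 2^256
    print(generic_costs(256, capacity=256))  # proposed SHA3-256:  2^128 / 2^128
    print(generic_costs(256, capacity=512))  # original c=512 Keccak-256: 2^128 / 2^256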

All other common hash functions have aimed for full preimage security, so
varying an otherwise standard assumption will lead to design confusion.
It will probably interact badly with many existing KDF, MAC, merkle-tree
designs, combined cipher+integrity modes, and hashcash (partial preimage as
used in bitcoin as a proof of work): constructions that use a hash in a
generic way as a building block and assume it has full-length pre-image
protection.  Maybe some of those generic designs survive because they
compose multiple iterations, eg HMAC, but why create the work and risk of
having to analyse them all, remove them from implementations, or mark them
as safe for all hashes except SHA3 as an exception?
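
(As one concrete example of such a generic building-block use, here is a
minimal hashcash-style partial-preimage proof of work; this is a sketch of
the idea, not bitcoin's actual block-header format.  Its whole security
argument is that even a partial preimage should cost about 2^bits hash
calls:)

    import hashlib, os

    def leading_zero_bits(digest: bytes) -> int:
        # number of leading zero bits in a 256-bit digest
        return 256 - int.from_bytes(digest, "big").bit_length()

    def mint(challenge: bytes, bits: int) -> int:
        # find a nonce so that SHA-256(challenge || nonce) starts with
        # `bits` zero bits; expected work is about 2**bits hash calls
        nonce = 0
        while True:
            d = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
            if leading_zero_bits(d) >= bits:
                return nonce
            nonce += 1

    nonce = mint(os.urandom(16), 16)   # ~2^16 tries here; bitcoin is ~2^60+
    print(nonce)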

If MD5 had 64-bit preimage resistance, we'd be looking at preimages right now
being expensive but computable.  Bitcoin is pushing a 60-bit hashcash-sha256
partial preimage every 10 minutes (1.7 petahash/sec network hashrate).
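
(A quick sanity check of that figure, assuming a steady 1.7 PH/s over an
average 10-minute block:)

    import math

    hashrate = 1.7e15            # hashes per second (network estimate, Oct 2013)
    block_interval = 600         # seconds per block, on average
    print(math.log2(hashrate * block_interval))   # ~59.8, i.e. roughly 60 bits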

Now obviously 128 bits is another scale, but MD5 is old and broken, and there
may be partial weakenings along the way: eg say the design aim of 128 bits
slips towards 80 (in another couple of decades of computing progress).  Why
design in a problem for the future when we KNOW, and just spent a huge thread
on this list discussing, that it's very hard to remove or upgrade algorithms
once deployed?  Even MD5 is still in the field.

Is there a clear work-around proposed for when you do need 256 bits?  (Some
composition mode or parameter tweak that is part of the spec?)  And generally,
where does one go to add one's vote to the protest against weakening the
2nd-preimage property?

Adam
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] prism-proof email in the degenerate case

2013-10-14 Thread Nicolas Rachinsky
* John Denker j...@av8n.com [2013-10-10 17:13 -0700]:
 *) Each server should publish a public key for /dev/null so that
  users can send cover traffic upstream to the server, without
  worrying that it might waste downstream bandwidth.
 
  This is crucial for deniabililty:  If the rubber-hose guy accuses
  me of replying to ABC during the XYZ crisis, I can just shrug and 
  say it was cover traffic.

If the server deletes cover traffic, the NSA just needs to subscribe to the
list.  Then any message you sent upstream that never shows up on the list is
identifiable as cover traffic, so the "it was just cover traffic" defence
doesn't hold up.
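
(The observation is just set subtraction; a hypothetical illustration with
made-up message IDs, assuming the adversary already watches your uplink,
which is the rubber-hose scenario above:)

    # Everything you pushed upstream to the server.
    sent_upstream = {"msg1", "msg2", "msg3", "msg4"}
    # Everything that later appeared on the list, which the adversary sees
    # simply by subscribing.
    delivered_on_list = {"msg1", "msg3"}

    # If the server silently drops cover traffic, the difference exposes it.
    cover_traffic = sent_upstream - delivered_on_list
    print(cover_traffic)   # {'msg2', 'msg4'}; the rest was evidently real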

Nicolas

-- 
http://www.rachinsky.de/nicolas
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] funding Tor development

2013-10-14 Thread Eugen Leitl

Guys, in order to minimize the Tor Project's dependence on
federal funding and/or increase what they can do, it
would be great to have some additional funding of ~10 kUSD/month.

If anyone is aware of anyone who can provide funding at
that level or higher, please contact exec...@torproject.org

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] please dont weaken pre-image resistance of SHA3 (Re: NIST about to weaken SHA3?)

2013-10-14 Thread John Kelsey
Adam,

I guess I should preface this by saying I am speaking only for myself.  That's 
always true here--it's why I'm using my personal email address.  But in 
particular, right now, I'm not *allowed* to work.  But just speaking my own 
personal take on things...

We got pretty *overwhelming* feedback in this direction in the last three
weeks.  (For the previous several months, we got almost no feedback about it
at all, despite giving presentations and posting stuff on the hash forum
about our plans.)
 But since we're shut down right now, we can't actually make any decisions or 
changes.  This is really frustrating on all kinds of levels.

Personally, I have looked at the technical arguments against the change and I 
don't really find any of them very convincing, for reasons I described at some 
length on the hash forum list, and that the Keccak designers also laid out in 
their post.  The core of that is that an attacker who can't do 2^{128} work 
can't do anything at all to SHA3 with a 256 bit capacity that he couldn't also 
do to SHA3 with a 512 bit capacity, including finding preimages.  
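
(For reference, here is the generic sponge bound behind that claim, as I
understand the Keccak team's "flat sponge claim"; the paraphrase is mine, not
part of John's message.  After N calls to the underlying permutation, any
generic attack on a sponge with capacity c succeeds with probability at most
roughly

    \[
      \varepsilon_{\mathrm{generic}}(N) \;\lesssim\; \frac{N^2}{2^{c+1}},
      \qquad\text{so constant success probability needs } N \approx 2^{c/2}.
    \]

With c = 256 that threshold is 2^{128} work, below which the c = 256 and
c = 512 variants are generically indistinguishable, preimage search included.)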

But there's pretty much zero chance that we're going to put a standard out that 
most of the crypto community is uncomfortable with.  The normal process for a 
FIPS is that we would put out a draft and get 60 or 90 days of public comments. 
 As long as this issue is on the table, it's pretty obvious what the public 
comments would all be about.  

The place to go for current comments, if you think more are necessary, is the 
hash forum list.  The mailing list is still working, but I think both the 
archives and the process of being added to the list are frozen thanks to the 
shutdown.  I haven't looked at the hash forum since we shut down, so when we 
get back there will be a flood of comments there.  The last I saw, the Keccak 
designers had their own proposal for changing what we put into the FIPS, but I 
don't know what people think about their proposal. 

--John, definitely speaking only for myself
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] Broken RNG renders gov't-issued smartcards easily hackable.

2013-10-14 Thread Jerry Leichter
On Oct 13, 2013, at 1:04 PM, Ray Dillinger wrote:
 This is despite meeting (for some inscrutable definition of meeting)
 FIPS 140-2 Level 2 and Common Criteria standards.  These standards
 require steps that were clearly not done here.  Yet, validation
 certificates were issued.
 
 This is a misunderstanding of the CC certification and FIPS validation 
 processes:
 
 the certificates were issued *under the condition* that the software/system 
 built on it uses/implements the RNG tests mandated. The software didn't, 
 invalidating the results of the certifications.
 
 Either way, it boils down to tests were supposed to be done or conditions
 were supposed to be met, and producing the darn cards with those 
 certifications
 asserted amounts to stating outright that they were, and yet they were not.
 
 All you're saying here is that the certifying agencies are not the ones
 stating outright that the tests were done.
How could they?  The certification has to stop at some point; it can't trace 
the systems all the way to end users.  What was certified was a box that would 
work a certain way given certain conditions.  The box was used in a different 
way.  Why is it surprising that the certification was useless?  Let's consider 
a simple encryption box:  Key goes in top, cleartext goes in left; ciphertext 
comes out right.  There's an implicit assumption that you don't simply discard 
the ciphertext and send the plaintext on to the next subsystem in line.  No 
certification can possibly check that; or that, say, you don't post all your 
keys on your website immediately after generating them.

  I can accept that, but it does
 not change the situation or result, except perhaps in terms of the placement
 of blame. I *still* hope they bill the people responsible for doing the tests
 on the first generation of cards for the cost of their replacement.
That depends on what they were supposed to test, and whether they did test that 
correctly.  A FIPS/Common Criteria Certification is handed a box implementing 
the protocol and a whole bunch of paperwork describing how it's designed, how 
it works internally, and how it's intended to be used.  If it passes, what 
passes is the exact design certified, used as described.  There are way too 
many possible systems built out of certified modules for it to be reasonable to 
expect the certification to encompass them all.

I will remark that, having been involved in one certification effort, I think 
they offer little, especially for software - they get at some reasonable issues 
for hardware designs.  Still, we don't currently have much of anything better.  
Hundreds of eyeballs may have been on the Linux code, but we still ended up 
fielding a system with a completely crippled RNG and not noticing for months.  
Even so, if you expect the impossible from a process, you make any improvement 
impossible.  Formal verification, where possible, can be very powerful - but it 
will also have to focus on some well-defined subsystem, and all the effort will 
be wasted if the subsystem is used in a way that doesn't meet the necessary 
constraints.
-- Jerry

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] please dont weaken pre-image resistance of SHA3 (Re: NIST about to weaken SHA3?)

2013-10-14 Thread ianG

On 14/10/13 17:51 PM, Adam Back wrote:

On Tue, Oct 01, 2013 at 12:47:56PM -0400, John Kelsey wrote:

The actual technical question is whether an across the board 128 bit
security level is sufficient for a hash function with a 256 bit
output. This weakens the proposed SHA3-256 relative to SHA256 in preimage
resistance, where SHA256 is expected to provide 256 bits of preimage
resistance.  If you think that 256 bit hash functions (which are normally
used to achieve a 128 bit security level) should guarantee 256 bits of
preimage resistance, then you should oppose the plan to reduce the
capacity to 256 bits.


I think hash functions clearly should try to offer full (256-bit) preimage
security, not dumb it down to match 128-bit birthday collision resistance.

All other common hash functions have aimed for full preimage security, so
varying an otherwise standard assumption will lead to design confusion.
It will probably interact badly with many existing KDF, MAC, merkle-tree
designs, combined cipher+integrity modes, and hashcash (partial preimage as
used in bitcoin as a proof of work): constructions that use a hash in a
generic way as a building block and assume it has full-length pre-image
protection.  Maybe some of those generic designs survive because they
compose multiple iterations, eg HMAC, but why create the work and risk of
having to analyse them all, remove them from implementations, or mark them
as safe for all hashes except SHA3 as an exception?



I tend to look at it differently.  There are ephemeral uses and there 
are long term uses.  For ephemeral uses (like HMACs), 128 bit 
protection is fine.


For long term uses, one should not sign (hash) exactly what the other side
presents (put in a nonce), and one should always keep what was signed
around (or otherwise neuter a hash failure).  Etc.  Either way, one
wants rather longer protection for the long term hash.


That 'time' axis is how I look at it.  Simplistic or simple?

Alternatively, there is the hash cryptographer's outlook, which tends to 
differentiate collisions, preimages, 2nd preimages and lookbacks.


From my perspective the simpler statement of SHA3-256 having 128 bit 
protection across the board is interesting, perhaps it is OK?




If MD5 had 64-bit preimage resistance, we'd be looking at preimages right now
being expensive but computable.  Bitcoin is pushing a 60-bit hashcash-sha256
partial preimage every 10 minutes (1.7 petahash/sec network hashrate).



I might be able to differentiate the preimage / collision / 2nd-preimage stuff
here if I thought about it for a long time ... but even if I could, I
would have no confidence that I'd got it right.  Or, more importantly,
that my design would get it right in the future.


And as we're dealing with money, I'd *want to get it right*.  I'd 
actually be somewhat happier if the hash had a clear number of 128.




Now obviously 128 bits is another scale, but MD5 is old and broken, and there
may be partial weakenings along the way: eg say the design aim of 128 bits
slips towards 80 (in another couple of decades of computing progress).  Why
design in a problem for the future when we KNOW, and just spent a huge thread
on this list discussing, that it's very hard to remove or upgrade algorithms
once deployed?  Even MD5 is still in the field.


Um.  Seems like this argument only works if people drop in SHA3 without 
being aware of the subtle switch in preimage protection, *and* they 
designed for it earlier on.  For my money, let 'em hang.



Is there a clear work-around proposed for when you do need 256 bits?  (Some
composition mode or parameter tweak that is part of the spec?)



Use SHA3-512 or SHA3-384?

What is the preimage protection of SHA3-512 when truncated to 256?  It 
seems that SHA3-384 still gets 256.
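
(For what it's worth, a minimal sketch of that work-around using Python 3.6+
hashlib; whether a truncated SHA3-512 keeps 256-bit preimage resistance under
the proposed parameters is exactly the question being asked, so treat this as
illustration rather than a recommendation:)

    import hashlib

    msg = b"example message"

    # Take 256 bits of SHA3-512 output instead of using the capacity-reduced
    # SHA3-256 ...
    h512_trunc = hashlib.sha3_512(msg).digest()[:32]

    # ... or just use the next size up and keep its whole output.
    h384 = hashlib.sha3_384(msg).digest()

    print(h512_trunc.hex())
    print(h384.hex())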


 And generally,
where does one go to add one's vote to the protest against weakening the
2nd-preimage property?



For now, refer to the Congress of the USA; it's in Washington DC. 
Hopefully, it'll be closed soon too...




iang
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


[Cryptography] /dev/random is not robust

2013-10-14 Thread dj
http://eprint.iacr.org/2013/338.pdf


___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] /dev/random is not robust

2013-10-14 Thread Dan McDonald
On Tue, Oct 15, 2013 at 12:35:13AM -, d...@deadhat.com wrote:
 http://eprint.iacr.org/2013/338.pdf

*LINUX* /dev/random is not robust, so claims the paper.

I wonder how various *BSDs or the Solarish family (Illumos, Oracle Solaris)
hold up under similar scrutiny?

Linux is big, but it is not everything.

Dan
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography


Re: [Cryptography] /dev/random is not robust

2013-10-14 Thread John Gilmore
 http://eprint.iacr.org/2013/338.pdf

I'll be the first to admit that I don't understand this paper.  I'm
just an engineer, not a mathematician.  But it looks to me like the
authors are academics, who create an imaginary construction method for
a random number generator, then prove that /dev/random is not the same
as their method, and then suggest that /dev/random be revised to use
their method, and then show how much faster their method is.  All in
all it seems to be a pitch for their method, not a serious critique of
/dev/random.

They labeled one of their construction methods "robustness", but it
doesn't mean what you think the word means.  It's defined by a mess of
Greek letters like this:

  Theorem 2. Let n > m, ℓ, γ* be integers.  Assume that
  G : {0,1}^m → {0,1}^{n+ℓ} is a deterministic (t, ε_prg)-pseudorandom
  generator.  Let G = (setup, refresh, next) be defined as above.  Then
  G is a ((t', q_D, q_R, q_S), γ*, ε)-robust PRNG with input, where
  t' ≈ t and ε = q_R (2 ε_prg + q_D ε_ext + 2^{-n+1}), as long as
  γ* ≥ m + 2 log(1/ε_ext) + 1 and n ≥ m + 2 log(1/ε_ext) + log(q_D) + 1.

Yeah, what he said!

Nowhere do they seem to show that /dev/random is actually insecure.
What they seem to show is that it does not meet the robustness
criterion that they arbitrarily picked for their own construction.

Their key test is on pages 23-24, and begins with "After a state
compromise, A (the adversary) knows all parameters."  The comparison
STARTS with the idea that the enemy has figured out all of the hidden
internal state of /dev/random.  Then the weakness they point out seems
to be that in some cases of new, incoming randomness with
mis-estimated entropy, /dev/random doesn't necessarily recover over
time from having had its entire internal state somehow compromised.
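
(A toy illustration of that recovery-after-compromise notion; this is my own
sketch, far simpler than either the paper's model or the real Linux pool.  If
the attacker knows the pool state and each refresh mixes in only a couple of
bits of fresh entropy, the attacker can track every candidate state by brute
force, so the pool has not yet recovered:)

    import hashlib

    def refresh(state: bytes, fresh: bytes) -> bytes:
        # toy pool update: hash the old state together with the new input
        return hashlib.sha256(state + fresh).digest()

    state = b"\x00" * 32                        # compromised: attacker knows it
    candidates = {state}                        # attacker's tracking set
    inputs = [bytes([b]) for b in range(4)]     # each refresh adds ~2 bits

    for round_no in range(6):
        state = refresh(state, inputs[round_no % 4])   # the real pool
        candidates = {refresh(c, f) for c in candidates for f in inputs}

    # After 6 refreshes of ~2 bits each the tracking set is only 4**6 = 4096
    # states and still contains the real one: no recovery yet.
    print(len(candidates), state in candidates)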

This is not very close to what "/dev/random is not robust" means in
English.  Nor is it close to what others might assume the paper
claims, e.g. "/dev/random is not safe to use".

John

PS: After attending a few crypto conferences, I realized that
academic pressures tend to encourage people to write incomprehensible
papers, apparently because if nobody reading their paper can
understand it, then they look like geniuses.  But when presenting at
a conference, if nobody in the crowd can understand their slides, then
they look like idiots.  So the key to understanding somebody's
incomprehensible paper is to read their slides and watch their talk,
80% of which is often explanations of the background needed to
understand the gibberish notations they invented in the paper.  I
haven't seen either the slides or the talk relating to this paper.
___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography

Re: [Cryptography] /dev/random is not robust

2013-10-14 Thread James A. Donald

On 2013-10-15 10:35, d...@deadhat.com wrote:

http://eprint.iacr.org/2013/338.pdf


No kidding.

___
The cryptography mailing list
cryptography@metzdowd.com
http://www.metzdowd.com/mailman/listinfo/cryptography