Re: Destroying confidential information from database

2009-05-02 Thread Peter Gutmann
Sandy Harris sandyinch...@gmail.com writes:

Yes, but that paper is over ten years old. In the meanwhile, disk designs and
perhaps encoding schemes have changed, journaling file systems have become
much more common and, for all I know the attack technology may have changed
too.

It's nearly fifteen years old (it was written in 1995, when the very first
PRML drives were just starting to appear; there's a reference in there to a
Quantum whitepaper published the same year) and refers to technology from the
early 1990s (and leftover stuff from the late 1980s, which was still around at
the time).  I've had an epilogue attached to the paper for, oh, at least ten
of those fifteen years saying:

  In the time since this paper was published, some people have treated the 35-
  pass overwrite technique described in it more as a kind of voodoo
  incantation to banish evil spirits than the result of a technical analysis
  of drive encoding techniques.  As a result, they advocate applying the
  voodoo to PRML and EPRML drives even though it will have no more effect than
  a simple scrubbing with random data.  In fact performing the full 35-pass
  overwrite is pointless for any drive since it targets a blend of scenarios
  involving all types of (normally-used) encoding technology, which covers
  everything back to 30+-year-old MFM methods (if you don't understand that
  statement, re-read the paper).  If you're using a drive which uses encoding
  technology X, you only need to perform the passes specific to X, and you
  never need to perform all 35 passes.  For any modern PRML/EPRML drive, a few
  passes of random scrubbing is the best you can do.  As the paper says, "A
  good scrubbing with random data will do about as well as can be expected."
  This was true in 1996, and is still true now.

  Looking at this from the other point of view, with the ever-increasing data
  density on disk platters and a corresponding reduction in feature size and
  use of exotic techniques to record data on the medium, it's unlikely that
  anything can be recovered from any recent drive except perhaps a single
  level via basic error-cancelling techniques.  In particular the drives in
  use at the time that this paper was originally written have mostly fallen
  out of use, so the methods that applied specifically to the older, lower-
  density technology don't apply any more.  Conversely, with modern high-
  density drives, even if you've got 10KB of sensitive data on a drive and
  can't erase it with 100% certainty, the chances of an adversary being able
  to find the erased traces of that 10KB in 80GB of other erased traces are
  close to zero.

(the second paragraph is slightly newer than the first one).  The reason why I
haven't updated the paper is that there really isn't much more to say than
what's in those two paragraphs: EPRML and perpendicular recording are nothing
like the technology that the paper discusses, for these more modern techniques
a good scrubbing is about the best you can do, and you have to balance the
amount of effort you're prepared to expend with the likelihood of anyone even
trying to pull 10kB of data from a (well, at the time 80GB was the largest
drive, today 1TB) drive.  I made the paper as forward-looking as I could with
the information available at the time (i.e. projection to PRML/EPRML read
channels and so on in the original paper), but didn't realise that people
would skip that bit and just religiously quote the same old stuff fifteen
years later.
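
As a concrete illustration of the "few passes of random scrubbing" approach
(a hypothetical sketch, not something from the paper or this thread), the
following overwrites a file with fresh random data a few times before
unlinking it.  Note that journaling filesystems, flash wear-levelling, and
remapped sectors can all leave copies that an in-place overwrite never
touches, which is the sort of caveat raised earlier in the thread.

import os

def scrub_file(path, passes=3, block=1024 * 1024):
    """Overwrite 'path' in place with random data, then delete it."""
    size = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            remaining = size
            while remaining > 0:
                chunk = min(block, remaining)
                f.write(os.urandom(chunk))   # random data, not fixed patterns
                remaining -= chunk
            f.flush()
            os.fsync(f.fileno())             # push each pass out to the device
    os.remove(path)

# scrub_file("/tmp/sensitive.db")   # hypothetical path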

(I've been working on a talk, "Defending Where the Attacker Isn't", where I
look at this sort of thing; in some areas like password best practices this
phenomenon is even more pronounced, because organisations are religiously
following best practices designed to defend shared mainframes connected to
029 keypunches and model 33 teletypes.  I hope the data erasure thing doesn't
follow the same lifecycle :-).

Peter.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: SHA-1 collisions now at 2^{52}?

2009-05-02 Thread Peter Gutmann
Perry E. Metzger pe...@piermont.com writes:
Greg Rose g...@qualcomm.com writes:
 It already wasn't theoretical... if you know what I mean. The writing
 has been on the wall since Wang's attacks four years ago.

Sure, but this should light a fire under people for things like TLS 1.2.

Why?

Seriously, what threat does this pose to TLS 1.1 (which uses HMAC-SHA1 and
SHA-1/MD5 dual hashes)?  Do you think the phishers will even notice this as
they sort their multi-gigabyte databases of stolen credentials?
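
(For reference, a minimal sketch -- not from this thread -- of one place the
SHA-1/MD5 combination appears: the TLS 1.0/1.1 PRF from RFC 4346, which XORs
an MD5-based and a SHA-1-based HMAC expansion computed over the two halves of
the secret, so a weakness in either hash alone doesn't defeat it.)

import hmac

def p_hash(hash_name, secret, seed, length):
    """P_hash data expansion from RFC 4346, section 5."""
    out = b""
    a = seed                                              # A(0) = seed
    while len(out) < length:
        a = hmac.new(secret, a, hash_name).digest()       # A(i) = HMAC(secret, A(i-1))
        out += hmac.new(secret, a + seed, hash_name).digest()
    return out[:length]

def tls11_prf(secret, label, seed, length):
    """PRF(secret, label, seed) = P_MD5(S1, label+seed) XOR P_SHA1(S2, label+seed)."""
    half = (len(secret) + 1) // 2
    s1, s2 = secret[:half], secret[-half:]                # halves overlap by one byte if odd
    ls = label + seed
    md5_part = p_hash("md5", s1, ls, length)
    sha1_part = p_hash("sha1", s2, ls, length)
    return bytes(x ^ y for x, y in zip(md5_part, sha1_part))

# Example: derive 48 bytes of key block material (inputs are placeholders).
key_block = tls11_prf(b"\x0b" * 48, b"key expansion", b"client+server randoms", 48)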

The problem with TLS 1.2 is that it completely breaks backwards compatibility
with existing versions; it's an even bigger break than the SSL -> TLS
changeover was.  If you want something to incentivise vendors to break
compatibility with the entire deployed infrastructure of TLS devices, the
attack had better be something pretty close to O(1), preferably with
deployed malware already exploiting it.

Ten years ago you may have been able to do this sort of thing because it was
cool and the geeks were in charge, but today with a deployed base of several
billion devices (computers, cellphones, routers, printers, you name it) the
economists are in charge, not the cryptographers, and if you do the sums TLS
1.2 doesn't make business sense.  It may be geeky-cool to make the change, but
geeky-cool isn't going to persuade (say) Linksys to implement TLS 1.2 on their
home routers.

(I can't believe I just said that :-).

Peter.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Has any public CA ever had their certificate revoked?

2009-05-02 Thread Peter Gutmann
Subject says it all, does anyone know of a public, commercial CA (meaning one
baked into a browser or the OS, including any sub-CAs hanging off the roots)
ever having their certificate revoked?  An ongoing private poll hasn't turned
up anything, but perhaps others know of instances where this occurred.

Peter.

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: SHA-1 collisions now at 2^{52}?

2009-05-02 Thread Perry E. Metzger

Peter Gutmann pgut...@cs.auckland.ac.nz writes:
 Perry E. Metzger pe...@piermont.com writes:
Greg Rose g...@qualcomm.com writes:
 It already wasn't theoretical... if you know what I mean. The writing
 has been on the wall since Wang's attacks four years ago.

Sure, but this should light a fire under people for things like TLS 1.2.

 Why?

 Seriously, what threat does this pose to TLS 1.1 (which uses HMAC-SHA1 and
 SHA-1/MD5 dual hashes)?

No immediate threat. The issue is that attacks only get better with
time. Now that we've seen this set of attacks, we can't be entirely sure
what will happen next. In three or five years, we may find that
HMAC-SHA1 is more easily attacked than it is now.

On the 1.2 issue, the real point of 1.2 is not to replace SHA-1 per se
but to permit us to deal with the situation where *any* algorithm proves
to be dangerously weak. We've learned this lesson several times now --
it is best to have protocols that can move to new crypto algorithms as
old ones need to be abandoned.
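
A minimal sketch (an illustration, not anything from this thread) of what that
kind of algorithm agility looks like in practice: tag every digest with the
algorithm that produced it, and keep a registry of currently acceptable
algorithms, so a weakened hash can be retired without changing the format
around it.

import hashlib

# Registry of currently acceptable hash algorithms; entries are removed as
# algorithms are deprecated.
APPROVED = {"sha256", "sha384", "sha512"}

def agile_digest(data: bytes, alg: str = "sha256") -> str:
    """Return an algorithm-tagged digest, e.g. 'sha256:ab12...'."""
    if alg not in APPROVED:
        raise ValueError(f"{alg} is not an approved algorithm")
    return f"{alg}:{hashlib.new(alg, data).hexdigest()}"

def verify(data: bytes, tagged: str) -> bool:
    """Reject digests made with retired algorithms (e.g. 'sha1:...')."""
    alg, _, hexdigest = tagged.partition(":")
    if alg not in APPROVED:
        return False
    return hashlib.new(alg, data).hexdigest() == hexdigest

tag = agile_digest(b"firmware image")
assert verify(b"firmware image", tag)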

Note that I said things like TLS -- TLS is not the only issue. There
are many out there. There is no need to panic over any one of them, but
it would be good to get things replaced.

Right now, without much of a rush or any real anxiety about it we can
take the several years needed to move new mechanisms out. If we dither,
then in a few years we may find ourselves having a much less pleasant
transition where suddenly the problem isn't long term but immediate.

 Do you think the phishers will even notice this as they sort their
 multi-gigabyte databases of stolen credentials?

No, they clearly won't notice at all. However, let's broaden this and
consider not only phishermen but all attackers.

Remember, attackers go for the lowest hanging fruit, not for any
particular technique. They pick the weakest links available. The reason
bad crypto has not been an attack point is because other things have
been much easier to attack than the crypto. I would prefer to keep it
that way.

My worry isn't about the phishermen per se. My worry is about things we
haven't thought about -- tricks like the CA forgery trick lying in wait
for us. There are more and more things out there that depend on the
crypto being right -- things like signed software updates, people who
actually *need* authentication for life critical systems, etc. If we
clean things up now, in three or five or seven years we won't have to
rush.

There is no need to panic, but clearly the handwriting is on the
wall. The time to act is early when it is inexpensive to do so.

 It may be geeky-cool to make the change, but geeky-cool isn't going to
 persuade (say) Linksys to implement TLS 1.2 on their home routers.

 (I can't believe I just said that :-).

Home routers and other equipment last for years. If we slowly roll out
various protocol and system updates now, then in a number of years, when
we find ourselves with real trouble, a lot of them will already be
updated because new ones won't have issues. If we wait until things get
bad, then instead of being a natural part of the upgrade cycle things
get to be expensive and painful.

Perry
-- 
Perry E. Metzger  pe...@piermont.com

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: SHA-1 collisions now at 2^{52}?

2009-05-02 Thread Eric Rescorla
At Sat, 02 May 2009 21:53:40 +1200,
Peter Gutmann wrote:
 
 Perry E. Metzger pe...@piermont.com writes:
 Greg Rose g...@qualcomm.com writes:
  It already wasn't theoretical... if you know what I mean. The writing
  has been on the wall since Wang's attacks four years ago.
 
 Sure, but this should light a fire under people for things like TLS 1.2.
 
 Why?
 
 Seriously, what threat does this pose to TLS 1.1 (which uses HMAC-SHA1 and
 SHA-1/MD5 dual hashes)?  Do you think the phishers will even notice this as
 they sort their multi-gigabyte databases of stolen credentials?

Again, I don't want to get into a long argument with Peter about TLS 1.1 vs.
TLS 1.2, but TLS 1.2 also defines an extension that lets the client tell
the server that it would take a SHA-256 certificate. Absent that, it's
not clear how the server would know. 
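
A sketch of the mechanism being referred to (assuming it's the TLS 1.2
signature_algorithms extension, RFC 5246 section 7.4.1.4.1): the client sends
a list of (hash, signature) pairs it can verify, which is how a server could
learn that a SHA-256-signed certificate chain is acceptable.

import struct

HASH = {"sha1": 2, "sha256": 4}      # HashAlgorithm code points
SIG = {"rsa": 1, "ecdsa": 3}         # SignatureAlgorithm code points

def signature_algorithms_extension(pairs):
    """Encode the extension: type (13) | extension length | list length | pairs."""
    body = b"".join(struct.pack("!BB", HASH[h], SIG[s]) for h, s in pairs)
    return struct.pack("!HHH", 13, len(body) + 2, len(body)) + body

ext = signature_algorithms_extension([("sha256", "rsa"), ("sha1", "rsa")])
print(ext.hex())    # 000d0006000404010201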

Of course, you could use that extension with 1.1 and maybe that's what the
market will decide...

-Ekr





-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: SHA-1 collisions now at 2^{52}?

2009-05-02 Thread Matt Blaze


On May 2, 2009, at 5:53, Peter Gutmann wrote:


Perry E. Metzger pe...@piermont.com writes:

Greg Rose g...@qualcomm.com writes:
It already wasn't theoretical... if you know what I mean. The writing
has been on the wall since Wang's attacks four years ago.

Sure, but this should light a fire under people for things like TLS 1.2.

Why?

Seriously, what threat does this pose to TLS 1.1 (which uses HMAC-SHA1 and
SHA-1/MD5 dual hashes)?  Do you think the phishers will even notice this as
they sort their multi-gigabyte databases of stolen credentials?

[snip]

I must admit I don't understand this line of reasoning (not to pick
on Perry, Greg, or Peter, all of whom have a high level of
crypto-clue and who certainly understand protocol design).

The serious concern here seems to me not to be that this particular
weakness is a "last straw" wedge that enables some practical attack
against some particular protocol -- maybe it is and maybe it isn't.
What worries me is that SHA-1 has been demonstrated to not have a
property -- infeasible to find collisions -- that protocol designers
might have relied on it for.
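
(A small numeric aside, not Matt's: the gap between the generic collision
bound for a 160-bit hash and the 2^52 figure in the subject line is what
makes the property violation concrete.)

import math

n_bits = 160
generic = 2 ** (n_bits / 2)     # birthday bound: ~2^80 evaluations for a brute-force collision
reported = 2 ** 52              # the attack cost discussed in this thread

print(f"generic birthday bound: 2^{math.log2(generic):.0f}")
print(f"reported attack cost:   2^{math.log2(reported):.0f}")
print(f"speed-up:               2^{math.log2(generic / reported):.0f}")   # ~2^28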

Security proofs become invalid when an underlying assumption is
shown to be invalid, which is what has happened here to many
fielded protocols that use SHA-1. Some of these protocols may well
still be secure in practice even under degraded assumptions, but to
find out, we'd have to analyze them again.  And that's a non-trivial
task that as far as I know has not been done yet (perhaps I'm wrong
and it has).  "They'll never figure out how to exploit it" is not,
sadly, a security proof.

Any attack that violates basic properties of a crypto primitive
is a serious problem for anyone relying on it, pretty much by
definition.

-matt

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: SHA-1 collisions now at 2^{52}?

2009-05-02 Thread Eric Rescorla
At Sat, 2 May 2009 15:00:36 -0400,
Matt Blaze wrote:
 The serious concern here seems to me not to be that this particular
 weakness is a "last straw" wedge that enables some practical attack
 against some particular protocol -- maybe it is and maybe it isn't.
 What worries me is that SHA-1 has been demonstrated to not have a
 property -- infeasible to find collisions -- that protocol designers
 might have relied on it for.
 
 Security proofs become invalid when an underlying assumption is
 shown to be invalid, which is what has happened here to many
 fielded protocols that use SHA-1. Some of these protocols may well
 still be secure in practice even under degraded assumptions, but to
 find out, we'd have to analyze them again.  And that's a non-trivial
 task that as far as I know has not been done yet (perhaps I'm wrong
 and it has).  "They'll never figure out how to exploit it" is not,
 sadly, a security proof.

Without suggesting that collision-resistance isn't an important property,
I'd observe that we don't have anything like a reduction proof of
full TLS, or, AFAIK, any of the major security protocols in production
use. Really, we don't even have a good analysis of the implications
of relaxing any of the (soft) assumptions people have made about
the security of various primitives (though see [1] and [2] for some
handwaving analysis).

It's not clear this should make you feel any better when a primitive is
weakened, but then you probably shouldn't have felt that great to start
with.

-Ekr



[1] http://www.rtfm.com/dimacs.pdf 
[2] http://www.cs.columbia.edu/~smb/papers/new-hash.pdf


-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com


Re: [tahoe-dev] SHA-1 broken!

2009-05-02 Thread Jon Callas


It also is not going to be trivial to do this -- but it is now in the
realm of possibility.



I'm not being entirely a smartass when I say that it's always in the
realm of possibility. The nominal probability for SHA-1 -- either 2^-80
or 2^-160 depending on context -- is a positive number. It's small, but
it's always possible.


The recent case of cert collisions happened because of two errors,  
hash problems and sequential serial numbers. If either had been  
corrected, the problem wouldn't have happened.
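
A hedged sketch (an illustration, not Jon's) of the serial-number half of that
pair of errors: a chosen-prefix collision against a CA needs the attacker to
predict the exact bytes the CA will sign, and random serial numbers take that
prediction away even while the hash stays weak.

import secrets

def next_serial_sequential(counter):
    """Predictable serial numbers make the to-be-signed bytes guessable."""
    return counter + 1

def next_serial_random(bits=64):
    """Unpredictable serial of fixed length (top bit forced to 1)."""
    return secrets.randbits(bits) | (1 << (bits - 1))

print(next_serial_sequential(4096))     # 4097 -- easy to guess
print(hex(next_serial_random()))        # different every run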


I liken it, by analogy, to a fender-bender that happened because the
person responsible had both worn-out brakes (an easily-fixable
technological problem) and was tailgating (an easily-fixable
suboptimal operational policy). It's a mistake to blame the wreck on
either alone. It's enlightening to point out that either a good policy or a
more timely upgrade schedule would have prevented the problem.


The problem right now is not that MD5, SHA1, etc. are broken. It is
that they are broken in ways that you have to be an expert to
understand, and that even the experts get into entertaining debates about.
Any operational expert worth their salt should run screaming from a
technology whose flaws the boffins debate over dinner.


Jon

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to majord...@metzdowd.com