Re: data under one key, was Re: analysis and implementation of LRW

2007-02-05 Thread Allen



Vlad "SATtva" Miller wrote:

Allen wrote on 31.01.2007 01:02:

I'll skip the rest of your excellent and thought-provoking post, as it
concerns the future and I'm looking at the now.

From what you've written and other material I've read, it is clear that
even if the horizon isn't as short as five years, it is certainly
shorter than 70. Given that, it appears what has to be done is the same
as what the audio industry has had to do with 30-year-old master tapes
when it discovered that the binder holding the oxide to the backing was
becoming gummy and shedding the music as the tape played: reconstruct
the data and re-encode it using more up-to-date technology.

I guess we will have grunt jobs for a long time to come. :)


I think you underestimate what Travis said about the assurance of
long-term encrypted data. If an attacker can obtain your ciphertext now
(and it is very likely he can), encrypted with a scheme that isn't
strong over a 70-year horizon, he will be able to break that scheme in
the future, when technology and science allow it, effectively
compromising [part of] your clients' private data despite your efforts
to re-encrypt it later with an improved scheme.

The point is that an encryption scheme for long-term secrets must be
strong from the beginning to the end of the period the data needs to
stay secret.


Imagine this, if you will. You have a disk with encrypted data 
and the key to decrypt it. You can take two paths that I can see:


1. Encrypt the old data and its key with the new, more robust, 
encryption algorithm and key as you migrate it from the now-aged 
HD, which is nearing the end of its lifespan. Then use the 
then-current disk-wiping technology of choice to destroy the old 
data. I think a blast furnace might be a great choice for a long 
time to come.


2. Decrypt the data using the key and re-encrypt it with the new 
algorithm using a new key, then migrate it to a new HD. Afterward 
destroy the old drive/data by your favorite method at the time. I 
still like the blast furnace as the tool of choice. (Both paths 
are sketched below.)
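
Purely as an illustration, a rough sketch of both paths in Python: the 
cryptography package's Fernet construction stands in for whichever old 
and new ciphers are actually involved, and the function names and the 
crude key/ciphertext framing are invented for this sketch, not a 
recommendation.

    # Toy sketch of the two migration paths; key handling and disk
    # wiping are deliberately out of scope.
    from cryptography.fernet import Fernet

    def approach_1_wrap(old_ciphertext, old_key, new_key):
        """Superencrypt: seal the old ciphertext *and* its key under the new key."""
        bundle = old_key + b"\n" + old_ciphertext   # naive framing, illustration only
        return Fernet(new_key).encrypt(bundle)

    def approach_1_unwrap(wrapped, new_key):
        """Peel the outer layer of the onion, then decrypt the inner one."""
        old_key, old_ciphertext = Fernet(new_key).decrypt(wrapped).split(b"\n", 1)
        return Fernet(old_key).decrypt(old_ciphertext)

    def approach_2_reencrypt(old_ciphertext, old_key, new_key):
        """Decrypt with the old key, re-encrypt the plaintext under the new key."""
        plaintext = Fernet(old_key).decrypt(old_ciphertext)
        return Fernet(new_key).encrypt(plaintext)

    old_key, new_key = Fernet.generate_key(), Fernet.generate_key()
    ct = Fernet(old_key).encrypt(b"70-year medical record")
    assert approach_1_unwrap(approach_1_wrap(ct, old_key, new_key), new_key) == b"70-year medical record"
    assert Fernet(new_key).decrypt(approach_2_reencrypt(ct, old_key, new_key)) == b"70-year medical record"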


Both approaches suffer from one defect in common - they assume 
that the old disk you have the data on is the only copy in 
existence, clearly a *bad* idea should you have a catastrophic 
failure of the HD or other storage device. It then also boils 
down to finding all known and unknown copies of the encrypted 
data and securely destroying them as well - not a safe 
assumption, as we know from the history of papers dug up hundreds 
of years after the original appeared to be lost forever.


Approach 1 also suffers from the problem that we may not have the 
software readily available waaay down the road to decrypt the 
many layers of the onion. And that will surely bring tears to our 
eyes.


Since we know that we cannot protect against future developments 
in cryptanalysis - just look at what linear and differential 
cryptanalysis did to earlier designs - how do we create an 
algorithm that is proof against the future? Frankly, I don't 
think it is possible, and storing all those one-time pads is too 
much of a headache, as well as too risky, to bother with. So what 
do we do?


This is where I think we need to set our sights on "good enough 
given what we know now." This does not mean sloppy thinking, just 
that at some point you have done the best humanly possible to 
assess and mitigate risks.


Anyone got better ideas?

Best,

Allen



Re: data under one key, was Re: analysis and implementation of LRW

2007-02-05 Thread Leichter, Jerry
|  Currently I'm dealing 
|  with very large - though not as large as 4 gig - x-ray, MRI, and 
|  similar files that have to be protected for the lifespan of the 
|  person, which could be 70+ years after the medical record is 
|  created. Think of the MRI of a kid to scan for some condition 
|  that may be genetic in origin and has to be monitored and 
|  compared with more recent results their whole life.
| 
| That's longer than computers have been available, and also longer
| than modern cryptography has existed.  The only way I would propose
| to be able to stay secure that long is either:
| 1) use a random key as large as the plaintext (one-time-pad)
...thus illustrating once again both the allure and the uselessness (in
almost all situations) of one-time pads.  Consider:  I have 4 GB of
data that must remain secure.  I'm afraid it may leak out.  So I
generate 4 GB of random bits, XOR them with the data, and now have 4 GB
of data that's fully secure.  I can release it to the world.  The only
problem is ... what do I do with this 4 GB of random pad?  I need
to store *it* securely.  But if I can do that ... why couldn't I
store the 4 GB of original data securely to begin with?

*At most*, if I use different, but as secure as I can make them,
methods for storing *both* 4 GB datasets, then someone would have to
get *both* to make any sense of the data.  In effect, I've broken my
secret into two shares, and only someone who can get both can read it.
I can break it into more shares if I want to - though if I want
information-theoretic security (presumably the goal here, since I'm
worried about future attacks against any technique that relies on
something weaker) each share will end up being the same size as
the data.
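
Concretely, the splitting described here is just XOR with fresh random 
shares; a minimal Python sketch follows (function names invented, 
secrets.token_bytes standing in for whatever random source one would 
actually trust).

    # XOR secret splitting: every share is as large as the data, and
    # all shares are needed to recover it; any subset reveals nothing.
    import secrets

    def split(data, n):
        """Split data into n shares of equal size."""
        shares = [secrets.token_bytes(len(data)) for _ in range(n - 1)]
        last = bytearray(data)
        for share in shares:
            for i, b in enumerate(share):
                last[i] ^= b
        return shares + [bytes(last)]

    def combine(shares):
        """XOR all shares together to recover the original data."""
        out = bytearray(len(shares[0]))
        for share in shares:
            for i, b in enumerate(share):
                out[i] ^= b
        return bytes(out)

    secret = b"stand-in for 4 GB of X-ray data"
    assert combine(split(secret, 3)) == secret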

Of course, the same argument can be made for *any* cryptographic
technique!  The difference is that it seems somewhat easier to protect
a 128-bit key (or some other reasonable length; anything beyond 256
bits is just silly due to fundamental limits on computation:  at 256
bits, either there is an analytic attack - which is just as likely at
2560 bits - or running the entire universe as a computer to do brute
force attacks won't give you the answer soon enough to matter) than a
4 GB one.  It's not easy to make really solid sense of such a
comparison, however, as our ability to store more and more data in
less and less space will keep improving for a couple of generations
more.  When CDs first came out, 600 MB seemed like more than anyone
could imagine using as raw data.  These days, that's not enough RAM to
make a reasonable PC.
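
For what it's worth, a quick back-of-envelope in Python supporting the
brute-force point; the 10**18 guesses per second figure is a
deliberately generous assumption, not a measurement.

    # Even at an absurdly generous guessing rate, exhausting a 128-bit
    # keyspace takes on the order of 10**13 years; 256 bits is far worse.
    guesses_per_second = 10**18          # assumed rate, far beyond real hardware
    seconds_per_year = 3.15e7
    for bits in (128, 256):
        years = 2**bits / guesses_per_second / seconds_per_year
        print(f"{bits}-bit keyspace: ~{years:.1e} years")
    # 128-bit keyspace: ~1.1e+13 years
    # 256-bit keyspace: ~3.7e+51 years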

I would suggest that we look at how such data has traditionally been
kept safe.  We have thousands of years of experience in maintaining
physical security.  That's what we rely on to protect the *last* 70
years' worth of X-ray plates.  In fact, the security on those is pretty
poor - up until a short while ago, when this stuff started to be digital
from birth, at least the last couple of years' worth of X-rays were
sitting in a room in the basement of the hospital.  The room was
certainly locked, but it was hardly a bank vault.  Granted, in digital
form, this stuff is much easier to search, copy, etc. - but I doubt
that a determined individual would really have much trouble getting
copies of most people's medical records.  If nothing else, the combination
of strict hierarchies in hospitals - where the doctor is at the top -
with the genuine need to deal with emergencies makes social engineering
particularly easy.

Anyway ... while the question "how can we keep information secure for
70 years?" has some theoretical interest, we have enough trouble knowing
how to keep digital information *accessible* for even 20 years that it's
hard to know where to reasonably start.  In fact, if you really want to
be sure those X-rays will be readable in 70 years, you're probably best
off today putting them on microfiche or using some similar technology.
Then put the 'fiche in a vault.

-- Jerry



OTP, was Re: data under one key, was Re: analysis and implementation of LRW

2007-02-05 Thread Travis H.
On Sun, Feb 04, 2007 at 11:27:00PM -0500, Leichter, Jerry wrote:
 | 1) use a random key as large as the plaintext (one-time-pad)
 ...thus illustrating once again both the allure and the uselessness (in
 almost all situations) of one-time pads.

For long-term storage, you are correct, OTP at best gives you secret
splitting.  However, if people can get at your stored data, you have
an insider or poor security (network or OS).  Either way, this is not
necessarily a crypto problem.  The system should use conventional
crypto to deal with the data remanence problem, but others have
alleged this is bad or unnecessary or both; I haven't seen it proven
either way.  In any case, keeping the opponent off your systems is
less of a crypto problem than a simple access control problem.

It was my inference that this data must be transmitted around via some
not-very-secure channels, and so the link could be primed by
exchanging key material via registered mail, courier, or whatever
method they felt comfortable with for communicating paper documents
_now_, or whatever system they would use with key material in any
other proposed system.  The advantage isn't magical so much as
practical; you don't have to transmit the pad material every time you
wish to send a message.  You do have to store it securely (see above).
You should compose it with a conventional system, for the best of both
worlds.
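
One concrete reading of "compose it with a conventional system", sketched
below in Python: conventional encryption first, then XOR of the resulting
ciphertext against pre-shared pad material, so an attacker has to beat
both layers.  Fernet stands in for the conventional cipher, and the
function names are invented for this sketch; the pad is generated locally
purely for illustration, where in practice it would be the material
exchanged by courier, with each stretch of pad used only once.

    import secrets
    from cryptography.fernet import Fernet

    def compose_encrypt(plaintext, conv_key, pad):
        # Conventional layer first, then XOR with pad material.
        token = Fernet(conv_key).encrypt(plaintext)
        assert len(pad) >= len(token), "pad must cover the whole conventional ciphertext"
        return bytes(t ^ p for t, p in zip(token, pad))

    def compose_decrypt(ciphertext, conv_key, pad):
        # Strip the pad layer, then decrypt the conventional layer.
        token = bytes(c ^ p for c, p in zip(ciphertext, pad))
        return Fernet(conv_key).decrypt(token)

    conv_key = Fernet.generate_key()
    pad = secrets.token_bytes(4096)          # stand-in for the couriered pad
    msg = b"registered-mail-worthy secret"
    assert compose_decrypt(compose_encrypt(msg, conv_key, pad), conv_key, pad) == msg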

Of course any system can be used incorrectly; disclosing a key or
choosing a bad one can break security in most systems.  So you already
have a requirement for unpredictability and secure storage and
confidential transmission of key material (in the case of symmetric
crypto).  The OTP is the only cipher I know of that hasn't had any
cryptanalytic success against it for over 70 years, and offers a
proof. [1]

As an aside, it would be interesting to compare data capacity/density
and networking speeds to see if it is getting harder or easier to use
OTP to secure a network link.

[1] Cipher meaning discrete symbol-to-symbol encoding.  OTP's proof
does rely on a good RNG.  I am fully aware that unpredictability is
just as slippery a topic as resistance to cryptanalysis, both being
universal statements that can only be disproved by a counterexample, but
that is an engineering or philosophical problem.  By securely
combining it with a CSPRNG you get the least predictable of the pair.
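
A minimal sketch of what "securely combining it with a CSPRNG" could look
like: XOR the raw pad bytes with a ChaCha20 keystream, so the result is no
more predictable than the better of the two sources.  Here os.urandom
stands in for the true-random/hardware source, ChaCha20 is just one choice
of CSPRNG, and the function name is invented.

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms

    def hardened_pad(raw_pad, csprng_key, nonce):
        # Generate a keystream as long as the pad by encrypting zeros,
        # then XOR it with the raw pad material.
        keystream = Cipher(algorithms.ChaCha20(csprng_key, nonce),
                           mode=None).encryptor().update(b"\x00" * len(raw_pad))
        return bytes(a ^ b for a, b in zip(raw_pad, keystream))

    pad = hardened_pad(os.urandom(1024), os.urandom(32), os.urandom(16))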

Everyone in reliable computing understands that you don't want single
points of failure.  If someone proposed that they were going to deploy
a system - any system - that could stay up for 70 years, and it didn't
have any form of backup or redundancy, and no proof that it wouldn't
wear down over 70 years (e.g. it has moving parts, transistors, etc.),
they'd be ridiculed.

And yet every time OTP comes up among cryptographers, the opposite
happens.

When it comes to analysis, absence of evidence is not evidence of
absence.

 Anyway ... while the question "how can we keep information secure for
 70 years?" has some theoretical interest, we have enough trouble knowing
 how to keep digital information *accessible* for even 20 years that it's
 hard to know where to reasonably start.

I think that any long-term data storage solution would have to accept a
few things:

1) The shelf life is a complete unknown.  By the time we know it, we will
be using different media, so don't hold your breath.

2) The best way to ensure being able to read the data is to seal up a
separate instance of the hardware, and to use documented formats so
you know how to interpret them.  Use some redundancy, too, with
tolerance for the kind of errors the media is expected to see.

3) Institutionalize a data refresh policy; have a procedure for
reading the old data off old media, correcting errors, and writing it
to new media (see below).

The trend seems to be that storage capacity is going up much faster
than I/O bandwidth is increasing, and there doesn't seem to be a
fundamental limit on that in the near future, so the data is cooling
rapidly and will continue to do so (in storage jargon, temperature is
related to how often the data is read or written).

Further, tape is virtually dead, and it looks like disk-to-disk is the
most pragmatic replacement.  That actually simplifies things; you can
migrate the data off disks before they near the end of their lifespan
in an automated way (plug in new computer, transfer data over a direct
network connection, drink coffee).  Or even more simply, stagger your
primary and backup storage machines, so that halfway through the MTTF
of the drives you have a new machine with a new set of drives as the
backup; do one backup and swap roles.  Now your data refresh and backup
are handled by the same mechanism.

At least, that's what I'm doing.  YMMV.
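
For what it's worth, a toy sketch of that staggered rotation: two boxes
swap primary/backup roles every MTTF/2, so the refresh and the backup
ride on the same mechanism.  The MTTF, box names, and dates below are
made-up illustration values.

    from datetime import date, timedelta

    MTTF_YEARS = 5                                   # assumed drive MTTF
    SWAP_EVERY = timedelta(days=int(365.25 * MTTF_YEARS / 2))

    def rotation_schedule(start, swaps):
        """List (date, box getting fresh drives, current primary) per swap."""
        boxes = ["box-A", "box-B"]
        out = []
        for i in range(1, swaps + 1):
            refreshed = boxes[i % 2]
            current_primary = boxes[(i + 1) % 2]
            out.append((start + i * SWAP_EVERY, refreshed, current_primary))
        return out

    for when, refreshed, primary in rotation_schedule(date(2007, 2, 5), 4):
        print(f"{when}: put new drives in {refreshed}, copy {primary} onto it, "
              f"then {refreshed} takes over as primary")
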
-- 
The driving force behind innovation is sublimation.
-- URL:http://www.subspacefield.org/~travis/
For a good time on my UBE blacklist, email [EMAIL PROTECTED]




Re: data under one key, was Re: analysis and implementation of LRW

2007-02-04 Thread Vlad "SATtva" Miller
Allen wrote on 31.01.2007 01:02:
 I'll skip the rest of your excellent and thought-provoking post, as it
 concerns the future and I'm looking at the now.

 From what you've written and other material I've read, it is clear that
 even if the horizon isn't as short as five years, it is certainly
 shorter than 70. Given that, it appears what has to be done is the same
 as what the audio industry has had to do with 30-year-old master tapes
 when it discovered that the binder holding the oxide to the backing was
 becoming gummy and shedding the music as the tape played: reconstruct
 the data and re-encode it using more up-to-date technology.

 I guess we will have grunt jobs for a long time to come. :)

I think you underestimate what Travis said about the assurance of
long-term encrypted data. If an attacker can obtain your ciphertext now
(and it is very likely he can), encrypted with a scheme that isn't
strong over a 70-year horizon, he will be able to break that scheme in
the future, when technology and science allow it, effectively
compromising [part of] your clients' private data despite your efforts
to re-encrypt it later with an improved scheme.

The point is that an encryption scheme for long-term secrets must be
strong from the beginning to the end of the period the data needs to
stay secret.

-- 
SATtva
www.vladmiller.info
www.pgpru.com



