Re: Proposal (was Summary re: /dev/random)

1999-08-03 Thread Damien Miller

On Sun, 1 Aug 1999, Sandy Harris wrote:

 The question, then is how best to make it into
 a two-stage design. Mainly, choose a block cipher
 and modify the hashing to suit. 

You will never get a block cipher in the kernel because of export 
restrictions.

What is wrong with SHA1?

Regards,
Damien Miller

--
| "Bombay is 250ms from New York in the new world order" - Alan Cox
| Damien Miller - http://www.ilogic.com.au/~dmiller
| Email: [EMAIL PROTECTED] (home) -or- [EMAIL PROTECTED] (work)




Re: palm crypto

1999-08-03 Thread Declan McCullagh

Or, if you don't wish to page through the export control silliness:

http://www.certicom.com/software/SecureMemo11.ZIP
http://www.certicom.com/software/SecureMemo11.SIT.BIN

-Declan


At 08:38 PM 8-1-99 -0400, Robert Hettinga wrote:

http://www.certicom.com/software/palmmemo.htm






Re: Proposal (was Summary re: /dev/random)

1999-08-03 Thread John Gilmore

  /dev/random should become two-stage, ...

I thought that /dev/urandom was the problem: that as new entropy comes
in, the cryptographically secure pseudo-RNG needs to get its entropy
in big chunks, so an attacker can't probe it to guess each bit of new
entropy as it comes in.

This, it seems, would require keeping two pools of entropy: a
separate pool for /dev/urandom, with more entropy dumped in from
/dev/random's pool only when /dev/random accumulates more than
N bits' worth.  (N large enough to preclude exhaustive search.)
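
For illustration only, here is a minimal user-space sketch of that
two-pool arrangement (Python, with SHA-1 standing in for the kernel's
mixing function; the class names, the crediting, and the 128-bit
transfer quantum are all made up and are not the actual driver):

import hashlib
import os

TRANSFER_BITS = 128          # illustrative N: large enough to preclude search

class Pool:
    """Toy entropy pool: SHA-1 mixing plus a simple entropy counter."""
    def __init__(self):
        self.state = bytes(20)
        self.entropy_bits = 0

    def mix(self, data, credited_bits):
        self.state = hashlib.sha1(self.state + data).digest()
        self.entropy_bits += credited_bits

    def extract(self, nbytes):
        out = b""
        while len(out) < nbytes:
            self.state = hashlib.sha1(self.state + b"extract").digest()
            out += self.state
        self.entropy_bits = max(0, self.entropy_bits - 8 * nbytes)
        return out[:nbytes]

random_pool = Pool()     # fed directly by interrupt timings and the like
urandom_pool = Pool()    # feeds /dev/urandom-style consumers

def add_entropy(sample, credited_bits):
    random_pool.mix(sample, credited_bits)
    # Transfer only once a full quantum has accumulated, never bit by bit,
    # so an attacker cannot probe for each new input as it arrives.
    if random_pool.entropy_bits >= TRANSFER_BITS:
        urandom_pool.mix(random_pool.extract(TRANSFER_BITS // 8), TRANSFER_BITS)

def urandom_read(nbytes):
    return urandom_pool.extract(nbytes)

if __name__ == "__main__":
    for _ in range(64):
        add_entropy(os.urandom(4), 8)   # pretend each sample earns 8 bits
    print(urandom_read(16).hex())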

Making this change doesn't involve changing any cryptographic primitives.

(We should definitely not be using unproven AES candidates for
anything; we're trying to rely only on algorithms that have seen
extensive and unsuccessful attack.)

The other useful change we identified is to allow /dev/random to
accumulate more than 512 bytes of entropy, and carry it across reboots
so it's available to establish tunnels quickly.  I think Ted Ts'o has
already done some of that work, though programs like pluto can't yet
tell /dev/random how big an entropy buffer they want it to keep.

In the linux-ipsec release, Pluto and KLIPS also need to document
their need for random and pseudo-random values.  And we should cut
back their appetite anywhere we safely can.

John



Re: House committee ditches SAFE for law enforcement version

1999-08-03 Thread Peter Gutmann

Bill Frantz [EMAIL PROTECTED] writes:
At 12:26 PM -0700 7/26/99, Rick Smith wrote:
At 10:48 AM 7/26/99 -0700, Tom Perrine wrote:
At that time (1985), every MLS-possible system that had been produced
had been cancelled (or died for other reasons)   Sure,
some of these (ours included) had serious performance problems, but
*every* one was cancelled?

This is a digression from the legislative issue, but the cancellations were
probably for commercial reasons. Many of the early efforts were more or
less funded by vendors, and they pulled out when no market developed that
could justify the obscene cost and schedule of a government security
evaluation. I could go on at length about the cost effectiveness of A1
style formal methods at finding significant security flaws, even if you
assume a pliant set of evaluators (NOT the government). NSA ended up
funding the LOCK program in the late '80s probably because vendors had
realized that there was no financial benefit in A1's formal assurance of
OSes. NSA still had some True Believers in A1 a decade ago, but they're all
gone now as far as I can tell.

I can support this conclusion from the KeyKOS experience.  KeyKOS could be
configured to support the B3/A1 requirements.  (The requirements for the two
levels were the same, only the level of assurance differed.)  Because our
kernel was written in 370 Assembler, our evaluation team suggested we start
with a B2 evaluation.  Our cost estimate for that evaluation was $1,000,000.
Our investors didn't see a market, so we dropped out.

A rule of thumb I've been using is a million dollars (pounds, DM, rubles,
zorkmids, whatever) and a year's work for an E3 certification (one person going
through the process once described progress as "$800K into the certification";
the expectation seems to be that once you've burned up a million dollars,
you're done).  At the moment there are only two justifications for this sort of
extravagance: the prospect of (one would assume fairly lucrative) government
contracts, or the requirement for a certain level of certification to comply
with digital signature laws (only a few European laws have done this, although
Australia may follow suit).

I'm not saying certification is a bad thing; it certainly helps in providing
some level of assurance that the product you're using is OK.  It's just
unfortunate that there's no way to do it in an economically viable way.  For
what I'm working on I'll just claim "designed to meet B3 (until proven
otherwise)" if anyone asks[0]; I don't think anyone will pay several million
dollars and wait several years[1] just to get a bit of paper affirming this.

Peter.

[0] Before I get flamed for this, this is exactly what it is, "designed to
meet", no more, no less.
[1] There isn't any rule of thumb for the work involved in attaining the higher
assurance levels because it's done so rarely, although in terms of cost and
time I've seen an estimate of $40M for an A1 Multics (it never eventuated),
and DEC's A1 security kernel took nearly a decade to do, with 30-40 people
working on it at the end (just before it was cancelled).  A lot of this
overhead was due to the fact that this hadn't been done much and there was
a lot of research work involved; an estimate I've had for doing a
commercial-product A1 system now would be about 3-5 years (probably closer
to 5), ramping up from an initial 10 to 30 people at the end, and costing
maybe $15-20M.




FBI PR specialist on KQED Forum San Francisco at 9:00am

1999-08-03 Thread Ernest Hua

I think his name was agent Grotz, but I'm
not sure.  Definitely Mr. PR.  When certain
callers complained heavily, and he couldn't
defend himself, he backtracked to the usual
"we have a program for that" or "just call
my office and we'll talk" or "look at our
new core values".

Very bureaucratic of him.

Barbara at the end at least made the
important point about how the FBI's (and
the NSA's) stupid encryption policy has
crippled our infrastructure.

Way to go, Barbara!

Otherwise, far too many right wing nuts
calling in.  One item which I have
complained about in this group and in
Cypherpunks is that, as strongly as some of
you might feel about Waco and Ruby Ridge
and the like, it simply does not help the
cause of encryption freedom to whip out the
jack-booted language any chance you get.

In the public's mind, such inflammatory
statements really cloud the substance behind
the encryption issue, which is already
confusing enough by itself.

Luckily, the FBI is having trouble educating
the public on this topic as well, precisely
because it is so confusing.

Ern





Re: linux-ipsec: /dev/random

1999-08-03 Thread Paul Koning

 "John" == John Denker [EMAIL PROTECTED] writes:

 John At 10:09 AM 8/2/99 -0400, Paul Koning wrote:
   1. Estimating entropy.  Yes, that's the hard one.  It's
  orthogonal from everything else.  /dev/random has a fairly simple
  approach; Yarrow is more complex.
  
  It's not clear which is better.  If there's reason to worry about
  the one in /dev/random, a good solution would be to include the
  one from Yarrow and use the smaller of the two answers.

 John Hard?  That's much worse than hard.  In general, it's
 John impossible in principle to look at a bit stream and determine
 John any lower bound on its entropy.  Consider the bitstream
 John produced by a light encoding of /dev/zero.  If person "A" knows
 John the encoding, the conditional entropy is zero.  If person "B"
 John hasn't yet guessed the encoding, the conditional entropy is
 John large.

 John Similar remarks apply to physical entropy: I can prepare a
 John physical system where almost any observer would measure lots of
 John entropy, whereas someone who knew how the system was prepared
 John could easily return it to a state with 10**23 bits less
 John apparent entropy.  Example: spin echoes.

Fine, but we weren't talking about "in principle" or "in general".
Sure, given an unspecified process of unknown (to me) properties I
cannot make sensible statements about its entropy.  That is true but
it isn't relevant to the discussion.

Instead, we're talking about systems where we have some understanding
of the properties involved.

For example, to pick a physical process, suppose I had a noise
generator (resistor), shielding of known properties or at least
bounded effectiveness, biases ditto, I would say I can then come up
with a reasonable entropy estimate, especially if I'm quite
conservative.  This is what people typically do if they build
"hardware random number generators".  They certainly need to be
treated with care and analyzed cautiously, but it definitely is a
thing that can be done.
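
As a rough illustration of that kind of conservative accounting (a
Python toy, with os.urandom standing in for resistor noise; the numbers
are arbitrary, and a real design would also have to check correlations,
drift, and so on), one can measure the observed bias, credit only the
implied min-entropy with an extra safety factor, and debias the raw
bits with the classic von Neumann trick:

import math
import os

def bit_stream(raw_bytes):
    for byte in raw_bytes:
        for i in range(8):
            yield (byte >> i) & 1

def min_entropy_per_bit(bits):
    """Conservative per-bit estimate from observed bias alone:
    H_min = -log2(max(p0, p1))."""
    p1 = sum(bits) / len(bits)
    return -math.log2(max(p1, 1.0 - p1))

def von_neumann(bits):
    """Classic debiasing: 01 -> 0, 10 -> 1, discard 00 and 11."""
    it = iter(bits)
    return [a for a, b in zip(it, it) if a != b]

if __name__ == "__main__":
    raw = os.urandom(4096)              # stand-in for noise-source samples
    bits = list(bit_stream(raw))
    h = min_entropy_per_bit(bits)
    # Credit only half of the estimate, as an extra safety margin.
    print("crediting %.2f bits per raw bit" % (0.5 * h))
    print("debiased bits available:", len(von_neumann(bits)))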

Sure, you can do cat /dev/zero | md5sum > /dev/random, but I don't
believe anyone is proposing that as a way of feeding entropy into it.

paul



Re: linux-ipsec: /dev/random

1999-08-03 Thread John Denker

At 10:09 AM 8/2/99 -0400, Paul Koning wrote:

1. Estimating entropy.  Yes, that's the hard one.  It's orthogonal
from everything else.  /dev/random has a fairly simple approach;
Yarrow is more complex.

It's not clear which is better.  If there's reason to worry about the
one in /dev/random, a good solution would be to include the one from
Yarrow and use the smaller of the two answers.

Hard?  That's much worse than hard.  In general, it's impossible in
principle to look at a bit stream and determine any lower bound on its
entropy.  Consider the bitstream produced by a light encoding of /dev/zero.
 If person "A" knows the encoding, the conditional entropy is zero.  If
person "B" hasn't yet guessed the encoding, the conditional entropy is large.

Similar remarks apply to physical entropy:  I can prepare a physical system
where almost any observer would measure lots of entropy, whereas someone
who knew how the system was prepared could easily return it to a state with
10**23 bits less apparent entropy.  Example: spin echoes.

2. Pool size.  /dev/random has a fairly small pool normally but can be 
made to use a bigger one.  Yarrow argues that it makes no sense to use 
a pool larger than N bits if an N bit mixing function is used, so it
uses a 160 bit pool given that it uses SHA-1.  I can see that this
argument makes sense.  (That suggests that the notion of increasing
the /dev/random pool size is not really useful.)

Constructive suggestion:  given an RNG that we think makes good use of an N
bit pool, just instantiate C copies thereof, and combine their outputs.
ISTM this should produce something with N*C bits of useful state.
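
A minimal sketch of that parallel-instances idea (Python, hypothetical
names; whether XORing the outputs really yields N*C bits of useful
state is exactly the question on the table):

import hashlib
import os

class HashPRNG:
    """One instance with an N-bit (here 160-bit, SHA-1) internal state."""
    def __init__(self, seed):
        self.state = hashlib.sha1(seed).digest()

    def next_block(self):
        self.state = hashlib.sha1(self.state).digest()
        return hashlib.sha1(self.state + b"out").digest()

class CombinedPRNG:
    """C independently seeded copies; their outputs are XORed together."""
    def __init__(self, copies=4):
        self.instances = [HashPRNG(os.urandom(20)) for _ in range(copies)]

    def next_block(self):
        block = bytes(20)
        for inst in self.instances:
            block = bytes(a ^ b for a, b in zip(block, inst.next_block()))
        return block

if __name__ == "__main__":
    g = CombinedPRNG(copies=4)          # aiming at roughly 4 * 160 bits
    print(g.next_block().hex())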

5. "Catastrophic reseeding" to recover from state compromise.

So while this attack is a bit of a stretch, defending against it is
really easy.  It's worth doing.

I agree.  As you and Sandy pointed out, one could tack this technology onto
/dev/urandom and get rid of one of the two main criticisms.

And could we please call it "quantized reseeding"?  A catastrophe is
usually a bad thing.

6. Inadequate entropy sources for certain classes of box.

If the TRNG has zero output, all you can do is implement a good PRNG and
give it a large, good initial seed "at the factory".

If the TRNG has a very small output, all you can do is use it wisely.
Quantized reseeding appears to be the state of the art.

--

There's one thing that hasn't been mentioned in any of the "summaries", so
I'll repeat it:  the existing /dev/urandom has the property that it uses up
*all* the TRNG bits from /dev/random before it begins to act like a PRNG.
Although this is a non-issue if only one application is consuming random
bits, it is a Bad Thing if one application (that only needs a PRNG) is
trying to coexist with another application (that really needs a TRNG).

This, too, is relatively easy to fix, but it needs fixing.

I see no valid argument that there is anything major wrong with the
current generator, nor that replacing it with Yarrow would be a good
thing at all.  

I agree, depending on what you mean by "major".  I see three "areas for
improvement":
  a) don't reseed more often than necessary, so it doesn't suck the TRNG dry,
  b) when it does reseed, it should use quantized reseeding, and
  c) get around the limitation of the width of the mixing function, perhaps
using the parallel-instances trick mentioned above, or otherwise.




Re: linux-ipsec: /dev/random

1999-08-03 Thread John Denker

At 01:27 PM 8/2/99 -0400, Paul Koning wrote:

we weren't talking about "in principle" or "in general".
Sure, given an unspecified process of unknown (to me) properties I
cannot make sensible statements about its entropy.  That is true but
it isn't relevant to the discussion.

Instead, we're talking about systems where we have some understanding
of the properties involved.

For example, to pick a physical process, suppose I had a noise
generator (resistor), shielding of known properties or at least
bounded effectiveness, biases ditto, I would say I can then come up
with a reasonable entropy estimate, especially if I'm quite
conservative.  This is what people typically do if they build
"hardware random number generators".  They certainly need to be
treated with care and analyzed cautiously, but it definitely is a
thing that can be done.

I agree with that.  Indeed I actually attached a homebrew TRNG to my
server, pretty much as you described.

Sure, you can do cat /dev/zero | md5sum > /dev/random, but I don't
believe anyone is proposing that as a way of feeding entropy into it.

That's where we might slightly disagree :-) ... I've seen some pretty
questionable proposals ... but that's not the point.

The point is that there are a lot of customers out there who aren't ready
to run out and acquire the well-designed hardware TRNG that you alluded to.
 So we need to think carefully about the gray area between the
strong-but-really-expensive solution and the cheap-but-really-lame
proposals.  The gray area is big and important.

Cheers --- jsd



Proposed bill for tax credit to develop encryption with covert access

1999-08-03 Thread Radia Perlman - Boston Center for Networking

http://thomas.loc.gov/cgi-bin/bdquery/z?d106:h.r.02617:

I'm sure you'll all be enthusiastic about the chance to save your
company tax money.

Radia




Re: linux-ipsec: /dev/random

1999-08-03 Thread John Denker

At 01:50 PM 8/2/99 -0400, Paul Koning wrote:

I only remember a few proposals (2 or 3?) and they didn't seem to be
[unduly weak].  Or do you feel that what I've proposed is this
weak?  If so, why?  I've seen comments that say "be careful" but I
don't remember any comments suggesting that what I proposed is
completely bogus...

We can waste lots of cycles having cosmic discussions, but that's not
helping matters.  What we need is a minimum of ONE decent quality
additional entropy source, one that works for diskless IPSEC boxes.

OK, I see four proposals on the table.  (If I've missed something, please
accept my apologies and send a reminder.)

1) Hardware TRNG
2) Network timing
3) Deposits from a "randomness server"
4) Just rely on PRNG with no reseeding.

Discussion:

1) Suppose we wanted to deploy a jillion of these things.  Suppose they
have hardware TRNGs at an incremental cost of $10.00 apiece.  That comes to
ten jillion dollars, and I don't want to pay that unless I have to.

2) Network timing may be subject to observation and possibly manipulation
by the attacker.  My real-time clocks are pretty coarse (10ms resolution).
This subthread started with a discussion of software to estimate the
entropy of a bitstream, and I submit that this attack scenario is a perfect
example of a situation where no software on earth can provide a useful
lower bound on the entropy of the offered bit-stream.

3) Deposits from a server are conspicuously ineffective for terminating a
continuation attack.  If we can't do better than that, we might as well go
for option (4) and not even pretend we are defending against continuation
attacks.

4) I don't think my customers would be very happy with a system that could
not recover from a transient read-only compromise.


So... What have I missed?  What's your best proposal?

Thanx --- jsd




Re: linux-ipsec: /dev/random

1999-08-03 Thread Paul Koning

 "John" == John Denker [EMAIL PROTECTED] writes:

 John At 01:50 PM 8/2/99 -0400, Paul Koning wrote:
   I only remember a few proposals (2 or 3?) and they didn't seem to
  be [unduly weak].  Or do you feel that what I've proposed is this
  weak?  If so, why?  I've seen comments that say "be careful" but I
  don't remember any comments suggesting that what I proposed is
  completely bogus...
  
  We can waste lots of cycles having cosmic discussions, but that's
  not helping matters.  What we need is a minimum of ONE decent
  quality additional entropy source, one that works for diskless
  IPSEC boxes.

 John OK, I see four proposals on the table.  (If I've missed
 John something, please accept my apologies and send a reminder.)

 John ...2) Network timing

 John Discussion:

 John ...
 John 2) Network timing may be subject to observation and possibly
 John manipulation by the attacker.  My real-time clocks are pretty
 John coarse (10ms resolution).

But that's not what I proposed.  I said "CPU cycle counter".  Pentiums 
and up have those (and for all I know maybe older machines too, I'm no 
x86 wizard).  If the best you have is a 10 ms clock then this proposal 
does NOT apply -- for the reason you stated.
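
A rough user-space illustration of the idea (Python, with
time.perf_counter_ns() standing in for the Pentium cycle counter; the
real proposal would read RDTSC in the kernel's interrupt path, and the
one-credited-bit-per-packet figure is an arbitrary placeholder):

import hashlib
import socket
import time

pool = hashlib.sha1()
credited_bits = 0

def stamp_event():
    """Mix the low-order bits of a fine-grained timer into the pool."""
    global credited_bits
    t = time.perf_counter_ns()          # stand-in for the CPU cycle counter
    pool.update(t.to_bytes(8, "little"))
    credited_bits += 1                  # deliberately stingy credit

if __name__ == "__main__":
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("0.0.0.0", 5000))
    s.settimeout(10.0)
    try:
        while credited_bits < 128:      # collect a full quantum, then stop
            s.recvfrom(2048)            # the timed event is packet arrival
            stamp_event()
    except socket.timeout:
        pass
    print("collected", credited_bits, "credited bits:", pool.hexdigest())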

paul



Re: Summary re: /dev/random

1999-08-03 Thread tytso

   Date: Sun, 01 Aug 1999 17:04:14 +
   From: Sandy Harris [EMAIL PROTECTED]

   More analysis is needed, especially in the area of how
   to estimate input entropy.

True.  I actually don't believe perfection is at all possible.  There
are things which could probably do a better job, such as trying to run
gzip -9 over the entropy stream and then using the size of the
compressed stream (minus the dictionary) as the entropy.  This is
neither fast nor practical to do in the kernel, though.
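
Something along those lines, say (user-space Python, with zlib standing
in for gzip -9 and the empty-input overhead subtracted as a crude
version of "minus the dictionary"):

import os
import zlib

def compressed_size_estimate(samples):
    """Rough entropy estimate, in bits, for a byte string: the size of
    the level-9 compressed output minus the fixed framing overhead."""
    overhead = len(zlib.compress(b"", 9))
    body = len(zlib.compress(samples, 9)) - overhead
    return max(0, 8 * body)

if __name__ == "__main__":
    print(compressed_size_estimate(b"\x00" * 4096))    # tiny
    print(compressed_size_estimate(os.urandom(4096)))  # close to 8 * 4096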

   Yarrow's two-stage design, where the output from hashing the pool
   seeds a pseudo-random number generator based on a strong block
   cipher, offers significant advantages over the one-stage design in
   /dev/random which delivers hash results as output. In particular, it
   makes the hash harder to attack since its outputs are never directly
   seen, makes denial-of-service attacks based on depleting the
   generator nearly impossible, and "catastrophic reseeding" prevents
   iterative guessing attacks on generator internal state.

Yarrow is different.  It uses a 160 bit pool, and as such is much more
dependent on the strength of the cryptographic function.  Hence, the two
stage design is much more important.  It also doesn't use a block
cipher, BTW.  It just uses an iterated hash function such as SHA, at
least the last time I looked at the Counterpane paper.

Linux's /dev/random uses a very different design, in that it uses a
large pool to store the entropy.  As long as you have enough entropy
(i.e., you don't overdraw on the pool's entropy), /dev/random isn't
relying on the cryptographic properties as much as Yarrow does.
Consider that if you only withdraw 160 bits of randomness out of a 32k
bit pool, even if you can completely reverse the SHA function, you can't
possibly determine more than about 0.5% of the pool.

As such, I don't really believe the second stage design part of Yarrow
is really necessary for /dev/random.  Does it add something to the
security?  Yes, but at the cost of relying more on the crypto hash, and
less on the entropy collection aspects of /dev/random, which is as far
as I'm concerned much more important anyway.

If Free S/WAN really wants the second stage design, I will observe that
the second stage can be done entirely in user space.  Just use
/dev/random or /dev/urandom as the first stage, and then simply use an
iterated SHA (or AES candidate in MAC mode --- it really doesn't matter)
as your second stage, periodically doing the catastrophic reseed by
grabbing more data from /dev/random.  This gives you all of the
benefits (speed of key generation, no worry about DOS attacks by
depleting entropy --- by mostly ignoring the problem) and drawbacks
(over-dependence on the crypto function) of Yarrow.
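
Done in user space, that second stage might look something like the
following (a Python sketch with iterated SHA-1; the reseed interval is
arbitrary, and the demo reads /dev/urandom only so it doesn't block --
substitute /dev/random for a real catastrophic reseed):

import hashlib

class SecondStage:
    """Iterated-SHA-1 generator seeded from the kernel pool, with
    periodic reseeds that replace the whole internal state at once."""
    RESEED_EVERY = 1024 * 1024          # bytes between reseeds (illustrative)

    def __init__(self, source="/dev/urandom"):   # use /dev/random in earnest
        self.source = source
        self._reseed()

    def _reseed(self):
        with open(self.source, "rb") as f:
            self.state = f.read(20)     # grab a full 160 bits in one go
        self.since_reseed = 0

    def read(self, nbytes):
        out = b""
        while len(out) < nbytes:
            self.state = hashlib.sha1(self.state).digest()
            out += hashlib.sha1(self.state + b"output").digest()
        self.since_reseed += nbytes
        if self.since_reseed >= self.RESEED_EVERY:
            self._reseed()
        return out[:nbytes]

if __name__ == "__main__":
    g = SecondStage()
    print(g.read(32).hex())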

- Ted

P.S. PGP's random number generator is similar to Linux's, and is
similarly quite different from Yarrow.  Probably the best thing to say is
that they are philosophically quite different.  I don't really believe we
have enough analysis tools to say which one is "better".



mailing list: eucrypto

1999-08-03 Thread Thomas Roessler

A new mailing list, [EMAIL PROTECTED], has been established.  It's
intended for discussions of crypto politics with a focus on the
European Union.

Topics include:

- Announcements and discussions on common European issues concerning
  availability, use, legal framework and politics of cryptographic
  techniques.
 
- Announcements and discussions on common issues concerning
  communications interception and related topics, e.g.
  state-sponsored hacking of communication end points.
 
- Announcements and brief discussions on national issues which may
  be of interest abroad.  Extensive and in-depth discussions on such
  topics should be performed on respective national mailing lists
  such as [EMAIL PROTECTED] (for the UK), or [EMAIL PROTECTED]
  (for Germany).
 
- Announcements and discussions on joint initiatives and campaigns
  concerning any of the abovementioned topics.
 
To subscribe to the list, send an e-mail containing the words
"subscribe eucrypto" to [EMAIL PROTECTED].



Re: linux-ipsec: Re: Summary re: /dev/random

1999-08-03 Thread Paul Koning

 "Paul" == Paul Koning [EMAIL PROTECTED] writes:

 Paul 2. Pool size.  /dev/random has a fairly small pool normally but
 Paul can be made to use a bigger one.  Yarrow argues that it makes
 Paul no sense to use a pool larger than N bits if an N bit mixing
 Paul function is used, so it uses a 160 bit pool given that it uses
 Paul SHA-1.  I can see that this argument makes sense.  (That
 Paul suggests that the notion of increasing the /dev/random pool
 Paul size is not really useful.)

Correction... I reread the Yarrow paper, and it seems I misquoted it.

Yarrow uses the SHA-1 context (5 word hash accumulator) as its "pool"
so it certainly has a 160 bit entropy limit.  But /dev/random uses a
much larger pool, which is in effect the input to a SHA-1 or MD5 hash,
the output of which is (a) fed back into the pool to change its state,
and (b) after some further munging becomes the output bitstream.

In that case, the possible entropy should be as high as the bit count
of the pool, not the length of the hash, so cancel my comment #2...

paul




Re: And now, a java encoder ring!

1999-08-03 Thread Andreas Bogk

[EMAIL PROTECTED] (Peter Gutmann) writes:

 Is there any easy way to check this which doesn't involve writing a lot of 
 code and poking it at the ring to see how it'll react?  I have one of these 

Yes. Upload the ModExp demo applet and see if it will exponentiate two
large numbers correctly in the right amount of time (under a second).

Andreas

-- 
"We show that all proposed quantum bit commitment schemes are insecure because
the sender, Alice, can almost always cheat successfully by using an
Einstein-Podolsky-Rosen type of attack and delaying her measurement until she
opens her commitment." ( http://xxx.lanl.gov/abs/quant-ph/9603004 )



Key management for encrypting to self

1999-08-03 Thread Nick Szabo


Enzo Michelangeli wrote:
What's the point of using public key technologies like ECC to protect
private documents?

The device or terminal I'm using at the moment may not be a 
persistently secure part of my TCB.  In particular:

(a) I might want to bring a Palm travelling but keep my
secret key at home, so that my key is not compromised if
the device is stolen.

(b) I might be borrowing a friend's device.  (I'm trusting the 
friend with the confidentiality of this particular document, 
but not with my secret key).

(c) I might be accessing the encrypting device through an untrusted
terminal (into which I don't want to type my passphrase).

With public key crypto I can encrypt to self without having secure
access to my TCB.  There are probably many other variations on this 
theme.







[EMAIL PROTECTED] 
http://www.best.com/~szabo/
PGP D4B9 8A17 9B90 BDFF 9699  500F 1068 E27F 6E49 C4A2




Re: palm crypto

1999-08-03 Thread Markus Friedl

On Mon, Aug 02, 1999 at 10:03:28AM +0800, Enzo Michelangeli wrote:
 What's the point of using publick key technologies like ECC to protect
 private documents? As key management is a non-issue, something based on,
 say, 3DES or IDEA (like "Secret!", http://linkesoft.com/english/secret/)
 would suffice...

Public key technology allows encrypting text without a passphrase;
only decryption needs the passphrase.
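
For example (a sketch using the modern Python "cryptography" package
and RSA-OAEP purely to illustrate the property; it has nothing to do
with the Palm program itself): the public key encrypts a memo with no
passphrase in sight, while the private key sits on disk protected by a
passphrase that is needed only at decryption time.

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa, padding

passphrase = b"only needed to decrypt"

# One-time setup: store the passphrase-protected private key somewhere safe.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
stored_private = private_key.private_bytes(
    serialization.Encoding.PEM,
    serialization.PrivateFormat.PKCS8,
    serialization.BestAvailableEncryption(passphrase),
)
public_key = private_key.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Encrypting a short memo needs only the public key -- no passphrase.
ciphertext = public_key.encrypt(b"meet me at the usual place", oaep)

# Decryption is where the passphrase finally comes in.
unlocked = serialization.load_pem_private_key(stored_private, password=passphrase)
print(unlocked.decrypt(ciphertext, oaep))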

On Tue, Aug 03, 1999 at 10:24:32AM -0700, Jay D. Dyson wrote:
  http://www.certicom.com/software/SecureMemo11.ZIP
  http://www.certicom.com/software/SecureMemo11.SIT.BIN
  http://www.certicom.com/software/palmmemo.htm
 
   I've been trying to access these files, but I'm consistently hit
 up for a login/pass.  The standard cypherpunks routine doesn't work, and
 I'd like to check this data out.  Help? 

try:
ftp://ftp.fu-berlin.de/unix/security/replay-mirror/crypto/PalmPilot/SecureMemo11.ZIP

-markus



Re: linux-ipsec: /dev/random

1999-08-03 Thread Anonymous

 John The point is that there are a lot of customers out there who
 John aren't ready to run out and acquire the well-designed hardware
 John TRNG that you alluded to.  So we need to think carefully about
 John the gray area between the strong-but-really-expensive solution
 John and the cheap-but-really-lame proposals.  The gray area is big
 John and important.

Of course Intel is putting a TRNG onto its chip sets already, and these
will probably be widely available in the future.  This thing produces
massive quantities of true entropy, more than even John could need,
and it has been reviewed by one of the sharpest guys in the business,
Paul Kocher (see review at www.cryptography.com).

Maybe the real solution here is to work more closely with Intel to
provide some kind of open source access to this RNG.  An associate who
attended one of Intel's corporate briefings on the chip indicated that
the question of Linux came up, and Intel expressed a desire to find some
solution for that market.  They want people to use this chip, and Linux
is an increasingly important part of the landscape for them.



Re: palm crypto

1999-08-03 Thread Ian Goldberg

In article 001201bedc8b$3d5fb580$[EMAIL PROTECTED],
Enzo Michelangeli [EMAIL PROTECTED] wrote:
What's the point of using public key technologies like ECC to protect
private documents? As key management is a non-issue, something based on,
say, 3DES or IDEA (like "Secret!", http://linkesoft.com/english/secret/)
would suffice...

I had the same thought.  I finally decided that the ability to write a
memo, encrypt it to some *other* pubkey, and beam it to the recipient is
sufficiently useful to warrant the existence of this program.  Keeping
your *own* files secret is a special case.

   - Ian