Re: depleting the random number generator

1999-07-26 Thread John Kelsey

-BEGIN PGP SIGNED MESSAGE-

[ To: Perry's Crypto List, James, Ben, Bram ##
  Date: 07/25/99 ##
  Subject: Re: depleting the random number generator ]

Date: Sun, 25 Jul 1999 11:01:00 -0400
To: "James A. Donald" [EMAIL PROTECTED], Ben Laurie
   [EMAIL PROTECTED], bram [EMAIL PROTECTED]
From: "Arnold G. Reinhold" [EMAIL PROTECTED]
Subject: Re: depleting the random number generator
Cc: cryptography [EMAIL PROTECTED]

[Arnold R] One nice advantage of using RC4 as a nonce generator is that
you can easily switch back and forth between key setup and
code byte generation. You can even do both at the same time.
(There is no need to reset the index variables.) This allows
you to intersperse entropy deposits and withdrawals at will.

Has anyone looked at this from a cryptanalytic point of
view?  I think there are chosen-input attacks available if
you do this in the straightforward way.  That is, if I get
control over some of your inputs, I may be able to alternate
looking at your outputs and sending in new inputs, and mount
an attack that isn't possible at all against RC4 as it's
normally used.  (This comes out of conversations with Jon
Callas, Dave Wagner, and Niels Ferguson, from a time when I
considered designing a Yarrow-variant using RC4 as the
underlying engine.)

[Arnold R] In particular, if you deposit the time of each entropy
withdrawal, the proposed denial of service attack that
started this thread would actually replenish a few bits of
entropy with each service request.

This isn't a bad idea, but I'd be careful about assuming
that those times hold much entropy.  After all, a given
piece of code which has thirty calls to the PRNG probably
runs in about the same amount of time every time, barring
disk or network I/O.

Arnold Reinhold

- --John Kelsey, Counterpane Internet Security, [EMAIL PROTECTED]
NEW PGP print =  5D91 6F57 2646 83F9 6D7F 9C87 886D 88AF

-BEGIN PGP SIGNATURE-
Version: PGPfreeware 5.5.3i for non-commercial use http://www.pgpi.com

iQCVAwUBN5vpyCZv+/Ry/LrBAQEEugP/a0EmfGGNtCt9TXbzvbn6VbdpwMvInVr0
U+BiLtwa4UCp7l4i4BK3lovYkXHAYwdKD4v7k7OQw0iIaJAEHGFrdscByoAc1rA7
X83UylGkuhjyRmH9N7ygK67oSp7suWd5j5+7nS1TiZvFdP/hE8M7BXOtaFmxx7eP
K6tmgAWN3uc=
=P+FQ
-END PGP SIGNATURE-




Re: House committee ditches SAFE for law enforcement version

1999-07-26 Thread Declan McCullagh

Oh, and there's one other thing: There is no companion bill to SAFE in the
Senate. So assuming (this is a big assumption) the Senate approves ProCODE
or something, then the differences between the two bills would be hammered
out in a conference committee.

Needless to say, this would be very dangerous and domestic controls could
be inserted in a heartbeat. It depends on who's on the committee, for one
thing, and whether ostensibly pro-crypto legislators are willing to
compromise in exchange for more funding of their own pet projects, etc.

But all this is far in the future and unlikely to happen with this Congress
and this obstructionist and veto-happy administration. It seems to me that
the millions of dollars that have been spent by the industry in
crypto-lobbying efforts could have been better spent on, say, offshore
development.

-Declan





Re: House committee ditches SAFE for law enforcement version

1999-07-26 Thread John A. Limpert

Tim May wrote:
 
 Fourth, and this is a serious question, not a rhetorical one: What the hell
 ever happened to the movement to develop offshore and then skirt U.S.
 export laws thusly? Remember how RSA had created a European branch which
 would supposedly develop RSA-type software and then throw it in the face of
 Uncle Sam? This was about two years ago, as I recall. And remember NTT and
 their RSA chip, which Jim Bidzos held up in front of Congress as a slap in
 their face?

I don't claim to have any inside knowledge of what has happened with
RSA, Sun and other companies with offshore crypto operations. I suspect
that any U.S. company is going to have to deal with threats from the
federal government concerning "good/happy/profitable relations" between
that company and the government.

Sun sells huge amounts of hardware and software to the federal
government. Are they willing to give up that business? Do they want the
IRS, SEC, DOC and other agencies to be looking for ways to obstruct
their business operations?

Do you see any cellular/PCS companies offering strong encryption to
their customers? They all caved when the feds suggested that it wasn't a
good idea. So we get no encryption or encryption that can be cracked by
an amateur cryptanalyst with a PC.

What does a U.S. company do when the NSA suggests modifications (a la
Crypto AG) to their software that compromise its security?

I believe that if we are ever going to see good encryption software for
the masses, it is going to be from open source projects, not from
commercial vendors selling binaries.



Re: House committee ditches SAFE for law enforcement version

1999-07-26 Thread Bill Sommerfeld

[CC:'s to list I don't subscribe to deleted.]

one possible escape clause here is a constitutional provision
regarding immunity of legislators for acts in congress:

[from article 1, section 6]

".. for any Speech or Debate in either House, they shall not be
questioned in any other place."

.. so, as I read it, the only entity capable of enforcing the gag
order (i.e., preventing a legislator from repeating what he heard in
the closed briefing in a subsequent open legislative session) is the
congress itself, and that, likely, only after the fact.

But then again, I'm not a lawyer, and I'm also not sure how this
provision has been interpreted in the past..

- Bill



Re: House committee ditches SAFE for law enforcement version

1999-07-26 Thread Rick Smith

Declan McCullagh [EMAIL PROTECTED] writes:

 The sponsor of yesterday's amendment, Rep. Weldon, said that he wants to
 have a classified briefing //on the House floor// to scare members into
 voting his way. Look for killer amendments to SAFE to be offered during
 that floor vote, perhaps even ones with domestic controls.

Recent reports on the handling of classified policy information make it
clear that the information will be "leaked" to many sources, and will
quickly fall into the hands of America's adversaries. The closed debate
shuts out public discussion without protecting the information from
potential foes.

Or, as Tom Clancy pointed out, it'll be published in Aviation Week the
following month.

Rick.
[EMAIL PROTECTED]




Re: depleting the random number generator

1999-07-26 Thread James A. Donald

--
Arnold G. Reinhold [EMAIL PROTECTED] wrote:
  One nice advantage of using RC4 as a nonce generator is that you can
  easily switch back and forth between key setup and code byte generation.
  You can even do both at the same time. (There is no need to reset the
  index variables.) This allows you to intersperse entropy deposits and
  withdrawals at will.

At 01:49 PM 7/25/99 -0700, David Wagner wrote:
 Oh dear!  This suggestion worries me.
 Is it reasonable to expect this arrangement to be secure
 against e.g. chosen-entropy attacks?

Yes:  If the attacker knows exactly when the packets arrive (which he
cannot) this cannot give him any additional knowledge about the state.

The worst case is that the attacker does not lose any information.


--digsig
 James A. Donald
 6YeGpsZR+nOTh/cGwvITnSR3TdzclVpR0+pr3YYQdkG
 GwzRjnRrKYJu2r1GIGDbMcu4BUlTzkvCgsPsse1R
 4zW/Nuta5TAkUWJiaYK+pxqBFNK6i8MzCczPKz24u




Re: House committee ditches SAFE for law enforcement version

1999-07-26 Thread John Denker

At 07:31 AM 7/26/99 -0400, Bill Sommerfeld wrote:

".. for any Speech or Debate in either House, they shall not be
questioned in any other place."

But then again, I'm not a lawyer, and I'm also not sure how this
provision has been interpreted in the past..

IANAL but as you can imagine, members of congress take their privileges very
seriously.  No court or executive agency would dare sanction a member for
something said in speech or debate, and the privilege has even been
extended to members' aides, e.g. in the Pentagon Papers case:
  http://www.law.vill.edu/Fed-Ct/Supreme/Flite/opinions/408US606.htm

Leakage from the floor (or peremptory declassification, as Senator Gravel
did with the Pentagon Papers) has been a sore point in the past.  It makes
agencies very leery of giving a "secret briefing" to members of congress.
But congress wants, and sometimes requires, such briefings.

The result is that each house has its own rules against disclosing secret
information.  I couldn't easily find a copy of the rules, but I assume that a
member who broke the rules could be censured or expelled.  OTOH in a case
where there was a legitimate difference of opinion as to whether something
*should* have been classified, the member would have a very strong defense.




Re: depleting the random number generator

1999-07-26 Thread bram

On Sun, 25 Jul 1999, John Kelsey wrote:

 Has anyone looked at this from a cryptanalytic point of
 view?  I think there are chosen-input attacks available if
 you do this in the straightforward way.  That is, if I get
 control over some of your inputs, I may be able to alternate
 looking at your outputs and sending in new inputs, and mount
 an attack that isn't possible at all against RC4 as it's
 normally used.  (This comes out of conversations with Jon
 Callas, Dave Wagner, and Niels Ferguson, from a time when I
 considered designing a Yarrow-variant using RC4 as the
 underlying engine.)

I thought about building SRNG's from several different cryptographic
primitives, and came to the conclusion that the chosen-entropy attacks
force it to be based on a secure hash. Since the design I figured out
looks very much like yarrow, we probably had thoughts along the same
lines.
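
To make the shape of such a design concrete, here is a minimal sketch of a
hash-based pool (my own illustration, not bram's design and not Yarrow's
actual algorithm; the struct and function names are invented, and OpenSSL's
SHA-1 routines stand in for "a secure hash").  Inputs are accumulated
through the hash, and a reseed replaces the generator key with a hash of
the old key plus the accumulated entropy, so an attacker has to control
every contributing input to keep predicting the state:

#include <string.h>
#include <openssl/sha.h>

struct pool {
    unsigned char key[SHA_DIGEST_LENGTH];   /* generator state / key */
    SHA_CTX acc;                            /* accumulates raw entropy inputs */
};

static void pool_init(struct pool *p, const unsigned char seed[SHA_DIGEST_LENGTH])
{
    memcpy(p->key, seed, SHA_DIGEST_LENGTH);
    SHA1_Init(&p->acc);
}

static void pool_add(struct pool *p, const void *buf, size_t len)
{
    SHA1_Update(&p->acc, buf, len);         /* fold in one entropy sample */
}

static void pool_reseed(struct pool *p)
{
    unsigned char e[SHA_DIGEST_LENGTH];
    unsigned char both[2 * SHA_DIGEST_LENGTH];

    SHA1_Final(e, &p->acc);                 /* digest of everything added so far */
    memcpy(both, p->key, SHA_DIGEST_LENGTH);
    memcpy(both + SHA_DIGEST_LENGTH, e, SHA_DIGEST_LENGTH);
    SHA1(both, sizeof both, p->key);        /* new key = H(old key || entropy) */
    SHA1_Init(&p->acc);                     /* start the next accumulation */
}

Output generation (e.g. keying a stream cipher or hashing a counter under
the key) is omitted; the point is only that chosen inputs pass through the
hash and cannot be "un-mixed" without already knowing the old key.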

 This isn't a bad idea, but I'd be careful about assuming
 that those times hold much entropy.  After all, a given
 piece of code which has thirty calls to the PRNG probably
 runs in about the same amount of time every time, barring
 disk or network I/O.

A lot of things include less entropy than one might assume. For example,
keystrokes contain essentially no entropy based on what letter was hit,
and the number of bits of entropy their timing includes is approximately
the logarithm of the number of time ticks since the last keystroke (which
means, interestingly enough, that you can get faster entropy harvesting by
having a more precise clock).
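
As a rough illustration of that estimate (my own sketch, with invented
names; the clock source and tick rate are whatever the platform provides),
the per-keystroke credit would be computed something like this:

#include <math.h>

static unsigned long last_tick;

/* Credit roughly log2(ticks elapsed since the previous keystroke) bits.
   A finer-grained clock means more ticks between keystrokes and hence
   more bits credited per event, which is the point about clock precision
   made above. */
static double keystroke_entropy_bits(unsigned long ticks_now)
{
    unsigned long delta = ticks_now - last_tick;
    last_tick = ticks_now;
    return (delta > 1) ? log2((double)delta) : 0.0;
}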

-Bram




Re: depleting the random number generator

1999-07-26 Thread bram

On Mon, 26 Jul 1999, James A. Donald wrote:

  Oh dear!  This suggestion worries me.
  Is it reasonable to expect this arrangement to be secure
  against e.g. chosen-entropy attacks?
 
 Yes:  If the attacker knows exactly when the packets arrive (which he
 cannot) this cannot give him any additional knowledge about the state.

The threat model for yarrow and other SRNG's is that the attacker can not
only tell when entropy is coming in, but control its contents as well.
The idea is to build something which only fails if the attacker both knows
the state of the pool at some point and manages to control all attempted
reseedings.

-Bram




No Subject

1999-07-26 Thread Anonymous

On Sun, 25 Jul 1999, John Kelsey wrote:

 Has anyone looked at this from a cryptanalytic point of
 view?  I think there are chosen-input attacks available if
 you do this in the straightforward way.  That is, if I get
 control over some of your inputs, I may be able to alternate
 looking at your outputs and sending in new inputs, and mount
 an attack that isn't possible at all against RC4 as it's
 normally used.  (This comes out of conversations with Jon
 Callas, Dave Wagner, and Niels Ferguson, from a time when I
 considered designing a Yarrow-variant using RC4 as the
 underlying engine.)

Even aside from active attacks, there is a possible problem based on
the fact that RC4 can "almost" fall into a repeated-state situation.
RC4's basic iteration looks like:

(1) i += 1;
(2) j += s[i];
(3) swap (s[i], s[j]);
(4) output s[s[i] + s[j]];

(everything is mod 256)

The danger is that if it ever gets into the state j = i+1, s[j] = 1,
then it will stay that way.  It will increment i, then add s[i] to j,
which will also increment j.  Then when it swaps s[i] and s[j] it will
make s[j] be 1 again.

However in normal use this never happens, because this condition
propagates backwards as well as forwards; if we ever are in this state,
we always were in this state.  And since we don't start that way, we
never get that way.

Adding input entropy could break these rules.  If we fold in entropy
following the pattern of RC4 seeding, we alter line 2:

(2) j += s[i] + input();

Now it is no longer true that we can't fall into (or out of) this state.
If we had a non-zero input() value and then a bunch of zeros, we could
get into the repeated state and stay there.

It may not be that easy to recognize the repeated state, but it certainly
makes the RNG seem less robust.  The effect is that the "1" value keeps
bubbling through the s array, with every other value moving up one step
as the "1" value moves past it.  The s array ends up rotating very slowly,
not being mixed at all.  This is completely unlike normal RC4.
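
For anyone who wants to see this concretely, here is a small self-contained
check (my own sketch, not part of the original analysis).  It starts the
modified generator directly in the bad state -- the identity permutation
with i = 0, j = 1 gives j = i+1 and s[j] = 1 -- feeds it all-zero input
bytes, and verifies that the condition never clears:

#include <stdio.h>

static unsigned char s[256];
static unsigned char i, j;

/* One step of the modified iteration: line (2) becomes j += s[i] + input. */
static unsigned char step(unsigned char input)
{
    unsigned char t;
    i = (unsigned char)(i + 1);
    j = (unsigned char)(j + s[i] + input);
    t = s[i]; s[i] = s[j]; s[j] = t;
    return s[(unsigned char)(s[i] + s[j])];
}

int main(void)
{
    int n, stuck = 1;

    for (n = 0; n < 256; n++)          /* identity permutation, so s[1] = 1 */
        s[n] = (unsigned char)n;
    i = 0; j = 1;                      /* j = i+1 and s[j] = 1: the bad state */

    for (n = 0; n < 1000000; n++) {
        (void)step(0);                 /* all-zero inputs from here on */
        if (j != (unsigned char)(i + 1) || s[j] != 1) { stuck = 0; break; }
    }
    printf("degenerate state persisted: %s\n", stuck ? "yes" : "no");
    return 0;
}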

It is really not safe to mess with RC4's iteration rules like this.  The
cipher is rather brittle in this regard.



Re: depleting the random number generator

1999-07-26 Thread James A. Donald

--
   Oh dear!  This suggestion worries me.
   Is it reasonable to expect this arrangement to be secure
   against e.g. chosen-entropy attacks?

On Mon, 26 Jul 1999, James A. Donald wrote:
  Yes:  If the attacker knows exactly when the packets arrive (which he
  cannot) this cannot give him any additional knowledge about the state.

At 10:18 AM 7/26/99 -0700, bram wrote:
 The threat model for yarrow and other SRNG's is that the attacker can not
 only tell when entropy is coming in, but control its contents as well.

The assumption was that entropy was the time of arrival.  Even if the
attacker has control over the entropy added to the RC4 state, this cannot
give him any additional information about the state of the RC4 generator.
Thus the worst case is the same as if you did nothing, and of course from
time to time a packet will arrive that did not come from the attacker,
adding entropy of which the attacker is unaware and cannot control.

 The idea is to build something which only fails if the attacker both knows
 the state of the pool at some point and manages to control all attempted
 reseedings.

An RC4 state fulfills this requirement, plus if we reseed from the time of
arrival of packets, the attacker cannot control all incoming packets, thus
even if at some point he knows the state of the pool, that knowledge will
soon be lost.
--digsig
 James A. Donald
 6YeGpsZR+nOTh/cGwvITnSR3TdzclVpR0+pr3YYQdkG
 Kdyo4Br88Xlrpmdxedxsb+iRl+WbUY9Q2lin8JGP
 4hxPM9bxlC8ZeyNeBRnazTzz0j0G45vOXSp/3e6kl




Re: depleting the random number generator

1999-07-26 Thread Arnold G. Reinhold

At 1:49 PM -0700 7/25/99, David Wagner wrote:
In article v04011700b3c0b0807cfc@[24.218.56.100],
Arnold G. Reinhold [EMAIL PROTECTED] wrote:
 One nice advantage of using RC4 as a nonce generator is that you can easily
 switch back and forth between key setup and code byte generation. You can
 even do both at the same time. (There is no need to reset the index
 variables.) This allows you to intersperse entropy deposits and withdrawals
 at will.

Oh dear!  This suggestion worries me.
Is it reasonable to expect this arrangement to be secure
against e.g. chosen-entropy attacks?  [John Kelsey makes the same point]

You raise a good question, but I think I can demonstrate that it is safe.
Here is the inner loop of the algorithm I am proposing in its most extreme
case: generating cipher bytes and accepting entropy at the same time
(using Bruce Schneier's notation from Applied Cryptography, 2nd ed.):

i = i + 1 mod 256
j = j + S[i] + K[n] mod 256
swap S[i] and S[j]
t = S[i] + S[j] mod 256
next cipher byte = S[t]

Here K[n] is the next byte of entropy.

Note that RC4 code generation is exactly the same except that K[n] = 0 for
all n.
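
For concreteness, a direct C transcription of that combined step (my own
sketch; the function and parameter names are mine, not Reinhold's) looks
like the following.  Passing k = 0 gives plain RC4 output generation, and
passing the next entropy byte K[n] gives the combined deposit-and-withdraw
step:

static unsigned char rc4_step(unsigned char S[256],
                              unsigned char *i, unsigned char *j,
                              unsigned char k)   /* entropy byte K[n], or 0 */
{
    unsigned char t;
    *i = (unsigned char)(*i + 1);                /* i = i + 1 mod 256 */
    *j = (unsigned char)(*j + S[*i] + k);        /* j = j + S[i] + K[n] mod 256 */
    t = S[*i]; S[*i] = S[*j]; S[*j] = t;         /* swap S[i] and S[j] */
    return S[(unsigned char)(S[*i] + S[*j])];    /* cipher byte = S[S[i]+S[j]] */
}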

Assume an attacker initially does not know the state of the S array or the
value of j (you used 256 bytes of strong entropy as your initial RC4 key
and then discarded the next 256 cipher bytes like your mama taught you), but
does know i. (The attacker has been counting, knows the length of your
initial key setup and was able to shut out all other activity.)  Also
assume the attacker gets to choose each K[n] and then gets to see each
cipher byte.

If you look at the last two lines of the loop, you can see that the
attacker needs to know something about the new value of j to learn any
information about the state of the S array from a cipher byte.  Now focus
on the second line of the algorithm.  To know anything about the new value
of j, he needs to know something about the old value of j AND something
about the value of S[i].  By assumption he knows neither. Therefore he
learns nothing about the new value of j and thus nothing about the state of
the S array.

Since addition mod 256 is a group, being able to choose K is no more
helpful in learning the new value of j than knowing K's value, which you
always know during code generation in RC4 (it's zero, as pointed out
above). You might think there could be a special situation that you could
wait for where you can use your ability to pick K to keep RC4 in a small
loop, but step 1 ensures that a new S[i] is brought into the calculations
each time.

I believe this shows that adding entropy as you go, even if it might be
chosen by an attacker, is no more risky than a known plaintext attack
against vanilla RC4.

Of course in the original situation I proposed, the attacker could at best
choose only some of the entropy added.

For extra insurance, someone using RC4 as a nonce generator might want to
discard a random number (256) of cipherbytes after the initial key setup.
This would deny an attacker any knowledge of the value of i beforehand.
Also, generating nonces and adding entropy in separate operations, which is
the natural thing to do from a programming perspective, results in
additional mixing and further complicates the problem for an attacker.


At 11:55 PM -0500 7/25/99, John Kelsey wrote:

[Arnold R] In particular, if you deposit the time of each entropy
withdrawal, the proposed denial of service attack that
started this thread would actually replenish a few bits of
entropy with each service request.

[John K] This isn't a bad idea, but I'd be careful about assuming
that those times hold much entropy.  After all, a given
piece of code which has thirty calls to the PRNG probably
runs in about the same amount of time every time, barring
disk or network I/O.


I was careful to say a "a few bits of entropy with each service request."
The service requests I was referring to were the attacker's attempt to set
up an IPsec tunnel. These involve network traffic and so can be expected to
generate some entropy.  Here is John Denker's [EMAIL PROTECTED]
original description of the attack:

Step 1) Suppose some as-yet unknown person (the "applicant") contacts
Whitney and applies for an IPsec tunnel to be set up.  The good part is that
at some point Whitney tries to authenticate the Diffie-Hellman exchange (in
conformance with RFC2409 section 5) and fails, because this applicant is an
attacker and is not on our list of people to whom we provide service.  The
bad part is that Whitney has already gobbled up quite a few bits of entropy
from /dev/random before the slightest bit of authentication is attempted.

Step 2) The attacker endlessly iterates step 1.  This is easy.  AFAIK there
is no useful limit on how often new applications can be made.  This quickly
exhausts the entropy pool on Whitney.

Step 3a) If Whitney is getting key material from /dev/random, the result is
a denial of service.  All the IPsec 

Re: Security Lab To Certify Banking Applications (was Re: ECARM NEWS for July 23,1999 Second Ed.)

1999-07-26 Thread William H. Geiger III

In v0421012db3be70faae9c@[207.244.108.87], on 07/23/99 
   at 03:20 PM, Robert Hettinga [EMAIL PROTECTED] said:

 The Financial Services Security Laboratory will open July 28 in
 Reston, Va. The facility will be used to test software packages against
 a set of standards for securing e-commerce and bill-payment
 applications, as well as browsers and operating software.


Well I have my doubts on this. Either they refuse to certify Microsoft &
Netscape software and alienate 90% of the consumer market, or they do
certify them, making their certification worthless.

-- 
---
William H. Geiger III  http://www.openpgp.net
Geiger Consulting    Cooking With Warp 4.0

Author of E-Secure - PGP Front End for MR/2 Ice
PGP & MR/2 the only way for secure e-mail.
OS/2 PGP 5.0 at: http://www.openpgp.net/pgp.html
Talk About PGP on IRC EFNet Channel: #pgp Nick: whgiii

Hi Jeff!! :)
---




Re: Security Lab To Certify Banking Applications (was Re: ECARM NEWS for July 23,1999 Second Ed.)

1999-07-26 Thread Peter Gutmann

"William H. Geiger III" [EMAIL PROTECTED] writes:

In v0421012db3be70faae9c@[207.244.108.87], on 07/23/99
   at 03:20 PM, Robert Hettinga [EMAIL PROTECTED] said:

The Financial Services Security Laboratory will open July 28 in
Reston, Va. The facility will be used to test software packages against
a set of standards for securing e-commerce and bill-payment
applications, as well as browsers and operating software.

Well I have my doubts on this. Either they refuse to certify Microsoft &
Netscape software and alienate 90% of the consumer market, or they do certify
them, making their certification worthless.

Actually there's a way you can manage this (which was used by MS to get NT's 
ITSEC E3 certification in the UK):

  1. Define your own TOE (target of evaluation) for the certification 
 (translation: lower your expectations to the point where they're already 
 met).
  2. Have the product certified to your own TOE.
  3. Mark the TOE "Microsoft Confidential" and don't let anyone see it 
 (leading to considerable speculation about how you could possibly manage 
 to write a TOE which would allow NT to get an E3 certification).
  4. Tell everyone you have an E3 certified OS and sell it to government
 departments as secure.

This isn't to say that the certification process is a bad thing.  If it's done
properly it can lead to a reasonable degree of assurance that you really do 
have a secure product, which is exactly what was intended.  Unfortunately if 
all you're interested in is filling a marketing checkbox, you can do this as 
well.  This was the Orange Book's strength (and weakness): it told you exactly
what you had to do to get the certification, so you couldn't work around it
with fancy footwork.  OTOH it was also inflexible and had requirements which
didn't make sense in many instances, which is what led to the development of
alternatives like ITSEC/the Common Criteria.  For all its failings I prefer 
the Orange Book (if it can be made to apply to the product in question) 
because that way at least you know what you're getting.

(Given that NT now has a UK E3 certification, I don't think you need to get 
it recertified in the US, since it's transferable to all participating
countries, so I don't think it'd have to be certified by the above lab).

Peter.