Re: Fw: Fraud voting machines

2003-03-31 Thread David Wagner
Richard Guy Briggs  wrote:
If You Want To Win An Election, Just Control The Voting Machines
by Thom Hartmann
[...]
Six years later Hagel ran again, this time against Democrat Charlie Matulka
in 2002, and won in a landslide. As his hagel.senate.gov website says, "Hagel
was re-elected to his second term in the United States Senate on November
5, 2002 with 83% of the vote. That represents the biggest political victory
in the history of Nebraska."

What Hagel's website fails to disclose is that about 80 percent of those
votes were counted by computer-controlled voting machines put in place by
the company affiliated with Hagel. Built by that company. Programmed by that
company.

Breathless speculation aside, it oughtn't be that hard to test whether
Hagel's victory was credible.  Surely there were some polls of the voters.
You would think that if there was significant fraud through compromised
voting machines, then this fact would be very noticeable in the polls.
Does anyone know whether there is any evidence to back up these
allegations that Hagel's election results were fraudulent, or is this
article just blowing smoke?

I agree that we ought to take voting fraud seriously, and I'm very
critical of e-voting.  However, we also ought to get the facts, all the
facts, and to get them right.

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: Who's afraid of Mallory Wolf?

2003-03-25 Thread David Wagner
Ian Grigg writes:
 I don't think mere monetary costs are even germane to
 something like this.  The costs, publicly and personally,
 are of a different kind than money expresses.

I'm sorry to disagree, but I'm sticking to my
cost-benefit analysis:  monetary costs are totally
germane.  You see, we need some way in which
to measure the harm.  It's either subjective as
you describe above, which can't support an
infrastructure decision, or it's objective, which
means, money.

I'm skeptical.  Just because the cost is
subjective doesn't mean we should ignore it.

But, luckily, there is a way to turn the above
subjective morass of harm into an objective
hard number:  civil suit.

That's using a questionable measuring stick.
The damages paid out in a civil suit may be very
different (either higher, or lower) than the true
cost of the misconduct.  Remember, the courts are
not intended to be a remedy for all harms, nor could
they ever be.  The courts shouldn't be a replacement
for our independent judgement.



Re: Who's afraid of Mallory Wolf?

2003-03-24 Thread David Wagner
Ian Grigg  wrote:
By common wisdom, SSL is designed to defeat
the so-called Man in the Middle attack, or
MITM for short.

The question arises, why?

One possible reason: Because DNS is insecure.
If you can spoof DNS, you can mount a MITM attack.

A second possible reason: It's hard to predict
what attacks will become automated.  Internet
attacks seem to have an all-or-nothing feel:
either almost no one exploits them, or they get
exploited en masse.  The latter ones can be
really painful, if you haven't built in protection
in advance.

You could take your argument even further and
ask whether any crypto was needed at all.
After all, most attacks have worked by compromising
the endpoint, not by sniffing network traffic.
I'll let you decide whether to count this as a
success story for SSL, or as indication that the
crypto wasn't needed in the first place.
(I'm a little skeptical of this argument, by the
way, but hey, if we're playing devil's advocate,
why not aim high?)



Re: Brumley Boneh timing attack on OpenSSL

2003-03-24 Thread David Wagner
Nomen Nescio  wrote:
Regarding using blinding to defend against timing attacks, and supposing
that a crypto library is going to have support for blinding:

 - Should it do blinding for RSA signatures as well as RSA decryption?
 - How about for DSS signatures?

My guess is that it's not necessary, as the attacker doesn't
have as much control over the input to the modular exponentiation
process in the case of RSA signatures.  (For RSA decryption,
the attacker can specify the ciphertext freely.  However, for
signatures, the input to the modular exponentiation is a hash
of the attacker's chosen input, which gives the attacker a lot
less freedom to play Bleichenbacher-like games.)

But then, the recent Klima-Pokorny-Rosa paper shows how even
just a tiny crack can lead to subtle, totally unexpected attacks.
Who would have thought that SSL's version rollback check (two bytes
in the input to the modular exponentiation) could enable such a
devastating attack?  Not me.

The Boneh-Brumley and KPR papers have made me much more paranoid
about side-channel attacks.  As a result, I might turn blinding on
even for signatures by default, out of caution, even though I can't
see how such an attack could possibly work.

 - How about for ElGamal decryption?
 - Non-ephemeral (static) DH key exchange?

Yes, I think I'd use side channel defenses (like blinding) here.
I don't know of any attacks off the top of my head, but it sure
seems plausible to me that there might be some.

 - Ephemeral DH key exchange?

I wouldn't tend to be very worried about ephemeral exchanges,
since all the attacks we've seen so far require many interactions
with the server with the same key.  I could be wrong, but this
seems pretty safe to me.

In other words, what do we need as far as blinding support either in
developing a crypto library or in evaluating a crypto library for use?

Suppose we are running a non-SSL protocol but it is across a real-time
Internet or LAN connection where timing attacks are possible.  And suppose
our goal is not to see a paper and exploit published within the next
three years telling how to break the protocol's security with a few
hours of connect time.

Good question.
Personally, I'd enable side channel defenses (like blinding) by
default in the crypto library in every place that the library does
some lengthy computation with a long-lived secret.
But I'll be interested to hear what others think.



Re: Microsoft: Palladium will not limit what you can run

2003-03-14 Thread David Wagner
Hermes Remailer  wrote:
Hopefully this will shed light on the frequent claims that Palladium will
limit what programs people can run, [...]

That's a strawman argument.  The problem is not that Palladium will
*itself* directly limit what I can run; the problem is what Palladium
enables.  Why are you focusing on strawmen?  Why did you omit the real
concerns about technology like Palladium?

Palladium could enable big vendors to limit what applications I can run.
Palladium could enable big vendors to behave anti-competitively.
Palladium could enable big vendors to build document formats that
aren't interoperable with open-source software.  Palladium could be a
net negative for consumers.

Many of these risks are already possible today without Palladium, but
Palladium may increase the risks.  These risks are by no means guaranteed
to occur, but they are real.  Shouldn't we think carefully about
this technology before we deploy it?  Shouldn't we at least consider
these risks?



Re: Proven Primes

2003-03-07 Thread David Wagner
Bill Frantz  wrote:
I guess I'm dumb, but how do you verify a proof of Sophie Germain primeness
with less effort than to run the tests yourself?

There are ways to prove that p is prime so that the receiver
can verify the proof more easily than it would be to construct
a proof.  The verification process is deterministic (there is
no chance of error), unlike probabilistic primality tests.

Here's a simple method, due to Pratt.  By Lucas's theorem, p is
prime if and only if some g has multiplicative order exactly p-1
modulo p (equivalently, the group (Z/pZ)^* is cyclic of order p-1).
To show that g is such a generator, we can factor p-1 and check
that g^{p-1} = 1 (mod p) while g^{(p-1)/q} != 1 (mod p) for all
prime q that divide p-1.  Thus, the proof of primality for p will be
   proof(p) = (g, q_1, proof(q_1), q_2, proof(q_2), ...)
where q_1, q_2, ... is the list of prime factors of p and where
proof(q_i) is a recursive proof of primality for q_i.
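The verification side can be sketched in a few lines of Python (the certificate encoding below is my own illustrative choice, not a standard format):

```python
def verify_pratt(p, cert):
    """Verify a Pratt certificate that p is prime.
    cert maps each claimed prime to (generator, prime factors of p-1);
    2 and 3 are accepted as base cases."""
    if p in (2, 3):
        return True
    g, factors = cert[p]
    # Check that the claimed factorization of p-1 is complete.
    m = p - 1
    for q in factors:
        while m % q == 0:
            m //= q
    if m != 1:
        return False
    # g must have order exactly p-1 modulo p.
    if pow(g, p - 1, p) != 1:
        return False
    if any(pow(g, (p - 1) // q, p) == 1 for q in set(factors)):
        return False
    # Recursively verify each factor's own certificate.
    return all(verify_pratt(q, cert) for q in set(factors))

# Certificate that 23 is prime: 5 generates (Z/23Z)^*, and 22 = 2 * 11.
cert = {23: (5, [2, 11]), 11: (2, [2, 5]), 5: (2, [2, 2])}
assert verify_pratt(23, cert)
```

Verification costs only a few modular exponentiations per level of the recursion, and is deterministic: a valid certificate leaves no chance of error.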



Re: AES-128 keys unique for fixed plaintext/ciphertext pair?

2003-02-18 Thread David Wagner
Matt Crawford  wrote:
But here's the more interesting question. If S = Z/2^128 and F is the
set of all bijections S-S, what is the probability that a set G of
2^128 randomly chosen members of F contains no two functions f1, f2
such that there exists x in S such that f1(x) = f2(x)?

Vanishingly small, as you guessed.

Fix x0 in S.  Your probability is at most the probability that G has
no two functions f1, f2 with f1(x0) = f2(x0).  The latter is the same
as the probability that a set of 2^128 randomly chosen 128-bit values
contains no repeated elements, which is indeed vanishingly small (it is
(2^128)! / (2^128)^(2^128), which is something like 1/e^(2^128)).
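The same arithmetic is easy to check numerically at toy sizes standing in for 2^128:

```python
from math import exp

def p_no_collision(k, N):
    """Exact probability that k independent uniform draws from a set
    of N values are all distinct: N * (N-1) * ... * (N-k+1) / N^k."""
    p = 1.0
    for i in range(k):
        p *= (N - i) / N
    return p

# Drawing N values from a set of size N, as in the argument above:
# the no-collision probability is N!/N^N, which is e^-N up to a small
# polynomial factor (by Stirling's formula).
for N in (16, 64, 256):
    print(N, p_no_collision(N, N), exp(-N))
```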




Re: Columbia crypto box

2003-02-10 Thread David Wagner
Trei, Peter wrote:
The weird thing about WEP was its choice of cipher. It used RC4, a
stream cipher, and re-keyed for every block.  RC4 is
not really intended for this application. Today we'd
have used a block cipher with varying IVs if necessary

I suspect that RC4 was chosen for other reasons - ease of
export, smallness of code, or something like that. It runs fast,
but rekeying every block loses most of that advantage.

It's hard to believe that RC4 was chosen for technical reasons.
The huge cost of key setup per packet (equivalent to generating 256
bytes of keystream and then throwing it away) should dominate the other
potential advantages of RC4.
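To make the cost claim concrete, here is the standard RC4 key schedule and keystream generator in Python; the key schedule performs 256 swap steps, roughly the work of producing 256 keystream bytes, and per-packet rekeying pays that cost on every packet:

```python
def rc4_ksa(key):
    """RC4 key schedule: 256 swap operations over the state array."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) & 0xFF
        S[i], S[j] = S[j], S[i]
    return S

def rc4_stream(S, n):
    """Generate n bytes of RC4 keystream from state S."""
    S = S[:]
    i = j = 0
    out = bytearray()
    for _ in range(n):
        i = (i + 1) & 0xFF
        j = (j + S[i]) & 0xFF
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) & 0xFF])
    return bytes(out)

# WEP-style operation: every packet repeats the full key schedule.
S = rc4_ksa(b"\x01\x02\x03" + b"secret-key")  # per-packet IV || key
keystream = rc4_stream(S, 16)
assert len(keystream) == 16
```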

In any case, WEP would clearly look very different if it had been designed
by cryptographers, and it almost certainly wouldn't use RC4.  Look at
CCMP, for instance: it is 802.11i's chosen successor to, and re-design
of, WEP.  CCMP uses AES, not RC4, and I think that was a smart move.




Re: Prime numbers guru 'factors' down success

2003-01-20 Thread David Wagner
Ben Laurie  wrote:
William Knowles wrote:
 Prime numbers (such as 1, 5, 11, 37...) are divisible only by 
 themselves or 1. While smaller prime numbers are easy to make out, for 
 very large numbers, there never had been a formula for primality 
 testing until August 2002.

Doh! This is so untrue. The point is that they discovered a test that 
wasn't NP, for the first time. OK, so it's P but with a vast constant, 
but even so, there must be a point at which it gets better than the best 
NP methods. I wonder if anyone's worked out where that point is?

If you compare to randomized algorithms, I suspect the answer is never.
Randomized primality tests that run in polynomial time (with negligible
error probability) have been known for many years.




Re: Key Pair Agreement?

2003-01-20 Thread David Wagner
Jeroen C. van Gelderen wrote:
Here is a scenario: Scott wants Alice to generate a key pair after 
which he will receive Alice's public key. At the same time, Scott wants 
to make sure that this key pair is newly generated (has not been used 
before).

You might be able to have Scott specify a 64-bit string, and then ask
Alice to come up with an RSA public key that has this string as its low
64 bits.  I believe it is straightforward to modify the RSA key generation
algorithm to generate keypairs of the desired form.
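One way that modified key generation could be sketched (toy key sizes, and the agreed string taken to be odd, since an RSA modulus is always odd; the helper names are hypothetical):

```python
import secrets

def is_probable_prime(n, rounds=30):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for _ in range(rounds):
        a = secrets.randbelow(n - 3) + 2
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def random_prime(bits):
    while True:
        p = secrets.randbits(bits) | (1 << (bits - 1)) | 1
        if is_probable_prime(p):
            return p

def rsa_modulus_with_low_bits(target, bits=128):
    """Find primes p, q with p*q = target (mod 2^64).
    `target` must be odd, since an RSA modulus is odd."""
    M = 1 << 64
    while True:
        p = random_prime(bits)
        q0 = (target * pow(p, -1, M)) % M  # forces p*q = target (mod 2^64)
        for k in range(10_000):            # scan the residue class for a prime
            q = q0 + k * M
            if is_probable_prime(q):
                return p, q

target = secrets.randbits(64) | 1          # the agreed 64-bit string, odd
p, q = rsa_modulus_with_low_bits(target)
assert (p * q) % (1 << 64) == target
```

The trick is that once p is fixed, the congruence n = target (mod 2^64) pins down q modulo 2^64, and by Dirichlet there are plenty of primes in that residue class to scan for.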

If you're worried about the security of allowing Scott to choose the
low bits of Alice's public key, you could have Scott and Alice perform
a joint coin-flipping protocol to select a random 64-bit string that
neither can control, then proceed as before.
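A minimal commit-then-reveal coin flip along those lines might look like this sketch (all names illustrative):

```python
import hashlib
import secrets

# Scott commits to his random string first; Alice then reveals hers;
# neither party alone controls the result.

def commit(value, nonce):
    """Hash commitment to `value` under a random nonce."""
    return hashlib.sha256(nonce + value).digest()

scott_value = secrets.token_bytes(8)
scott_nonce = secrets.token_bytes(16)
c = commit(scott_value, scott_nonce)   # 1. Scott sends the commitment c

alice_value = secrets.token_bytes(8)   # 2. Alice replies with her string

# 3. Scott opens his commitment; Alice checks it against c.
assert commit(scott_value, scott_nonce) == c

# 4. The shared 64-bit string neither party controlled:
shared = bytes(a ^ b for a, b in zip(scott_value, alice_value))
assert len(shared) == 8
```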

I haven't worked out all the details, but something like this might
be workable.

In practice, you might also want to confirm that Alice knows her private
key (i.e., has ability to decrypt messages encrypted under her public
key).




Re: Micropayments, redux

2002-12-16 Thread David Wagner
Ed Gerck  wrote:
For example, in reply to my constraint  #2, you say:

 This is expected to be roughly counterbalanced by the
 number of unlucky users who quite (sic) while behind.

but these events occur under different models. If there
is no prepayment (which is my point #2) then many users
can quit after few transactions and there is no statistical
barrier to limit this behavior.

Yes, that's true.  Still, the loss is bounded, even if there is no
prepayment.  Suppose that each transaction is for 1cent, and we set things
up so you pay 1/100 of the time.  Then the most any given user can walk
off by quitting early is about $1.  The costs of customer acquisition
are probably far greater than $1.  For instance, many online payment
schemes were offering $10 coupons just for signing up to the system.
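A quick simulation of that bound (assumed parameters: 1-cent transactions, payout probability 1/100):

```python
import random

# Lottery-based micropayments: each 1-cent transaction triggers a
# 100-cent payment with probability 1/100.  A user who quits the moment
# a payment first comes due walks off with whatever was consumed so far.
# The walk-off is Geometric(1/100), so its mean is about 100 cents.

random.seed(1)

def walkoff_cents(p=0.01):
    """Cents of goods consumed before the first payment comes due."""
    n = 0
    while random.random() >= p:
        n += 1
    return n

trials = [walkoff_cents() for _ in range(20_000)]
mean = sum(trials) / len(trials)
print(f"average walk-off: ${mean / 100:.2f}")  # close to $1
```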

Remember also that this scheme requires strong is-a-person credentials,
so each person can probably play this game at most once.  This means
there is not much incentive for anyone to bother trying to game the
system on purpose.  And, as you say, it is not hard to tweak the system
to reduce the amount the bank can lose in this way, if you care.




Re: Micropayments, redux

2002-12-16 Thread David Wagner
Matt Crawford  wrote:
 No, it doesn't.  It doesn't take unlimited time for lottery-based
 payment schemes to average out; finite time suffices to get the
 schemes to average out to within any desired error ratio.

Strictly speaking, the average will come within your error tolerance
of the expected value *with probability near 1*.

Yes, but the probability of it being significantly worse than I claimed
(i.e., by more than a factor t) is exponentially small (in t).  One can
easily calculate concretely exactly what the risk curve looks like.
I'll spare everyone the details and just say that I see no reason why
this should be a showstopper in practice.
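For the curious, the risk curve can be computed exactly from the binomial tail (toy parameters assumed: 10,000 one-cent transactions, payout probability 1/100, so 100 expected payouts):

```python
from math import exp, lgamma, log

def log_binom_pmf(n, p, k):
    """log of P[X = k] for X ~ Binomial(n, p), via log-gamma."""
    return (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
            + k * log(p) + (n - k) * log(1 - p))

def binom_tail(n, p, k):
    """P[X >= k] for X ~ Binomial(n, p), summed in log space to
    avoid overflow in the binomial coefficients."""
    return sum(exp(log_binom_pmf(n, p, i)) for i in range(k, n + 1))

# Probability the bank pays out more than t times the expectation:
# it falls off exponentially in t, as claimed.
n, p = 10_000, 0.01
for t in (1.5, 2, 3, 4):
    print(t, binom_tail(n, p, int(t * n * p)))
```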




Re: DOS attack on WPA 802.11?

2002-12-08 Thread David Wagner
Arnold G. Reinhold wrote:
If I am right and WPA needlessly 
introduces a significant denial of service vulnerability, then it 
should be fixed. If I am wrong, no change is needed of course.

But TKIP (the part of WPA you're talking about) is only a
temporary measure, and will soon be replaced by AES-CCMP.

The question is not "Should we replace TKIP?", because the
answer to that is obvious: yes, we should, and we will.
The question is: "Why bother working on a `fix' to WPA that
will likely never be deployed and that will be obsoleted
in a few years by the spread of AES-CCMP?"




Re: Why is RMAC resistant to birthday attacks?

2002-10-24 Thread David Wagner
Ed Gerck  wrote:
Wei Dai wrote:
 No matter how good the MAC design is, its internal collision probability
 is bounded by the inverse of the size of its internal state space.

Actually, for any two (different) messages the internal collision probability
is bounded by the inverse of the SQUARE of the size of the internal state space.

No, I think Wei Dai had it right.  SHA1-HMAC has a 160-bit internal state.
If you fix two messages, the probability that they give an internal collision
is 1/2^160.

Maybe you are thinking of the birthday paradox.  If you have 2^80 messages,
then there is a good probability that some pair of them collide.  But this
is the square root of the size of the internal state space.  And again, Wei
Dai's point holds: the only way to reduce the likelihood of internal collisions
is to increase the internal state space.

In short, I think Wei Dai has it 100% correct.
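The birthday behavior is easy to see at toy scale. Here is a sketch using SHA-1 truncated to 16 bits as a stand-in for a MAC's internal chaining state (the truncation is purely illustrative):

```python
import hashlib

def toy_hash(msg):
    """SHA-1 truncated to 16 bits -- a toy stand-in for a MAC's
    internal chaining state."""
    return int.from_bytes(hashlib.sha1(msg).digest()[:2], "big")

# For any two fixed messages the collision probability is 1/2^16,
# matching the per-pair bound.  But across many messages the birthday
# paradox kicks in around sqrt(2^16) = 2^8 messages:
seen = {}
i = 0
while True:
    v = toy_hash(i.to_bytes(4, "big"))
    if v in seen:
        break
    seen[v] = i
    i += 1
print(f"first internal collision after {i} messages (sqrt(2^16) = 256)")
```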

Not really. You can prevent internal collision attacks, for example, by using
the envelope method (e.g., HMAC) to set up the MAC message.

This is not accurate.  The original van Oorschot and Preneel paper
describes an internal collision attack on MD5 with the envelope method.
Please note also that HMAC is different from the envelope method, but
there are internal collision attacks on HMAC as well.  Once again, I
think Wei Dai was 100% correct here, as well.

You might want to consider reading some of the literature on internal
collision attacks before continuing this discussion too much further.
Maybe all will become clear then.




Re: collision resistance -- Re: Why is RMAC resistant to birthday attacks?

2002-10-24 Thread David Wagner
 There seems to be a question about whether:
 
 1. the internal collision probability of  a hash function is bounded by the
 inverse of the size of its internal state space, or
 
 2. the internal collision probability of a hash function is bounded by the
 inverse of the square root of the size of its internal state space.
[...]
 Thus, if we consider just two messages, affirmation #1 holds, because
 P reduces to 1/S. If we consider n > 2 messages, affirmation #2 holds (the
 birthday paradox).

Umm, that's basically what I said in my previous message to the
cryptography mailing list.  But my terminology was better chosen.
In case 2, calling this "the internal collision probability" is
very misleading; there is no event whose probability is the inverse
of the square root of the size of the internal state space.

Again, this is nothing new.  This is all very basic stuff, covered
in any good crypto textbook: e.g., _The Handbook of Applied Cryptography_.
You might want to take the time to read their chapters on hash functions
and message authentication before continuing this discussion.




Re: unforgeable optical tokens?

2002-09-24 Thread David Wagner

Bill Frantz  wrote:
If the challenger selects several of his stored challenges, and asks the
token reader to return a secure hash of the answers (in order), no
information will be leaked about the response to any individual challenge.
This procedure will allow the challenger to perform a large number of
verifications with a relatively small number of stored challenge-response
pairs.

I don't think this works.  A malicious reader could remember all the
challenges it gets and record all the responses it measures (before
hashing).  If the number of possible challenges is small, the malicious
reader might learn the entire challenge-response dictionary after only
a few interactions.  From that point on, the malicious reader would be
able to spoof the presence of the token.

(Of course, if malicious readers aren't a threat, then you don't
need fancy uncloneable tokens.  A simple cryptographic key written
on a piece of paper suffices.)

So I think you really do need to use a different challenge every time.




Re: unforgeable optical tokens?

2002-09-21 Thread David Wagner

Barney Wolff  wrote:
Actually, it can.  The server can store challenge-responses in pairs,
then send N as the challenge and use the N+1 response (not returned)
as the key.

But why bother?  What does this add over just using crypto
without their fancy physical token?  The uncloneability of
their token is irrelevant to this purpose.  You might as well
just carry around a piece of paper, or a floppy disk, with a
list of keys on it.




Re: unforgeable optical tokens?

2002-09-20 Thread David Wagner

Perry E. Metzger wrote:
But if you can't simulate the system, that implies that the challenger
has to have stored the challenge-response pairs because he can't just
generate them, right? That means that only finitely many are likely to
be stored. Or was this thought of too?

I believe the idea is that there are gazillions of possible challenges.
The challenger picks a thousand randomly in advance, scans the token
from the corresponding thousand different angles to get the thousand
responses, and stores them all.  Then, later, the challenger can select
one of his stored challenges, pass it to a remote entity, and demand
the correct answer.  Of course, a challenger must never re-use a
challenge.

I find the physical token a poor replacement for cryptography, when the
goal is challenge-response authentication over a network.  In practice,
you never really want just challenge-response authentication; you
want to set up a secure, authenticated channel to the other party,
which means you probably also need key distribution functionality.
The physical token suggested here doesn't help with that at all.

It seems to me the real value of the physical token is that it provides a
piece of hardware that is (hopefully) very expensive to clone.  That's an
interesting capability to have in your bag of tricks.




Re: Cryptogram: Palladium Only for DRM

2002-09-19 Thread David Wagner

Peter N. Biddle wrote:
[...] You can still extract everything in Pd via a HW attack. [...]

How is this BORE resistant? The Pd security model is BORE resistant for a
unique secret protected by a unique key on a given machine. Your hack on
your machine won't let you learn the secrets on my machine; to me that's
BORE resistant.  [...]

Yes, but...

For me, BORE (Break Once Run Everywhere) depends on the application.
You can't analyze Palladium in isolation, without looking at the app,
too.  It doesn't make sense to say "Palladium isn't susceptible to BORE
attacks" if the applications themselves are subject to BORE attacks.

For example, if a record company builds an app that stores an MP3 of
the latest Britney Spears song in a Palladium vault, then this app
will be susceptible to BORE attacks.  Extracting that MP3 from any one
machine suffices to spread it around the world.  It won't comfort the
record company much to note that the attacker didn't learn the Palladium
crypto keys living on other machines; the damage has already been done.
Palladium doesn't make DRM resistant to BORE attacks.  It can't.

In short, there are some applications that Palladium can't make
BORE-resistant.  Some apps (e.g., DRM) are simply fundamentally fragile.

Maybe a more interesting question is: For which apps does Palladium
provide resistance against BORE attacks that is not available by other
means?




Re: Cryptogram: Palladium Only for DRM

2002-09-17 Thread David Wagner

AARG!Anonymous  wrote:
David Wagner writes:
 Standard process separation, sandboxes, jails, virtual machines, or other
 forms of restricted execution environments would suffice to solve this
 problem.

Nothing done purely in software will be as effective as what can be done
when you have secure hardware as the foundation.

I wasn't thinking of pure software solutions.  I was thinking of a
combination of existing hardware + new software: use the MMU to provide
separate address spaces, and use a secure VM or OS kernel to limit what
those processes can do.  As far as I can see, this can provide just as
much protection against viruses for your bank account as Palladium can.

In general, with software and existing hardware working together, I
suspect we can already do everything Palladium can do, except for the DRM
and related applications founded on taking control away from the owner
of the machine.  Maybe I'm missing something.  Still, it seems to me that
Palladium would be much more compelling if it left out the tamper-resistant
chip and gave up on the semi-coercive DRM-like applications.




Re: Quantum computers inch closer?

2002-09-02 Thread David Wagner

David Honig  wrote:
At 08:56 PM 8/30/02 -0700, AARG!Anonymous wrote:
The problem is that you can't forcibly collapse the state vector into your
wished-for eigenstate, the one where the plaintext recognizer returns a 1.
Instead, it will collapse into a random state, associated with a random
key, and it is overwhelmingly likely that this key is one for which the
recognizer returns 0.

I thought the whole point of quantum-computer design is to build
systems where you *do* impose your arbitrary constraints on the system.

Look again at those quantum texts.  AARG! is absolutely correct.
Quantum doesn't work like the original poster seemed to wish it would;
state vectors collapse into a random state, not into that one magic
needle-in-a-haystack state you wish it could find.




Re: Quantum computers inch closer?

2002-09-02 Thread David Wagner

Ed Gerck  wrote:
The original poster is correct, however, in that a metric function can be
defined and used by a QC to calculate the distance between a random state
and an eigenstate with some desired properties, and thereby allow the QC
to define when that distance is zero -- which provides the
needle-in-the-haystack solution, even though each random state vector can
be seen as a mixed state and will, with higher probability, be
representable by a linear combination of eigenvectors with random
coefficients, rather than by a single eigenvector.

I must admit I can't for the life of me figure out what this paragraph
was supposed to mean.  Maybe that's quantum for you.

But I take it we agree: The original poster's suggested scheme for
cracking Feistel ciphers doesn't work, because quantum computers don't
work like that.  Agreed?




Re: [SIMSOFT] Protecting Privacy with Translucent Databases

2002-08-03 Thread David Wagner

R. A. Hettinga wrote:
Protecting Privacy with Translucent Databases

Last week, officials at Yale University (http://www.yale.edu/) complained to
the FBI that admissions officers from Princeton University
(http://www.princeton.edu/index.shtml) had broken into
a Yale Web site and downloaded admission decisions on 11 students who had
applied to both schools. [...]
Unfortunately, the security on the Yale Web site was atrocious: all anybody
needed to look up a student's record was that student's name, social
security number (SSN), and date of birth. [...]
[ proposes a solution ]


I'm glad commentators are beginning to point out that
more care should be put into protecting personal information.
However, the solution proposed in this article seems to me to
be more complicated than necessary.

I can't find any legitimate reason why colleges should need your
SSN when deciding whether to admit you.  They get away with it because
they can, but that doesn't mean they are right to do so.

It seems to me that a much more privacy-friendly solution would be
to simply refrain from asking for sensitive personal information like
SSN and date of birth -- name and a random unique identifier printed
on the application form ought to suffice.  (If SSN is later needed
for financial aid purposes, it could be requested after the student
decides to matriculate.)

Am I missing anything?




Re: building a true RNG

2002-08-01 Thread David Wagner

 David Wagner [EMAIL PROTECTED] writes:
  I don't know of any good cryptographic hash function that comes with
  a proof that all outputs are possible.  However, it might not be too
  hard to come up with plausible examples.  For example, if we apply the
  Luby-Rackoff construction (i.e., 3 rounds of a Feistel cipher), with
  ideal hash functions in each round, does this have the desired properties?
  It might.
 
 This seems to define a block cipher with no key, which is collision
 free but not one-way.  Am I misunderstanding what you're proposing?

You understood it perfectly.  Good point.
I didn't notice that problem.  Harrumph.

Thanks for catching my oversight!
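The oversight is easy to demonstrate: with no key, anyone can run the 3-round Feistel construction backwards, so it cannot be one-way. A small Python sketch (the round function and block sizes are illustrative assumptions):

```python
import hashlib

def F(r, round_no):
    """Public round function: a hash, parameterized only by round number."""
    data = bytes([round_no]) + r.to_bytes(8, "big")
    return int.from_bytes(hashlib.sha256(data).digest()[:8], "big")

def feistel(x, rounds=3):
    """3-round Feistel network on 128-bit blocks, with no secret key."""
    L, R = x >> 64, x & (2**64 - 1)
    for i in range(rounds):
        L, R = R, L ^ F(R, i)
    return (L << 64) | R

def feistel_inv(y, rounds=3):
    """Anyone can invert it by running the rounds backwards.  With no
    key, the permutation is collision-free but not one-way."""
    L, R = y >> 64, y & (2**64 - 1)
    for i in reversed(range(rounds)):
        L, R = R ^ F(L, i), L
    return (L << 64) | R

x = 0x0123456789abcdef_fedcba9876543210
assert feistel_inv(feistel(x)) == x
```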




Re: building a true RNG

2002-07-30 Thread David Wagner

Amir Herzberg wrote:
But there's a big difference: the random oracle `assumption` is clearly not
valid for SHA-1 (or any other specific hash function).

Well, the random oracle model has problems, but I think those problems
are a bit more subtle than just an assumption that is true or false.

So the question is again: what is an assumption which we can test SHA-1
against, and which is sufficient to make the `entropy crunching alg` secure?

Hmm; I thought I answered this before.  Was it unclear?  If so, please
ask.  In any case, here's a summary.  In the standard model (without
random oracles), there is *no* such assumption.  There's no hope for
finding such an assumption, if you want to build a general-purpose
entropy cruncher that works for any distribution on the input samples.
One can prove this.  No matter what function you choose, there is an
input distribution that makes this function inadequate.




Re: building a true RNG

2002-07-29 Thread David Wagner

 An example: presume we take a simple first order statistical model. If our
 input is an 8-bit sample value from a noise source, we will build a 256
 bin histogram. When we see an input value, we look its probability up in
 the model, and discard every 1/(p(x)-1/256)'th sample with value x. When
 this happens, the sample is just eaten and nothing appears in the output;
 otherwise we copy.

I understand what you're trying to say, but this will not give a
general-purpose function that doesn't waste entropy regardless of the
input distribution.  This only works when the distribution on the input
stream consists of independent, memoryless samples from some distribution
on 8-bit values.




Re: building a true RNG

2002-07-29 Thread David Wagner

Barney Wolff  wrote:
This leads me to ask what may be a laughably naive question:
Do we even know that the popular hash functions can actually generate
all 2^N values of their outputs?

It seems very unlikely that they can generate all 2^N outputs
(under current knowledge).  However, they satisfy the next-best
thing: their output appears to be indistinguishable from uniform to
computationally-bounded observers, hence it's as good as if they
could generate all 2^N outputs for most purposes.




Re: building a true RNG

2002-07-29 Thread David Wagner

Oh dear.  On re-reading your message I now suspect that what you asked
is not what I originally thought you asked.  I see two questions here:
  Q1: If we cycle through all N-bit messages, are all
  2^N output values possible?
  Q2: If we cycle through all messages (possibly very long
  or very short), are all 2^N output values possible?
On first reading I thought you were asking Q1, but it now occurs to me
you were probably asking Q2.  I apologize profusely for any confusion
I may have caused.

Anyway, the answer to Q1 is likely to be No.  I'd guess that the answer
to Q2 is probably Yes, or close to it.

For the first scenario, if the hash behaves like a random oracle,
then one would expect only 2^N * (1 - 1/e) of the possible outputs to
be attainable.  Of course, the forbidden outputs are not easy to find;
the best way I know is by exhaustive search over all 2^N N-bit messages.
For the second scenario, if the hash behaves like a random oracle, then
one would expect all outputs to be attainable.  No significant deviation
from the random oracle model is known for SHA1 (well, with one exception
that doesn't appear to be relevant here).  That said, this is not a proof,
and my answers just represent my best guesses.
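The 1 - 1/e figure for the first scenario is easy to check empirically on a
small domain.  A toy simulation (the domain size 1024 is an arbitrary choice
for illustration):

```python
import random

def coverage_fraction(n, trials=200, seed=0):
    """Fraction of the range hit by a random function from an n-set to itself.

    For a random oracle on N-bit inputs and outputs, each output value is
    missed with probability (1 - 1/n)^n -> 1/e, so the expected fraction
    of attainable outputs is 1 - 1/e, roughly 0.632.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        hit = {rng.randrange(n) for _ in range(n)}  # image of a random function
        total += len(hit) / n
    return total / trials

frac = coverage_fraction(1024)
print(round(frac, 3))  # about 0.632, i.e. 1 - 1/e
```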




Re: building a true RNG

2002-07-29 Thread David Wagner

 3) For a one-way hash function one should not expect a _constructive_ 
 proof that it generates all possible codes;  such a construction
 would violate the one-way property.

Nitpick: the last statement does not seem quite right to me.  I'm thinking
of the notion of a one-way permutation.  For instance, the RSA function
f(x) = x^3 mod n is conjectured to be a one-way permutation, assuming n
is a RSA modulus with unknown factorization and the RSA assumption holds.
(I'm being a little fast and loose with the formal details, but I hope
this conveys the idea.)
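A toy check of the permutation claim, with a deliberately tiny modulus
(n = 55 is my choice for illustration; it is far too small to be one-way,
and only demonstrates bijectivity):

```python
from math import gcd

# f(x) = x^3 mod n is a permutation of Z_n when gcd(3, lcm(p-1, q-1)) = 1.
p, q = 5, 11
n = p * q
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)  # Carmichael lambda(n) = lcm(p-1, q-1)
assert gcd(3, lam) == 1                       # cubing is invertible mod n
image = {pow(x, 3, n) for x in range(n)}
print(len(image) == n)  # True: cubing hits every residue exactly once
```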

That said, I'm not claiming that the RSA function would make a good
cryptographic hash function.  Cryptographic hash functions are expected
to be a lot more than just one-way and collision-free.

I don't know of any good cryptographic hash function that comes with
a proof that all outputs are possible.  However, it might not be too
hard to come up with plausible examples.  For example, if we apply the
Luby-Rackoff construction (i.e., 3 rounds of a Feistel cipher), with
ideal hash functions in each round, does this have the desired properties?
It might.

On the gripping hand, I don't think this is a real issue in practice.
SHA1 is probably good enough for all practical purposes that I can
think of.




Re: building a true RNG

2002-07-29 Thread David Wagner

 To test a hash function h() whose range is S,
 let F be the set of balanced functions from S to {0, 1}.  (Balanced
 meaning that each f in F maps exactly half of S to 0 and half to 1.)
 If you can contrive to choose many members of F at random, and compose
 them with h for many arguments of h, you should be able to set
 confidence limits on how much of S is covered by h.

Can you elaborate?  What would the computational complexity of this be?

Estimating the size of {x : f(h(x))=1} to within relative error e
requires O(1/e^2) evaluations of h, if I understand correctly.  If we
consider the difference between a hash function that can hit all of
S, and another hash function that misses only one element of S, we'll
need to resolve the size of these sets to within relative error 1/|S|.
That suggests we'll need |S|^2 evaluations of h, which is infeasible
for SHA1 and slower than the naive O(|S|) algorithm.
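The O(1/e^2) sampling cost above is just the standard error of a Monte Carlo
estimate.  A sketch (the balanced predicate on the digest's low bit is my own
illustrative choice):

```python
import hashlib
import random

def estimate_fraction(f, m, seed=0):
    """Monte Carlo estimate of Pr[f(h(x)) = 1] over random inputs x, h = SHA1.

    The standard error is about sqrt(p(1-p)/m), so resolving the fraction
    to within relative error e takes on the order of 1/e^2 evaluations of
    h -- which is why this test is hopeless at e = 1/|S| for SHA1.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(m):
        x = rng.randbytes(16)
        digest = hashlib.sha1(x).digest()
        hits += f(digest)
    return hits / m

f = lambda digest: digest[-1] & 1   # a (heuristically) balanced predicate
est = estimate_fraction(f, 10_000)
print(round(est, 2))  # near 1/2, with standard error about 0.005
```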




Re: building a true RNG

2002-07-29 Thread David Wagner

 The reason for batching entropy input is to prevent someone who has 
 broken your system once from discovering each small entropy input by 
 exhaustive search.  (There was a nice paper pointing this out in. If 
 someone has the reference...)

I believe you are referring to the state compromise attacks
described in the following paper:
  J. Kelsey, B. Schneier, D. Wagner, C. Hall,
  Cryptanalytic Attacks on Pseudorandom Number Generators,
  FSE'98.  http://www.counterpane.com/pseudorandom_number.html
I once wrote a short note about the relevance of this to IPSec:
  http://www.cs.berkeley.edu/~daw/my-posts/using-prngs




Re: building a true RNG

2002-07-27 Thread David Wagner

Amir Herzberg wrote:
So I ask: is there a definition of this `no wasted entropy` property, which
hash functions can be assumed to have (and tested for), and which ensures
the desired extraction of randomness?

None that I know of.  I'm not aware of much work in the crypto literature
on this topic.

Actually, there is not much hope for such a property.  It is pretty easy
to see that, if we make no assumptions on the entropy inputs other than
they have sufficient entropy, then no single deterministic algorithm can
ever be good at extracting randomness.  If we fix any single extraction
algorithm A, there exists a distribution D on the inputs which makes it
give non-uniform output (for example, D might be the uniform distribution
on the set {x : A(x) has its first ten bits zero}).  This is a standard
observation from the theory community's work on derandomization.
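The adversarial distribution D is entirely constructive.  A sketch, using
SHA-256 as a stand-in for the fixed extraction algorithm A (the ten-zero-bits
condition mirrors the example in the text):

```python
import hashlib
import random

def adversarial_samples(k, seed=0):
    """Draw k inputs from a distribution D tailored against A = SHA-256.

    D is (close to) uniform on {x : A(x) has its first ten bits zero}: we
    sample random 16-byte strings and keep those whose hash starts with
    ten zero bits.  The inputs still carry plenty of entropy (roughly
    128 - 10 bits each), yet the extractor's output is blatantly
    non-uniform -- every output begins with ten zero bits.
    """
    rng = random.Random(seed)
    out = []
    while len(out) < k:
        x = rng.randbytes(16)
        d = hashlib.sha256(x).digest()
        if d[0] == 0 and (d[1] & 0xC0) == 0:  # first ten bits are zero
            out.append(x)
    return out

for x in adversarial_samples(5):
    d = hashlib.sha256(x).digest()
    print(d[0] == 0 and (d[1] & 0xC0) == 0)  # True every time, by construction
```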

The only way out of this I can see is to assume we have a small seed
of uniformly distributed randomness on the side.   This is exactly the
direction explored in the theory community in work on extractors, the
leftover hashing lemma, and the like.  However, from a cryptographic point
of view, such an assumption is highly unreasonable (even worse than the
random oracle assumption).
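For concreteness, the simplest seeded extractor from that literature is a
random linear map over GF(2) -- random 0/1 matrices form a 2-universal family,
which is what the leftover hashing lemma needs.  A sketch (dimensions are
arbitrary; note the seed must be uniform and independent of the input, which
is exactly the assumption objected to above):

```python
import random

def linear_extract(x_bits, seed_matrix):
    """Seeded extractor: multiply the input bit-vector by a 0/1 matrix over GF(2).

    By the leftover hash lemma, if x has enough min-entropy and the seed
    matrix is uniformly random and independent of x, the output is
    statistically close to uniform.
    """
    return [sum(row[i] & x_bits[i] for i in range(len(x_bits))) % 2
            for row in seed_matrix]

rng = random.Random(0)
n, m = 32, 8                       # extract 8 output bits from 32 input bits
seed = [[rng.randrange(2) for _ in range(n)] for _ in range(m)]
x = [rng.randrange(2) for _ in range(n)]
print(len(linear_extract(x, seed)))  # 8
```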

In short: I know of no better way to analyze cryptographic randomness
extraction than to use the random oracle model.




Re: building a true RNG

2002-07-27 Thread David Wagner

John S. Denker wrote:
Amir Herzberg wrote:
 So I ask: is there a definition of this `no wasted entropy` property, which
 hash functions can be assumed to have (and tested for), and which ensures
 the desired extraction of randomness?

That's the right question.

The answer I give in the paper is 

 What we are asking is not really very special. We
 merely ask that the hash-codes in the second
 column be well mixed. 

Alas, that's not a very precise definition.

Actually, my intuition differs from yours.  My intuition is that
entropy collection requires fairly strong assumptions about the hash.
For instance, collision-freedom isn't enough.  One-wayness isn't enough.
We need something stronger, and something that appears difficult to
formalize in any precise, mathematically rigorous way.




Re: building a true RNG (was: Quantum Computing ...)

2002-07-24 Thread David Wagner

Eugen Leitl  wrote:
Is there any point in compressing the video before running it through a 
cryptohash?

No.  (assuming you're talking about lossless compression)

In general, any invertible transformation neither adds nor subtracts
entropy, and hence is extremely unlikely to make any difference to the
performance of the hash function (assuming SHA-1 is cryptographically
secure, which it is currently believed to be).  Lossless compression
is just one special kind of invertible transformation.
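The invertibility point is easy to demonstrate: since the raw stream can be
recovered exactly from the compressed stream, the two carry identical entropy,
and hashing one is no better than hashing the other.  A sketch with zlib
(DEFLATE is lossless; the input bytes are an arbitrary repetitive example):

```python
import hashlib
import zlib

raw = bytes(range(256)) * 64           # highly redundant input
packed = zlib.compress(raw)
# Invertible: compress(x) determines x, so entropy is neither added nor lost.
print(zlib.decompress(packed) == raw)  # True
# The digests differ, but each hash sees exactly the same entropy.
print(hashlib.sha1(raw).hexdigest() != hashlib.sha1(packed).hexdigest())
```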




Re: Bernstein's NFS machine

2002-03-02 Thread David Wagner

Very interesting.  Thanks for the analysis.

Bernstein's analysis is based on space*time as your cost metric.
What happens if we assume that space comes for free, and we use simply
time as our cost metric?  Do his techniques lead to an improvement in
this case?

It looks to me like there is no improvement in the first phase, but
there is some improvement in the second phase (reduction in time from
B^2 to B^1.5).  Then, of course, we need to re-consider the parameters
to balance the work, and it seems we would want to choose E = B^1.25,
subject to the constraint that E is large enough to produce at least
B relations.  What does this solve for, in terms of L?  How much do his
matrix solving techniques speed up the total computation, in the case
where we count only the running time of the adversary?




Re: password-cracking by journalists... (long, sorry)

2002-01-21 Thread David Wagner

Will Rodger  wrote:
It included all sorts of people traipsing up to 
Capitol Hill to make sure that ordinary research and system maintenance, 
among other things, would not be prosecuted.

I think our understanding of the DMCA has changed
significantly since it was first introduced, and it's
not clear to me that the DMCA provides the level of
protection that should perhaps be there.

For instance, none of the exemptions for research
apply to 1201(b), the half of the DMCA that bans making
circumvention devices (as opposed to 1201(a), which bans
circumventing and does have a few exemptions).  As far as
I can tell, 1201(b) appears to be a real concern for
certain types of research in this field.

OK. so that's my rap on why this law is bad but won't likely put anyone on 
this list in jail.

The biggest issue for researchers may be not in the DMCA's
criminal provisions, but rather in its civil provisions.
(i.e., money, not jailtime)  And the civil aspects of the
DMCA have a truly sharp sting.

I spent a lot of time talking to lawyers at UC Berkeley and
elsewhere about this very issue, and there appears to be a real
but very-hard-to-quantify risk -- a risk to scientists that should
not be lightly dismissed.

Given this risk, I've decided I cannot afford to work any further
in the area of copy protection as long as the uncertainty remains.
And how in good conscience can I advise students working with me
to work in this troubled area?  I can't.






Re: Computer Security Division Activities

2001-10-13 Thread David Wagner

Mike Brodhead  wrote:
Just about all of the private-sector conferences I have attended
require registration.

I think this is a poor example.  I expect you'd be welcome to use the
name 'John Smith' and pay cash, if you like.

I think the real point is this: We see, all too often, cases where it is
claimed that sacrifices of civil liberties are necessary for security,
yet upon closer inspection one gets the impression that those sacrifices
may not provide any security benefits at all.  Identification requirements
may be a good example of this: if teenagers have no problems obtaining
fake ID, what can we conclude about a terrorist operation?

In a perfect world, we'd only sacrifice civil liberties when there is
sufficient benefit to security.  In the real world, though, it seems
that often there is great pressure to do something visible, even if
what you do doesn't have any true security value.  It is not too hard to
find many examples of security mechanisms that improve the perception
of security (i.e., give warm fuzzy feelings to the uninformed) but which
actually contribute very little to real security.  Think of those photo
ID requirements when you fly, for example -- I have yet to hear anyone
articulate how they help prevent terrorism (as opposed to improving the
airlines' bottom line or reassuring the public).  While such measures may
be politically attractive and perhaps even defensible in some situations,
they bring many risks with them, and I do think we need to be careful
about how we employ them.

As for Gilmore's specific example, I do not take a strong position in
either direction.  However, whatever you think about the specific notion
of a new short-term ad-hoc ID requirement for NIST workshops, I think
his general point has considerable merit that we should not overlook.






Re: chip-level randomness?

2001-09-20 Thread David Wagner

Bill Frantz  wrote:
At 2:17 PM -0700 9/19/01, Theodore Tso wrote:
It turns out that with the Intel 810 RNG, it's even worse because
there's no way to bypass the hardware whitening which the 810 chip
uses.

Does anyone know what algorithm the whitening uses?

Just like von Neumann's unbiasing procedure, but with a few bits of
state instead of just one.  See Paul Kocher's analysis for the details.

In short, the whitening is only enough to reduce any biases in the raw
generator, not to remove them.
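For reference, von Neumann's one-bit procedure looks like this in Python
(the 80/20 test source is an illustrative choice).  The key property is that
for independent flips with any fixed bias p, the pairs (0,1) and (1,0) are
equally likely, so the output is exactly unbiased; a few bits of state
generalize this, but neither version removes correlation in the raw source:

```python
import random

def von_neumann(bits):
    """von Neumann unbiasing: take non-overlapping pairs, emit the first
    bit of each unequal pair, discard (0,0) and (1,1) entirely."""
    return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]

rng = random.Random(0)
biased = [1 if rng.random() < 0.8 else 0 for _ in range(100_000)]
out = von_neumann(biased)
ones = sum(out) / len(out)
print(len(out), round(ones, 2))  # far fewer bits out than in; near-even ones
```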


