Cryptography-Digest Digest #409, Volume #14      Tue, 22 May 01 16:13:01 EDT

Contents:
  Subset Sum Density >=1 (Al)
  Re: How do you measure the power of a chipher? (those who know me have no need of my name)
  Re: "computationally impossible" and cryptographic hashs (David Wagner)
  Re: "computationally impossible" and cryptographic hashs ("Paul Pires")
  Re: "computationally impossible" and cryptographic hashs ("Harris Georgiou")
  Re: "computationally impossible" and cryptographic hashs ("Douglas A. Gwyn")
  Re: "computationally impossible" and cryptographic hashs ("Joseph Ashwood")
  Re: "computationally impossible" and cryptographic hashs ("Henrick Hellström")
  Re: Best, Strongest Algorithm (John Savard)
  Re: Best, Strongest Algorithm (John Savard)
  Re: RSA private key size (Morten Primdahl)
  Re: PGP details (Andrew McDonald)
  Re: "computationally impossible" and cryptographic hashs ("Douglas A. Gwyn")
  Re: "computationally impossible" and cryptographic hashs (David Wagner)
  Re: "computationally impossible" and cryptographic hashs (David Wagner)

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (Al)
Subject: Subset Sum Density >=1
Date: 22 May 2001 10:38:01 -0700

Are there any algorithms to speed up solving the subset sum problem when the 
density is >=1?
There may be multiple solutions, a single solution, or none at all.

Given a set of n or more n-bit numbers, and a publicly known sum of a privately 
selected subset: how large should n be to make computation of the subset by the 
public infeasible?

------------------------------

From: those who know me have no need of my name <[EMAIL PROTECTED]>
Subject: Re: How do you measure the power of a chipher?
Date: 22 May 2001 16:54:45 GMT

[zooming off on a tangent]

<[EMAIL PROTECTED]> divulged:

>Tom St Denis wrote:

>> Um a 56-bit RSA key is not secure at all.  

>128 bit RSA is almost as insecure as 56 bit RSA ;->>
>need at least 1024 bits...

lack of a threat model is getting so boring in these types of
responses.  when discussing strengths elsewhere one might assume a
model, but this is sci.crypt.

query: is 56 or 128 bit rsa really unsuited to protection of a datum
whose valuable lifetime (after which it becomes useless, or of so
little value to adversaries as to be effectively useless) is 10ms?
50ms?  1s?  (note that i didn't ask if there were better algorithms,
sizes or time frames.)

heck, even that is too little model.

-- 
okay, have a sig then

------------------------------

From: [EMAIL PROTECTED] (David Wagner)
Subject: Re: "computationally impossible" and cryptographic hashs
Date: 22 May 2001 17:58:44 GMT

Paul Pires wrote:
>I notice you said "behave like randomly-chosen functions" and
>not "behave like a function that produces random output"

The difference is that (for a randomly-chosen function)
if you feed the same input in twice, you'll definitely get
the same output both times.
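
One way to picture a randomly-chosen function is to sample it lazily: draw
each output at random the first time an input is seen, then remember it
forever. A Python sketch (the class and names are invented for illustration):

```python
import random

class RandomFunction:
    """A function drawn at random from all maps {0,1}^n -> {0,1}^n,
    sampled lazily: each new input gets an independent random output,
    but a repeated input always returns the value chosen first."""
    def __init__(self, n_bits, seed=None):
        self.n_bits = n_bits
        self.table = {}
        self.rng = random.Random(seed)

    def __call__(self, x):
        if x not in self.table:
            self.table[x] = self.rng.getrandbits(self.n_bits)
        return self.table[x]

f = RandomFunction(64)
assert f(42) == f(42)  # same input twice -> same output, every time
```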

------------------------------

From: "Paul Pires" <[EMAIL PROTECTED]>
Subject: Re: "computationally impossible" and cryptographic hashs
Date: Tue, 22 May 2001 11:14:55 -0700


David Wagner <[EMAIL PROTECTED]> wrote in message 
news:9ee9ck$mj0$[EMAIL PROTECTED]...
> Paul Pires wrote:
> >I notice you said "behave like randomly-chosen functions" and
> >not "behave like a function that produces random output"
>
> The difference is that (for a randomly-chosen function)
> if you feed the same input in twice, you'll definitely get
> the same output both times.

Ah... I get it, I think. Randomly chosen function = Pseudo random
function (deterministic), Random function = indeterministic? Isn't
a true random function an abstraction? I can't think of an example
where the function could be responsible for randomness. Maybe the
input that the function processes but not the function itself, right?

Thanks




------------------------------

From: "Harris Georgiou" <[EMAIL PROTECTED]>
Subject: Re: "computationally impossible" and cryptographic hashs
Date: Tue, 22 May 2001 21:32:41 +0300


SCOTT19U.ZIP_GUY <[EMAIL PROTECTED]> wrote in the discussion message:
[EMAIL PROTECTED]
> [EMAIL PROTECTED] (Daniel Graf) wrote in <[EMAIL PROTECTED]>:
>
> >     I have a basic question about cryptographic hashes.  A friend
> >......
>
>   It's not a stupid question, but many things are written in ways that
> confuse newcomers. Just look at the counting argument. If you have a
> 64-bit hash, it can only take on 2**64 values. If you allow passwords
> to be long, then in theory you could have more than 2**64 passwords.
> That means some different passwords will hash to the same value. What
> people are counting on is that, for the number of passwords actually
> used, there will be only a small chance of picking two that hash the
> same. The point is, no matter how small, it's not ZERO.
>
> David A. Scott
> --

Yes, it's a matter regarding hashes that troubled me at first too. This is a
common property whenever the output space is a reduction (in size/dimensions)
of the input space.
Of course, even when the hashing works as a perfect 64-bit random function,
(2**64)+1 distinct inputs guarantee a collision by the pigeonhole principle,
while by the birthday paradox a collision is expected after only about 2**32
tries (the square root of the output space, not half of it).
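
A quick experiment bears this out. Truncating a real hash to a small output
models a random function, and a collision turns up after roughly 2^(bits/2)
tries; a Python sketch (the 16-bit truncation is only to keep the demo fast):

```python
import hashlib

def trunc_hash(msg, bits=16):
    """Model a small random function by truncating SHA-256 to `bits` bits."""
    digest = int.from_bytes(hashlib.sha256(msg).digest(), 'big')
    return digest & ((1 << bits) - 1)

def find_collision(bits=16):
    """Hash distinct inputs until two collide; by the birthday paradox
    this takes roughly 2**(bits/2) tries, far fewer than the 2**bits + 1
    guaranteed by the pigeonhole argument."""
    seen = {}
    i = 0
    while True:
        msg = i.to_bytes(8, 'big')
        d = trunc_hash(msg, bits)
        if d in seen:
            return seen[d], msg, i + 1  # the colliding pair and try count
        seen[d] = msg
        i += 1

m1, m2, tries = find_collision()
assert m1 != m2 and trunc_hash(m1) == trunc_hash(m2)
print(f"collision after {tries} tries (2**16 + 1 = 65537 worst case)")
```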


--

Harris

- 'Malo e lelei ki he pongipongi!'




------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: "computationally impossible" and cryptographic hashs
Date: Tue, 22 May 2001 18:16:23 GMT

Mark Wooding wrote:
> From another point-of-view, however, I think they mean the same thing.

Only if you never feed the function the same input twice.

------------------------------

From: "Joseph Ashwood" <[EMAIL PROTECTED]>
Subject: Re: "computationally impossible" and cryptographic hashs
Date: Tue, 22 May 2001 11:41:37 -0700

First, don't worry, it's not a stupid question, it's merely a beginner
question.

As it happens you are close to correct: there will be more than one value
that collides to the same output; the major question is the difficulty of
finding such values. With the UNIX DES-based hash there are actually only
2^56 possible inputs (8 characters, only 7 bits of each character are
active). A simple search through the 2^56 will find at least one value that
maps to the output. So there is at least one value in there that will
collide with the valid password hash, and there may be others. To perform
the search in this case it is easiest to do a selective brute force, sorting
the candidates by valid language structure. The current record for a pure
brute force of DES stands at 22 hours; because you are likely to have a
large number of targets (e.g. everyone on the system) and a good ordering of
the candidates (based on a dictionary), the search can go much faster, with
breaking the vast majority of the passwords at a large university taking
less than a week.

That's the bad news. The good news is that UNIX systems are moving away from
the DES-based hash and onto better technologies. The most common replacement
is an MD5-based hash, which allows users to use passphrases of arbitrary
size. The hash is also 128 bits, so where DES has been brute forced in under
a day, brute-forcing MD5 is completely infeasible. To make the time
difference clear: you'd be looking at 2^128 work to break a single known
hash (this is actually a subtly different problem from finding a single
collision, which would take about 2^64 work). If I give you a system that
will do 2^56 work (e.g. break DES) in 1 second, that machine working 24
hours a day, 7 days a week (with no rolling blackouts) will take 2^72
seconds, or roughly 10^14 years, to break the MD5 hash. Unfortunately MD5
isn't believed to be quite that strong, so maybe it'll only take 10^10
years. However, this assumes that the user is security conscious enough to
choose an input with 128 bits worth of entropy; since English has only about
1 bit of entropy per character, that's a long passphrase. Additionally, most
users will still make the same poor password choices they made under DES, so
breaking the vast majority of the passwords at the same large university
would still take only about a week.

As to the one-wayness of cryptographic hash functions, this is a rather
interesting study; some would argue a study in futility, but I am not so
fatalistic. What is meant by one-wayness is that given f(x) it is infeasible
to find y such that f(y) = f(x). There are various tactics to achieve this,
and explaining them in general is quite difficult, but I will try to sketch
a few of the simplest methods (these are very coarse overviews of broader
families of methods).

For the first, it is useful to realise that the DES-based hash from UNIX
gives us a bit of enlightenment about one possibility: use the value being
hashed to key an encryption algorithm. At first this sounds rather odd, but
it makes sense. One of the most desirable properties of an encryption
algorithm, and one that we very commonly measure, is its resistance to
attack when a plaintext-ciphertext pair is known. Since we can easily fix
the plaintext to all zeros (as in UNIX), we remove the attacker's ability to
vary it. Because of the construction of DES, this one-wayness builds up
rather slowly, taking 8 rounds for full diffusion and 16 rounds for complete
resistance. DES uses what is called a Feistel network to perform this; I
believe the construction of a Feistel network is covered in the FAQ, and if
it isn't, just ask.
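
A toy model of that construction in Python: key a Feistel network with the
password and encrypt an all-zero block. The round function and key schedule
below are invented purely for illustration; this is not DES and not secure.

```python
import hashlib

def feistel_hash(password, rounds=16):
    """Toy illustration of the crypt(3) idea: key a Feistel cipher with
    the password and encrypt a fixed all-zero 64-bit block. The round
    function and key schedule are made up; NOT DES, not secure."""
    mask = 0xFFFFFFFF
    # Derive one 32-bit round key per round from the password.
    keys = [int.from_bytes(hashlib.sha256(password + bytes([r])).digest()[:4],
                           'big') for r in range(rounds)]

    def f(half, k):  # round function: mix the half-block with the round key
        x = (half ^ k) & mask
        x = (x * 0x9E3779B1 + 0x7F4A7C15) & mask
        return (x >> 5) | ((x << 27) & mask)

    left, right = 0, 0  # fixed plaintext: the all-zero block, as in UNIX
    for k in keys:
        left, right = right, left ^ f(right, k)  # one Feistel round
    return (left << 32) | right

assert feistel_hash(b"hunter2") == feistel_hash(b"hunter2")  # repeatable
assert feistel_hash(b"hunter2") != feistel_hash(b"hunter3")  # w.h.p.
```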

The other options vary widely. I will explain, in the most basic terms I
can, the one called the Wide Trail strategy (used in Whirlpool). The Wide
Trail strategy works by creating a set of layer functions, each of which has
a certain desired property, taking special care over the selection of each.
In the case of Whirlpool (a NESSIE submission, if you'd care to examine it,
and very well written) the layers are: non-linear, cyclic, linear diffusion,
and key addition; these are the standard layers currently believed necessary
for a Wide Trail construction to be secure. Unlike the DES-based operation,
the Whirlpool hash is designed from the beginning to be a hash, and uses its
previous output as input to the next iteration. Because of the careful
selection of the layers, the information is scattered through the output
very quickly, which is why it is believed to be generally secure. The
Whirlpool paper also gives detailed analysis and very pointed remarks on
what the authors consider makes a strong hash function (the list falls in
line with the consensus of cryptanalysts); they then go on to detail how
they justify the claim that they meet those requirements. I would suggest
the Whirlpool paper to anyone wanting to see a very well written
cryptographic paper, with analysis.

If you have any other questions, don't hesitate to ask.
                                    Joe

Daniel Graf <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> I have a basic question about cryptographic hashes.  A friend
> and I were talking about how the hash algorithm used by some UNIX
> machines uses only 64 (or so) bits of the password.
> Now, I don't know much of anything about cryptography, but I
> have used hash algorithms for use with hash-tables.   Are
> cryptographic hashes mostly dissimilar, or am I wrong in guessing that
> more than one input at a Unix login prompt may match the string found
> in the passwd file? (particularly for passwords greater than 8
> characters).
> Looking at the sci.crypt faq I found in section 7.1 something
> which says:
>
> For some one-way hash functions it's also computationally
> impossible to determine two messages which produce the
> same hash.
>
> Does "computationally impossible" mean literally that such a
> thing cannot happen?
>
> Sorry if this is such a stupid question.




------------------------------

From: "Henrick Hellström" <[EMAIL PROTECTED]>
Subject: Re: "computationally impossible" and cryptographic hashs
Date: Tue, 22 May 2001 21:21:32 +0200

A function is always deterministic in the sense that if x = y then f(x) =
f(y). That's actually what it means for something to be a function.

The difference between random and pseudo-random is more subtle. A function g
of type F (e.g. {0,1}^n -> {0,1}^n) is random if it is chosen with a
perfectly uniform distribution from the entire set F. A function f_k of type
F is pseudo-random if it is not random (usually because it is selected from
a relatively small subset of F, though with uniform distribution over that
subset), but it is hard to tell that it wasn't chosen with a perfectly
uniform distribution from the entire set F.

For example, a 128-bit block cipher belongs to the set of functions
{0,1}^128 -> {0,1}^128. This set contains (2^128)^(2^128) elements. That
number is *really* large, and obviously much, much larger than a normal key
space like 2^128 or 2^256. So the block cipher is not a random function. But
the key space is sufficiently large to make it possible to design keyed
functions that are hard to tell from randomly selected functions from the
entire set.
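
The counting is easy to check directly (the names below are just for
illustration):

```python
n = 128
# Each of the 2^n inputs independently gets one of 2^n outputs, so the
# number of functions {0,1}^n -> {0,1}^n is (2^n)^(2^n); the log2 of
# that count is n * 2^n.
log2_num_functions = n * 2**n   # = 2^135, an astronomically large exponent
log2_key_space = 256            # a generous 256-bit key

print(log2_num_functions == 2**135)  # prints True
# A keyed cipher realizes at most 2^256 of the 2^(2^135) functions, a
# vanishingly small subset of F, so it cannot be a random function; at
# best it is pseudo-random, i.e. hard to distinguish from one.
```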

--
Henrick Hellström  [EMAIL PROTECTED]
StreamSec HB  http://www.streamsec.com

"Paul Pires" <[EMAIL PROTECTED]> skrev i meddelandet
news:V1yO6.10139$[EMAIL PROTECTED]...
>
> David Wagner <[EMAIL PROTECTED]> wrote in message
news:9ee9ck$mj0$[EMAIL PROTECTED]...
> > Paul Pires wrote:
> > >I notice you said "behave like randomly-chosen functions" and
> > >not "behave like a function that produces random output"
> >
> > The difference is that (for a randomly-chosen function)
> > if you feed the same input in twice, you'll definitely get
> > the same output both times.
>
> Ah... I get it, I think. Randomly chosen function = Pseudo random
> function (deterministic), Random function = indeterministic? Isn't
> a true random function an abstraction? I can't think of an example
> where the function could be responsible for randomness. Maybe the
> input that the function processes but not the function itself, right?
>
> Thanks
>
>
>



------------------------------

From: [EMAIL PROTECTED] (John Savard)
Subject: Re: Best, Strongest Algorithm
Date: Tue, 22 May 2001 19:30:09 GMT

On Thu, 10 May 2001 11:38:50 -0700, "Joseph Ashwood" <[EMAIL PROTECTED]>
wrote, in part:

>If you remember correctly, I'm one of them. And I still stand by the
>statement that it is impossible to dependably compress the output of a good
>encryption function.

That's certainly true. I haven't heard Mr. Scott claim otherwise, but
then I don't follow everything he posts.

>At one point you specifically stated that BICOM could
>output a single byte output that was Rijndael encrypted, that always has
>been, and always will be the problem.

But that _is_ possible. Just use counter mode, for example. I agree
you can't use ECB mode, or CBC mode, to produce a one-byte
Rijndael-encrypted output. But other modes allow one to encipher
messages in fractions of a block, although they may require an IV the
size of a block in addition.

John Savard
http://home.ecn.ab.ca/~jsavard/frhome.htm

------------------------------

From: [EMAIL PROTECTED] (John Savard)
Subject: Re: Best, Strongest Algorithm
Date: Tue, 22 May 2001 19:35:26 GMT

On 22 May 2001 15:54:03 GMT, [EMAIL PROTECTED]
(SCOTT19U.ZIP_GUY) wrote, in part:

>  Yes Joe and the other phony crypto gods refuse to openly state
>that full RIJNDAEL can be used in a full block mode that is not
>a weakened 3 letter chaining mode. Where the result can still be
>bijective to byte files on input and output. They seem to wrongly
>think only something weak like a counter mode can do this.

Ah. Then you don't claim that you can, with Rijndael, using your
preferred "full block mode", encipher a one byte compressed input to a
one byte output?

Of course, if you allow a one-block IV to make each encryption, even
with the same key, unique, then just use a mode that lets you encrypt
the IV using (for example) ciphertext stealing...

John Savard
http://home.ecn.ab.ca/~jsavard/frhome.htm

------------------------------

From: Morten Primdahl <[EMAIL PROTECTED]>
Subject: Re: RSA private key size
Date: Tue, 22 May 2001 21:41:08 +0200


Thanks for the replies. What are the odds that there's
more than one valid private key in the entire keyspace
(in a proper implementation)? 

I've tested the validity of my implementation for small
encryptions (e.g. 16 bits), and I find several keys that work
when I test all keys in the keyspace. I'd assume this to be
a modulo problem, but I'm certain that my block size
is smaller than my key strength. Here's what I do:

Retrieve requested keyStrength.

halfKeyStrength = keyStrength/2
blockSize = keyStrength-1;

Generate two primes p, q each of size (bit length) 
halfKeyStrength (q != p)

f = (q-1)(p-1)
e generated prime of size halfKeyStrength, gcd(e,f) = 1
and e < f.

n = q*p
d = e^{-1} mod f
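
For comparison, a minimal (toy, insecure) version of those steps in Python;
note the coprimality condition gcd(e, f) = 1, which is exactly what makes
d = e^-1 mod f exist:

```python
import random
from math import gcd

def is_probable_prime(n, k=20):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(k):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

def gen_prime(bits):
    while True:
        p = random.getrandbits(bits) | (1 << (bits - 1)) | 1  # odd, full size
        if is_probable_prime(p):
            return p

def rsa_keygen(key_bits=64):  # toy size; real keys need 1024+ bits
    half = key_bits // 2
    p = gen_prime(half)
    q = gen_prime(half)
    while q == p:
        q = gen_prime(half)
    n, f = p * q, (p - 1) * (q - 1)
    e = 65537
    while gcd(e, f) != 1:  # e must be coprime to f, or d does not exist
        e += 2
    d = pow(e, -1, f)      # d = e^-1 mod f (Python 3.8+)
    return n, e, d

n, e, d = rsa_keygen()
m = 42
assert pow(pow(m, e, n), d, n) == m  # decrypt(encrypt(m)) recovers m
```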

References to good RSA tutorials appreciated; William
Stallings' Network Security Essentials just doesn't cut
it for me...

Thanks

Morten

------------------------------

From: [EMAIL PROTECTED] (Andrew McDonald)
Subject: Re: PGP details
Date: Tue, 22 May 2001 19:40:39 GMT

On Mon, 21 May 2001 06:47:53 GMT,
[EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
>  
>  Andrew McDonald wrote:
[changing preferences in OpenPGP keys]
> > It can be done (not very nicely though) with gnupg.
>  
>  yes.. but GnuPG will put also Preferred Hash and 
>  Preferred Compress Algo list in key.
>  
>  you may want (need) to modify this one too - because GnuPG will
>  put ZLIB as Preferred Compress Algo and this is not supported by
>  PGP so messages encrypted with GnuPG will not decrypt on PGP...

Yes. Clearly your preferred options for symmetric cipher, hash and
compression ought to be available in the PGP tool you use. Otherwise
it rather defeats the point of having preferences. :-)

-- 
Andrew McDonald
[EMAIL PROTECTED]
http://www.mcdonald.org.uk/andrew/
OpenPGP DSA/ElG 1024/2048  3EDE 0FBC 6138 DCA0 FC8E C508 FCBB A9C8 F2DE ED36

------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: "computationally impossible" and cryptographic hashs
Date: Tue, 22 May 2001 19:22:14 GMT

Paul Pires wrote:
> Ah... I get it, I think. Randomly chosen function = Pseudo random
> function (deterministic), Random function = indeterministic?

Not in general; a "randomly chosen" function (according to some
distribution over the space of all functions) will a posteriori
not act pseudo-randomly.

> Isn't a true random function an abstraction?

All functions are abstractions.

> I can't think of an example where the function could be
> responsible for randomness.

Easy: whatever the input, the function ignores it and flips a
coin to determine what to return.

------------------------------

From: [EMAIL PROTECTED] (David Wagner)
Subject: Re: "computationally impossible" and cryptographic hashs
Date: 22 May 2001 20:02:14 GMT

Paul Pires wrote:
>David Wagner <[EMAIL PROTECTED]> wrote:
>> Paul Pires wrote:
>> >I notice you said "behave like randomly-chosen functions" and
>> >not "behave like a function that produces random output"
>>
>> The difference is that (for a randomly-chosen function)
>> if you feed the same input in twice, you'll definitely get
>> the same output both times.
>
>Ah... I get it, I think. Randomly chosen function = Pseudo random
>function (deterministic), Random function = indeterministic? Isn't
>a true random function an abstraction? I can't think of an example
>where the function could be responsible for randomness. Maybe the
>input that the function processes but not the function itself, right?

Hold on.  I think you are confusing yourself here. :-)

A mathematical function has the property that if you give it the
same input twice, you'll get the same output.  That's part of the
definition.

A randomly-chosen (or pseudorandomly-chosen) function is just a
function, so it has the same property.

The phrase "behaves like a function that produces random output"
is meaningless, because if it produces random output, it might produce
different outputs on the same inputs, which means that it is not a
function after all.

------------------------------

From: [EMAIL PROTECTED] (David Wagner)
Subject: Re: "computationally impossible" and cryptographic hashs
Date: 22 May 2001 20:05:55 GMT

Douglas A. Gwyn wrote:
>Not in general; a "randomly chosen" function (according to some
>distribution over the space of all functions) will a posteriori
>not act pseudo-randomly.

Well, when people say "a random function", they often mean
"a function chosen at random according to the uniform distribution",
even though the two are not, strictly speaking, equivalent.  The
former description is just easier on the fingers.  You're right to
ask people to be careful about the distinction between the two,
but in this case I think it was pretty clear from the context
which was intended.

Meanwhile, I could voice the same complaint about your terminology
usage.  Strictly speaking, a pseudorandom function need not have
(an approximation to) the uniform distribution.  Pseudorandomness
refers to "computationally indistinguishable from random", and
just as a random function might be randomly distributed according
to some arbitrary distribution D, so too a pseudorandom function
might be distributed according to a distribution D' that is
computationally indistinguishable from D.

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list by posting to sci.crypt.

End of Cryptography-Digest Digest
******************************
