Hadmut Danisch wrote:
It can be easily shown that there is no lossless
compression method which can effectively compress every possible
input.
OK.
... I need a book about computer science or encoding theory,
which explicitly says that this is impossible, in a way that a person
inexperienced in co
On Tue, Aug 31, 2004 at 02:48:00PM +0200, Hadmut Danisch wrote:
> It can be easily shown that there is no lossless
> compression method which can effectively compress every possible
> input. Proof is easy: In a first step, consider all
> possible messages of length n bits, n>0. There are 2^n different
> messages of length n.
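The preview cuts off here, but the counting argument is standard; a minimal Python sketch of it (the choice n = 4 is mine, for illustration):

    from itertools import product

    n = 4  # illustrative length; any n > 0 gives the same contradiction

    # All 2^n messages of exactly n bits.
    messages = [''.join(bits) for bits in product('01', repeat=n)]

    # All bitstrings strictly shorter than n bits, empty string included:
    # 2^0 + 2^1 + ... + 2^(n-1) = 2^n - 1 of them.
    shorter = [''.join(bits) for k in range(n) for bits in product('01', repeat=k)]

    print(len(messages), 'messages of length', n)            # 16
    print(len(shorter), 'strings shorter than', n, 'bits')   # 15
    # Pigeonhole: a lossless compressor must be injective, and 2^n inputs
    # cannot fit into the 2^n - 1 shorter outputs, so at least one n-bit
    # message maps to an output no shorter than itself.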
What is the best way to publish the specification of a protocol to
the crypto community or ask for reviews of/comments on it?
I tried posting to sci.crypt with no luck, and it was suggested
that this would be an appropriate place to ask. Assuming that's
right, here are the approximate details of
Dan Carosone wrote:
> There is one application of hashes, however, that fits these
> limitations very closely and has me particularly worried:
> certificates. The public key data is public, and it's a "random"
> bitpattern where nobody would ever notice a few different bits.
>
> If someone finds a
To be more precise: Your odds of getting a modulus that
you can divide by something are very high. Your odds of
getting a modulus that you can factor efficiently are
very low.
William
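William's point can be checked empirically; a hedged Python sketch (the 512-bit size and the divisor bound of 1000 are my illustrative choices, not from the thread):

    import random

    def has_small_factor(m, bound=1000):
        """Return True if m has a divisor in [2, bound) (naive trial division)."""
        if m % 2 == 0:
            return True
        d = 3
        while d < bound:
            if m % d == 0:
                return True
            d += 2
        return False

    random.seed(0)
    trials = 2000
    hits = sum(has_small_factor(random.getrandbits(512) | 1)  # random odd 512-bit
               for _ in range(trials))
    print('%.0f%% of random odd 512-bit numbers have a factor below 1000'
          % (100 * hits / trials))
    # By Mertens' theorem this fraction is roughly 1 - 2*exp(-gamma)/ln(1000),
    # about 84%; yet dividing such a factor out leaves a cofactor that is
    # still hard to factor, which is William's point.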
My understanding is that once you've used trial division to
get rid of all the extremely short divisors, a random number
of length n is about as hard to factor as an RSA modulus of
the same length. I don't think there are a lot of easy-to-factor
moduli around.
William
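A hedged sketch of the pipeline this describes (the 512-bit sample and the 10^5 trial-division bound are assumptions of mine): strip the extremely short divisors, and what survives has no small factors at all.

    import random

    def strip_small_factors(m, bound=10**5):
        """Divide out every factor below `bound` by naive trial division."""
        while m % 2 == 0:
            m //= 2
        d = 3
        while d < bound and d * d <= m:
            while m % d == 0:
                m //= d
            d += 2
        return m

    random.seed(1)
    n = random.getrandbits(512)
    cofactor = strip_small_factors(n)
    print('%d-bit sample leaves a %d-bit cofactor'
          % (n.bit_length(), cofactor.bit_length()))
    # Usually only a small fraction of the bits get shaved off; the surviving
    # cofactor has no small divisors, and heuristically factoring it is about
    # as hard as factoring an RSA modulus of the same length.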
*
DIMACS Workshop on Computational Issues in Auction Design
October 7 - 8, 2004
DIMACS Center, Rutgers University, Piscataway, NJ
Organizers:
Jayant Kalagnanam, IBM Watson Lab, [EMAIL PROTECTED]
Eric Ma
Hi,
I need a literature reference for a simple problem of
encoding/compression theory:
It can be easily shown that there is no lossless
compression method which can effectively compress every possible
input. Proof is easy: In a first step, consider all
possible messages of length n bits, n>0.
*
DIMACS Workshop on Mobile and Wireless Security
November 3 - 4, 2004
DIMACS Center, Rutgers University, Piscataway, NJ
Organizers:
Bill Arbaugh, University of Maryland, [EMAIL PROTECTED]
Presented under the
certificates. The public key data is public, and it's a "random"
bitpattern where nobody would ever notice a few different bits.
If someone finds a collision for Microsoft's Windows Update cert (or a
number of other possibilities), then the fan is well and truly buried
in it.
Correct me if I'm wrong
David Honig wrote:
At 12:12 AM 8/27/04 -0700, Ed Gerck wrote:
>
>David Honig wrote:
>> "Applications can't be any more secure than their
>> operating system." -Bram Cohen
>
>That sounds cute but I believe it is incorrect. Example: error-
>correcting codes. The theory of error-correcting codes allows
>information to b
On Sun, 29 Aug 2004, John Denker wrote:
>bear wrote:
>>
>>H2(H1) = H1( H1(M) xor H1( TT( M)))
>
>I think that was intended to be something like
>
> H2(M) = H1( H1(M) xor H1( TT( M)))
>    ^
Actually, it was intended to take a hash function
as an argument and define a new hash function
I agree with 99% of what Hal Finney wrote. I won't repeat
the parts we agree on.
But let's discuss these parts:
> how much harder?
Well, probably a lot. Finding a regular birthday collision in a perfect
160-bit hash compression function takes 2^80 work. Finding a double
collision like this is essentially
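The preview is cut off, but the gap can be estimated on the back of an envelope; a sketch under my simplifying assumption that H1(M) and H1(TT(M)) behave as independent 160-bit random values:

    from math import log2

    n = 160                  # output bits of the ideal compression function
    single = 2 ** (n // 2)   # one birthday collision: ~2^80 work
    double = 2 ** n          # colliding H1(M) and H1(TT(M)) simultaneously:
                             # a birthday search on 2n = 320 bits, ~2^160 work
    print('single: 2^%d, double: 2^%d' % (log2(single), log2(double)))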
Bear writes:
> This construction takes slightly more than twice as
> much work and twice as much internal state as a
> conventional hash function (embrace it guys, I don't
> think there's a way around it).
>
>H2(H1) = H1( H1(M) xor H1( TT( M)))
>
> TT denotes some trivial transformation
> (I propose swapping high and low halves
> of the first block).
John Denker writes:
> Run two hash-machines in parallel. The first one operates
> normally. The second one puts the first block of the string
> into a buffer (bounded size!) and then proceeds to hash the
> rest of the string, starting with the second block. At the
> end of the message, it appends the buffered first block.
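A minimal Python sketch of this two-machine idea (SHA-256, the block interface, and combining the two digests by concatenation are all my assumptions; the preview cuts off before Denker says how the outputs are combined):

    import hashlib

    def two_machine_hash(blocks):
        """Machine 1 hashes the blocks in order; machine 2 buffers the first
        block (bounded storage!), hashes the rest, then appends the buffered
        block at the end."""
        m1 = hashlib.sha256()
        m2 = hashlib.sha256()
        first = None
        for block in blocks:
            m1.update(block)
            if first is None:
                first = block        # buffer only the first block
            else:
                m2.update(block)
        if first is not None:
            m2.update(first)         # append the buffered block at the end
        return m1.digest() + m2.digest()

    print(two_machine_hash([b'block-1', b'block-2', b'block-3']).hex())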
bear wrote:
H2(H1) = H1( H1(M) xor H1( TT( M)))
I think that was intended to be something like
H2(M) = H1( H1(M) xor H1( TT( M)))
   ^
TT denotes some trivial transformation
(I propose swapping high and low halves
of the first block). H1 is a conventional
hash function and H2
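Putting the thread's pieces together, a hedged Python sketch of the construction (SHA-1 as H1 is my stand-in for "a conventional 160-bit hash function"; TT follows bear's swap-the-halves proposal):

    import hashlib

    BLOCK = 64  # SHA-1 processes 64-byte blocks

    def tt(msg: bytes) -> bytes:
        """TT: swap the high and low halves of the first block, keep the rest."""
        first, rest = msg[:BLOCK], msg[BLOCK:]
        half = len(first) // 2
        return first[half:] + first[:half] + rest

    def h2(msg: bytes) -> bytes:
        """H2(M) = H1( H1(M) xor H1(TT(M)) ), with H1 = SHA-1 (my assumption)."""
        a = hashlib.sha1(msg).digest()
        b = hashlib.sha1(tt(msg)).digest()
        inner = bytes(x ^ y for x, y in zip(a, b))
        return hashlib.sha1(inner).digest()

    print(h2(b'example message').hex())

As the thread notes, this costs roughly two hash computations over the message plus one short one, and carries twice the internal state.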
John Denker wrote:
Here's another splint using the same general idea, but
with less complexity: calculate the hash once then
prepend that to the message and hash again, i.e.
hash3(M) := hash1[hash1(M) (+) M]
This is Schneier's and Ferguson's solution to then-known hash function
weaknesses in Practical Cryptography.
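A minimal sketch of this splint (SHA-256 stands in for hash1, and "(+)" is read as the prepend-then-hash concatenation the prose describes; both readings are mine):

    import hashlib

    def hash3(msg: bytes) -> bytes:
        """hash3(M) = hash1( hash1(M) || M ): hash once, prepend the digest
        to the message, and hash again (hash1 = SHA-256 here)."""
        inner = hashlib.sha256(msg).digest()
        return hashlib.sha256(inner + msg).digest()

    print(hash3(b'example message').hex())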
Here's a handy way to build Joux-resistant hash
functions with existing tools.
This construction takes slightly more than twice as
much work and twice as much internal state as a
conventional hash function (embrace it guys, I don't
think there's a way around it).
H2(H1) = H1( H1(M) xor H1( TT( M)))
On Sat, 28 Aug 2004, John Denker wrote:
>Jerry Leichter writes:
> >> However ... *any* on-line algorithm falls to a Joux-style attack.
>Hal Finney wrote:
>> ... hashes that support incremental hashing, as any useful hash surely must,
>If you insist on being able to hash exceedingly long strings
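For context on the Joux-style attack invoked here, a hedged arithmetic sketch (this is the multicollision cost from Joux's 2004 paper as I understand it; n and t are illustrative):

    # Joux 2004: a 2^t-way multicollision in an iterated n-bit hash costs
    # about t successive single-block birthday searches, i.e. t * 2^(n/2),
    # far below the cost expected of an ideal hash.
    n, t = 160, 64
    cost = t * 2 ** (n // 2)
    print('2^%d-multicollision: about 2^%d work' % (t, cost.bit_length() - 1))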
On Thu, 26 Aug 2004, John Denker wrote:
>Hi --
>
>I don't fully understand the new attacks on the hash
>functions (which I suppose is forgivable, since the
>papers don't seem to be available).
>
>But here's a shot in the dark that seems promising at
>first glance:
>
>Given a message m, pad it t
Jerry Leichter writes:
>> However ... *any* on-line algorithm falls to a Joux-style attack.
Hal Finney wrote:
... hashes that support incremental hashing, as any useful hash surely must,
If you insist on being able to hash exceedingly long strings
(i.e. longer than your storage capacity) here is a