Cryptography-Digest Digest #428, Volume #14      Fri, 25 May 01 02:13:01 EDT

Contents:
  Re: Crypto NEWBIE, wants to create the 100% SAFE FRACTAL encoding... Am I a fool ? 
("BenZen")
  Re: Crypto NEWBIE, wants to create the 100% SAFE FRACTAL encoding... Am I a fool ? 
("Paul Pires")
  Re: Combining functions for stream ciphers (Boris Kazak)
  Re: rs232 data encryption ("Andrew Jamieson")
  Re: rs232 data encryption ("Jeffrey Walton")
  Re: Help with a message ("Douglas A. Gwyn")
  Re: RSA's new Factoring Challenges: $200,000 prize. (may be repeat) ("Michael Brown")
  Re: Evidence Eliminator Detractors Working Hard But No Result? (Eric Lee Green)

----------------------------------------------------------------------------

From: "BenZen" <[EMAIL PROTECTED]>
Subject: Re: Crypto NEWBIE, wants to create the 100% SAFE FRACTAL encoding... Am I a 
fool ?
Date: Thu, 24 May 2001 23:18:23 -0400

Paul Pires wrote in message ...
>
>BenZen <[EMAIL PROTECTED]> wrote in message 
>news:zBhP6.514$[EMAIL PROTECTED]...
><Snip>
>> The main difficulty is finding a Fractal algorithm, that insures sufficiently VAST
>> domain for seeds (like a random-number-generator)... And that we can insure
>> 1) it won't become periodic.
>
>For how long? This is a deterministic process right? It must repeat sooner or
>later. Don't you mean "for a really long time"?

I guess you are right.  I'm hoping for the 'longest' non-periodic sequence.
On a parallel line of thought: isn't the exact numerical value of pi non-periodic,
and hence an infinite, non-repeating string of decimals?... Then why couldn't a
specific fractal algorithm be proven to generate a similarly 'irrational' sequence?

How do we prove pi's decimals are non-periodic, Paul?
Can we apply that test to any other algorithm?.. I have fractals in mind ;)

>> ... Then when the sequence is tested against statistical models for demonstration.
>> .... Then it's 100% Safe Fractal based encoding.
>>
>> It's just a matter of time.. I just don't see any problem.
>> Hence the question:  Am I a fool ?
>> IMHO, pseudorandom, is just bound to fail, as a pale imitation of Fractal+Chaos.
>
>Fail how? Be insecure? I don't see how you jump to that conclusion. Why isn't
>your concept for producing "Fractal+Chaos" a pseudo-random function?
>
I see your point, Paul.. Indeed, I do feel insecure with common pseudo-random
generators that deal in a certain set of algorithms.  I have seen fractals in
action, and zoomed into so many places.. While the 'fractal' map might not be
infinite in terms of taking discrete samples from it using a computer,
I see enough visual depth and complexity to 'reassure' myself about finding
a sufficiently vast domain for the longest pseudo-random sequence with
the specific statistical properties I hope for.

You are right though; it might be just my own insecurity with simpler
pseudo-random generators.. I do consider fractals to be my Pandora's box.
And 'strange attractors' are also catching my attention in a similar way,
but in the time domain (signal processing).

Indeed, Fractal+Chaos might be just another pseudo-random algorithm,
not written yet.. But I don't understand why mathematicians or crypto programmers
aren't more attracted by the inherent complexity of these newer fields of mathematics.

>There are some fine points to the advice that you were given that I think you
>might have blown by.
>
Indeed, Paul.
I do appreciate you pointing out that my 'hypothesis' might lead to
just another pseudo-random generator.

I would very much like to know whether there is a standard procedure to
determine that a 'pseudorandom' sequence is non-periodic for a sufficiently
great period.. Does this involve brute-force testing for weeks ?
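To make the periodicity question concrete: any deterministic generator with finite state must eventually cycle, and for a small state space the period can be measured in constant memory with Floyd's cycle-finding algorithm. A minimal sketch — the `step` function is a stand-in for whatever update rule a generator uses, and the tiny LCG is purely illustrative, chosen so its full period is known:

```python
def find_period(step, seed):
    """Find the cycle length of a deterministic generator using
    Floyd's tortoise-and-hare algorithm (constant memory)."""
    # Phase 1: advance at two speeds until the pointers meet inside the cycle.
    tortoise, hare = step(seed), step(step(seed))
    while tortoise != hare:
        tortoise = step(tortoise)
        hare = step(step(hare))
    # Phase 2: walk once around the cycle to measure its length.
    period = 1
    hare = step(tortoise)
    while tortoise != hare:
        hare = step(hare)
        period += 1
    return period

# Illustrative toy generator: a tiny LCG with full period 16.
lcg = lambda x: (5 * x + 3) % 16
print(find_period(lcg, 1))  # prints 16
```

For a cryptographically sized state this search is hopeless in practice, which is exactly the point made in the reply below: only an analytical argument about the structure, not testing, can bound the period.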

>There might be a whole bunch of "100% Safe encoding schemes" now, the
>problem is knowing and proving it. Check out Claude Shannon and Perfect
>Secrecy.
>
THANKS Very much for that tip.
I shall take a serious look at it.

>Testing determines that a sample does not have the detectable bias that the test
>was meant to measure. It does not determine that a sample is good. New
>methods need new tests. There is no way to develop an exhaustive set of tests.
>The degrees of freedom, the number of ways it could be biased are just too big.
>(Trust me, I just learned this the hard way) Thanks, Scott.
>
Hehehe :D.. I smell an inside joke here.. I shall fear the Mighty Scott also. :D
Aren't they all miracle workers, according to Capt'n Kirk ;)

Indeed... I see I will have to set very precise goals, and characteristics for
which to measure my sequences... If I ever arrive to that point.

>Keys and keyspace, Any key which could be selected needs to be uniformly
>distributed amongst the possible keys. It seems that you are saying that the
>number of good keys is large. It needs to be that a Key chosen will be a
>good key from a large group of good keys.
>
>Choosing the characteristic of a generator to specifically offset the known
>bad characteristics of the plaintext (zipfile) sounds like a remarkably bad thing
>to do. Perhaps if I ever get this stuff, I'll know why :-) Anybody care to help
>me out here?
>
>Paul

I have a formidable idea about how 'keys' could be 'picked' from a set,
and how my ideas rest on these intuitions.
Let me explain in simple terms.. Newbies should find it refreshing as well.

== The perfect encrypting algorithm shall also become a compression algorithm ==
I want to keep my explanation intuitive, so I will use a simplified example.
Suppose we want to 'encrypt' an image.. This image is mapped, sampled, and
represented in a binary file... Suppose that we have the 'perfect' encrypting
algorithm.  As I said before, I want the encryption sequence to match the
'original' as closely as possible, in order for it to 'MASK' the original's
characteristics.
.. In this utopian example, my algorithm would match the original almost bit for
bit; thus, once a XOR between the two streams is performed, we see very few
information bits here and there... and compression of the result is easy.
(*) I would intuitively suppose it is an indication of successful encryption
     if the result is easy to compress, while the original might not have been.
Now.. back to keys and keyspace.
Since the encryption shall be done using a 'key', I think there might be better
hope of pushing the 'fractal' algorithm one step further, with a first pass
over the original document... The first pass shall determine some inherent
characteristics of the original, such as variance, type of distribution, and
granularity.. and, in the specific case of images, geometric properties
that could be matched with a particular fractal variant.
The 'key' is an agglomerate of options and seeds.. Then, even if the 'encryptor'
program suggests a 'key', the user will customize the choice by taking a key
close by, but just one of the billions close by.. resulting in an imperfect match
between the original and the encryption sequence, with a final combination
leaving apparent clouds of bits with little to say about the whole.

That's one aspect I feel about 'fractals': that we can better 'rescale' them to
match a particular structural pattern in the original.
Taking just 'any key' within the entire set of keys is, IMHO, not aiming for the best.
========

Forgive me for displaying my ignorance in the field of cryptography.
I hope this can stir some brainstorming on the question,
or at least get me some well-deserved laughs and tips.

Best regards Paul & Paul.
Sincerely,
Ben
Oh.. and forgive my spelling.. Spell checker is dry.

>>
>> Please give me your thoughts,
>> And I would appreciate some tips on how to verify if my algorithm generates
>> a sufficiently (random/chaotic) sequence.
>> [EMAIL PROTECTED]
>> Ben Zen
>>






------------------------------

From: "Paul Pires" <[EMAIL PROTECTED]>
Subject: Re: Crypto NEWBIE, wants to create the 100% SAFE FRACTAL encoding... Am I a 
fool ?
Date: Thu, 24 May 2001 20:53:33 -0700


BenZen <[EMAIL PROTECTED]> wrote in message 
news:BakP6.532$[EMAIL PROTECTED]...
<snip>
> I would very much like to know whether there is a standard procedure to
> determine that a 'pseudorandom' sequence is non-periodic for a sufficiently
> great period.. Does this involve brute-force testing for weeks ?

I smell a true believer.

HeHeHe... By me, Bubba. I'm just a hack. You just blew off one of the
guys in the room who could tell you, though. I don't think you've got a grip
on the scope here. Brute-force testing of what? The algorithm? To do that
you'd have to test a significant portion of the keys.. for a long time, OR find
a supportable analytical argument that the proposed structure is somehow
constrained not to repeat within a huge number of bits.... for any key... for
any text.

The universe will die long before you do the former and I will die long before
I learn how to answer the latter :-)

>
> >There might be a whole bunch of "100% Safe encoding schemes" now, the
> >problem is knowing and proving it. Check out Claude Shannon and Perfect
> >Secrecy.
> >
> THANKS Very much for that tip.
> I shall take a serious look at it.
>
> >Testing determines that a sample does not have the detectable bias that the test
> >was meant to measure. It does not determine that a sample is good. New
> >methods need new tests. There is no way to develop an exhaustive set of tests.
> >The degrees of freedom, the number of ways it could be biased are just too big.
> >(Trust me, I just learned this the hard way) Thanks, Scott.
> >
> Hehehe :D.. I smell an inside joke here.. I shall fear the Mighty Scott also. :D
> Aren't they all Miracle workers, According to Capt'n Kirk ;)
>
> Indeed... I see I will have to set very precise goals, and characteristics for
> which to measure my sequences... If I ever arrive to that point.
>
> >Keys and keyspace, Any key which could be selected needs to be uniformly
> >distributed amongst the possible keys. It seems that you are saying that the
> >number of good keys is large. It needs to be that a Key chosen will be a
> >good key from a large group of good keys.
> >
> >Choosing the characteristic of a generator to specifically offset the known
> >bad characteristics of the plaintext (zipfile) sounds like a remarkably bad thing
> >to do. Perhaps if I ever get this stuff, I'll know why :-) Anybody care to help
> >me out here?
> >
> >Paul
>
> I have a formidable idea about how 'keys' could be 'picked' from a set,
> and how my ideas rest on these intuitions.
> Let me explain in simple terms.. Newbies should find it refreshing as well.
Wooosh...... (That was the sound of your description flying right over my head.)

I wish you well.

Paul




------------------------------

From: Boris Kazak <[EMAIL PROTECTED]>
Reply-To: [EMAIL PROTECTED]
Subject: Re: Combining functions for stream ciphers
Date: Fri, 25 May 2001 03:59:17 GMT

Laura wrote:
> 
> Nicol So <[EMAIL PROTECTED]> wrote in message 
>news:<[EMAIL PROTECTED]>...
> > Laura wrote:
> > >
> > > I am currently working on improving the ORYX stream cipher, but am
> > > wondering how to adjust the combining function.  I want to make it
> > > more complex, but not slow down the encryption process too much.  Does
> > > anyone have any ideas?  (Currently, the outputs from the LFSR's are
> > > combined using modular addition).
> >
> > Is there a reason why you want to focus on the combining function?
> 
> I am attempting to improve the cipher's security against the divide
> and conquer approach that was used to break the system.  In
> conjunction with a different combining function, I will also be using
> less than eight output bits from each LFSR.
======================
> I will also be using
> less than eight output bits from each LFSR.
Then after the addition you just need a lookup table. Or, alternatively,
two lookup tables before the addition (one per LFSR).
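A rough sketch of that arrangement: step two LFSRs, take four output bits from each, run each nibble through its own lookup table, then combine by modular addition. The tap masks and tables below are illustrative placeholders, not the actual ORYX parameters:

```python
# Sketch: two LFSRs, <8 output bits each, per-LFSR lookup tables,
# combined by modular addition. All constants are placeholders.
def lfsr_step(state, taps):
    """One Galois-style LFSR step over a 32-bit state."""
    lsb = state & 1
    state >>= 1
    if lsb:
        state ^= taps
    return state

TABLE_A = [(7 * x + 3) % 16 for x in range(16)]   # placeholder 4-bit tables
TABLE_B = [(x * x + 1) % 16 for x in range(16)]

def keystream_byte(s1, s2):
    """Advance both registers and emit one combined keystream byte."""
    s1 = lfsr_step(s1, 0x80000057)                 # illustrative tap masks
    s2 = lfsr_step(s2, 0x80000062)
    out = (TABLE_A[s1 & 0xF] + TABLE_B[s2 & 0xF]) % 256  # 4 bits per LFSR
    return s1, s2, out

s1, s2, out = keystream_byte(0x1234ABCD, 0xCAFE0001)
print(out)
```

The tables' only job in this sketch is to make the few exposed bits a nonlinear function of each register's state, which is what frustrates the divide-and-conquer attack mentioned above.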

Best wishes     BNK

------------------------------

From: "Andrew Jamieson" <[EMAIL PROTECTED]>
Subject: Re: rs232 data encryption
Date: Fri, 25 May 2001 04:16:10 GMT

==============< Big snip >=====================
> There was a paper at FSE2001 by Alkassar et al about a CFB variant that
> retained the resyncing property, but drastically reduced the expected number
> of encryptions.  However, I thought that the OP would prefer an
> "off-the-shelf" solution, and in any case, 4000 block encryptions per second
> is not pushing the state of the art, even with a moderately old
> microprocessor.  Of course, if it's an 8051, it probably isn't particularly
> doable...

Slightly OT, but:
Does anyone have any information (links, etc.) on how fast AES is on low-end
micros (especially the 8051)?





------------------------------

Reply-To: "Jeffrey Walton" <[EMAIL PROTECTED]>
From: "Jeffrey Walton" <[EMAIL PROTECTED]>
Subject: Re: rs232 data encryption
Date: Fri, 25 May 2001 00:26:55 -0400

: Does anyone have any information (links, etc.) on how fast AES is on low-end
: micros (especially the 8051)?

I think NIST was doing some testing of hardware implementations with the
NSA.  They basically put out a "call for algorithms".

This may help:
http://csrc.nist.gov/encryption/aes/round2/r2anlsys.htm#NSA



"Andrew Jamieson" <[EMAIL PROTECTED]> wrote in message
news:e0lP6.76124$[EMAIL PROTECTED]...
: --------------< Big snip >---------------------
: > There was a paper at FSE2001 by Alkassar et al about a CFB variant that
: > retained the resyncing property, but drastically reduced the expected number
: > of encryptions.  However, I thought that the OP would prefer an
: > "off-the-shelf" solution, and in any case, 4000 block encryptions per second
: > is not pushing the state of the art, even with a moderately old
: > microprocessor.  Of course, if it's an 8051, it probably isn't particularly
: > doable...
:
: Slightly OT, but:
: Does anyone have any information (links, etc.) on how fast AES is on low-end
: micros (especially the 8051)?
:



------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: Help with a message
Date: Fri, 25 May 2001 05:04:22 GMT

JPeschel wrote:
> Where? When?

As often happens for a new idea, there were several different early
formulations and definitions; even in the original Riverbank
publication, there were multiple, inconsistent definitions of "index of
coincidence", one having negative values and another being
probabilities after interpretation according to a model.  In the
original Friedman MilCryp, the phrase "index of coincidence" was not
used, the analogous development being in terms of absolute counts,
probabilities, and "kappa" tests.  By the time Callimahos started his
revision of MilCryp, the concept and terminology had stabilized, and
generations of cryppies have considered IC to be the ratio of observed
number of coincidences to expected (on the null hypothesis) number of
coincidences; see the footnote on p. 41 of vol. 1, part 1 of the Aegean
Park Press reprint.  There are several "flavors" of I.C. (named after
Greek letters), depending on the particular arrangement of the test;
they *all* have the property that the I.C. is 1 when there is no
correlation detected by the test.

> If it is a mistake, then Sinkov and others have it wrong, too.

Yes, they do.  It's not surprising, since everybody "outside" seems to
have taken it from Sinkov's book.  As to why Sinkov chose his
particular nonstandard definition, I can only speculate.  One
possibility is that, like Friedman initially, he thought of the term
in connection with a whole class of related correlation computations,
and since he was teaching in terms of basic probability notions, that
is where he had the first opportunity to attach the appellation "index
of coincidence" to one of these computations.

Once the Callimahos MilCryp reached the public, however, there was no
excuse for the "Encyclopedia" to choose the nonstandard definition.

> >The word "index" connotes comparison against a norm.
> You connote too much: an index can be just a ratio.

That's what I said.  It is a ratio of an observed value to a norm.

Which is more useful?
        (1) "This CT has a digraphic IC of 0.024."
        (2) "This CT has a digraphic IC of 2.4."
In case (1), you don't know the number of categories, so you cannot
interpret the "index" (although if there are a bunch of similarly
computed indices you can judge its rank among those).  In case (2),
you immediately know that digraphic grouping is causally significant,
regardless of the number of categories.
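The normalized definition described above can be sketched directly: count observed coincidences, divide by the number expected under the uniform null hypothesis (rate 1/c for c categories), and the result is near 1 when the test detects no correlation, regardless of c:

```python
# IC as the ratio of observed coincidences to the number expected
# under the null (uniform) hypothesis over `num_categories` symbols.
from collections import Counter

def index_of_coincidence(symbols, num_categories):
    n = len(symbols)
    counts = Counter(symbols)
    # Probability that two positions drawn without replacement coincide.
    observed = sum(f * (f - 1) for f in counts.values()) / (n * (n - 1))
    expected = 1 / num_categories          # coincidence rate under the null
    return observed / expected

flat = "abcd" * 25                         # uniform over 4 categories
print(index_of_coincidence(flat, 4))       # close to 1: no detected correlation
print(index_of_coincidence("a" * 100, 4))  # maximally biased: equals c = 4
```

This makes the argument about case (1) vs. case (2) concrete: the unnormalized 0.024-style figure depends on the number of categories, while this ratio is directly interpretable on its own.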

> That doesn't sound like enough information to make an FOIA request.
> Do you have more?

Sent separately to just Joe.

------------------------------

From: "Michael Brown" <[EMAIL PROTECTED]>
Crossposted-To: sci.math
Subject: Re: RSA's new Factoring Challenges: $200,000 prize. (may be repeat)
Date: Fri, 25 May 2001 17:23:31 +1200

Sorry if this is a duplicate. The first one seems to have got itself killed when
the modem disconnected around the time I sent it (it's on another computer,
so I'm not exactly sure).
Anyhow, here it is (with a few modifications):


First, go read my page at http://odin.prohosting.com/~dakkor/rsa

Then have a quick look at the 1024-, 896-, 704-, or 576-bit numbers. The last 2 bits
of each of these are ....11 (e.g., the last 8 bits of the 1024-bit number are
....11001011). Look closely at the second-to-last digit. A perfect example of
where the algorithm will work.

Based on my previous tests, the amount of RAM required will be about 128 MB
maximum, and the factoring time should be about 8 minutes (for the 1024-bit
one).

Cya. Gotta fire up Delphi and start coding like mad ... =P

Regards,
Michael

PS: If you do find a fault with the algorithm please tell me so I don't waste my
time :)

"Peter Trei" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> RSA Security has revamped its Factoring Challenges.
>
> Prizes now range from US$10,000 (factorization of a 576-bit modulus) to
> US$200,000 (factorization of a 2048-bit modulus).
>
> RSA and its predecessor companies have been sponsoring factorization
> challenges for many years, but until now the prize money has been
> nominal. It is hoped that the increased bounties will draw more people
> to the field, and spur new research.
>
> For details, including the challenge numbers, see:
>
> http://www.rsasecurity.com/rsalabs/challenges/factoring/index.html
>
> Peter Trei
> Cryptoengineer
> RSA Security Inc.
> [EMAIL PROTECTED]
>





------------------------------

From: [EMAIL PROTECTED] (Eric Lee Green)
Crossposted-To: alt.privacy,alt.security.pgp
Subject: Re: Evidence Eliminator Detractors Working Hard But No Result?
Reply-To: [EMAIL PROTECTED]
Date: Fri, 25 May 2001 05:48:10 GMT

=====BEGIN PGP SIGNED MESSAGE=====
Hash: SHA1

On 25 May 2001 01:59:54 -0000, Frog2 <[EMAIL PROTECTED]> wrote:
>Why haven't I heard the words USENET DEATH SENTENCE in
>connection with this spam-spraying monster called Proweb Ltd? As
>the primary source of EE spam and the only ISP that makes a
>profit from Usenet's all time number one spammer, Proweb has it
>coming, in spades.

That is why, in fact, ProWeb finally dropped EE as a
customer. Apparently their upstream was about to drop THEM if they
kept getting spam complaints. So while your vitriol against ProWeb may
or may not be well-founded, a death sentence against ProWeb will affect
EE not at all. 

Robin Hood Software's http://www.evidence-eliminator.com site is now
hosted by UUNET-UK. When you see EE spam, please report it to
[EMAIL PROTECTED] so that they are aware of the problem. When making the
report, please note in your message that UUNET-UK's terms of service
prohibit both direct and third-party spam on behalf of a UUNET-UK
customer, and that RHS is violating those terms by not properly
controlling its sales agents' actions regarding spam.


=====BEGIN PGP SIGNATURE=====
Version: GnuPG v1.0.5 (GNU/Linux)
Comment: For info see http://www.gnupg.org

iD8DBQE7DfD93DrrK1kMA04RAtHWAJ4+WzzQyleImbQkOFQR2KXFS98fSQCeJ0QN
7MWRbjlpmM0vjx8gF+U2FC0=
=qXm+
=====END PGP SIGNATURE=====

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list by posting to sci.crypt.

End of Cryptography-Digest Digest
******************************
