Cryptography-Digest Digest #956, Volume #9       Fri, 30 Jul 99 13:13:03 EDT

Contents:
  Re: Virtual Matrix Encryption (David C. Oshel)
  Re: How Big is a Byte? (was: New Encryption Product!) ([EMAIL PROTECTED])
  Re: How would this effect the good old One Time Pad? (Mok-Kong Shen)
  Re: Virtual Matrix Encryption (Patrick Juola)
  Re: With all the talk about random... ("Kasper Pedersen")
  Re: How Big is a Byte? (was: New Encryption Product!) ("Douglas A. Gwyn")
  Re: Q: Does ElGamal require that (p-1)/2 is also prime like DH? (Anton Stiglic)
  Re: How Big is a Byte? (was: New Encryption Product!)
  Re: How Big is a Byte? (was: New Encryption Product!) (Patrick Juola)
  Re: (Game) 80-digits Factoring Challenge (Francois Grieu)
  SSL vs TLS (Bjoern Sewing)
  Re: Is breaking RSA NP-Complete ? (Anton Stiglic)
  Re: RSA keys 2^64x + c versus SNFS (Francois Grieu)

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (David C. Oshel)
Subject: Re: Virtual Matrix Encryption
Date: Fri, 30 Jul 1999 08:24:07 -0500

In article <[EMAIL PROTECTED]>,
[EMAIL PROTECTED] (Guenther Brunthaler) wrote:

> And I assume it will have the same basic problem: While it is true
> that every possible message is contained somewhere inside a number
> such as PI, it may take more bits to present the starting digit than
> the message size... in addition to the problem of locating the right
> starting digit.

Unless you scribble the base N ordinal of the starting digit onto
something other than a computer.  If N is sufficiently large, the ideogram
representing X(pi,msg) can probably be traced onto a single barn door.

-- 
David C. Oshel     http://pobox.com/~dcoshel
Cedar Rapids, IA   [EMAIL PROTECTED]
``Tension, apprehension and dissension have begun.'' 
-- Duffy Wyg&, in Alfred Bester's _The Demolished Man_

------------------------------

From: [EMAIL PROTECTED]
Crossposted-To: alt.folklore.computers
Subject: Re: How Big is a Byte? (was: New Encryption Product!)
Date: 30 Jul 1999 13:33:00 GMT
Reply-To: [EMAIL PROTECTED]

In <7ns2t2$r1f$[EMAIL PROTECTED]>, [EMAIL PROTECTED] writes:
>In article <[EMAIL PROTECTED]>,
>   [EMAIL PROTECTED] (Paul Guertin) wrote:
>>[EMAIL PROTECTED] (Chris Hedley) wrote:
>>
>>> In article <7msb7u$10j$[EMAIL PROTECTED]>,
>>>     "Michael D." <[EMAIL PROTECTED]> writes:
>>> > I think that a major problem that we all have is that our mothers,
>yes, mine
>>> > as well as yours, taught us  that the first number is one(1) rather
>than
>>> > zero(0).
>>> Which is a bit silly of them, really, considering their kids aren't born
>>> one year old, but zero years.
>>
>>Except in Korea, and maybe other countries as well. A newborn is 1 year
>old
>>over there.
>
>Or if you're a horse.
>
>/BAH
>
>Subtract a hundred and four for e-mail.

I was thinking that a horse celebrated its birthday on the first of the
year, regardless of when it was born.  Thus, a horse born in December
is a one year old in January.

Dave

P.S. Standard Disclaimer: I work for them, but I don't speak for them.


------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: How would this effect the good old One Time Pad?
Date: Fri, 30 Jul 1999 14:39:25 +0200

wtshaw wrote:
> 

> There is something here, and I did a demonstration of it here in 1995, and
> was part of my campaign to stress the idea that any text can be made into
> a useful key.  The idea is offsets, that a single long string can be
> compounded with itself several times to create a new string.  One or two
> offsets can be solved fairly easily with a short string.  Even for a short
> string, a few more offsets and the original is very difficult to
> reconstruct.

Tiny remarks:
 
It has been conjectured that certain proprietary (non-disclosure) 
algorithms sold for money employ several offsets into a pool of bit 
sequences.

I suppose that a straightforward generalization is to employ
n sequences and n offsets, perhaps with the n sequences dynamically 
chosen by the user from a larger set.

M. K. Shen

------------------------------

From: [EMAIL PROTECTED] (Patrick Juola)
Subject: Re: Virtual Matrix Encryption
Date: 30 Jul 1999 09:59:37 -0400

In article <7ns555$ajj$[EMAIL PROTECTED]>,  <[EMAIL PROTECTED]> wrote:
>In article <7nq7c4$f2$[EMAIL PROTECTED]>,
>  [EMAIL PROTECTED] wrote:
>>
>> > Otherwise, there's no real description of the algorithm.  It uses
>> > "theoretically infinite matrices", whatever the hell they are.
>IMHO,
>> if
>> > you want to secure your files, use PGP or another product that uses
>> > proven algorithms.
>> >
>>
>> Not to be picky but the algorithms in PGP have never been proven
>> secure.  They appear to be.  However the algorithms in PGP have a more
>> formal treatment (and less insane claims like OTP strength...)
>>
>Just out of curiosity, which algorithms in PGP haven't been proven
>secure?

All of them.

        -kitten

------------------------------

From: "Kasper Pedersen" <[EMAIL PROTECTED]>
Subject: Re: With all the talk about random...
Date: Fri, 30 Jul 1999 16:44:28 +0200

Jeffery Nelson <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> What's so bad about C++ or (the dreaded, but still valid) BASIC functions
> for generating random numbers?  Are their methods flawed?  I know they
> derive them from a list and the time of day, but this seems random ENOUGH
> to not be able to recognise a pattern.  I was using random keys in MY one
> time pad generated by C++.  Should I reconsider?
>
> -Jeff

The basic difference between the numbers you get and the ones you need (for
encryption) is that the ones you get from a PRNG (pseudo-random number
generator) are random for statistical purposes but not random AT ALL, while
the ones you get from a TRNG (hardware noise device) may not be good enough
for statistical purposes (bias, correlation), but they are random.

PRNG: Something that looks random, but is completely deterministic.

Look at a series of numbers:  01 07 49 43
It will repeat (next is 01 again)  because what I've written is just
s:=(s*7) mod 100.

The random() generator used in Borland C++ is
s:=(s*134775813+1) mod 2^32
When you use it in the form random(256), you get the most significant 8 bits
of s.
To crack a junk-OTP with this generator, I'll guess a single letter, maybe
'e'?
Then there are 24 bits left. The simplest approach is to run through the 16M
possibilities; it takes less than a second per character. Assuming there are
20 non-'e' letters before the 'e', I have to wait 20 seconds for it to crack
(checking that the output is ASCII characters).

When you have s, it's broken.
To turn it around, random() is as weak as it can possibly be.
To get that OTP, you need a real source of random numbers.

/Kasper Pedersen

------------------------------

Crossposted-To: alt.folklore.computers
From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: How Big is a Byte? (was: New Encryption Product!)
Date: Fri, 30 Jul 1999 13:43:59 GMT

[EMAIL PROTECTED] wrote:
>> The problem with this is that array[0] is the first item. The reason this
> is a problem is that in computer addressing the address refers to the next
> item in memory (i.e. the item that starts at array[0]).

No, at the machine-instruction level, indexing adds an offset to a
base address to get the effective address.  Thus an offset of 0 is
naturally mapped to the beginning of an array referenced by its
base address.  The concepts of "first" and "next" do not enter into
this.

------------------------------

From: Anton Stiglic <[EMAIL PROTECTED]>
Subject: Re: Q: Does ElGamal require that (p-1)/2 is also prime like DH?
Date: Fri, 30 Jul 1999 11:29:37 -0400

> Thus, primes of the form 2q+1 , q prime,  are indeed much rarer than
> primes in general.

Of course they are Bob, this is what started this news discussion!
But thanks for pointing it out again!

> Bzzt. Wrong.  Thank you for playing. Characterizing p = 2q+1
> as a 'probable prime' because it has this special form "Isn't
> even wrong".  It has NOTHING to do with probable primes.  p is
> a probable prime if it satisfies a^(p-1) = 1 mod p  for (a,p) = 1.
> Your statement "With this form, p is what we call probably prime"
> is nonsense.

Bzzt, Double Wrong!!!   Go back to the minors.
This statement has NOTHING to do with probable primes!
A number p is said to be a probable prime if it is believed to be
prime due to the fact that it passed a probabilistic primality test.

A composite integer p that satisfies a^(p-1) = 1 mod p is called a
pseudoprime to the base a.

A composite integer n such that a^(n-1) = 1 mod n for all integers a
which satisfy gcd(a,n) = 1 is called a Carmichael number.

You were maybe confusing and mixing some of these definitions.
Don't you work at RSA labs???  You're supposed to know these things.


> If q is indeed prime,  then you can trivially PROVE p is prime.
> Since all the factors of p-1 are known, all you need do is demonstrate
> a primitive root. In fact, all you need do to PROVE primality is
> to find  a depending on r such that for each r|p-1   one has
> a^(p-1)/r != 1 mod p but a^p-1 = 1 mod p. (Selfridge).

> Using Miller-Rabin is wrong. It will take longer and does not yield
> certainty.


Do you have an exact reference for this, showing the complexity of the
algorithm, so that it can be compared to using Miller-Rabin?


Anton


------------------------------

From: <[EMAIL PROTECTED]>
Crossposted-To: alt.folklore.computers
Subject: Re: How Big is a Byte? (was: New Encryption Product!)
Date: Fri, 30 Jul 1999 11:38:35 -0400

On 30 Jul 1999, Patrick Juola wrote:

> In article <[EMAIL PROTECTED]>,
>  <[EMAIL PROTECTED]> wrote:
> >On Thu, 29 Jul 1999, John Savard wrote:
> >> Zero may be my first counter state, but it is not the number
> >> associated with the first item or event. As there are times when
> >> reserving a storage location for element zero of an array makes sense,
> >> and other times when it does not, because of the purpose to which the
> >> array will be put, flexibility is useful...but, on the other hand,
> >> having to specify this every time is wasteful too. Starting with
> >> element zero at least allows both option with, at worst, a slight
> >> waste of storage, which is probably worth it to make one's program
> >> easy to understand and document, and avoid unnecessary offset
> >> arithmetic.
> >
> >The problem with this is that array[0] is the first item.
> 
> Only in C and its descendants; in Fortran, if I recall properly, array(1)
> is the first item.  In Pascal, I can define an array to start and finish
> at any ordinal position -- and if I'm using a Real Language(tm) I can
> define arrays based on any durn-fool index scheme I want.  (I just
> *love* the associative arrays available in /bin/awk.)

I'm sorry. I thought we were discussing off by one errors and the
differences between counting and addressing, not your bias against C. Yes,
I was assuming C but the C usage comes from the natural methods of most
assembly languages and is very logical when you think about it.

> >The reason this
> >is a problem is that in computer addressing the address refers to the next
> >item in memory (i.e. the item that starts at array[0]).
> 
> No, the reason for this is that the semantics of C-style arrays are
> defined as syntactic sugar over a pointer access.  Which is one of
> the most serious security/reliability holes in C/Unix.

Yes, C arrays come very nearly directly from (or at least have a lot in
common with) assembly language. And arrays are pointers in C. However, I
really think you're exaggerating when you try to claim that C's array
syntax is a security or a reliability hole in either C or Unix. The
improper use of C array syntax can cause said holes but then most C
programmers have gotten used to the "it starts at 0" concept. Would you
also say it's a major flaw with assembly language? Should we all start
having our assembly language programs use pointers to arrays that start
one byte sooner than the array just so we can say that item 1 is at array:
+ 1?

Now, if you want to discuss this rationally rather than showing your bias
against C I'd be happy to.


PS Off by one errors occur in more languages than just C and kin.
____________________________________________________________________________
                                  |
"A little nonsense now and then,  |  "If it walks out of the fridge, let
 Is relished by the wisest men."  |     it go"  -- John Dougherty
                           --W.W. |  "If it loves you it will come back."
                                  |             -- Ian Davis
__________________________________|_________________________________________
Theta Xi 

Kappa Sigma 1175


------------------------------

From: [EMAIL PROTECTED] (Patrick Juola)
Crossposted-To: alt.folklore.computers
Subject: Re: How Big is a Byte? (was: New Encryption Product!)
Date: 30 Jul 1999 12:24:46 -0400

In article <[EMAIL PROTECTED]>,
 <[EMAIL PROTECTED]> wrote:
>On 30 Jul 1999, Patrick Juola wrote:
>
>> In article <[EMAIL PROTECTED]>,
>>  <[EMAIL PROTECTED]> wrote:
>> >On Thu, 29 Jul 1999, John Savard wrote:
>> >> Zero may be my first counter state, but it is not the number
>> >> associated with the first item or event. As there are times when
>> >> reserving a storage location for element zero of an array makes sense,
>> >> and other times when it does not, because of the purpose to which the
>> >> array will be put, flexibility is useful...but, on the other hand,
>> >> having to specify this every time is wasteful too. Starting with
>> >> element zero at least allows both option with, at worst, a slight
>> >> waste of storage, which is probably worth it to make one's program
>> >> easy to understand and document, and avoid unnecessary offset
>> >> arithmetic.
>> >
>> >The problem with this is that array[0] is the first item.
>> 
>> Only in C and its descendants; in Fortran, if I recall properly, array(1)
>> is the first item.  In Pascal, I can define an array to start and finish
>> at any ordinal position -- and if I'm using a Real Language(tm) I can
>> define arrays based on any durn-fool index scheme I want.  (I just
>> *love* the associative arrays available in /bin/awk.)
>
>I'm sorry. I thought we were discussing off by one errors and the
>differences between counting and addressing, not your bias against C. Yes,
>I was assuming C but the C usage comes from the natural methods of most
>assembly languages and is very logical when you think about it.

Interesting statement; I've been programming in C professionally since
quite possibly before you touched your first computer -- and here I'm
accused, irrelevantly, of an anti-C bias.

My feelings about C are beside the point; I'm simply stating that the
array semantics in C are a matter purely of *convention* and aren't any
more natural or intuitive -- and in fact, to a traditionally trained
mathematician are *less* natural/intuitive -- than one-based indexing
or any other method you like.  C arrays are just thinly disguised pointers,
which again aren't particularly natural or intuitive; they just happened
to be convenient for Thompson and Ritchie.  Whether I like C or not,
I can certainly recognize that the zero-based indexing conflicts with
standard mathematical practice and is difficult to present to students.

You seem to be making the common mistake of believing that the
structures in your first or preferred language are somehow less
arbitrary than the structures in other languages.  'Tain't so,
dude.

>> >The reason this
>> >is a problem is that in computer addressing the address refers to the next
>> >item in memory (i.e. the item that starts at array[0]).
>> 
>> No, the reason for this is that the semantics of C-style arrays are
>> defined as syntactic sugar over a pointer access.  Which is one of
>> the most serious security/reliability holes in C/Unix.
>
>Yes, C arrays come very nearly directly from (or at least have a lot in
>common with) assembly language. And arrays are pointers in C. However, I
>really think you're exaggerating when you try to claim that C's array
>syntax is a security or a reliability hole in either C or Unix.

I stand by my statement.  Two words : finger bug.

> The
>improper use of C array syntax can cause said holes but then most C
>programmers have gotten used to the "it starts at 0" concept.

Of course, one can get used to anything in time -- why should one
have to work that hard to master arcana?

> Would you
>also say it's a major flaw with assembly language?

Yes -- although assembly language itself has so many other major flaws
that it's hard to know where to begin cataloguing them.  The main one,
of course, being that assembly language is so damn hard to write
*because* it follows the conventions of the machine instead of conventions
useful for thinking about the problem at hand.

If I needed to make a graph of financial data covering the years 1950-1999,
it would make much more sense in the context of that problem to
be able to use an array indexed from 1950 to 1999, inclusive, with
compiler support to prevent out-of-bounds accesses.  Which C notably
fails to provide.

>Should we all start
>having our assembly language programs use pointers to arrays that start
>one byte sooner that the array just so we can say the item 1 is at array:
>+ 1?

No, we should stop writing assembly language programs.  No one does
large projects in assembly any more for a very good reason.  The
compiler is usually a better assembly language programmer than you are
and doesn't make careless mistakes in translation.

>Now, if you want to discuss this rationally rather that show your bias
>against C I'd be happy to.

"If all you have is a hammer, the entire world looks like a nail."
Again, I'm not especially biased against C -- but I've been using it
enough to be well aware of its weaknesses, and array indexing is,
sho 'nuff, one of the big ones.  

>PS Off by one errors occur in more languages than just C and kin.

Yes.  But typically many fewer of them.

The pointer-based semantics of C arrays encourages lots of
errors : off-by-one errors, overrun screws, smashing the stack,
unexpected aliasing, and so forth.  Most of the major security holes
in Unix -- or at least the publicized ones -- are related in some
fashion or another to the array semantics.  Array semantics, esp.
w.r.t. multidimensional arrays, are probably the hardest part of C
to teach to a student and the easiest part to screw up (and often the
hardest bugs to find and fix).  

The various bondage and domination languages -- Pascal, Ada, &c. --
specifically implement arrays as *packaged* memory blocks, which
allows greater flexibility (and hence greater intuitiveness) in
calling as well as better ability to catch and trap errors.  This,
in my view, doesn't make up for the *rest* of the officious SE crap
one has to put up with in order to program in Ada, but if I had the
ability to pick and choose features in designing my own ideal 
language, I would definitely use Ada- or Pascal-style array semantics
in preference to C.  And general associative arrays in preference
to either if I had good compiler support.

        -kitten



------------------------------

From: [EMAIL PROTECTED] (Francois Grieu)
Crossposted-To: sci.math
Subject: Re: (Game) 80-digits Factoring Challenge
Date: Fri, 30 Jul 1999 18:55:47 +0200

Don McDonald <[EMAIL PROTECTED]> wrote:

>> In Mathematica, you can type
>>       PowerMod[2,n-1,n]
>> to run the (weak) probable prime test for base 2.

> How do I calculate the powermod function in PARI-gp research calc.?

to compute a^e mod n do
 compo(mod(a,n)^e,2)

so what you want is  
 compo(mod(2,n)^(n-1),2)

Francois Grieu - Innovatron

------------------------------

Date: Fri, 30 Jul 1999 18:01:07 +0200
From: Bjoern Sewing <[EMAIL PROTECTED]>
Subject: SSL vs TLS

Hi

can anyone tell me the differences between
SSLv3 and TLSv1 ??

Thanks a lot

Bjoern
-- 
===============================================================
Bjoern Sewing - University of Bielefeld        (TF-NOV - N3.02)

email:  [EMAIL PROTECTED]               
===============================================================

------------------------------

From: Anton Stiglic <[EMAIL PROTECTED]>
Subject: Re: Is breaking RSA NP-Complete ?
Date: Fri, 30 Jul 1999 12:46:41 -0400

Peter Pearson wrote:

> Anton Stiglic wrote:
> >
> > Bob Silverman wrote:
> >
> > > In fact there are several well known crypto systems (e.g. Chor-Rivest,
> > > Ajtai-Dwork,  and other knapsack related problems) that are known to be
> > >  NP-Complete.
> >
> > I am not an expecrt in these cryptosystems, so I'll just give references:
> >
> > "Most encryption algorithms based on the knapsack problem are breakable",
> > see
> >   Brickel, "The cryptanalysis of knapsack cryptosystems", R.D. Ringeisen
> > and F.S.
> [and on and on and on]
>
> In the interest of precision, please note that Bob Silverman's original
> objection was to the following assertion, which you, Anton Stiglic,
> made:
>
>   No, it is not NP-Complete.  In fact, there is no crypto-system that is
>   based on an NP-Complete problem.
>
> To make such an assertion with honest confidence, one would have to
> have an awesome familiarity with the field, which I don't believe you
> care to claim.

I have corrected my mistake in a reply to that post; I meant to talk about
NP-Hard and not NP-Complete.  It is a result that I read a long time
ago and did not remember clearly (I mentioned that), which is why I referred
ALL OF YOU to the article by Brassard before replying to this post.
I don't think Bob Silverman was familiar with the article, as competent as
he might be.


>To make such an assertion with honest confidence, one would have to
>have an awesome familiarity with the field, which I don't believe you
>care to claim.

I don't, and I don't think anybody else here does either.  Brassard does; I
know him personally, which is why I refer to his article.

Please, I have corrected my statement; do not bother to further this
discussion unless you want to talk about what is in the article.



------------------------------

From: [EMAIL PROTECTED] (Francois Grieu)
Subject: Re: RSA keys 2^64x + c versus SNFS
Date: Fri, 30 Jul 1999 18:43:59 +0200

Bob Silverman <[EMAIL PROTECTED]> wrote (edited):
> Francois Grieu <[EMAIL PROTECTED]> wrote
>> (..ISO 9796-1 suggests..) public keys of form
>> n = pq = 2^(64x) + c  with  |c| > 2^(48x-1)
>> Loosely speaking  c  is at least 75% the size of the modulus.
>> Question: is that enough to guard against SNFS ?
>
> No. It is not enough.  c need to have reasonably large HAMMING
> WEIGHT (..so that..) it is harder to represent  n  as the
> instance of a polynomial with small coefficients.
>
> (..) if  n = pq  is of the form  sum(2^a_i)  AND
> sum(a_i) is moderately small, then such moduli are easier to
> break.  When c is indeed small then the Hamming weight is also
> small, which is why I gave the original warning.


To clear that point first, I do understand excellent reasons NOT to
include special moduli in a standard like X9.31 which must be
simple, general, and cautious.
And considering the recent results [4] [5] [6] that special moduli
weaken some signature padding schemes, including (marginally) the one
in X9.31, the standard's board was in retrospect wise to follow Bob's
advice and drop special moduli, whatever the reason.


To get back to the original question, if I get Bob right, the Special
Number Field Sieve works for moduli which are the image of an integer
by some polynomial with small integer coefficients [I'd appreciate a
confirmation as I confess my ignorance on NFS: could not follow the
math in the factoring game past MPQS with 2 large primes]. Assuming
this, I reason that if nearly 75% of the bits in  pq  are "random
looking", the total count of bits in the coefficients and the evaluation
point of the simplest polynomial expression of  pq  is likely to be
about 75% the bit size of pq, else we'd have an algorithm to compress
apparent randomness.  I doubt such a long polynomial expression
significantly helps NFS, even if we could find it.

I thus fail to see how SNFS (or any known factoring algorithm beating
GNFS) could, with reasonable probability (>2^-100), take advantage of
the special form of a  pq  generated as follows, for x>=8
1) pick a randomly seeded strong prime  p  such that
         2^(32x)/sqrt(2) < p < 2^(32x)-2^99
2) pick a randomly seeded strong prime  q  such that
     (2^(64x)-2^(48x))/p < q < (2^(64x)-2^(48x-1))/p

I consider these special moduli have practical virtues for signature
verification in Smart Cards (lowly 8 bit things with 0.5 MIPS, 8KB ROM,
1KB EEPROM, 192 bytes RAM) because such special moduli
- lower storage requirement for the public key
- save some bytewise multiplications during modular reduction
- save the code and effort associated with quotient estimation in the
  modular reduction code
- makes it practical to just reject those vanishingly few signatures
  where verification triggers a wrong quotient estimation (rather than
  special-casing this unlikely situation).


Francois Grieu - Innovatron

----

[4] J.-S Coron,  D. Naccache, and J. P. Stern, "A New Signature Forgery
Strategy applicable to ISO-9796-1/2, ECash(tm), PKCS#1 V2.0, ANSI X9.31,
SSL-3.02", to appear; preliminary version circulated as
ISO/IEC JTC1/SC27 N2329.

[5] R.D. Silverman and D. Naccache, "Recent Results on Signature Forgery",
<http://www.rsa.com/rsalabs/html/sigforge.html>

[6] L. Guillou, "Report of the SC27/WG2 ad hoc meeting in Paris,
France, May 14, 1999 on Signature forgery attacks", circulated as
ISO/IEC JTC1/SC27 N2355.

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
