Cryptography-Digest Digest #479, Volume #12      Fri, 18 Aug 00 17:13:01 EDT

Contents:
  Re: Java Random Numbers (Daniel Leonard)
  Re: 215 Hz five-qubit quantum processor (CLSV)
  Re: pRNG, ECC and block modes ([EMAIL PROTECTED])
  Re: Symmetric Encryption and Decryption ([EMAIL PROTECTED])
  Re: Just a thought... (David A Molnar)
  Re: Bytes, octets, chars, and characters (John Savard)
  Re: blowfish problem ("Douglas A. Gwyn")
  Re: Just a thought... ("Paul Pires")

----------------------------------------------------------------------------

From: Daniel Leonard <[EMAIL PROTECTED]>
Subject: Re: Java Random Numbers
Date: Fri, 18 Aug 2000 19:54:36 GMT

On Fri, 18 Aug 2000, Scott Fluhrer wrote:

>
> <[EMAIL PROTECTED]> wrote in message news:8nje9h$pis$[EMAIL PROTECTED]...
> > Java implementation of Random Number Generator.
> Is this supposed to be another implementation of the exact same random
> number generator you published earlier?  If so, have you run test vectors
> to verify that it is the same generator?  It looks like it has a few bugs.
> In particular, I believe that char's in Java are always 8 bits, and so
> anytime you try to store something in the "upper 8 bits", it gets lost.

In Java, primitive data types are well defined (as opposed to C; see the
blowfish thread):

byte   -> 8 bits
char   -> 16 bits
short  -> 16 bits
int    -> 32 bits
long   -> 64 bits

They are, and always will be, those sizes.
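
A two-line check of the specific worry above, that the "upper 8 bits"
of a char get lost (a minimal sketch; the class name is arbitrary):

public class CharWidth {
    public static void main(String[] args) {
        char c = '\uFFFF';            // largest char value: all 16 bits set
        int upper = (c >> 8) & 0xFF;  // extract the "upper 8 bits"
        System.out.println(upper);    // prints 255, so nothing was lost
    }
}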

==========
Daniel Léonard

OGMP Informatics Division    E-Mail: [EMAIL PROTECTED]
Département de Biochimie     Tel   : (514) 343-6111 ext 5149
Université de Montréal       Fax   : (514) 343-2210
Montréal, Quebec             Office: Pavillon Principal G-312
Canada H3C 3J7               WWW   :


------------------------------

From: CLSV <[EMAIL PROTECTED]>
Subject: Re: 215 Hz five-qubit quantum processor
Date: Fri, 18 Aug 2000 21:58:39 +0200

Bernd Paysan wrote:

> Paul Rubin wrote:
> > That's not even in NP.  The shortest decompressor might have
> > exponential (or worse) running time or space requirements.

> There's a derivative, the practical compressor. It assigns cost for
> decompressing, and costs for transfer. I.e. if you can transfer a bit in
> 1 million instructions, a compression that needs one million instruction
> cycles more to complete isn't worth it if it only saves one bit. So you can
> generate programs for your VM, and simulate them for n*1M cycles (n is
> the bitness of the result), and look at the first program that matches
> your criteria. Same goes for space (you can abort all programs that take
> "too much" space, whatever your requirement for the space limitation
> is). This will not give the shortest program, but the shortest
> "practical" program. And the search algorithm for sure is NP, but with
> the positive side that for a 2^n input, it has only to test up to 2^m
> outputs, where m is the bit length of the compressed file (which should
> be smaller than n).

Four points:

1 What if the only short(er) programs take > n*1M cycles?
  Then your scheme fails.
2 If a 2^n input cannot be compressed (which you don't know beforehand),
  then m >= n.
3 Did you calculate the number of programs of, say, size 2^16 bits?
  Do you still think it is 'practical', even at one million instructions
  per candidate? (A back-of-the-envelope sketch follows this list.)
4 Take a look at: http://www.drb.insel.de/~heiner/BB/
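
To put point 3 in numbers, here is a minimal Java sketch (BigInteger
arithmetic only; no VM is simulated, and the class name is arbitrary)
of the size of the search space:

import java.math.BigInteger;

public class SearchCost {
    public static void main(String[] args) {
        int m = 1 << 16;  // program size: 2^16 bits
        // 2^(2^16) candidate programs of exactly that size
        BigInteger programs = BigInteger.ONE.shiftLeft(m);
        // one million simulated cycles for each candidate
        BigInteger cycles = programs.multiply(BigInteger.valueOf(1000000L));
        System.out.println("candidates: about 10^"
                           + (programs.toString().length() - 1));
        System.out.println("total cycles: about 10^"
                           + (cycles.toString().length() - 1));
    }
}

The candidate count alone is a number of nearly 20,000 decimal digits.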

Regards,

        CLSV

------------------------------

From: [EMAIL PROTECTED]
Subject: Re: pRNG, ECC and block modes
Date: Fri, 18 Aug 2000 19:45:42 GMT

In article <[EMAIL PROTECTED]>,
  [EMAIL PROTECTED] (Rich) wrote:
> Hello group, as a long-time lurker I've learned quite a lot and very
> much appreciate all the discussions. I now have a few questions, which
> I hope you folks can weigh in on. They are as follows:
>
> 1. Which pseudo-random-number-generator is the "Best"?: Yarrow;
> Blum-Blum-Shub; Micali-Schnorr or Wichmann-Hill?
>
> 2. Which one-way hash is the "Best"?: Havel; SHA-2; DHA; Tiger;
> Ripemd-160; Blum-Blum-Shub; Meyer-Davis; or GOST 34.11-94?
>
> 3. Which Message Authentication Code is the "Best"?: H-MAC or D-MAC?
>
> 4. Which Error-Correction Code is the "Best"?: Turbo Codes; binary
> Goppa or Viterbi?
>
> And, finally: 5. Which block-cipher mode is "Best"?: CBC w/CTS or
> Counter-based CTR?
>
> The answers to these five questions have eluded me. An additional
> consideration is that I'd like to use one of each of the 5 different
> components with Serpent-1 and some compression software (either 777 or
> Malcolm Taylor's RK) - so, the combination might be a factor in your
> responses.
>
> Any help or discussion will, undoubtedly, be of value. Thank you very
> much in advance.

If you had a clue about any of those different fields, you wouldn't ask
such a question.

What is the "best" makes no sense since for alot of those no
quantifiable measure of "best" exists.

Also what problem are you trying to solve?  Are you looking for some
buzzword compliancy?

Tom


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

From: [EMAIL PROTECTED]
Subject: Re: Symmetric Encryption and Decryption
Date: Fri, 18 Aug 2000 19:47:11 GMT

In article <8njkv5$[EMAIL PROTECTED]>,
  "Koon Kin Hin, Kenny" <[EMAIL PROTECTED]> wrote:
> To: Everyone
> From : Kenny
>
> Can anyone help me to answer my question?
> 1. Can anyone tell me the process of Symmetric Encryption and
> Decryption in detail?
> 2. How can a smart card perform processing functions by itself?

A block cipher (which performs symmetric encryption) is just an
algorithm.  It can be run on a desktop, on a smartcard, in hardware, or
on an abacus.
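
To make "just an algorithm" concrete, here is a minimal round trip
using Java's standard crypto API. This is a sketch only: it assumes a
JCE provider supplying Blowfish is installed, and ECB mode is used
purely to keep the demo short (don't use ECB for real data).

import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class SymmetricDemo {
    public static void main(String[] args) throws Exception {
        SecretKey key = KeyGenerator.getInstance("Blowfish").generateKey();

        Cipher cipher = Cipher.getInstance("Blowfish/ECB/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key);
        byte[] ct = cipher.doFinal("attack at dawn".getBytes());

        cipher.init(Cipher.DECRYPT_MODE, key);  // the same (symmetric) key
        System.out.println(new String(cipher.doFinal(ct)));
    }
}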

I suggest you read up on the field, e.g. in HAC (the Handbook of
Applied Cryptography), which is available online.

Tom


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

From: David A Molnar <[EMAIL PROTECTED]>
Subject: Re: Just a thought...
Date: 18 Aug 2000 19:45:58 GMT

Paul Pires <[EMAIL PROTECTED]> wrote:

>> each key and comparing it to a copy of the plain text) it could take
>> them as long as 36,000 years.

> You have unprovable conditions already. How do you prove that it can't be
> run faster according to a faster process? 

at least three things you can do :

        1) base your function on something which seems to be
        "inherently sequential," which people have looked at,
        and have no idea how to do quickly. See Rivest, Shamir, and Wagner,
        "Time-Lock Puzzles and Time-Release Crypto", for an example
        of such a construction (a sketch of its sequential core
        appears after this list).

        2) try to develop a theory of which functions can and can not be
        parallelized. actually, you don't have to develop it yourself;
        some of the work has been done. Look for the book 
        _Limits to Parallel Computation : P-Completeness Theory_ by
        Hoover, Greenlaw, and Ruzzo. 
        They describe the "parallel computation thesis" -- that the
        class of "efficiently decidable in parallel" languages
        corresponds with "Nick's Class" or NC. 

        What's NC, exactly?

        It's the class of languages decided by uniform families of
        Boolean circuits over {AND, OR, NOT} with fan-in 2, polynomial
        size, and polylogarithmic depth. You can also think of it as
        what a parallel machine with polynomially many processors can
        decide in polylogarithmic time.
        
        One question you can ask : Does NC = P, where P is the class
        of languages accepted by machines running in polynomial time?
        
        Naturally, this is open. NC \subset P for sure, but it seems
        unlikely that P \subset NC. In fact, we can define a notion
        of a "P-complete" problem in the same way that we can define
        NP-complete problems. 
        If any of these problems are "parallelizable," then NC = P.
        But we believe that NC != P, so they of course aren't
        parallelizable...

        So now you can define an NC vs P analogue of one-way functions!

        Great theory. Minor problems for cryptography :

        The notion of "efficiently parallelizable" in question here
        is the notion of going from an O(n^k) time algorithm to a
        O(log^k n) time algorithm, given O(n^k) processors. 
        This is a strong notion of "efficient", and too strong for
        the way that we'd like to use cryptography -- we need to
        know that how much having *two* computers helps, 
        not O(n^k). 
        and we'd like to rule out even polynomial or god forbid,
        *constant factor* speedups!
        
        This is fixable. The Greenlaw, Hoover, Ruzzo
        book mentions a notion of "Strict P-Completeness" which 
        attempts to classify problems that don't have any 
        parallel speedup whatsoever. The book mentions that this
        is "relatively new" in 1995; I haven't seen anything
        about it since (then again, I'm not a parallel computation
        theorist). 

        Also, the same litany of problems which accompanies trying
        to base cryptography on NP-complete problems presents itself

                * asymptotic nature of "hardness" 
                i.e. "For all n > n_0, this is hard...what's n_0?
                        ah, well, look at the proof" 
                This seems to be fixable by looking at the reductions
                to see what n_0 should be. that doesn't mean it's easy.

                * average case vs. worst case parallelizability
        
                * randomized NC -- it exists. I don't know if
                it's more powerful or not. but I bet that
                your adversary will flip random coins if it is...

                * maybe other silly things as well.
                I was wondering if there is an analogue of the 
                P vs. UP === one way functions result here, but
                it's not obvious. all the machines with which
                we're dealing here are deterministic, thus unambiguous. 

                similarly, there may be analogues of Brassard's
                result about deterministic public-key cryptosystems
                NP-complete to break --> NP = coNP.
                Now, P DOES equal coP, but maybe something else 
                silly happens.


        3)      The field of algebraic complexity has derived parallel
                lower bounds for some _specific problems_ in some
                particular models. Finally, a way to prove something
                about your particular problem! 

                For example, if you go to Ketan Mulmuley's home page,
                you'll see a parallel lower bound paper for maxflow. 
                The catch is that he uses a model where the parallel
                machine is not allowed to access individual bits within
                a machine word. It can only do + - * / on words.
                Now, every maxflow algorithm I know fits this model,
                but it doesn't rule out something strange. 
                
                There is a completist book by Burgisser, 
                Clausen, and Shokrollahi on this subject. I find
                it rather rough going, mostly because I haven't
                taken enough algebra (just one semester). 
                Borodin and Cook is supposed to be the other 
                good book in the area. 

                The upside for this approach is that you can prove
                things about particular problems. Which is what we want
                for cryptography, right? The downside is that this
                area seems to be technical and details about models
                are important. I'm still learning about it. 
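
As promised under point 1, here is a minimal Java sketch of the
sequential core of a Rivest-Shamir-Wagner time-lock puzzle: computing
x^(2^t) mod n by t successive squarings. Parameter sizes here are toy
values of my choosing. Each squaring needs the previous result, and
without the factorization of n no shortcut is known; whoever built n
from p and q can instead reduce the exponent mod phi(n) and finish in
a few multiplications.

import java.math.BigInteger;
import java.security.SecureRandom;

public class TimeLock {
    public static void main(String[] args) {
        SecureRandom rnd = new SecureRandom();
        BigInteger p = new BigInteger(512, 100, rnd); // probable prime
        BigInteger q = new BigInteger(512, 100, rnd); // probable prime
        BigInteger n = p.multiply(q);                 // puzzle modulus
        BigInteger x = new BigInteger(1023, rnd).mod(n);

        long t = 100000L;     // toy value; real puzzles use far larger t
        BigInteger y = x;
        for (long i = 0; i < t; i++) {
            y = y.multiply(y).mod(n);   // each step needs the last one
        }
        System.out.println("done after " + t + " sequential squarings");
    }
}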

> How do you prove that it cannot be
> hacked faster than brute force?

oh, well, that's the problem, ain't it. 

you can maybe prove that computing f(x) is not parallelizable. 
but that doesn't tell you much about f^-1(x) by itself.

-David



------------------------------

From: [EMAIL PROTECTED] (John Savard)
Crossposted-To: comp.lang.c,alt.folklore.computers
Subject: Re: Bytes, octets, chars, and characters
Date: Fri, 18 Aug 2000 19:53:41 GMT

On Fri, 18 Aug 2000 03:59:04 +0100, David Hopwood
<[EMAIL PROTECTED]> wrote, in part:

>The use of
>the word "character" to mean anything other than a unit of text (such as
>a symbol, letter, etc., or possibly a control code), should be strenuously
>resisted; characters are *not* units of storage.

This is true _now_, with such things as UTF-8.
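
For example, in Java (where a char is a 16-bit code unit rather than an
abstract character), a one-character string occupies two bytes once
encoded. A minimal sketch, with an arbitrary class name:

public class NotAByte {
    public static void main(String[] args) throws Exception {
        String s = "\u00e9";               // e with acute accent
        byte[] utf8 = s.getBytes("UTF8");  // encode as UTF-8
        System.out.println(s.length());    // 1: one 16-bit char
        System.out.println(utf8.length);   // 2: two bytes of storage
    }
}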

However, in the past it was customary to refer to a six-bit area of a
computer's memory (the span occupied by one character of text) as a
character.

The term byte did not come into general use until computers stored
characters in 8-bit areas.

The term 'octet' is a technical term found mainly in international
standards, where it unambiguously denotes a unit of exactly 8 bits.

Thus, normally, a byte refers to an 8-bit unit of storage, and a
character refers to a unit of storage which may differ from 8 bits,
but which is used on a given system for an ordinary printable
character. To refer to the storage elements of an IBM 1401 as other
than characters would be confusing, and not in accord with the
historical usage. No one is likely to be inclined to call them either
bytes or septets.

Just as one computer might have a word length of 16 bits and another
might have a word length of 32 bits, I suppose that in addition to
character sizes of 6 and 8 bits, a computer might have a character
size of 16 bits.

Variable-length characters might be found on external media, and might
appear in data streams going to and from devices such as printers and
modems, but typically characters in such a form would be converted to
a uniform-length form for use in memory, to permit manipulation.

Of course, what with UNICODE, I suppose that constructs such as

      CHARACTER*80 CARD

would have to be replaced by

      CHARACTER    CARD(80)

because instead of a string of two characters, the type CHARACTER*2
would now refer to a 16-bit UNICODE character. This was a conceptual
flaw in FORTRAN, it must be admitted.

John Savard
http://home.ecn.ab.ca/~jsavard/crypto.htm

------------------------------

Crossposted-To: comp.lang.c
From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: blowfish problem
Date: Fri, 18 Aug 2000 19:48:18 GMT

Paul Schlyter wrote:
> what about the DEC-10, which used both 7-bit and 8-bit bytes?  Sure, a
> C implementation would use 8-bit chars of course.  The word size of
> the DEC-10 was 36 bits, and 36/8 = 4.5.  An int would naturally be
> 36 bits on the DEC-10 -- but what value should sizeof(int) then return?

There *were* C implementations on the PDP-10.  The most common
solution to the byte addressability issue was to use 9 bits
per char.  By the way, in other languages (lacking malloc etc.)
there were often five 7-bit ASCII characters packed into a word.
I don't recall anybody using 8-bit bytes on the PDP-10, but I
quit working with it before 8-bit toy computers like the IBM PC
became popular.
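
As an illustration of that packing, here is a small Java sketch
(hypothetical: a long stands in for a 36-bit word) that stores five
7-bit ASCII characters the way the PDP-10 did, leaving one bit unused:

public class PackFive {
    public static void main(String[] args) {
        String s = "HELLO";
        long word = 0;
        for (int i = 0; i < 5; i++) {
            word = (word << 7) | (s.charAt(i) & 0x7F); // append 7 bits
        }
        word <<= 1;  // the leftover 36th bit, left clear
        System.out.println(Long.toOctalString(word));  // fits in 36 bits
    }
}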

------------------------------

From: "Paul Pires" <[EMAIL PROTECTED]>
Subject: Re: Just a thought...
Date: Fri, 18 Aug 2000 13:17:59 -0700


David A Molnar <[EMAIL PROTECTED]> wrote in message
news:8nk3pm$dhf$[EMAIL PROTECTED]...
> Paul Pires <[EMAIL PROTECTED]> wrote:
>
> [snip: David's full reply, quoted in its entirety above]
>
Hi David,
You need to cut down on your caffeine intake.

Paul

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
