Cryptography-Digest Digest #616, Volume #14 Fri, 15 Jun 01 09:13:01 EDT
Contents:
Re: Help with Comparison Of Complexity of Discrete Logs, Knapsack, (Mok-Kong Shen)
Re: Best, Strongest Algorithm (gone from any reasonable topic) - VERY (Tim Tyler)
NIST Rng Test Software (Unix) (Brice)
Re: Alice and Bob Speak MooJoo (Roger Fleming)
Re: Alice and Bob Speak MooJoo (Roger Fleming)
Re: Algorithm take 3 - LONG (was : Re: RSA's new Factoring Challenges: $200,000
prize. (my be repeat)) ("Michael Brown")
Re: Help with Comparison Of Complexity of Discrete Logs, Knapsack, and Large
Primes ([EMAIL PROTECTED])
Re: NIST Rng Test Software (Unix) ("Henrick Hellström")
Re: Algorithm take 3 - LONG (was : Re: RSA's new Factoring Challenges: $200,000
prize. (my be repeat)) ("The Scarlet Manuka")
hello? ("Tom St Denis")
Re: hello? (S Degen)
Re: hello? ("Tom St Denis")
Re: Help with Comparison Of Complexity of Discrete Logs, Knapsack, (Mok-Kong
Shen)
Re: Alice and Bob Speak MooJoo ("Robert J. Kolker")
Re: Looking for Mitsuru Matsui paper (Pascal Junod)
Re: Looking for Mitsuru Matsui paper ("Tom St Denis")
----------------------------------------------------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Help with Comparison Of Complexity of Discrete Logs, Knapsack,
Date: Fri, 15 Jun 2001 09:10:32 +0200
[EMAIL PROTECTED] wrote:
>
> Mok-Kong Shen <[EMAIL PROTECTED]> writes:
> > I wrote:
> >> Mok-Kong Shen <[EMAIL PROTECTED]> writes:
> >>>
> >>> As I said, a logical model is wrong, if it is not consistent. The
> >>> stuff done by the two authors is not wrong in the mathematical sense...
> >>
> >> But a book is wrong, if it fails to accomplish its goal. R&W wanted a
> >> complete mathematical theory--but such a thing is provably impossible.
> >
> > Well, take an example. FLT has been finally proved. Before
> > that many books on FLT, giving some interesting (correct)
> > results, have been published, e.g. one by Ribenboim, though
> > none of these contain a proof of FLT (excepting 'partial
> > proofs'). Do you simply call all these books 'wrong'?
>
> They were working on a problem which was solvable--or at least, not known
> to be unsolvable. Of course they weren't ``wrong''.
>
> On the other hand, if somebody decided to devote 500 pages to a theory
> intended to culminate in a proof of the continuum hypothesis, then it's
> fair to say that the entire project is wrong. Even if interesting and
> publishable results are proven along the way.
I suppose you are questioning the intelligence quotient
of Whitehead and Russell. I can't argue against that,
since I have no knowledge of it. Note, however, that you
are looking from today's viewpoint, where mathematics has
advanced far beyond the point at which the book was
written. In hindsight one is always much, much cleverer,
and not only in math. I am not at all sure that you
yourself wouldn't have fallen into the same trap, had you
lived in the time of these authors.
>
> >>> Every proof in the book must be correct (even though I haven't touched
> >>> that book), since it apparently is recognized literature.
> >>
> >> Are you tetched? Recognized literature is generally riddled with
> >> errors. One should assume that R&W contains many errors...
> >
> > ...I meant that what the two authors had done could not be called wrong
> > simply because they were unable to achieve the goal that they had set
> > for themselves.
>
> Don't you mean ``Because nobody, from now till hell freezes over, will
> *ever* achieve the goal, because it is impossible?''
See the above. Once you know better, the situation changes.
When you know something is impossible, you don't try.
But what if you don't YET know? Do you have the gift
of clairvoyance? Goedel destroyed Hilbert's dream, but
that is by no means a reason to consider Hilbert an idiot.
Science always advances by a sort of trial and error,
doesn't it?
>
> Their whole program was wrong. That doesn't make them idiots, bad
> fathers, or rotten human beings. It just means that their whole
> program was wrong. Get a grip.
In my response to Gwyn, I said that their goal (programme)
was wrong, but that the material is correct mathematically,
there being, apart from possibly some small errors, no
faults in the mathematical sense (i.e. no invalid
deductions). From the beginning of this discussion I have
stressed that the contents of the book are mathematically
correct, no more and no less, and I have repeatedly
said that their goal was wrong and hence not achieved.
But Gwyn seemed to continually ignore that.
M. K. Shen
=========================
http://home.t-online.de/home/mok-kong.shen
------------------------------
From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: Best, Strongest Algorithm (gone from any reasonable topic) - VERY
Reply-To: [EMAIL PROTECTED]
Date: Fri, 15 Jun 2001 07:36:52 GMT
Mok-Kong Shen <[EMAIL PROTECTED]> wrote:
: Yes, it is always o.k. to make definitions. I recall in
: this connection a famous sentence from Lewis Carroll,
: though I can no longer reproduce the exact wording from
: memory. It's something like: 'When I use a word, it means
: just what it means, no more nor less'.
Humpty Dumpty, from Lewis Carroll's "Through the Looking-Glass":
``When I use a word,'' Humpty Dumpty said, ``it means just what I choose
it to mean -- neither more nor less.''
--
__________
|im |yler [EMAIL PROTECTED] Home page: http://alife.co.uk/tim/
------------------------------
From: [EMAIL PROTECTED] (Brice)
Subject: NIST Rng Test Software (Unix)
Date: 15 Jun 2001 01:18:37 -0700
Hi all,
I have now given up on compiling the NIST RNG test software on a PC running
Windows and I have reverted to a Unix machine.
I have managed to compile the code without any problems, but I am not getting
the same results as those given by NIST when I run the test software on the
test samples.
Does anyone know of any bugs in the code? Has anyone got an executable for
Unix that they could maybe send me?
Thank you in advance for your help.
Brice.
------------------------------
From: [EMAIL PROTECTED] (Roger Fleming)
Subject: Re: Alice and Bob Speak MooJoo
Date: Fri, 15 Jun 2001 08:16:39 GMT
Just a couple of general comments on this interesting thread:
1. The idea that a person studying an unknown language in total isolation
cannot glean any useful information seems plausible, but in practice it is
useless, because even the tiny clue that the speakers are probably human is
often enough to get started. With the possible exception of civilisations from
other stars, it is difficult to imagine a greater degree of isolation than
occurred, for example, in "deciphering" Linear B. Essentially all Ventris knew
about the documents he examined was that they originated from an ancient
Mediterranean culture. Topic, context, culture, language, etc. were all
unknown. He didn't even have the advantage of a two-way conversation.
Nevertheless he was able to "decipher" it, and even greater feats of
paleolinguistics now occur relatively frequently.
2. The URL we were given to the site about Navajo code talkers was very
interesting, and dispelled a number of misconceptions I (and many others) held
about this system. However,
a. The claim that Navajo doesn't have a written form is completely untrue.
b. The system as described is simply a nomenclator (a type of hybrid
substitution cipher/code invented in the 14th century), and at that, a
nomenclator with a relatively low number of equivalents, a permanently
fixed key, and a high usage volume. Such a system is only slightly more secure
than simple substitution, and a knowledge of Navajo is not required to
attack it. Furthermore, although faster than using a cipher, it is obviously a
lot slower than simply speaking in Navajo. I can't see any good reason for
doing it this way instead of just speaking Navajo. And I have become skeptical
that the Japanese "never managed to break it", as every web page claims. Do we
know for sure that they even attempted to do so, or did they regard it as a
short-term tactical cipher that would not productively reward cryptanalysis?
Or did the "security by obscurity" of doing it in Navajo fool them about its
weakness? Do we even know for sure that they never read a single message?
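A nomenclator of the kind described above is easy to sketch: each plaintext
letter gets a small pool of spoken equivalents, plus a short codebook of
whole-word entries. The vocabulary below is invented for illustration (it is
NOT the actual code-talker lexicon); the point is only to show why a low
number of equivalents and a permanently fixed key leave such a system close
to simple substitution.

```python
import random

# Hypothetical nomenclator table: per-letter homophone pools plus a small
# codebook of whole-word entries. All words here are made up for illustration.
HOMOPHONES = {
    "a": ["able", "ant"],
    "c": ["charlie", "cup"],
    "t": ["tare", "tooth"],
    # ... a full table has a pool for every letter
}
CODEBOOK = {"attack": "red-wind"}  # whole-word entries bypass spelling

def encode(words, rng=random.Random(0)):
    out = []
    for word in words:
        if word in CODEBOOK:
            out.append(CODEBOOK[word])      # prefer a codebook entry
        else:
            # otherwise spell the word letter by letter with homophones
            out.extend(rng.choice(HOMOPHONES[ch]) for ch in word)
    return out

print(encode(["attack"]))  # -> ['red-wind']
print(encode(["cat"]))     # three spoken units, one per letter
```

With only one or two equivalents per letter, frequency analysis on the spoken
units recovers the letter table almost as easily as for a simple substitution
cipher, which is the weakness noted above.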
------------------------------
From: [EMAIL PROTECTED] (Roger Fleming)
Subject: Re: Alice and Bob Speak MooJoo
Date: Fri, 15 Jun 2001 08:27:12 GMT
"Boyd Roberts" <[EMAIL PROTECTED]> wrote:
>"David A Molnar" <[EMAIL PROTECTED]> a écrit dans le message news:
> 9g7bqh$9u2$[EMAIL PROTECTED]
>> ... then *why* would she *want* to listen to Alice
>> and Bob? and if she *can*, doesn't that destroy the claimed
>> "security-through-inaccessible-referent" ?
>
>traffic analysis would be one reason.
I think you've misunderstood his point, Boyd. If Alice and Bob's conversations
have absolutely no effect on Eve, then Eve couldn't care less about them, so
she doesn't care about traffic analysis any more than she cares about the
message content. If they do have an effect on her, then she can use this
information to start learning the secret language.
>then there's idiolects. this is a style of speech that is
>shared between some group of people who share some common
>experience. even though you understand all the words it
>may be total gibberish to you. certain words may also
>convey a multitude of meanings and/or information.
>
>obviously it wouldn't generalise to encryption of arbitrary
>messages, but you could use it to convey, in the clear, short
>messages that only the sender and the recipient will understand.
>
>it's sort of a codebook system with some context thrown in.
>come to think of it, without the context a speaker of the
>idiolect may not understand the message. the codebook is
>the shared experience(s) and it's unwritten.
This is generally known as "veiled speech", and it is a lot weaker than most
people realise. If you think about it, a code (i.e. substituting random symbols
for each word, phrase, or concept) is the ultimate extension of veiled speech,
and yet codes are still often broken.
------------------------------
From: "Michael Brown" <[EMAIL PROTECTED]>
Crossposted-To: sci.math
Subject: Re: Algorithm take 3 - LONG (was : Re: RSA's new Factoring Challenges:
$200,000 prize. (my be repeat))
Date: Fri, 15 Jun 2001 22:34:38 +1200
"Joseph Ashwood" <[EMAIL PROTECTED]> wrote in message
news:#SgTofQ9AHA.302@cpmsnbbsa07...
> that has to be one of the longest NNTP delays I've seen, 5 days.
>
> It's getting much better, but it's still a program. You still use dword,
> instead of the rather more useful integer.
Good point. Especially as I was about to start talking about placing negative
numbers into these ...
> You forcibly compress 4 entities
> together into a byte, when that is an implementation detail (you should
> consider the detail when computing the RAM but not when explaining the
> algorithm).
*nod*
> And your arbitrary addition of variables (without types) is
> still around.
Some of them are defined up in the bit before the "<START OF ALGORITHM>" bit.
> I would like to see a move away from a pseudo programming
> language to something more expressive, but as long as it's complete and
> understandable it will work. So would it be possible to move to a higher
> level? I know last time I asked you to move to a lower level, it's hard to
> find the right balance sometimes.
I'll try :)
> I will assume that there is some portion of algorithm here that will do
> things like define the BoxStack, and StackLen.
> Just for accounting I'll list the variables that are not properly defined up
> here
> BoxStack
> StackLen
> Boxes
That's done in the masses of text above, which will eventually be turned into an
algorithm and merged with this one.
> > NextBox : dword;
> Just make this an integer.
Yep. Don't know why the heck I made it a dword???
> > OldVal : byte;
> > DestPart : byte;
> > DestMask : byte;
>
> I'd suggest defining a type Box that has 4 subvariables, labelled A, B, O and
> C. Each of these variables takes a value from the set {0, 1, unknown,
> invalid}. Your implementation will likely do it as you wrote for optimization,
> but for algorithm expression it is unnecessary. It won't match what most
> people call math, but it will be useful. To do this you might consider
> splitting things into a less monolithic structure. For example, find a
> reasonable name for a portion of the algorithm, something like
> StackInitialize(...), then give a semi-English explanation of what it does
> (something like what I've called gibberish would be suitable), then later
> give a more rigorous definition.
*nod*
> I assume dec(...) is decrement
Correct. AKA "... ++" in C.
> > redim(BoxStack, StackLen);
> I assume redim(A,B) changes the dimensions of A to correspond with B
It sets A to contain B elements, if that's what you mean. I think that's getting
towards the implementation area again, though.
> > OldVal := Boxes[NextBox].Values;
> > Boxes[NextBox].Values := Lookup[Boxes[NextBox].Values];
> >
> > // Has the link connected to input A changed?
> > if ((Boxes[NextBox].Values xor OldVal) and $03) > 0 then
>
> I assume "xor" is the bitwise logical eXclusive-OR, and "and" is the
> bitewise logical And.
*nod*
> I assume inc(...) increments, and "not" is the bitwise logical inverse. I
> don't know what shl is?
Yes and yes. shl is "SHift Left" - I think this is more commonly represented by
"<<".
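For anyone following along who is unfamiliar with the Pascal-style operators
in the snippets above, here is a small sketch (Python, with the Pascal and C
spellings in comments) of the change-detection idiom under discussion: xor the
new value against the old one and mask with $03 to see whether the two low
bits (the link to input A) changed. The function names are mine, just for
illustration.

```python
# Pascal:  if ((Boxes[NextBox].Values xor OldVal) and $03) > 0 then ...
# C:       if (((values ^ old_val) & 0x03) > 0) ...
def input_a_changed(old_val, new_val):
    """True if the low two bits (the link to input A) differ."""
    return ((new_val ^ old_val) & 0x03) > 0

# shl / "<<" shifts a two-bit mask into position for the n-th input;
# inc/dec are the familiar increment/decrement.
def mask_for_input(n):
    """Two-bit mask covering input number n (n = 0 selects bits 0-1)."""
    return 0x03 << (2 * n)

print(input_a_changed(0b0101, 0b0110))  # low bits 01 -> 10, so True
print(mask_for_input(1))                # 12 (i.e. 0x0C)
```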
> There's a reason we generally don't take things seriously until they are
> presented in a clear form.
<SNIPPed, but read and acknowledged>
Unfortunately, I think it might be the same case for this one. It breaks trying
to do 23*29 :(
I think I know why, but I don't know how to fix it (apart from pushing it into
non-linear boolean equations, which doesn't help).
Thanks for the reply,
Michael
------------------------------
Subject: Re: Help with Comparison Of Complexity of Discrete Logs, Knapsack, and
Large Primes
From: [EMAIL PROTECTED]
Date: 15 Jun 2001 06:50:01 -0400
Mok-Kong Shen <[EMAIL PROTECTED]> writes:
> [EMAIL PROTECTED] wrote:
>>
>> Their whole program was wrong. That doesn't make them idiots, bad
>> fathers, or rotten human beings. It just means that their whole
>> program was wrong. Get a grip.
>
> In my response to Gwyn, I said that their goal (programme)
> was wrong, but the stuff is correct mathematically, there
> being, excepting some eventually present small errors, no
> faults in the mathematical sense...
Mok, they were proving that 1+1=2. Do you hear me? THEY WERE PROVING THAT
1+1=2!
Now AS LONG AS we believed they were on the trail of something useful, we
were willing to read a 200-page proof that 1+1=2. As soon as we realized
that this was a dead end, we suddenly decided that it would be a colossal
waste of time to read such a thing.
YES, Mok, Yes! Yes! Yes, they were probably correct in their reasoning. But
nobody is interested in their work anymore, because now that we know they
aren't accomplishing some lofty goal, we have no more interest in seeing it
proven that 1+1=2. Today, R&W are the butt of jokes, because we know they
were wasting their time--and reading the book would be a waste of our time.
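For contrast: once the natural numbers and addition are defined (as in Peano
arithmetic), the statement itself is tiny. In a modern proof assistant such
as Lean, it reduces to a definitional computation:

```lean
-- 1 + 1 = 2 by unfolding the definition of + on Nat:
-- 1 + 1 = Nat.succ (1 + 0) = Nat.succ 1 = 2
example : 1 + 1 = 2 := rfl
```

R&W's difficulty was never the fact itself, but building from pure logic the
machinery in which such facts can be stated at all.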
Len.
--
Don't blame me for the BIND company's decision to stick to obsolete
replication protocols. The rsync+openssh combination is not a secret.
-- Dan Bernstein
------------------------------
Reply-To: "Henrick Hellström" <[EMAIL PROTECTED]>
From: "Henrick Hellström" <[EMAIL PROTECTED]>
Subject: Re: NIST Rng Test Software (Unix)
Date: Fri, 15 Jun 2001 10:56:59 GMT
If I remember the C code correctly, you have to set a number of compiler
directives that determine the floating-point and integer format of the system.
I presume that this is the most likely cause of problems when you compile
the code.
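The kind of mismatch described above is easy to demonstrate: the same bytes
decode to different values depending on the byte order and number format the
build assumes. A quick sketch (Python's struct module standing in for the
C-level formats; this does not reproduce the NIST code's actual directives,
it only illustrates why they matter):

```python
import struct

# The same 4 bytes, interpreted under two different integer byte orders.
raw = bytes([0x01, 0x00, 0x00, 0x00])

little = struct.unpack("<I", raw)[0]  # little-endian unsigned 32-bit
big    = struct.unpack(">I", raw)[0]  # big-endian unsigned 32-bit

print(little)  # 1
print(big)     # 16777216

# A float-format mismatch is just as destructive: the bit pattern of 1.0
# read under the wrong byte order is not 1.0 at all.
one_le = struct.pack("<d", 1.0)
wrong  = struct.unpack(">d", one_le)[0]
print(wrong == 1.0)  # False
```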
--
Henrick Hellström [EMAIL PROTECTED]
StreamSec HB http://www.streamsec.com
"Brice" <[EMAIL PROTECTED]> skrev i meddelandet
news:[EMAIL PROTECTED]...
> Hi all,
>
> I have now given up on compiling the NIST RNG test software on a PC running
> Windows and I have reverted to a Unix machine.
>
> I have managed to compile the code without any problems, but I am not getting
> the same results as those given by NIST when I run the test software on the
> test samples.
>
> Does anyone know of any bugs in the code? Has anyone got an executable for
> Unix that they could maybe send me?
>
> Thank you in advance for your help.
>
> Brice.
------------------------------
From: "The Scarlet Manuka" <[EMAIL PROTECTED]>
Crossposted-To: sci.math
Subject: Re: Algorithm take 3 - LONG (was : Re: RSA's new Factoring Challenges:
$200,000 prize. (my be repeat))
Date: Fri, 15 Jun 2001 18:59:24 +0800
"Michael Brown" <[EMAIL PROTECTED]> wrote in message
news:5plW6.533$[EMAIL PROTECTED]...
> "Joseph Ashwood" <[EMAIL PROTECTED]> wrote in message
> news:#SgTofQ9AHA.302@cpmsnbbsa07...
> > I assume dec(...) is decrement
> Correct. AKA "... ++" in C.
For values of "++" equal to "--", anyway.
--
The Scarlet Manuka
------------------------------
From: "Tom St Denis" <[EMAIL PROTECTED]>
Subject: hello?
Date: Fri, 15 Jun 2001 11:58:39 GMT
What gives? Nobody answers any real sci.crypt questions here anymore!
Or am I just globally killfiled?
[btw reply in email since I don't want the trolls to use this as another
flame-thread]
--
Tom St Denis
---
http://tomstdenis.home.dhs.org
------------------------------
From: S Degen <[EMAIL PROTECTED]>
Subject: Re: hello?
Date: Fri, 15 Jun 2001 14:02:41 +0200
Tom St Denis wrote:
>
> What gives? Nobody answers any real sci.crypt questions here anymore!
>
> Or am I just globally killfiled?
yes.
>
> [btw reply in email since I don't want the trolls to use this as another
> flame-thread]
> --
> Tom St Denis
> ---
> http://tomstdenis.home.dhs.org
------------------------------
From: "Tom St Denis" <[EMAIL PROTECTED]>
Subject: Re: hello?
Date: Fri, 15 Jun 2001 12:15:54 GMT
"S Degen" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
>
>
> Tom St Denis wrote:
> >
> > What gives? Nobody answers any real sci.crypt questions here anymore!
> >
> > Or am I just globally killfiled?
>
> yes.
>
Now I must be new to killfiles. If I am globally killfiled how did you get
this?
Tom
------------------------------
From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Help with Comparison Of Complexity of Discrete Logs, Knapsack,
Date: Fri, 15 Jun 2001 14:16:23 +0200
[EMAIL PROTECTED] writes:
>
> Mok-Kong Shen <[EMAIL PROTECTED]> writes:
> > [EMAIL PROTECTED] wrote:
> >>
> >> Their whole program was wrong. That doesn't make them idiots, bad
> >> fathers, or rotten human beings. It just means that their whole
> >> program was wrong. Get a grip.
> >
> > In my response to Gwyn, I said that their goal (programme)
> > was wrong, but the stuff is correct mathematically, there
> > being, excepting some eventually present small errors, no
> > faults in the mathematical sense...
>
> Mok, they were proving that 1+1=2. Do you hear me? THEY WERE PROVING THAT
> 1+1=2!
>
> Now AS LONG AS we believed they were on the trail of something useful, we
> were willing to read a 200-page proof that 1+1=2. As soon as we realized
> that this was a dead end, we suddenly decided that it would be a colossal
> waste of time to read such a thing.
>
> YES, Mok, Yes! Yes! Yes, they were probably correct in their reasoning. But
> nobody is interested in their work anymore, because now that we know they
> aren't accomplishing some lofty goal, we have no more interest in seeing it
> proven that 1+1=2. Today, R&W are the butt of jokes, because we know they
> were wasting their time--and reading the book would be a waste of our time.
I am not a mathematician, let alone a logician. But from
what I know, it seems to be true that one learned that
the route taken by the two authors is a dead end only
(or mainly) 'through' the very knowledge of their failure.
(In many sciences, one learns of certain impossibilities
only after actually doing the experiments. I suppose that
this is also the case here.) If you as a mathematician
happen to know the opposite, I should be very grateful
to learn the truth of this issue of mathematical history
(please provide references). Now, assuming that the above
is indeed true, is there any reason to reproach them for
having, through bad luck (or perhaps indeed due to a
lower IQ), taken the wrong route? It is certainly true
that they wasted their time in some sense. I guess the
authors were very well aware, after finishing their book,
that they had not gained as much as the same investment
of time would have brought them in more favourable
circumstances. But that is a risk that ANY researching
scientist has to take. There is no 'royal road' to
success, in science or elsewhere, is there?
Today, knowing the history of their failure and also the
reasons for it (reasons that in your view perhaps could
have been discerned by the two authors before undertaking
their project, were it not for their un-intelligence or
whatever), you, as a mathematician of a later generation,
have a free choice of either ignoring their book entirely
or reading it for some purpose. Anyway, quite some time
ago I happened to read in the literature that, although
their work failed, it had an essential (positive)
influence on the advancement of later research undertaken
by others in that and related fields. I don't remember
the exact sentences, so I could be wrong, but it was
something to the effect that the book set forth a good
style of rigour in attempting an axiomatization of
arithmetic, which stimulated later similar works in
logic. (There may have been a few other points, but these
have escaped my memory.) In any case, the paragraph I
read definitely expressed a sort of admiration for the
work they had done, though their failure was certainly
stressed there as well.
BTW, as a mathematician you must know better how best to
learn the foundations of arithmetic nowadays. In a math
course for non-mathematicians that I took a very long
time ago, the topic was dealt with very quickly via the
Peano axioms. But I remember that my fellow students who
majored in math studied a book by Landau on the topic,
and that book was not extremely thin (though certainly
not at all comparable in extent to that of Whitehead and
Russell) and had, to me, an appearance of some difficulty
(at least as far as my impression of that time goes). So,
pending more knowledge of experts' views, I am not yet
very sure that the two authors greatly exaggerated the
treatment of their subject, in the sense of writing in
unnecessarily long-winded and expensive (in the readers'
energy) ways.
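For reference, the Peano axioms mentioned above can be stated very compactly
(with S the successor function); this is the whole foundation such a course
compresses into one lecture:

```latex
\begin{align*}
&\text{(P1)}\quad 0 \in \mathbb{N} \\
&\text{(P2)}\quad n \in \mathbb{N} \implies S(n) \in \mathbb{N} \\
&\text{(P3)}\quad S(n) = S(m) \implies n = m \\
&\text{(P4)}\quad S(n) \neq 0 \quad \text{for all } n \in \mathbb{N} \\
&\text{(P5)}\quad \bigl(0 \in A \wedge \forall n\,(n \in A \implies S(n) \in A)\bigr)
  \implies \mathbb{N} \subseteq A
\end{align*}
```

The contrast between these five lines and the hundreds of pages of Principia
Mathematica is exactly the difference of approach being debated here.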
Another thing that I happen to know, and that may be
interesting in this connection: not long ago at a certain
university a math prof spent one 'entire' semester
explaining the foundations of arithmetic. Because there
was consequently less time to treat other material in the
succeeding semesters, the students under that prof were
found to be at a disadvantage in working through the
normal types of math exam papers (where the foundations
of arithmetic are rarely an issue) compared to fellow
students under other profs, who treated the topic rather
tersely and quickly. So it seems correct to say that even
among 'current' mathematics professors, opinions can
differ essentially as to in what detail an upcoming
mathematician should learn the foundations of arithmetic.
M. K. Shen
=============================
http://home.t-online.de/home/mok-kong.shen
------------------------------
From: "Robert J. Kolker" <[EMAIL PROTECTED]>
Subject: Re: Alice and Bob Speak MooJoo
Date: Fri, 15 Jun 2001 08:45:45 -0400
Mok-Kong Shen wrote:
> up basic concepts of math. The language is call LINCOS, if
> I don't err.
Used by 3 or 4 people worldwide. Up there with INTERLAN,
a failed universal language invented in the '50s. I suspect more
people speak T'lingon (Klingonasse) than LINCOS.
Q'plaH.
Bob Kolker
------------------------------
Date: Fri, 15 Jun 2001 14:53:02 +0200
From: Pascal Junod <[EMAIL PROTECTED]>
Subject: Re: Looking for Mitsuru Matsui paper
On Wed, 13 Jun 2001, Tom St Denis wrote:
> A while back Matsui did a paper on a new feistel design and had some
> analysis of cubic and inverse functions in GF(2^x).
> Does anyone know where I can find that paper? Matsui doesn't have a
> website!
Do you mean "New Structure of Block Ciphers with Provable Security against
Differential and Linear Cryptanalysis"? This paper (FSE) was published in
LNCS 1039 in 1996 and I'm afraid there is no electronic copy on the web. You
have to write him an email or go to a university library...
A+
Pascal
--
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* Pascal Junod, [EMAIL PROTECTED] *
* Security and Cryptography Laboratory (LASEC) *
* INF 240, EPFL, CH-1015 Lausanne, Switzerland ++41 (0)21 693 76 17 *
* Place de la Gare 12, CH-1020 Renens ++41 (0)79 617 28 57 *
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
------------------------------
From: "Tom St Denis" <[EMAIL PROTECTED]>
Subject: Re: Looking for Mitsuru Matsui paper
Date: Fri, 15 Jun 2001 12:52:21 GMT
"Pascal Junod" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]...
> On Wed, 13 Jun 2001, Tom St Denis wrote:
>
> > A while back Matsui did a paper on a new feistel design and had some
> > analysis of cubic and inverse functions in GF(2^x).
> > Does anyone know where I can find that paper? Matsui doesn't have a
> > website!
>
> Do you mean "New Structure of Block Ciphers with Provable Security against
> Differential and Linear Cryptanalysis"? This paper (FSE) was published in
> LNCS 1039 in 1996 and I'm afraid there is no electronic copy on the web. You
> have to write him an email or go to a university library...
You say this just after another user posted a URL to an online copy...
Hehehe
Tom
------------------------------
** FOR YOUR REFERENCE **
The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:
Internet: [EMAIL PROTECTED]
You can send mail to the entire list by posting to sci.crypt.
End of Cryptography-Digest Digest
******************************