Cryptography-Digest Digest #431, Volume #9       Tue, 20 Apr 99 21:13:04 EDT

Contents:
  Re: Question on confidence derived from cryptanalysis. (Terry Ritter)
  Re: Charles Booher is a complete IDIOT! (Paul Koning)
  Re: BEST ADAPTIVE HUFFMAN COMPRESSION FOR CRYPTO (Andrew Carol)
  Export restrictions (JCA)
  Re: Export restrictions ("Steven Alexander")
  Re: On Being Earnest (Mr. Alen Yoki)
  Re: Question on confidence derived from cryptanalysis. (Geoff Thorpe)
  Re: On Being Earnest ("Steven Alexander")
  Re: New drop in cipher in the spirit of TEA (David Wagner)
  Block Cipher Question ([EMAIL PROTECTED])

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (Terry Ritter)
Subject: Re: Question on confidence derived from cryptanalysis.
Date: Tue, 20 Apr 1999 22:03:05 GMT


On Tue, 20 Apr 1999 17:36:26 GMT, in
<[EMAIL PROTECTED]>, in sci.crypt
[EMAIL PROTECTED] (John Savard) wrote:

>[...]
>Although lately, once again, I've made a number of posts criticizing places
>where I think you've overstated your case 

Well, I guess in that case I need to go back and directly address
those issues.  They frankly seemed less compelling than some others at
the time, and there is only so much time.  In fact, I'm going to have
to finish this up soon and get back to work.  


>- and I think it's very important
>_not_ to overstate one's case when one is advocating a minority position -

I think it should be very disturbing to anyone actually trying to do
Science to have to consider whether or not the conclusions they come
to are a "minority position."  It really does not matter how people
vote on the facts:  The facts are what they are.  I do not even think
about whether my "positions" are minority or majority, and I do not
care.  

I don't suppose there ever has been or ever will be anything I write
that I will not look back on and say "Gee, I could have put that
better."  But I see no overstatement.  If you do, you should state
clearly what you consider the limits of the correct position, and
highlight the excess which you consider beyond reality.  

What I see in my comments is an attempt to correct certain irrational
conclusions about cryptanalysis and strength which may have very
significant negative consequences for society.  This should be pretty
much a straight logic argument with little opinion involved.  The
issue reappears periodically, but has been promoted recently in
various writings by Schneier (in particular, the article in the March
IEEE Computer column).  


>I will take the opportunity to acknowledge both that you have made
>contributions through your own work, as well as by representing a point of
>view that points in the direction of what I, also, feel is a correction
>needed by the cryptographic community.

One is tempted to ask why -- if you think this correction is needed --
you are not also trying to make it happen.  Will you step into the
breach as I retire?  If you are not part of the solution....


>One needs the very highest credibility when one is engaged in telling
>people what they do not want to hear.

On the contrary, all one needs to do is to show the logic:  It is
compelling.  

If people want to wait for a crypto god to find a way to gracefully
change his point of view, fine, but that is rumor and superstition,
not Science.  To really know what is going on you have to be able to
draw your own conclusions, and to believe your own results.  It is not
my goal to provide a different package of rumor and superstition which
happens to be correct.  I am no crypto god, and I don't want to be
one.  This is not about me.  

---
Terry Ritter   [EMAIL PROTECTED]   http://www.io.com/~ritter/
Crypto Glossary   http://www.io.com/~ritter/GLOSSARY.HTM


------------------------------

From: Paul Koning <[EMAIL PROTECTED]>
Subject: Re: Charles Booher is a complete IDIOT!
Date: Tue, 20 Apr 1999 14:18:58 -0400

Does anyone here know how to set up a killfile for the 
Netscape news reader?

        paul

------------------------------

From: Andrew Carol <[EMAIL PROTECTED]>
Subject: Re: BEST ADAPTIVE HUFFMAN COMPRESSION FOR CRYPTO
Date: Tue, 20 Apr 1999 16:07:53 -0700

In article <7fit8t$hpu$[EMAIL PROTECTED]>, SCOTT19U.ZIP_GUY
<[EMAIL PROTECTED]> wrote:

> That means for every finite file there is a one
> to one mapping from the compressed file to the uncompressed
> file. One misconception that many people have is that when
> one compresses, it does not always result in a smaller file.
>  One chooses a method that usually results in a smaller file.
> The method I choose to use initially was one based on
> Huffman adaptive compression.

If I understand you to be saying that your headerless compression
_always_ results in a unique smaller file, then you are very wrong.
That is impossible, and it is easy to show why.

Since there are fewer smaller files than larger files, there can't be a
one-to-one match of all larger files to smaller files.

In fact, Huffman coding has a pathological case where each symbol
frequency increases by powers of two.  The tree will degenerate into a
list, which will result in much larger files.

If you use adaptive huffman to compress files, it's easy to create a
file which will "compress" to a file larger than the source.

If you have to output a flag to indicate this has happened and that you
will fall back onto another form of compression, then you have a
header.
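
To make the counting argument concrete, here is a minimal sketch in
Python (my own illustration, not anyone's posted code): for any length n
there are 2^n inputs of exactly n bits, but only 2^n - 1 possible outputs
shorter than n bits, so no lossless, headerless scheme can map every
input to a strictly shorter output.

# Pigeonhole argument: count n-bit inputs vs. all shorter possible outputs.
def count_strings_shorter_than(n):
    # strings of length 0..n-1 bits: 2^0 + 2^1 + ... + 2^(n-1) = 2^n - 1
    return sum(2**k for k in range(n))

for n in (1, 8, 16):
    exactly_n = 2**n
    shorter = count_strings_shorter_than(n)
    print("n=%2d: %d inputs, only %d possible shorter outputs"
          % (n, exactly_n, shorter))
    assert shorter < exactly_n   # at least one input cannot get smaller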

Oh well....

------------------------------

From: JCA <[EMAIL PROTECTED]>
Subject: Export restrictions
Date: Tue, 20 Apr 1999 23:35:09 GMT


    How do export restrictions work? I only hear about key lengths, but
I guess that algorithms must be taken into account as well; after all, it
would be dead easy to come up with a completely idiotic encryption
algorithm with, say, a 128-bit key. Would Uncle Sam frown if anyone
tried to export it? (Maybe nobody would want to buy it, but you never
know: suckers are a dime a dozen.)




------------------------------

From: "Steven Alexander" <[EMAIL PROTECTED]>
Subject: Re: Export restrictions
Date: Tue, 20 Apr 1999 16:33:14 -0700

I assume that it depends.  Who reviews the algorithms?  If it is done by
people who are knowledgeable in cryptography, they may look at the
algorithm and factor it into their decision.  For instance, they might let
you export the XOR encryption program you made even if it does have a
256-bit key.  However, if the person reviewing it knows nothing about crypto,
they might just see 256 bits and pull out a big red stamp labeled "declined".
I'm quite interested to know.  Any input from anyone who has applied for export
approval for their algorithm?

-steven

>    How do export restrictions work? I only hear about key lengths, but
>I guess that algorithms must be taken into account as well; after all, it
>would be dead easy to come up with a completely idiotic encryption
>algorithm with, say, a 128-bit key. Would Uncle Sam frown if anyone
>tried to export it? (Maybe nobody would want to buy it, but you never
>know: suckers are a dime a dozen.)




------------------------------

From: [EMAIL PROTECTED] (Mr. Alen Yoki)
Subject: Re: On Being Earnest
Date: Tue, 20 Apr 1999 23:33:15 GMT

Everything Bruce Schneier said in that article makes perfect sense to me.
Cryptographic algorithms are sort of like fine wines, in that they have to
age a long time before they become useful. Of course the article will be
literally "controversial" in that eager amateur cryptographers will be in
denial, but it's not "controversial" in the sports commentator's sense; the
ref made a good call here.
-- 
"Mr. Alen Yoki"     better known as [EMAIL PROTECTED]
 01  2345 6789      <- Use this key to decode my email address.
                    Fun & Free - http://www.5X5poker.com/

------------------------------

From: Geoff Thorpe <[EMAIL PROTECTED]>
Subject: Re: Question on confidence derived from cryptanalysis.
Date: Tue, 20 Apr 1999 20:35:53 -0400

Hi,

Terry Ritter wrote:
> 
> On Tue, 20 Apr 1999 00:28:14 -0400, in <[EMAIL PROTECTED]>,
> in sci.crypt Geoff Thorpe <[EMAIL PROTECTED]> wrote:
> >
> >I disagree - and I disagree with every sentence moreover. I may not
> >design ciphers but I can definitely slug it out with most people
> >regarding probability theory, statistics, and logic.
> 
> You may be willing to "duke it out," as though this were some sort of
> winner-take-all contest, but if you believe your logic is compelling,
> you will have to think again.  Not only am I not compelled, I am
> appalled to see you repeating things over and over, in the apparent
> illusion that this has some relation to logic or scientific argument.

Other parts of your posts refer to your ideas and your technologies and
your experience, etc. I do not claim familiarity with your ideas, but
moreover I was attempting to say that I also do not claim to be a cipher
designer. I am a scientist, however, and was tiring of your attempts to
state what I saw as broad, arrogant, and overly pessimistic views as
fact, together with implications of naivety and ignorance on my (and
others'?) part. I also have no desire to "duke it out", "lock horns", or
any such thing - I just wanted to make sure you understood that not being
a cipher designer does not mean I'm going to lie down and take your
statements as authoritative when I genuinely disagree with some of your
fundamental points.

> >I also have to
> >assist with various API designs and have been on the (l)using end of
> >quite a few if we want to talk standards, picking algorithms, and
> >covering butts (oh yeah, I've done quite a bit of Risk Management
> >related stuff too).
> 
> What a guy you are I'm sure.  Let's get on with it:

yadayadayada. I have a vague idea now of some of your areas of expertise
as per your posts and the peripheral discussion. You seem to have no
tolerance for my views on the matter so I thought it appropriate to at
least let you know that I'm not some bunny out on a limb here. However,
I'm of the impression that my problem here is not that you won't
consider my opinion as worthy of some merit, so much as you won't
consider any other opinion than your own as worthy of merit. Mind you, I
recall that recently you categorically discarded the considered views of
Mr Schneier and others so I guess credentials are a waste of time anyway
- I should have thought of that.

> else's capabilities.  It is not my *opinion* that any cipher we have
> *might* possibly break -- that is fact.  I assume the worst case, and
> propose systems to provide strength even then.

Exactly, you assume the worst case. Whilst you certainly will never be
accused of lacking precaution, why should I accept that your position is
the only appropriate one to adopt? The world operates a lot more
pragmatically than you might be prepared to accept, and naturally we
roll the dice as a result - memories of the ice storm in Montreal and
the massive power outage in Auckland, New Zealand (particularly relevant
to me) flood back to me at this point. Individually, each failure is roundly
criticised and everyone pats themselves on the back as to why they
wouldn't have fallen into that particular trap.

I could get killed the very next time I go driving; in fact, I'm
increasingly of the opinion there are those who wouldn't be overly upset
about it. But I do not insist that I and others must push through
radical measures involving gondolas, pulleys, and the abolition of
personal automotive ownership.

Before I get accused of doing precisely what I don't want to do (lock
horns, duke it out, etc) ... let me just say that I really am warming to
an idea implicit in all of this - and I believe it is one of yours,
though it was Trevor I think who recently illustrated it quite well ...
namely the employment of a standard bank of ciphers that can be invoked
on demand in any number of possible configurations, e.g. strung together in
a different order every time, utilising the different modes of
operation, etc. I also agree the implementation and standardisation
headaches of this sort of scheme are not insurmountable - indeed every
standard SSL implementation I've seen lately seems to implement most of
the core ciphers/modes that could be used in such a scheme. I'm also
definitely not against the idea of extensibility of frameworks to
incorporate as-yet-unknown elements - indeed PKCS#7 probably didn't have
DSA, ElGamal etc in mind, but now they seem to be creeping into CMS and
that method seems to allow room to grow it again later. (If I've
confused this with something else, someone please correct me - I could
have my wires a little crossed right now and don't have any reference
handy).
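
As a rough illustration of the sort of scheme described above - a fixed
pool of primitives, with the per-session ordering derived from the key so
there is no single fixed target - here is a minimal sketch in Python. The
pool members are toy byte transforms standing in for real ciphers, and
every name and parameter is invented for the example; it is a sketch of
the idea under those assumptions, not anyone's actual proposal.

import hashlib
from itertools import permutations

def xor_stream(data, key):
    # Self-inverse toy "cipher": XOR with a keystream from repeated hashing.
    out, state, i = bytearray(), hashlib.sha1(key).digest(), 0
    for b in data:
        if i == len(state):
            state, i = hashlib.sha1(state + key).digest(), 0
        out.append(b ^ state[i])
        i += 1
    return bytes(out)

def rotl_bytes(data, key):
    r = (key[0] % 7) + 1       # rotate each byte left by 1..7 bits
    return bytes(((b << r) | (b >> (8 - r))) & 0xFF for b in data)

def rotr_bytes(data, key):
    r = (key[0] % 7) + 1       # inverse: rotate right by the same amount
    return bytes(((b >> r) | (b << (8 - r))) & 0xFF for b in data)

# The standard "bank": each entry is an (encrypt, decrypt) pair.
POOL = {
    "A": (xor_stream, xor_stream),
    "B": (rotl_bytes, rotr_bytes),
}

def ordering_from_key(key):
    # Derive one permutation of the pool from the session key.
    perms = sorted(permutations(sorted(POOL)))
    idx = int.from_bytes(hashlib.sha1(key).digest()[:4], "big") % len(perms)
    return perms[idx]

def encrypt(data, key):
    for name in ordering_from_key(key):
        data = POOL[name][0](data, key + name.encode())
    return data

def decrypt(data, key):
    for name in reversed(ordering_from_key(key)):
        data = POOL[name][1](data, key + name.encode())
    return data

msg, key = b"attack at dawn", b"session key"
assert decrypt(encrypt(msg, key), key) == msg

A real deployment would of course draw the pool from well-studied ciphers
and negotiate the ordering (and modes) as part of the protocol handshake,
but the shape of the scheme is the same.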

But it seems to me, especially with regard to non-realtime applications,
that to an extent, less is more ... sure, a few ciphers in your pool are
fine, especially if everyone has them. But the wholesale liberalisation of
cipher farming seems to create a very real problem - a kind of protocol
grid-lock. And frankly, I still place a lot of stock in what *I* rank as
ciphers of tested strength and wouldn't want any system of mine having
too many "new toy" ciphers creeping in. Perhaps we need to agree to
disagree.

> Your position, dare I state it, is that you *can* estimate the
> capabilities of your Opponents.  You also say you can estimate the
> future strength of a cipher from past tests.  But for all this
> claiming, we see no similar statements in the scientific literature.
> So these are simply your opinions, and I see no supporting facts.

Scientific literature? Ironic that it is precisely this quantity that
you appear to place very little value in with regard to ("tested")
cipher strength, and yet I am supposed to find some to support my view?
Anyway - I have already said that my view (that a cipher not falling
over despite some considerable efforts against it does merit some
"value") is not based on any exact science. I think history, and some
basic common sense warrant my conclusions. Your contrary opinion does
not appear to be any more scientifically founded - although it does
appear to be a little more "absolute" or "axiomatic" (and IMHO "not
terribly practically useful").

> >Now, statement (1) is wrong.
> 
> Which was:  "1) We cannot estimate the probability that an effective
> attack exists which we did not find."
> 
> Since you think this is wrong, you must believe we *can* make an
> estimate.  Fine.  Do it.  Show me.

The fact that I can drive in Quebec without getting killed for 3 months
suggests I can probably survive another few days. I don't know what my
chances would be in London - and maybe the insurance salesman doesn't
either. Fine, I'll go for a drive shortly and if I STILL don't get
killed (i.e. I post again to this group in the future) then that supports
my estimate of the probability. If you think I'm wrong, break triple-DES
and you show me. Otherwise - neither of us is correct in any pure sense
... but I'm still comfortable with my approach and if others are too
that's all that matters. Anyway, now that I think about it further - exactly
how can you possibly insist that "we cannot estimate a probability" ???
Sounds absurd. Particularly with something that has any historical
record at all?

As someone with a love of pure mathematics, it does feel a little
disturbing to be arguing a point with someone where it is *I* who am on
the fuzzy, pragmatic, approximation side of the fence and the other is
arguing puristically.

> Alas, what people believe is not science.

But what people believe influences what they will and will not do (and
will or will not put up with). And unless a scientist can *prove*
absolutes, they will have difficulties imposing absolutes. Perhaps a good
way to measure this is to ask an insurance-brokerage expert to comment
on the insurability (premiums etc.) of an information resource secured
using your approach versus something like the one I prefer. Not a single ounce
of "science" will enter into this equation (I suppose) and yet I can't
imagine a more adequate way to judge the situation - after all, it is
these kinds of people whose business it is to cover the costs of things when
they go wrong.

> >year than the average "expected life". It's a very basic and common
> >mathematical model/argument, and it's common sense.
> 
> Oddly, no such study has appeared in the literature.  That seems
> somewhat strange, since you say it is very basic common sense.
> Perhaps everyone else in cryptography has simply been blinded to this
> fundamental truth.  When will you write it up for us?

If I hire a programmer to work with a new technology and a deadline, and
my options (for the same money/conditions etc) are between someone who
has demonstrated he/she can handle new technologies (in the past of
course), and someone who *might* be able to handle new technologies, I'm
going to hire the one with experience. A new candidate might be faster,
hungrier, and actually better with the new technology - but why would I
take that chance versus the chance the experienced one ran out of puff?
True, until I try one I will not know which one was better, but I hope
you'll agree that estimations, probabilities, and common sense are all present
and can be utilised. I got the feeling that your view on this was almost
quantum mechanical - then I remembered that even QM admits probability
distributions and expected values (and the difference between a likely
result and an unlikely one even though each is possible until you find
out for sure).

But I digress perhaps, and we've already demonstrated we don't agree
here so ...

> You are arguing your opinion about cipher strength.  (Recall that I do
> not argue an *opinion* about cipher strength, but instead the *fact*
> that any cipher may be weak.)  If you have compelling factual
> evidence, I will support it.  Show me the correlation you say exists.
> Prove it.  Then you can use it.

I've already admitted that my "correlation" is a fuzzy one, full of
ideas that are "compelling" (to me) here, "suggestive" (to me) there,
etc - and that my conclusion is a fuzzy one. Perhaps then I've shown
compelling "fuzzy" evidence. [;-) Anyway, you are saying I cannot use
"tested strength" as a measure - and your sole reason seems to be -
"because it could still break tomorrow". Nobody disputes the latter
statement but it does not logically imply the blanket assertion you
make. Not proving things one way or the other does not mean we need to
default to your assertion, that all ciphers are equal when only existing
failures to break them are in evidence, and abandon my assertion, that
failing to break ciphers does provide useful information for
"estimations".

And in case you ask, no - I know of NO research paper to support this
and have no interest in attempting to create some when I'm already
satisfied.

> Nobody has any problem with you making a call for yourself and risking
> only yourself.  But if this "call" is intended to formulate what
> "should" happen for much of society, you may need to revise your
> estimate as to the consequences of failure.  Just how much disaster
> are you willing for us to have?

The *consequences* of failure are not what I'm estimating. And again,
I'll agree with the idea discussed before (utilising a defined set - for
interoperability this seems necessary - of ciphers, algorithms, etc.
that can be jumbled around on the fly to diffuse the impact "a break"
would have). It would be interesting, though off topic, to see how your
absolutist approach generalises to arms control, transportation
legislation, etc. - all areas where "pragmatic fuzzies" tend to preside
over "puristic absolutes", even when they're of the cautionary variety.

> Will it be OK for everyone to use the single standard cipher which you
> predict is strong, if you turn out to be wrong?  Will it be OK when

I've already moved a bit to your side on at least one point - one single
cipher (if they are implicitly atomic and cannot encompass the idea that
one can effectively put what would be 3 or 4 atomic ciphers into a
"cipher") would not be as comforting as a small (I still think "fixed",
or at least "slow moving") collection of ciphers jumbled up to disperse
the impact a break in any one configuration would have. I still think my
point applies to the selection of those ciphers though.

> communications grind to a halt and incompatible low-security temporary
> measures are instituted everywhere while a new cipher is integrated
> into all the programs which must be replaced throughout society?  Is
> that OK with you?

And quantum computers could break everything and that wouldn't be OK
with me either. But I'm not going to resort to carrier pigeons (which
could be broken by a large society of hunters ... oh god ... this is
getting too much).

> >Our Opponents are just well-paid versions of us, most of
> >whom probably grew up around us, and who find their occupations not too
> >unfathomably unethical to suffer day by day.
> 
> This is simply breathtaking:  It is appallingly, plainly, immediately
> false even to the most casual observer.  People do differ, we do have
> limitations others do not have, and others often do take advantage of
> knowledge to which we have no access.

You still don't get what I'm saying ... YES people do differ, but I
think continuously, not by quantum leaps that erase any relationship you
can draw.

> >Sure thing - but the whole system does not collapse down to a binary
> >system of "broken" and "not-broken-yet" ... as you say, you put together
> >a threat model ... consistent with your requirements and using a chosen
> >method for judging a component's "worth", and amplify it here and there
> >as appropriate. A lot like putting together a cost-proposal I guess ...
> >add in your known prices, choose an acceptable value for the "unknowns",
> >amplify the costs of all the "risky" bits, add x% profit on top - and
> >then bang another 30% on top for good measure, and generally covering
> >your butt some more.
> 
> Write whatever numbers you want: you cannot support them.

You can be as cautious as you like and you could still get busted - you
can be as irresponsible as you like and you COULD (not likely) get away
with it. You can also just give up. That same model applies every time I
write a proposal, an electricity company designs and insures an
infrastructure, and many other real world situations. Tell me why I HAVE
to resort to such a binary system of "broken" and "not-broken-yet". You
don't seem to be able to support your claim that the test of time (and
attack) does not provide a usable measure and you yourself have not
written any numbers to try. Don't tell me and many other people with an
interest that it's invalid to use such approaches, and then only support
your claim by statement - particularly if you intend to then insist I
support my own claims with numbers or proofs I'm supposed to pluck out
of thin air.

> >3 ciphers strung in a line is, to me, a cipher.
> 
> The distinction is that each cipher is an independent and separable
> element which can be "mixed and matched" with any other.  Each cipher
> is tested as an independent unit, and brings whatever strength it has
> independent of internal ciphering requirements.  Dynamic mixing and
> matching prevents having any fixed target to attack.

So should good cipher design as far as I can see but I'll go along with
you here. I see this idea as promising and will not argue with the
premise that if you've got 5 good ones, why just stick with one - indeed
why just stick with a fixed arrangement of that 5 (effectively making
one very complicated, but still fixed, cipher) when you can jumble the
order, modes of operation, etc each time. (The way in which that has
been done would presumably become part of the "key"). I'd still prefer
that we standardise on those 5 and maybe rotate new ones in
"occasionally" (conservatively) in a method not unlike the current AES
process - i.e. public exposure of candidates for a good hard thrash at
them before actually incorporating them into systems.
 
> >You need all three in
> >the same place and in the same order to have anything other than a
> >"noise generator". Breaking 3 ciphers should be no more difficult than
> >breaking one well designed one using 3 different stages
> 
> Really?  Tell me more about how ciphers are designed.  How long did
> you say you have been doing this?  How many ciphers have you designed?
> Have you measured them?  Where can we see your work?

Already told you I'm not a cipher designer. But there are cipher
designers who share my view so attack the idea, not the guy saying it. I
might also add - you're asking me to measure ciphers after having
insisted quite strongly that any such attempt is implicitly impossible
(with the absolute exception of breaking it).

Can I take apart a modern cipher and say "that's not a cipher - look,
it's lots of little ciphers"? All I said was the division for me between
3 ciphers strung in a line and one cipher with 3 stages to it seems to
be a question of packaging and patents. One could even stretch the
definition and include the possibility of reordering "stages" based on the
"key". But I'm not going to suck myself into a bits-on-the-wire cipher
designing discussion because I know I can't make a worthwhile
contribution to it.

> >regular basis. I see this as unacceptable in a real-world scenario for
> >reasons of interoperability & standardisation, as well as security.
> 
> What you mean is that *you* do not know how such a system would be
> standardized and made interoperable.  Then you imply this means
> *nobody* *else* could do it either.

Fair call. Let me try again then - I think there could well be some very
useful gains made from employing standards that use multiple primitives
that hopefully seem (barring quantum computers?) to be independent
targets of attack, and that, when used in different orders, modes, etc.,
reduce the chances of the whole system getting broken rather than putting
the house on one primitive, or one configuration of the primitives. I do
however think we should be measuring the primitives (which you suggest
is pointless) as best we can, and that we should use a conservative
approach to the standardisation on those primitives and the method by
which new ones are incorporated into the standards.

If your "boolean model" of cipher strength is valid - can't this entire
idea, when wrapped and considered as an entity in itself, then be
deemed just as "trustworthy" as a single cipher that hasn't been
broken? I would NOT regard them as equal but your argument, by
extension, does.

Cheers.
Geoff

------------------------------

From: "Steven Alexander" <[EMAIL PROTECTED]>
Subject: Re: On Being Earnest
Date: Tue, 20 Apr 1999 17:39:29 -0700


I think that many of the people who are new to cryptography are in such a hurry
to make the next (insert your favorite algorithm here) that they
do not want to believe that it takes both experience and years of academic
review before an algorithm will become widely accepted.  I could make
the strongest algorithm that the academic community and the world's
militaries have ever seen, and it would still not gain wide acceptance without
several years of review.  This is simply something that we must deal with.
IMHO it is not reasonable to expect people to protect important information
with an algorithm that has not been around for long.  At the very least, the
algorithm needs to have undergone enough review that users have an acceptable
level of confidence in the algorithm's security.  This does not ensure its
security, nor does it mean that newer algorithms are not better.  It does allow
cryptanalysts to try known attacks against the algorithm and attempt new
ones.  If it does not fail any of these reviews, we know that it offers
"some" degree of security.

my $.02

-steven



------------------------------

From: [EMAIL PROTECTED] (David Wagner)
Subject: Re: New drop in cipher in the spirit of TEA
Date: 20 Apr 1999 17:42:52 -0700

In article <7fcnq2$46b$[EMAIL PROTECTED]>,
 <[EMAIL PROTECTED]> wrote:
> I improved the algorithm to be more sensitive to bit changes in the plaintext
> (like David pointed out) and to the key.
> 
> It's at
> http://members.tripod.com/~tomstdenis/nc.c

This one seems much better, but now you eliminated the `sum'
variable, which accounted for most of the round-dependence in
your previous version.

As a consequence, the new version looks susceptible to slide
attacks [1]: the key schedule has four-round self-similarity,
and thus the cipher can be slid by four rounds.

Moreover, the complexity of slide attacks is independent of the
number of rounds, so merely adding some more rounds will not fix
this weakness.
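
For readers unfamiliar with the attack, here is a minimal toy sketch in
Python (not the posted nc.c, and with a key schedule of period one rather
than four, purely for brevity) showing why a self-similar schedule lets
slid pairs survive any number of rounds:

def round_fn(left, right, k):
    # One toy 16-bit Feistel round; the mixing is arbitrary, for illustration.
    f = ((right * 0x9E37 + k) ^ (right >> 3)) & 0xFFFF
    return right, left ^ f

def encrypt(block, k, rounds):
    # Every round uses the same key, so encryption is Round^rounds.
    left, right = block >> 16, block & 0xFFFF
    for _ in range(rounds):
        left, right = round_fn(left, right, k)
    return (left << 16) | right

def slide(block, k):
    # The slide relation: apply exactly one round.
    left, right = round_fn(block >> 16, block & 0xFFFF, k)
    return (left << 16) | right

key = 0x1234
for rounds in (8, 16, 64):           # more rounds does not break the relation
    p = 0xDEADBEEF
    p_slid = slide(p, key)
    c, c_slid = encrypt(p, key, rounds), encrypt(p_slid, key, rounds)
    assert c_slid == slide(c, key)   # slid plaintexts give slid ciphertexts
    print("%2d rounds: slide relation holds" % rounds)

When the schedule repeats every four rounds instead, the same argument
goes through with "one round" replaced by "four rounds" in the slide
relation; that is what makes extra rounds ineffective against the attack.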

Don't give up, but for next time I suggest doing more analysis
up front before proposing new ciphers.

[1] Alex Biryukov and David Wagner, ``Slide Attacks'', FSE'99.
    http://www.cs.berkeley.edu/~daw/papers/slide-fse99.ps

------------------------------

From: [EMAIL PROTECTED]
Subject: Block Cipher Question
Date: Wed, 21 Apr 1999 00:54:47 GMT

Hello.

Is there a block cipher algorithm that can be implemented using a 16-bit block
size?

Thanks.

- Randy

============= Posted via Deja News, The Discussion Network ============
http://www.dejanews.com/       Search, Read, Discuss, or Start Your Own    

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
