Cryptography-Digest Digest #439, Volume #10      Sat, 23 Oct 99 11:13:08 EDT

Contents:
  Re: Key Hidden Encryption (Chad Hurwitz)
  Re: "Risks of Relying on Cryptography," Oct 99 CACM "Inside Risks" (DJohn37050)
  Re: "Risks of Relying on Cryptography," Oct 99 CACM "Inside Risks" column (DJohn37050)
  Some humble thoughts on block chaining (Mok-Kong Shen)
  Re: NAI version of PGP 6.5.1 secure? (David A Molnar)
  Re: "Risks of Relying on Cryptography," Oct 99 CACM "Inside Risks" column (Mok-Kong Shen)
  Re: "Risks of Relying on Cryptography," Oct 99 CACM "Inside Risks" (Mok-Kong Shen)
  Re: "Risks of Relying on Cryptography," Oct 99 CACM "Inside Risks" column (Mok-Kong Shen)
  Re: "Risks of Relying on Cryptography," Oct 99 CACM "Inside Risks" column (Mok-Kong Shen)
  Re: OAP-L3: Where are the negative critiques from users? (Taneli Huuskonen)
  Re: "Risks of Relying on Cryptography," Oct 99 CACM "Inside Risks" column ("Trevor Jackson, III")
  Re: Sboxes (Tom St Denis)
  Re: Some humble thoughts on block chaining
  Re: some information theory (very long plus 72K attchmt) (Tim Tyler)

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (Chad Hurwitz)
Subject: Re: Key Hidden Encryption
Date: 22 Oct 1999 15:38:24 -0700

In article <[EMAIL PROTECTED]>,
Medical Electronics Lab  <[EMAIL PROTECTED]> wrote:
>Chad Hurwitz wrote:
>> The reason public key wont necessarily work in this situation is that user A
>> can look in program C and extract the public key of B and the private
>> key of program C, and then use that key to forge a message not created
>> by program C.
>
>Why not use a multiple link protocol?  User B creates a public/secret
>key pair, sends the public key to A and receives the encrypted
>message.  No need to store any key material in the program, just let
>B invent a key when they need to.  Neither user has to know much
>other than the link is established.  
>
>In any event, you can't hide the message that A sends if A has
>any brains at all.  It has to start out in the clear someplace,
>and they have access to the code and a debugger and will eventually
>find it.
>
>Patience, persistence, truth,
>Dr. mike


In our application the user creates a complicated file.  The authentication
process is to verify how fast user A created this special file with
program C, by authenticating the begin and end times of creating the file.

Using your technique, program C would connect to B's website and get a
time-sensitive key to encrypt each of the begin and end times of creating
A's special file.  This would certainly ensure the time that program C
connected to B's website.  You could also send the checksum of the file
along with the encrypted time information.  So in order to verify user
A's file you can compare the encrypted system time and checksums in A's
file to the times registered for that checksum on B's website database.

However there is a loophole with a net authority like you suggest:
user A can create a file that would take an hour, then calculate the
checksum of that file, run program C with the forged checksum hacked
into the program, and stop program C after only a half hour of use,
thus getting the encrypted timestamps for a file that took a half hour
and pasting them onto the file that A created in a full hour.  And thus
A's forged one-full-hour file would be authenticated as a file that
took a half hour.  This would be difficult for A to do, but it would
be possible.  There is also the possibility of A snooping on the key
delivered by B's website to C's program.

If there is a way to close this loophole, then what you suggest would
work.  I was trying to avoid net access, because the creation of these
files is difficult, and net downtime would disallow files that were
created honestly but couldn't be verified because of the downtime.
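For what it's worth, the website scheme discussed above can be sketched as a toy HMAC-based registration service.  Everything here is hypothetical (the key name, the message format are mine, not anyone's actual protocol); it only shows how B could bind a checksum to a time so that A cannot later alter either.  It does nothing about the forged-checksum loophole described above, since the client still chooses which checksum it submits.

```python
import hashlib
import hmac

# Hypothetical secret held only by B's website; the whole scheme
# collapses if this key ever ships inside program C.
SERVER_KEY = b"hypothetical-key-known-only-to-B"

def register(file_bytes: bytes, timestamp: int):
    """B's side: bind the file's checksum to a timestamp with an HMAC tag."""
    checksum = hashlib.sha256(file_bytes).hexdigest()
    msg = f"{checksum}:{timestamp}".encode()
    tag = hmac.new(SERVER_KEY, msg, hashlib.sha256).hexdigest()
    return timestamp, tag

def verify(file_bytes: bytes, timestamp: int, tag: str) -> bool:
    """Verifier's side: recompute the tag and compare in constant time."""
    checksum = hashlib.sha256(file_bytes).hexdigest()
    msg = f"{checksum}:{timestamp}".encode()
    expected = hmac.new(SERVER_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)
```

Tampering with either the file or the timestamp then fails verification; what the tag cannot prove is *when the work was done*, which is exactly the loophole.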

-- 
/-------------------------------------------------------------------\
| spam if you can, spam if you dare, spam if you must, i won't care |
| spam is futile,  spam is free,  spam is filtered,  so i won't see |
\-------------------------------------------------------------------/

------------------------------

From: [EMAIL PROTECTED] (DJohn37050)
Subject: Re: "Risks of Relying on Cryptography," Oct 99 CACM "Inside Risks"   
Date: 22 Oct 1999 23:40:55 GMT

This ordering of AES algs is a good idea, but under what criteria?
Total code space?
ROM space?
RAM space?
setup vs. runtime?

What are the devices we are trying to optimize for?
And what devices are a "don't care"?
For example, a PC in SW is a don't care, I think.
Low-end devices can care a lot, but what is the critical constraint?  Or
constraints?
Don Johnson

------------------------------

From: [EMAIL PROTECTED] (DJohn37050)
Subject: Re: "Risks of Relying on Cryptography," Oct 99 CACM "Inside Risks" column
Date: 22 Oct 1999 23:46:29 GMT

Yes, the effect of multiple algorithms increasing the cost/benefit ratio for
an attacker was mentioned in my AES paper.  This means it is entirely possible
that a flaw will not be found (or not be found as fast) with multiple winners
picked as it would be with only one.  In one case, jackpot; in the other,
switch to backup.
Don Johnson

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Some humble thoughts on block chaining
Date: Sat, 23 Oct 1999 09:40:18 +0200

Chaining is often employed in block encipherment. What I don't
quite understand is the fact that CBC appears to be the most
often mentioned (and perhaps used) chaining mode. Since ciphertext
is always assumed to be available to the analyst, while plaintext
is available to him only in certain types of attacks, I suppose
plaintext chaining is more useful. The only bad thing about using
plaintext for chaining could be, I guess, that the plaintext is
far from random and hence the whitening effect is poorer.
Nevertheless, the fact that it is unknown to the analyst in
a ciphertext-only attack should in my opinion outweigh its
statistical deficiency. Moreover, using addition (modulo word
size) instead of XOR is likely to improve the situation somewhat
due to the carry bits.

A probably better mode of chaining than with the immediately
prior plaintext block is to chain with the accumulated sum of
all prior plaintext blocks, or even the accumulated sum of all
prior plaintext and ciphertext blocks, if one wants to make use
of the better statistical quality of the ciphertexts.
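The accumulated-sum chaining described above can be sketched as follows, assuming 32-bit blocks, addition modulo 2**32, and a toy keyed mixing function standing in for a real block cipher:

```python
MASK = 0xFFFFFFFF  # 32-bit words

def toy_encrypt(block, key):
    # Stand-in for a real block cipher (odd multiplier, so invertible mod 2**32).
    return (block * 2654435761 ^ key) & MASK

def toy_decrypt(block, key):
    inv = pow(2654435761, -1, 1 << 32)
    return ((block ^ key) * inv) & MASK

def encrypt_chain(plain_blocks, key, iv):
    acc, out = iv, []
    for p in plain_blocks:
        # Chain with the accumulated sum of all prior plaintext blocks.
        out.append(toy_encrypt((p + acc) & MASK, key))
        acc = (acc + p) & MASK
    return out

def decrypt_chain(cipher_blocks, key, iv):
    acc, out = iv, []
    for c in cipher_blocks:
        p = (toy_decrypt(c, key) - acc) & MASK
        out.append(p)
        acc = (acc + p) & MASK
    return out
```

Note that, as with CBC, identical plaintext blocks no longer produce identical ciphertext blocks, because the running sum differs at each position.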

If we take a block as a unit, then chaining (disregarding for
a moment the subsequent block encryption) is like a form of
auto-key encryption, which is considered a stream cipher, if
I don't err. Now if we do encryption with block chaining from
block 1 to N and after that backwards from block N to 1, then
this superencipherment appears to have some non-trivial merits.
For I suppose the analyst will have to process all the blocks
instead of being able to work with perhaps some blocks only.
If this is true, then for large N the time factor may render his
work futile. If one has two encryption algorithms, then this
superencipherment can be done in a number of different ways.
Besides using one algorithm in each direction, one can apply the
two algorithms to the blocks in a checkerboard pattern or any
other arbitrary pattern. This variability is evidently to the
advantage of the communication partners.
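The forward-then-backward superencipherment can be sketched the same way; after the second pass, every output block depends on every plaintext block, which is the property conjectured above.  Again the "cipher" is a toy stand-in, and the key/IV names are mine:

```python
MASK = 0xFFFFFFFF

def toy_encrypt(b, key):
    # Stand-in for a real block cipher (odd multiplier, invertible mod 2**32).
    return (b * 2654435761 ^ key) & MASK

def toy_decrypt(c, key):
    inv = pow(2654435761, -1, 1 << 32)
    return ((c ^ key) * inv) & MASK

def cbc_pass(blocks, key, iv):
    """One CBC-style chaining pass over the blocks in the order given."""
    prev, out = iv, []
    for b in blocks:
        c = toy_encrypt(b ^ prev, key)
        out.append(c)
        prev = c
    return out

def cbc_unpass(blocks, key, iv):
    prev, out = iv, []
    for c in blocks:
        out.append(toy_decrypt(c, key) ^ prev)
        prev = c
    return out

def encrypt_both_ways(blocks, key1, key2, iv1, iv2):
    forward = cbc_pass(blocks, key1, iv1)              # block 1 .. N
    return cbc_pass(forward[::-1], key2, iv2)[::-1]    # then block N .. 1

def decrypt_both_ways(blocks, key1, key2, iv1, iv2):
    forward = cbc_unpass(blocks[::-1], key2, iv2)[::-1]
    return cbc_unpass(forward, key1, iv1)
```

Using key1 in one direction and key2 in the other corresponds to the "one algorithm in each direction" variant mentioned above.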

Normally one uses an IV to start the chaining. If the encryption
is strong enough, I suppose that the communication partners need
to agree only once on an IV, for after processing a message the
last chaining value can be saved for use as the IV of the next
message. Thus normally an IV need not be transmitted, nor is
there a need to somehow derive a block from the plaintext blocks
for use as an IV and suppress the ciphertext of the first block
(in schemes that are designed to use an IV without increasing
the number of blocks to be transmitted by 1).

M. K. Shen
========================
http://home.t-online.de/home/mok-kong.shen

------------------------------

From: David A Molnar <[EMAIL PROTECTED]>
Crossposted-To: alt.security.pgp
Subject: Re: NAI version of PGP 6.5.1 secure?
Date: 23 Oct 1999 08:30:34 GMT

In sci.crypt Adam Durana <[EMAIL PROTECTED]> wrote:
> consider your attacker to have unlimited computing power.  And in some cases
> unlimited computing power isn't needed.  Maybe this makes me paranoid, but
> you should never assume you know what an attacker is capable of.

OK. So you don't believe in complexity assumptions. Time to build crypto
which relies on other assumptions. Note that every system (I know of) has
at the very least the assumption that the secret key is not compromised!

What assumptions are you willing to make? 

-David

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: "Risks of Relying on Cryptography," Oct 99 CACM "Inside Risks" column
Date: Sat, 23 Oct 1999 11:04:05 +0200

David Wagner wrote:
> 
> I think it is more prudent to assume that if you select somehow from a
> list of five algorithms for each connection, you may end up with a system
> that is only as strong as the weakest of the five algorithms.

I suppose this has been stated before: if one randomly chooses
from a set of algorithms of different strength, then one has a
chance of (sometimes) using the weakest algorithm. The weakness
of the whole has to be summed, weighted according to the
probability of usage. One can of course take the standpoint that
it could be catastrophic if the weakest algorithm ever gets used
and that hence the weakest algorithm determines everything. That's
a thoroughly viable standpoint. But if one assumes that a few of
the AES finalists are (yet) unable to be differentiated from one
another with respect to strength (a scenario that I believe to be
certain), then there is no weakest algorithm to be considered.

M. K. Shen

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: "Risks of Relying on Cryptography," Oct 99 CACM "Inside Risks"   
Date: Sat, 23 Oct 1999 11:04:01 +0200

Shawn Willden wrote:
> 
> Perhaps a reasonable compromise would be to list multiple algorithms in the standard,
> but define one as being "primary", or even define a graduated scale.  A device that
> supports AES level 1 implements algorithm A.  AES level 2 devices implement A and B
> and AES level 3 devices implement A, B and C.  Thus, any two devices that support AES
> can communicate, but devices with higher levels of support have more options.
> Perhaps the standard, or a related standard, could specify how algorithm choice is
> indicated to the receiving device.

The use of levels is in fact done in standards of certain programming
languages.

M. K. Shen

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: "Risks of Relying on Cryptography," Oct 99 CACM "Inside Risks" column
Date: Sat, 23 Oct 1999 11:04:11 +0200

David Wagner wrote:
>

> Suppose unbeknownst to us, one of the AES finalists ("the lemon") is
> fatally flawed, but the rest are totally strong.  In other words, let's
> assume that our adversaries are able to read traffic encrypted under
> "the lemon", but we don't realize this.
> 
> Now, there are two ways we could have designed our cryptosystems:
> 
> 1. Scenario: We picked one lucky finalist as the single AES winner.
>    That cipher, and no other, becomes deployed everywhere.
> 
>    Risk: If we pick "the lemon" as our winner, all of traffic is readable.
> 
>    Expected result: With probability at most 1/5, all our traffic
>    is readable.  With prob. at least 4/5, all our traffic is secure.

> 
> 2. Scenario: All five finalists were designed as winners; the idea is
>    that, for each new connection, one picks a new key at random _and_
>    a new cipher (from the five finalists) at random.  This is what is
>    done in deployed systems worldwide.
> 
>    Risk: Every connection that uses "the lemon" is insecure.
> 
>    Expected result: With probability 1, exactly 1/5 of our traffic can
>    be read by the adversary.

Question: Why did you use 'at most' and 'at least' in the first
scenario above instead of 'exactly' (whereas 'exactly' is used in
the second scenario cited)? Could you please explain a little?

M. K. Shen

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: "Risks of Relying on Cryptography," Oct 99 CACM "Inside Risks" column
Date: Sat, 23 Oct 1999 11:04:16 +0200

David Wagner wrote:
> 

> My preference would be to have exactly one MUST-implement "AES winner".
> Then, if you like, you can pick several optional alternates that you MAY
> implement at your discretion; I don't care about that part.  (I'm using
> the words MUST and MAY as in the meaning established for IETF standards.)

I conjecture that this is acceptable to the majority. It means,
however, as I said in a previous follow-up, that NIST, besides
picking a winner, has to attest to the usefulness of a couple of
other candidates in formal documents, with the same level of detail
as for the winner.

M. K. Shen

------------------------------

From: [EMAIL PROTECTED] (Taneli Huuskonen)
Crossposted-To: talk.politics.crypto
Subject: Re: OAP-L3: Where are the negative critiques from users?
Date: 23 Oct 1999 14:57:31 +0300

=====BEGIN PGP SIGNED MESSAGE=====

In <[EMAIL PROTECTED]> Boris Kazak
<[EMAIL PROTECTED]> writes:

>Taneli Huuskonen wrote:
[...]
>> Anyone who really understands the content of Shamir's classical result
>> on the unbreakability of the one-time pad knows that it applies exactly
>**********************

>Unfortunately, there is a *small* difference between Shamir and
>Shannon...sorry

Oops!  Thanks for the correction;  I meant Shannon, of course.

Taneli Huuskonen

=====BEGIN PGP SIGNATURE=====
Version: 2.6.3i
Charset: noconv

iQB1AwUBOBGicgUw3ir1nvhZAQEgiAMAp2IkQ8ZABQ587mEXJaH2t2cqE0WEB4Hj
j9UH7URqqZkTvbOebDukF0bS2pfPF1BAvTRpxBPwHjT5na9cnsshUMgivDSz7Ec4
iPt6AtAfld1efeEPDbQCvDN1L8xeFYlm
=/Sxo
=====END PGP SIGNATURE=====
-- 
I don't   | All messages will be PGP signed,  | Fight for your right to
speak for | encrypted mail preferred.  Keys:  | use sealed envelopes.
the Uni.  | http://www.helsinki.fi/~huuskone/ | http://www.gilc.org/

------------------------------

Date: Sat, 23 Oct 1999 08:38:28 -0400
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: "Risks of Relying on Cryptography," Oct 99 CACM "Inside Risks" column



Jerry Coffin wrote:

> Now, some people have proposed ordering the ciphers in the
> standard -- i.e. the standard itself would say something to the
> effect of: "here are algorithms 1, 2 and 3" and they'd be used in
> that order.  I don't agree with this approach.  I believe AES should
> stick to algorithms, not protocols.  Protocols are (at least IMO) an
> entirely separate kettle of fish from algorithms, and should be kept
> separate both in the implementation and the standards -- right now
> FIPS 46-2 describes a single algorithm; other standards deal with
> how to do chaining, how to do protocols, etc., and I think that's
> the way things should stay.  The one place you'd see something in
> AES that deals with algorithm selection would be a bit like the
> progression from FIPS 46-2 to FIPS 46-3 -- this changes the
> recommendation from DES to 3DES.  In a later version of AES, there
> might be a change from (say) three algorithms to two if one of the
> existing algorithms was found to have a substantial weakness.
>
> The basic difference would be that in the AES, there would be more
> than one algorithm described -- nothing more.  Nothing about which one
> is preferred, how to pick between them, negotiate usage, etc.
> Ultimately, ensuring interoperability goes WELL beyond the algorithm,
> and does NOT belong in AES, IMO.  One proposal has suggested that one
> particular algorithm be chosen to include in all possible AES
> implementations.  IMO, this would be pointless since it's still far
> too weak to ensure anything approaching interoperability (just as the
> single algorithm in DES doesn't ensure interoperability between
> products).

This logic leaves me no choice.  I am forced to agree.  The "output" of the AES
contest should be a set of ciphers.  NMNL.


------------------------------

From: Tom St Denis <[EMAIL PROTECTED]>
Subject: Re: Sboxes
Date: Sat, 23 Oct 1999 12:37:26 GMT

In article <CCaQ3.270$[EMAIL PROTECTED]>,
  "Adam Durana" <[EMAIL PROTECTED]> wrote:
> Hi,
>
> I've used Sboxes in ciphers I've written for fun, and used an Sbox
> and an inverse of that Sbox to decrypt.  I have been looking at
> ciphers, such as Blowfish, that use Sboxes that consist of 256
> 32-bit entries and they don't seem to have an inverse for
> decryption.  I am probably missing something important.  Would
> anyone care to explain how Sboxes like this work, or point me to a
> good site where I can read about them?  My reference book for
> cryptography, which is Handbook of Applied Cryptography, discusses
> them when describing a cipher which uses Sboxes but it does not
> explain them by themselves.  I know they are a form of substitution
> but other than that I am pretty clueless on how they work, or how
> you design them.  I've been working with Sboxes with 256 8-bit
> entries and then an inverse of that, so Sbox[byte] = cbyte and
> Sbox-1[cbyte] = byte.

The sboxes in Blowfish are used in the F function.  They do not work on
the source word directly.  In SAFER, for example, the source words go
through the sboxes and come out the other end; in Blowfish/CAST,
however, the sboxes are used only in the round function.

i.e.

A' = A xor F(B)

where B goes through the sboxes and the result is xored against A.
Obviously the inverse of this is

A' xor F(B) = A xor F(B) xor F(B) = A
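To make the point concrete, here is a toy sketch: when the sbox lives only inside F, it never needs an inverse, and it need not even be a bijection.  The sbox below is a made-up byte map, not Blowfish's actual tables:

```python
# Deliberately NON-invertible toy sbox (squaring mod 256 collides);
# decryption still works because F is only ever xored in, never undone.
SBOX = [(i * i + 41) % 256 for i in range(256)]

def F(half):
    """Round function: push each byte of a 32-bit half through the sbox."""
    return int.from_bytes(
        bytes(SBOX[x] for x in half.to_bytes(4, "big")), "big")

def encrypt_round(a, b):
    return a ^ F(b), b      # A = A xor F(B); B passes through unchanged

def decrypt_round(a, b):
    return a ^ F(b), b      # identical step: xoring F(B) twice cancels out
```

This is why Blowfish's 256-entry 32-bit sboxes can be arbitrary key-derived tables with no inverse anywhere in the cipher.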

Tom



------------------------------

From: [EMAIL PROTECTED] ()
Subject: Re: Some humble thoughts on block chaining
Date: 23 Oct 99 13:11:23 GMT

Mok-Kong Shen ([EMAIL PROTECTED]) wrote:
: Since ciphertext
: is always assumed to be available to the analyst, while plaintext 
: is available to him only in certain types of attacks, I suppose 
: plaintext chaining is more useful.

You are correct in a way. But this point of view is considered naive.

Essentially, it is assumed that the naked block cipher is strong enough,
so CBC mode is recommended because it creates very little additional
propagation of transmission errors, and avoids the one specific problem of
identical messages enciphering identically.

Using a weak, but genuinely encrypting, stream cipher before and after ECB
encipherment would pretty much kill all the attacks on the underlying
block cipher dead, but the idea that that would be the proper way to use
DES or LUCIFER and so on just never got off the ground.
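A rough sketch of the "weak stream cipher before and after ECB" idea, with a deliberately weak LCG keystream and a toy 32-bit block function standing in for DES or LUCIFER (the names and parameters are illustrative, not any historical proposal):

```python
MASK = 0xFFFFFFFF

def weak_keystream(seed, n):
    """Deliberately weak LCG keystream: genuinely encrypting, feeble alone."""
    out, x = [], seed & MASK
    for _ in range(n):
        x = (1103515245 * x + 12345) & MASK
        out.append(x)
    return out

def ecb_encrypt(b, key):
    # Stand-in for DES/LUCIFER on a toy 32-bit block.
    return (b * 2654435761 ^ key) & MASK

def ecb_decrypt(c, key):
    inv = pow(2654435761, -1, 1 << 32)
    return ((c ^ key) * inv) & MASK

def encrypt(blocks, key, seed_pre, seed_post):
    pre = weak_keystream(seed_pre, len(blocks))
    post = weak_keystream(seed_post, len(blocks))
    # Stream-xor before the ECB step, and again after it.
    return [ecb_encrypt(b ^ s1, key) ^ s2
            for b, s1, s2 in zip(blocks, pre, post)]

def decrypt(blocks, key, seed_pre, seed_post):
    pre = weak_keystream(seed_pre, len(blocks))
    post = weak_keystream(seed_post, len(blocks))
    return [ecb_decrypt(c ^ s2, key) ^ s1
            for c, s1, s2 in zip(blocks, pre, post)]
```

The pre-whitening denies the analyst known plaintext-ciphertext pairs for the block cipher itself, and the post-whitening denies him its raw outputs, which is the sense in which attacks on the underlying cipher are blunted.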

John Savard

------------------------------

From: Tim Tyler <[EMAIL PROTECTED]>
Subject: Re: some information theory (very long plus 72K attchmt)
Reply-To: [EMAIL PROTECTED]
Date: Sat, 23 Oct 1999 13:21:50 GMT

Anton Stiglic <[EMAIL PROTECTED]> wrote:
: Tim Tyler wrote:
:> Anton Stiglic <[EMAIL PROTECTED]> wrote:

:> : People have the wrong opinion that if you compress data, the result
:> : is random data.  That's absolutely wrong.
:>
:> On the other hand, some people do have the (correct) impression that
:> compressing files well results in data which become more and more
:> indistinguishable from random data on statistical tests, as the
:> compression algorithm improves.

: This is completely false, let me say why again.

: Let P be the set of files.
: thm.: Img(Comp(P)) will never be random if P is not random.
: proof.  Create the following algorithm A that distinguishes
:     output of Comp() and true random bits.
:     Given c, A computes DeComp(c); if this gives a valid file
:      (it's easy to find out if it's a valid file; this should be defined),
:      then A says "this is not random",
:      else, A says "this is random".
: There you go, you have a polynomial-time distinguisher.

It is easy to prove that for good compression algorithms (i.e. those that
produce maximal compression in some sense on their target data set)
Decomp(C) will /always/ produce a valid file.

Consequently your supposed test will always print (incorrectly) "This is
not random".

Since such a test is plainly useless for determining the randomness or
otherwise of the input stream, it matters not one iota whether the
test is polynomial or not (a fact which, incidentally, you have not
proved; you merely assume that DeComp(C) is itself executable in
polynomial time).
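The claim quoted at the top (better compression drives the output statistics toward random-looking) can be roughly illustrated with a real, imperfect compressor: compare the byte-frequency bias of some highly redundant text before and after zlib compression.  This is only an illustration, not a proof about ideal compressors:

```python
import zlib
from collections import Counter

def byte_bias(data):
    """Largest deviation of any byte value's frequency from uniform (1/256)."""
    counts = Counter(data)
    n = len(data)
    return max(abs(counts.get(v, 0) / n - 1 / 256) for v in range(256))

# Highly redundant input with heavily biased byte statistics.
text = b"the quick brown fox jumps over the lazy dog " * 500
packed = zlib.compress(text, 9)

print("raw bias:   ", byte_bias(text))
print("packed bias:", byte_bias(packed))
# Even this off-the-shelf compressor flattens the statistics markedly;
# a better compressor would flatten them further still.
```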

: The argument you gave is just completely ridiculous and false!!

:-(
-- 
__________
 |im |yler  The Mandala Centre  http://www.mandala.co.uk/  [EMAIL PROTECTED]

He who laughs last, failed to understand the joke.

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
