Cryptography-Digest Digest #888, Volume #10      Wed, 12 Jan 00 06:13:01 EST

Contents:
  Re: Wagner et Al. (Guy Macon)
  Re: Encryption Keys ("Trevor Jackson, III")
  Re: Questions about message digest functions (David A Molnar)
  Re: Encryption Keys (Nicol So)
  A simple method ("Jeff Moser")
  Re: AES & satellite example (David Wagner)
  Re: AES & satellite example (David Wagner)
  Re: Doing math on very high numbers (Johnny Bravo)
  Re: Encryption Keys (Sisson)
  Re: Encryption Keys (Quisquater)
  Re: Q: Block algorithms with variable block size (Stefan Lucks)
  Re: Example C programs to encrypt/decrypt password (RavingCow)
  Re: "1:1 adaptive huffman compression" doesn't work (Mok-Kong Shen)
  Re: "1:1 adaptive huffman compression" doesn't work (Mok-Kong Shen)
  Re: Q: Block algorithms with variable block size (Mok-Kong Shen)

----------------------------------------------------------------------------

From: [EMAIL PROTECTED] (Guy Macon)
Subject: Re: Wagner et Al.
Date: 11 Jan 2000 23:07:47 EST

In article <[EMAIL PROTECTED]>, 
[EMAIL PROTECTED] (lordcow77) wrote:
>
>On a properly administered Windows NT system, to say nothing of *nixes,
>a trojan will not have the access rights necessary to modify system
>files, access files which the ACLs forbid, enter the memory space of
>another process, or intercept system messages or events not intended
>for it. It's a pretty difficult task to get an NT system that secure,
>especially if you have any type of reasonable server install, but it's
>entirely doable in a day or two (unless you burn disk images onto CD
><grin>).

I never managed to get this to work.  I can boot from the NT disc
when my HD is 100% wiped (no MBR, no boot record, nothing but zeros
everywhere), but I never got NT to boot from the CD.  I had the same
problem trying to get NT to boot from a Jazz drive.  I haven't spent
much time on the problem, so maybe it's simple.


------------------------------

Date: Tue, 11 Jan 2000 23:31:32 -0500
From: "Trevor Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: Encryption Keys

Paul Roy wrote:

> Hello All ---
>    I found this on Hacker News.  Rather disturbing?  Comments?
>
>           Proteus
> *****----------******
>
> Encryption Keys Easily Found On Systems
>
> contributed by evenprime
> Researchers at nCipher in Cambridge, England have found a way to easily find
> encryption keys on target systems. The technology centers on this: There is a
> general assumption that encryption keys will be impossible to find because
> they are buried in servers crowded with similar strings of code. What the
> researchers discovered, however, is that encryption keys are more random than
> other data stored in servers. To find the encryption key, one need only search
> for abnormally random data.

Almost by definition, the criteria they are using for finding keys would
also match any ciphertext present.  Since it is reasonable to expect more
ciphertext than keys to be present, I suspect the researchers are relying
upon the gullibility of their audience.  Is their grant up for review?

How interesting is a research result that presumes unlimited access to the storage
on a server?  Given such unlimited access, why would the keys be more interesting
than the plaintext the keys are protecting?



------------------------------

From: David A Molnar <[EMAIL PROTECTED]>
Subject: Re: Questions about message digest functions
Date: 12 Jan 2000 04:01:31 GMT

Tim Tyler <[EMAIL PROTECTED]> wrote:
> The designer of the function gets to choose e (the fixed key used for the 
> particular hash in question).  He also gets to choose p and q.   Thus he
> is in the perfect position to ensure that e and (p - 1)(q - 1) *are*
> relatively prime.

> That's how it seems to me anyway.

Oh, OK. I guess I was looking at it from the point of view where
we don't trust the designer of the function. Whether or not
that matters depends on what you will use the function for (i.e.,
will it help the designer if he cheats?).

Thanks, 
-David
 


------------------------------

From: Nicol So <[EMAIL PROTECTED]>
Subject: Re: Encryption Keys
Date: Tue, 11 Jan 2000 23:38:44 -0500
Reply-To: see.signature

Paul Roy wrote:
> 
>    I found this on Hacker News.  Rather disturbing?  Comments?
> ... 
> 
> Encryption Keys Easily Found On Systems
> 
> contributed by evenprime
> Researchers at nCipher in Cambridge, England have found a way to easily find
> encryption keys on target systems. The technology centers on this: There is a
> general assumption that encryption keys will be impossible to find because
> they are buried in servers crowded with similar strings of code. ...
> 
> ZD Net

Maybe something is lost in the reporting, but I've never heard of any
competent security engineer or cryptographer making the so-called
"general assumption". I wonder where they got their information from.

-- 
Nicol So, CISSP // paranoid 'at' engineer 'dot' com
Disclaimer: Views expressed here are casual comments and should
not be relied upon as the basis for decisions of consequence.

------------------------------

From: "Jeff Moser" <[EMAIL PROTECTED]>
Subject: A simple method
Date: Tue, 11 Jan 2000 23:53:43 -0500

You find it using the extended Euclidean algorithm.

If you recall, the Euclidean algorithm finds the greatest common divisor
of two numbers.

You set the bigger number equal to the smaller number multiplied by some
coefficient plus the remainder:

3220 = 79 * 40 + 60

the initial smaller number becomes the larger number.. and the remainder
becomes the smaller number

79 = 60 * 1 + 19

iterates again..

60 = 19 * 3 + 3
19 = 3 * 6 + 1

the remainder is one.. thus the GCD is one.. thus.. there is an inverse.

reversing this we see
1 = 19 - 3 * 6
and further going back and solving for the remainder of 3.. we get

1 = 19 - [60 - 19 * 3] * 6
getting some terms together (notably 60 and 19, because they appear in the
second-to-last line)

1 = 19 - 6 * 60 + 18 * 19
1 = 19 * 19  - 6 * 60
now.. going up another line
1 = 19 *[79 - 60 * 1] - 6 * 60
1 = 19 * 79 - 19 * 60 - 6 * 60
1 = 19 * 79 - 25 * 60
again.. going up again
1 = 19 * 79 - 25 * [3220 - 79 * 40]
1 = 19 * 79 - 25 * 3220 + 1000 * 79
1 = 1019 * 79 - 25 * 3220
now.. you take the coefficient of the "e" value.. which is 1019.. and take
this mod 3220

1019 mod 3220 = 1019.. [reduce mod 3220 in case the coefficient is negative]

This is very useful because when you have an encrypted value

C == M^e (mod n)
and decrypt it you get

C^d == (M^e)^d == M^(ed) == M^(1 + k * phi(n)) == M * (M^phi(n))^k == M * 1^k == M (mod n)

(the phi part cancels out by Euler's theorem, provided gcd(M, n) = 1)
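The hand computation above can be mechanized. A small Python sketch of the
extended Euclidean algorithm, run on the numbers from the worked example:

```python
# Extended Euclidean algorithm: finds (g, x, y) with a*x + b*y == gcd(a, b).
# Used here to find d with e*d == 1 (mod phi), as in the worked example.

def extended_gcd(a, b):
    """Return (g, x, y) such that a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = extended_gcd(b, a % b)
    # back-substitution, mirroring the "going up a line" steps above
    return g, y, x - (a // b) * y

def modular_inverse(e, phi):
    g, x, _ = extended_gcd(e, phi)
    if g != 1:
        raise ValueError("e and phi are not coprime; no inverse exists")
    return x % phi  # reduce mod phi in case x came out negative

print(modular_inverse(79, 3220))  # -> 1019, matching the hand computation
```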

cheers..



------------------------------

From: [EMAIL PROTECTED] (David Wagner)
Subject: Re: AES & satellite example
Date: 11 Jan 2000 22:09:28 -0800

In article <[EMAIL PROTECTED]>,
Jerry Coffin  <[EMAIL PROTECTED]> wrote:
> In article <[EMAIL PROTECTED]>, [EMAIL PROTECTED] says...
> 
> > For example, I know of systems where classified algorithm code is
> > encrypted under an algorithm specific to that function and the
> > encrypted code image is signed under another algorithm specific to
> > that purpose. Both of these algorithms that protect the code from
> > substitution and disclosure are dedicated to these functions, not used
> > for anything else and can not be replaced.
> 
> This probably helps a little, but only a little. [...]
> IOW, if you're going to allow updating of the code, you certainly want 
> to use a separate algorithm, but even at best this is only a _minor_ 
> improvement, not a real cure for the fundamental problem.

Do you think so?
I was thinking that this might help a lot.  The special algorithms
used for code uploading will be used only very rarely, and are not
performance-intensive, so they could be (e.g.) 1000 rounds of Triple-DES,
if you like.

------------------------------

From: [EMAIL PROTECTED] (David Wagner)
Subject: Re: AES & satellite example
Date: 11 Jan 2000 22:13:07 -0800

In article <[EMAIL PROTECTED]>,
Doug Stell <[EMAIL PROTECTED]> wrote:
> For example, I know of systems where classified algorithm code is
> encrypted under an algorithm specific to that function and the
> encrypted code image is signed under another algorithm specific to
> that purpose. Both of these algorithms that protect the code from
> substitution and disclosure are dedicated to these functions, not used
> for anything else and can not be replaced.

By the way, this sounds like a lovely application for an
information-theoretically secure (provably secure) cryptosystem,
e.g., the one-time pad together with an information-theoretically
secure message authentication scheme.

(You might also pre-encrypt with Triple-DES, just in case
an attacker `cheats' and evades the security model used to
prove the cryptosystem secure, but that's tangential.)
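The one-time pad mentioned above can be sketched in a few lines (Python here,
purely for illustration; the hard part in practice is generating, sharing, and
never reusing the pads, not the XOR itself):

```python
# Minimal sketch of the one-time pad: XOR with a truly random pad at least
# as long as the message.  Security is information-theoretic only if the
# pad is truly random, kept secret, and never reused.
import os

def otp_xor(data: bytes, pad: bytes) -> bytes:
    assert len(pad) >= len(data), "pad must cover the whole message"
    return bytes(d ^ p for d, p in zip(data, pad))

message = b"code image v2"
pad = os.urandom(len(message))     # stand-in for a pre-shared true-random pad
ciphertext = otp_xor(message, pad)
assert otp_xor(ciphertext, pad) == message   # XOR with the same pad inverts
```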

------------------------------

From: [EMAIL PROTECTED] (Johnny Bravo)
Subject: Re: Doing math on very high numbers
Date: Wed, 12 Jan 2000 01:19:40 GMT

On Thu, 30 Nov 2000 19:25:41 +0100, "Erik Edin" <[EMAIL PROTECTED]>
wrote:

>Hello.
>I intend to make an encryption program in C++ that uses the RSA-algorithm in
>the future. 

  Is that going to be a one way function, or will you be able to get
information back from the future as well?  <grin>

>I would like to know if anyone knows of any tutorial that
>describes a method of doing math on very high numbers? 

  Best bet is not to reinvent the wheel; get a big number library
and use that instead.  See the thread titled
"Large Numbers Beginner Question" in this very group.  I doubt it has
expired from your server, as it is only 5 days old.
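To show why a bignum library settles the question: a language with
arbitrary-precision integers can do the RSA arithmetic directly. A sketch in
Python (whose built-in ints are unbounded); a C++ program would use a bignum
library such as GMP for the same operations. The tiny textbook primes are for
illustration only and give no security:

```python
# Textbook RSA with arbitrary-precision integers; demo-sized primes only.

p, q = 61, 53
n = p * q                  # modulus: 3233
phi = (p - 1) * (q - 1)    # 3120
e = 17                     # public exponent, coprime to phi
d = pow(e, -1, phi)        # private exponent via modular inverse (Python 3.8+)

m = 65                     # "message" encoded as an integer < n
c = pow(m, e, n)           # encrypt: m^e mod n
assert pow(c, d, n) == m   # decrypt: c^d mod n recovers m
print(n, d, c)             # -> 3233 2753 2790
```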

  Best Wishes,
    Johnny Bravo


------------------------------

From: Sisson <[EMAIL PROTECTED]>
Subject: Re: Encryption Keys
Date: Wed, 12 Jan 2000 08:31:45 GMT

Hello

Paul Roy wrote:

>  There is a
> general assumption that encryption keys will be impossible to find because
> they are buried in servers crowded with similar strings of code.

Doesn't this sort of ruin the point of having the key in the first place?
Instead of having the key, you need to know which data is the key, which in
itself is a kind of key, if you know what I mean...

From Spendabuck



------------------------------

From: Quisquater <[EMAIL PROTECTED]>
Subject: Re: Encryption Keys
Date: Wed, 12 Jan 2000 10:22:26 +0100

Here is a much more precise reference
(please, we are in a sci group, not a rumor group):

"New Viruses Search For Strong Encryption Keys"
(newspapers like such titles)

http://www.techweb.com/wire/story/TWB19990315S0001

and the paper by Adi Shamir and Nicko van Someren
(dated September 22, 1998!) 

is here:

http://www.ncipher.com/products/files/papers/anguilla/keyhide2.pdf

By the way, I know of several uses of such techniques for reverse
engineering (or to prevent it) in the past, including by me in the
'80s (using a simple color scheme for encoding the bytes of a file).
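The Shamir/van Someren approach can be sketched in a few lines. This is only
an illustration of the underlying idea (flag regions whose byte statistics
look "too random"), not the paper's actual, much faster statistical test; the
window size and the 5.0 bits/byte threshold are arbitrary demo choices, and
the "key" here is just a stand-in block of 64 distinct byte values:

```python
# Sketch of entropy-based key hunting: key material tends to look "more
# random" than surrounding code and text, so windows with unusually high
# byte entropy are candidate key locations.
import math

def window_entropy(data: bytes) -> float:
    """Shannon entropy of the byte distribution, in bits per byte (0..8)."""
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    total = len(data)
    return -sum(c / total * math.log2(c / total) for c in counts if c)

def find_high_entropy_windows(image: bytes, size: int = 64,
                              threshold: float = 5.0):
    """Yield offsets of fixed-size windows that look suspiciously random."""
    for off in range(0, len(image) - size + 1, size):
        if window_entropy(image[off:off + size]) > threshold:
            yield off

# Low-entropy filler around a stand-in "key" (64 distinct byte values,
# entropy 6.0 bits/byte; real key material would be closer to random):
image = b"A" * 256 + bytes(range(64)) + b"B" * 256
print(list(find_high_entropy_windows(image)))   # -> [256]
```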

------------------------------

From: Stefan Lucks <[EMAIL PROTECTED]>
Subject: Re: Q: Block algorithms with variable block size
Date: Wed, 12 Jan 2000 10:13:43 +0100

On Wed, 12 Jan 2000, Mok-Kong Shen wrote:

> Are there block encryption algorithms in the literature that
> have block sizes that are variable, i.e. user choosable (maybe with 
> some constraints)? I believe that such a parametrization could be
> quite valuable, though it might not be easy to do with the 
> techniques that underlie certain currently well-known algorithms.


There are block ciphers for variable but large blocks (320 bit or more). 
Two papers of mine, where I described such beasts, are:
  1. "Faster Luby-Rackoff Ciphers", Proceedings of Fast Software
     Encryption 1996.
  2. "On the Security of Remotely Keyed Encryption", Proceedings of Fast
     Software Encryption 1997.
See
  http://th.informatik.uni-mannheim.de/People/Lucks/papers.html
to access the papers online. 

See also the proposal of the block ciphers BEAR, LION and LIONESS by Ross
Anderson and Eli Biham, also published in the proceedings of FSE 1996. I
guess, you can find the paper at Ross Anderson's homepage, too. 
  


-- 
Stefan Lucks      Th. Informatik, Univ. Mannheim, 68131 Mannheim, Germany
            e-mail: [EMAIL PROTECTED]
            home: http://th.informatik.uni-mannheim.de/people/lucks/
===== Wer einem Computer Unsinn erzaehlt, muss immer damit rechnen. =====



------------------------------

From: RavingCow <[EMAIL PROTECTED]>
Subject: Re: Example C programs to encrypt/decrypt password
Date: Wed, 12 Jan 2000 20:28:03 +1100
Reply-To: "vbkid[at]rocketmail[dot]com"

benny wrote:
> 
> Hi,
> 
> I need to get sample of c programs to encrypt/decrypt password.
> I need those, because currently I use clear text passwords to access
> database application  which is not secure, and will use compiled C programs
> to encrypt/decrypt the password. Thanks alot for your help.
> 
> regards,
> Benny

You should read the RFC on the MD2 hash for an easy-to-implement hashing
algorithm.
Search Yahoo for 'RFC', then search the index for MD2.

If you need source code, you can email me at vbkid[at]rocketmail[dot]com
and I will be happy to provide it for you.
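For what it's worth, MD2 is quite dated and is not in Python's standard
hashlib, so this sketch shows the same store-a-hash idea with SHA-256, plus a
per-user random salt (a good habit whichever hash is chosen). It is a minimal
illustration, not production password handling:

```python
# Store a salted hash of the password instead of the cleartext password;
# verify by re-hashing with the stored salt and comparing digests.
import hashlib
import os

def hash_password(password: str, salt: bytes = b"") -> tuple:
    salt = salt or os.urandom(16)      # fresh random salt if none given
    digest = hashlib.sha256(salt + password.encode()).hexdigest()
    return salt, digest

def check_password(password: str, salt: bytes, digest: str) -> bool:
    return hash_password(password, salt)[1] == digest

salt, digest = hash_password("s3cret")       # store these, not the password
assert check_password("s3cret", salt, digest)
assert not check_password("wrong", salt, digest)
```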

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: "1:1 adaptive huffman compression" doesn't work
Date: Wed, 12 Jan 2000 11:09:26 +0100

SCOTT19U.ZIP_GUY wrote:
> 
> <[EMAIL PROTECTED]> wrote:
> >John Savard wrote:
> >>
> >> I can achieve that if I don't have to go to byte boundaries. I can
> >> achieve that if I'm allowed to use random padding with a length
> >> indicator. But trying to do it David Scott's way, that condition can
> >> no longer be achieved (well, I can always XOR my last byte with a
> >> checksum of the rest of the message to at least mask the bias...).
> >
> >I am not quite sure whether one couldn't even attempt to 'define'
> >the 1-1 problem away with a 'convention'. That is, if on compression
> >the last code symbol does not fill to a byte boundary, then the
> >software has to do filling with bits that do not form a valid code
> >symbol and it is 'required' by convention that the filling is
> >to be random, say dependent on the system clock. Now if one
> >compresses one and the same file twice, the results are identical
> >with the exception of the filling bits. This way I suppose the
> >original argument for 1-1 in the case of the analyst using wrong
> >keys to decrypt (i.e. the argument of thereby leaking some information
> >to him because of non-1-1) no longer applies. Certainly I admit
> >that what I described is a 'trick', but it works for the purpose at
> >hand, doesn't it? Or there could be technical problems of realizing
> >that 'convention' that I haven't yet seen? Thanks.

>  The problem with adding random information is that when you decompress
> you have to know what is random, so the decompressor does not mistakenly
> treat some of the padding as data to decompress.  For example, say you're
> doing Huffman coding the old way, and the last symbol compressed ends in
> bit position 2 of the last byte.  How do you add random data so the
> decompressor does not use some of it?  It is far better to use the 1-1
> compressor in the first place.

One can arrange to have a (non-terminal) node in the Huffman tree
that does not correspond to any plaintext symbol. Then anything
derived from that node is not a valid symbol. Note that the 'convention'
simply means that if the software can't interpret a group of
bits at the end of the file (one that could also be the prefix of a
valid code), then forget it, for it is 'random'; if asked later
to compress back, just put in any stuff there that is also random.
As I said, this is only a 'trick', so critiques such as inelegance
could surely apply. But I suppose one should be pragmatic
('practical') and be ready to accept things non-perfect.

Work to find a genuinely 1-1 compressor may be theoretically valuable.
But if I look back at the volume of discussion on that theme and estimate
the amount of time the participants spent on it (the major part
probably by those developing the software to cope with 1-1), I am
personally of the opinion that the cost/benefit ratio isn't
very good. I have a similar feeling towards quite a number of other
scientific research works where a lot is written (and continues
to be written for decades) that is certainly theoretically highly
valuable and interesting but practically much less so, because one
could solve the problems involved with some 'quick and dirty' means
and live with that sufficiently well.

It is, in my humble opinion, in almost all (practical) cases
uneconomical or even harmful to pursue (theoretical) perfection. I mean
that from a global social standpoint the resources and efforts are not
spent in an optimal way; the world doesn't have an 'over-supply' of
these. The time and energy of the experts could be better employed to
solve some other problems that really must be solved. Note that I said
'from a global social standpoint'. From the standpoint of the
researchers who publish their works it can certainly look quite
different. ('Look, here is again a high-quality paper of mine that
only a few peers can read', etc.) Well, I am very aware that my
opinions above on scientific research are entirely heretical.
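The padding 'convention' described above can be made concrete with a toy
prefix code in which the pattern '111' is deliberately never assigned to a
symbol: the compressor fills the last byte with bits beginning with (a prefix
of) that invalid pattern, and the decompressor discards them. This is only an
illustrative sketch of the idea, not anyone's actual software:

```python
# Toy prefix code where '111' is deliberately not a codeword, so bits
# beginning with it can only be padding.  By the 'convention' discussed
# above, the filler bits after the invalid prefix could just as well be
# random; here they are zeros for determinism.

CODE = {"a": "0", "b": "10", "c": "110"}    # '111...' is never a codeword
DECODE = {v: k for k, v in CODE.items()}
PAD = "111"

def compress(text: str) -> bytes:
    bits = "".join(CODE[ch] for ch in text)
    if len(bits) % 8:
        # fill to a byte boundary with (a prefix of) the invalid pattern
        bits += (PAD + "0" * 8)[: 8 - len(bits) % 8]
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

def decompress(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    out, cur = [], ""
    for bit in bits:
        cur += bit
        if cur in DECODE:
            out.append(DECODE[cur])
            cur = ""
        elif cur.startswith(PAD):
            break                 # hit the invalid prefix: it's padding
    return "".join(out)           # any shorter trailing bits are dropped

print(decompress(compress("abcab")))    # -> abcab
```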

M. K. Shen
=================================
http://home.t-online.de/home/mok-kong.shen

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: "1:1 adaptive huffman compression" doesn't work
Date: Wed, 12 Jan 2000 11:17:27 +0100

Tim Tyler schrieb:
> 
> Mok-Kong Shen <[EMAIL PROTECTED]> wrote:

> : I am not quite sure whether one couldn't even attempt to 'define'
> : the 1-1 problem away with a 'convention'. That is, if on compression
> : the last code symbol does not fill to a byte boundary, then the
> : software has to do filling with bits that do not form a valid code
> : symbol and it is 'required' by convention that the filling is
> : to be random, say dependent on the system clock. [...]
> 
> This is pretty much the same technique as John Savard proposed.
> 
> It works to the extent that you can generate genuinely random bits.
> If your bits are not completely random then you still have problems.
> 
> You also wind up with a non-deterministic compressor.  The system
> fails to exploit the range of the compressor to its greatest possible
> extent.  It is no longer portable to embedded environments with no
> obvious reliable source of genuinely random noise available.

No. Whether the software on two runs of compression puts in the
same bunch of (supposedly) random bits is of no significance.
Even if one had a truly random source, there is a chance that
the fillings would be the same. The only thing to be noted is that,
with the convention, the analyst has no way (in the context of this
thread) of knowing that the non-1-1 behaviour is attributable to
employing wrong keys.

M. K. Shen

------------------------------

From: Mok-Kong Shen <[EMAIL PROTECTED]>
Subject: Re: Q: Block algorithms with variable block size
Date: Wed, 12 Jan 2000 11:22:40 +0100

Stefan Lucks wrote:
> 
> On Wed, 12 Jan 2000, Mok-Kong Shen wrote:
> 
> > Are there block encryption algorithms in the literature that
> > have block sizes that are variable, i.e. user choosable (maybe with
> > some constraints)? I believe that such a parametrization could be
> > quite valuable, though it might not be easy to do with the
> > techniques that underlie certain currently well-known algorithms.
> 
> There are block ciphers for variable but large blocks (320 bit or more).
> Two papers of mine, where I described such beasts, are:
>   1. "Faster Luby-Rackoff Ciphers", Proceedings of Fast Software
>      Encryption 1996.
>   2. "On the Security of Remotely Keyed Encryption", Proceedings of Fast
>      Software Encryption 1997.
> See
>   http://th.informatik.uni-mannheim.de/People/Lucks/papers.html
> to access the papers online.
> 
> See also the proposal of the block ciphers BEAR, LION and LIONESS by Ross
> Anderson and Eli Biham, also published in the proceedings of FSE 1996. I
> guess, you can find the paper at Ross Anderson's homepage, too.

Thanks. I wonder whether that wouldn't be a good idea to incorporate
into security software that is intended to be used by everybody.

M. K. Shen

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
