Cryptography-Digest Digest #235, Volume #10      Tue, 14 Sep 99 19:13:03 EDT

Contents:
  Re: pseudo random number in embedded software (Eric Lee Green)
  Re: Can you believe this?? (Anton Stiglic)
  Re: Ritter's paper (David Wagner)
  ti83 encryption (Arthur Dardia)
  Re: Sources of randomness (Scott Nelson)
  Re: Sources of randomness (David Wagner)
  Re: Size of DH exponent & modulus?? (Eric Lee Green)
  VLSI chip design ("Steve S")
  Re: Mystery inc. (Beale cyphers) (Curt Welch)
  Re: Can you believe this?? (Eric Lee Green)
  Re: 13 reasons to say 'no' to public key cryptography (Paul Koning)
  Re: Can you believe this?? ([EMAIL PROTECTED])
  Re: ti83 encryption ([EMAIL PROTECTED])
  Re: Can you believe this?? (Eric Lee Green)

----------------------------------------------------------------------------

From: Eric Lee Green <[EMAIL PROTECTED]>
Subject: Re: pseudo random number in embedded software
Date: Tue, 14 Sep 1999 12:22:26 -0700

Medical Electronics Lab wrote:
> Check out Yarrow on Bruce's site:
> http://www.counterpane.com

Been there, as you can probably tell by looking at Ocotillo. The
problem with Yarrow is that it is very much tied to the Win32 platform:
all of its sources of entropy come from tying into the innards of
Win32. Still, it is very much a worthwhile approach.

I don't know what this person is doing as far as embedding. If this is
custom hardware with custom embedded firmware and only network
connections (no disk drive interrupts or anything), then using a pair
of hardware timers running at slightly different rates and taking the
lower-order bits when a network interrupt comes in may be a
semi-adequate source of ongoing entropy, but it doesn't help with
gathering enough entropy to initialize the PRNG in the first place.
If, on the other hand, he is using embedded FreeBSD or Linux,
/dev/random already does a good job of this without having to wedge
anything into interrupt drivers. QNX and other embedded OSes usually
have a good API for hooking into interrupt handlers, and that can be
used to feed a PRNG like Yarrow or Ocotillo
(http://www.estinc.com/~eric/ ).
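
To make the timer idea concrete, here is a minimal Python sketch: on
each (simulated) network interrupt, grab the low-order bits of two
free-running clocks and fold them into a hash-based entropy pool. The
clock sources and pool handling are illustrative stand-ins, not any
real driver API:

    import hashlib
    import time

    pool = hashlib.sha1()   # entropy pool, distilled with a hash

    def on_network_interrupt():
        # In firmware these would be two hardware timers running at
        # slightly different rates; the two monotonic clocks below
        # merely stand in for them.
        t1 = time.monotonic_ns() & 0xFF      # keep the low-order bits,
        t2 = time.perf_counter_ns() & 0xFF   # where the jitter lives
        pool.update(bytes([t1, t2]))

    def read_seed():
        # Hash of everything gathered so far; use it to key the PRNG.
        return pool.digest()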

-- 
Eric Lee Green    http://members.tripod.com/e_l_green
  mail: [EMAIL PROTECTED]
                    ^^^^^^^    Burdening Microsoft with SPAM!

------------------------------

From: Anton Stiglic <[EMAIL PROTECTED]>
Subject: Re: Can you believe this??
Date: Tue, 14 Sep 1999 15:12:40 -0400


this is why we should all support open-source!

------------------------------

From: [EMAIL PROTECTED] (David Wagner)
Subject: Re: Ritter's paper
Date: 14 Sep 1999 12:55:58 -0700

In article <[EMAIL PROTECTED]>, Terry Ritter <[EMAIL PROTECTED]> wrote:
> There is a copy of the article .PDF on my pages.  It is first in the
> list in the Technical Articles section on my top page.  The exact link
> is:
>    http://www.io.com/~ritter/ARTS/R8INTW1.PDF

Thanks for posting!  I think this is an important subject for
discussion.

However, I don't think your suggestion works.  I'd like to invite
you to look over my reasoning and see if you find any errors.

Let's think of this as a resource allocation problem (i.e., an
economics problem), where our sole goal is to minimize the risk
that the adversary can read our traffic.  Then I think a fairly
simple calculation shows that your proposed approach is sub-optimal,
and that the best strategy is to "follow the crowd".

Suppose we have a fixed bound R on the total amount of resources
we can apply to the problem (e.g., R man-years, R months of Eli
Biham's time, whatever).  Further suppose we have a fixed amount T
of traffic to protect.  We have two choices:
 ("AES")  Design one cipher that you really really believe in; use
          it for _all_ the traffic.
          In other words, spend all of R on design and analysis
          of the cipher, and use it for all of T.
 (Ritter) Design N ciphers, and hope most of them don't get broken.
          In other words, spend R/N on each of the N designs, and
          use each cipher to encrypt T/N of the traffic.
I think these scenarios accurately characterize the two approaches
we want to compare.  Do you agree with the model?

Let f(R) be the probability that, having applied the resources specified
by R to cryptographic design and analysis, the adversary still
manages (somehow) to break our cipher.

We can now calculate the risk of failure for each scenario.
 ("AES")  With probability f(R), the cipher breaks, and all T of
          our traffic is broken.
   => Expected loss = T*f(R).
 (Ritter) Each cipher breaks with probability f(R/N), and each break
          reveals T/N of our traffic.
          Since expectation is linear, the total expected loss is the
          sum of the expected losses; the latter quantity is T/N * f(R/N)
          for each cipher, and there are N of them, so...
   => Expected loss = N * T/N * f(R/N) = T*f(R/N).
Here, I've made the assumption that the "utility function" we want
to minimize is the expected amount of compromised traffic.  This is
probably an unrealistic assumption, but let's make it for the moment.

It's hard to tell a priori what the graph of f() will look like,
but at least we can be pretty sure that f() is monotonically
decreasing: doing more analysis can only reduce the risk of
catastrophic failure.

Thus, we can see that f(R) < f(R/N), and in particular,
T*f(R) < T*f(R/N), so the way to minimize the expected loss is to
choose the single-cipher strategy (the "AES" approach).  Using lots
of ciphers only increases the expected loss.

In the real world, the expected loss is probably not exactly the right
function to minimize (probably the harm is a non-linear function of
the amount of traffic compromised, where leaking 10% of one's traffic
is almost as bad as leaking 90% of it).  Nonetheless, a moment's thought
will show that adjusting this assumption to be more realistic will only
widen the gap between the two strategies, and will make the "AES"
approach even more appealing than the above calculation might suggest.
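
To put numbers on it (a sketch only; the exponential form of f() below
is an arbitrary monotone-decreasing choice of mine, not anything from
the model itself):

    import math

    T = 1.0    # total traffic, normalized
    R = 10.0   # total analysis resources
    N = 5      # number of ciphers in the split strategy

    def f(r):
        # Hypothetical monotone-decreasing failure probability.
        return math.exp(-r)

    aes_loss = T * f(R)                    # one cipher, all resources
    ritter_loss = N * (T / N) * f(R / N)   # N ciphers, resources split

    print(aes_loss, ritter_loss)           # ~4.5e-05 vs ~0.135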

------------------------------

From: Arthur Dardia <[EMAIL PROTECTED]>
Subject: ti83 encryption
Date: Tue, 14 Sep 1999 16:09:05 -0400
Reply-To: [EMAIL PROTECTED]

I was so bored during my Statistics class this morning that I wrote a simple
TI83 encryption program.  I'll write the decryption part tomorrow.  I'll also
write the random data key generator by using the CBL (which is a sensor that
hooks up to the TI83 to record sound, temperature, electricity, etc).  Combining
all of the data from those ports along with the user's 'random' keypad input
should create a nice random data pool.

Here's the code: let me know if it's safe encryption.  I believe it's the same
encryption used in Solitaire.  Since I couldn't make an Omega with my
keyboard, I used ! to represent the O with the slash through it on the TI83;
my program doesn't need ! for factorials anyway.

:"ABCDEFGHIJKLMNOPQRSTUVWXYZ"=Str4  // the equals should be the STO arrow
:1=A:2=B:3=C:4=D                    // values of the letters, init purposes
:5=E:6=F:7=G:8=H
:9=I:10=J:11=K:12=L
:13=M:14=N:15=O:16=P
:17=Q:18=R:19=S:20=T
:21=U:22=V:23=W:24=X
:25=Y:26=Z:!=1                     // ! is the equiv. of the 'omega' var
:Input "ptext: ",Str1              // get plaintext
:Input "key:   ",Str2              // get key
:If (length(Str2)<length(Str1))    // if key shorter than ptext, exit
:Then
:Goto !                            // the ! should be an 'omega'
:End
:While (!<length(Str1))
:expr(sub(Str1,!,1))+expr(sub(Str2,!,1))=L1(1)
:While (L1(1)>26)                  // get L1(1) to be 26 or under
:L1(1)-26=L1(1)
:End
:Str3+sub(Str4,L1(1),1)=Str3       // append the next encrypted char to ctext
:End
:Disp Str3:Pause
:Lbl !:ClrHome

I'll give an example:

ptext: SECRET
key:   TYMSNA

S+T
19+20=39  L1(1)=39 (first element in List 1)
39-26=13  (Get it under, or equal to, 26)
S+T=M     (Convert to letter, store in Str3)

Append next letter to Str3.

Continue the process for each pair of letters.  To me, this encryption seems
uncrackable if perfect key entropy is achieved.  I delete all Str's.  I
shouldn't have to delete the variables since they're only the values 1->26.
This is not dangerous, although it may be a clue to your attacker that you did
use crypto.  Please give me advice on how to fix my algo, if it is wrong.
Classroom crypto, got to love it!
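
For anyone without a TI83 handy, here is a rough Python translation of
the same letter-addition scheme (mine, not the calculator code):

    def encrypt(ptext, key):
        # Letter values A=1..Z=26, as in the calculator program.
        val = lambda ch: ord(ch) - ord('A') + 1
        out = []
        for p, k in zip(ptext, key):
            v = val(p) + val(k)
            while v > 26:        # same reduction loop as on the TI83
                v -= 26
            out.append(chr(v - 1 + ord('A')))
        return ''.join(out)

    print(encrypt("SECRET", "TYMSNA"))   # first letter: S+T -> M

With a truly random, never-reused key this is a one-time pad; with a
short or reused key it is a Vigenere-style cipher and falls to standard
frequency analysis.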

Disclaimer: In no way did I write this program to make it impossible for
teachers to see what is on the memory on your calculator, should you choose to
cheat!  Obey the academic honor code!

                                        Art Dardia

------------------------------

From: [EMAIL PROTECTED] (Scott Nelson)
Subject: Re: Sources of randomness
Reply-To: [EMAIL PROTECTED]
Date: Tue, 14 Sep 1999 19:35:34 GMT

On Tue, 14 Sep 1999 11:01:26 -0700, "John E. Kuslich" <[EMAIL PROTECTED]>
wrote:

>If you have ever tried to design an electronic circuit that produces a
>"random" stream, you will appreciate the difficulty of removing any
>periodic (correlated) data.
>
>The problem is pathologically difficult when strict randomness testing is
>applied to the output stream.  

[snip]

Wouldn't it be sufficient to process the output with a 74LS402 ?
Admittedly this adds a chip and possibly a clock circuit to 
the design, but that's hardly "pathologically difficult."  
Or am I missing something?
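
To make the suggestion concrete, here is a rough Python sketch of
CRC-style whitening: clock the biased bits through a 16-bit LFSR and
sample the register occasionally. The polynomial and sampling rate are
arbitrary choices for illustration:

    def lfsr_whiten(bits, poly=0x1021, state=0xFFFF):
        # CRC-16-CCITT-style update, one input bit at a time.
        out = []
        for i, b in enumerate(bits):
            fb = ((state >> 15) ^ b) & 1            # MSB xor input bit
            state = ((state << 1) & 0xFFFF) ^ (poly if fb else 0)
            if i % 16 == 15:                        # sample sparingly
                out.append(state & 1)
        return out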

Scott Nelson <[EMAIL PROTECTED]>


------------------------------

From: [EMAIL PROTECTED] (David Wagner)
Subject: Re: Sources of randomness
Date: 14 Sep 1999 12:11:30 -0700

In article <[EMAIL PROTECTED]>,
Scott Nelson <[EMAIL PROTECTED]> wrote:
> But CRC, a.k.a Linear Feedback Shift Register, is usually much
> smaller and faster than a cryptographically secure hash.  If
> the object is to distill random bits from a biased source,
> it seems a better choice to me.

Smaller, faster, and almost certainly less secure[1].
My advice: stick with a cryptographic hash function (e.g., SHA1).


[1] One can prove that CRCs are adequate for distilling entropy: the
    leftover hash lemma says that if you use an LFSR with an n-bit output
    and a random (uniformly-distributed) feedback function, and if you
    have at least 3n bits of Rényi entropy of order 2 in the source
    (different from the usual measure of entropy!!!), and if you never
    use it more than once, then the n-bit output has close to the uniform
    distribution.

    However, this approach is terribly wasteful of random bits, requires
    stronger-than-usual assumptions on the distribution of the inputs,
    and in practice is probably highly susceptible to deadly implementation
    errors.
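
A minimal sketch of the hash-based approach recommended above (the
sample-gathering is a stand-in for whatever biased source you actually
have):

    import hashlib

    def distill(samples):
        # Feed all the raw, possibly-biased samples into SHA1 and use
        # the 160-bit digest as the distilled output.  Rule of thumb:
        # the inputs should contain comfortably more than 160 bits of
        # real entropy before you trust the full output.
        h = hashlib.sha1()
        for s in samples:
            h.update(s)
        return h.digest()

    seed = distill([b"timer jitter", b"interrupt timings", b"..."])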

------------------------------

From: Eric Lee Green <[EMAIL PROTECTED]>
Subject: Re: Size of DH exponent & modulus??
Date: Tue, 14 Sep 1999 12:49:02 -0700

Peter Gunn wrote:
> Would a 512bit prime, and 256bit values for x and y suffice?
> 
> Similarly, any suggestions  for key lengths of 128bits and 256bits?

For a uniform distribution you need a range for x and y that is (P-1) in
size. That is, x and y should probably have the same number of bits as
the prime. 

According to at least one source that I came across, breaking
Diffie-Hellman may be reducible to the same order of difficulty as
factoring. A 512-bit number was recently factored. Thus a field
size of 1024 bits is probably a better choice nowadays than a 512-bit
field size.
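
A bare-bones Python sketch of the exchange itself (the tiny prime is a
toy stand-in; per the advice above, a real deployment would use a
1024-bit prime and full-size random exponents):

    import secrets

    p, g = 23, 5    # toy parameters; use a 1024-bit prime for real
    a = secrets.randbelow(p - 2) + 1   # Alice's secret x in [1, p-2]
    b = secrets.randbelow(p - 2) + 1   # Bob's secret y

    A = pow(g, a, p)    # Alice sends A to Bob
    B = pow(g, b, p)    # Bob sends B to Alice

    assert pow(B, a, p) == pow(A, b, p)   # same shared key on both sides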

-- 
Eric Lee Green    http://members.tripod.com/e_l_green
  mail: [EMAIL PROTECTED]
                    ^^^^^^^    Burdening Microsoft with SPAM!

------------------------------

From: "Steve S" <[EMAIL PROTECTED]>
Subject: VLSI chip design
Date: Tue, 14 Sep 1999 21:14:45 +0100

Hi,
I am thinking of designing a VLSI chip to factor large numbers
(probably using the quadratic sieve method). Does anyone know if this
sort of thing has been attempted before, and if so, where and how much
did it cost?
Any other ideas appreciated.
Cheers
Steve

------------------------------

Subject: Re: Mystery inc. (Beale cyphers)
From: [EMAIL PROTECTED] (Curt Welch)
Date: 14 Sep 1999 21:23:36 GMT

"Douglas A. Gwyn" <[EMAIL PROTECTED]> wrote:
> Curt Welch wrote:
> > "Douglas A. Gwyn" <[EMAIL PROTECTED]> wrote:
> > > Curt Welch wrote:
> > > > The alphabetic strings that appear when #1 is decoded with the DOI
> > > > is far from random so that's a _big_ clue to the encoding.
> > > Unfortunately, the better those clues, the *more* likely that it is
> > > all a hoax.  The reason is, this *type* of cipher does *not* produce
> > > long alphabetic runs
> > A multiple substitution cypher has enough degrees of freedom to make
> > it very easy to create the long alphabetic runs when decoding the
> > cypher with the wrong key.
>
> No, this *type* of cipher (keyed as in Beale #2) is *not* at all
> likely to produce long alphabetic runs from decrypting with the
> wrong key.

Duh.

> Remember, the key is constrained to be an actual,
> readily available document;

NO, NO, NO.  How many times do I have to say this?  It's not constrained
to be an actual document.  Read my posts.  Read the damn story.  But
pay attention before you post again -- please (my blood pressure might
kill me if I have to say this again :)).

If you assume it must be an actual document, then the chance that
alphabetic strings of those lengths would show up when decoding the
#1 Beale cypher with the wrong document is so small that you can call
it impossible.  I agree with that.  That's obvious.

But the important point of the problem is that no one knows how the
three cyphers were encoded.  As the story goes, the #2 document was
"broken" just by chance.  No one said they were book cyphers.  That's
just what happened to work for the only one that has been decoded.  So
the natural assumption everyone makes is that the two unbroken cyphers
were probably the same, so many, many people have spent an untold number
of hours (over the past 150 years) trying to decode the cyphers with
every document they could get their hands on that would have existed
back around the time of the cyphers.

But Jim Gillogly's discovery of the alphabetic strings (as far as I know
he was the first to publish this) changed all that.  It means that if
the #1 cypher is a real cypher -- i.e. it somehow decodes to a real message
-- then it is most likely not a simple book cypher; looking for the
"right" document to decode it with is a waste of time.  It also means that
using the DOI in a different fashion (like using the second letter of
each word instead of the first, or numbering it backwards, etc.) is also
a waste of time.

It is, however, still very likely that the #1 cypher is a simple multiple
substitution cypher.  That is, each number in the cypher text maps to a
single letter, and each number maps to the same letter each time it's used,
but multiple numbers are used for the same letter (hence multiple
substitution).

So, I have been using the term "Key" in my posts to refer not to the
document (like the DOI) used to decode the message, but rather to the
table of letters and numbers which must be used to decode the message.

For the #2 cypher, this "key" was generated from the DOI by numbering
the words and using the first letter from each word.  But for the #1
cypher, something else was done to generate the "key" -- something that
ended up causing the alphabetic strings to appear when it's decoded
with the DOI key.

A very plausible explanation for this (what I'm calling Ed's table
theory) is that the #2 key (the one made from the DOI numbering) was first
written down in a form where all the numbers for the letter A were written
on one line, then all the numbers for the letter B were written on the next
line, etc.  This list of numbers was then somehow used to generate the key
for the #1 document.  One simple way would be to just write letters at the
top of each column, and to use each column as the set of numbers which
decodes to a letter for the #1 cypher.  This table of numbers becomes the
"Key".  It's used both to encode the #1 clear text, and would have been
used to decode the #1 cypher had it not been lost.

So, to encode the #1 document, you take the first letter from the clear
text, find it along the top of this new table, then randomly pick one
of the numbers in that column, and use that number as the first number
in the cypher text.  You continue like this, encoding each letter with
a number from the correct column.  But, to make the cypher as strong as
possible, you want to make sure you don't keep picking the same numbers
for the same letters.  You want to keep using different numbers.  After
encoding a part of the document, you realize you can't remember which
numbers you have used and which you haven't.  So, in the encoding process,
you may decide to encode one word (like "the") using numbers from, say,
the second line.  Then the next word (like "gold") is encoded with numbers
from the third line, then the next word ("is") is encoded with numbers
from the fourth line.  This way, you know for sure that you are not picking
the same numbers.  And what do you see when you decode this segment of
cypher text with the DOI key?  "BBBCCCCDD" (one of the Gillogly strings --
see article: <[EMAIL PROTECTED]>).

And then later in the encoding process, you start back at the top of the
table, and encode a letter from the first line, then the next letter is
encoded with a number from the second line, then the third, etc.  But your
finger slips at some point and you end up losing track of which line you
last used, or because you encoded an "A" (at the left of the page) and then
the next letter was a T (near the right hand side of the page), you screw
up and end up encoding both from the same line.  So what does this end up
looking like when you decode it with the wrong key (i.e. the DOI)?
"ABCDEFGHIIJ..."

This has always struck me as a simple and obvious answer to the existence
of the Gillogly strings seen in the #1 cypher.
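
A quick Python simulation (all assumptions mine: a made-up "document"
stands in for the DOI, and the table sorts each letter's numbers in
ascending order) shows how this row-by-row picking produces exactly
those runs:

    import random
    import string

    random.seed(1)
    # Made-up 2000-word document standing in for the DOI; all that
    # matters here is each word's first letter.
    doc = [random.choice(string.ascii_uppercase) for _ in range(2000)]

    # DOI-style key: number n decodes to word n's first letter.
    doi_decode = {n + 1: doc[n] for n in range(len(doc))}

    # Ed's table: the row for A holds every number that decodes to A,
    # the row for B every number that decodes to B, and so on; the
    # columns are assigned plaintext letters (here simply A..Z).
    rows = {L: sorted(n + 1 for n in range(len(doc)) if doc[n] == L)
            for L in string.ascii_uppercase}
    col = {L: i for i, L in enumerate(string.ascii_uppercase)}

    def encode_word(word, row_letter):
        # Encode a whole word using numbers taken from a single row.
        return [rows[row_letter][col[ch]] for ch in word]

    # Encode "THE GOLD IS" from rows B, C, D respectively, then decode
    # with the (wrong) DOI key:
    ctext = (encode_word("THE", "B") + encode_word("GOLD", "C")
             + encode_word("IS", "D"))
    print("".join(doi_decode[n] for n in ctext))   # -> BBBCCCCDD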

So, if it's a valid idea, then what needs to be done is to try and
reconstruct this table (i.e the "key" for the #1 cypher).  I know Ed has
spent time trying to do this and hasn't had any luck.

I spent a little time on it, but no serious work.  I don't know of anyone
other than Ed that has really explored this path.

There are a few obvious ways to build the table, such as listing the
numbers (from the DOI) left to right in ascending order.  But then some
letters from the DOI have fewer than 26 numbers, and others have a lot
more than 26.  So what do you do about that?  Stop at 26?  Wrap it to
multiple lines (another possible cause of the duplicated letters in the
Gillogly strings)?

And which numbers do you use?  All the numbers from the DOI?  Just numbers
used in #1 and/or #2?

Another possible way to sort the numbers in the lines is based on their
use in #2.  (This assumes the person building the table wrote down the
numbers as they encoded the #2 document.)

Then there's the issue of which column gets which letter.  This isn't too
important, because if you have the correct table but just have the letters
in the wrong places, then the problem just becomes one of solving a single
substitution cypher, and with the help of letter frequencies that's an
easy problem to solve, even automatically with a computer program.

The problem is hard, of course, because if you don't get the numbers in
the correct order (at least for most lines), then what you end up with
is mostly pure garbage when you try to decode the cypher text (and
two levels of garbage if you have both the order of the numbers wrong
and the letters across the top wrong).

But still, with the help of software to do some searching for you, and
perhaps a little more "insight" or clues (like the possible word boundaries
at BBB CCCC DD, which I hadn't thought of until writing this message), it
seems there could be a solution down this path...

-- 
Curt Welch                                            http://CurtWelch.Com/
[EMAIL PROTECTED]                          Webmaster for http://NewsReader.Com/

------------------------------

From: Eric Lee Green <[EMAIL PROTECTED]>
Subject: Re: Can you believe this??
Date: Tue, 14 Sep 1999 14:44:21 -0700

[EMAIL PROTECTED] wrote:
> 
> Eric Lee Green wrote:
> <snip>
> 
> > I will tell you right now that BCTP uses:
> >   Diffie-Hellman for key exchange
> >   MD5 for digest authentication
> >   /dev/urandom or Ocotillo for challenge string and key generation (an
> > NT client, if one is ever done, will probably use Yarrow instead),
> 
> /dev/urandom is NOT a valid source for key generation/challenge string.
> /dev/random is a good source for high quality entropy to seed a secure
> random number generator.  Even the authors themselves have said that
> /dev/urandom should only be used for simple programming uses (like games and
> such).  If you would like, I can dig up the article that says it.  Better
> invest in a secure RNG.

See what I mean? I'd rather know this TODAY than six months from now!

Taking your advice, and will use /dev/random to seed Ocotillo instead of
using /dev/urandom directly as a PRNG. 
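
The seeding step itself is only a few lines; a sketch (the 32-byte seed
length is an arbitrary choice of mine, and ocotillo_seed is a
hypothetical entry point, not Ocotillo's actual API):

    def read_seed(nbytes=32):
        # /dev/random blocks until the kernel judges it has gathered
        # enough entropy, which is what we want for seeding.
        with open("/dev/random", "rb") as f:
            return f.read(nbytes)

    seed = read_seed()
    # ocotillo_seed(seed)   # hand the seed to the PRNG (hypothetical)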

-- 
Eric Lee Green    http://members.tripod.com/e_l_green
  mail: [EMAIL PROTECTED]
                    ^^^^^^^    Burdening Microsoft with SPAM!

------------------------------

From: Paul Koning <[EMAIL PROTECTED]>
Subject: Re: 13 reasons to say 'no' to public key cryptography
Date: Tue, 14 Sep 1999 17:10:58 -0400

Thierry Moreau wrote:
> 
> Paul Koning wrote, in part:
> >
> > Would you care to offer alternative approaches?
> >
> > With supporting reasoning as to why they are better in large
> > scale real world settings?
> >
> 
> Yes, this is the "SAKEM implied security model" as a general approach
> and the "SAKEM procedure" itself for initialization (SAKEM stands for
> Secret Authentication Key Establishment Method).  SAKEM is well suited
> to large-scale deployment.

Oh really?  Doesn't look that way to me.

It looks like Kerberos slightly warmed over.  All the drawbacks of
shared secret authentication schemes carry over...

        paul

------------------------------

From: [EMAIL PROTECTED]
Subject: Re: Can you believe this??
Date: Tue, 14 Sep 1999 18:06:55 -0400

Eric Lee Green wrote:

> See what I mean? I'd rather know this TODAY than six months from now!
>
> Taking your advice, and will use /dev/random to seed Ocotillo instead of
> using /dev/urandom directly as a PRNG.

I did some searching for Ocotillo...I could not find a reference to it anywhere on
the web...do you happen to have a url?


------------------------------

From: [EMAIL PROTECTED]
Subject: Re: ti83 encryption
Date: Tue, 14 Sep 1999 18:46:45 -0400

Arthur Dardia wrote:

> I'll give an example:
>
> ptext: SECRET
> key:   TYMSNA
>
> T+S
> 20+21=41  L1(1)=41 (first element in List 1)
> 41-26=15  (Get it under, or equal to 26)
> T+S=O     (Convert to letter, store in Str3)

Isn't this just a Vigenère cipher?


------------------------------

From: Eric Lee Green <[EMAIL PROTECTED]>
Subject: Re: Can you believe this??
Date: Tue, 14 Sep 1999 15:38:12 -0700

[EMAIL PROTECTED] wrote:
> I did some searching for Ocotillo...I could not find a reference to it anywhere on
> the web...do you happen to have a url?

http://www.estinc.com/~eric/

Basically, it's something I threw together for Unix that (hopefully)
does the same thing as Yarrow does under Windows. At least, I was
looking VERY carefully at the Yarrow paper while writing it. I hope
the twists I added to the Yarrow concept don't make it insecure :-(.

Ocotillo is a strange desert plant, half-cactus (it stores water and
has a wide, shallow root system) and half-bush (it has actual leaves,
whereas a cactus doesn't; a cactus's leaves have evolved into thorns
instead).

-- 
Eric Lee Green    http://members.tripod.com/e_l_green
  mail: [EMAIL PROTECTED]
                    ^^^^^^^    Burdening Microsoft with SPAM!

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
