Cryptography-Digest Digest #343, Volume #12       Wed, 2 Aug 00 21:13:01 EDT

Contents:
  Re: Software package locking ("Trevor L. Jackson, III")
  Re: Software package locking (Andru Luvisi)
  Re: unbreakable code? Yes ("Douglas A. Gwyn")
  Re: Blowfish Implementation ("Douglas A. Gwyn")
  Re: unbreakable code? Yes (Eric Smith)
  Re: Skipjack ("Douglas A. Gwyn")
  Re: counter as IV? ("Douglas A. Gwyn")
  Re: What vulnerabilities do I have? ([EMAIL PROTECTED])
  Re: unbreakable code? Yes (H. Peter Anvin)

----------------------------------------------------------------------------

Date: Wed, 02 Aug 2000 18:57:47 -0400
From: "Trevor L. Jackson, III" <[EMAIL PROTECTED]>
Subject: Re: Software package locking

Jeffrey Williams wrote:

> Actually, Trevor, I would be interested to hear the details of a software protection
> scheme which is demonstrably uncrackable (either mathematically secure or too time
> consuming, or whatever).

I've been on both sides of this fence, so I have some sympathy for both camps.

It may be possible to prove that mathematically secure software is theoretically
impossible.  My interest lies in practical software security, not provable security.
Thus we're left with systems that are impractical to attack.  But the term "impractical"
presupposes some definition of the attacker: it is the attacker who determines the "too"
in the phrase "too time consuming".  An attacker with unlimited resources can probably overcome any
possible piece of software.

So we're left with attackers of limited resources (time, machines, money, etc.).  The
nature of the defense against attackers of limited means is based on an assessment of
the cost to disable the individual parts of the security system.  The parts are like the
transforms that make up a cipher.  Individually they are reversible.  Collectively they
may interact in ways that explode the difficulty of isolating and disabling them.

First, let's dispose of some trivial cases.  If the program has a lesser privilege level
than the attacker, the attacker is probably going to win fairly quickly.  Thus a secure
program probably needs root/ring 0/etc. capabilities.  Of course the attacker can always
use more sophisticated hardware, such as an in-circuit emulator, to gain an unmatchable
advantage, but if the attacker's testing platform matches the run-time platform then the
program has a chance.

In the case of a vanilla system without much hardware protection, a cracker with a good
debugger has unlimited access to the machine, and should be able to have his way with
any program.  But this is where it gets interesting.  There's an interface between the
debugger and the program.  That interface is subject to preemptive attack by the
program.

For example, breakpoints and single steps are powerful analytic tools that can overcome
any kind of obfuscation or encryption.  But what if the program interferes with the
breakpoint handler?  Let's say it stomps on the breakpoint vector at many places within
the code.  And it stomps on the vector with a value that is critical to the operation of
the program.  Let's say it uses the breakpoint trap as a replacement for the system
services trap (int 21 in PC-DOS lingo).
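
To make the idea concrete, here is a minimal sketch in DOS-era C (a Borland-style
<dos.h> with getvect()/setvect() is assumed).  It illustrates the vector-stomping
technique described above; it is not code from any particular protection scheme.

    #include <dos.h>

    /* Repoint the breakpoint vector (INT 3) at the DOS services
     * handler (INT 21h).  The program then issues its system calls
     * via INT 3, so a debugger that claims INT 3 for breakpoints
     * silently breaks every "system call" the program makes.  In a
     * real scheme this is repeated at many places in the code, so a
     * cracker who restores the vector once gains nothing. */
    static void stomp_breakpoint_vector(void)
    {
        setvect(0x03, getvect(0x21));
    }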

Now we have a situation where the program runs fine when loaded in a debugger as long as
no breakpoints are set.  Further, there are many places where the breakpoint vector will
be overwritten if the debugger is used to adjust it, and many places the program will
fail catastrophically if the cracker manages to find and fix all of the overwrites.

The same thing is done to the single-step vector, and any other system resource that the
debugger might use.

Then we add linkage through the BIOS: calling parts of the BIOS code, especially with
an eye to using the results of the call/jmp rather than just threading flow of control
through the ROM.

Then we add routines that detect the identity of the spawning process and respond to the
debugger by writing to its memory space, causing faults within the debugger code (just
which program is the debugger, and which is the debuggee?).  Most of the powerful
debuggers are fairly well known.  It's not hard to fingerprint them -- easier than
fingerprinting protocol stacks.

What debugger capability is easy to disable?  The breakpoint handler!  After all, its
address is _published_ in the breakpoint vector.  Just make sure the debugger's
breakpoint handler can't break the program.  More interestingly, substitute your own
breakpoint handler.  _Simulate_ debugging yourself.  Make patches, but remove them before
the program runs and restore them when it halts.
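
A companion sketch of "owning" the breakpoint mechanism yourself, under the same
Borland-style <dos.h> assumption (the handler body is hypothetical):

    #include <dos.h>

    /* The program's own breakpoint handler: it can apply and remove
     * patches around protected regions, simulating a debugger on the
     * program's terms rather than the cracker's. */
    static void interrupt own_breakpoint_handler(void)
    {
        /* apply/remove patches here */
    }

    static void claim_breakpoint_vector(void)
    {
        setvect(0x03, own_breakpoint_handler);
    }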

Then we add routines to detect clock skew.  If the debugger soaks up an appreciable
amount of CPU time, this will be detectable.  If it ever waits for user input, it will be
easily detectable.  Certainly a debugger could be written to fully simulate the real-time
clock, freezing it when the program was paused, but the debugger cannot control its
impact upon the CPU cache, and the cache-line-to-address-line mapping is fairly obvious.
Thus variations are detectable.
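
In modern compiler terms, the core of such a timing check might look like this sketch
(GCC/Clang's __rdtsc() intrinsic assumed; the threshold is an illustrative guess that
would have to be calibrated against the real code path):

    #include <stdint.h>
    #include <x86intrin.h>   /* __rdtsc() */

    static int looks_debugged(void)
    {
        uint64_t start = __rdtsc();
        /* ... a short stretch of ordinary application work ... */
        uint64_t elapsed = __rdtsc() - start;

        /* Single-stepping or a breakpoint inside the stretch makes
         * elapsed orders of magnitude larger than normal. */
        return elapsed > 100000;   /* hypothetical threshold */
    }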

Then we add checksum layers.  Note that in a large software package checksum layers can
be undetectable.  The key is not to react visibly when a checksum failure is detected,
but to quietly change operation mode.  Perhaps some pieces of perfectly reasonable
application code get overwritten with garbage.  (Or breakpoints.)
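
A sketch of such a quiet layer (the code-region bounds and expected sum below are
hypothetical, e.g. patched in at build time):

    #include <stdint.h>

    extern const uint8_t _code_start[], _code_end[];  /* hypothetical symbols */
    static volatile uint32_t g_mode;   /* consulted far away, much later */

    static void quiet_checksum_layer(uint32_t expected)
    {
        uint32_t sum = 0;
        const uint8_t *p;

        for (p = _code_start; p < _code_end; p++)
            sum = ((sum << 1) | (sum >> 31)) ^ *p;   /* rotate-and-xor */

        if (sum != expected)
            g_mode |= 1;   /* no message, no exit -- a silent mode change */
    }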

Then we add communication layers for applications that have connections.  Systems under
attack can silently produce a syndrome that is detectable at the other end.  Systems
that have been modified can indicate the change in status.  It gets very interesting
to have an unsuspecting cracker hammering away on software that has reported
distress to a human in real time.  The human can do things to the hacker's mind that do
not bear contemplation.  I have been in remote control of software under attack.  It is
enormously entertaining.

How would you feel if your debugger's memory display window suddenly started chatting
with you?  You might reconsider your attack.

The point of all these layers is to eliminate the various classes of attack.  The
objective is _not_ to make uncrackable software.  The objective is to make software that
takes so long to crack that when it breaks no one cares.

Just like in crypto.

>  I do understand that the thread originator wants to use
> hardware characteristics of the individual systems (sort of a fingerprint) as the
> key to the protection scheme.  But if his algorithm is in software, a good assembler
> programmer should be able to find the algorithm and bypass it.

Yes.  But how long will it take to bypass all of them?  Especially when most of the
security features don't *do* anything -- visibly.  How do you identify which code is
security and which code is functional?  This line can be made as blurry as desired.

>
>
> That, of course, doesn't mean that it will be easy.  But if your algorithm is
> software based, I do not see how you'd stop a talented, determined assembler
> programmer.

One cannot stop an attacker indefinitely.  But one can stop any attacker for a day,
perhaps a week.  All but "national technical means" for a week or a month.  All but a
serious software lab for several months.  And your average software jockey for many
months, perhaps several years.

How deep do you want to bury the application under layers of security?  Note that it is
pointless to cause the attacker more trouble than writing the application from scratch
would cost.  This is why 97DES is a nonstarter.  The benefit is not worth the cost.



------------------------------

From: Andru Luvisi <[EMAIL PROTECTED]>
Subject: Re: Software package locking
Date: 02 Aug 2000 16:48:29 -0700

"Trevor L. Jackson, III" <[EMAIL PROTECTED]> writes:
[snip]
> First, let's dispose of some trivial cases.  If the program has a lesser privilege level
> than the attacker, the attacker is probably going to win fairly quickly.  Thus a secure
> program probably needs root/ring 0/etc. capabilities.  Of course the attacker can always
> use more sophisticated hardware, such as an in-circuit emulator, to gain an unmatchable
> advantage, but if the attacker's testing platform matches the run-time platform then the
> program has a chance.
[snip]

By requiring that your program run as root/in ring 0/whatever, you
are increasing the security dangers associated with a user running
your product, and decreasing its value to your users.  The fact that a
program is able to run non-root is a security *feature* which is
valuable to its users.

[snip]
> For example, breakpoints and single steps are powerful analytic tools that can overcome
> any kind of obfuscation or encryption.  But what if the program interferes with the
> breakpoint handler?  Let's say it stomps on the breakpoint vector at many places within
> the code.  And it stomps on the vector with a value that is critical to the operation of
> the program.  Let's say it uses the breakpoint trap as a replacement for the system
> services trap (int 21 in PC-DOS lingo).
[snip]

What happens when a bug in the program, tickled by something in one
user's hardware or software configuration, goes off?  Would you rather
have the user junk your software, or use a debugger to figure out what
action is causing the problem and then send you a detailed bug report
you can use to fix it?

By making a debugger unusable, you are making your software less
valuable to your users.

[snip]
> Then we add linkage through the BIOS.
[snip] 
> Then we add routines that detect the identity of the spawning process and respond to the
> debugger by writing to its memory space causing faults within the
> debugger
[snip]
> _Simulate_ debugging yourself.  Make patches but remove them before
> the program runs and restore them when it halts.
[snip]
> Then we add routines to detect clock skew.
[snip]
> Then we add checksum layers.
[snip]

And all this costs time and money, which you need to charge your users
for.  Do you think they like the idea of paying your developers to sit
around writing all these things, rather than creating that new feature
they need?  "Oh gee, this program can't do this basic thing I need it
to, but I can't copy it!  Boy do I feel like I got my money's worth!"
They will have this reaction *every* time some feature they want is
missing, and since you can't give them everything they will ever want,
it *will* happen.  If your competition is less concerned than you are
about this "threat" and spends less time and money implementing this
"security", they will deliver the same features and stability for less
money, or more features and stability for the same money.

[snip]
> Then we add communication layers for applications that have connections.  Systems under
> attack can silently produce a syndrome that is detectable at the
> other end.
[snip]
> How would you feel if your debugger's memory display window suddenly started chatting
> with you?  You might reconsider your attack.
[snip]

...or tell the entire world that your product invades people's
privacy.  That makes your product *much* less valuable to your users.

[snip]
> The point of all these layers is to eliminate the various classes of attack.  The
> objective is _not_ to make uncrackable software.  The objective is to make software that
> takes so long to crack that when it breaks no one cares.
> 
> Just like in crypto.

Unfortunately, in the process you are doing things which are really
bad for both you and your users.

  You are increasing your development cost and time.

  You are decreasing the resources you can devote to improving
  stability, security, and features.

  You are decreasing the value of your product to your customers.

At some point, your development cost will become greater than what
your customers are willing to pay, and that will not leave you in a
good place.

Andru
-- 
Andru Luvisi, Programmer/Analyst

------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: unbreakable code? Yes
Date: Wed, 02 Aug 2000 20:25:10 -0400

Sundial Services wrote:
> Not to mention that CDs can become scratched and not every byte that
> is written to one will always be read back perfectly.

CDs include heavy-duty error detection and correction coding.
For a data CD-ROM, it is expected that errors will be corrected
perfectly or else a read error is reported to the application.
There should be no unrecoverable error unless the CD-ROM is badly
damaged.  I find in practice that unreadable CD-ROMs are rare.

------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: Blowfish Implementation
Date: Wed, 02 Aug 2000 20:32:38 -0400

Runu Knips wrote:
> Interesting. In fact, I believe most ciphers would really be hard
> to implement on machines which aren't 8 bit. You always have to
> mask out the superfluous bits in each byte all the time.

No, you don't.  You just have to allow for unused high-order bits
in each non-octet "byte", or more likely in each "unsigned int".
<limits.h> tells you how many bits are in a byte, and <stdint.h>
provides even more exhaustive descriptions of the available widths.
If you use those parameters appropriately, it is actually pretty
easy to avoid dependencies on architectural assumptions.
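
A minimal sketch of the point, using CHAR_BIT from <limits.h> and an exact-width type
from <stdint.h> (illustrative only):

    #include <limits.h>   /* CHAR_BIT: how many bits a byte carries here */
    #include <stdint.h>   /* uint32_t: exactly 32 value bits where provided */

    /* Pack four octets into a 32-bit word.  Masking with 0xFF allows
     * for unused high-order bits when CHAR_BIT > 8; uint32_t itself
     * needs no masking, since it carries exactly 32 bits. */
    static uint32_t load_be32(const unsigned char *p)
    {
        return ((uint32_t)(p[0] & 0xFF) << 24) |
               ((uint32_t)(p[1] & 0xFF) << 16) |
               ((uint32_t)(p[2] & 0xFF) <<  8) |
                (uint32_t)(p[3] & 0xFF);
    }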

------------------------------

From: Eric Smith <[EMAIL PROTECTED]>
Subject: Re: unbreakable code? Yes
Date: 02 Aug 2000 17:37:48 -0700

"Douglas A. Gwyn" <[EMAIL PROTECTED]> writes:
> CDs include heavy-duty error detection and correction coding.
> For a data CD-ROM, it is expected that errors will be corrected
> perfectly or else a read error is reported to the application.
> There should be no unrecoverable error unless the CD-ROM is badly
> damaged.  I find in practice that unreadable CD-ROMs are rare.

Douglas is absolutely right.  The CD-Audio format uses two cross-
interleaved Reed-Solomon codes.  However, for CD-Audio it was considered
acceptable for large errors to not be completely correctable.  They
interleave the samples so that a single long burst error will only
destroy every other sample.  Thus the intermediate samples can be
interpolated, which is referred to as error "concealment".
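
A toy sketch of the interpolation step (real CIRC decoding is far more involved; this
shows only the concealment idea):

    #include <stdint.h>

    /* Replace an uncorrectable sample with the average of its two
     * good neighbours.  The interleaving ensures that a single burst
     * error leaves the neighbours intact. */
    static void conceal(int16_t *pcm, const uint8_t *bad, int n)
    {
        int i;
        for (i = 1; i < n - 1; i++)
            if (bad[i] && !bad[i - 1] && !bad[i + 1])
                pcm[i] = (int16_t)(((int32_t)pcm[i - 1] + pcm[i + 1]) / 2);
    }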

For data, interpolation obviously doesn't work, so the CD-ROM format
adds IN ADDITION to the audio format's error correction, another layer
of cross-interleaved Reed-Solomon code.

It takes an AMAZINGLY large defect (or combination of smaller defects)
to render a CD-ROM sector uncorrectable.

A CD that is manufactured reasonably well and handled reasonably well
should not have any uncorrectable errors for many years.  Eventually
the aluminum layer will oxidize enough to introduce uncorrectable
errors.  This is why some premium audio CDs use a gold reflective layer.
CD-R media with a gold reflective layer is expected to have a lifetime
at least six times longer than that using a silver layer.


------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: Skipjack
Date: Wed, 02 Aug 2000 20:45:08 -0400

"David A. Wagner" wrote:
> Maybe I'm missing something here, but it seems there is a real
> question here, no?

The question might be, what is the most effective way to use the
bits of the key -- should some of them be used to generate S-boxes?
To answer the question one first has to know how to cryptanalyze
these systems.  For the cryptanalytic approach I was working on,
the answer is that the more *variety in structure*, the harder the
crack, so the most effective way to use key bits would be to make
irregularities in structure.  E.g., use some of them to determine
the number of rounds, etc.  It may be that somebody somewhere has
a more fundamental insight into this kind of cryptanalysis and
might not have as much trouble with variable structure, but I
don't see how it wouldn't pose a significant problem.
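
As a hedged illustration of what "irregularities in structure" might mean in code
(this is a sketch of the idea only, not a description of Skipjack or of any vetted
design; round_function is hypothetical):

    #include <stdint.h>

    /* hypothetical keyed round function */
    extern uint32_t round_function(uint32_t block, const uint8_t *key, int r);

    static uint32_t encrypt_variable_structure(uint32_t block,
                                               const uint8_t key[16])
    {
        int rounds = 24 + (key[15] & 0x0F);   /* 24..39 rounds, keyed */
        int r;
        for (r = 0; r < rounds; r++)
            block = round_function(block, key, r);
        return block;
    }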

------------------------------

From: "Douglas A. Gwyn" <[EMAIL PROTECTED]>
Subject: Re: counter as IV?
Date: Wed, 02 Aug 2000 20:55:37 -0400

"David A. Wagner" wrote:
> Douglas A. Gwyn <[EMAIL PROTECTED]> wrote:
> > For a *known* F and N, surely there is a known relationship
> > between F(K,N) and F(K,N+1), although it might not be
> > expressible as simply as when F is merely the XOR function.
> I think there is a miscommunication here.  The XOR function is not a
> pseudorandom function.

Of course not.  You replaced the XOR in my original example with
a pseudo-random function F.  XOR(K,N) and XOR(K,N+1) have a fairly
simple relationship, which you thought could permit a related-key
attack.  I was observing that F(K,N) and F(K,N+1) also have some
known relationship (F and N are known constants), but not as simple,
and wondered why that did not similarly permit a related-key attack.
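
In code, the construction under discussion might be sketched as follows (prf_encrypt
is a hypothetical one-block encryption standing in for the pseudorandom function F;
the counter encoding is illustrative, and a real system would fix a canonical byte
order):

    #include <stdint.h>
    #include <string.h>

    /* hypothetical: encrypt one 16-byte block under a 16-byte key */
    extern void prf_encrypt(const uint8_t key[16],
                            const uint8_t in[16], uint8_t out[16]);

    /* IV = F(K, n): derive the IV from a message counter instead of
     * drawing it from a true-random source. */
    static void iv_from_counter(const uint8_t key[16], uint64_t n,
                                uint8_t iv[16])
    {
        uint8_t block[16] = {0};
        memcpy(block, &n, sizeof n);
        prf_encrypt(key, block, iv);
    }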

By the way, this is not idle curiosity.  I'm trying to find some
way to avoid requiring a true-random bit generator in a system
we're developing.

> These are fundamental concepts in the literature on provable security.

Thanks, but it was beside the point.

------------------------------

From: [EMAIL PROTECTED]
Subject: Re: What vulnerabilities do I have?
Date: Thu, 03 Aug 2000 00:46:53 GMT

Sorry about that!  You're absolutely right.  I shouldn't have said that
my key generation was secure.  Based on the way my encryption is set up,
it is vulnerable to man-in-the-middle.  I don't use any type of
authentication when I start the key generation, so I can't tell whether
or not the client is who he says he is.  So, man-in-the-middle is my
biggest vulnerability.

Are there any others that you (or anyone else) can spot?

Thanks


In article <[EMAIL PROTECTED]>,
  Steve Weis <[EMAIL PROTECTED]> wrote:
> [EMAIL PROTECTED] wrote:
> > I've implemented network data encryption of a client/server application
> > where I use 3DES to encrypt all the data between the client and server.
> > I know that I have at least 2 compromises:
> > 2) if the attacker uses a man-in-the-middle attack
>
> Assuming the two parties have securely agreed on a key, how would a
> man-in-the-middle attack be conducted? An active attacker could disrupt
> packets or conduct a replay attack, but how would a m.i.t.m. work? My
> understanding is that it would happen during key agreement, where a third
> party would inject an element into the protocol that would allow them to
> read and forge all traffic between the two parties undetected.
>


Sent via Deja.com http://www.deja.com/
Before you buy.

------------------------------

From: H. Peter Anvin <[EMAIL PROTECTED]>
Subject: Re: unbreakable code? Yes
Date: 2 Aug 2000 18:09:22 -0700

Followup to:  <[EMAIL PROTECTED]>
By author:    JimD
In newsgroup: sci.crypt
> 
> Dunno about the atmospheric noise though. How exactly do you
> mean 'atmospheric'. Do you mean noise derived from sound or from
> electrical/radio noise?
> 
> I probably wouldn't use it because you could well retain a copy
> of my keys.
> 

Also note that the software is trivial, and the only complex portion
of the system is the hardware random number generator, although some
recent Intel chipsets actually contain one built-in.  The rest is
probably better done by a sophisticated piece of technology called "a
CD burner."
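
For reference, the trivial part really is trivial; a one-time pad is a byte-wise XOR
of the message against pad material (sketch below; pad bytes must never be reused):

    #include <stddef.h>

    /* XOR the message with pad bytes drawn from the CD; the same
     * operation encrypts and decrypts. */
    static void otp_xor(unsigned char *msg, const unsigned char *pad,
                        size_t n)
    {
        size_t i;
        for (i = 0; i < n; i++)
            msg[i] ^= pad[i];
    }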

        -hpa

-- 
<[EMAIL PROTECTED]> at work, <[EMAIL PROTECTED]> in private!
"Unix gives you enough rope to shoot yourself in the foot."
http://www.zytor.com/~hpa/puzzle.txt

------------------------------


** FOR YOUR REFERENCE **

The service address, to which questions about the list itself and requests
to be added to or deleted from it should be directed, is:

    Internet: [EMAIL PROTECTED]

You can send mail to the entire list (and sci.crypt) via:

    Internet: [EMAIL PROTECTED]

End of Cryptography-Digest Digest
******************************
