Re: trade-offs of secure programming with Palladium (Re: Palladium: technical limits and implications)

2002-08-15 Thread Russell Nelson

Adam Back writes:
 > So there are practical limits stemming from the reality that code
 > complexity is inversely proportional to auditability and security,
 > but the extra ring -1, remote attestation, sealing and integrity
 > metrics really do offer some security advantages over the current
 > situation.

You're wearing your programmer's hat when you say that.  But the
problem isn't programming; it is economic.  Switch hats.  The
changes that you list above may or may not offer some security
advantages.  Who cares?  What really matters is whether they increase
the cost of copying.  I say that the answer is no, for a very simple
reason: breaking into your own computer is a "victimless" crime.

In a crime there are at least two parties: the victim and the
perpetrator.  What makes the so-called victimless crime unique is that
the victim is not present for the perpetration of the crime.  In such
a crime, all of the perpetrators have reason to keep silent about the
commission of the crime.  So it will be with people breaking into their
own TCPA-protected computer and application.  Nobody with evidence of
the crime is interested in reporting it, nor in stopping further
crimes.

Yes, the TCPA hardware introduces difficulties.  If there is a way
around them in software, then someone need only write it once.  The
whole TCPA house of cards relies on no card ever falling down.  Once
one falls, people have unrestricted access to content.  And that
means that we go back to today's game, where the contents of CDs are
open and available for modification.  Someone could distribute a pile
of "random" bits which, when XORed with the encrypted copy, becomes
an unencrypted copy.
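
In concrete terms the pad trick is only a few lines (a minimal Python
sketch; the file names are hypothetical, and the pad is assumed to be
the XOR of the encrypted and plaintext copies):

    # The pad alone is indistinguishable from random noise; only when
    # XORed with the encrypted copy does it yield the plaintext.
    def xor_files(a_path, b_path, out_path, chunk=65536):
        with open(a_path, "rb") as a, open(b_path, "rb") as b, \
             open(out_path, "wb") as out:
            while True:
                x, y = a.read(chunk), b.read(chunk)
                if not x or not y:
                    break
                out.write(bytes(i ^ j for i, j in zip(x, y)))

    # pad.bin = encrypted.bin XOR plaintext.bin, distributed as "noise"
    xor_files("encrypted.bin", "pad.bin", "plaintext.bin")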

-- 
-russ nelson  http://russnelson.com |
Crynwr sells support for free software  | PGPok | businesses persuade
521 Pleasant Valley Rd. | +1 315 268 1925 voice | governments coerce
Potsdam, NY 13676-3213  | +1 315 268 9201 FAX   |




Re: trade-offs of secure programming with Palladium (Re: Palladium: technical limits and implications)

2002-08-13 Thread Tim Dierks

At 09:07 PM 8/12/2002 +0100, Adam Back wrote:
>At some level there has to be a trade-off between what you put in
>trusted agent space and what becomes application code.  If you put the
>whole application in trusted agent space, then while all its
>application logic is fully protected, the danger will be that you have
>added too much code to reasonably audit, so people will be able to
>gain access to that trusted agent via buffer overflow.

I agree; I think the system as you describe it could work and would be 
secure, if correctly executed. However, I think it is infeasible to 
generally implement commercially viable software, especially in the 
consumer market, that will be secure under this model. Either the 
functionality will be too restricted to be accepted by the market, or there 
will be a set of software flaws that allow the system to be penetrated.

The challenge is to put all of the functionality which has access to 
content inside of a secure perimeter, while keeping the perimeter secure 
from any data leakage or privilege escalation. The perimeter must be very 
secure and well-understood from a security standpoint; for example, it 
seems implausible to me that any substantial portion of the Win32 API could 
be used from within the perimeter; thus, all user interface aspects of the 
application must be run through a complete security analysis with the 
presumption that everything outside of the perimeter is compromised and 
cannot be trusted. This includes all APIs & data.

I think we all know how difficult it is, even for security professionals, 
to produce correct systems that enforce any non-trivial set of security 
permissions. This is true even when the items to be protected and the 
software functionality are very simple and straightforward (such as key 
management systems). I think it entirely implausible that software 
developed by multimedia software engineers, managing large quantities of 
data in a multi-operation, multi-vendor environment, will be able to 
deliver a secure environment.

This is even more true when the attacker (the consumer) has control over 
the hardware & software environment. If a security bug is found & patched, 
the end user has no direct incentive to upgrade their installation; in 
fact, the most concerning end users (e.g., pirates) have every incentive to 
seek out and maintain installations with security faults. While a content 
or transaction server could refuse to conduct transactions with a user who 
has not upgraded their software, such a requirement can only increase the 
friction of commerce, a price that vendors & consumers might be quite 
unwilling to pay.

I'm sure that the whole system is secure in theory, but I believe that it 
cannot be securely implemented in practice and that the implied constraints 
on use & usability will be unpalatable to consumers and vendors.

  - Tim

PS - I'm looking for a job in or near New York City. See my resume at 





Re: trade-offs of secure programming with Palladium (Re: Palladium: technical limits and implications)

2002-08-13 Thread James A. Donald

--
On 12 Aug 2002 at 16:32, Tim Dierks wrote:
> I'm sure that the whole system is secure in theory, but I
> believe that it cannot be securely implemented in practice and
> that the implied constraints on use & usability will be
> unpalatable to consumers and vendors.

Or to say the same thing more pithily: if it really is going to be
voluntary, it really is not going to give Hollywood what they
want.  If it really gives Hollywood what they want, it is really
going to have to be forced down people's throats.


--digsig
 James A. Donald
 6YeGpsZR+nOTh/cGwvITnSR3TdzclVpR0+pr3YYQdkG
 q/bTmZrGsVk2BT9JgumhMqvjDmyIbiElvtidl9aP
 2/0CXfo6fzHCxpa+SX8o8Jzvyb71S0KzgBs0gDRhN




Re: trade-offs of secure programming with Palladium (Re: Palladium: technical limits and implications)

2002-08-12 Thread Adam Back

At this point we largely agree: security is improved, but the limit
remains assuring the security of over-complex software.  To sum up:

The limit of what is securely buildable now becomes what is securely
auditable.  Before, without Palladium, the limit was the security of
the OS, so this makes a big difference.

Yes, some people may design over-complex trusted agents, with sloppy
APIs and so forth, but the nice thing about trusted agents is that
they are compartmentalized:

If the MPAA and Microsoft shoot themselves in the foot with a badly
designed, over-complex DRM trusted agent component for MS Media Player,
it has no bearing on my ability to implement a secure file-sharing or
secure e-cash system in a compartment with rigorously analysed APIs
and well audited code.  The leaky, compromised DRM app can't compromise
the security policies of my app.

Also, it's unclear from the limited information available, but it may
be that trusted agents, like other ring-0 code (eg the OS itself), can
delegate tasks to user-mode code running in trusted agent space, which
can't examine other user-level space, nor the space of the trusted
agent which started it, and also can't be examined by the OS.

In this way, for example, remote exploits could be better contained by
sub-division of the trusted agent code.  Eg, the crypto could be done
by the trusted agent proper and the mpeg decoding by a user-mode
component; compromise the mpeg decoder and you just get plaintext, not
keys.  Various divisions could be envisaged.
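
To make the division concrete, here is a toy Python sketch.
Palladium's actual interfaces were never published, so the class, the
delegation boundary, and the use of pycryptodome's AES are all
illustrative stand-ins:

    from Crypto.Cipher import AES  # pycryptodome, standing in for SCP crypto

    class TrustedAgent:
        """The trusted agent proper: holds the key, does the crypto."""
        def __init__(self, content_key):
            self._key = content_key              # never leaves this object

        def decrypt_block(self, nonce, ciphertext):
            cipher = AES.new(self._key, AES.MODE_CTR, nonce=nonce)
            return cipher.decrypt(ciphertext)    # frame out, key stays in

    def render(frame):
        pass                                     # stand-in for the video path

    def user_mode_decoder(agent, encrypted_stream):
        # Delegated user-mode component: sees decrypted frames, never
        # keys.  An exploit here yields plaintext frames only.
        for nonce, block in encrypted_stream:
            render(agent.decrypt_block(nonce, block))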


Given that most current applications don't even get the simplest
applications of encryption right (storing the key and password in the
encrypted file, and checking whether the password is right by string
comparison, is surprisingly common), the prospects are not good for
general applications.  However, it becomes more feasible to build
secure applications in the environments where it matters, or where the
consumer cares sufficiently to pay for the difference in development
cost.
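
For the record, the broken pattern and its fix look something like
this (a Python sketch using modern primitives, PBKDF2 and pycryptodome
AES-GCM; the container layout is left out):

    import hashlib, os
    from Crypto.Cipher import AES

    # Broken: the password (and the key!) live in the file itself, and
    # "verification" is a string comparison, so the file defeats its
    # own protection.
    def check_broken(stored_password, supplied_password):
        return stored_password == supplied_password

    # Better: store only a salt, derive the key from the password, and
    # let authenticated decryption fail on a wrong password.  Nothing
    # secret is ever written to the container.
    salt = os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", b"user password", salt, 100_000)
    enc = AES.new(key, AES.MODE_GCM)
    ciphertext, tag = enc.encrypt_and_digest(b"secret content")

    dec = AES.new(key, AES.MODE_GCM, nonce=enc.nonce)
    assert dec.decrypt_and_verify(ciphertext, tag) == b"secret content"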

Of course all this assumes Microsoft manages to securely implement the
TOR and SCP interface, and manages to successfully use trusted IO
paths to prevent the OS and applications from tricking the user into
bypassing intended trusted agent functionality (another interesting
sub-problem).  CC EAL3 on the SCP is a good start, but they have
pressures to make the TOR and Trusted Agent APIs flexible, so we'll
see how that works out.

Adam
--
http://www.cypherspace.org/adam/

On Mon, Aug 12, 2002 at 04:32:05PM -0400, Tim Dierks wrote:
> At 09:07 PM 8/12/2002 +0100, Adam Back wrote:
> >At some level there has to be a trade-off between what you put in
> >trusted agent space and what becomes application code.  If you put the
> >whole application in trusted agent space, then while all its
> >application logic is fully protected, the danger will be that you have
> >added too much code to reasonably audit, so people will be able to
> >gain access to that trusted agent via buffer overflow.
>
> I agree; I think the system as you describe it could work and would be
> secure, if correctly executed. However, I think it is infeasible to
> generally implement commercially viable software, especially in the
> consumer market, that will be secure under this model. Either the
> functionality will be too restricted to be accepted by the market, or there
> will be a set of software flaws that allow the system to be penetrated.
>
> The challenge is to put all of the functionality which has access to
> content inside of a secure perimeter, while keeping the perimeter secure
> from any data leakage or privilege escalation. [...]




trade-offs of secure programming with Palladium (Re: Palladium: technical limits and implications)

2002-08-12 Thread Adam Back

I think you are making incorrect presumptions about how you would use
Palladium hardware to implement a secure DRM system.  If used as you
suggest it would indeed suffer the vulnerabilities you describe.

The difference between an insecure DRM application such as you
describe and a secure DRM application correctly using the hardware
security features is somewhat analogous to the current difference
between an application that relies on not being reverse engineered for
its security vs one that encrypts data with a key derived from a user
password.

In a Palladium DRM application done right, everything which sees keys
and plaintext content would reside inside Trusted Agent space, inside
DRM-enabled graphics cards which restrict access to video RAM, and
later inside DRM-enabled monitors (an encrypted digital signal to the
monitor) and DRM-enabled soundcards (encrypted content to the
speakers).  The encrypted content to media-related output peripherals
is like HDCP, only done right, with non-broken crypto.
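
The last hop might look something like the following hypothetical
Python sketch; the monitor object and its key-wrapping step are
invented, and AES-CTR merely stands in for whatever sound cipher
replaces HDCP's broken one:

    import os
    from Crypto.Cipher import AES

    def protected_output(monitor, frames):
        session_key = os.urandom(32)        # fresh key for each playback
        # wrap_key() stands in for a key exchange with the monitor's
        # built-in decryption hardware; none of this is a real API
        monitor.load_session_key(monitor.wrap_key(session_key))
        for seq, frame in enumerate(frames):
            nonce = seq.to_bytes(8, "big")  # unique per frame (CTR mode)
            cipher = AES.new(session_key, AES.MODE_CTR, nonce=nonce)
            monitor.display(nonce, cipher.encrypt(frame))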

Now the only things left in application space, where you can reverse
engineer and hack, will be UI elements and the application logic that
drives the trusted agent, remote attestation, content delivery and
hardware.  At no time will keys or content reside in space that you
can virtualize or debug.


In the short term it may be that some of these will not be fully
implemented, so that content does pass through OS or application
space, or into non-DRM video cards and non-DRM monitors, but the above
is the end-goal as I understand it.

As you can see, there is still the limit that the trusted agent code
must not be remotely exploitable, but this is within the control of
the DRM vendor.  If he does a good job of making a simple software
architecture and avoiding potential for buffer overflows, he stands a
much better chance of having a secure DRM platform than if, as you
describe, exploited OS code or rogue driver code can subvert his
application.


There is also, I suppose, the possibility of pushing content
decryption onto the DRM video card, so that the TOR does little apart
from channelling key exchange messages from the SCP to the video card,
and channelling remote attestation and key exchanges between the DRM
license server and the SCP.  The rest would be streaming encrypted
video formats, such as CSS VOB blocks (only with good crypto), from
the network or disk to the video card.
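
The overall flow might look like this in pseudocode (Python; every
object and method name is invented, since the real TOR/SCP interfaces
are unpublished):

    def play_title(license_server, scp, video_card, disk):
        # 1. Remote attestation and key exchange between the SCP and
        #    the license server; the TOR merely relays opaque blobs.
        quote = scp.attest()                    # signed integrity metrics
        wrapped_key = license_server.issue_key(quote)
        # 2. The SCP re-wraps the content key to the DRM video card;
        #    again the TOR only channels messages, never seeing keys.
        video_card.load_key(scp.rewrap_for(video_card.key_id(), wrapped_key))
        # 3. Everything else is just moving ciphertext: encrypted
        #    VOB-style blocks stream from disk straight to the card.
        for block in disk.read_blocks():
            video_card.decrypt_and_render(block)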


Similar kinds of arguments about the correct break-down between
application logic and the placement of security-policy-enforcing code
in Trusted Agent space apply to general applications.  For example,
you could imagine a file-sharing application which hid the data the
user's machine was serving from the user.  If you did it correctly,
this would be secure to the extent of the hardware tamper resistance
(and the implementer's ability to keep the security-policy-enforcing
code line-count down and audit it well).
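
Sealed storage is what would make that possible.  Here is a toy
Python simulation of the interface shape; a real SCP holds the root
key in hardware and measures the agent binary itself, so this shows
only the shape, not the security:

    import hashlib, os
    from Crypto.Cipher import AES

    ROOT_KEY = os.urandom(32)   # hardware-held, unreadable in a real SCP

    def seal(agent_digest, blob):
        # Bind the blob to the measured identity of the sealing agent.
        key = hashlib.sha256(ROOT_KEY + agent_digest).digest()
        cipher = AES.new(key, AES.MODE_GCM)
        ciphertext, tag = cipher.encrypt_and_digest(blob)
        return cipher.nonce, ciphertext, tag

    def unseal(agent_digest, nonce, ciphertext, tag):
        # A modified agent measures to a different digest, so unsealing
        # fails; the user never holds a key that opens the served data.
        key = hashlib.sha256(ROOT_KEY + agent_digest).digest()
        return AES.new(key, AES.MODE_GCM, nonce=nonce).decrypt_and_verify(
            ciphertext, tag)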


At some level there has to be a trade-off between what you put in
trusted agent space and what becomes application code.  If you put the
whole application in trusted agent space, then while all its
application logic is fully protected, the danger will be that you have
added too much code to reasonably audit, so people will be able to
gain access to that trusted agent via buffer overflow.


So therein lies the crux of secure software design in the
Palladium-style secure application space: choosing a good break-down
between security policy enforcement and application code.  There must
be a balance, and what makes sense depends on the application and on
the ingenuity of the protocol designer in coming up with clever
designs that enforce the application's desired policy to hardware
tamper-resistant levels while providing a workably small and
practically auditable trusted agent module.


So there are practical limits stemming from the reality that code
complexity is inversely proportional to auditability and security,
but the extra ring -1, remote attestation, sealing and integrity
metrics really do offer some security advantages over the current
situation.

Adam

On Mon, Aug 12, 2002 at 03:28:15PM -0400, Tim Dierks wrote:
> At 07:30 PM 8/12/2002 +0100, Adam Back wrote:
> >(Tim Dierks: read the earlier posts about ring -1 to find the answer
> >to your question about feasibility in the case of Palladium; in the
> >case of TCPA your conclusions are right I think).
> 
> The addition of an additional security ring with a secured, protected
> memory space does not, in my opinion, change the fact that such a ring
> cannot accurately determine that a particular request is consistent with
> any definable security policy. I do not think it is technologically
> feasible for ring -1 to determine, upon receiving a request, that the
> request was generated by trusted software operating in accordance with
> the intent of whoever signed it.
> 
> Specifically, let's presume that a Palladium-enabled application is being 
> used