Re: Cryptogram: Palladium Only for DRM

2002-09-20 Thread Alan Braggins

> Of course, those like Lucky who believe that trusted computing technology
> is evil incarnate are presumably rejoicing at this news.  Microsoft's
> patent will limit the application of this technology.

In what way is "in the desktop of almost every naive user" a usefully
limited application?




Re: Cryptogram: Palladium Only for DRM

2002-09-19 Thread David Wagner

AARG! Anonymous  wrote:
>Lucky Green wrote:
>> In the interest of clarity, it probably should be mentioned that any
>> claims Microsoft may make stating that Microsoft will not encrypt their
>> software or software components when used with Palladium of course only
>> applies to Microsoft [...]
>
>First, it is understood that Palladium hashes the secure portions of
>the applications that run.  [...]
>
>With that architecture, it would not work to do as some have proposed:
>the program loads data into secure memory, decrypts it and jumps to it.
>The hash would change depending on the data and the program would no
>longer be running what it was supposed to.

I think Lucky is right: Palladium does support encrypted programs.
Imagine an interpreter interpreting data, where the data lives in
the secure encrypted "vault" area.  This has all the properties of
encrypted code.  In particular, the owner of the machine might not be
able to inspect the code the machine is running.

If you want a more concrete example, think of a JVM executing encrypted
bytecodes, or a Perl interpreter running encrypted Perl scripts.  For all
practical purposes, this is encrypted software.  Whether this scenario
will become common is something we can only speculate on, but Palladium
does support this scenario.
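To make the interpreter point concrete, here is a minimal sketch (not
taken from any Palladium document; unseal() and the toy bytecode format
are hypothetical stand-ins) of an interpreter whose own hash stays fixed
while the logic it executes only ever exists as sealed data:

# Hypothetical sketch: the interpreter below is what Palladium would
# measure; the "program" it runs only ever exists as sealed data.
# unseal() is an invented placeholder for whatever sealed-storage call
# the trusted environment exposes -- it is not a real API.

def unseal(blob):
    """Placeholder: return decrypted bytecode bound to this code's hash."""
    raise NotImplementedError("provided by the trusted environment")

def run(bytecode):
    """A toy stack machine: 0x01 pushes the next byte, 0x02 adds, else halt."""
    stack, i = [], 0
    while i < len(bytecode):
        op = bytecode[i]
        if op == 0x01:               # PUSH <byte>
            stack.append(bytecode[i + 1]); i += 2
        elif op == 0x02:             # ADD
            b, a = stack.pop(), stack.pop(); stack.append(a + b); i += 1
        else:                        # HALT
            break
    return stack[-1] if stack else 0

# The interpreter's hash never changes, so attestation and sealing keep
# working, yet the owner of the machine only ever sees ciphertext:
#     result = run(unseal(sealed_blob_from_disk))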




RE: Cryptogram: Palladium Only for DRM

2002-09-19 Thread James A. Donald

--
On 19 Sep 2002 at 11:13, AARG! Anonymous wrote:
> Of course, those like Lucky who believe that trusted
> computing technology is evil incarnate are presumably
> rejoicing at this news. Microsoft's patent will limit the
> application of this technology.  And the really crazy people
> are the ones who say that Palladium is evil, but Microsoft is
> being unfair in not licensing their patent widely!

The evil of DRM, like the evils of guns, depends on who has the
gun and who has not.

If only certain privileged people have guns, and the rest of us
are disarmed, then guns are evil indeed.

If trusted computing means that certain special people have
ring -1 access to my computer, and I do not, and those certain
special people are people I do not trust ... 

--digsig
 James A. Donald
 6YeGpsZR+nOTh/cGwvITnSR3TdzclVpR0+pr3YYQdkG
 9qfOgx4DuD39ZV1os+Mk6SzsJp3A6f8e/S94djUj
 41XdHA+e/zdxPCIroQznM5ILiFBEOUSYYagF5KQkb




RE: Cryptogram: Palladium Only for DRM

2002-09-19 Thread AARG! Anonymous

Lucky Green wrote:
> AARG! Wrote:
> > In addition, I have argued that trusted computing in general 
> > will work very well with open source software.  It may even 
> > be possible to allow the user to build the executable himself 
> > using a standard compilation environment.
>
> What AARG! is failing to mention is that Microsoft holds that Palladium,
> and in particular Trusted Operating Root ("nub") implementations, are
> subject to Microsoft's DRM-OS patent. Absent a patent license from
> Microsoft, any individual developer, open source software development
> effort, and indeed any potential competitor of Microsoft that wishes to
> create a Palladium-like TOR would do so in violation of Microsoft's
> patent. U.S. Patent law takes a dim view of such illegal infringers:
> willful infringers, in particular infringers that generate a profit from
> their creation of a non-Microsoft version of a TOR face the risk of a
> court ordering such infringers to pay treble damages.

That's too bad.  Trusted computing is a very interesting technology
with many beneficial uses.  It is a shame that Microsoft has a patent on
this and will be enforcing it, which will reduce the number of competing
implementations.

Of course, those like Lucky who believe that trusted computing technology
is evil incarnate are presumably rejoicing at this news.  Microsoft's
patent will limit the application of this technology.  And the really
crazy people are the ones who say that Palladium is evil, but Microsoft
is being unfair in not licensing their patent widely!

> As of this moment, Microsoft has not provided the open source community
> with a world-wide, royalty-free, irrevocable patent license to the
> totality of Microsoft's patents utilized in Palladium's TOR. Since open
> source efforts therefore remain legally prohibited from creating
> non-Microsoft TORs, AARG!'s lauding of synergies between Palladium and
> open source software development appears premature.

Well, I was actually referring to open source applications, not the OS.
Palladium-aware apps that are available in source form can be easily
verified to make sure that they aren't doing anything illicit.  Since the
behavior of the application is relatively opaque while it is protected by
Palladium technology, the availability of source serves as an appropriate
balance.

But it does appear that Microsoft plans to make the source to the TOR
available in some form for review, so apparently they too see the synergy
between open (or at least published) source and trusted computing.


> > [1] A message from Microsoft's Peter Biddle on 5 Aug 2002; 
> > unfortunately the cryptography archive is missing this day's 
> > messages.  "The memory isn't encrypted, nor are the apps nor 
> > the TOR when they are on the hard drive. Encrypting the apps 
> > wouldn't make them more secure, so they aren't encrypted."  
>
> In the interest of clarity, it probably should be mentioned that any
> claims Microsoft may make stating that Microsoft will not encrypt their
> software or software components when used with Palladium of course only
> applies to Microsoft and not to the countless other software vendors
> creating applications for the Windows platform.

UNLESS Microsoft means that the architecture is such that it does not
support encrypting applications!  The wording of the statement above seems
stronger than just "we don't plan on encrypting our apps at this time".
There are a couple of reasons to believe that this might be true.

First, it is understood that Palladium hashes the secure portions of
the applications that run.  This hash is used to decrypt data and to
report to remote servers what software is running.  It seems likely
that the hash is computed when the program is loaded.  So the probable
API is something like "load this file into secure memory, hash it and
begin executing it."

With that architecture, it would not work to do as some have proposed:
the program loads data into secure memory, decrypts it and jumps to it.
The hash would change depending on the data and the program would no
longer be running what it was supposed to.  This would actually undercut
the Palladium security guarantees; the program would no longer be running
code with a known hash.

Second, the Microsoft Palladium white paper at
http://www.microsoft.com/presspass/features/2002/jul02/0724palladiumwp.asp
describes the secure memory as "trusted execution space".  This suggests
that this memory is designed for execution, not for holding data.
The wording hints at an architectural separation between code and data,
when in the trusted mode.


> Lastly, since I have seen this error in a number of articles, it seems
> worth mentioning that Microsoft stated explicitly that increasing the
> security of DRM schemes protecting digital entertainment content, but
> not executable code, formed the impetus to the Palladium effort.

Further reason to believe that Palladium's architecture may not support
the encryption of applications.

Re: Cryptogram: Palladium Only for DRM

2002-09-18 Thread Nomen Nescio

Peter Biddle writes:
> Pd is designed to fail well - failures in SW design shouldn't result in
> compromised secrets, and compromised secrets shouldn't result in a BORE
> attack.

Could you say something about the sense in which Palladium achieves
BORE ("break once run everywhere") resistance?  It seems that although
Palladium is supposed to be able to provide content security (among
other things), a broken Palladium implementation would allow extracting
the content from the "virtual vault" where it is kept sealed.  In that
case the now-decrypted content can indeed run everywhere.

This seems to present an inconsistency between the claimed strength of the
system and the description of its security behavior.  This discrepancy
may be why Palladium critics like Ross Anderson charge that Microsoft
intends to implement "document revocation lists" which would let Palladium
systems seek out and destroy illicitly shared documents and even programs.

Some have claimed that Microsoft is talking out of both sides of its
mouth, promising the content industry that it will be protected against
BORE attacks, while assuring the security/privacy community that the
system is limited in its capabilities.  If you could clear up this
discrepancy that would be helpful.  Thanks...




Re: but _is_ the pentium securely virtualizable? (Re: Cryptogram: Palladium Only for DRM)

2002-09-18 Thread Peter

The issue isn't whether or not the architecture as it existed in the past is
or isn't able to securely isolate user and kernel mode processes in an OS
which may not exist. If an OS can be written to securely isolate user and
kernel mode processes then I am sure that someone clever will find a way to
use it to do such a thing and may have an excellent security solution for
that OS which runs on current chips. I wish whomever tries to do this the
best of luck.

In Windows there are a number of reasons we can't use the current isolation
model for absolute enforcement of isolation. The biggest business reasons
are backwards compatibility for applications and kernel mode drivers, both
of which count on the current architecture and all of its strengths (and
quirks). As we have stated before, we designed Pd with the assumption that
we couldn't break apps and we couldn't break device drivers.

Arbitrary Windows code which runs today must continue to run and function in
Pd as it does today, and yet Pd must still be able to provide protection.
Someone with a niche OS who doesn't care about breaking things may use a
different approach - they don't have gazillions of lines of 3rd party code
counting on version to version compatibility. We do.

From a technical perspective, there are also a number of reasons that the
current isolation models don't work.  My guess is that a hard core Linux or
Unix kernel dev could probably explain this just as well as MS could,
however I will see if I can get someone on our end to outline the issues as
we see them.

I think that you are talking about separating user-mode processes in VMWare
(right?). What about SCSI controllers? The BIOS? Option ROMS? Kernel mode
device drivers? DMA devices? Random kernel foo.sys? What if the attack uses
SMM to attack VMWare itself? How does VMWare prove that the environment it
inherited when it booted is valid?

Lastly - Pd is only partially about process isolation. Nothing in the
current architecture even attempts to address SW attestation, delegated
evaluation, authentication, or the sealing of data.

P


- Original Message -
From: "Nathaniel Daw" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Cc: "Cypherpunks" <[EMAIL PROTECTED]>
Sent: Tuesday, September 17, 2002 3:01 PM
Subject: Re: but _is_ the pentium securely virtualizable? (Re: Cryptogram:
Palladium Only for DRM)


>
> > The fact that VMWare works just means they used some tricks to make it
> > practically virtualize some common OSes, not that it is no longer
> > possible to write malicious software to run as user or privileged
> > level inside the guest OS and have it escape the virtualization.
>
> I spoke with someone who had evaluated the appropriateness of the VMWare
> internals for security sandboxing with respect to just this point. He
> seemed to believe that it is simply not possible for processes in the
> guest to escape the sandbox (perhaps, in light of the paper you
> cite, this signals inefficiencies in VMWare). Other people on this list
> were, I believe, involved in porting VMWare to be hosted under the BSD
> architecture and may be able to speak further about this. In any case,
> the broader point that has been made repeatedly is that even if the
> Pentium is not efficiently, securely virtualizable due to quirks in its
> instruction set, clearly there are architectures which are but which avoid
> the objectionable, user-hostile, aspects of the Pd scheme.
>
> n
>
>
>




but _is_ the pentium securely virtualizable? (Re: Cryptogram: Palladium Only for DRM)

2002-09-17 Thread Adam Back

On Mon, Sep 16, 2002 at 11:01:06PM -0400, Perry E. Metzger wrote:
> [...] in a correctly operating OS, MMUs+file permissions do more or
> less stop processes from seeing each others data if the OS functions
> correctly.

The OS can stop user processes inspecting each other's address space.
Therefore a remote exploit in one piece of application software should
not result in a compromise of another piece of software.  (So an IE
bug should not allow the banking application to be broken.)  (Note
also that in practice, with most current OSes, preventing local process
access from being converted into root is not that well guaranteed.)

However the OS itself is a complex piece of software, and frequently
remote exploits are found in it and/or the device drivers it runs.  OS
exploits can freely ignore the protection between user applications,
reading your banking keys.

Even if a relatively secure OS is run (like some of the BSD variants),
the protection is not _that_ secure.  Vulnerabilities are found
periodically (albeit mostly by the OS developers rather than
externally -- as far as we know).  Plus also the user may be tricked
into running trojaned device drivers.

So one approach to improving this situation (protecting the user from
the risks of trojaned device drivers, and of OSes too large and complex
to realistically assure the security of) is to run the OS itself in
ring0 and a key store and TOR in ring-1 (the Palladium approach).

Some seem to be arguing that you don't need a ring-1.  But if you read
the paper Peter provided a reference for, they conclude that the
pentium architecture is not (efficiently) securely virtualizable.  The
problem area is the existence of sensitive but unprivileged
instructions.
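
To restate that criterion in a few lines (the instruction sets below are
an illustrative subset, not a complete x86 inventory):

# Popek/Goldberg-style check: trap-and-emulate virtualization works when
# every sensitive instruction is also privileged (and therefore traps).
privileged = {"LGDT", "LIDT", "MOV-to-CR0", "HLT"}
sensitive  = privileged | {"SGDT", "SIDT", "SMSW", "PUSHF", "POPF"}

offenders = sensitive - privileged   # sensitive but unprivileged
if offenders:
    print("not classically virtualizable; offenders:", sorted(offenders))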

The fact that VMWare works just means they used some tricks to make it
practically virtualize some common OSes, not that it is no longer
possible to write malicious software to run as user or privileged
level inside the guest OS and have it escape the virtualization.

(It is potentially securely virtualizable using complete software
emulation, but this is highly inefficient.)

(Anonymous can continue on cypherpunks if Perry chooses to censor his
further comments.)

Adam
--
http://www.cypherspace.net/




Re: Cryptogram: Palladium Only for DRM

2002-09-17 Thread AARG! Anonymous

Niels Ferguson wrote:

> At 16:04 16/09/02 -0700, AARG! Anonymous wrote:
> >Nothing done purely in software will be as effective as what can be done
> >when you have secure hardware as the foundation.  I discuss this in more
> >detail below.
>
> But I am not suggesting to do it purely in software. Read the Intel manuals
> for their CPUs. There are loads of CPU features for process separation,
> securing the operating system, etc. The hardware is all there!

> Maybe I have to explain the obvious. On boot you boot first to a secure
> kernel, much like the Pd kernel but running on the main CPU. This kernel
> then creates a virtual machine to run the existing Windows in, much like
> VMware does. The virus is caught inside this virtual machine. All you need
> to do is make sure the virtual machine cannot write to the part of the disk
> that contains your security kernel. 

Thanks for the explanation.  Essentially you can create a virtualized
Palladium, where you emulate the functionality of the secure hardware.
The kernel normally has access to all of memory, for example, but you can
virtualize the MMU as VMWare does, so that some memory is inaccessible
even to the kernel, while the kernel can still run pretty much the same.
Similarly your virtualizing software could compute a hash of code that
loads into this secure area and even mimic the Palladium functionality
to seal and unseal data based on that hash.  All this would be done at
a level which was inaccessible to ordinary Windows code, so it would be
basically as secure as Palladium is with hardware.
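
As a rough sketch of the seal/unseal part of such a software emulation
(the key derivation and cipher choices here are purely illustrative, not
anything specified for Palladium, and it assumes the third-party Python
'cryptography' package):

# Minimal sketch of software-emulated sealing: data is bound to the hash
# of the loaded code by deriving the sealing key from (master secret,
# code hash), so only identically-hashed code can get it back.
import base64, hashlib
from cryptography.fernet import Fernet

def _key_for(master_secret, code_hash):
    # Key reachable only by code whose measured hash equals code_hash.
    return base64.urlsafe_b64encode(
        hashlib.sha256(master_secret + code_hash).digest())

def seal(data, master_secret, code_hash):
    return Fernet(_key_for(master_secret, code_hash)).encrypt(data)

def unseal(blob, master_secret, code_hash):
    # Raises InvalidToken if a differently-hashed program asks for it.
    return Fernet(_key_for(master_secret, code_hash)).decrypt(blob)

The master secret would live at the virtualization layer, inaccessible
to ordinary Windows code, standing in for the hardware-held key.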

The one thing that you don't get with this method is secure attestation.
There is no way your software can prove to a remote system that it is
running a particular piece of code, as is possible with Pd hardware.
However I believe you see this as not a security problem, since in your
view the only use for such functionality is for DRM.

I do think there are some issues with this approach to creating a
secure system, even independent of the attestation issue.  One is
performance.  According to a presentation by the VMWare chief scientist [1],
VMWare sees slowdowns of 8 to 30 percent on CPU-bound processes, with
graphics-intensive applications even worse, perhaps a factor of 2 slower.
Maybe Windows could do better than this, but users aren't going to be
happy about giving up 10 percent or more of their CPU performance.

Also, Palladium hardware provides protection against DMA devices:
"Even PCI DMA can't read or write memory which has been reserved to a
nub's or TA's use (including the nub's or TA's code). This memory is
completely inaccessible and can only be accessed indirectly through
API calls. The chipset on the motherboard is modified to enforces this
sort of restriction." [2]  It's conceivable that without this hardware
protection, a virus could exploit a security flaw in an external device
and get access to the secure memory provided by a virtualized Palladium.

But these are not necessarily major problems.  Generally I now agree
with your comments, and those of others, that the security benefits of
Palladium - except for secure remote attestation - can be provided using
existing and standard PC hardware, and that the software changes necessary
are much like what would be necessary for the current Palladium design,
plus the work to provide VMWare-type functionality.

However that still leaves the issue of remote attestation...

> Who are you protecting against? If the system protects the interests of the
> user, then you don't need to protect the system from the user. The security
> chip is only useful if you try to take control away from the user.

This is a simplistic view.  There are many situations in which it is in
the interests of the user to be able to prove to third parties that he
is unable to commit certain actions.  A simple example is possession
of a third-party cryptographic certificate.  The only reason that is
valuable is because the user can't create it himself.  Any time someone
shows a cert, they are giving up some control in order to get something.
They can't modify that certificate without rendering it useless.  They are
limited in what they can do with it.  But it is these very limitations
that make the cert valuable.

But let me cut to the chase and provide some examples where remote
attestation, allowing the user to prove that he is running a particular
program and that it is unmolested, is useful.  These will hopefully
encourage you to modify your belief that "The 'secure chip' in Pd is
only needed for DRM. All other claimed benefits of Pd can be achieved
using existing hardware. To me this is an objectively verifyable truth."
I don't think any of these examples could be solved with software plus
existing hardware alone.

The first example is a secure online game client.  Rampant cheating in the
online gaming industry is a major problem which has gotten much attention
in the past few months.  Players can load hacks to let them