At 10:52 PM +0100 10/21/02, Adam Back wrote:
> On Sun, Oct 20, 2002 at 10:38:35PM -0400, Arnold G. Reinhold wrote:
>> There may be a hole somewhere, but Microsoft is trying hard to get
>> it right and Brian seemed quite competent.
>
> It doesn't sound breakable in pure software for the user, so this
> forces the user to use some hardware hacking.
>
> They disclaimed explicitly in the talk announce that:
>
> | "Palladium" is not designed to provide defenses against
> | hardware-based attacks that originate from someone in control of the
> | local machine.
>
> However I was interested to know exactly how easy it would be to
> defeat with simple hardware modifications or reconfiguration.
>
> You might ask: if there is no intent for Palladium to be secure
> against the local user, then why design it so that the local user
> has to use (simple) hardware attacks?  Could they not instead just
> make these functions available with a user-present test, in the
> same way that the TOR and SCP functions can be configured by the user
> (but not by hostile software)?
One of the services that Palladium offers, according to the talk announcement, is:

| b. Attestation. The ability for a piece of code to digitally sign
| or otherwise attest to a piece of data and further assure the
| signature recipient that the data was constructed by an unforgeable,
| cryptographically identified software stack.
It seems to me such a service requires that Palladium be secure against the local user. I think that is the main goal of the product.
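
To make that requirement concrete, here is a minimal sketch (invented names; an HMAC under a device key stands in for the public-key signature a real SCP would produce) of what an attestation along those lines might look like: the SCP signs the application's data together with the digest of the software stack that produced it, and the remote recipient accepts only if the signature checks and the digest matches a stack it is prepared to trust.

import hashlib
import hmac

# Hypothetical device key held inside the SCP.  An HMAC under this key
# stands in for the public-key signature a real SCP would produce; a
# real verifier would check a signature and never hold the device key.
SCP_DEVICE_KEY = b"secret key that never leaves the SCP"

def stack_digest(images):
    """Digest identifying the loaded software stack (TOR, agents, ...)."""
    h = hashlib.sha1()
    for image in images:
        h.update(hashlib.sha1(image).digest())
    return h.digest()

def scp_attest(data, digest):
    """SCP 'signs' the data bound to the stack that produced it."""
    return hmac.new(SCP_DEVICE_KEY, digest + data, hashlib.sha1).digest()

def recipient_verify(data, digest, sig, trusted_digests):
    """Remote party checks the signature AND that the stack is one it trusts."""
    expected = hmac.new(SCP_DEVICE_KEY, digest + data, hashlib.sha1).digest()
    return hmac.compare_digest(sig, expected) and digest in trusted_digests

# Example: a stack consisting of a TOR image plus one trusted agent.
d = stack_digest([b"TOR image", b"media player agent"])
sig = scp_attest(b"rights-managed content request", d)
print(recipient_verify(b"rights-managed content request", d, sig, {d}))  # True

The scheme only assures the recipient of anything if the local user cannot obtain a valid signature over a stack digest that was never actually loaded, which is why the product has to resist its owner.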

> For example, why not a local user-present function to lie about the TOR
> hash, to allow debugging?

Adam Back wrote:
>- isn't it quite weak as someone could send different information to
>the SCP and processor, thereby being able to forge remote attestation
>without having to tamper with the SCP; and hence being able to run
>different TOR, observe trusted agents etc.

There is also a change to the PC memory management to support a
trusted bit for memory segments. Programs not in trusted mode can't
access trusted memory.
> A "trusted bit" in the segment register doesn't make it particularly
> hard to break if you have access to the hardware.
>
> For example you could:
>
> - replace your RAM with dual-ported video RAM (which can be read using
> alternate equipment on the 2nd port).
>
> - just keep RAM powered-up through a reboot so that you load a new TOR
> which lets you read the RAM.
Brian mentioned that the system will not be secure against someone who can access the memory bus. But I can see steps being taken in the future to make that mechanically difficult. The history of the scanner laws is instructive. Originally one had the right to listen to any radio communication as long as one did not make use of the information received. Then Congress banned the sale of scanners that can receive cell phone frequencies. Subsequently the laws were tightened to require that scanners be designed so that their frequency range cannot be modified; in practice this means the control chip must be potted in epoxy. I can see similar steps being taken with Palladium PCs. Memory expansion could be dealt with by finding a way to give Palladium preferred access to the first block of physical memory, soldered onto the motherboard.


Also there will be three additional x86 instructions (in microcode)
to support secure boot of the trusted kernel and present a SHA1 hash
of the kernel code in a read-only register.
> But how will the SCP know that the hash it reads comes from the
> processor (as opposed to being forged by the user)?  Is there any
> authenticated communication between the processor and the SCP?
Brian also mentioned that there would be changes to the Southbridge LPC bus, which I gather is a local I/O bus in PCs. The SCP will sit on that bus, and presumably the changes are to ensure that the SCP can only be accessed in secure mode.
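
Adam's forgery question comes down to whether the SCP learns the TOR hash over a path that ordinary software cannot spoof. The toy sketch below (invented names, not Microsoft's actual interfaces) contrasts the two cases: if the SCP simply accepts whatever hash software reports, someone running a modified TOR can report the digest of the genuine one; if the digest is computed and latched by the secure-boot instruction and delivered over a path like the modified LPC bus, that particular forgery is closed off, which is presumably what the Southbridge changes are for.

import hashlib

GENUINE_TOR = b"genuine TOR image"
MODIFIED_TOR = b"modified TOR image that lets me read vaults"

def sha1(b):
    return hashlib.sha1(b).digest()

class SCP:
    """Toy security coprocessor: attests to whatever TOR digest it holds."""
    def __init__(self):
        self.tor_digest = None

    def set_tor_digest(self, digest):
        # Whatever ends up here is what attestation will vouch for.
        self.tor_digest = digest

# Case 1: unauthenticated path -- ordinary software tells the SCP the hash.
scp = SCP()
running_tor = MODIFIED_TOR                  # attacker boots a modified TOR...
scp.set_tor_digest(sha1(GENUINE_TOR))       # ...but reports the genuine digest
print(scp.tor_digest == sha1(running_tor))  # False: attestation now lies

# Case 2: the secure-boot instruction hashes the TOR image itself, and the
# digest travels to the SCP over a path software cannot write to.
def secure_boot(tor_image, scp):
    scp.set_tor_digest(sha1(tor_image))     # latched by hardware, read-only after

scp2 = SCP()
secure_boot(running_tor, scp2)
print(scp2.tor_digest == sha1(running_tor))  # True: SCP sees the TOR actually running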

At 12:27 AM +0100 10/22/02, Peter Clay wrote:
> I've been trying to figure out whether the following attack will be
> feasible in a Pd system, and what would have to be incorporated to protect
> against it.
>
> Alice runs "trusted" application T on her computer. This is some sort of
> media application, which acts on encoded data streamed over the
> internet. Mallory persuades Alice to stream data which causes a buffer
> overrun in T. The malicious code, running with all of T's privileges:
>
> - abducts choice valuable data protected by T (e.g. individual book keys
> for ebooks)
> - builds its own vault with its own key
> - installs a modified version of T, V, in that vault with access to the
> valuable data
> - trashes T's vault
>
> The viral application V is then in an interesting position. Alice has two
> choices:
>
> - nuke V and lose all her data (possibly including all backups, depending
> on how backup of vaults works)
> - allow V to act freely
There are two cases here. One is a buffer overflow in one of the trusted "agents" running in Palladium. Presumably an attack here will only be able to damage vaults associated with the product that contains that agent. The vendor that supplies the agent will have a strong incentive to avoid overflow opportunities.
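
Presumably what limits that damage is the sealing of each vault to the digest of its agent: the SCP only releases a vault's keys to code whose measured digest matches the one the data was sealed to, so a different agent, or Mallory's modified V, cannot read T's vault directly. A rough sketch of the idea (invented names, not the real Palladium interfaces; a keyed MAC stands in for sealed storage's authenticated encryption):

import hashlib
import hmac

# Invented key name; in reality this would never leave the SCP, and the
# sealed data would also be encrypted, not just integrity-bound.
SCP_STORAGE_KEY = b"storage root key inside the SCP"

def seal(data, agent_digest):
    """Bind data to the code digest of the agent allowed to read it back."""
    tag = hmac.new(SCP_STORAGE_KEY, agent_digest + data, hashlib.sha1).digest()
    return (data, tag)

def unseal(blob, caller_digest):
    """Release the data only to code whose digest matches the seal."""
    data, tag = blob
    expected = hmac.new(SCP_STORAGE_KEY, caller_digest + data, hashlib.sha1).digest()
    if not hmac.compare_digest(tag, expected):
        raise PermissionError("caller's code digest does not match the seal")
    return data

digest_T = hashlib.sha1(b"trusted media app T").digest()
digest_V = hashlib.sha1(b"Mallory's modified app V").digest()

vault = seal(b"individual book keys", digest_T)
print(unseal(vault, digest_T))   # T -- or anything running inside T -- gets the keys
try:
    unseal(vault, digest_V)      # V, measured to a different digest, does not
except PermissionError as e:
    print("unseal refused:", e)

Note that the check is only on who is asking, not on whether that code has been subverted at run time; code injected into a correctly measured, running T can unseal whatever T can, which is exactly the gap Peter's buffer-overrun scenario exploits.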

The more dangerous case is a buffer overflow in the Nexus. Brian admitted that this would be disastrous. Obviously QA will be intense. They plan to publish the Nexus source code. Brian was even asked if they would publish source for their C compiler. He said they had thought of that; they didn't think they could get the Visual C compiler published, but they are considering coming up with a stripped-down C compiler they can release.

> I haven't seen enough detail yet to be able to flesh this out, but it does
> highlight some areas of concern:
>
> - how do users back up vaults?
They realize that the whole backup/upgrade issue is a big concern. Brian briefly presented some very complex schemes for doing this, which I didn't grasp.

> - there really needs to be a master override to deal with misbehaving
> trusted apps.
Presumably an intact Nexus can trash any trusted app. And I don't see how any data in the vault could prevent you from loading a clean Nexus, say from CD-ROM, as long as the SCP isn't altered, and there is supposed to be no way to do that from software.


Arnold Reinhold
