Re: example: secure computing kernel needed

2004-01-03 Thread David Wagner
Amir Herzberg  wrote:
>I'm not sure I agree with your last statement. Consider a typical PC 
>running some insecure OS and/or applications, which, as you said in an 
>earlier post, is the typical situation and threat. Since the OS is insecure 
>and/or (usually) gives administrator privileges to insecure applications, an 
>attacker may be able to gain control and then modify some code (e.g., 
>install a trapdoor). With existing systems, this is hard to prevent. However, 
>it may be possible to detect this with some secure monitoring hardware, which 
>e.g. checks for signatures by the organization's IT department on any 
>installed software.

But:

1. If you can check signatures on running software, you might as well
check the signature when the software is first executed, and prevent
it from being executed if it is not allowed.  The operating system is
going to have to be involved in the process of checking which user-level
applications are running (because only the OS knows this fact), so you
might as well get it involved in deciding which user-level applications
are allowed to be executed.  Proactive prevention is just as easy to
build as after-the-fact detection, and is more useful.
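
For concreteness, here is a minimal sketch of that load-time gate in Python,
using the `cryptography` package; the key names and the run() hand-off are
illustrative, not any particular OS's loader:

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Provisioning, done once: the IT department's signing key.
it_key = Ed25519PrivateKey.generate()
IT_PUBLIC_KEY = it_key.public_key()

def run(binary: bytes) -> None:
    # Stand-in for handing the binary to the real loader.
    print("executing", len(binary), "bytes of approved code")

def execute_if_approved(binary: bytes, signature: bytes) -> None:
    # Proactive prevention: verify before running, not after the fact.
    try:
        IT_PUBLIC_KEY.verify(signature, binary)
    except InvalidSignature:
        raise PermissionError("unsigned or modified binary; refusing to run")
    run(binary)

approved = b"approved productivity app image"
sig = it_key.sign(approved)
execute_if_approved(approved, sig)                    # runs
# execute_if_approved(approved + b"-trapdoor", sig)   # would raise PermissionError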

2. Your system administrator can only enumerate what applications you're
running if you have a trustworthy OS.  Today's OS's typically aren't
very trustworthy.

Let me explain.  Let's say the sysadmin wants to check whether I'm running
Doom.  Fine.  That means the TPM (the hardware) is going to check what
bootloader (or nexus) was loaded at boot time, and sign that fact.
Then, the bootloader/nexus is going to check what operating system
kernel it loaded, and sign that fact.  Finally, the OS kernel is going
to check what user-level applications are running, and sign that fact.
This chain of signatures then tells you what software is running.  But,
if you want to believe in the chain of signatures, you have to believe
in the trustworthiness of all the software involved here: the TPM, the
bootloader/nexus, and the OS.  If my OS is Windows XP, you're screwed;
that chain of signatures doesn't tell you much.  There is no reason to
think Windows XP is trustworthy enough to know what software is running.
If Windows XP has a buffer overrun vulnerability somewhere -- as seems
likely -- then it could have been compromised, and my OS might be
signing false statements.  Maybe I hacked my OS to falsely report that
Doom is not running, when in fact I'm happily gaming away.  Maybe the
Doom installer hacked the OS to hide its presence.  You, and my sysadmin,
have no way of knowing, if Windows has any security vulnerability anywhere.
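
Roughly, the chain of measurements works like this (a small Python sketch,
not the actual TCG PCR-extend protocol; the measured strings are placeholders):

import hashlib

def extend(register: bytes, measurement: bytes) -> bytes:
    # New value depends on everything measured so far plus the new item.
    return hashlib.sha256(register + hashlib.sha256(measurement).digest()).digest()

pcr = b"\x00" * 32                                     # reset at power-on
pcr = extend(pcr, b"bootloader/nexus image")           # measured by the TPM/CRTM
pcr = extend(pcr, b"OS kernel image")                  # measured by the bootloader
pcr = extend(pcr, b"list of running user-level apps")  # reported by the OS itself

# The hardware can sign `pcr`, but the last measurement is only as honest
# as the kernel that produced it: a compromised OS can feed in a false
# app list and the signature over `pcr` will still verify.
print(pcr.hex())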

Conclusion: in today's machines, there's no reliable way to identify
which user-level applications are running, even if we all have a TCG
that supports remote attestation.

Now, there is an exception.  If my application is running natively on
top of the bootloader/nexus, not on top of Windows, then this issue
goes away.  In Microsoft's Palladium architecture, this is known as
"the right-hand side", and the TPM protects the "right-hand side" from
Windows and apps running on Windows.  (I don't recall whether TCPA has
a similar concept.)  But it seems unlikely that everyday applications,
like Doom, are going to be running on the "right-hand side".

To rephrase, my system administrator can reliably tell what is running
on the "right-hand side" of my machine, but my sysadmin has no reliable
way to tell what apps I'm running on top of Windows.

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: example: secure computing kernel needed

2003-12-30 Thread Amir Herzberg
At 04:20 30/12/2003, David Wagner wrote:
Ed Reed wrote:
>There are many business uses for such things, like checking to see
>if locked down kiosk computers have been modified (either hardware
>or software),
I'm a bit puzzled why you'd settle for detecting changes when you
can prevent them.  Any change you can detect, you can also prevent
before it even happens.

I'm not sure I agree with your last statement. Consider a typical PC 
running some insecure OS and/or applications, which, as you said in an 
earlier post, is the typical situation and threat. Since the OS is insecure 
and/or (usually) gives administrator privileges to insecure applications, an 
attacker may be able to gain control and then modify some code (e.g., 
install a trapdoor). With existing systems, this is hard to prevent. However, 
it may be possible to detect this with some secure monitoring hardware, which 
e.g. checks for signatures by the organization's IT department on any 
installed software. A reasonable response when such a violation is 
detected or suspected is to report to the IT department (the "owner" of the machine).
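
A minimal sketch of that detect-and-report monitor, assuming the monitor
itself sees the true state of the disk (package names and hashes are
illustrative):

import hashlib

APPROVED_HASHES = {
    hashlib.sha256(b"office suite v1.2").hexdigest(),
    hashlib.sha256(b"mail client v3.0").hexdigest(),
}

def scan(installed: dict) -> list:
    # Names of installed packages whose image hash is not on the whitelist.
    return [name for name, image in installed.items()
            if hashlib.sha256(image).hexdigest() not in APPROVED_HASHES]

def report_to_it(violations: list) -> None:
    # Detection, not prevention: the monitor can only tell the owner.
    if violations:
        print("ALERT to IT department:", ", ".join(violations))

installed = {"office": b"office suite v1.2",
             "mystery": b"mail client v3.0 with trapdoor"}
report_to_it(scan(installed))   # -> ALERT to IT department: mystery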

On the other hand I fully agree with your other comments in this area and 
in particular with...
...
Summary: None of these applications require full-strength
(third-party-directed) remote attestation.  It seems that an "Owner
Override" would not disturb these applications.
-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: example: secure computing kernel needed

2003-12-29 Thread David Wagner
Ed Reed wrote:
>There are many business uses for such things, like checking to see
>if locked down kiosk computers have been modified (either hardware
>or software),

I'm a bit puzzled why you'd settle for detecting changes when you
can prevent them.  Any change you can detect, you can also prevent
before it even happens.  So the problem statement sounds a little
contrived to me -- but I don't really know anything about kiosks,
so maybe I'm missing something.

In any case, this is an example of an application where owner-directed
remote attestation suffices, so one could support this application
without enabling any of the alleged harms.  (See my previous email.)
In other words, this application is consistent with an "Owner Override".

>verifying that users have not exercised their god-given
>right to install spy-ware and viruses (since they're running with
>administrative privileges, aren't they?),

It sounds like the threat model is that the sysadmins don't trust the
users of the machine.  So why are the sysadmins giving users administrator
or root access to the machine?  It sounds to me like the real problem
here is a broken security architecture that doesn't match up to the
security threat, and remote attestation is a hacked-up patch that's not
going to solve the underlying problems.  But that's just my reaction,
without knowing more.

In any case, this application is also consistent with owner-directed
remote attestation or an "Owner Override".

>and satisfying a consumer
>that the server they're connected to is (or isn't) running software
>that has adequate security domain protections to protect the user's
>data (perhaps backup files) the user entrusts to the server.

If I don't trust the administrators of that machine to protect sensitive
data appropriately, why would I send sensitive data to them?  I'm not
sure I understand the threat model or the problem statement.

But again, this seems to be another example application that's compatible
with owner-directed remote attestation or an "Owner Override".


Summary: None of these applications require full-strength
(third-party-directed) remote attestation.  It seems that an "Owner
Override" would not disturb these applications.

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: example: secure computing kernel needed

2003-12-29 Thread David Wagner
Jerrold Leichter wrote:
>|> *Any* secure computing kernel that can do
>|> the kinds of things we want out of secure computing kernels, can also
>|> do the kinds of things we *don't* want out of secure computing kernels.

David Wagner wrote:
>| It's not hard to build a secure kernel that doesn't provide any form of
>| remote attestation, and almost all of the alleged harms would go away if
>| you remove remote attestation.  In short, you *can* have a secure kernel
>| without having all the kinds of things we don't want.

Jerrold Leichter wrote:
>The question is not whether you *could* build such a thing - I agree, it's
>quite possible.  The question is whether it would make enough sense that it
>would gain wide usage.  I claim not.

Good.  I'm glad we agree that one can build a secure kernel without
remote attestation; that's progress.  But I dispute your claim that remote
attestation is critical to securing our machines.  As far as I can see,
remote attestation seems (with some narrow exceptions) pretty close to
worthless for the most common security problems that we face today.

Your argument is premised on the assumption that it is critical to defend
against attacks where an adversary physically tampers with your machine.
But that premise is wrong.

Quick quiz: What's the dominant threat to the security of our computers?
It's not attacks on the hardware, that's for sure!  Hardware attacks
aren't even in the top ten.  Rather, our main problems are with insecure
software: buffer overruns, configuration errors, you name it.

When's the last time someone mounted a black bag operation against
your computer?  Now, when's the last time a worm attacked your computer?
You got it-- physical attacks are a pretty minimal threat for most users.

So, if software insecurity is the primary problem facing us, how does
remote attestation help with software insecurity?  Answer: It doesn't, not
that I can see, not one bit.  Sure, maybe you can check what software is
running on your computer, but that doesn't tell you whether the software
is any good.  You can check whether you're getting what you asked for,
but you have no way to tell whether what you asked for is any good.

Let me put it another way.  Take a buggy, insecure application, riddled
with buffer overrun vulnerabilities, and add remote attestation.  What do
you get?  Answer: A buggy, insecure application, riddled with buffer
overrun vulnerabilities.  In other words, remote attestation doesn't
help if your trusted software is untrustworthy -- and that's precisely
the situation we're in today.  Remote attestation just doesn't help with
the dominant threat facing us right now.

For the typical computer user, the problems that remote attestation solves
are in the noise compared to the real problems of computer security
(e.g., remotely exploitable buffer overruns in applications).  Now,
sure, remote attestation is extremely valuable for a few applications,
such as digital rights management.  But for typical users?  For most
computer users, rather than providing an order of magnitude improvement
in security, it seems to me that remote attestation will be an epsilon
improvement, at best.

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: example: secure computing kernel needed

2003-12-28 Thread William Arbaugh


I must confess I'm puzzled why you consider strong authentication
the same as remote attestation for the purposes of this analysis.
It seems to me that your note already identifies one key difference:
remote attestation allows the remote computer to determine if they wish
to speak with my machine based on the software running on my machine,
while strong authentication does not allow this.
That is the difference, but my point is that the result with respect to 
the control of your computer is the same. The distant end either 
communicates with you or it doesn't. In authentication, the distant end 
uses your identity to make that decision. In remote attestation, the 
distant end uses your computer's configuration (the computer's identity 
to some degree) to make that same decision.

As a result, remote attestation enables some applications that strong
authentication does not.  For instance, remote attestation enables DRM,
software lock-in, and so on; strong authentication does not.  If you
believe that DRM, software lock-in, and similar effects are undesirable,
then the differences between remote attestation and strong authentication
are probably going to be important to you.

So it seems to me that the difference between authenticating software
configurations vs. authenticating identity is substantial; it affects the
potential impact of the technology.  Do you agree?  Did I miss something?
Did I mis-interpret your remarks?

My statement was that the two are similar to the degree to which the 
distant end has control over your computer. The difference is that in 
remote attestation we are authenticating a system and we have some 
assurance that the system won't deviate from its programming/policy (of 
course all of the code used in these applications will be formally 
verified :-)). In user authentication, we're authenticating a human and 
we have significantly less assurance that the authenticated subject in 
this case (the human) will follow policy. That is why remote 
attestation and authentication produce different side effects enabling 
different applications: the underlying nature of the authenticated 
subject. Not because of a difference in the technology.



P.S. As a second-order effect, there seems to be an additional difference
between remote attestation ("authentication of configurations") and
strong authentication ("authentication of identity").  Remote attestation
provides the ability for "negative attestation" of a configuration:
for instance, imagine a server which verifies not only that I do have
RealAudio software installed, but also that I do not have any Microsoft
Audio software installed.  In contrast, strong authentication does
not allow "negative attestation" of identity: nothing prevents me from
sharing my crypto keys with my best friend, for instance.

Well, biometrics raises some interesting Gattaca issues.  But I'm not 
going to go there on the list. It is a discussion that is better done 
over a few pints.

So to summarize- I was focusing only on the control issue and noting 
that even though the two technologies enable different applications 
(due to the assurance that we have in how the authenticated subject 
will behave), they are very similar in nature.


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: example: secure computing kernel needed

2003-12-26 Thread Seth David Schoen
William Arbaugh writes:

> If that is the case, then strong authentication provides the same 
> degree of control over your computer. With remote attestation, the 
> distant end determines if they wish to communicate with you based on 
> the fingerprint of your configuration. With strong authentication, the 
> distant end determines if they wish to communicate with you based on 
> your identity.

I'm a little confused about why you consider these similar.  They seem
very different to me, particularly in the context of mass-market
transactions, where a service provider is likely to want to deal with
"the general public".

While it's true that service providers could try to demand
some sort of PKI credential as a way of getting the true name of those
they deal with, the particular things they can do with a true name are
much more limited than the things they could do with proof of
someone's software configuration.  Also, in the future, the cost of
demanding a true name could be much higher than the cost of demanding
a proof of software identity.

To give a trivial example, I've signed this paragraph using a PGP
clear signature made by my key 0167ca38.  You'll note that the Version
header claims to be "PGP 17.0", but in fact I don't have a copy of PGP
17.0.  I simply modified that header with my text editor.  You can tell
that this paragraph was written by me, but not what software I used to
write it.
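
A small sketch of why that works (illustrative Python using the
`cryptography` package, not the OpenPGP wire format): the signature covers
only the message body, so any claimed Version header verifies equally well:

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

key = Ed25519PrivateKey.generate()
pub = key.public_key()
body = b"I've signed this paragraph using my key."
sig = key.sign(body)

def verify(claimed_version: str, body: bytes, sig: bytes) -> bool:
    # The claimed Version header never enters the verification at all.
    try:
        pub.verify(sig, body)
        return True
    except InvalidSignature:
        return False

print(verify("Version: PGP 17.0", body, sig))         # True
print(verify("Version: mutt 1.4", body, sig))         # True -- same signature
print(verify("Version: PGP 17.0", body + b"!", sig))  # False: the body is covered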

As a result, you can't usefully expect to take any action based on my
choice of software -- but you can take some action based on whether
you trust me (or the key 0167ca38).  You can adopt a policy that you
will only read signed mail -- or only mail signed by a key that Phil
Zimmermann has signed, or a key that Bruce Lehman has signed -- but
you can't adopt a policy that you will only read mail written by mutt
users.  In the present environment, it's somewhat difficult to use
technical means to increase or diminish others' incentive to use
particular software (at least if there are programmers actively
working to preserve interoperability).

Sure, attestation for platform identity and integrity has some things
in common with authentication of human identity.  (They both use
public-key cryptography, they can both use a PKI, they both attempt to
prove things to a challenger based on establishing that some entity
has access to a relevant secret key.)  But it also has important
differences.  One of those differences has to do with whether trust is
reposed in people or in devices!  I think your suggestion is tantamount
to saying that an electrocardiogram and a seismograph have the same
medical utility because they are both devices for measuring and
recording waveforms.

> I just don't see remote attestation as providing control over your 
> computer provided the user/owner has control over when and if remote 
> attestation is used. Further, I can think of several instances where 
> remote attestation is a good thing. For example, a privacy-preserving P2P 
> file-sharing network. You wouldn't want to share your files with an 
> RIAA-modified version of the program that's designed to break the anonymity 
> of the network.

This application is described in some detail at

http://www.eecs.harvard.edu/~stuart/papers/eis03.pdf

I haven't seen a more detailed analysis of how attestation would
benefit particular designs for anonymous communication networks
against particular attacks.  But it's definitely true that there are
some applications of attestation to third parties that many computer
owners might want.  (The two that first come to mind are distributed
computing projects like [EMAIL PROTECTED] and network games like Quake,
although I have a certain caution about the latter which I will
describe when the video game software interoperability litigation I'm
working on is over.)

It's interesting to note that in this case you benefit because you
received an attestation, not because you gave one (although the
network is so structured that giving an attestation is arranged to be
the price of receiving one: "Give me your name, horse-master, and I
shall give you mine!").

The other thing that end-users might like is if _non-peer-to-peer_
services they interacted with could prove properties about themselves
-- that is, end-users might like to receive rather than to give
attestations.  An anonymous remailer could give an attestation to
prove that it is really running the official Mixmaster and the
official Exim and not a modified Mixmaster or modified Exim that
try to break anonymity.  Apple could give an attestation proving that
it didn't have the ability to alter or to access the contents of
your data while it was stored by its "Internet hard drive" service.

One interesting question is how to characterize on-line services where
users would be asked for attestation (typically to their detriment, by
way of taking away their choice of software) as opposed to on-line
services where users would be able to ask for attestation (typi

Re: example: secure computing kernel needed

2003-12-23 Thread Jerrold Leichter
| >>> We've met the enemy, and he is us.  *Any* secure computing kernel
| >>> that can do
| >>> the kinds of things we want out of secure computing kernels, can also
| >>> do the
| >>> kinds of things we *don't* want out of secure computing kernels.
| >>
| >> I don't understand why you say that.  You can build perfectly good
| >> secure computing kernels that don't contain any support for remote
| >> attestation.  It's all about who has control, isn't it?
| >>
| >There is no control of your system with remote attestation. Remote
| >attestation simply allows the distant end of a communication to
| >determine if your configuration is acceptable for them to communicate
| >with you.
|
| But you missed my main point.  Leichter claims that any secure kernel is
| inevitably going to come with all the alleged harms (DRM, lock-in, etc.).
| My main point is that this is simply not so.
|
| There are two very different pieces here: that of a secure kernel, and
| that of remote attestation.  They are separable.  TCPA and Palladium
| contain both pieces, but that's just an accident; one can easily imagine
| a Palladium-- that doesn't contain any support for remote attestation
| whatsoever.  Whatever you think of remote attestation, it is separable
| from the goal of a secure kernel.
|
| This means that we can have a secure kernel without all the harms.
| It's not hard to build a secure kernel that doesn't provide any form of
| remote attestation, and almost all of the alleged harms would go away if
| you remove remote attestation.  In short, you *can* have a secure kernel
| without having all the kinds of things we don't want.  Leichter's claim
| is wrong.
The question is not whether you *could* build such a thing - I agree, it's
quite possible.  The question is whether it would make enough sense that it
would gain wide usage.  I claim not.

The issues have been discussed by others in this stream of messages, but
let's pull them together.  Suppose I wished to put together a secure system.
I choose my open-source software, perhaps relying on the word of others,
perhaps also checking it myself.  I choose a suitable hardware base.  I put
my system together, install my software - voila, a secure system.  At least,
it's secure at that moment in time.  How do I know, the next time I come to
use it, that it is *still* secure - that no one has slipped in and modified
the hardware, or found a bug and modified the software?

I can go for physical security.  I can keep the device with me all the time,
or lock it in a secure safe.  I can build it using tamper-resistant and
tamper-evident mechanisms.  If I go with the latter - *much* easier - I have
to actually check the thing before using it, or the tamper evidence does me
no good ... which acts as a lead-in to the more general issue.

Hardware protections are fine, and essential - but they can only go so far.
I really want a software self-check.  This is an idea that goes way back:
Just as the hardware needs to be both tamper-resistant and tamper-evident,
so for the software.  Secure design and implementation gives me tamper-
resistance.  The self-check gives me tamper evidence.  The system must be able
to prove to me that it is operating as it's supposed to.

OK, so how do I check the tamper-evidence?  For hardware, either I have to be
physically present - I can hold the box in my hand and see that no one has
broken the seals - or I need some kind of remote sensor.  The remote sensor
is a hazard:  Someone can attack *it*, at which point I lose my tamper-
evidence.

There's no way to directly check the software self-check features - I can't
directly see the contents of memory! - but I can arrange for a special highly-
secure path to the self-check code.  For a device I carry with me, this could
be as simple as a "self-check passed" LED controlled by dedicated hardware
accessible only to the self-check code.  But how about a device I may need
to access remotely?  It needs a kind of remote attestation - though a
strictly limited one, since it need only be able to attest proper operation
*to me*.  Still, you can see the slope we are on.

The slope gets steeper.  *Some* machines are going to be shared.  Somewhere
out there is the CVS repository containing the secure kernel's code.  That
machine is updated by multiple developers - and I certainly want *it* to be
running my security kernel!  The developers should check that the machine is
configured properly before trusting it, so it should be able to give a
trustworthy indication of its own trustworthiness to multiple developers.
This *could* be based on a single secret shared among the machine and all
the developers - but would you really want it to be?  Wouldn't it be better
if each developer shared a unique secret with the machine?
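
A sketch of that per-developer variant as a challenge-response over shared
secrets (a hypothetical protocol with illustrative names):

import hashlib, hmac, secrets

# One secret per developer, shared with the machine out of band.
machine_keys = {"alice": secrets.token_bytes(32),
                "bob":   secrets.token_bytes(32)}

def machine_attest(developer: str, challenge: bytes, state_digest: bytes) -> bytes:
    # The machine binds its current self-check result to a fresh challenge.
    return hmac.new(machine_keys[developer], challenge + state_digest,
                    hashlib.sha256).digest()

def developer_check(key: bytes, challenge: bytes,
                    expected_state: bytes, response: bytes) -> bool:
    expected = hmac.new(key, challenge + expected_state, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

state = hashlib.sha256(b"approved kernel + repository image").digest()
challenge = secrets.token_bytes(16)
response = machine_attest("alice", challenge, state)
print(developer_check(machine_keys["alice"], challenge, state, response))  # True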

You can, indeed, stop anywhere along this slope.  You can decide you really
don't need remote attestation, even for yourself - you'll carry the machine
with you, or only use it when you are physically i

Re: example: secure computing kernel needed

2003-12-23 Thread David Wagner
William Arbaugh  wrote:
>David Wagner writes:
>> As for remote attestation, it's true that it does not directly let a remote
>> party control your computer.  I never claimed that.  Rather, it enables
>> remote parties to exert control over your computer in a way that is
>> not possible without remote attestation.  The mechanism is different,
>> but the end result is similar.
>
>If that is the case, then strong authentication provides the same 
>degree of control over your computer. With remote attestation, the 
>distant end determines if they wish to communicate with you based on 
>the fingerprint of your configuration. With strong authentication, the 
>distant end determines if they wish to communicate with you based on 
>your identity.

I must confess I'm puzzled why you consider strong authentication
the same as remote attestation for the purposes of this analysis.

It seems to me that your note already identifies one key difference:
remote attestation allows the remote computer to determine if they wish
to speak with my machine based on the software running on my machine,
while strong authentication does not allow this.

As a result, remote attestation enables some applications that strong
authentication does not.  For instance, remote attestation enables DRM,
software lock-in, and so on; strong authentication does not.  If you
believe that DRM, software lock-in, and similar effects are undesirable,
then the differences between remote attestation and strong authentication
are probably going to be important to you.

So it seems to me that the difference between authenticating software
configurations vs. authenticating identity is substantial; it affects the
potential impact of the technology.  Do you agree?  Did I miss something?
Did I mis-interpret your remarks?



P.S. As a second-order effect, there seems to be an additional difference
between remote attestation ("authentication of configurations") and
strong authentication ("authentication of identity").  Remote attestation
provides the ability for "negative attestation" of a configuration:
for instance, imagine a server which verifies not only that I do have
RealAudio software installed, but also that I do not have any Microsoft
Audio software installed.  In contrast, strong authentication does
not allow "negative attestation" of identity: nothing prevents me from
sharing my crypto keys with my best friend, for instance.
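
As a sketch, the verifier's policy might combine a presence check and an
absence check over the attested inventory, something no identity credential
can express (names are illustrative):

def policy_ok(attested_inventory: set) -> bool:
    # Presence AND absence conditions over an attested configuration.
    must_have = {"RealAudio"}
    must_not_have = {"Microsoft Audio"}
    return (must_have <= attested_inventory
            and not (must_not_have & attested_inventory))

print(policy_ok({"RealAudio", "Doom"}))             # True
print(policy_ok({"RealAudio", "Microsoft Audio"}))  # False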

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: example: secure computing kernel needed

2003-12-22 Thread William Arbaugh

I agree with everything you say, David, until here.

As for remote attestation, it's true that it does not directly let a remote
party control your computer.  I never claimed that.  Rather, it enables
remote parties to exert control over your computer in a way that is
not possible without remote attestation.  The mechanism is different,
but the end result is similar.


If that is the case, then strong authentication provides the same 
degree of control over your computer. With remote attestation, the 
distant end determines if they wish to communicate with you based on 
the fingerprint of your configuration. With strong authentication, the 
distant end determines if they wish to communicate with you based on 
your identity.

I just don't see remote attestation as providing control over your 
computer provided the user/owner has control over when and if remote 
attestation is used. Further, I can think of several instances where 
remote attestation is a good thing. For example, a privacy-preserving P2P 
file-sharing network. You wouldn't want to share your files with an 
RIAA-modified version of the program that's designed to break the anonymity 
of the network.

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: example: secure computing kernel needed

2003-12-22 Thread David Wagner
William Arbaugh  wrote:
>On Dec 16, 2003, at 5:14 PM, David Wagner wrote:
>> Jerrold Leichter  wrote:
>>> We've met the enemy, and he is us.  *Any* secure computing kernel 
>>> that can do
>>> the kinds of things we want out of secure computing kernels, can also 
>>> do the
>>> kinds of things we *don't* want out of secure computing kernels.
>>
>> I don't understand why you say that.  You can build perfectly good
>> secure computing kernels that don't contain any support for remote
>attestation.  It's all about who has control, isn't it?
>>
>There is no control of your system with remote attestation. Remote 
>attestation simply allows the distant end of a communication to 
>determine if your configuration is acceptable for them to communicate 
>with you.

But you missed my main point.  Leichter claims that any secure kernel is
inevitably going to come with all the alleged harms (DRM, lock-in, etc.).
My main point is that this is simply not so.

There are two very different pieces here: that of a secure kernel, and
that of remote attestation.  They are separable.  TCPA and Palladium
contain both pieces, but that's just an accident; one can easily imagine
a Palladium-- that doesn't contain any support for remote attestation
whatsoever.  Whatever you think of remote attestation, it is separable
from the goal of a secure kernel.

This means that we can have a secure kernel without all the harms.
It's not hard to build a secure kernel that doesn't provide any form of
remote attestation, and almost all of the alleged harms would go away if
you remove remote attestation.  In short, you *can* have a secure kernel
without having all the kinds of things we don't want.  Leichter's claim
is wrong.

This is an important point.  It seems that some TCPA and Palladium
advocates would like to tie together security with remote attestation; it
appears they would like you to believe you can't have a secure computer
without also enabling DRM, lock-in, and the other harms.  But that's
simply wrong.  We can have a secure computer without enabling all the
alleged harms.  If we don't like the effects of TCPA and Palladium,
there's no reason we need to accept them.  We can have perfectly good
security without TCPA or Palladium.

As for remote attestation, it's true that it does not directly let a remote
party control your computer.  I never claimed that.  Rather, it enables
remote parties to exert control over your computer in a way that is
not possible without remote attestation.  The mechanism is different,
but the end result is similar.

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: example: secure computing kernel needed

2003-12-22 Thread Ed Reed
Remote attestation has use in applications requiring accountability of
the user, as a way for cooperating processes to satisfy themselves that
configurations and state are as they're expected to be, and not screwed
up somehow.
 
There are many business uses for such things, like checking to see
if locked down kiosk computers have been modified (either hardware
or software), verifying that users have not exercised their god-given
right to install spy-ware and viruses (since they're running with
administrative privileges, aren't they?), and satisfying a consumer
that the server they're connected to is (or isn't) running software
that has adequate security domain protections to protect the user's
data (perhaps backup files) the user entrusts to the server.
 
What I'm not sure of is whether there are any anonymous / privacy
enhancing scenarios in which remote attestation is useful.  Well, the
last case, above, where the server is attesting to the client could work.
But what about the other way around?  The assumption I have is that
any remote attestation, even if anonymous, still will leave a trail
that might be used by forensic specialists for some form of traffic
analysis, if nothing else.
 
In that case, you'd need to "trust" your "trusted computing system"
not to provide remote attestation without your explicit assent.
 
I'd really like to see an open source effort to provide a high assurance
TPM implementation, perhaps managed through a Linux 2.6 / LSM /
TPM driver talking to a TPM module.  Yes, the TPM identity and integrity
will still be rooted in its manufacturer (IBM, Intel, Asus, SiS, whomever).
But hell, we're already trusting them not to put TCP stacks into the BIOS
for PAL chips to talk to their evil bosses back in [fill in location of
your favorite evil empire, here]. Oh, wait a minute - Phoenix is working
on that, too, aren't they?
 
I see the TPM configuration management tool as a way to provide
a trusted boot path, complete with automagical inventory of "approved"
hardware devices, so that evaluated operating systems, like Solaris
and Linux, can know whether they're running on hardware whose firmware
and circuitry are known (or believed) not to have been subverted, or to
have certain EMI / Tempest characteristics.  Mass market delivery of
what are usually statically configured systems that still retain
their C2/CC-EAL4 ratings.
 
But more important is where TPM and TCPA lead Intel and IBM, towards
increasing virtualization of commodity hardware, like Intel's LaGrande
strategy to restore a "trusted protection ring" (-1) to their processors,
which will make it easier to get real, proper virtualization with
trusted hypervisors back into common use.
 
The fact that Hollywood thinks they can use the technology, and thus
they're willing to underwrite its development, is fortuitous, as long
as the trust is based on open transparent reviews and certifications.
 
Maybe the FSF and EFF will create their own certification program, to
review and bless TPM "ring -1" implementations, just to satisfy the
slashdot crowd...
 
Maybe they should.
 
Ed

>>> William Arbaugh <[EMAIL PROTECTED]> 12/18/2003 5:33:00 PM >>>


On Dec 16, 2003, at 5:14 PM, David Wagner wrote:

> Jerrold Leichter  wrote:
>> We've met the enemy, and he is us.  *Any* secure computing kernel 
>> that can do
>> the kinds of things we want out of secure computing kernels, can also 
>> do the
>> kinds of things we *don't* want out of secure computing kernels.
>
> I don't understand why you say that.  You can build perfectly good
> secure computing kernels that don't contain any support for remote
> attestation.  It's all about who has control, isn't it?
>
>
There is no control of your system with remote attestation. Remote 
attestation simply allows the distant end of a communication to 
determine if your configuration is acceptable for them to communicate 
with you. As such, remote attestation allows communicating parties to 
determine with whom they communicate or share services. In that 
respect, it is just like caller id. People should be able to either 
attest remotely, or block it just like caller id. Just as the distant 
end can choose to accept or not accept the connection.

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: example: secure computing kernel needed

2003-12-20 Thread William Arbaugh
On Dec 16, 2003, at 5:14 PM, David Wagner wrote:

Jerrold Leichter  wrote:
We've met the enemy, and he is us.  *Any* secure computing kernel 
that can do
the kinds of things we want out of secure computing kernels, can also 
do the
kinds of things we *don't* want out of secure computing kernels.
I don't understand why you say that.  You can build perfectly good
secure computing kernels that don't contain any support for remote
attestation.  It's all about who has control, isn't it?

There is no control of your system with remote attestation. Remote 
attestation simply allows the distant end of a communication to 
determine if your configuration is acceptable for them to communicate 
with you. As such, remote attestation allows communicating parties to 
determine with whom they communicate or share services. In that 
respect, it is just like caller id. People should be able to either 
attest remotely, or block it just like caller id. Just as the distant 
end can choose to accept or not accept the connection.

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: example: secure computing kernel needed

2003-12-18 Thread David Wagner
Jerrold Leichter  wrote:
>We've met the enemy, and he is us.  *Any* secure computing kernel that can do
>the kinds of things we want out of secure computing kernels, can also do the
>kinds of things we *don't* want out of secure computing kernels.

I don't understand why you say that.  You can build perfectly good
secure computing kernels that don't contain any support for remote
attestation.  It's all about who has control, isn't it?

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: example: secure computing kernel needed

2003-12-15 Thread bear


On Sun, 14 Dec 2003, Jerrold Leichter wrote:

>Which brings up the interesting question:  Just why are the reactions to TCPA
>so strong?  Is it because MS - who no one wants to trust - is involved?  Is
>it just the pervasiveness:  Not everyone has a smart card, but if TCPA wins
>out, everyone will have this lump inside of their machine.

It is because this lump which we have no control over (aside from
the trivial degree of control implied by simply refusing to use it
at all) is proposed for presence inside machines which we use for
doing things important to us.

Most of us have a relatively few applications for such a device,
and we want to keep those applications completely separate from our
other use of our computers.  A dongle is more acceptable than the
TCPA hardware because it can be detached from the computer leaving
a usable machine, and because in order to reach a broad market you
cannot write software assuming its existence.

I would not object to a tamper-resistant stainless-steel
hardware token that I needed to carry with me in order to access
financial transactions (or whatever).  That's a hardware token
with a single application, which is not at all mixed up with or
involved with the fundamental hardware or software that I depend
on for all my other applications.

But I do object, in strongest possible terms, to the proposal to
weld some device into my personal computer, give it the highest
privilege mode, allow it to read or write arbitrary data on the
bus or the network interface, forbid me from looking inside it
or altering its contents, and allow it to communicate on my behalf
to unknown hosts over the internet.

I like to think that I am the person who owns my machine and that
ownership carries with it the privilege of deciding what to run
or not run on it.  TCPA assigns to others the privilege of blocking
basic, ordinary functionality if they don't know or like some
program I'm running.  But what programs I'm running on my machine
in my home is not their business unless they are trying to literally
take control of my machine away from me.

If they've got stuff that needs to be done in a secure environment
and they don't trust me to run a machine to do it on, let them run
it on their own machines rather than taking mine over by proxy.
Fair's fair; *I* own this one; *They* own that one.  What either
of us doesn't trust the other with, we must run ourselves.

I believe that if TCPA or something like it is adopted, vendors
will respond by ceasing to make any applications that are at all
useful on machines where it is not present, enabled, and loaded
with some specified default configuration that basically gives
them all ownership rights to my machines.  In a world where basic
functionality depends on such applications, no one has any choice
any more about whether to enable it or what to run on it.

>I think many of the reasons people will give will turn out, on close
>reflection, to be invalid.  Sure, you can choose not to buy software that uses
>dongles - and you'll be able to choose software that doesn't rely on TCPA.

I do not believe that the long-term goals of the TCPA partners are
consistent with the continued feasibility of operating machines
that don't rely on TCPA.

>I think the real threat of TCPA is not in any particular thing it does, but in
>that it, in effect, 'renders the world safe for dongles'.  MS *could* today require
>that you have a dongle to use Word - but to do so, even with their monopoly
>power, would be to quickly lose the market.  Dongles are too inconvenient, and
>carry too much baggage.  But when the dongle comes pre-installed on every
>machine, the whole dynamic changes.

Indeed.  I cannot comprehend that you have such a complete grasp of
the problem but don't find that a very compelling argument *against*
the TCPA mechanism.

Remember that the world suffered through seven centuries of imprimaturs
before freedom of the press was recognized as fundamental to liberty. I
think that freedom and self-determination in computing applications is
equally important and that the TCPA is a step toward a technology that
would enable the same kind of struggle over that freedom.

A secure kernel is a kernel that the *owner* of the machine can trust.

Bear

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: example: secure computing kernel needed

2003-12-15 Thread Jerrold Leichter
| When it comes to the PC's operating system,
| there is apparently no economic way to achieve
| what you suggest - ensuring that it hasn't
| been tampered with - so few bother to worry
| about it.  If more security is desired, the
| preferred method is to bypass the PC's OS
| completely.
...which is, I would think, the only reasonable approach!  In fact, even
TCPA takes exactly this approach.  And it's hardly new, from identification (a
SecureID token is a separate, secure box, not a piece of software to run on
your PC) to IP protection (dongles being exactly tamper-resistant secure
computing kernels).

In no case do you, as the end-user, have any real say over what the secure
kernel does or how it is used.  By its nature, it operates, for the most part,
outside of your control.

Which brings up the interesting question:  Just why are the reactions to TCPA
so strong?  Is it because MS - who no one wants to trust - is involved?  Is
it just the pervasiveness:  Not everyone has a smart card, but if TCPA wins
out, everyone will have this lump inside of their machine.

I think many of the reasons people will give will turn out, on close
reflection, to be invalid.  Sure, you can choose not to buy software that uses
dongles - and you'll be able to choose software that doesn't rely on TCPA.
(In both cases, depending on the kind of software, you may find that your
choice is "run it our way, or do without".)  You can choose not to use a bank
that requires you to have a smartcard - but in practice you would be choosing
less security.


We've met the enemy, and he is us.  *Any* secure computing kernel that can do
the kinds of things we want out of secure computing kernels, can also do the
kinds of things we *don't* want out of secure computing kernels.  If the
kernel can produce *our* unforgeable signature, it can produce someone else's
as well.  Sure, we can decline to allow our secure computing kernel to be used
for that purpose - but someone else may then choose not to do business with
us.

I think the real threat of TCPA is not in any particular thing it does, but in
that it, in effect, 'renders the world safe for dongles'.  MS *could* today require
that you have a dongle to use Word - but to do so, even with their monopoly
power, would be to quickly lose the market.  Dongles are too inconvenient, and
carry too much baggage.  But when the dongle comes pre-installed on every
machine, the whole dynamic changes.
-- Jerry

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: example: secure computing kernel needed

2003-12-14 Thread Anne & Lynn Wheeler
At 07:25 PM 12/11/2003 -0500, Paul A.S. Ward wrote:
>I'm not sure why no one has considered the PC banking problem to be a
>justification for secure computing.  Specifically, how does a user know
>their computer has not been tampered with when they wish to use it for
>banking access.
actually the EU FINREAD (financial reader) standard is quite directed at 
this area. basically a secure entry/display/token-interface device. part of 
the issue is not skimming any pin-entry that may be assumed as possible 
with just about all keyboard-based entry (aka tamper evident device  
supposedly somewhat consumer equivalent of the TSM ... trusted security 
module and tamper evident guidelines for point-of-sale terminals). In 
effect, finread is isolating some set of secure components into a tamper 
evident housing that has something akin to a trusted security module.

the other aspect somewhat shows up in the digital signature area. 
fundamentally a digital signature may be used for authenticating (and 
message integrity) ... but not, by itself as to "agreement" in the legal 
signature sense. the issue is how to create an environment/infrastructure 
for supporting both straight-forward authentication as well as 
intention/agreement

in theory finread has the ability to securely display the value of a 
transaction (and possibly other necessary details) and then requires a PIN 
entry after the display as evidence of

1) something you know authentication
2) being able to infer agreement with the transaction.
pretty much assumed is that finread implies some sort of token acceptor 
device ... which in turn implies a "something you have" token authentication.

so finread is attempting to both address two-factor authentication (and 
possibly three if biometric is also supported) as well as establish some 
environment related for inferring agreement/intention/etc as required per 
legal signature.

possibly overlooked in the base eu finread work is being able to prove that 
the transaction actually took place with a real finread device as opposed 
to some other kind of environment. In the (financial standard) X9A10 
working group on the X9.59 financial standard for all electronic retail 
payments we spent some amount of time on not precluding that the signing 
environment could also sign the transaction i.e.

1) amount displayed on the secure display,
2) pin/biometric securely entered (after display occurs)
3) token digitally signs (after pin/biometric entered)
4) finread terminal digital signs
the 2nd & 3rd items (alone) are two (or three) factor authentication. 
however, in conjunction with the first and fourth items, there is some level 
of assurance that the person agrees with the transaction.
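
A rough sketch of those four steps (illustrative Python using the
`cryptography` package, not the FINREAD or X9.59 message formats):

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

token_key = Ed25519PrivateKey.generate()    # key held in the user's card/token
reader_key = Ed25519PrivateKey.generate()   # key held in the FINREAD-class terminal

def finread_flow(txn: bytes, pin_entered: bool):
    print("DISPLAY:", txn.decode())                # 1) amount shown on the secure display
    if not pin_entered:                            # 2) PIN entered only after the display
        raise PermissionError("no PIN entry, no signature")
    token_sig = token_key.sign(txn)                # 3) token signs the transaction
    reader_sig = reader_key.sign(txn + token_sig)  # 4) terminal countersigns
    return token_sig, reader_sig

token_sig, reader_sig = finread_flow(b"pay EUR 25.00 to example-merchant",
                                     pin_entered=True)
# Verifying both signatures gives two-factor authentication plus some
# evidence that the amount was displayed before the user approved it.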

lots of past finread references:
http://www.garlic.com/~lynn/aepay7.htm#3dsecure 3D Secure Vulnerabilities? 
Photo ID's and Payment Infrastructure
http://www.garlic.com/~lynn/aepay11.htm#53 Authentication white paper
http://www.garlic.com/~lynn/aepay11.htm#54 FINREAD was. Authentication 
white paper
http://www.garlic.com/~lynn/aepay11.htm#55 FINREAD ... and as an aside
http://www.garlic.com/~lynn/aepay11.htm#56 FINREAD was. Authentication 
white paper
http://www.garlic.com/~lynn/aadsm10.htm#keygen2 Welome to the Internet, 
here's your private key
http://www.garlic.com/~lynn/aadsm11.htm#4 AW: Digital signatures as proof
http://www.garlic.com/~lynn/aadsm11.htm#5 Meaning of Non-repudiation
http://www.garlic.com/~lynn/aadsm11.htm#6 Meaning of Non-repudiation
http://www.garlic.com/~lynn/aadsm11.htm#23 Proxy PKI. Was: IBM alternative 
to PKI?
http://www.garlic.com/~lynn/aadsm12.htm#24 Interests of online banks and 
their users [was Re: Cryptogram:  Palladium Only for DRM]
http://www.garlic.com/~lynn/aadsm14.htm#35 The real problem that https has 
conspicuously failed to fix
http://www.garlic.com/~lynn/aadsm15.htm#40 FAQ: e-Signatures and Payments
http://www.garlic.com/~lynn/aadsm9.htm#carnivore Shades of FV's Nathaniel 
Borenstein: Carnivore's "Magic Lantern"
http://www.garlic.com/~lynn/2001g.html#57 Q: Internet banking
http://www.garlic.com/~lynn/2001g.html#60 PKI/Digital signature doesn't work
http://www.garlic.com/~lynn/2001g.html#61 PKI/Digital signature doesn't work
http://www.garlic.com/~lynn/2001g.html#62 PKI/Digital signature doesn't work
http://www.garlic.com/~lynn/2001g.html#64 PKI/Digital signature doesn't work
http://www.garlic.com/~lynn/2001i.html#25 Net banking, is it safe???
http://www.garlic.com/~lynn/2001i.html#26 No Trusted Viewer possible?
http://www.garlic.com/~lynn/2001k.html#0 Are client certificates really secure?
http://www.garlic.com/~lynn/2001m.html#6 Smart Card vs. Magnetic Strip Market
http://www.garlic.com/~lynn/2001m.html#9 Smart Card vs. Magnetic Strip Market
http://www.garlic.com/~lynn/2002c.html#10 Opinion on smartcard security 
requested
http://www.garlic.com/~lynn/2002c.html#21 Opinion on smartcard security 
requested
http://www.garlic.com/~lynn/2002f.html#46 Security Issues of using Internet 
Banking
http:/

Re: example: secure computing kernel needed

2003-12-14 Thread Bill Stewart
At 02:41 PM 12/14/2003 +, Dave Howe wrote:
Paul A.S. Ward wrote:
> I'm not sure why no one has considered the PC banking problem to be a
> justification for secure computing.  Specifically, how does a user
> know their computer has not been tampered with when they wish to use
> it for banking access.
I think PC banking is an argument *against* Secure Computing as currently
proposed - there is no way to discover if there is a nasty "running" in
protected memory or to remove it if there is.
Agreed.  It's a better argument for booting from a known CDROM distribution.

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: example: secure computing kernel needed

2003-12-14 Thread Ian Grigg
"Paul A.S. Ward" wrote:
> 
> I'm not sure why no one has considered the PC banking problem to be a
> justification for secure computing.  Specifically, how does a user know
> their computer has not been tampered with when they wish to use it for
> banking access.

It is and it has been.  Just not so much in
North America, and not in sense of making
the PC secure.

In Europe, the smart card field routinely
decided that trusted devices were required
to access the smart cards.  Such devices
were created and distributed.  Smart cards
are very expensive, though, and "free"
Internet banking dampened the enthusiasm
somewhat.

When it came to Internet banking, there
was much more of an emphasis on cost control,
and a range of cheap challenge response
hardware tokens are used to authenticate
each transaction.

In both these modes, the banks used secure
computing, but they did it by providing a
secure computer other than the PC [1].

When it comes to the PC's operating system,
there is apparently no economic way to achieve
what you suggest - ensuring that it hasn't
been tampered with - so few bother to worry
about it.  If more security is desired, the
preferred method is to bypass the PC's OS
completely.


iang

[1] Note that I use the term "secure" here
in a relative sense.

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: example: secure computing kernel needed

2003-12-14 Thread Dave Howe
Paul A.S. Ward wrote:
> I'm not sure why no one has considered the PC banking problem to be a
> justification for secure computing.  Specifically, how does a user
> know their computer has not been tampered with when they wish to use
> it for banking access.
I think PC banking is an argument *against* Secure Computing as currently
proposed - there is no way to discover if there is a nasty "running" in
protected memory or to remove it if there is.

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: example: secure computing kernel needed

2003-12-14 Thread Anne & Lynn Wheeler
At 12:02 PM 12/10/2003 -0500, John S. Denker wrote:
Previous discussions of secure computing technology have
been in some cases sidetracked and obscured by extraneous
notions such as
 -- Microsoft is involved, therefore it must be evil.
 -- The purpose of secure computing is DRM, which is
intrinsically evil ... computers must be able to
copy anything anytime.
there have been other discussions about multics and the paper from a year 
ago about not having a lot of the current vulnerabilities ... some comment 
that security has to be designed in from the start. misc. past refs to 
multics study
http://www.garlic.com/~lynn/2002e.html#47 Multics_Security
http://www.garlic.com/~lynn/2002l.html#42 Thirty Years Later: Lessons from 
the Multics Security Evaluation
http://www.garlic.com/~lynn/2002l.html#44 Thirty Years Later: Lessons from 
the Multics Security Evaluation
http://www.garlic.com/~lynn/2002l.html#45 Thirty Years Later: Lessons from 
the Multics Security Evaluation
http://www.garlic.com/~lynn/2002m.html#8 Backdoor in AES ?
http://www.garlic.com/~lynn/2002m.html#10 Backdoor in AES ?
http://www.garlic.com/~lynn/2002m.html#58 The next big things that weren't
http://www.garlic.com/~lynn/2002o.html#78 Newsgroup cliques?
http://www.garlic.com/~lynn/2002p.html#6 unix permissions
http://www.garlic.com/~lynn/2003b.html#0 Disk drives as commodities. Was 
Re: Yamhill
http://www.garlic.com/~lynn/2003i.html#59 grey-haired assembler programmers 
(Ritchie's C)
http://www.garlic.com/~lynn/2003j.html#4 A Dark Day
http://www.garlic.com/~lynn/2003k.html#3 Ping:  Anne & Lynn Wheeler
http://www.garlic.com/~lynn/2003k.html#48 Who said DAT?
http://www.garlic.com/~lynn/2003l.html#19 Secure OS Thoughts
http://www.garlic.com/~lynn/2003m.html#1 Password / access rights check
http://www.garlic.com/~lynn/2003o.html#5 perfomance vs. key size

there is also a number of discussions about the gnosis, keykos, eros 
lineage ... random refs to gnosis, keykos, &/or eros
http://www.garlic.com/~lynn/2000f.html#69 TSS ancient history, was X86 
ultimate CISC? designs)
http://www.garlic.com/~lynn/2000g.html#22 No more innovation?  Get serious
http://www.garlic.com/~lynn/2001b.html#73 7090 vs. 7094 etc.
http://www.garlic.com/~lynn/2001g.html#33 Did AT&T offer Unix to Digital 
Equipment in the 70s?
http://www.garlic.com/~lynn/2001g.html#35 Did AT&T offer Unix to Digital 
Equipment in the 70s?
http://www.garlic.com/~lynn/2001n.html#10 TSS/360
http://www.garlic.com/~lynn/2002f.html#59 Blade architectures
http://www.garlic.com/~lynn/2002g.html#0 Blade architectures
http://www.garlic.com/~lynn/2002g.html#4 markup vs wysiwyg (was: Re: 
learning how to use a computer)
http://www.garlic.com/~lynn/2002h.html#43 IBM doing anything for 50th Anniv?
http://www.garlic.com/~lynn/2002i.html#63 Hercules and System/390 - do we 
need it?
http://www.garlic.com/~lynn/2002j.html#75 30th b'day
http://www.garlic.com/~lynn/2003g.html#18 Multiple layers of virtual 
address translation
http://www.garlic.com/~lynn/2003h.html#41 Segments, capabilities, buffer 
overrun attacks
http://www.garlic.com/~lynn/2003i.html#15 two pi, four phase, 370 clone
http://www.garlic.com/~lynn/2003j.html#20 A Dark Day
http://www.garlic.com/~lynn/2003k.html#50 Slashdot: O'Reilly On The 
Importance Of The Mainframe Heritage
http://www.garlic.com/~lynn/2003l.html#19 Secure OS Thoughts
http://www.garlic.com/~lynn/2003l.html#22 Secure OS Thoughts
http://www.garlic.com/~lynn/2003l.html#26 Secure OS Thoughts
http://www.garlic.com/~lynn/2003m.html#24 Intel iAPX 432
http://www.garlic.com/~lynn/2003m.html#54 Thoughts on Utility Computing?

there is some number of efforts being done taking advantage of itanium-2 
hardware features (at least one such project in m'soft).
--
Anne & Lynn Wheeler    http://www.garlic.com/~lynn/
Internet trivia 20th anv http://www.garlic.com/~lynn/rfcietff.htm
 

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: example: secure computing kernel needed

2003-12-14 Thread Stefan Lucks
On Wed, 10 Dec 2003, John S. Denker wrote:

> Scenario:  You are teaching chemistry in a non-anglophone
> country.  You are giving an exam to see how well the
> students know the periodic table.
>   -- You want to allow students to use their TI-83 calculators
>  for *calculating* things.
>   -- You want to allow the language-localization package.
>   -- You want to disallow the app that stores the entire
>  periodic table, and all other apps not explicitly
>  approved.

First "Solution": Erease and load by hand
=

What would be wrong with
  1. ereasing the memories of the students' calculators
  2. loading the approved apps and data
immediately before the exam? (I assume, the students can't load
un-approved applications during the exam.)

(This is what some our teachers actually did when I went to school.
 Since there where no approved apps and data, step 2 was trivial. ;-)


> The hardware manufacturer (TI) offers a little program
> that purports to address this problem
>http://education.ti.com/us/product/apps/83p/testguard.html
> but it appears to be entirely non-cryptologic and therefore
> easily spoofed.

Why?


2. "Solution": testguard and the like
=

  1. Load and
  2. run
a trusted application with full access to all resources (including storage
for applications and data, and CPU time, thus blocking all the other stuff
which might be running in parallel), nothing can prevent this application
from deleting all non-approved appliations and data.

I am not sure, what testguard actually does, but the above is, what it
*should* do.
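
A sketch of what such a tool should do, under the assumption that it runs
with full access and nothing else runs in parallel (hypothetical, not TI's
actual testguard):

import hashlib

APPROVED = {hashlib.sha256(b"language-localization package").hexdigest(),
            hashlib.sha256(b"built-in calculation firmware").hexdigest()}

def exam_lockdown(storage: dict) -> dict:
    # Keep only apps/tables whose hash is on the approved list; delete the rest.
    kept = {name: data for name, data in storage.items()
            if hashlib.sha256(data).hexdigest() in APPROVED}
    removed = sorted(set(storage) - set(kept))
    print("removed before the exam:", removed)
    return kept

storage = {"localization": b"language-localization package",
           "periodic":     b"full periodic table app"}
storage = exam_lockdown(storage)   # only the approved package survives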

The existence of a trusted kernel would only complicate things, not
simplify them. (You would have to make sure that your application is running
in the highest-privilege mode ...)


I think both of my proposed "solutions" would actually solve your
problem. Else, please describe your threat model!

Without understanding your problem, no cryptographer can provide any
solution. And if (given a proper definition of the problem) it turns out
that there is a non-cryptographic solution which works -- so what?


-- 
Stefan Lucks  Th. Informatik, Univ. Mannheim, 68131 Mannheim, Germany
e-mail: [EMAIL PROTECTED]
home: http://th.informatik.uni-mannheim.de/people/lucks/
--  I  love  the  smell  of  Cryptanalysis  in  the  morning!  --

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: example: secure computing kernel needed

2003-12-14 Thread Paul A.S. Ward
I'm not sure why no one has considered the PC banking problem to be a
justification for secure computing.  Specifically, how does a user know
their computer has not been tampered with when they wish to use it for
banking access.
Paul

John S. Denker wrote:

Previous discussions of secure computing technology have
been in some cases sidetracked and obscured by extraneous
notions such as
 -- Microsoft is involved, therefore it must be evil.
 -- The purpose of secure computing is DRM, which is
intrinsically evil ... computers must be able to
copy anything anytime.
Now, in contrast, here is an application that begs for
a secure computing kernel, but has nothing to do with
microsoft and nothing to do with copyrights.
Scenario:  You are teaching chemistry in a non-anglophone
country.  You are giving an exam to see how well the
students know the periodic table.
 -- You want to allow students to use their TI-83 calculators
for *calculating* things.
 -- You want to allow the language-localization package.
 -- You want to disallow the app that stores the entire
periodic table, and all other apps not explicitly
approved.
The hardware manufacturer (TI) offers a little program
that purports to address this problem
  http://education.ti.com/us/product/apps/83p/testguard.html
but it appears to be entirely non-cryptologic and therefore
easily spoofed.
I leave it as an exercise for the reader to design a
calculator with a secure kernel that is capable of
certifying something to the effect that "no apps and
no data tables (except for ones with the following
hashes) have been accessible during the last N hours."
Note that I am *not* proposing reducing the functionality
of the calculator in any way.  Rather I am proposing a
purely additional capability, namely the just-mentioned
certification capability.
I hope this example will advance the discussion of secure
computing.  Like almost any powerful technology, we need
to discuss
 -- the technology *and*
 -- the uses to which it will be put
... but we should not confuse the two.


--

Paul A.S. Ward, Assistant Professor  Email: [EMAIL PROTECTED]
University of Waterloo  [EMAIL PROTECTED]
Department of Computer Engineering   Tel: +1 (519) 888-4567 ext.3127
Waterloo, OntarioFax: +1 (519) 746-3077
Canada N2L 3G1   URL: http://www.ccng.uwaterloo.ca/~pasward


-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


Re: example: secure computing kernel needed

2003-12-14 Thread Ian Grigg
"John S. Denker" wrote:

> I leave it as an exercise for the reader to design a
> calculator with a secure kernel that is capable of
> certifying something to the effect that "no apps and
> no data tables (except for ones with the following
> hashes) have been accessible during the last N hours."


Sounds like Eros & E & capabilities.  There have been
other efforts in this in the past, going back some
time, but it seems that Eros/E/Caps represents the most
advanced in general mainstream "prove this is so" computing.


iang

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]