Re: Another application for trusted computing

2002-08-13 Thread Mike Rosing

On Mon, 12 Aug 2002, AARG! Anonymous wrote:

 Ideally you'd like your agent to truly be autonomous, with its own data,
 its own code, all protected from the host and other agents.  It could even
 carry a store of electronic cash which it could use to fund its activities
 on the host machine.  It could remember its interactions on earlier
 machines in an incorruptible way.  And you'd like it to run efficiently,
 without the enormous overheads of the cryptographic techniques.

Yeah, it'd be ideal for the CIA and FBI and KGB and Mossad and MI5 and ...
The perfect virus, unseen and untouchable.

 Superficially such a capability seems impossible.  Agents can't have that
 kind of autonomy.  But trusted computing can change this.  It can give
 agents good protection as they move through the net.

 Imagine that host computers run a special program, an Agent Virtual
 Machine or AVM.  This program runs the agents in their object language,
 and it respects each agent's code and data.  It does not corrupt the
 agents, it does not manipulate or copy their memory without authorization
 from the agent itself.  It allows the agents to act in the autonomous fashion
 we would desire.

who's we?

 Without trusted computing, the problem of course is that there is no
 way to be sure that a potential host is running a legitimate version of
 the AVM.  It could have a hacked AVM that would allow it to steal cash
 from the agents, change their memory, and worse.

Yeah, much worse - it might let the user know that somebody was watching
them!

 In this way, trusted computing can solve one of the biggest problems
 with effective use of mobile agents.  Trusted computing finally allows
 mobile agent technology to work right.

I don't see the perfect virus as something desirable.

 This is just one of what I expect to be thousands of applications which
 can take advantage of the trusted computing concept.  Once you have a
 whole world of people trying to think creatively about how to use this
 technology, rather than just a handful, there will be an explosion of
 new applications which today we would never dream are possible.

Dude, you seem to be on some really nice drugs.  Can you get me some?

Patience, persistence, truth,
Dr. mike




Re: Is TCPA broken?

2002-08-13 Thread Joseph Ashwood

I need to correct myself.
- Original Message -
From: Joseph Ashwood [EMAIL PROTECTED]

 Suspiciously absent though is the requirement for symmetric encryption
 (page 4 is easiest to see this). This presents a potential security issue,
 and certainly a barrier to its use for non-authentication/authorization
 purposes. This is by far the biggest potential weak point of the system.
 No server designed to handle the quantity of connections necessary to do
 this will have the ability to decrypt/sign/encrypt/verify enough data for
 the purely theoretical universal DRM application.

I need to correct this: DES and 3DES are requirements; AES is optional. This
functionality appears to be in the TSS. However, I can find very few
references to its usage, and all of those seem to be thoroughly wrapped in
numerous layers of SHOULD and MAY. Since this is solely the realm of the TSS
(which had its command removed July 12, 2001, making this certainly
incomplete), it is only accessible through a few commands (I won't bother
with VerifySignature). However, looking at TSS_Bind, it says explicitly on
page 157: "To bind data that is larger than the RSA public key modulus it is
the responsibility of the caller to perform the blocking," indicating that
the expected implementation is RSA only. The alternative is wrapping the
key, but that is clearly targeted at using RSA to encrypt a key. As for the
Identity commands, these appear to use a symmetric key, but deal strictly
with TPM_IDENTITY_CREDENTIAL. Regardless, the TSS is a software entity
(although it may be assisted by hardware), and this in and of itself
presents some interesting side-effects for security.
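
The blocking the spec mentions is just chunking: the caller splits the data
into pieces that each fit in a single RSA operation. A minimal sketch of that
caller-side blocking in Python -- using the generic cryptography library and
OAEP purely for illustration, not the actual TSS_Bind interface:

# Caller-side "blocking": split data into chunks that each fit in one RSA
# operation. Illustrative only -- generic library, not the TPM/TSS API.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pub = key.public_key()
oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# 2048-bit modulus (256 bytes) minus OAEP/SHA-256 overhead (2*32 + 2 bytes).
CHUNK = 256 - 66

def bind_blocked(data):
    """Encrypt data larger than the modulus as a series of RSA blocks."""
    return [pub.encrypt(data[i:i + CHUNK], oaep)
            for i in range(0, len(data), CHUNK)]

blocks = bind_blocked(b"x" * 1000)                      # 1000 bytes, 6 blocks
assert b"".join(key.decrypt(b, oaep) for b in blocks) == b"x" * 1000
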
Joe




TCPA and Open Source

2002-08-13 Thread AARG! Anonymous

One of the many charges that have been tossed at TCPA is that it will
harm free software.  Here is what Ross Anderson writes in the TCPA FAQ
at http://www.cl.cam.ac.uk/~rja14/tcpa-faq.html (question 18):

 TCPA will undermine the General Public License (GPL), under which
 many free and open source software products are distributed

 At least two companies have started work on a TCPA-enhanced version of
 GNU/linux. This will involve tidying up the code and removing a number
 of features. To get a certificate from the TCPA consortium, the sponsor
 will then have to submit the pruned code to an evaluation lab, together
 with a mass of documentation showing why various known attacks on the code
 don't work.

First we have to deal with this certificate business.  Most readers
probably assume that you need this cert to use the TCPA system, and
even that you would not be able to boot into this Linux OS without such
a cert.  This is part of the longstanding claim that TCPA will only boot
signed code.

I have refuted this claim many times and asked those who disagree to
point to where in the spec it says this, without anyone doing so.  I can
only hope that interested readers are beginning to believe my claim,
since if it were false, somebody would have pointed to chapter and verse
in the TCPA spec, if for no better reason than to shut me up about it.

However, Ross is actually right that TCPA does support a concept for
a certificate that signs code.  It's called a Validation Certificate.
The system can hold a number of these VC's, which represent the presumed
correct results of the measurement (hashing) process on various software
and hardware components.  In the case of OS code, then, there could be
VC's representing specific OS's which could boot.

The point is that while this is a form of signed code, it's not something
which gives the TPM control over what OS can boot.  Instead, the VCs
are used to report to third party challengers (on remote systems) what
the system configuration of this system is supposed to be, along with
what it actually is.  It's up to the remote challenger to decide if he
trusts the issuer of the VC, and if so, he will want to see that the
actual measurement (i.e. the hash of the OS) matches the value in the VC.
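
As a heavily simplified sketch of that challenger-side logic -- the dict
below stands in for the real Validation Certificate (an X.509 attribute
certificate), and all field names are invented for illustration:

import hashlib

TRUSTED_VC_ISSUERS = {"Example OS Evaluation Lab"}   # whoever you trust

def challenger_accepts(vc, actual_measurement_hex):
    # 1. Trust decision: do we trust whoever vouched for this OS?
    if vc["issuer"] not in TRUSTED_VC_ISSUERS:
        return False
    # 2. Does the actual measurement match the presumed-correct value?
    return actual_measurement_hex == vc["expected_measurement"]

vc = {"issuer": "Example OS Evaluation Lab",
      "expected_measurement": hashlib.sha1(b"OS image").hexdigest()}
print(challenger_accepts(vc, hashlib.sha1(b"OS image").hexdigest()))      # True
print(challenger_accepts(vc, hashlib.sha1(b"hacked image").hexdigest()))  # False

In the real protocol the actual measurement arrives in a TPM-signed quote
rather than as a bare string, but either way the trust decision belongs to
the challenger, not the TPM.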

So what Ross says above could potentially be true, if and when TCPA
compliant operating systems begin to be developed.  Assuming that there
will be some consortium which will issue VC's for operating systems,
and assuming that third parties will typically trust that consortium and
only that one, then you will need to get a VC from that group in order
to effectively participate in the TCPA network.

This doesn't mean that your PC won't boot the OS without such a cert; it
just means that if most people choose to trust the cert issuer, then you
will need to get a cert from them to get other people to trust your OS.
It's much like the power Verisign has today with X.509; most people's
software trusts certs from Verisign, so in practice you pretty much need
to get a cert from them to participate in the X.509 PKI.

So does this mean that Ross is right, that free software is doomed under
TCPA?  No, for several reasons, not least being a big mistake he makes:

 (The evaluation is at level E3 - expensive enough to keep out
 the free software community, yet lax enough for most commercial software
 vendors to have a chance to get their lousy code through.) Although the
 modified program will be covered by the GPL, and the source code will
 be free to everyone, it will not make full use of the TCPA features
 unless you have a certificate for it that is specific to the Fritz chip
 on your own machine. That is what will cost you money (if not at first,
 then eventually).

The big mistake is the belief that the cert is specific to the Fritz
chip (Ross's cute name for the TPM).

Actually the VC data structure is not specific to any one PC.  It is
intentionally designed not to have any identifying information in it
that will represent a particular system.  This is because the VC cert
has to be shown to remote third parties in order to get them to trust the
local system, and TCPA tries very hard to protect user privacy (believe
it or not!).  If the VC had computer-identifying information in it, then
it would be a linkable identifier for all TCPA interactions on the net,
which would defeat all of the work TCPA does with Privacy CAs and whatnot
to try to protect user privacy.  If you understand this, you will see
that the whole TCPA concept requires VC's not to be machine specific.

People always complain when I point to the spec, as if the use of facts
were somehow unfair in this dispute.  But if you are willing, you can look
at section 9.5.4 of http://www.trustedcomputing.org/docs/main%20v1_1b.pdf,
which is the data structure for the validation certificate.  It is an
X.509 attribute certificate, which is a type of cert that would normally
be expected to point back at the 

Another application for trusted computing

2002-08-13 Thread AARG! Anonymous

I thought of another interesting application for trusted computing
systems: mobile agents.  These are pieces of software which get
transferred from computer to computer, running on each system,
communicating with the local system and other visiting agents,
before migrating elsewhere.

This was a hot technology from a couple of years ago, but it never
really went anywhere (so to speak).  Part of the reason was that there
wasn't that much functionality for agents which couldn't be done better
in other ways.  But a big part of it was problems with security.

One issue was protecting the host from malicious agents, and much work
was done in that direction.  This was one of the early selling points
of Java, and other sandbox systems were developed as well.  Likewise the
E language is designed to solve this problem.

But the much harder problem was protecting the agent from malicious hosts.
Once an agent transferred into a host machine, it was essentially at
the mercy of that system.  The host could lie to the agent, and even
manipulate its memory and program, to make it do anything it desired.
Without the ability to maintain its own integrity, the agent was
relatively useless in many ecommerce applications.

Various techniques were suggested to partially address this, such as
splitting the agent functionality among multiple agents which would run
on different machines, or using cryptographic methods for computing
with encrypted instances and the like.  But these were inherently
so inefficient that any advantages mobile agents might have had were
eliminated compared to such things as web services.

Ideally you'd like your agent to truly be autonomous, with its own data,
its own code, all protected from the host and other agents.  It could even
carry a store of electronic cash which it could use to fund its activities
on the host machine.  It could remember its interactions on earlier
machines in an incorruptible way.  And you'd like it to run efficiently,
without the enormous overheads of the cryptographic techniques.

Superficially such a capability seems impossible.  Agents can't have that
kind of autonomy.  But trusted computing can change this.  It can give
agents good protection as they move through the net.

Imagine that host computers run a special program, an Agent Virtual
Machine or AVM.  This program runs the agents in their object language,
and it respects each agent's code and data.  It does not corrupt the
agents, it does not manipulate or copy their memory without authorization
from the agent itself.  It allows the agents to act in the autonomous fashion
we would desire.

Without trusted computing, the problem of course is that there is no
way to be sure that a potential host is running a legitimate version of
the AVM.  It could have a hacked AVM that would allow it to steal cash
from the agents, change their memory, and worse.

This is where trusted computing can solve the problem.  It allows
agents to verify that a remote system is running a legitimate AVM before
transferring over.  Hacked AVMs will have a different hash and this will
be detected via the trusted computing mechanisms.  Knowing that the remote
machine is running a correct implementation of the AVM allows the agent
to move about without being molested.
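
A hypothetical sketch of the agent-side check (no such AVM exists; the hash
values and the quote-fetching callback here are stand-ins for the trusted
computing challenge/response machinery):

import hashlib

# Stand-in for the published, presumed-correct AVM binary measurement.
KNOWN_GOOD_AVM_HASHES = {hashlib.sha1(b"AVM 1.0 reference image").hexdigest()}

def safe_to_migrate(host, fetch_attested_avm_hash):
    """Only migrate to hosts attesting a known-good AVM measurement."""
    return fetch_attested_avm_hash(host) in KNOWN_GOOD_AVM_HASHES

# Toy usage: an honest host attests the reference image; a hacked one can't.
honest = lambda h: hashlib.sha1(b"AVM 1.0 reference image").hexdigest()
hacked = lambda h: hashlib.sha1(b"AVM 1.0 + cash-stealing patch").hexdigest()
assert safe_to_migrate("hostA", honest)
assert not safe_to_migrate("hostB", hacked)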

In this way, trusted computing can solve one of the biggest problems
with effective use of mobile agents.  Trusted computing finally allows
mobile agent technology to work right.

This is just one of what I expect to be thousands of applications which
can take advantage of the trusted computing concept.  Once you have a
whole world of people trying to think creatively about how to use this
technology, rather than just a handful, there will be an explosion of
new applications which today we would never dream are possible.




Is TCPA broken?

2002-08-13 Thread Joseph Ashwood

- Original Message -
From: Mike Rosing [EMAIL PROTECTED]
 Are you now admitting TCPA is broken?

I freely admit that I haven't made it completely through the TCPA
specification. However, it seems to be, at least in effect although not
exactly, a motherboard-bound smartcard.

Because it is bound to the motherboard (instead of the user) it can be used
for various things, but at the heart it is a smartcard. Also because it
supports the storage and use of a number of private RSA keys (no other type
supported) it provides some interesting possibilities.

Because of this I believe that there is a core that is fundamentally not
broken. It is the extensions to this concept that pose potential breakage.
In fact, looking at page 151 of the TCPA 1.1b spec, it clearly states (typos
are mine) that "the OS can be attacked by a second OS replacing both the
SEALED-block encryption key, and the user database itself." There are
measures taken to make such an attack cryptographically hard, but it
requires the OS to actually do something.

Suspiciously absent though is the requirement for symmetric encryption (page
4 is easiest to see this). This presents a potential security issue, and
certainly a barrier to its use for non-authentication/authorization
purposes. This is by far the biggest potential weak point of the system. No
server designed to handle the quantity of connections necessary to do this
will have the ability to decrypt/sign/encrypt/verify enough data for the
purely theoretical universal DRM application.

The second substantial concern is that the hardware is limited in the size
of its private keys, being capped at 2048 bits; additionally, it is bound to
SHA-1. Currently these are both sufficient for security, but in the last
year we have seen realistic claims that 1500-bit RSA may be subject to
viable attack (or alternately may not, depending on who you believe). While
attacks on RSA tend to be spread a fair distance apart, this nevertheless
puts 2048-bit RSA fairly close to the limit of security; it would be much
preferable, from a security standpoint, to support 4096-bit RSA. SHA-1 is
also currently near its limit: it offers 2^80 security, a value that, it can
be argued, may be too small for long-term security.

For the time being TCPA seems to be unbroken: 2048-bit RSA is sufficient,
and SHA-1 is used as a MAC at the important points. For the future, though,
I believe these choices may prove to be a weak point in the system; for
those who would like to attack the system, these are the prime targets. The
secondary target would be forcing debugging to go unaddressed by the OS,
which, since there is no provision for on-chip execution (except in
extremely small quantities, just as in a smartcard), would reveal very
nearly everything (including the data desired).
Joe




Re: TCPA and Open Source

2002-08-13 Thread James A. Donald

--
On 13 Aug 2002 at 0:05, AARG! Anonymous wrote:
 The point is that while this is a form of signed code, it's not 
 something which gives the TPM control over what OS can boot. 
 Instead, the VCs are used to report to third party challengers 
 (on remote systems) what the system configuration of this system 
 is supposed to be, along with what it actually is.

It does, however, enable the state to control what OS one can boot
if one wishes to access the internet.

It does not seem to me that the TPM is likely to give hollywood 
what it wants, unless it is backed by such state enforcement.

Furthermore, since the TPM gets first whack at boot up, a simple
code download to the TPM could change the meaning of the
signature, so that the machine will not boot unless running a
state authorized operating system.

It could well happen that TPM machines become required to go on
the internet, and then later only certain operating systems are
permitted on the internet, and then later the required operating
system upgrades the TPM software so that only authorized operating
systems boot at all.

--digsig
 James A. Donald
 6YeGpsZR+nOTh/cGwvITnSR3TdzclVpR0+pr3YYQdkG
 H/t91jm8hq5pLR2AdFYi2lRoV9AKYBZ7WqqJmKFe
 2/IFQaW0fl6ec+TL3iMKMxD6Y0ulGDK7RwqTVJlBQ




Re: Seth on TCPA at Defcon/Usenix

2002-08-13 Thread Mike Rosing

On Tue, 13 Aug 2002, James A. Donald wrote:

 To me DRM seems possible to the extent that computers themselves
 are rendered tamper resistant -- that is to say rendered set top
 boxes not computers, to the extent that unauthorized personnel are
 prohibited from accessing general purpose computers.

But even then, if it's perceptible to a human in some form, it
can be copied.  Suppose it's displayed on a screen in English
and copied with a pencil in Japanese, then sent by Unicode across
the planet.  I agree it'd be mighty hard to copy pictures from
a set top box at video frame rates by hand, but there are many
musicians who can hear a song once and play it again perfectly.

All it takes is one person who has valid access and they can copy
anything.  It may take a lot of expensive equipment and be hard to
do, but they don't have to crack anything, they can just copy the
human perceptible data onto a machine that doesn't have any DRM
crap.

This is what makes the whole analog hole idea idiotic.  Humans are
analog - they can copy the data!  To plug the analog hole Hollywood
will have to control every human mind directly.

 To me, TCPA only makes sense as a step towards some of the more
 monstrous outcomes that have been suggested by myself and others
 on this list.  It does not make sense as a final destination, but
 only as a first step on a path.

Yeah, it sure seems obvious to me too.  I think preventing that
first step is mighty important.

Patience, persistence, truth,
Dr. mike




Re: trade-offs of secure programming with Palladium (Re: Palladium: technical limits and implications)

2002-08-13 Thread James A. Donald

--
On 12 Aug 2002 at 16:32, Tim Dierks wrote:
 I'm sure that the whole system is secure in theory, but I
 believe that it cannot be securely implemented in practice and
 that the implied constraints on use & usability will be
 unpalatable to consumers and vendors.

Or to say the same thing more pithily, if it really is going to be
voluntary, it really is not going to give hollywood what they
want.  If it really gives hollywood what they want, it is really
going to have to be forced down people's throats.


--digsig
 James A. Donald
 6YeGpsZR+nOTh/cGwvITnSR3TdzclVpR0+pr3YYQdkG
 q/bTmZrGsVk2BT9JgumhMqvjDmyIbiElvtidl9aP
 2/0CXfo6fzHCxpa+SX8o8Jzvyb71S0KzgBs0gDRhN




Re: Challenge to David Wagner on TCPA

2002-08-13 Thread AARG! Anonymous

Brian LaMacchia writes:

 So the complexity isn't in how the keys get initialized on the SCP (hey, it
 could be some crazy little hobbit named Mel who runs around to every machine
 and puts them in with a magic wand).  The complexity is in the keying
 infrastructure and the set of signed statements (certificates, for lack of a
 better word) that convey information about how the keys were generated &
 stored.  Those statements need to be able to represent to other applications
 what protocols were followed and precautions taken to protect the private
 key.  Assuming that there's something like a cert chain here, the root of
 this chain could be an OEM, an IHV, a user, a federal agency, your company,
 etc. Whatever that root is, the application that's going to divulge secrets
 to the SCP needs to be convinced that the key can be trusted (in the
 security sense) not to divulge data encrypted to it to third parties.
 Palladium needs to look at the hardware certificates and reliably tell
 (under user control) what they are. Anyone can decide if they trust the
 system based on the information given; Palladium simply guarantees that it
 won't tell anyone your secrets without your explicit request.

This makes a lot of sense, especially for closed systems like business
LANs and WANs where there is a reasonable centralized authority who can
validate the security of the SCP keys.  I suggested some time back that
since most large businesses receive and configure their computers in
the IT department before making them available to employees, that would
be a time that they could issue private certs on the embedded SCP keys.
The employees' computers could then be configured to use these private
certs for their business computing.

However the larger vision of trusted computing leverages the global
internet and turns it into what is potentially a giant distributed
computer.  For this to work, for total strangers on the net to have
trust in the integrity of applications on each others' machines, will
require some kind of centralized trust infrastructure.  It may possibly
be multi-rooted but you will probably not be able to get away from
this requirement.

The main problem, it seems to me, is that validating the integrity of
the SCP keys cannot be done remotely.  You really need physical access
to the SCP to be able to know what key is inside it.  And even that
is not enough, if it is possible that the private key may also exist
outside, perhaps because the SCP was initialized by loading an externally
generated public/private key pair.  You not only need physical access,
you have to be there when the SCP is initialized.

In practice it seems that only the SCP manufacturer, or at best the OEM
who (re) initializes the SCP before installing it on the motherboard,
will be in a position to issue certificates.  No other central authorities
will have physical access to the chips on a near-universal scale at the
time of their creation and installation, which is necessary to allow
them to issue meaningful certs.  At least with the PGP web of trust
people could in principle validate their keys over the phone, and even
then most PGP users never got anyone to sign their keys.  An effective
web of trust seems much more difficult to achieve with Palladium, except
possibly in small groups that already trust each other anyway.

If we do end up with only a few trusted root keys, most internet-scale
trusted computing software is going to have those roots built in.
Those keys will be extremely valuable, potentially even more so than
Verisign's root keys, because trusted computing is actually a far more
powerful technology than the trivial things done today with PKI.  I hope
the Palladium designers give serious thought to the issue of how those
trusted root keys can be protected appropriately.  It's not going to be
enough to say "it's not our problem."  For trusted computing to reach
its potential, security has to be engineered into the system from the
beginning - and that security must start at the root!




Re: Palladium: technical limits and implications

2002-08-13 Thread Tim Dierks

At 07:30 PM 8/12/2002 +0100, Adam Back wrote:
(Tim Dierks: read the earlier posts about ring -1 to find the answer
to your question about feasibility in the case of Palladium; in the
case of TCPA your conclusions are right I think).

The addition of an additional security ring with a secured, protected 
memory space does not, in my opinion, change the fact that such a ring 
cannot accurately determine that a particular request is consistant with 
any definable security policy. I do not think it is technologically 
feasible for ring -1 to determine, upon receiving a request, that the 
request was generated by trusted software operating in accordance with the 
intent of whomever signed it.

Specifically, let's presume that a Palladium-enabled application is being
used for DRM; a secure & trusted application is asking its secure key
manager to decrypt a content encryption key so it can access properly
licensed code. The OS is valid & signed and the application is valid &
signed. How can ring -1 distinguish a valid request from one which has been
forged by rogue code which used a bug in the OS or any other trusted entity
(the application, drivers, etc.)?

I think it's reasonable to presume that desktop operating systems which are 
under the control of end-users cannot be protected against privilege 
escalation attacks. All it takes is one sound card with a bug in a 
particular version of the driver to allow any attacker to go out and buy 
that card & install that driver and use the combination to execute code or
access data beyond his privileges.

In the presence of successful privilege escalation attacks, an attacker can
get access to any information which can be exposed to any privilege level
he can escalate to. The attacker may not be able to access raw keys & other
information directly managed by the TOR or the key manager, but those keys
aren't really interesting anyway: all the interesting content &
transactions will live in regular applications at lower security levels.

The only way I can see to prevent this is for the OS to never transfer 
control to any software which isn't signed, trusted and intact. The problem 
with this is that it's economically infeasible: it implies the death of 
small developers and open source, and that's a higher price than the market 
is willing to bear.

  - Tim

PS - I'm looking for a job in or near New York City. See my resume at 
http://www.dierks.org/tim/resume.html




Re: TCPA and Open Source

2002-08-13 Thread Michael Motyka

James A. Donald [EMAIL PROTECTED] wrote :
--
On 13 Aug 2002 at 0:05, AARG! Anonymous wrote:
 The point is that while this is a form of signed code, it's not 
 something which gives the TPM control over what OS can boot. 
 Instead, the VCs are used to report to third party challengers 
 (on remote systems) what the system configuration of this system 
 is supposed to be, along with what it actually is.

It does, however, enable the state to control what OS one can boot
if one wishes to access the internet.

It does not seem to me that the TPM is likely to give hollywood 
what it wants, unless it is backed by such state enforcement.

Furthermore, since the TPM gets first whack at boot up, a simple
code download to the TPM could change the meaning of the
signature, so that the machine will not boot unless running a
state authorized operating system.

It could well happen that TPM machines become required to go on
the internet, and then later only certain operating systems are
permitted on the internet, and then later the required operating
system upgrades the TPM software so that only authorized operating
systems boot at all.

--digsig
 James A. Donald

Golly gee, I wonder why there was a floater out there about the
administration wanting to update the protocols we all use?

If you can imagine a repressive technological approach to privacy and
communication then you can bet your ass that it has already been thought
of and is on someone's wishlist in DC.

It seems a moot point to even debate whether or not this is the ultimate
intent of the current crop of crap. Fucking duh!

Mike




Re: trade-offs of secure programming with Palladium (Re: Palladium: technical limits and implications)

2002-08-13 Thread Tim Dierks

At 09:07 PM 8/12/2002 +0100, Adam Back wrote:
At some level there has to be a trade-off between what you put in
trusted agent space and what becomes application code.  If you put the
whole application in trusted agent space, then while all its
application logic is fully protected, the danger will be that you have
added too much code to reasonably audit, so people will be able to
gain access to that trusted agent via buffer overflow.

I agree; I think the system as you describe it could work and would be 
secure, if correctly executed. However, I think it is infeasible to 
generally implement commercially viable software, especially in the 
consumer market, that will be secure under this model. Either the 
functionality will be too restricted to be accepted by the market, or there 
will be a set of software flaws that allow the system to be penetrated.

The challenge is to put all of the functionality which has access to 
content inside of a secure perimeter, while keeping the perimeter secure 
from any data leakage or privilege escalation. The perimeter must be very 
secure and well-understood from a security standpoint; for example, it 
seems implausible to me that any substantial portion of the Win32 API could 
be used from within the perimeter; thus, all user interface aspects of the 
application must be run through a complete security analysis with the 
presumption that everything outside of the perimeter is compromised and 
cannot be trusted. This includes all APIs & data.

I think we all know how difficult it is, even for security professionals, 
to produce correct systems that enforce any non-trivial set of security 
permissions. This is true even when the items to be protected and the 
software functionality are very simple and straightforward (such as key 
management systems). I think it entirely implausible that software 
developed by multimedia software engineers, managing large quantities of 
data in a multi-operation, multi-vendor environment, will be able to 
deliver a secure environment.

This is even more true when the attacker (the consumer) has control over
the hardware & software environment. If a security bug is found & patched,
the end user has no direct incentive to upgrade their installation; in
fact, the most concerning end users (e.g., pirates) have every incentive to
seek out and maintain installations with security faults. While a content
or transaction server could refuse to conduct transactions with a user who
has not upgraded their software, such a requirement can only increase the
friction of commerce, a price that vendors & consumers might be quite
unwilling to pay.

I'm sure that the whole system is secure in theory, but I believe that it 
cannot be securely implemented in practice and that the implied constraints 
on use & usability will be unpalatable to consumers and vendors.

  - Tim

PS - I'm looking for a job in or near New York City. See my resume at 
http://www.dierks.org/tim/resume.html




Re: Challenge to David Wagner on TCPA

2002-08-13 Thread lynn . wheeler

actually it is possible to build chips that generate keys as part of
manufacturing power-on/test (while still in the wafer, and the private key
never, ever exists outside of the chip)  ... and be at effectively the same
trust level as any other part of the chip (i.e. hard instruction ROM).
such a key pair that can uniquely authenticate a chip effectively
becomes as much a part of the chip as the ROM or the chip serial number,
etc. The public/private key pair, if appropriately protected (with an
evaluated, certified and audited process), then can be considered somewhat
more trusted than a straight serial number ... aka a straight serial number
can be skimmed and replayed, where a digital signature on unique data is
harder to replay/spoof.  the chips come with a unique public/private key
where the private key is never known.
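
the difference from a plain serial number is just challenge/response ... a
sketch with a generic crypto library (illustrative only, not any particular
chip's interface):

import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Per-chip keypair generated "in the wafer"; the private key never leaves.
chip_key = ec.generate_private_key(ec.SECP256R1())
chip_pub = chip_key.public_key()      # what the verifier has on record

# A serial number can be skimmed and replayed; a signature over fresh,
# verifier-chosen data cannot.
challenge = os.urandom(32)
signature = chip_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

try:
    chip_pub.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
    print("chip authenticated")
except InvalidSignature:
    print("spoofed chip")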

sometimes this is a difficult concept ... the idea of a public/private key
pair as a form of a difficult-to-spoof chip serial ... when all uses of
public/private key, asymmetric cryptography might have always been portrayed
as equivalent to x.509 identity certificates (it is possible to show in a
large percentage of the systems that public/private key digital signatures
are sufficient for authentication and any possible certificates are both
redundant and superfluous).

misc. ref (aads chip strawman):
http://www.garlic.com/~lynn/index.html#aads
http://www.asuretee.com/



[EMAIL PROTECTED] on 6/13/2002 11:10 am wrote:

This makes a lot of sense, especially for closed systems like business
LANs and WANs where there is a reasonable centralized authority who can
validate the security of the SCP keys.  I suggested some time back that
since most large businesses receive and configure their computers in the IT
department before making them available to employees, that would be a time
that they could issue private certs on the embedded SCP keys. The
employees' computers could then be configured to use these private certs
for their business computing.

However the larger vision of trusted computing leverages the global
internet and turns it into what is potentially a giant distributed
computer.  For this to work, for total strangers on the net to have trust
in the integrity of applications on each others' machines, will require
some kind of centralized trust infrastructure.  It may possibly be
multi-rooted but you will probably not be able to get away from this
requirement.

The main problem, it seems to me, is that validating the integrity of the
SCP keys cannot be done remotely.  You really need physical access to the
SCP to be able to know what key is inside it.  And even that is not enough,
if it is possible that the private key may also exist outside, perhaps
because the SCP was initialized by loading an externally generated
public/private key pair.  You not only need physical access, you have to be
there when the SCP is initialized.

In practice it seems that only the SCP manufacturer, or at best the OEM who
(re) initializes the SCP before installing it on the motherboard, will be
in a position to issue certificates.  No other central authorities will
have physical access to the chips on a near-universal scale at the time of
their creation and installation, which is necessary to allow them to issue
meaningful certs.  At least with the PGP web of trust people could in
principle validate their keys over the phone, and even then most PGP users
never got anyone to sign their keys.  An effective web of trust seems much
more difficult to achieve with Palladium, except possibly in small groups
that already trust each other anyway.

If we do end up with only a few trusted root keys, most internet-scale
trusted computing software is going to have those roots built in. Those
keys will be extremely valuable, potentially even more so than Verisign's
root keys, because trusted computing is actually a far more powerful
technology than the trivial things done today with PKI.  I hope the
Palladium designers give serious thought to the issue of how those trusted
root keys can be protected appropriately.  It's not going to be enough to
say "it's not our problem."  For trusted computing to reach its potential,
security has to be engineered into the system from the beginning - and that
security must start at the root!

-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to
[EMAIL PROTECTED]




Re: Signing as one member of a set of keys

2002-08-13 Thread Len Sassaman

Interesting. Unless some clever "at" jobs were involved, this was likely not
written by Ian or Ben. I can vouch that Ian was not near a computer at the
time the second message (with the complete signature) was posted, and Ben
was somewhere over the Atlantic in an airplane, unlikely to be reading his
mail. Lance has probably been too busy with Anonymizer 2.0 to be a good
choice, and I also suspect that Pr0duct Cypher is the same as one of the
people in that list. I'll put my money on the author being one of the last
three people in that list.




   

From: Adam Back adam@cypherspace.org
Sent by: owner-cypherpunks@lne.com
To: Anonymous User [EMAIL PROTECTED]
cc: [EMAIL PROTECTED], [EMAIL PROTECTED], Adam Back [EMAIL PROTECTED]
Date: 08/09/2002 12:11 PM
Subject: Re: Signing as one member of a set of keys




Very nice.

Nice plausible set of candidate authors also:

pub  1022/5AC7B865 1992/12/01  [EMAIL PROTECTED]
pub  1024/2B48F6F5 1996/04/10  Ian Goldberg [EMAIL PROTECTED]
pub  1024/97558A1D 1994/01/10  Pr0duct Cypher alt.security.pgp
pub  1024/2719AF35 1995/05/13  Ben Laurie [EMAIL PROTECTED]
pub  1024/58214C37 1992/09/08  Hal Finney [EMAIL PROTECTED]
pub  1024/C8002BD1 1997/03/04  Eric Young [EMAIL PROTECTED]
pub  1024/FBBB8AB1 1994/05/07  Colin Plumb [EMAIL PROTECTED]

Wonder if we can figure out who is most likely author based on coding
style from such a small set.

It has (8 char) TABs but otherwise BSD indentation style (BSD
normally 4 spaces).  Also someone who likes triply indirected pointers
***blah in there.  Has local variables declared inside even *if code
blocks*, eg inside main() (most people avoid that, preferring to declare
variables at the top of a function; historically I think some
older gcc / gdb couldn't debug those variables, if I recall).  Very
funky use of goto in getpgppkt, hmmm.  Somewhat concise coding and
variable names.

Off the cuff guess based on coding without looking at samples of code
to remind, probably Colin or Ian.

Of course (Lance Cottrell/Ian Goldberg/Pr0duct Cypher/Ben Laurie/Hal
Finney/Eric Young/Colin Plumb) possibly deviated or mimicked one of
their coding styles.  Kind of interesting to see a true nym in there
also.

Also the Cc -- Coderpunks lives?  I think the Cc to coderpunks might be a
clue also; I think some of these people would know it died.  I think
that points more at Colin.

Other potential avenue might be implementation mistake leading to
failure of the scheme to robustly make undecidable which of the set is
the true author, given alpha code.

Adam

On Fri, Aug 09, 2002 at 03:52:56AM +0000, Anonymous User wrote:
 This program can be used by anonymous contributors to release partial
 information about their identity - they can show that they are someone
 from a list of PGP key holders, without revealing which member of the
 list they are.  Maybe it can help in the recent controversy over the
 identity of anonymous posters.  It's a fairly low-level program that
 should be wrapped in a nicer UI.  I'll send a couple of perl scripts
 later that make it easier to use.
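
For the curious, the underlying idea can be sketched compactly with a
Schnorr-style (AOS) ring signature. The posted program is PGP/RSA-based, so
this is an illustration of the concept rather than of that code, and the
tiny 11-bit group is for readability, not security:

import hashlib, random

# Tiny Schnorr group: p = 2q + 1, g generates the order-q subgroup.
p, q, g = 2039, 1019, 4

def H(msg, e):
    # Hash to Z_q. (A real scheme would also bind the full key list.)
    return int.from_bytes(hashlib.sha256(msg + e.to_bytes(2, "big")).digest(),
                          "big") % q

def keygen():
    x = random.randrange(1, q)
    return x, pow(g, x, p)            # (private, public)

def ring_sign(msg, pubs, s, xs):
    """Sign msg as *some* member of pubs, using member s's private key xs."""
    n = len(pubs)
    c, z = [0] * n, [0] * n
    alpha = random.randrange(1, q)
    c[(s + 1) % n] = H(msg, pow(g, alpha, p))
    i = (s + 1) % n
    while i != s:                     # simulate every other member's step
        z[i] = random.randrange(1, q)
        c[(i + 1) % n] = H(msg, pow(g, z[i], p) * pow(pubs[i], c[i], p) % p)
        i = (i + 1) % n
    z[s] = (alpha - xs * c[s]) % q    # close the ring with the real key
    return c[0], z

def ring_verify(msg, pubs, sig):
    c0, z = sig
    c = c0
    for i in range(len(pubs)):
        c = H(msg, pow(g, z[i], p) * pow(pubs[i], c, p) % p)
    return c == c0

keys = [keygen() for _ in range(7)]   # say, the seven candidates above
pubs = [y for _, y in keys]
sig = ring_sign(b"hello", pubs, 3, keys[3][0])
print(ring_verify(b"hello", pubs, sig))  # True, yet member 3 stays hidden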




Re: Reply for Dan Veeneman, Spam blocklists?

2002-08-13 Thread Greg Broiles

At 07:25 PM 8/13/2002 +0100, Peter Fairbrother wrote:

The above email got bounced, does anyone know why? Neither my (62.3.121.225)
nor the .zen.co.uk IPs are blacklisted anywhere I can find. 208.249.200.24
is on one list (xbl.selwerd.cx), but that isn't (?) the sender.

parmenides.zen.co.uk was on spam blocklists until very recently - see

http://groups.google.com/groups?q=212.23.8.69+group:news.admin.net-abuse.*&hl=en&lr=lang_en&ie=UTF-8&scoring=d&selm=aij5pl%2414guph%241%40ID-66783.news.dfncis.de&rnum=1

for a discussion of that, or

http://www.dsbl.org/listing.php?ip=212.23.8.69

for the literal details including results of relay tests;

and see

http://groups.google.com/groups?q=212.23.8.69+group:news.admin.net-abuse.*&hl=en&lr=lang_en&ie=UTF-8&scoring=d&selm=001901c1ead6%24c6bd9a20%244fcf44c6%40default&rnum=4

for an example of spam that was relayed through that server.

Osirusoft seems to be a spam blocker, but blocking legitimate mail is going
too far. I'd rather have the spam. And I object strongly to third (or
fourth) parties deciding what to do with my mail.

It's the recipient, or someone acting on their behalf, who's deciding what
to do with *their* mail, at least from the recipient's perspective.


--
Greg Broiles -- [EMAIL PROTECTED] -- PGP 0x26E4488c or 0x94245961




[aleph1@securityfocus.com: Implementation of Chosen-Ciphertext Attacks against PGP and GnuPG]

2002-08-13 Thread Gabriel Rocha

Figured this might be of interest to folks here...

- Forwarded message from [EMAIL PROTECTED] -

Date: Mon, 12 Aug 2002 11:45:26 -0600
From: [EMAIL PROTECTED]
To: [EMAIL PROTECTED], [EMAIL PROTECTED]
Subject: Implementation of Chosen-Ciphertext Attacks against PGP and GnuPG

Implementation of Chosen-Ciphertext Attacks against PGP and GnuPG
K. Jallad, J. Katz, and B. Schneier

We recently noted that PGP and other e-mail encryption protocols are, in 
theory, highly vulnerable to chosen-ciphertext attacks in which the recipient 
of the e-mail acts as an unwitting decryption oracle. We argued further 
that such attacks are quite feasible and therefore represent a serious 
concern. Here, we investigate these claims in more detail by attempting to 
implement the suggested attacks. On one hand, we are able to successfully 
implement the described attacks against PGP and GnuPG (two widely-used 
software packages) in a number of different settings. On the other hand, we 
show that the attacks largely fail when data is compressed before encryption.

Interestingly, the attacks are unsuccessful for largely fortuitous reasons;
resistance to these attacks does not seem due to any conscious effort made to 
prevent them. Based on our work, we discuss those instances in which 
chosen-ciphertext attacks do indeed represent an important threat and hence 
must be taken into account in order to maintain confidentiality. We also 
recommend changes in the OpenPGP standard to reduce the effectiveness of our 
attacks in these settings. 

http://www.counterpane.com/pgp-attack.pdf
http://www.counterpane.com/pgp-attack.ps.zip

-- 
Elias Levy
Symantec
Alea jacta est

- End forwarded message -
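
A toy illustration of the compression point, with a stand-in XOR stream
cipher (nothing like real OpenPGP packet handling): flipping ciphertext bits
makes a predictable plaintext change when the message is raw, but almost
always just a decompression error when it was compressed first.

import hashlib, zlib

def keystream_xor(key, data):
    # Toy counter-mode stream cipher -- a stand-in only, not OpenPGP's CFB.
    out = bytearray()
    for block in range((len(data) + 31) // 32):
        pad = hashlib.sha256(key + block.to_bytes(8, "big")).digest()
        chunk = data[block * 32:(block + 1) * 32]
        out.extend(c ^ k for c, k in zip(chunk, pad))
    return bytes(out)

key, msg = b"k" * 16, b"attack at dawn, attack at dawn!"

# Uncompressed: flipping a ciphertext bit flips the same plaintext bit.
ct = bytearray(keystream_xor(key, msg))
ct[0] ^= 0x01
print(keystream_xor(key, bytes(ct)))  # one predictably garbled byte

# Compressed first: the tampered result no longer decompresses.
ct2 = bytearray(keystream_xor(key, zlib.compress(msg)))
ct2[5] ^= 0x01
try:
    zlib.decompress(keystream_xor(key, bytes(ct2)))
except zlib.error:
    print("tampering shows up as a decompression error")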




Polio, DES Crack, and Proofs of Concept

2002-08-13 Thread Khoder bin Hakkin

In the most recent _Science_ some biologists gripe that the scientists
who synthesized infectious poliovirus from its description were not doing
anything novel, just a prank.  Any biologist would have known that, since
you can concatenate nucleotide strings, and since polio needs nothing
besides its RNA (eg no enzymes) to be infectious, obviously you can synth
polio.

This is *remarkably* similar to cognoscenti reactions to the DES Crack
project.  Yes, it was obvious it would work, and it was largely
unnecessary (from a security-planning perspective) to actually do it.
But it was proof-of-concept.  Like synthesizing polio.



--
Better bombing through chemistry.
 -John Pike, director of Globalsecurity.org
 on use of speed by US pilots




RE: A faster way to factor prime numbers found?

2002-08-13 Thread Lucky Green

Gary Jeffers
 Sent: Tuesday, August 13, 2002 3:07 PM
 To: [EMAIL PROTECTED]
 Subject: A faster way to factor prime numbers found?
 
 
 A faster way to factor prime numbers found?

AFAICT, the proposed algorithm is a test for primality and does not
represent an algorithm to factor composites.

 My fellow Cypherpunks,
 
I found an interesting report of a newly developed 
 algorithm for factoring prime numbers. It was on the very 
 interesting http://www.whatreallyhappened.com site. - check 
 that site out!
 
The 1st link is titled: Mathematicians in India find a 
 faster way to determine prime numbers. 
 http://www.iitk.ac.in/infocell/announce/algorithm
 
The 2nd 
 link is titled: Here is the algorithm! 
 http://www.whatreallyhappened.com/primality.pdf
 
 
 Yours Truly,
 Gary Jeffers
 
 Beat State!!!
 and all the many other oppressors!
 




RE: A faster way to factor prime numbers found?

2002-08-13 Thread Mike Rosing

On Tue, 13 Aug 2002, Lucky Green wrote:

 Gary Jeffers
  Sent: Tuesday, August 13, 2002 3:07 PM
  To: [EMAIL PROTECTED]
  Subject: A faster way to factor prime numbers found?
 
 
  A faster way to factor prime numbers found?

 AFAICT, the proposed algorithm is a test for primality and does not
 represent an algorithm to factor composites.


Yes, the paper is quite readable.  The futuristic conjecture is that
primes can be proved in O(log^3(n)) time, but the algorithm as presented
is O(log^12(n)) time.  The authors admit that present probabilistic
algorithms are faster.  However, it presents a new way to think about the
problem, so it opens the door for a lot of new research.  Time will tell
if that leads to new factoring algorithms.
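
For contrast, here is the standard probabilistic test (Miller-Rabin) that
the new algorithm competes with; a composite number survives one round with
probability at most 1/4, so the error after 40 rounds is negligible:

import random

def is_probable_prime(n, rounds=40):
    """Miller-Rabin probabilistic primality test."""
    if n < 2:
        return False
    for sp in (2, 3, 5, 7, 11, 13):
        if n % sp == 0:
            return n == sp
    d, s = n - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1          # n - 1 = d * 2^s with d odd
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False              # a witnesses that n is composite
    return True

print(is_probable_prime(2**127 - 1))  # True: a Mersenne prime
print(is_probable_prime(2**127 + 1))  # False: divisible by 3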

Is Pollard still interested?  Maybe somebody should drop off the paper and
a new computer at his house :-)

Patience, persistence, truth,
Dr. mike