Re: An attack on paypal

2003-06-08 Thread Tim Dierks
At 02:55 PM 6/8/2003, James A. Donald wrote:
> Attached is a spam mail that constitutes an attack on PayPal similar
> in effect and method to a man-in-the-middle attack.
> The bottom line is that https just is not working. It's broken.
>
> The fact that people keep using shared secrets is a symptom of https
> not working.
> The flaw in https is that you cannot operate the business and trust
> model using https that you can with shared secrets.
I don't think it's https that's broken, since https was never intended to 
solve the customer authentication / authorization problem (you could try to 
use SSL's client certificates for that, but no one ever intended client 
certificate authentication to be a generalized solution to that problem).

When I responded to this before, I thought you were talking about the 
server auth problem, not the password problem. I continue to feel that the 
server authentication problem is a very hard problem to solve, since 
there are few hints available to the browser as to what the user's intent is.

The password problem does need to be solved, but complaining that HTTPS or 
SSL doesn't solve it isn't any more relevant than complaining that it's not 
solved by HTML, HTTP, and/or browser or server implementations, since any 
and all of these are needed in producing a new solution which can function 
with real businesses and real users. Let's face it, passwords are so deeply 
ingrained into people's lives that nothing which is more complex in any way 
than passwords is going to have broad acceptance, and any consumer-driven 
company is going to consider easy to be more important than secure.

Right now, my best idea for solving this problem is to:
 - Standardize an HTML input method for FORM which performs a SPEKE (or 
similar) mutual authentication (see the sketch after this list).
 - Get browser makers to design better ways to communicate to users that 
UI elements can be trusted. For example, I recently saw a proposal that 
would have the OS decorate the borders of trusted windows with facts or 
images that an attacker wouldn't be able to predict: the name of your dog, 
or whatever. (Sorry, I can't locate a link right now, but I'd appreciate one.)
 - Combine the two to allow sites to provide a user-trustable UI to enter 
a password which cannot be sucked down.
 - Evangelize to users that this is better and that they should be 
suspicious of any situation where they have used such an interface before, but now 
it's gone.
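
To make the first item concrete, here is a toy sketch of a SPEKE-style 
exchange (Python, with a deliberately tiny modulus and invented helper 
names; this illustrates the idea, it is not a proposed standard). The 
property that matters is that the DH generator is derived from the password 
itself, so a phishing site that doesn't know the password can't complete 
the exchange, and a passive observer learns nothing useful for offline 
guessing.

import hashlib
import secrets

P = 2 ** 127 - 1  # TOY modulus so the example runs instantly; NOT secure.

def speke_generator(password: bytes) -> int:
    # SPEKE's core idea: derive the Diffie-Hellman generator from the
    # shared password, g = H(password)^2 mod p.
    h = int.from_bytes(hashlib.sha256(password).digest(), "big")
    return pow(h, 2, P)

def confirm(key: int, label: bytes) -> bytes:
    # Key-confirmation tag: proves knowledge of the derived key (and
    # therefore of the password) without revealing either.
    return hashlib.sha256(label + key.to_bytes(16, "big")).digest()

password = b"correct horse battery staple"
g = speke_generator(password)

a = secrets.randbelow(P - 2) + 2   # browser's ephemeral secret
b = secrets.randbelow(P - 2) + 2   # site's ephemeral secret
A, B = pow(g, a, P), pow(g, b, P)  # the only values sent over the wire

k_browser = pow(B, a, P)
k_site = pow(A, b, P)
assert k_browser == k_site
# Each side sends a tag the other verifies; a phisher without the
# password derived a different g and cannot produce a valid tag.
assert confirm(k_browser, b"site") == confirm(k_site, b"site")

The mutual key confirmation is the part a plain password form can never 
give you: the site proves knowledge of the password to the user, not just 
vice versa.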

I agree that the overall architecture is broken; the problem is that it's 
broken in more ways than can just be fixed with any change to TLS/SSL or HTTPS.

 - Tim



Re: Maybe It's Snake Oil All the Way Down

2003-06-06 Thread Tim Dierks
At 10:09 PM 6/4/2003, James A. Donald wrote:
> Eric Rescorla wrote:
> > Nonsense. One can simply cache the certificate, exactly as
> > one does with SSH. In fact, Mozilla at least does exactly
> > this if you tell it to. The reason that this is uncommon is
> > because the environments where HTTPS is used are generally
> > spontaneous and therefore certificate caching is less useful.
> Certificate caching is not the problem that needs solving.  The
> problem is all this spam attempting to fool people into logging
> in to fake BofA websites and fake e-gold websites, to steal
> their passwords or credit card numbers.
I don't think this problem is any easier to solve (or at least I sure don't 
know how to solve it). It seems to me that you could tell a user every time 
they go to a new site that it's a new site, and hope that users would 
recognize that e-g0ld.com shouldn't be new, since they've been there 
before. However, people go to a large enough number of sites that they'd be 
seeing the new alert all the time, which leads me to believe that it 
wouldn't be taken seriously.
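
For what it's worth, the mechanics of such caching are trivial; it's the 
human factors that fail. A minimal sketch (cache format and function names 
are my own invention):

import hashlib
import json
import ssl
from pathlib import Path

CACHE = Path("known_sites.json")  # invented format: {hostname: fingerprint}

def fingerprint(host: str, port: int = 443) -> str:
    # Fetch the server's certificate and hash it, like an SSH host key.
    pem = ssl.get_server_certificate((host, port))
    return hashlib.sha256(pem.encode()).hexdigest()

def check_site(host: str) -> str:
    cache = json.loads(CACHE.read_text()) if CACHE.exists() else {}
    fp = fingerprint(host)
    if host not in cache:
        cache[host] = fp
        CACHE.write_text(json.dumps(cache))
        return "NEW SITE"             # the alert users would learn to ignore
    if cache[host] != fp:
        return "CERTIFICATE CHANGED"  # rare, and therefore meaningful
    return "known site"

The failure mode is as above: "NEW SITE" fires on nearly every visit in 
normal browsing, so users learn to click through it, and the rare 
"CERTIFICATE CHANGED" warning drowns in the noise.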

Fundamentally, making sure that people's perception of the identity of a 
web site matches the true identity of the web site has a technical 
component that is, at most, a small fraction of the problem and solution. 
Most of it is the social question of what it means for the identity to 
match and the UI problem of determining the user's intent (hard one, that), 
and/or allowing the user to easily and reliably match their intent against 
the reality of the true identity.

Any problem that has as a component the fact that the glyphs for 
lower-case L ("l") and the digit one ("1") look pretty similar isn't going 
to be easy to solve technologically.
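
To illustrate, a toy version of the comparison an attacker exploits (the 
confusable table is hand-picked and tiny; real confusable data is far 
larger):

# Reduce a domain to a "visual skeleton" before comparing it against
# domains the user already knows.
CONFUSABLES = str.maketrans({"1": "l", "0": "o", "|": "l", "I": "l"})

def skeleton(domain: str) -> str:
    return domain.translate(CONFUSABLES).lower()

known = {"e-gold.com", "paypal.com"}

for candidate in ("e-g0ld.com", "paypa1.com", "example.com"):
    hit = any(skeleton(candidate) == skeleton(k) and candidate != k
              for k in known)
    print(candidate, "-> lookalike of a known site" if hit else "-> ok")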

 - Tim



Re: Wiretap Act Does Not Cover Message 'in Storage' For Short Period (was Re: BNA's Internet Law News (ILN) - 2/27/03)

2003-03-05 Thread Tim Dierks
At 02:30 PM 3/5/2003 -0500, Steven M. Bellovin wrote:
> From: Somebody
>
> > Technically, since their signal speed is slower than light, even
> > transmission lines act as storage devices.
> >
> > Wire tapping is now legal.
>
> The crucial difference, from a law enforcement perspective, is how hard
> it is to get the requisite court order.  A stored message order is
> relatively easy; a wiretap order is very hard.  Note that this
> distinction is primarily statutory, not (as far as I know)
> constitutional.
Furthermore, it's apparently not illegal for a non-governmental actor to 
retrieve stored information which they have access to, although it might be 
illegal for them to wiretap a communication even if they had access to the 
physical medium over which it travels.

I disagree with Somebody's claim; I don't think it would go anywhere in 
court. A transmission in flight clearly falls under the category of a wire 
communication, and transmission lines are the very entities the Wiretap Act 
has always been intended to protect, so Congress' intent is quite clear, 
regardless of any argument about storage.

 - Tim



Re: trade-offs of secure programming with Palladium (Re: Palladium: technical limits and implications)

2002-08-13 Thread Tim Dierks

At 09:07 PM 8/12/2002 +0100, Adam Back wrote:
> At some level there has to be a trade-off between what you put in
> trusted agent space and what becomes application code.  If you put the
> whole application in trusted agent space, while then all its
> application logic is fully protected, the danger will be that you have
> added too much code to reasonably audit, so people will be able to
> gain access to that trusted agent via buffer overflow.

I agree; I think the system as you describe it could work and would be 
secure, if correctly executed. However, I think it is infeasible to 
generally implement commercially viable software, especially in the 
consumer market, that will be secure under this model. Either the 
functionality will be too restricted to be accepted by the market, or there 
will be a set of software flaws that allow the system to be penetrated.

The challenge is to put all of the functionality which has access to 
content inside of a secure perimeter, while keeping the perimeter secure 
from any data leakage or privilege escalation. The perimeter must be very 
secure and well-understood from a security standpoint; for example, it 
seems implausible to me that any substantial portion of the Win32 API could 
be used from within the perimeter; thus, all user interface aspects of the 
application must be run through a complete security analysis with the 
presumption that everything outside of the perimeter is compromised and 
cannot be trusted. This includes all APIs and data.
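
To make that presumption concrete: every datum crossing into the perimeter 
needs something like the following discipline (a sketch, with invented 
field names and limits), applied correctly at every single entry point, 
including the whole of the UI:

# Code inside the perimeter treats every caller as hostile: accept only
# the exact expected shape, reject everything else.
MAX_TITLE = 256
ALLOWED_OPS = {"play", "pause", "stop"}

def validate_request(raw: dict) -> dict:
    if set(raw) != {"op", "title", "offset"}:
        raise ValueError("unexpected fields")
    if raw["op"] not in ALLOWED_OPS:
        raise ValueError("unknown operation")
    if not isinstance(raw["title"], str) or len(raw["title"]) > MAX_TITLE:
        raise ValueError("oversized or non-string title")
    if not isinstance(raw["offset"], int) or not 0 <= raw["offset"] < 2**32:
        raise ValueError("offset out of range")
    return raw  # only now may code inside the perimeter act on it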

I think we all know how difficult it is, even for security professionals, 
to produce correct systems that enforce any non-trivial set of security 
permissions. This is true even when the items to be protected and the 
software functionality are very simple and straightforward (such as key 
management systems). I think it entirely implausible that software 
developed by multimedia software engineers, managing large quantities of 
data in a multi-operation, multi-vendor environment, will be able to 
deliver a secure environment.

This is even more true when the attacker (the consumer) has control over 
the hardware and software environment. If a security bug is found and patched, 
the end user has no direct incentive to upgrade their installation; in 
fact, the most concerning end users (e.g., pirates) have every incentive to 
seek out and maintain installations with security faults. While a content 
or transaction server could refuse to conduct transactions with a user who 
has not upgraded their software, such a requirement can only increase the 
friction of commerce, a price that vendors and consumers might be quite 
unwilling to pay.

I'm sure that the whole system is secure in theory, but I believe that it 
cannot be securely implemented in practice and that the implied constraints 
on use and usability will be unpalatable to consumers and vendors.

  - Tim

PS - I'm looking for a job in or near New York City. See my resume at 
http://www.dierks.org/tim/resume.html




Re: Palladium: technical limits and implications

2002-08-13 Thread Tim Dierks

At 07:30 PM 8/12/2002 +0100, Adam Back wrote:
> (Tim Dierks: read the earlier posts about ring -1 to find the answer
> to your question about feasibility in the case of Palladium; in the
> case of TCPA your conclusions are right I think).

The addition of a further security ring with a secured, protected 
memory space does not, in my opinion, change the fact that such a ring 
cannot accurately determine that a particular request is consistent with 
any definable security policy. I do not think it is technologically 
feasible for ring -1 to determine, upon receiving a request, that the 
request was generated by trusted software operating in accordance with the 
intent of whomever signed it.

Specifically, let's presume that a Palladium-enabled application is being 
used for DRM; a secure and trusted application is asking its secure key 
manager to decrypt a content encryption key so it can access properly 
licensed content. The OS is valid and signed and the application is valid and 
signed. How can ring -1 distinguish a valid request from one which has been 
forged by rogue code which used a bug in the OS or any other trusted entity 
(the application, drivers, etc.)?
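
Here is a toy model of the distinction I claim ring -1 cannot make (all 
names and hashes invented): it can verify which measured code is calling, 
but not the intent behind the call.

TRUSTED_HASHES = {"sha256:abc123"}  # measurement of the signed player app

def decrypt_key_for(content_id: int) -> bytes:
    return b"\x00" * 16  # stand-in for the real sealed-key operation

def release_content_key(caller_hash: str, request: dict) -> bytes:
    # Ring -1 can verify WHICH measured code is calling...
    if caller_hash not in TRUSTED_HASHES:
        raise PermissionError("unmeasured caller")
    # ...but the request carries no evidence of the intent behind it.
    return decrypt_key_for(request["content_id"])

# Legitimate call, from the signed player rendering licensed content:
#   release_content_key("sha256:abc123", {"content_id": 42})
# After a privilege escalation, attacker code running INSIDE the player's
# address space issues the byte-for-byte identical call; ring -1 sees the
# same measured caller and must answer it the same way.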

I think it's reasonable to presume that desktop operating systems which are 
under the control of end-users cannot be protected against privilege 
escalation attacks. All it takes is one sound card with a bug in a 
particular version of the driver to allow any attacker to go out and buy 
that card, install that driver, and use the combination to execute code or 
access data beyond his privileges.

In the presence of successful privilege escalation attacks, an attacker can 
get access to any information which can be exposed to any privilege level 
he can escalate to. The attacker may not be able to access raw keys and other 
information directly managed by the TOR or the key manager, but those keys 
aren't really interesting anyway: all the interesting content and 
transactions will live in regular applications at lower security levels.

The only way I can see to prevent this is for the OS to never transfer 
control to any software which isn't signed, trusted and intact. The problem 
with this is that it's economically infeasible: it implies the death of 
small developers and open source, and that's a higher price than the market 
is willing to bear.
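
As a sketch (the allowlist contents are invented), the rule amounts to a 
loader like the following, and the economic objection is visible right in 
it: nothing runs unless someone has gotten its hash onto the signed list.

import hashlib
import os

# Invented stand-in; in a real system this list would itself be signed.
SIGNED_ALLOWLIST = {"sha256:<hash of each approved binary>"}

def exec_if_trusted(path: str, argv: list) -> None:
    with open(path, "rb") as f:
        digest = "sha256:" + hashlib.sha256(f.read()).hexdigest()
    if digest not in SIGNED_ALLOWLIST:
        raise PermissionError(path + ": not on the signed allowlist")
    os.execv(path, argv)  # control transfers only to approved code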

  - Tim

PS - I'm looking for a job in or near New York City. See my resume at 
http://www.dierks.org/tim/resume.html



