Re: Wiretap Act Does Not Cover Message 'in Storage' For Short Period (was Re: BNA's Internet Law News (ILN) - 2/27/03)

2003-03-05 Thread Tim Dierks
At 02:30 PM 3/5/2003 -0500, Steven M. Bellovin wrote:
From: Somebody

Technically, since their signal speed is slower than light, even
transmission lines act as storage devices.

Wire tapping is now legal.
The crucial difference, from a law enforcement perspective, is how hard
it is to get the requisite court order.  A stored message order is
relatively easy; a wiretap order is very hard.  Note that this
distinction is primarily statutory, not (as far as I know)
constitutional.
Furthermore, it's apparently not illegal for a non-governmental actor to 
retrieve stored information to which they have access, although it might be 
illegal for them to wiretap a communication even if they had access to the 
physical medium over which it travels.

I disagree with Somebody's claim; I don't think it would go anywhere in 
court. A transmission clearly falls under the category of wire 
communication, and transmission lines are the very entities the Wiretap 
Act has always been intended to protect, so Congress' intent is quite 
clear, regardless of any argument about storage.

 - Tim



Re: Palladium: technical limits and implications

2002-08-13 Thread Tim Dierks

At 07:30 PM 8/12/2002 +0100, Adam Back wrote:
(Tim Dierks: read the earlier posts about ring -1 to find the answer
to your question about feasibility in the case of Palladium; in the
case of TCPA your conclusions are right I think).

The addition of an extra security ring with a secured, protected 
memory space does not, in my opinion, change the fact that such a ring 
cannot accurately determine that a particular request is consistent with 
any definable security policy. I do not think it is technologically 
feasible for ring -1 to determine, upon receiving a request, that the 
request was generated by trusted software operating in accordance with the 
intent of whoever signed it.

Specifically, let's presume that a Palladium-enabled application is being 
used for DRM; a secure & trusted application is asking its secure key 
manager to decrypt a content encryption key so it can access properly 
licensed code. The OS is valid & signed and the application is valid & 
signed. How can ring -1 distinguish a valid request from one which has been 
forged by rogue code which used a bug in the OS or any other trusted entity 
(the application, drivers, etc.)?
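
To make that concrete, here is a toy model (Python, with invented names; 
nothing below is actual Palladium or TCPA code) of a ring -1 key manager 
deciding on the basis of the only facts it actually has, the measured 
identities of the loaded OS and application:

    import hashlib

    # Measured (load-time) hashes of the OS and application images: the
    # only facts ring -1 actually has about the requester.
    TRUSTED_OS_HASH = hashlib.sha1(b"signed OS image").hexdigest()
    TRUSTED_APP_HASH = hashlib.sha1(b"signed DRM player image").hexdigest()

    def release_content_key(os_hash, app_hash, request):
        # Toy policy: release the key iff the measured OS and application
        # match what the content owner trusts.
        if os_hash == TRUSTED_OS_HASH and app_hash == TRUSTED_APP_HASH:
            return b"content-encryption-key"
        raise PermissionError("untrusted requester")

    # Legitimate request from the signed application:
    key = release_content_key(TRUSTED_OS_HASH, TRUSTED_APP_HASH,
                              {"action": "decrypt", "title": 42})

    # Forged request from rogue code that exploited a bug inside the
    # measured OS: the loaded images are unchanged, so the hashes still
    # match, and the two requests are byte-for-byte indistinguishable.
    stolen = release_content_key(TRUSTED_OS_HASH, TRUSTED_APP_HASH,
                                 {"action": "decrypt", "title": 42})

The measurements attest to what was loaded, not to whether the running 
instance is still behaving as its signer intended.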

I think it's reasonable to presume that desktop operating systems which are 
under the control of end-users cannot be protected against privilege 
escalation attacks. All it takes is one sound card with a bug in a 
particular version of the driver to allow any attacker to go out and buy 
that card & install that driver, and use the combination to execute code or 
access data beyond his privileges.

In the presence of successful privilege escalation attacks, an attacker can 
get access to any information which can be exposed at any privilege level 
he can escalate to. The attacker may not be able to access raw keys & other 
information directly managed by the TOR or the key manager, but those keys 
aren't really interesting anyway: all the interesting content & 
transactions will live in regular applications at lower security levels.
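
To illustrate (a toy Python sketch, with XOR standing in for real 
cryptography and all names invented): even if the raw key never leaves the 
key manager, the plaintext it protects has to reach the application, and 
that is exactly where the escalated attacker sits.

    class KeyManager:
        # Stands in for the TOR / key manager: the raw key never leaves it.
        def __init__(self, key):
            self._key = key

        def decrypt(self, ciphertext):
            return bytes(b ^ self._key for b in ciphertext)

    km = KeyManager(0x5A)
    ciphertext = bytes(b ^ 0x5A for b in b"the actual movie bits")

    # The application must receive the plaintext in order to use it...
    plaintext = km.decrypt(ciphertext)

    # ...so an attacker who has escalated into the application's privilege
    # level simply reads it there, without ever touching the key.
    leaked = plaintext
    print(leaked)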

The only way I can see to prevent this is for the OS to never transfer 
control to any software which isn't signed, trusted and intact. The problem 
with this is that it's economically infeasible: it implies the death of 
small developers and open source, and that's a higher price than the market 
is willing to bear.
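
For what it's worth, the policy itself is simple to sketch (Python; an 
HMAC stands in for a real signature scheme, and the names are invented). 
The sketch also shows exactly where small developers fall out: code that 
nobody signed never runs.

    import hashlib
    import hmac

    SIGNING_KEY = b"platform vendor key"  # held only by the signing authority

    def sign_module(module):
        # Vendor side: sign a module after (in theory) auditing it.
        return hmac.new(SIGNING_KEY, module, hashlib.sha256).hexdigest()

    def transfer_control(module, signature):
        # OS side: refuse to run anything unsigned or modified.
        expected = hmac.new(SIGNING_KEY, module, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(signature, expected):
            raise PermissionError("unsigned or tampered code; refusing to run")
        exec(module.decode())  # stand-in for jumping into the module

    driver = b"print('vendor-signed driver running')"
    transfer_control(driver, sign_module(driver))  # runs

    try:
        transfer_control(b"print('hobbyist patch')", "no signature")
    except PermissionError as err:
        print(err)  # the unsigned module never executes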

  - Tim

PS - I'm looking for a job in or near New York City. See my resume at 
http://www.dierks.org/tim/resume.html




Re: trade-offs of secure programming with Palladium (Re: Palladium: technical limits and implications)

2002-08-13 Thread Tim Dierks

At 09:07 PM 8/12/2002 +0100, Adam Back wrote:
At some level there has to be a trade-off between what you put in
trusted agent space and what becomes application code.  If you put the
whole application in trusted agent space, then while all its
application logic is fully protected, the danger will be that you have
added too much code to reasonably audit, so people will be able to
gain access to that trusted agent via buffer overflow.

I agree; I think the system as you describe it could work and would be 
secure, if correctly executed. However, I think it is generally infeasible 
to implement commercially viable software, especially in the consumer 
market, that will be secure under this model. Either the 
functionality will be too restricted to be accepted by the market, or there 
will be a set of software flaws that allow the system to be penetrated.

The challenge is to put all of the functionality which has access to 
content inside of a secure perimeter, while keeping the perimeter secure 
from any data leakage or privilege escalation. The perimeter must be very 
secure and well-understood from a security standpoint; for example, it 
seems implausible to me that any substantial portion of the Win32 API could 
be used from within the perimeter; thus, all user interface aspects of the 
application must be run through a complete security analysis with the 
presumption that everything outside of the perimeter is compromised and 
cannot be trusted. This includes all APIs & data.
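
To give a flavor of the discipline required, here's a sketch (Python, 
invented names) of what treating the outside as compromised means for a 
single perimeter crossing; now multiply this by every API and every data 
structure the application touches:

    ALLOWED_ACTIONS = {"play", "pause"}
    MAX_TITLE = 10000

    def validate_request(raw):
        # Boundary check: assume 'raw' was built by hostile code.
        if not isinstance(raw, dict) or set(raw) != {"action", "title"}:
            raise ValueError("unexpected shape")
        if raw["action"] not in ALLOWED_ACTIONS:
            raise ValueError("unknown action")
        if not isinstance(raw["title"], int) or not 0 <= raw["title"] < MAX_TITLE:
            raise ValueError("title out of range")
        return raw

    def trusted_side(raw_request):
        request = validate_request(raw_request)  # nothing unvetted past here
        print("handling", request)

    trusted_side({"action": "play", "title": 42})  # accepted

    try:
        trusted_side({"action": "play", "title": 42, "extra": 1})  # smuggled field
    except ValueError as err:
        print("rejected:", err)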

I think we all know how difficult it is, even for security professionals, 
to produce correct systems that enforce any non-trivial set of security 
permissions. This is true even when the items to be protected and the 
software functionality are very simple and straightforward (such as key 
management systems). I think it entirely implausible that software 
developed by multimedia software engineers, managing large quantities of 
data in a multi-operation, multi-vendor environment, will be able to 
deliver a secure environment.

This is even more true when the attacker (the consumer) has control over 
the hardware & software environment. If a security bug is found & patched, 
the end user has no direct incentive to upgrade their installation; in 
fact, the most concerning end users (e.g., pirates) have every incentive to 
seek out and maintain installations with security faults. While a content 
or transaction server could refuse to conduct transactions with a user who 
has not upgraded their software, such a requirement can only increase the 
friction of commerce, a price that vendors & consumers might be quite 
unwilling to pay.
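
The version check itself is trivial; sketched in Python (all version 
numbers and build names hypothetical), the friction is plain: every patch 
becomes an upgrade wall for paying customers, while a pirate simply keeps 
the old client for anything already acquired.

    MIN_CLIENT_VERSION = (2, 1, 7)        # hypothetical: first build with the fix
    REVOKED_BUILDS = {"2.1.5-keyleak"}    # hypothetical: builds known to be broken

    def accept_transaction(client_version, build_id):
        # Server side: refuse clients that predate the fix or are known-bad.
        if build_id in REVOKED_BUILDS:
            return False
        return client_version >= MIN_CLIENT_VERSION

    print(accept_transaction((2, 1, 7), "2.1.7"))            # True: patched customer
    print(accept_transaction((2, 1, 5), "2.1.5-keyleak"))    # False: the pirate's build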

I'm sure that the whole system is secure in theory, but I believe that it 
cannot be securely implemented in practice and that the implied constraints 
on use & usability will be unpalatable to consumers and vendors.

  - Tim

PS - I'm looking for a job in or near New York City. See my resume at 
http://www.dierks.org/tim/resume.html