Re: [SC-L] Managed Code and Runtime Environments - Another layer of added security?

2006-04-06 Thread Dinis Cruz




Michael S Hines wrote:

> Which brings us to the point of asking why we must have this run time
> environment to protect the computing resources.  Why isn't this a function
> of and included in the Operating System code?

We need to have these layers (i.e. more than one) because there are
lots of security decisions that can only be made several layers above
the Operating system.

An OS kernel (like Windows) can easily make a security decision based on
user identity (either allow or deny access). But that kernel will have a
hard time making security decisions based on the level of trust we have in a
particular executable or piece of code, i.e. creating Sandboxes that limit
the functionality (permissions) available to that 'untrusted code'.

The .Net Framework's CAS (Code Access Security), when used to host
applications executed in secure, partially trusted environments, is a good
example of an environment capable of securely executing malicious code.
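
For example, a host process could do something along these lines (a rough,
untested sketch of the idea; "Untrusted.exe" is just a placeholder name) to
run an assembly with the restricted Internet permission set instead of
FullTrust:

    using System;
    using System.Security;          // SecurityZone
    using System.Security.Policy;   // Evidence, Zone

    class PartialTrustHost
    {
        static void Main()
        {
            // Evidence that makes CAS treat the assembly as if it came from
            // the Internet zone, so it is granted only the restricted
            // Internet permission set.
            Evidence internetEvidence = new Evidence();
            internetEvidence.AddHost(new Zone(SecurityZone.Internet));

            // Run the untrusted code in its own AppDomain, isolated from the
            // host. Attempts to read arbitrary files, open sockets, call
            // unmanaged code, etc. end in a SecurityException.
            AppDomain sandbox = AppDomain.CreateDomain("PartialTrustSandbox");
            sandbox.ExecuteAssembly("Untrusted.exe", internetEvidence);
            AppDomain.Unload(sandbox);
        }
    }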

Eventually, some of the current functionality provided by the .Net CLR
(Common Language Runtime) will have to be moved into the kernel (for
security and performance reasons).

   
> Is this like a firewall and IDS - just another layer we have to add due to
> ineffective and insecure OS's?

The insecure OS is the one we have today, which allows unmanaged malicious
code to have full access to the user's assets (this applies to Windows,
Linux and Macs).

> Are we dealing with symptoms or the real solution?

Well I believe that Sandboxing (i.e. secure runtime environments) IS
the solution :)

Microsoft (and most of the Linux and Mac crowd) seems to think that they can
build a secure and trustworthy OS that is able to securely execute malicious
unmanaged code.

I (gently) disagree with this opinion, and argue that the desired level
of security (and trustworthiness) can only be achieved via managed
verifiable code.

Dinis Cruz
Owasp .Net Project
www.owasp.net



___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] 4 Questions: Latest IE vulnerability, Firefox vs IE security, User vs Admin risk profile, and browsers coded in 100% Managed Verifiable code

2006-04-06 Thread Dinis Cruz




Eric Swanson wrote:

> One further question:  Can we ever really advise developers on how to
> develop secure code when the foundations they are developing on top of are
> inherently insecure?

Yes, we can, as long as each layer creates/implements a secure sandbox for
the one above it.

So if each layer acts responsibly and creates a more secure environment for
the one built on top of it, then yes, it makes sense to write secure code at
all levels (think 'Defense in Depth', 'Sandboxing', 'Sandbox inside a
Sandbox', privilege isolation, etc.).
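
To make the 'Sandbox inside a Sandbox' idea a bit more concrete, here is a
rough sketch (my own illustration, not production code) of how a layer that
is itself already sandboxed can shrink the permissions available to the code
it calls, using a CAS stack-walk modifier:

    using System;
    using System.Security;
    using System.Security.Permissions;

    static class LayeredSandbox
    {
        // Runs 'plugin' with only the permissions listed below, regardless
        // of what the current AppDomain would otherwise grant - an inner
        // sandbox created by an already sandboxed layer.
        public static void RunPlugin(Action plugin)
        {
            PermissionSet inner = new PermissionSet(PermissionState.None);
            inner.AddPermission(
                new UIPermission(UIPermissionWindow.SafeSubWindows));

            // Demands that cross this frame now succeed only for 'inner'.
            inner.PermitOnly();
            try
            {
                plugin();   // file/socket/registry access => SecurityException
            }
            finally
            {
                CodeAccessPermission.RevertPermitOnly();
            }
        }
    }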

  
> If the answer is ultimately no (without re-writing the end-client OS or
> execution framework), we must then consider the question: how can we make a
> good business case for developing secure solutions when, ultimately, the
> secure solution can be compromised?

There will always be a way to compromise an asset. Our job is to make that
exploit/attack very hard to execute, very easy to detect, and limited in the
damage it can cause.

This will scale up because the focus will be on creating secure Sandboxes
(i.e. run-time environments) instead of writing code with no bugs or
security vulnerabilities. The first (Sandboxing) is hard but doable; the
second (code with no bugs) is probably impossible. Another benefit will be
the separation of responsibility and accountability: the security
consultants will worry about the security of the Sandboxes, and the
developers will focus on features and functionality (the security
consultants are paid by the clients, and the developers are paid by the
software vendor).

Note that the interfaces between layers (i.e. the APIs exposed by the
Sandbox) reduce the number of possible interconnections between
multiple components, and dramatically simplify the job of identifying
normal behavior (which is necessary in order to be able to identify
malicious behavior). Note that I don't think that one will ever be able
to accurately certify that an application written in unmanaged code
doesn't perform a certain malicious activity. But I do believe that it
will be possible to make that assertion about managed and verifiable
code designed to execute in a secure partial trust environment.
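
For example, managed assemblies can carry declarative CAS requests in their
metadata, which gives the runtime (and anyone auditing the assembly) an
enforceable upper bound on what the code can do. A minimal sketch (the data
folder path is invented for illustration):

    using System.Security.Permissions;

    // Declarative requests stored in the assembly manifest: ask only for
    // read access to one data folder and explicitly refuse the right to
    // call unmanaged code, so the CLR will never grant more than that.
    [assembly: FileIOPermission(SecurityAction.RequestMinimum,
                                Read = @"C:\ExampleApp\Data")]
    [assembly: SecurityPermission(SecurityAction.RequestRefuse,
                                  UnmanagedCode = true)]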

  
> Complete security is never the ultimate destination, but rather mitigating
> risk through any acceptable means…

Absolutely, but when you have to trust 100% of the executed code, the
'only' mitigation activity that you can do is to prevent malicious code
from being executed in the first place (which is close to impossible in
today's computing environment).

Our current risk mitigation strategy seems to rely on the small number of
attackers and the large number of assets.

Dinis Cruz
Owasp .Net Project
www.owasp.net


___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [Owasp-dotnet] RE: [SC-L] 4 Questions: Latest IE vulnerability, Firefox vs IE security, User vs Admin risk profile, and browsers coded in 100% Managed Verifiable code

2006-04-06 Thread Dinis Cruz




Eric Swanson wrote:

> > What we need now is focus, energy and commitment to create a business
> > environment where it is possible (and profitable) the creation,
> > deployment and maintenance of applications executed in secure sandboxes.
>
> Traditionally, the quickest answer to a need like this is terrorism of some
> kind to get people to "wake up" to imminent threats.  But, since we're in
> the business of only helping and not hurting…

True, but the issue here is that the solution to this problem is not simple
and will take a huge amount of effort and focus from all parties involved.
So the later we start the process, the more painful it will be.

We have been lucky so far that the number of attackers with the intent,
technical capability, business-process understanding and opportunity has
been very small. It is also still hard today to make huge amounts of money
from digital assets (for example a data center) without resorting to
extortion or blackmail (I call this the 'monetization of digital assets').

So what you need to do is ask the question: "Will the current rate of
security enhancements that we are making to our systems stay higher than the
rate of growth in the attackers' numbers, skills, ability to monetize
digital assets, and opportunity?"

If those two lines (the 'security enhancements' and the 'attacker profile')
don't cross (the situation we live in today), we are ok. But if the lines do
cross over, then we will have a major crisis on our hands.

  
  
> How do we motivate management decisions to support developing more secure
> solutions?

You make them aware of the 'reality' of the situation, and of the
consequences of the technological decisions they make every day (i.e. make
them aware that the CIA (Confidentiality, Integrity and Availability) of
their IT systems is completely dependent on the honesty, integrity and
non-malicious intent of thousands and thousands of individuals,
organizations and governments).

  
> It's the same question as motivating better problem definitions, code
> requirements gathering, documentation, refactoring, performance
> optimizations, etc.  Time and budget.  The answer is to have an affordable,
> flexible development process and tools that support these motivations.

For me (a key part of) the answer is to have an '...affordable,
flexible development process and tools that support...' the
creation of applications which can be executed in secure partial trust
environments :)

  
> In .NET, code reflection and in-line XML comments coupled with formatting
> tools like "NDoc" made professional code documentation an instant option
> available to every .NET developer, even those on a shoe-string budget.

Yes, but unfortunately it also made developing partially trusted code very
expensive.

  
  
   
> The answer from OWASP might be to re-evaluate development processes and
> develop both sandboxes for clients as well as security patterns,
> components, wizards, and utilities for developers.

We could do that, but we would need far more resources than the ones we
currently have (and until Microsoft joins the party, it will be a pointless
exercise).

  
> We could re-write development processes like the hot topics "Agile
> Development" and "Extreme Programming" to include the SSDL, "Secure
> Software Development Lifecycle".  Perhaps we should be making a better
> business case for the SSDL, like the 2nd Edition of Code Complete's
> "Utterly Compelling and Foolproof Argument for Doing Prerequisites Before
> Construction" (Print ISBN: 0-7356-1967-0).

Agreed. I am a big fan of the SSDL and believe that it is an integral part
of the environment required to create secure applications.

  
> Our guides and vulnerability detection utilities just scratch the surface.

Yes, and they also (especially the tools) show how little interest there is
in this topic.

  
> The utilities in particular do not directly address our concerns for
> motivating the community, except by opening the eyes of the developers who
> actually use them and giving them something fun to play with.

Even then, most developers and managers don't have the security experience
to understand the implications of the security issues highlighted by these
tools (and when they do, they find that there is no market for more secure
apps/hosting environments).

  
  
> Given the many options that lay ahead of the group, my opinion would be to
> work on better incorporating the SSDL into popular development processes
> and making a clear-cut business case (with statistics) for its inclusion.
> To motivate participation, we continue to develop the utilities, patterns,
> components, and wizards for developers (both before and after the
> development release cycle).  Perhaps we take the online guides, checklists,
> and utilities and begin to formulate what SSDL looks like through OWASP's
> eyes.

That's the plan :)

Very soon we (Owasp) should be making an announcement which will talk
about this

Dinis Cruz
Owasp .Net Project
www.owasp.net

Re: [SC-L] 4 Questions: Latest IE vulnerability, Firefox vs IE security, User vs Admin risk profile, and browsers coded in 100% Managed Verifiable code

2006-04-06 Thread Dinis Cruz




der Mouse wrote:

> > At least one aspect of that is a design defect in TCP/IP, allowing
> > unprivileged users to create a port to receive inbound connections.
>
> I don't think it's fair to call that any kind of defect in TCP/IP.

I agree

> There is nothing at all in TCP or IP that says anything whatsoever about
> what privilege may or may not be necessary to establish a listen for
> incoming connections.  If you must call this a flaw, at least place the
> "flaw" where it actually is - in the implementation(s).

I am not sure that the problem is in the implementation either. From my
point of view, the problem is in allowing malicious applications (or
code) to have access to it in the first place.

If an application is a file compression utility, there is no reason why it
should have access to the TCP stack. And if it does need access (for
example, to check for updates), then those exceptions should be very well
controlled and monitored.
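
In CAS terms, 'very well controlled' could look something like this rough
sketch (the working folder and update URL are invented): the archiver gets
read/write access to its own working folder plus HTTP access to a single
update endpoint, and nothing else - in particular, no general socket access:

    using System.Net;                     // WebPermission, NetworkAccess
    using System.Security;
    using System.Security.Permissions;

    static class ArchiverPolicy
    {
        // The only permissions a file compression component should need:
        // its working folder plus one well-known update URL.
        public static PermissionSet Build(string workFolder)
        {
            PermissionSet grant = new PermissionSet(PermissionState.None);
            grant.AddPermission(new FileIOPermission(
                FileIOPermissionAccess.Read | FileIOPermissionAccess.Write,
                workFolder));
            grant.AddPermission(new WebPermission(
                NetworkAccess.Connect, "http://updates.example.com/archiver"));
            return grant;
        }
    }

Such a set could then be used as the grant set of the sandbox that hosts the
utility.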

  
> I'm also not convinced it's a flaw at all; calling it one sounds to me like
> viewing a TCP stack designed for one environment from the point of view of
> a drastically different environment.  In the environment most current TCP
> stacks were designed for, listening for connections on a "high" port should
> not be a restricted operation.  In calling that a defect, you appear to be
> looking on it from a point of view which disagrees with that, which
> actually means just that you've picked the wrong TCP stack for your
> environment, not that there's anything wrong with the stack for its design
> environment.

If this were doable (creating custom TCP stacks) and practical, maybe that
would be an alternative (since there is no better security countermeasure
than the one that removes the 'exploitable' target).


Dinis Cruz
Owasp .Net Project
www.owasp.net


___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


Re: [SC-L] 4 Questions: Latest IE vulnerability, Firefox vs IE security, User vs Admin risk profile, and browsers coded in 100% Managed Verifiable code

2006-04-06 Thread Dinis Cruz




Comment inline,

ljknews wrote:

> At 11:39 AM + 3/25/06, Dinis Cruz wrote:
>
> > 3) Since my assets as a user exist in user land, isn't the risk profile
> > of malicious unmanaged code (deployed via IE/Firefox) roughly the same
> > if I am running as a 'low privileged' user or as administrator? (at the
>
> If the administrator's assets are compromised, all users of the system
> will have their assets compromised.

Sure, but if the main assets exist within that user's space, then the
risk is similar.  

Look at your own computer: even if you use a non-admin account (like I am
doing at the moment on my PowerBook G4), if a malicious attacker is after
your assets (email, VPNs, documents, credit card details, access to your
online banking accounts, the ability to attack other computers on your local
network, etc.), then he can do all of that from user-land (there is no need
for admin privileges).

  
> > end of the day, in both cases the malicious code will still be able to:
> > access my files, access all websites that I have stored credentials in
> > my browser (cookies or username / passwords pairs), access my VPNs,
>
> Certainly users should not store credentials in software on a computer.

Ok, but this is impossible today (at least in Windows). In a normal user
session, you will have credentials (or their equivalent) in multiple
user-land processes: from login accounts used in your browser to valid
Kerberos tickets (or, more to the point, valid Windows security handles
(i.e. tokens), which are as good as stored credentials).

The bottom line is, if your browser can do it, so can malicious code
executed via your browser. 

  
> > attack other computers on the local network, install key loggers,
>
> If one is not the administrator, there should be no way to install
> software.  If there is, the operating system is underprotected.

Who said that? I might not be able to put it under the 'Program Files'
folder, add files to the Windows directory or write to some sections of the
registry. But since I can run executables, I can perform all sorts of
malicious actions.

A good example is .Net applications, which can be executed with no
installation.

  
> > establish two way communication with an Internet based botnet, etc ...
>
> At least one aspect of that is a design defect in TCP/IP, allowing
> unprivileged users to create a port to receive inbound connections.
> Other networking protocols avoid that flaw.

This is not a design flaw in TCP/IP; the problem here is that the OS and the
run-time Sandbox (if there is one) are allowing this to occur.

Remember that if I can talk HTTP with an external computer (located
somewhere on the Internet), then I can use it to establish a two-way
communication channel.
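
A trivial sketch of why (the URL is just a placeholder): one ordinary
outbound HTTP request already moves data in both directions, since the
request body goes out and the response body comes back in:

    using System.Net;   // WebClient

    class HttpIsTwoWay
    {
        static void Main()
        {
            using (WebClient client = new WebClient())
            {
                // The upload is the outbound half of the channel, the
                // response is the inbound half - no listening port needed.
                string reply = client.UploadString(
                    "http://remote-server.example.com/inbox", "status=hello");
                // 'reply' can contain whatever the remote side wants to
                // send back (including instructions).
            }
        }
    }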

Can you really defend the position that every application executed on our
computers (from WinZip upwards) should be able to connect to the Internet,
download code and execute it with the privileges of the logged-in user?

Because that is what they can do today (if that computer is connected
to the Internet :)

Dinis Cruz
Owasp .Net Project
www.owasp.net


___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php