[SC-L] 4 Questions: Latest IE vulnerability, Firefox vs IE security, User vs Admin risk profile, and browsers coded in 100% Managed Verifiable code

2006-03-27 Thread David A. Wheeler

Dinis Cruz said:


Another day, and another unmanaged-code remote command execution in IE.

What is relevant in the ISS alert (see end of this post) is that IE 7
beta 2 is also vulnerable, which leads me to this post's questions:

1) Will IE 7.0 be more secure than IE 6.0? (i.e. two years after its
release, will the number of exploits and attacks be smaller than it is
today? And will it be a trustworthy browser?)


It will be "more secure", in the sense that when you start with
something that's hideously insecure, any effort is likely to make some
sort of improvement.  It might actually be noticeably more secure --
I certainly hope so -- but only time will answer that question.
MS still seems to consider IE as "baked into" the OS, something
that was noted as one of its fundamental design flaws years ago,
so there's reason to be skeptical.



2) Given that Firefox is also built on unmanaged code, isn't Firefox as
insecure as IE, and just as dangerous?


Actually, your presumption is not true.  A significant amount
of Firefox is written using XUL/Javascript, which is managed
(it has auto garbage collection, etc.).  You cannot break the
typesafety in Javascript; any attempt is stopped as a runtime error.
(Checking is all dynamic, rather than partly static, but it's ALWAYS done;
in contrast, many of .NET's checks are skipped.)
Many Firefox runtime libraries are written in C/C++, but I believe that
is true for many of the low-level .NET libraries too (many of
the .NET libraries eventually call out to unmanaged libraries).
Comparing their implementations is actually not easy to do.
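
To make that point concrete, here is a minimal Java sketch (illustrative
only, not from the original post): in a managed, verified runtime an
attempt to break type safety or memory safety is stopped as a controlled
runtime error rather than corrupting memory.

public class TypeSafetyDemo {
    public static void main(String[] args) {
        Object boxed = Integer.valueOf(42);
        try {
            // The runtime checks the real type of 'boxed' and refuses the
            // cast, instead of reinterpreting raw memory.
            String s = (String) boxed;
            System.out.println(s);
        } catch (ClassCastException e) {
            System.out.println("Cast rejected at runtime: " + e);
        }

        int[] data = new int[4];
        try {
            // Every array access is bounds-checked.
            data[10] = 1;
        } catch (ArrayIndexOutOfBoundsException e) {
            System.out.println("Out-of-bounds write caught: " + e);
        }
    }
}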



3) Since my assets as a user exist in user land, isn't the risk profile
of malicious unmanaged code (deployed via IE/Firefox) roughly the same
whether I am running as a 'low privileged' user or as administrator?


No, I don't think so.  Damage and system recovery are vastly different.
If an "ordinary" user runs malicious unmanaged code, without
"admin" privileges, then files owned by others shouldn't be tamperable
(and may not be openable).  More importantly, cleanup is easy; you don't
need to reload the OS, because the OS should be undamaged.

That's assuming you CAN reload the OS; many Windows laptops
don't have a safe way to reload the OS, and the only reload possible
is from a hard drive that may be corrupted.  If you can't reload
from CDs/DVDs, then you should essentially NEVER run as admin.

Of course, running without admin privileges is unlikely in practice.
The last stats I saw said that 70% of all Windows apps REQUIRE admin,
so Windows users typically run with excess privileges.  That is a key
practical reason why Windows systems tend to be so much less secure in
practice than they should be; users (for understandable reasons)
often run with so many unnecessary privileges that they easily get
into trouble.  Having "managed code" with excess privileges is
not a real help. I have hope that this overuse of admin
will diminish over time.



4) Finally, isn't the solution for the creation of secure and
trustworthy Internet Browsing environments the development of browsers
written in 100% managed and verifiable code, which execute in secure
and very restricted Partially Trusted Environments (under .Net, Mono or
Java)? This way, the risk of buffer overflows will be very limited,


I think that would help, though less than you might think.
Many Linux systems are now highly resistant to buffer overflows
(Fedora Core has a number of countermeasures, and they're adding more).
There's also a C/C++ compiler option under Windows that adds StackGuard-type
protection; if programmers use it, their programs gain some protection
against buffer overflows on Windows.



This last question/idea is based on something that I have been defending
for quite a while now (a couple of years), which is: "Since it is impossible
to create bug/vulnerability-free code, our best option for creating
more secure and safer computing environments (compared to the ones we have
today) is to execute those applications in sandboxed environments".


I think that's a good idea, and I have said so myself.
But the real payoff is writing code specifically designed to defend
itself against malicious attack.  If you choose safer environments
AS PART OF that thrust, you'll do well.  But you really need to
write software with a paranoid mindset, working hard to counter
security attacks. It's the mindset, not the language, that is key.



Unfortunately, today there is NO BUSINESS case to do this. The paying
customers are not demanding products that don't have the ability to
'own' their data center,


There is one and only one requirement for change: customers must
decide to use, and switch to, products with better security. That's all.
I don't think that liability suits will be helpful for general-purpose
software, for a variety of reasons (new thread, out of scope here).

Let me repeat:
All that needs to happen is that customers CHOOSE THEIR SUPPLIER
based on which one is more secure.  When customers
do that, the market

Re: [OWASP-LEADERS] Re: [Owasp-dotnet] RE: [SC-L] 4 Questions: Latest IE vulnerability, Firefox vs IE security, User vs Admin risk profile, and browsers coded in 100% Managed Verifiable code

2006-03-27 Thread Stephen de Vries


On 27 Mar 2006, at 11:02, Jeff Williams wrote:



I am not a Java expert, but I think that the Java Verifier is NOT used on
Apps that are executed with the Security Manager disabled (which I believe
is the default setting) or are loaded from a local disk (see "... applets
loaded via the file system are not passed through the byte code verifier"
in http://java.sun.com/sfaq/)

I believe that as of Java 1.2, all Java code except the core libraries must
go through the verifier, unless it is specifically disabled (java -noverify).


I had the same intuition about the verifier, but have just tested
this and it is not the case.  It seems that -noverify is the
default setting! If you want to verify classes loaded from the local
filesystem, then you need to explicitly add -verify to the command line.
I tested this by compiling 2 classes where one accesses a public
member of the other, then changing that member to private and
recompiling only the second class.  Tested on:

Jdk 1.4.2 Mac OS X
Jdk 1.5.0 Mac OS X
Jdk 1.5.0 Win XP

all behave the same.

[~/data/dev/applettest/src]java -cp . FullApp
Noone can access me!!
[~/data/dev/applettest/src]java -cp . -verify FullApp
Exception in thread "main" java.lang.IllegalAccessError: tried to access
field MyData.secret from class FullApp
	at FullApp.main(FullApp.java:23)


Using the same code with an Applet loaded from the filesystem throws  
an IllegalAccessError exception as it should.
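
For readers who want to reproduce the test, here is a minimal sketch of the
two classes it describes (the class and field names come from the output
above; the exact source is assumed):

// MyData.java -- compile first with 'secret' public, then change it to
// private and recompile only this file.
public class MyData {
    public String secret = "Noone can access me!!"; // change to private for the second compile
}

// FullApp.java -- compiled against the public version of MyData.
public class FullApp {
    public static void main(String[] args) {
        MyData data = new MyData();
        // Without -verify this access still succeeds after 'secret' has been
        // made private; with -verify (or for applets) it fails with
        // java.lang.IllegalAccessError, as shown above.
        System.out.println(data.secret);
    }
}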



--
Stephen de Vries
Corsaire Ltd
E-mail: [EMAIL PROTECTED]
Tel:+44 1483 226014
Fax:+44 1483 226068
Web:http://www.corsaire.com





___
Secure Coding mailing list (SC-L)
SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php


FW: [SC-L] 4 Questions: Latest IE vulnerability, Firefox vs IE security, User vs Admin risk profile, and browsers coded in 100% Managed Verifiable code

2006-03-27 Thread Michael S Hines
Isn't it possible to break out of the sandbox even with managed code? (That
is, can't managed code call out to unmanaged code, i.e. a Java call to C++?)
I was thinking this was documented for Java - perhaps for various flavors of
.Net too?
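
For reference, the Java-to-C++ route being referred to is JNI; a minimal
sketch of the managed side (the library name and native method here are
illustrative, not from the post):

// Managed Java code calling out to an unmanaged (C/C++) shared library.
public class NativeBridge {
    static {
        // Loads unmanaged code into the process; from here on the process is
        // only as safe as that C/C++ code. A sandbox can refuse this load
        // (it requires RuntimePermission "loadLibrary.nativehelper").
        System.loadLibrary("nativehelper");
    }

    // Implemented in C/C++ against the JNI headers.
    public static native int doUnmanagedWork(String input);

    public static void main(String[] args) {
        System.out.println(doUnmanagedWork("hello"));
    }
}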

---
Michael S Hines
[EMAIL PROTECTED] 

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of
Dinis Cruz
Sent: Saturday, March 25, 2006 6:39 AM
To: '[EMAIL PROTECTED]'; [EMAIL PROTECTED];
SC-L@securecoding.org; full-disclosure@lists.grok.org.uk
Cc: [EMAIL PROTECTED]
Subject: [SC-L] 4 Questions: Latest IE vulnerability, Firefox vs IE security, 
User vs
Admin risk profile, and browsers coded in 100% Managed Verifiable code

Another day, and another unmanaged-code remote command execution in IE.

What is relevant in the ISS alert (see end of this post) is that IE 7
beta 2 is also vulnerable, which leads me to this post's questions:

1) Will IE 7.0 be more secure than IE 6.0? (i.e. two years after its
release, will the number of exploits and attacks be smaller than it is
today? And will it be a trustworthy browser?)

2) Given that Firefox is also built on unmanaged code, isn't Firefox as
insecure as IE, and just as dangerous?

3) Since my assets as a user exist in user land, isn't the risk profile
of malicious unmanaged code (deployed via IE/Firefox) roughly the same
whether I am running as a 'low privileged' user or as administrator? (at the
end of the day, in both cases the malicious code will still be able to:
access my files, access all websites for which my browser has stored
credentials (cookies or username/password pairs), access my VPNs,
attack other computers on the local network, install key loggers,
establish two-way communication with an Internet-based bot net, etc ...
(basically everything except rooting the box, disabling AVs and
installing persistent hooks (unless of course this malicious code
executes a successful escalation-of-privilege attack)))

4) Finally, isn't the solution for the creation of secure and
trustworthy Internet Browsing environments the development of browsers
written in 100% managed and verifiable code, which execute in secure
and very restricted Partially Trusted Environments (under .Net, Mono or
Java)? This way, the risk of buffer overflows will be very limited, and
when logic or authorization vulnerabilities are discovered in this
'Partially Trusted IE' the 'Secure Partially Trusted environment' will
limit what the malicious code (i.e. the exploit) can do.
   
This last question/idea is based on something that I have been defending
for quite a while now (a couple of years), which is: "Since it is impossible
to create bug/vulnerability-free code, our best option for creating
more secure and safer computing environments (compared to the ones we have
today) is to execute those applications in sandboxed environments".

Basically we need to be able to safely handle malicious code, executed
in our user's session, in a web server, in a database engine, etc... Our
current security model is based on the concept of preventing malicious
code from being executed (something which is becoming harder and harder
to do), versus the model of 'malicious payload containment'
(i.e. Sandboxing).

And in my view, creating sandboxes for unmanaged code is very hard or
even impossible (at least in the current Windows Architecture), so the
only solution that I am seeing at the moment is to create sandboxes for
managed and verifiable code.

Fortunately, both .Net and Java have architectures that allow the
creation of these 'secure' environments (CAS and Security Manager).
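
On the Java side, a minimal sketch of what such a sandbox looks like in
practice (illustrative, not from the post): once a SecurityManager is
installed, the policy decides what the code may touch, and a disallowed
file write becomes a catchable error instead of a silent success.

public class SandboxDemo {
    public static void main(String[] args) {
        // Turn on the Java sandbox for everything that runs after this point.
        System.setSecurityManager(new SecurityManager());
        try {
            java.io.FileWriter out = new java.io.FileWriter("owned.txt");
            out.write("should never get here under the default policy");
            out.close();
        } catch (SecurityException e) {
            // With the default (restrictive) policy, the write is denied.
            System.out.println("Sandbox blocked the write: " + e);
        } catch (java.io.IOException e) {
            System.out.println("I/O error: " + e);
        }
    }
}

A custom policy can then grant back only the specific permissions the
application needs, e.g. java -Djava.security.policy=my.policy SandboxDemo.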

Unfortunately, today there is NO BUSINESS case to do this. The paying
customers are not demanding products that don't have the ability to
'own' their data center, software companies don't want to invest in the
development of such applications, nobody is liable for anything,
malicious attackers have not exploited this insecure software
development and deployment environment (they still have too much
money to harvest via Spyware/Spam) and the Framework developers
(Microsoft, Sun, Novell, IBM, etc...) don't want to rock the boat and
explain to their clients that they should be demanding (and only
paying for) applications that can be safely executed in their corporate
environment (i.e. ones where malicious activities are easily detectable,
preventable and contained (something which I believe we only have a
chance of doing with managed and verifiable code)).

I find it ironic that Microsoft now looks at Oracle and says 'We
are so much better than them on Security', when the reason why Oracle
has not cared (so far) about security is the same reason why Microsoft
doesn't make any serious effort to promote and develop Partially Trusted
.Net applications: there is no business case for either. Btw, if Microsoft
publicly admitted that the current application development practice of
ONLY creating Full Trust code IS A MASSIVE PROBLEM, and i

[SC-L] A Modular Approach to Data Validation in Web Applications

2006-03-27 Thread Stephen de Vries


A Corsaire White Paper:

A Modular Approach to Data Validation in Web Applications

Outline:

Data that is not validated, or is poorly validated, is the root cause of a
number of serious security vulnerabilities affecting applications.
This paper presents a modular approach to performing thorough data
validation in modern web applications, so that the benefits of modular,
component-based design (extensibility, portability and re-use) can be
realised. It starts with an explanation of the vulnerabilities
introduced through poor validation and then goes on to discuss the
merits and drawbacks of a number of common data validation strategies,
such as:

- Validation in an external Web Application Firewall;
- Validation performed in the web tier (e.g. Struts); and
- Validation performed in the domain model.
Finally, a modular approach is introduced together with practical  
examples of how to implement such a scheme in a web application.


Download:

http://www.corsaire.com/white-papers/060116-a-modular-approach-to-data-validation.pdf









Re: [Owasp-dotnet] RE: [SC-L] 4 Questions: Latest IE vulnerability, Firefox vs IE security, User vs Admin risk profile, and browsers coded in 100% Managed Verifiable code

2006-03-27 Thread ljknews
At 2:34 AM +0100 3/27/06, Dinis Cruz wrote:

> PS: For the Microsofties that are reading this (if any), sorry for
> the irony and I hope I am not offending anyone, but WHEN are you going
> to join this conversation? (i.e. reply to these posts)
>
> I can only see 4 reasons for your silence: a) you are not reading these
> emails, b) you don't care about these issues, c) you don't want to talk
> about them, or d) you don't know what to say.

e) Your employer has a company policy against such participation.
-- 
Larry Kilgallen


Re: [SC-L] 4 Questions: Latest IE vulnerability, Firefox vs IE security, User vs Admin risk profile, and browsers coded in 100% Managed Verifiable code

2006-03-27 Thread der Mouse
> At least one aspect of that is a design defect in TCP/IP, allowing
> unprivileged users to create a port to receive inbound connections.

I don't think it's fair to call that any kind of defect in TCP/IP.
There is nothing at all in TCP or IP that says anything whatsoever
about what privilege may or may not be necessary to establish a listen
for incoming connections.  If you must call this a flaw, at least place
the "flaw" where it actually is - in the implementation(s).

I'm also not convinced it's a flaw at all; calling it one sounds to me
like viewing a TCP stack designed for one environment from the point of
view of a drastically different environment.  In the environment most
current TCP stacks were designed for, listening for connections on a
"high" port should not be a restricted operation.  In calling that a
defect, you appear to be looking at it from a point of view which
disagrees with that premise, which really just means that you've picked
the wrong TCP stack for your environment, not that there's anything
wrong with the stack for its design environment.
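
As a concrete illustration of the behaviour under discussion (a sketch, not
from the post): on typical stacks any unprivileged process can listen on a
high port, while the traditionally "privileged" range sits below 1024.

import java.net.ServerSocket;

public class HighPortListen {
    public static void main(String[] args) throws Exception {
        // No special privilege is needed to listen on a high port.
        ServerSocket server = new ServerSocket(8080);
        System.out.println("Listening on port " + server.getLocalPort());
        server.close();
        // By contrast, new ServerSocket(80) would normally fail for a
        // non-root user on Unix-like systems.
    }
}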

/~\ The ASCII   der Mouse
\ / Ribbon Campaign
 X  Against HTML   [EMAIL PROTECTED]
/ \ Email!   7D C8 61 52 5D E7 2D 39  4E F1 31 3E E8 B3 27 4B


[SC-L] Re: [Full-disclosure] 4 Questions: Latest IE vulnerability, Firefox vs IE security, User vs Admin risk profile, and browsers coded in 100% Managed Verifiable code

2006-03-27 Thread Pilon Mntry

> of creating a full-featured browser, from scratch, with usability as
> good as IE and Firefox strikes me as a fairly tricky project.

I agree.

> What about using the facilities already provided by the OS to enforce
> the sandbox?

But then, will it be possible to prevent buffer overflows while still
running unmanaged code?

Very nice points by Dinis, esp. the one about the "advantages" of using
our boxes with fewer privileges (for internet browsing).

-pilon

--- Brian Eaton <[EMAIL PROTECTED]> wrote:

> On 3/25/06, Dinis Cruz <[EMAIL PROTECTED]> wrote:
> > 4) Finally, isn't the solution for the creation of secure and
> > trustworthy Internet Browsing environments the development of browsers
> > written in 100% managed and verifiable code, which execute in secure
> > and very restricted Partially Trusted Environments (under .Net, Mono or
> > Java)? This way, the risk of buffer overflows will be very limited, and
> > when logic or authorization vulnerabilities are discovered in this
> > 'Partially Trusted IE' the 'Secure Partially Trusted environment' will
> > limit what the malicious code (i.e. the exploit) can do.
>
> I am less than enthusiastic about most of the desktop java
> applications I use.  They are, for the most part, sluggish, memory
> gobbling beasts, prone to disintegration if I look at them cross-eyed
> or click the mouse too frequently.
>
> Usability problems with java applications are not necessarily due to
> managed code, of course, but the idea of creating a full-featured
> browser, from scratch, with usability as good as IE and Firefox
> strikes me as a fairly tricky project.  What about using the
> facilities already provided by the OS to enforce the sandbox?  Rather
> than scrapping the existing codebases, start running them with
> restricted rights.  Use mandatory access control systems to make sure
> the browser doesn't overstep its bounds.
>
> Regards,
> Brian




[SC-L] Re: [Owasp-dotnet] RE: 4 Questions: Latest IE vulnerability, Firefox vs IE security, User vs Admin risk profile, and browsers coded in 100% Managed Verifiable code

2006-03-27 Thread Dinis Cruz
Hi Jeff, comments inline

Jeff Williams wrote:
> Great topics.
>
> I'm a huge fan of sandboxes, but Dinis is right, the market hasn't really
> gotten there yet. No question that it would help if it was possible to run
> complex software like a browser inside a sandbox that restricted its ability
> to do bad things, even if there are vulnerabilities (or worse -- malicious
> code) in them.  
Absolutely, and do you see any other alternative? (or should we just
continue to TRUST every bit of code that is executed on our computers,
and TRUST every single developer/entity that had access to that code
during its development and deployment?)
>  I'm terrified about the epidemic use of libraries that are
> just downloaded from wherever (in both client and server applications). All
> that code can do *whatever* it wants in your environments folks!
>
>   
Yes they can, and one of my original questions was: "When considering the
assets, is there REALLY any major difference between running code as a
normal user versus as an administrator?"
> Sandboxes are finally making some headway. Most of the Java application
> servers (Tomcat included) now run with their sandbox enabled (albeit with a
> weak policy). And I think the Java Web Start system also has the sandbox
> enabled.  So maybe we're making progress.
>   
True, but are these really secure sandboxes?

I am not a Java expert so I can't give you specific examples, but on the
.Net Framework a Partially Trusted 'Sandbox' which contains an
UnmanagedCode, MemberAccess Reflection or SkipVerification permission
should not be called a 'Sandbox', since it can be easily compromised.
> But, if you've ever tried to configure the Java security policy file, use
> JAAS, or implement the SecurityManager interface, you know that it's *way*
> too hard to implement a tight policy this way.
And .Net has exactly the same problem. It is super complex to create a
.Net application that can be executed in a secure Partially Trusted Sandbox.
>   You end up granting all
> kinds of privileges because it's too difficult to do it right.  
And the new VS2005 makes this allocation of privileges very easy: "Mr.
developer, your application crashed because it didn't have the required
permissions. Do you want to add these permissions, Yes/No?"
(developer clicks yes) ... "You are adding the permission
UnmanagedCodePermission, are you sure, Yes/No?" ... (developer clicks yes
(with support from the application architect, and confident that all
competitor applications require similar permissions))
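
To make the "tight policy" problem concrete, here is a sketch of the kind of
fine-grained grants a Java policy would need for a browser-like application
(the paths and hosts are purely illustrative):

import java.io.FilePermission;
import java.net.SocketPermission;
import java.security.Permissions;
import java.util.PropertyPermission;

public class TightPolicySketch {
    // Every file path, host and property has to be enumerated explicitly,
    // which is why real-world policies tend to drift towards AllPermission.
    public static Permissions browserLikePermissions() {
        Permissions perms = new Permissions();
        perms.add(new FilePermission("/home/user/.cache/browser/-", "read,write,delete"));
        perms.add(new SocketPermission("*:80", "connect,resolve"));
        perms.add(new SocketPermission("*:443", "connect,resolve"));
        perms.add(new PropertyPermission("user.home", "read"));
        // ...and so on, for every resource the application legitimately needs.
        return perms;
    }

    public static void main(String[] args) {
        System.out.println(browserLikePermissions());
    }
}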
> And only the
> developer of the software could reasonably attempt it, which is backwards,
> because it's the *user* who really needs it right. 
Yes, it is the user's responsibility (i.e. their IT Security and Server
Admin staff) to define the secure environment (i.e. the Sandbox) in which
3rd-party or internally developed applications are executed inside their
data center.

> It's possible that sandboxes are going the way of multilevel security (MLS).
> A sort of ivory tower idea that's too complex to implement or use. 
I don't agree that the problem is too complex. What we have today is
very complex architectures / systems with too many interconnections.

Simplify the lot, get enough resources with the correct focus involved,
and you will see that it is doable.
> But it
> seems like a really good idea that we should try to make practical. But even
> if they do start getting used, we can't just give up on getting software
> developers to produce secure code.  There will always be security problems
> that sandboxes designed for the platform cannot help with.
>   
Of course, I am not saying that developers should produce insecure code.
I am the first to defend that developers must have a firm and solid
understanding of the tools and technologies that they use and, just as
important, the security implications of their code.
> I'm with Dinis that the only way to get people to care is to fix the
> externalities in the software market and put the burden on those who can
> most easily avoid the costs -- the people who build the software. Maybe then
> the business case will be more clear.
>   
Yes, but the key here is not money (since that would also kill
large chunks of the Open Source world).

One of the solutions that I like is the situation where all software
companies have (by law) to disclose information about the
vulnerabilities that they are aware of (look at the eEye model of
disclosing information about 'reported but unpatched vulnerabilities').

Basically, give the user data (as in information) that he can digest and
understand, and you will see the user(s) making the correct decision(s).
> (Your last point about non-verified MSIL is terrifying. I can't think of any
> reason why you would want to turn off verification -- except perhaps startup
> speed. But that's a terrible tradeoff.)
>   
See my previous post (on this same thread) about this issue, but I think
that .Net is not alone in skipping verification fo

Re: [Owasp-dotnet] RE: [SC-L] 4 Questions: Latest IE vulnerability, Firefox vs IE security, User vs Admin risk profile, and browsers coded in 100% Managed Verifiable code

2006-03-27 Thread Dinis Cruz




Hi Kevin

  Indeed this is somewhat surprising that there is no byte-code verification
  in place, especially for strong typing, since when you think about it,
  this is not too different than the "unmanaged" code case.

Well, there is some byte code verification. For example, if you
manipulate MSIL so that you create calls to private members (something
that you can't compile with VS.NET) you will get a runtime error saying
that you tried to access a private member. So in this case there is
some verification.

What I found surprising was how little verification is done by the CLR
when verification is disabled; see for example these issues:

- Possible Type Confusion issue in .Net 1.1 (only works in Full Trust)
- Another Full Trust CLR Verification issue: Exploiting Passing Reference
  Types by Reference
- Another Full Trust CLR Verification issue: Changing Private Field using
  Proxy Struct
- Another Full Trust CLR Verification issue: changing the Method Parameters
  order
- C# readonly modifier is not enforced by the CLR (when in Full Trust)
- Also related: JIT prevents short overflow (and PeVerify doesn't catch it)
  and ANSI/UNICODE bug in System.Net.HttpListenerRequest

Basically, Microsoft decided against performing verification on Full
Trust code (which is 99% of the .Net code out there remember). Their
argument (I think) is: "if it is Full Trust then it can jump to
unmanaged code anyway, so all bets are off" (I am sure I have seen this
documented somewhere in a Microsoft book, KB article or blog, but can't
seem to find it (for the Microsofties that are reading this (if any),
can  you post some links please? thanks))

Apart from a basic problem which is "You cannot trust Full Trust code
EVEN if it doesn't make ANY direct unmanaged call or reflection" there
is a much bigger one.

When (not if) applications start to be developed so that they run in
secure Partially Trusted environments, I think that developers will
find that their code suffers an immediate performance hit due
to the fact that Verification is now being done on their code (again,
for the Microsofties that are reading this (if any), can you post some
data related to the performance impact of the current CLR Verification
process? thanks)

  Apparently the whole "managed" versus "unmanaged" code only has to do
  with whether or not garbage collection is attempted.

Yes, although I still think that we should fight for the words "Managed
Code" to include verification.


  However, the real question is "is this true for ALL managed code or
only managed code in the .NET Framework"? 

I am not a Java expert, but I think that the Java Verifier is NOT used
on Apps that are executed with the Security Manager disabled (which I
believe is the default setting) or are loaded from a local disk (see
"... applets loaded via the file system are not passed through the byte
code verifier" in http://java.sun.com/sfaq/) 

  Of course if software quality improvement does not take place in these
  companies, their signing would be somewhat vacuous. But it would be
  better than nothing, since at least all such code would not be fully
  trusted by default.

Yes, and note that I strongly defend that: "All local code must NOT be
given Full Trust by default" (at the moment it is)

Dinis

PS: For the Microsofties that are reading this (if any), sorry for
the irony and I hope I am not offending anyone, but WHEN are you
going to join this conversation? (i.e. reply to these posts)

I can only see 4 reasons for your silence: a) you are not reading these
emails, b) you don't care about these issues, c) you don't want to talk
about them, or d) you don't know what to say.

Can you please engage and publicly participate in this conversation ...

Thanks

