Re: [SC-L] BSIMM-V Article in Application Development Times

2014-01-22 Thread Stephen de Vries

For anyone interested in this topic and working in appsec and/or dev, there’s a 
survey by the Trusted Software Alliance which touches on some of these 
questions here: https://www.surveymonkey.com/s/Developers_and_AppSec 




> On Jan 7, 2014, at 8:07 PM, Christian Heinrich 
>  wrote:
> 
>> Stephen,
>> 
>> On Sat, Jan 4, 2014 at 8:12 PM, Stephen de Vries
>>  wrote:
>>> Leaving the definition of agile aside for the moment, doesn’t the fact that 
>>> the BSIMM measures
>>> organisation wide activities but not individual dev teams mean that we 
>>> could be drawing inaccurate
>>> conclusions from the data?  E.g.  if an organisation says it is doing Arch 
>>> reviews, code reviews and
>>> sec testing, it doesn’t necessarily mean that every team is doing all of 
>>> those activities, so it may give
>>> the BSIMM reader a false impression of the use of those activities in the 
>>> real world.
>>> 
>>> In addition to knowing which activities are practiced organisation wide, it 
>>> would also be valuable to
>>> know which activities work well on a per-team or per-project basis.
>> 
>> My reading of the "Roles" section of BSIMM-V.pdf is that the people
>> interviewed for the BSIMM sample are:
>> 1. Executive Leadership (or CISO, VP of Risk, CSO, etc)
>> 2. Everyone else within the Software Security Group (SSG)
>> 
>> What you are asking to be included is what is referred to as the
>> "Satellite" within BSIMM-V.pdf and I believe this may also require the
>> inclusion of http://cmmiinstitute.com/cmmi-solutions/cmmi-for-development/
>> too (why not :) ).
>> 
>> The issue with this is that it would invalidate the statistics from
>> the prior five BSIMM releases due to the inclusion of new questions;
>> in addition, these new statistics were not gathered over time
>> either, hence the improvements measured over time within BSIMM would
>> be invalid too due to the new dataset.
>> 
>> Furthermore, Gary, Sammy and Brian have limited time to interview all
>> 67 BSIMM participating firms.
>> 
>> However, I would be interested to know the "BSIMM Advisory Board's"
>> (i.e. http://bsimm.com/community/) view on this, and whether it would
>> be possible to undertake this additional sampling within their own
>> BSIMM participating firms to determine if additional value would be
>> gained for BSIMM?  However, I suspect that an objective measurement
>> would be too hard to quantify due to the internal politics of each
>> BSIMM participating firm, but I could be wrong.
> 


___
Secure Coding mailing list (SC-L) SC-L@securecoding.org
List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
List charter available at - http://www.securecoding.org/list/charter.php
SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
as a free, non-commercial service to the software security community.
Follow KRvW Associates on Twitter at: http://twitter.com/KRvW_Associates
___


Re: [SC-L] BSIMM-V Article in Application Development Times

2014-01-07 Thread Stephen de Vries

Hi Sammy, Antti,

On 20 Dec 2013, at 17:29, Sammy Migues  wrote:

> Also, in nearly all cases, it would be very hard to characterize an entire 
> firm or even an entire business unit in larger firms as "Agile" or not. Many 
> larger firms use "Agile" for only a small percentage of projects 


Leaving the definition of agile aside for the moment, doesn’t the fact that the 
BSIMM measures organisation wide activities but not individual dev teams mean 
that we could be drawing inaccurate conclusions from the data?  E.g.  if an 
organisation says it is doing Arch reviews, code reviews and sec testing, it 
doesn’t necessarily mean that every team is doing all of those activities, so 
it may give the BSIMM reader a false impression of the use of those activities 
in the real world.

In addition to knowing which activities are practiced organisation wide, it 
would also be valuable to know which activities work well on a per-team or 
per-project basis.

On 17 Dec 2013, at 22:01, Antti Vähä-Sipilä  wrote:
> 
> Moreover, I think this sort of split would be largely arbitrary. Especially 
> for large companies, it's often not straightforward to classify them as agile 
> or non-agile. Many companies also have mixed-mode dev shops with waterfall 
> product management bolted on top of an agile dev team, or an agile dev team 
> throwing code over the wall to a traditional ops team, or a mix of agile and 
> non-agile teams working side by side. 

Agree that the split between agile and not-agile would be arbitrary at the 
organisation wide level.  But deciding on an arbitrary line, or better yet an 
arbitrary scale of agility on a per-project level shouldn’t be too difficult.  
If we need to start somewhere, then I think borrowing from devops couldn’t 
hurt, where they measure agility by:
- frequency of code deployments
- lead time from code deploy to running in production
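The two measures above can be made concrete with a small sketch. This is purely illustrative (the deploy log, function name and timestamps are all made up, not drawn from any real survey or tool): given pairs of commit and production-deploy times, it derives deploy frequency and average lead time.

```python
from datetime import datetime, timedelta

def agility_metrics(deploys):
    """Compute the two DevOps-style agility measures from a list of
    (commit_time, deploy_time) pairs: deploy frequency per day and
    average commit-to-production lead time."""
    if len(deploys) < 2:
        raise ValueError("need at least two deploys to measure frequency")
    deploy_times = sorted(d for _, d in deploys)
    span_days = (deploy_times[-1] - deploy_times[0]).total_seconds() / 86400
    frequency_per_day = (len(deploy_times) - 1) / span_days
    lead_times = [d - c for c, d in deploys]
    avg_lead = sum(lead_times, timedelta()) / len(lead_times)
    return frequency_per_day, avg_lead

# Hypothetical deploy log: (code committed, running in production)
log = [
    (datetime(2014, 1, 1, 9), datetime(2014, 1, 1, 17)),
    (datetime(2014, 1, 2, 10), datetime(2014, 1, 2, 16)),
    (datetime(2014, 1, 3, 11), datetime(2014, 1, 3, 15)),
]
freq, lead = agility_metrics(log)
print(freq)  # roughly one deploy per day
print(lead)  # average commit-to-production lead time (6:00:00 here)
```

A "more agile" team simply shows a higher frequency and a shorter lead time on the same two numbers, which is what makes them usable as an arbitrary-but-consistent scale.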

> In addition, I don't think you can measure agility through purely measuring 
> cadence. The point of being agile is to be able to respond to change, and not 
> all companies _need_ to be reinventing their product daily like a budding 
> startup with an existential crisis. Although continuous integration would 
> probably help the majority of companies, on the product management (i.e., 
> backlog management) side, it depends on your customers and industry whether 
> more is indeed better.

With the BSIMM’s objective of just describing activities, it wouldn’t be 
necessary to promote agile or agile security practices.  But it would be 
interesting to know, if an organisation happens to have chosen agile or 
continuous delivery as its software dev methodology, how they are integrating 
security into that process.  The burning questions I have regarding 
agile, continuous delivery and security are:
- What mixture of the BSIMM activities work well in a continuous delivery style 
environment?
- As you move from less-agile to more-agile, which activities tend to fall away 
and which are more emphasised?
- How are the security-specialist and time-heavy activities, like attack models, 
sec arch review and pentesting, performed when new code is pushed to production 
daily?
 
The BSIMM seems to be the only place where this type of data exists or could be 
captured - so it would be nice to be able to extract this data from it, or 
include these types of questions in future versions.  The devops survey(*) is 
another potential source, but as yet they don’t capture security-specific 
activities.


* 
http://itrevolution.com/the-science-behind-the-2013-puppet-labs-devops-survey-of-practice/


regards,
Stephen



Re: [SC-L] BSIMM-V Article in Application Development Times

2013-12-17 Thread Stephen de Vries

On 13 Dec 2013, at 22:51, Gary McGraw  wrote:
> 
> From time to time we talk about getting to the dev community here.  This 
> article is at least in the right publication!
> 
> Read it and pass it on: 
> http://adtmag.com/blogs/watersworks/2013/12/bsimm-v-released.aspx

Hi Gary,

In the current BSIMM-V dataset is it possible to narrow the data down to only 
organisations practising Agile dev?  I think it would be interesting to see 
which BSIMM activities are popular with agile houses, and which not.

Ideally, it would be nice to not only differentiate between Agile and 
non-agile, but different degrees of agile based on the length of iterations 
and/or the frequency of deployments.  E.g. less-agile = 3 month iterations and 
multi-month deploys, more-agile = continuous delivery with multiple deploys per 
day.


regards,


Stephen de Vries

http://www.continuumsecurity.net
Twitter: @stephendv





Re: [SC-L] PHP IPS

2010-09-16 Thread Stephen de Vries

You could try the OWASP ESAPI PHP project: 
http://www.owasp.org/index.php/Category:OWASP_Enterprise_Security_API#tab=PHP

Stephen

On Sep 16, 2010, at 5:20 AM, modversion wrote:

> Hi list:
>  There’s a PHP IDS located at www.phpids.org, but it can NOT prevent 
> the attack.
> Does anybody know of any tools, implemented in source code, that can prevent 
> the attack?
>  
> Thank you !




Re: [SC-L] [Esapi-user] Recommending ESAPI?

2010-01-10 Thread Stephen de Vries

On Jan 10, 2010, at 5:38 AM, Kevin W. Wall wrote:
> 
> IMO, I think the ideal situation would be if we could get the Spring and 
> Struts,
> etc. development communities to integrate their frameworks so that they could
> be used with the ESAPI interfaces. (In many of these cases, these
> implementations would replace the ESAPI reference implementation.) However,
> that is obviously going to take some time. I don't think that the ESAPI
> dev team can do it all.

I think this is overestimating ESAPI's place in the pecking order.  Spring and 
J2EE already have well established APIs for important security functions, with a 
_lot_ of developers already invested in these APIs.  A better approach would be 
for ESAPI to adapt its API to suit Spring and the other frameworks.

To touch on one of Dinis' questions, my advice would be for developers to use 
the features from their existing frameworks and only use ESAPI for the gaps.

I confess to not having used ESAPI (just scanned the API), but from what I know 
of other frameworks some of the gaps that ESAPI might plug would be:

- Output encoding in funky places, like JavaScript and CSS (Some apps never 
need this)
- CSRF protection (Sometimes the pageflow/workflow features of a framework will 
already give you CSRF protection, if not, then ESAPI)
- Intrusion detection (if the level of assurance demanded by the application 
requires it)
- Some methods from the HttpUtilities class could be useful (e.g. 
setNoCacheHeaders, setSafeContentType)
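To make that last gap concrete, here is a rough sketch (in Python, purely illustrative - these helpers are not ESAPI's implementation, just the kind of headers such utility methods conventionally set):

```python
def no_cache_headers():
    # The sort of headers a helper like setNoCacheHeaders typically sets,
    # to stop sensitive responses being stored by browsers or proxies.
    return {
        "Cache-Control": "no-store, no-cache, must-revalidate",
        "Pragma": "no-cache",
        "Expires": "0",
    }

def safe_content_type(charset="UTF-8"):
    # An explicit charset stops browsers guessing the encoding,
    # which closes off one class of encoding-based XSS tricks.
    return "text/html; charset=%s" % charset

response_headers = no_cache_headers()
response_headers["Content-Type"] = safe_content_type()
print(response_headers["Content-Type"])  # text/html; charset=UTF-8
```

The point is that these are one-liners a framework rarely sets for you, so they are a cheap win for a utility library to provide.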

For the overlapping functions, I think that existing frameworks already do an 
acceptable job of providing authentication, access control, data validation and 
logging, so unless there's a compelling feature that the application needs from 
ESAPI, I'd advise them to stick with their investment in their existing 
frameworks.
 

Stephen


Re: [SC-L] SANS Institute - CWE/SANS TOP 25 Most Dangerous Programming Errors

2009-01-15 Thread Stephen de Vries

On Jan 15, 2009, at 3:26 AM, Gary McGraw wrote:

> Brian Chess, Sammy Migues and I continue to pound out the software  
> assurance maturity model.  Expect more on that soon.   Working with  
> a large real-world data set has really been amazing.
>
> For those of you just getting wind of this, see:
> http://www.informit.com/articles/article.aspx?p=1271382
> http://www.informit.com/articles/article.aspx?p=1315431

Interesting articles, and they really whet the appetite for more of  
your maturity model.  Can we expect a public/open release?

Stephen



>
>
>
> On 1/14/09 5:18 PM, "Stephen de Vries"   
> wrote:
>
>
>
> On Jan 14, 2009, at 8:45 PM, Steven M. Christey wrote:
>>
>> To all, I'll ask a more strategic question - assuming we're agreed
>> that
>> the Top 25 is a non-optimal means to an end, what can the software
>> security community do better to raise awareness and see real-world
>> change?
>
> From a Web Security point of view, have a look at the OWASP ASVS
> project: 
> http://www.owasp.org/index.php/Category:OWASP_Application_Security_Verification_Standard_Project
>
> Abstract:
> "Whereas the OWASP Top Ten is a tool that provides web application
> security awareness, the OWASP Application Security Verification
> Standard (ASVS) is a commercially-workable open standard that defines
> ranges in coverage and levels of rigor that can be used to perform
> application security verifications
> ...
> The primary aim of the OWASP ASVS Project is to normalize the range in
> the coverage and level of rigor available in the market when it comes
> to performing application security verification using a commercially-
> workable open standard. This standard can be used to establish a level
> of confidence in the security of web applications."
>
>
> regards,
> Stephen
>
>



Re: [SC-L] SANS Institute - CWE/SANS TOP 25 Most Dangerous Programming Errors

2009-01-14 Thread Stephen de Vries

On Jan 14, 2009, at 8:45 PM, Steven M. Christey wrote:
>
> To all, I'll ask a more strategic question - assuming we're agreed  
> that
> the Top 25 is a non-optimal means to an end, what can the software
> security community do better to raise awareness and see real-world  
> change?

 From a Web Security point of view, have a look at the OWASP ASVS  
project: 
http://www.owasp.org/index.php/Category:OWASP_Application_Security_Verification_Standard_Project

Abstract:
"Whereas the OWASP Top Ten is a tool that provides web application  
security awareness, the OWASP Application Security Verification  
Standard (ASVS) is a commercially-workable open standard that defines  
ranges in coverage and levels of rigor that can be used to perform  
application security verifications
...
The primary aim of the OWASP ASVS Project is to normalize the range in  
the coverage and level of rigor available in the market when it comes  
to performing application security verification using a commercially- 
workable open standard. This standard can be used to establish a level  
of confidence in the security of web applications."


regards,
Stephen



Re: [SC-L] What's the next tech problem to be solved in software security?

2007-06-08 Thread Stephen de Vries

On 8 Jun 2007, at 02:23, Steven M. Christey wrote:
>
> More modern languages advertise security but aren't necessarily
> catch-alls.

At the same time, the improvements in security made by managed code  
(e.g. the JRE and .NET runtimes), for example, should not be  
understated.  The fact that apps written in these languages are not  
susceptible to buffer overflow issues is a HUGE improvement.  And for  
this particular vulnerability these languages are effectively  
catch-alls (as long as all your code is managed and the runtime  
implementation itself doesn't contain BOs).  The fine-grained access  
control model of the Java runtime (I guess .NET has the same thing?)  
is also a big win.  This is not an add-on framework, but is built  
right into the language.

As Ben and Robert have pointed out, we're likely to see similar  
improvements when developers make more use of frameworks for  
implementing application tiers.  It's a lot harder to introduce XSS  
issues when using modern MVC frameworks (e.g. .NET's, JSF, WebWork)  
than cobbling a view layer together using JSPs and servlets.
It would still be possible for developers to introduce  
vulnerabilities when using these frameworks, but it's a lot more  
difficult.

>   I remember one developer telling me how his application used
> Ruby on Rails, so he was confident he was secure, but it didn't  
> stop his
> app from having an obvious XSS in core functionality.

It's ironic that RoR is well known for it's policy of preferring  
sensible defaults instead of extensive configuration, yet you have to  
explicitly perform HTML encoding of data included in a web page.
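The problem with opt-in escaping is easy to show in miniature. A hedged sketch (Python standard library rather than Ruby, and the template and function are invented for illustration):

```python
import html

def render_comment(comment):
    # Encoding here is explicit and opt-in: remove the html.escape()
    # call and the payload below executes in the victim's browser.
    return "<div class='comment'>%s</div>" % html.escape(comment)

payload = "<script>alert('xss')</script>"
print(render_comment(payload))
# <div class='comment'>&lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;</div>
```

A framework with sensible defaults would make the escaped form the default and force developers to opt *out* for trusted markup, rather than the other way around.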

> PHP is an excellent example, because it's clearly lowered the bar for
> programming and has many features that are outright dangerous,  
> where it's
> understandable how the careless/clueless programmer could have  
> introduced
> the issue.  Web programming in general, come to think of it.

There are also examples of languages/frameworks that get it right,  
such as JBoss Seam where both SQL injection and XSS are difficult to  
introduce by default.
It's easier to build secure applications when the building blocks  
themselves provide security by default.  Developers will adopt  
frameworks because they make programming easier - if these frameworks  
also prevent common security vulnerabilities then that's a big win  
for more secure applications.  Where security pro's can help out is  
in pointing out poor security defaults in frameworks and getting the  
owners to change them.  Change once, benefit everywhere.

regards,
Stephen "the glass is half full" de Vries








Re: [SC-L] Compilers

2006-12-21 Thread Stephen de Vries

On 21 Dec 2006, at 23:19, ljknews wrote:
>
> Isn't the whole basis of Spark a matter of adding proof statements in
> the comments ?

You can achieve very similar goals by using unit tests.  Although the  
tests are not integrated into the code as tightly as something like  
Spark (or enforcing rules in the compiler), they are considered part  
of the source.  IMO, unit and integration testing are vastly  
underutilised for performing security tests, which is a shame because  
all the infrastructure, tools and skills are there - developers (and  
security testers) just need to start implementing security tests in  
addition to the functional tests.
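As a minimal sketch of what a security test looks like next to functional ones (Python's unittest here for brevity; the view function and test names are invented for illustration):

```python
import html
import unittest

def search_page(query):
    # Hypothetical view function under test: reflects the query back into HTML.
    return "<p>Results for: %s</p>" % html.escape(query)

class SecurityTests(unittest.TestCase):
    """Security checks expressed as ordinary unit tests, living
    alongside the functional suite."""

    def test_reflected_input_is_encoded(self):
        page = search_page("<script>alert(1)</script>")
        self.assertNotIn("<script>", page)

    def test_harmless_input_survives_encoding(self):
        self.assertIn("bicycles", search_page("bicycles"))
```

Run with `python -m unittest`. The win is that these assertions run on every build, so an encoding regression fails CI the same way a functional regression would.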

[shameless plug] I wrote a paper about this for OWASP a few months back:
http://www.corsaire.com/white-papers/060531-security-testing-web-applications-through-automated-software-tests.pdf



-- 
Stephen de Vries
Corsaire Ltd
E-mail: [EMAIL PROTECTED]
Tel:+44 1483 226014
Fax:+44 1483 226068
Web:http://www.corsaire.com






--
CONFIDENTIALITY:  This e-mail and any files transmitted with it are
confidential and intended solely for the use of the recipient(s) only.
Any review, retransmission, dissemination or other use of, or taking
any action in reliance upon this information by persons or entities
other than the intended recipient(s) is prohibited.  If you have
received this e-mail in error please notify the sender immediately
and destroy the material whether stored on a computer or otherwise.
--
DISCLAIMER:  Any views or opinions presented within this e-mail are
solely those of the author and do not necessarily represent those
of Corsaire Limited, unless otherwise specifically stated.
--
Corsaire Limited, 3 Tannery House, Tannery Lane, Send, Surrey, GU23 7EF
Telephone: +44(0)1483-226000  Email:[EMAIL PROTECTED]



Re: [SC-L] Google code search games

2006-10-05 Thread Stephen de Vries

Also:

XSS in Java apps
http://www.google.com/codesearch?hl=en&lr=&q=%3C%25%3D.*getParameter&btnG=Search

(Obvious) SQL Injection in Java apps:
http://www.google.com/codesearch?hl=en&lr=&q=executeQuery.*getParameter&btnG=Search

XSS in code from O'Reilly and Sun:
http://www.google.com/codesearch?hl=en&lr=&q=%3C%25%3D.*getParameter+package%3A%28oreilly.com%7Csun.com%29&btnG=Search
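The same patterns work as local greps against your own source tree. A sketch (the scanner and its output format are invented; like the code-search queries themselves, these are heuristics with plenty of false positives and negatives):

```python
import re

# Rough local equivalents of the code-search queries above.
XSS_PATTERN = re.compile(r"<%=.*getParameter")            # request data echoed raw from a JSP
SQLI_PATTERN = re.compile(r"executeQuery.*getParameter")  # request data concatenated into a query

def scan(source):
    findings = []
    for lineno, line in enumerate(source.splitlines(), 1):
        if XSS_PATTERN.search(line):
            findings.append((lineno, "possible XSS"))
        if SQLI_PATTERN.search(line):
            findings.append((lineno, "possible SQL injection"))
    return findings

jsp = '<p>Hello <%= request.getParameter("name") %></p>'
print(scan(jsp))  # [(1, 'possible XSS')]
```

Useful as a triage pass before a manual review, nothing more.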


El 6 Oct 2006, a las 07:45, Gadi Evron escribió:

> Another guy just wrote some more fun keywords to search for:
> http://blogs.securiteam.com/index.php/archives/661
>
> On Thu, 5 Oct 2006, Gadi Evron wrote:
>
>> playing with Google Code Search, as Lev Toger just wrote:
>>
>> Google released a code search engine to catch up with Krugle,  
>> Koders, and
>> Codease.
>>
>> Like most of the other Google's tools it can be easily abused for  
>> hacking
>> :)
>>
>> To find undisclosed vulnerabilities pass over this code:
>>
>> http://www.google.com/codesearch?q=ugly%7Chack%7Cfixme
>>
>> Or some other interesting combination (Use your favorite ugly code
>> comment).
>> -
>>
>> http://blogs.securiteam.com/index.php/archives/659
>>
>> SO... ugly? dirty hack?
>>
>>  Gadi.
>>
>>
>

-- 
Stephen de Vries
Corsaire Ltd
E-mail: [EMAIL PROTECTED]
Tel:+44 1483 226014
Fax:+44 1483 226068
Web:http://www.corsaire.com







Re: [SC-L] "Bumper sticker" definition of secure software

2006-07-16 Thread Stephen de Vries

Not even Chuck Norris can break Secure Software.

;)

-- Stephen de Vries
Corsaire Ltd
E-mail: [EMAIL PROTECTED]
Tel:+44 1483 226014
Fax:+44 1483 226068
Web:http://www.corsaire.com


On 16 Jul 2006, at 02:27, Goertzel Karen wrote:

> I've been struggling for a while to synthesise a definition of  
> secure software that is short and sweet, yet accurate and  
> comprehensive. Here's what I've come up with:
>
> Secure software is software that remains dependable despite efforts  
> to compromise its dependability.
>
> Agree? Disagree?
>
> --
> Karen Mercedes Goertzel, CISSP
> Booz Allen Hamilton
> 703-902-6981
> [EMAIL PROTECTED]
>







[SC-L] OWASP Java Project: Call for volunteers

2006-07-01 Thread Stephen de Vries


The OWASP Java Project needs your help!

The project's goal is to enable Java and J2EE developers to build  
secure applications efficiently. To this end we plan on producing  
materials that show J2EE architects, developers, and deployers how to  
deal with most common application security problems.  The material  
will be produced in wiki form at the OWASP Java Project wiki:

http://www.owasp.org/index.php/Category:OWASP_Java_Project

Joining the project is easy
- Have a look at the Roadmap:
  http://www.owasp.org/index.php/OWASP_Java_Project_Roadmap
- Read the tutorial on submitting to OWASP:
  http://www.owasp.org/index.php/Tutorial
- And join the mailing list:
  http://lists.owasp.org/mailman/listinfo/java-project

Regards,

The OWASP Java Project leads
Rohyt Belani and Stephen de Vries





[SC-L] Reusable Security for Segmented Data Domains

2006-06-07 Thread Stephen de Vries


Article which may be of interest to the J2EE crowd:
http://www.growingbusinesssolutions.com/Reusable-Security-for-Segmented-Data-Domains.pdf


"According to John C. Dale, MS MIS, president of Growing Business  
Solutions, for firms providing software development outsourcing  
services, the practice of software reuse can reduce overhead and  
increase margins.
Currently, an alarming number of enterprise software development  
projects are over budget, delivered late, or both. As software  
development organizations mature, so too should their ability to  
deliver increasingly complex software solutions on time and on  
budget. One strategy for achieving this is to identify opportunities  
for software reuse. In traditional manufacturing nomenclature, this  
process would be expressed as "manufacturing efficiency" or "economy  
of scale."


In this article, Dale discusses one way in which open source J2EE  
Security Realms can be used to facilitate code reuse - and thus  
manufacturing efficiency - into the enterprise software manufacturing  
process.


Subsequently, enterprise software development firms who employ this  
methodology should expect to deliver software with greater efficiency  
and predictability at a lower cost."




--
Stephen de Vries
Corsaire Ltd
E-mail: [EMAIL PROTECTED]
Tel:+44 1483 226014
Fax:+44 1483 226068
Web:http://www.corsaire.com





[SC-L] Re: [WEB SECURITY] On sandboxes, and why you should care

2006-05-27 Thread Stephen de Vries
 See my example above.


XSS (payload deployed to the admin section),
XSS (since being a client-side exploit) is one where the Sandbox  
approach would be harder to implement (unless the affected user is  
also using a Sandboxed browser where some types of exploits could  
be prevented).


To prevent XSS via a Sandbox, one approach would be to use the  
Sandbox model to clearly define the 'input chokepoints' and force  
(via CAS Demands) that data validation was performed on those  
requests. This way, the developers would have no option but to  
validate their data. Another option would be to encode all inputs  
and outputs from the untrusted sandboxes (i.e. only the 'trusted'  
sandboxes would have the ability to manipulate Html directly.


Again, this makes the sandboxes central to the application design.   
And for applications where security is a primary driver this is  
appropriate.  But this is not the case for the vast majority of apps.


Of course that somewhere, in one of those Sandboxes, there will be  
code that will be able to access the database directly. But if we  
are able to limit the amount of code that needs these privileges  
(Sandboxes B and C in the example above), then the amount of code  
that needs to be audited (and for example certified by a third  
party security-audit-company) will be smaller and manageable.


Good point, and definitely a benefit of using sandboxes.

To summarise, sandboxing an app is useful in preventing specific  
attacks such as executing OS commands, making unauthorized  
connections and accessing arbitrary system resources but it will  
not do anything to prevent the vast majority of serious security  
issues affecting web apps, because the valuable stuff is inside  
the sandbox.
After my explanations in this email do you still think that this is  
correct? Or can you accept now that it is possible to build a  
Sandboxed environment that is able to protect against the majority  
of the serious security issues that affect web apps today?


I still don't see sandboxes addressing all the issues as explained  
above. Another important disadvantage is the cost and impact of  
implementing sandboxes in the first place.  Creating multiple layered  
sandboxes in the code is much more of an obstacle to their  
implementation than simply defining constraints at runtime through a  
configuration change, because it would make security _the central_  
design constraint of the application (it may also break OO  
patterns).  And while this is fine for some high risk apps, this is  
not the case for the majority of organisations who have other  
functional concerns as the reasons they built the app.
Consider the JVM, which provides a full sandbox model that's reasonably  
easy to implement for almost any Java app, and then consider the 1%  
(using your metrics) of Java applications that enable this  
sandboxing.  If a simple configuration change is too much for  
projects to manage, how much less so an entire new sandbox  
development framework!
Saying that, I don't want to cast too much negativity on the idea -  
it's a good idea, but for niche markets.




If you do accept that it is possible to build such sandboxes, then  
we need to move to the next interesting discussion, which is the 'HOW'


The 'How' would also give us an idea of how difficult it would be to  
implement these sandboxes and shed some light on exactly which  
security issues they would prevent and which they would not.


regards,

--
Stephen de Vries
Corsaire Ltd
E-mail: [EMAIL PROTECTED]
Tel:+44 1483 226014
Fax:+44 1483 226068
Web:http://www.corsaire.com





Re: [SC-L] By default, the Verifier is disabled on .Net and Java

2006-05-13 Thread Stephen de Vries


On 12 May 2006, at 14:58, Dinis Cruz wrote:


Michael Silk wrote:
You can't disable the security manager even with the verifier off.  
But
you could extend some final or private class that the security  
manager

gives access to.
This is not correct. With the verifier disabled there are multiple  
ways you can jump out of the Security Manager Sandbox.


Here is a quote from the 1997 book Java Security (Gary McGraw and
Edward W. Felten), page 75, Chapter Three, "Serious Holes in the
Security Model":


I'm a bit sceptical of this. I know Sun's track record on fixing JVM
vulnerabilities hasn't always been great, but 9 years seems a bit
excessive!  Unfortunately the book doesn't provide any more details
on the vulnerabilities, so we're left guessing whether they still
affect modern JVMs.  Even with verification turned off via the
-noverify option, I think it would be difficult to break out of a
defined security manager.




"... The Type Confusion Tool Kit: The Princeton team, as a
feasibility demonstration, created a tool kit that allows any type
confusion attack to be turned into a disarming of Java's security.
In other words, the tool kit serves as a way of turning a small
security breach into a complete system penetration. The type
confusion tool kit has not been released to the public, and is
considered too dangerous to describe in any detail here..."


A variation of this quote can also be found at the bottom of this page:
Section 7 -- You're Not My Type


Another quote from Section 7 -- You're Not My Type
"...As mentioned in Chapter 2, every aspect of Java security  
depends critically on the type-safety of the language. This means  
that if Java is going to be secure, it has to make sure that all  
pointers are properly tagged; that is, the tag must match the  
actual type of object that is being pointed to.


In a type-confusion attack, a malicious applet creates two pointers
to the same object, with incompatible type tags. When this happens,
the Java system is in trouble. The applet can write into that
memory address through one pointer, and read it through another
pointer. The result is that the applet can bypass the typing rules
of Java, completely undermining its security."


The example that we have been playing around with here (direct
access to a private member) is probably not the best one to use to
test the verifier, since there are multiple ways that this type of
illegal access can be 'accidentally' detected by the VM (in Java
there are some cases where the class loading process detects it,
and in .Net the JIT will catch it).


I think that it will be better to use the examples shown in the  
brilliant LSD paper http://lsd-pl.net/papers.html#java


The paper mentions avenues of attack through vulnerabilities in  
Netscape 4.x's JVM and IE (Microsoft's JVM).  These are
vulnerabilities in specific implementations of the JVM rather than  
inherent flaws in the JVM spec.  Any type confusion attacks that are  
possible because of the lack of default verification (via -verify) in  
the JRE would affect the security of the users' own local code so  
it's unlikely that this will prove to be a practical attack vector,  
IMHO.



or a variation of the ones I discovered in .Net:

Possible Type Confusion issue in .Net 1.1 (only works in FullTrust)  
(http://owasp.net/blogs/dinis_cruz/archive/2005/11/08/36.aspx)
Another Full Trust CLR Verification issue: Exploiting Passing
Reference Types by Reference
(http://owasp.net/blogs/dinis_cruz/archive/2005/12/28/393.aspx)
Another Full Trust CLR Verification issue: Changing Private Field
using Proxy Struct
(http://owasp.net/blogs/dinis_cruz/archive/2005/12/28/394.aspx)
Another Full Trust CLR Verification issue: changing the Method
Parameters order
(http://owasp.net/blogs/dinis_cruz/archive/2005/12/26/390.aspx)
In fact, it would be great to have a 'verifier checker' tool: a set
of scripts that would test for verifier issues on Java execution
environments (this would make it very easy to detect who is using
the verifier and what type of verification is performed).


After this explanation, Stephen, do you still disagree with my  
original comments:


"This is a very weird decision by the Java Architects, since what
is the point of creating and enforcing an airtight security policy
if you can jump straight out of it via a Type Confusion attack?"


This is speculation.  We don't know if it's possible to break the
security manager through a type confusion attack - the one reference
we have is 9 years old and doesn't say much, and the others target
specific implementation flaws in older JVMs.  Java verification and
security has many layers (as we've seen in trying to pinpoint exactly
when verification happens!), so I don't think it's accurate to equate
a lack of local code verification with a complete breakdown of the
security manager - unless someone demonstrates otherwise.


regards,
Stephen



Re: [SC-L] By default, the Verifier is disabled on .Net and Java

2006-05-13 Thread Stephen de Vries


On 12 May 2006, at 09:10, Charles Miller wrote:


It's not reflection: you're confusing IllegalAccessException and  
IllegalAccessError.


For any non-Java nerd still listening in: there are two fundamental  
types of "Throwable" exception-conditions in Java: Exceptions and  
Errors[1]. Exceptions represent application-level conditions --  
things an application is likely to be able to recover from, like  
network timeouts, trying to read beyond the end of a file, and so  
on. Errors, on the other hand, represent VM-level problems that an  
application can't really do anything about, like running out of  
memory, not finding a required native library, or encountering  
corrupted class files.


IllegalAccessException happens when reflective code attempts to  
access some field or method it's not supposed to. Because it's a  
result of reflection, it's considered an application-level problem  
and it's assumed your code can recover gracefully.


Amusingly enough, you can get around most IllegalAccessExceptions
in Java just by calling {field|method}.setAccessible(true). So long
as there's no explicit SecurityManager installed, as soon as you've
done that you're free to modify the field or call the method to your
heart's content[2].
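A minimal sketch of the reflective bypass described above (the Vault class and its field are invented for illustration):

```java
import java.lang.reflect.Field;

class Vault {
    private String secret = "hunter2"; // not visible outside Vault
}

public class AccessDemo {
    public static void main(String[] args) throws Exception {
        Vault v = new Vault();
        Field f = Vault.class.getDeclaredField("secret");
        // Calling f.get(v) at this point would throw IllegalAccessException.
        f.setAccessible(true); // succeeds when no SecurityManager is installed
        System.out.println(f.get(v)); // the private field is now readable
    }
}
```

Running the same code under a SecurityManager without ReflectPermission("suppressAccessChecks") makes the setAccessible call itself fail, which is the control being alluded to here.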


IllegalAccess_Error_, on the other hand, happens when some non- 
reflective code issues a bytecode instruction that attempts to  
access a field or method it shouldn't be able to see. If you look  
at its class hierarchy, the meaning of the class is pretty clear:  
IllegalAccessError is a subclass of IncompatibleClassChangeError,  
which is a subclass of LinkageError. Because this is a problem at  
the bytecode/classloading level, and literally something that could  
happen on _any_ method-call or field-access, it's flagged as an error.
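The hierarchy described above can be confirmed by walking the superclass chain (a quick sketch):

```java
public class HierarchyDemo {
    public static void main(String[] args) {
        // IllegalAccessError -> IncompatibleClassChangeError -> LinkageError
        //                    -> Error -> Throwable
        Class<?> c = IllegalAccessError.class;
        while (c != Object.class) {
            System.out.println(c.getSimpleName());
            c = c.getSuperclass();
        }
    }
}
```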


The Error generally occurs when class A has been compiled against a
version of class B in which a method is public, but that method is
private in the version of the same class it encounters at runtime.
This sort of thing happens quite often in Java: you're frequently
stuck in "jar file hell", in a twisty turny maze of library
interdependencies, all with slightly different version numbers.


More about the circumstances of IllegalAccessError here:

   http://java.sun.com/docs/books/vmspec/2nd-edition/html/ConstantPool.doc.html


Dynamic classloading isn't really at fault here. There are all  
sorts of pits you can fall into when you start rolling your own  
classloader (the Java webapp I develop supports dynamic runtime- 
deployable plugins, and the classloading issues are a HUGE  
headache), but IllegalAccessError isn't one of them.


Charles

   [1] Exceptions are further divided into checked exceptions and  
runtime exceptions, but that's beyond the scope of this email
   [2] See also: http://www.javaspecialists.co.za/archive/ 
Issue014.html


Thanks for clearing this up Charles.
I've created another example that uses a class loader to load the  
classes, and this time, it throws an IllegalAccessError just like  
Tomcat does:


Loading class: /Users/stephen/data/dev/classloader/myclass/somepackage/MyTest.class
Loading class: /Users/stephen/data/dev/classloader/myclass/java/lang/Runnable.class
Loading class: /Users/stephen/data/dev/classloader/myclass/java/lang/Object.class
Loading class: /Users/stephen/data/dev/classloader/myclass/somepackage/MyData.class
Loading class: /Users/stephen/data/dev/classloader/myclass/java/lang/System.class
Exception in thread "main" java.lang.IllegalAccessError: tried to access method somepackage.MyData.getName()Ljava/lang/String; from class somepackage.MyTest

at somepackage.MyTest.run(MyTest.java:15)
at classloader.Main.main(Main.java:26)
Java Result: 1

This error is thrown irrespective of the -verify flag.  So it looks  
like using a classloader causes the VM to perform verification,  
whether or not the "verifier" was enabled.  Michael Silk made a  
similar statement earlier in this thread.  Would you agree?


PoC code below:

package classloader;

public class Main {

    public Main() {
    }

    public static void main(String[] args) {
        // Illegal Access Error
        try {
            CustomLoader cl = new CustomLoader(System.getProperty("user.dir") + "/myclass/");

            Class myClass = cl.loadClass("somepackage.MyTest");
            Runnable r = (Runnable) myClass.newInstance();
            r.run();

        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}


package classloader;

import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

public class CustomLoader extends ClassLoader {
    private String path = null;

    public CustomLoader(String path) {
        this.path = path;
    }

    private byte[] getBytes(String filename) throws IOException {
        File file = new File(filename);
        long len = file.length();
        byte raw[] = new byte[(int) len];
        FileInputStream fi

Re: [SC-L] By default, the Verifier is disabled on .Net and Java

2006-05-11 Thread Stephen de Vries
Michael Silk wrote:
> On 5/9/06, Dinis Cruz <[EMAIL PROTECTED]> wrote:



> 
>> Is there a example out there where (by default) java code is executed in
>> an environment with :
>>
>> * the security manager enabled (with a strong security policy) and
>> * the verifier disabled
> 
> Yes. Your local JRE.

...but only in the exceptional case where a local Java application was
started with a security manager activated, but without the -verify flag
enabled.
Most local Java applications are started without the verifier enabled
and without a security manager.

For untrusted applets and webstart apps, both the verifier and a
security manager are enabled.



-- 
Stephen de Vries
Corsaire Ltd
E-mail: [EMAIL PROTECTED]
Tel:+44 1483 226014
Fax:+44 1483 226068
Web:http://www.corsaire.com



Re: [SC-L] By default, the Verifier is disabled on .Net and Java

2006-05-05 Thread Stephen de Vries
David Eisner wrote:


> 
> What determines when access to a private member is illegal?  Is it, in
> fact, the bytecode verifier? 

Yes, it's done by the fourth pass of the verifier as described here:
http://java.sun.com/sfaq/verifier.html#HEADING13

Interestingly, Sun have posted a contest to try and crack the new
verifier in Mustang:  https://jdk.dev.java.net/CTV/learn.html


-- 
Stephen de Vries
Corsaire Ltd
E-mail: [EMAIL PROTECTED]
Tel:+44 1483 226014
Fax:+44 1483 226068
Web:http://www.corsaire.com


RE: [SC-L] By default, the Verifier is disabled on .Net and Java

2006-05-05 Thread Stephen de Vries

Jim Halfpenny on the Webappsec list has discovered that BEA's JRockit
JDK _does_ use verification by default; his complete post is quoted
below (the test was to access private methods on a class):


Hi,
BEA JRockit verifies by default and as far as I am aware does not offer a
-noverify option.

$ java -cp . verifytest2.Main
java.lang.IllegalAccessError: getName
at verifytest2/Main.()V(Main.java:???)
at verifytest2/Main.main([Ljava/lang/String;)V(Main.java:12)

Tested with JRockit 1.4.2_08.

Regards,
Jim Halfpenny




Re: [SC-L] By default, the Verifier is disabled on .Net and Java

2006-05-03 Thread Stephen de Vries


On 3 May 2006, at 06:48, Dinis Cruz wrote:

Here is a more detailed explanation of why (in my previous post) I
said: "99% of .Net and Java code that is currently deployed is
executed in an environment where the VM verifier is disabled ..."


--

In .Net the verifier (the CLR function that checks for type safety)  
is only enabled on partial trust .Net environments.


Java has implemented this a bit differently, in that the byte code
verifier and the security manager are independent.  So you could,
for example, run an application with an airtight security policy
(equivalent to partial trust), but it could still be vulnerable to
type confusion attacks if the verifier was not explicitly enabled.
To have both enabled you'd need to run with:

java -verify -Djava.security.policy ...

regards,

--
Stephen de Vries
Corsaire Ltd
E-mail: [EMAIL PROTECTED]
Tel:+44 1483 226014
Fax:+44 1483 226068
Web:http://www.corsaire.com







Re: [OWASP-LEADERS] Re: [Owasp-dotnet] RE: [SC-L] 4 Questions: Latest IE vulnerability, Firefox vs IE security, User vs Admin risk profile, and browsers coded in 100% Managed Verifiable code

2006-03-29 Thread Stephen de Vries


Hi Dinis,

On 29 Mar 2006, at 05:52, Dinis Cruz wrote:



Thanks for confirming this (I wonder how many other Java
developers are aware of this (especially the ones not focused on
security)).



Most I've worked with aren't really aware of the security manager,  
never mind bytecode verification.
It is an issue, but the security risk in the real world may be a bit  
overstated.  If I were a maliciously minded attacker that wanted  
users to execute my evil Java program, I wouldn't need to mess about  
with the lack of verification, I could just write evil code in  
perfectly verifiable format and rely on users to execute it.
Can anyone come up with attack vectors that exploit lack of  
verification on downloaded code that couldn't be exploited by other  
(easier) means?




Stephen, do you have any idea of what is the current percentage of  
'real

world' Java applications are executed:

a) with verification

b) on a secure sandbox



Very few.  As Jeff mentioned some Java Application servers ship with  
a security policy enabled, but the policy doesn't restrict anything  
(e.g. JBoss), others show you how to run with a sec policy, but
don't apply it by default (e.g. Tomcat).  In some cases, with the  
more complex app servers a sec policy would be of little security  
benefit because the server needs so much access in order to function  
properly that the policy could be considered completely open.


In some ways I think we're applying double standards here.  Just  
because a virtual machine offers the facility for defining a security  
policy and verification doesn't mean that it _has_ to use it.  There  
are  native executable programs that I trust, so why should a program  
that runs in a VM be subject to more stringent security controls just  
because they're available?  IMO whether code needs to be sandboxed  
and controlled by a policy should be decided on a case by case basis  
rather than a blanket rule.


Note that for example I have seen several Java Based Financial
Applications which are executed on the client which either require  
local

installation (via setup.exe / App.msi) or require that the user grants
that Java application more permissions that the ones allocated to a
normal Sandboxed browser based Java App.


This is quite common for an app, and granting more permissions is  
fine as long as those are tightly controlled by the java security  
policy.






Humm, this is indeed interesting. Ironically, the 1.1 and 2.0 versions
of the CLR will throw an exception in this case (even in Full Trust).
Since verification is not performed on that .Net Assembly, the CLR  
might

pick up this information when it is resolving the method's relative
address into the real physical addresses (i.e. during JIT).


Using the same code with an Applet loaded from the filesystem throws
an IllegalAccessError exception as it should.



What do you mean by 'Applet loaded from the filesystem'?

Where? In a Browser?



If you load an applet in a browser using a url such as:
file:///data/stuff/launch.html then no verification is performed.

But if you access the applet using http/s then it will be verified.

cheers,

--
Stephen de Vries
Corsaire Ltd
E-mail: [EMAIL PROTECTED]
Tel:+44 1483 226014
Fax:+44 1483 226068
Web:http://www.corsaire.com







Re: [OWASP-LEADERS] Re: [Owasp-dotnet] RE: [SC-L] 4 Questions: Latest IE vulnerability, Firefox vs IE security, User vs Admin risk profile, and browsers coded in 100% Managed Verifiable code

2006-03-27 Thread Stephen de Vries


On 27 Mar 2006, at 11:02, Jeff Williams wrote:



I am not a Java expert, but I think that the Java Verifier is NOT
used on Apps that are executed with the Security Manager disabled
(which I believe is the default setting) or are loaded from a local
disk (see "... applets loaded via the file system are not passed
through the byte code verifier" in http://java.sun.com/sfaq/)

I believe that as of Java 1.2, all Java code except the core
libraries must go through the verifier, unless it is specifically
disabled (java -noverify).


I had the same intuition about the verifier, but have just tested
this and it is not the case.  It seems that -noverify is the
default setting! If you want to verify classes loaded from the local
filesystem, then you need to explicitly add -verify to the command
line.  I tested this by compiling 2 classes where one accesses a
public member of the other, then recompiling the other class with
the member's access changed to private.  Tested on:

Jdk 1.4.2 Mac OS X
Jdk 1.5.0 Mac OS X
Jdk 1.5.0 Win XP

all behave the same.

[~/data/dev/applettest/src]java -cp . FullApp
Noone can access me!!
[~/data/dev/applettest/src]java -cp . -verify FullApp
Exception in thread "main" java.lang.IllegalAccessError: tried to access field MyData.secret from class FullApp
at FullApp.main(FullApp.java:23)


Using the same code with an Applet loaded from the filesystem throws  
an IllegalAccessError exception as it should.



--
Stephen de Vries
Corsaire Ltd
E-mail: [EMAIL PROTECTED]
Tel:+44 1483 226014
Fax:+44 1483 226068
Web:http://www.corsaire.com







[SC-L] A Modular Approach to Data Validation in Web Applications

2006-03-27 Thread Stephen de Vries


A Corsaire White Paper:

A Modular Approach to Data Validation in Web Applications

Outline:

Data that is not validated, or is poorly validated, is the root cause
of a number of serious security vulnerabilities affecting
applications. This paper presents a modular approach to performing
thorough data validation in modern web applications, so that the
benefits of modular, component-based design (extensibility,
portability and re-use) can be realised. It starts with an
explanation of the vulnerabilities introduced through poor validation
and then goes on to discuss the merits and drawbacks of a number of
common data validation strategies, such as:

- Validation in an external Web Application Firewall;
- Validation performed in the web tier (e.g. Struts); and
- Validation performed in the domain model.
Finally, a modular approach is introduced together with practical  
examples of how to implement such a scheme in a web application.
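As a flavour of the modular idea, a validation rule can be packaged as a self-contained component that both the web tier and the domain model reuse (an illustrative sketch only; not code from the paper):

```java
import java.util.regex.Pattern;

// A reusable whitelist validation rule, independent of any web framework,
// so the same component can be called from the web tier and domain objects.
public class UsernameRule {
    private static final Pattern VALID = Pattern.compile("[A-Za-z0-9_]{3,20}");

    public static boolean isValid(String input) {
        // matches() requires the whole input to match, so no anchors needed
        return input != null && VALID.matcher(input).matches();
    }

    public static void main(String[] args) {
        System.out.println(isValid("alice_01")); // valid
        System.out.println(isValid("a b;drop")); // rejected: illegal characters
    }
}
```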


Download:

http://www.corsaire.com/white-papers/060116-a-modular-approach-to-data-validation.pdf






