Re: [SC-L] IBM Acquires Ounce Labs, Inc.

2009-08-04 Thread Arian J. Evans
Chris -- Good point with Larry's paper. NTO Spider is, by design, a
simplified scanner for unskilled users, and I do not think it was
designed to be an effective tool for deep dynamic analysis of a web
application. It is, however, probably the best scanner on the market
for people who don't have the time or skill to configure dynamic
testing tools for their applications!

Larry Suto's paper reinforces this use-case for desktop webapp scanners:

 Each scanner was run in default mode and not tuned in any capacity to
 the application. The importance of this lies in how effective the
 default mode is, so that scalability of scanning is not limited by
 manual intervention and setup procedures, which can be very time
 consuming. Second, in most cases it is simply unrealistic to spend
 much time with many applications.


While I definitely look forward to more objective data on this
subject, I do not think Suto's report is a good example of how
consultants or SaaS providers deliver expert security analysis when
they use dynamic testing automation. (I know this is not how I
ever did things.)

I think anyone who has experience with deep dynamic testing knows they
need automation tools with custom configuration ability, the ability
to record workflow, a framework to create custom tests, etc. I do not
believe NTO Spider offers any of these essential features. I believe
it was explicitly designed for unskilled users' use-cases. (A valid
and important market to be sure.)

Admittedly I could be very wrong here. The NTO guys are sharp folks
and I haven't seen Spider in a while.


 As a group of security practitioners it is amazing to me that we don't have 
 more quantifiable testing and tools/services are just dismissed with 
 anecdotal data.

Completely agreed. I prefer to back up my statements about dynamic
tools with hard, quantifiable data. Lacking that, I tend to rely on
historical experience, provided it is a problem domain I have enough
experience in.

If you recall, I used to do extensive testing and benchmarking of
dynamic testing tools against custom widgets I wrote and against
production enterprise applications, and publish the results at OWASP
and NIST conferences and in Hacking Exposed: Web Applications, 2nd Ed.
Back then the quality of the tools was very volatile: it changed
significantly with every release, and from one application under test
to the next, so by the time you vetted all your data it was almost
obsolete. Hence, again, the importance of customizable, manually
guided tools when dealing with new and bespoke applications.

I tried to tackle static analysis but became overwhelmed by the
challenge of setting up effective labs, and the huge array of static
analysis tools that were available.

Given that I now work on a dynamic testing platform, it would be
completely fair to accuse me of being non-objective when discussing
various vendors' dynamic testing tools -- and I would have to agree
with you. That doesn't make my statements any less valid, but I have
to throw it out there to be fair.


Ultimately you hit the need for objective data spot-on. I would be
lying if I didn't say that I would LOVE to see more head-on
benchmarking between static analysis technology vendors like Veracode,
Fortify, Ounce, Coverity, Klocwork, etc.

The problem I had in the past with benchmarks was the huge degree of
customization in each application I would test. While patterns emerge
that are almost always automatable to some degree, the technologies
almost always require hand care-and-feeding to get them to an
effective place. I think this notion of combining the tools with
qualified users is the true potential power of the SaaS solutions that
are coming to market.

I look forward to seeing the release of more objective analysis by
minds smarter than mine, and am very impressed with how far things
have come since the simple tests I tried to run over the years.

$0.02. Cheers,

-- 
Arian Evans




On Tue, Aug 4, 2009 at 5:54 PM, Chris Wysopal <cwyso...@veracode.com> wrote:

 I wouldn't say that NTO Spider is a "sort of" dynamic web scanner. It is a 
 top tier scanner that can battle head to head on false negative rate with the 
 big conglomerates' scanners: IBM AppScan and HP WebInspect.  Larry Suto 
 published an analysis a year ago, that certainly had some flaws (and was 
 rightly criticized), but genuinely showed all three to be in the same league. 
 I haven't seen a better head-to-head analysis conducted by anyone. A little 
 bird whispered to me that we may see a new analysis by someone soon.

 As a group of security practitioners it is amazing to me that we don't have 
 more quantifiable testing and tools/services are just dismissed with 
 anecdotal data.  I am glad NIST SATE '09 will soon be underway and, at least 
 for static analysis tools, we will have unbiased independent testing. I am 
 hoping for a big improvement over last year.  I especially like the category 
 they are using for some flaws found, as "valid but insignificant". Clearly 
 they are 

Re: [SC-L] IBM Acquires Ounce Labs, Inc.

2009-08-04 Thread Arian J. Evans
Great answer, John. I especially like your point about web.xml.

This goes doubly for black-box testing. There would be a lot of
advantage to being able to get (and compare) these types of config
files today for dialing in BBB (Better Black Box vs. blind black box)
testing. I don't think anyone is doing this optimally now. I know I am
eager to find static analysis that can provide/guide my BBB testing
with more context. I definitely think we will see more of these
combined-services evolve in the future. It only makes sense,
especially given some of the context-sensitive framing considerations
in your response.

Thanks for the solid thoughts,

-- 
Arian Evans





On Wed, Jul 29, 2009 at 5:44 AM, John Steven <jste...@cigital.com> wrote:
 All,

 The question of "Is my answer going to be high-enough resolution to support 
 manual review?" or "...to support a developer fixing the problem?" comes down 
 to "it depends."  And, as we all know, I simply can't resist an "it depends" 
 kind of subtlety.

 Yes, Jim, if you're doing a pure JavaSE application, and you don't care about 
 non-standard compilers (jikes, gcj, etc.), then the source and the binary 
 are largely equivalent (at least in terms of resolution); Larry mentioned 
 gcj. Ease of parsing, however, is a different story (for instance, actual 
 dependencies are far easier to pull out of a binary than out of the source 
 code, whereas stack-local variable names are easiest in source).

 Where you care about a whole web application rather than a pure-Java 
 module, you have to concern yourself with JSP and all the other MVC 
 technologies. Setting aside the topic of XML-based configuration files, 
 you'll want to know which container your JSPs were compiled to target. In 
 this case, source code is different from binary. Similar factors sneak 
 in across the Java platform.

 Then you've got the world of Aspect-Oriented Programming. Spring and a 
 broader class of packages that use AspectJ to weave code into your 
 application will dramatically change the face of your binary. To get the same 
 resolution out of your source code, you must in essence 'apply' those 
 pointcuts yourself... Getting binary-quality resolution from source code 
 therefore means predicting which transforms will occur at which pointcut 
 locations. I highly doubt any source-based approach will get this thoroughly 
 correct.
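John's weaving point can be sketched in plain Java. This is a hypothetical illustration (all names invented): the "as-written" method is what a source reviewer sees, while the "as-woven" method approximates what an AspectJ around-advice would leave in the compiled class.

```java
// Hypothetical sketch: why an AspectJ-woven binary diverges from its source.
public class WeavingSketch {

    // As written in source: no auditing logic visible to a source reviewer.
    static String transferAsWritten(String account) {
        return "transferred to " + account;
    }

    // Roughly what the binary contains after a (hypothetical) auditing
    // aspect with around-advice on transfer* methods has been woven in.
    static String transferAsWoven(String account) {
        System.out.println("AUDIT: entering transfer for " + account); // injected by weaver
        String result = "transferred to " + account;
        System.out.println("AUDIT: exiting transfer");                 // injected by weaver
        return result;
    }

    public static void main(String[] args) {
        System.out.println(transferAsWritten("acct-1"));
        System.out.println(transferAsWoven("acct-1"));
    }
}
```

A source-only analysis sees `transferAsWritten`; a binary analysis of the woven class sees the equivalent of `transferAsWoven`, which is why the two can reach different conclusions about the same application.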

 Finally, from the perspective of dynamic analysis, one must consider the 
 post-compiler transforms that occur. Java involves both JIT and HotSpot 
 (using two HotSpot compilers, client and server, each of which conducts 
 different transforms), which neither binary- nor source-code-based static 
 analysis is likely to correctly predict or account for. The binary image 
 that runs is simply not that which is fed to ClassLoader.defineClass() as a 
 bytestream.

 ...and (actually) finally, one of my favorite code-review techniques is to 
 ask for both a .war/ear/jar file AND the source code. This almost invariably 
 gets a double-take, but it's worth the trouble. How many times do you think 
 the web.xml files match between the two? What exposure might you report if 
 they were identical? ... What might you test for if they're dramatically 
 different?
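The war-vs-source review trick above can be sketched as a self-contained toy (entry names and XML content are illustrative): build a stand-in .war in memory, pull out its WEB-INF/web.xml, and compare it against the copy in the "source tree."

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.ZipEntry;
import java.util.zip.ZipInputStream;
import java.util.zip.ZipOutputStream;

public class WebXmlCompare {

    // Build a tiny stand-in .war (just a zip) so the example is self-contained.
    static byte[] makeDemoWar(String webXml) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ZipOutputStream zos = new ZipOutputStream(bos)) {
            zos.putNextEntry(new ZipEntry("WEB-INF/web.xml"));
            zos.write(webXml.getBytes(StandardCharsets.UTF_8));
            zos.closeEntry();
        }
        return bos.toByteArray();
    }

    // Pull a named entry out of the archive as a string, or null if absent.
    static String extractEntry(byte[] zipBytes, String name) throws IOException {
        try (ZipInputStream zis = new ZipInputStream(new ByteArrayInputStream(zipBytes))) {
            for (ZipEntry e = zis.getNextEntry(); e != null; e = zis.getNextEntry()) {
                if (e.getName().equals(name)) {
                    return new String(zis.readAllBytes(), StandardCharsets.UTF_8);
                }
            }
        }
        return null;
    }

    public static void main(String[] args) throws IOException {
        // What actually shipped in the .war...
        byte[] war = makeDemoWar("<web-app><security-constraint/></web-app>");
        // ...versus the copy sitting in the source tree.
        String sourceXml = "<web-app></web-app>";

        String deployedXml = extractEntry(war, "WEB-INF/web.xml");
        System.out.println(deployedXml.equals(sourceXml)
                ? "web.xml matches source"
                : "web.xml DIFFERS from source -- investigate");
    }
}
```

A mismatch here is exactly the double-take John describes: either the source the reviewer was given is stale, or the deployed artifact contains configuration the review never covered.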

 Ah... Good times,
 
 John Steven
 Senior Director; Advanced Technology Consulting
 Direct: (703) 404-5726 Cell: (703) 727-4034
 Key fingerprint = 4772 F7F3 1019 4668 62AD  94B0 AE7F EEF4 62D5 F908

 Blog: http://www.cigital.com/justiceleague
 Papers: http://www.cigital.com/papers/jsteven

 http://www.cigital.com
 Software Confidence. Achieved.


 On 7/28/09 4:36 PM, ljknews ljkn...@mac.com wrote:

 At 8:39 AM -1000 7/28/09, Jim Manico wrote:

 A quick note, in the Java world (obfuscation aside), the source and the
 binary are really the same thing. The fact that Fortify analyzes
 source and Veracode analyzes class files is a fairly minor detail.

 It seems to me that would only be true for those using a
 Java bytecode engine, not those using a Java compiler that
 creates machine code.

 ___
 Secure Coding mailing list (SC-L) SC-L@securecoding.org
 List information, subscriptions, etc - http://krvw.com/mailman/listinfo/sc-l
 List charter available at - http://www.securecoding.org/list/charter.php
 SC-L is hosted and moderated by KRvW Associates, LLC (http://www.KRvW.com)
 as a free, non-commercial service to the software security community.
 ___

