On Mon, Jun 10, 2002 at 12:27:20PM -0400, David Ford wrote:
> I have looked at it. How many of the nessus plugins actually reference 
> them?  Out of 1002 scripts, 318 reference CVE entries.  

I don't know how you obtained that figure:
[renaud@delusion scripts]$ ls *.nasl|wc -w
    986
[renaud@delusion scripts]$ grep script_cve_id *.nasl|wc -l
    578


Out of 986 scripts, 578 have a CVE cross reference. 

> Everyone is building their own little scanning tool and there isn't much 
> collaboration.  

You can send me CVE cross references. Some people do that, and this kind of
patch gets applied quickly.
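For anyone who wants to contribute such cross references, `grep -L` (list files *without* a match) shows which plugins still lack one. A minimal sketch, using two fabricated plugin files rather than the real scripts/ directory:

```shell
# Demo sandbox: two fake plugins, one with a CVE cross reference, one without.
cd "$(mktemp -d)"
printf 'script_cve_id("CVE-2002-0001");\n' > has_cve.nasl
printf '# no CVE reference here\n'         > missing_cve.nasl

# -L prints the names of files that contain NO match.
grep -L 'script_cve_id' *.nasl
# prints: missing_cve.nasl
```

Run the same `grep -L 'script_cve_id' *.nasl` from the real scripts/ directory to get the list of plugins worth cross-referencing against the CVE database.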

> Lately the false positive SNR and the false negative SNR 
> is petering.  How can you trust results when they differ from scan to 
> scan, when you get a lot of security holes for product ABC when such a 
> product doesn't even exist?  Nessus v.s. Cybercrap^H^Hcop v.s. Retina 
> v.s. etc, each has tests which are very accurate and each has tests 
> which are very inaccurate and they don't parallel.

Nessus tells you when a result is not bound to be accurate. The warning
reads:

*** Warning, Nessus solely relied on the banner to issue this warning


When it does not do that, it's a bug which should be reported. Once you
know Nessus is not 100% confident in a result, it's your job, as a
pen-tester, to verify it.

> As for standards in reporting, why does scanner ABC say something is 
> critically important but scanner DEF says it's trivial?  Some of the 
> tests in nessus report things as really important but then trivialize it 
> in the description and vice versa.  The Queso detection is horribly 
> inaccurate but that is what's used in nessus to determine what OS is 
> running.  

No. QueSO is only used if you don't have nmap installed / enabled. It's a
"better than nothing" fallback.

[....]
> Everyone picks their own priorities on tests and results and everyone is 
> interpreting them with sometimes very significant differences.  It isn't 
> just plugins that need commonality, it's the whole process.
> 
> We the tester suffer the angst of the client because the client argues 
> that your results don't match the previous contractor.  Who are they 
> going to believe when everybody gives them something different?  How can 
> we argue our case when we know our stuff isn't rock solid?


That's why, as a responsible pen-tester, you're bound to manually verify
everything. If you just enter an IP, click, print the result and ask for
cash, then something is wrong.


                                -- Renaud
