Here's the way I read it, Tony; take it for what it's worth. The risk factor is the probability of someone actually compromising the target using a given vulnerability. For instance, a vulnerability with a low risk factor would probably not be used by most attackers for one reason or another: it may be too difficult for most attackers, or perhaps too time-consuming. Contrast that with a DoS attack with a high risk factor that any 'script kiddie' with Internet access could perform.
The severity rating is the amount of damage an attacker could cause to a system, the level of privileges gained through a given vulnerability, etc. For example, an SNMP read vulnerability through which an attacker can acquire your hostname is clearly not as severe as a buffer overflow vulnerability through which an attacker could gain root privileges.

-----Original Message-----
From: tony toni [mailto:[EMAIL PROTECTED]]
Sent: Friday, May 31, 2002 6:33 PM
To: [EMAIL PROTECTED]
Subject: Nessus Security Reporting..Inconsistent Reporting?

Hi,

I started using Nessus about a month ago. The security metric that I use is that the IT server staff must review and correct Nessus findings that are rated "High" in the severity column of the report and/or "Serious" in the risk factor of the Description column. I know this metric may seem kind of simple-minded, but I have 400 servers on multiple platforms, and the CIO wants all findings rated as high or serious to be corrected first, before addressing less risky findings.

Question: Should I be concerned about a finding that has a low severity rating but a high risk factor? Why isn't a finding with a high risk factor also rated with a high severity? I have seen lots of findings like this that have a low severity rating but a high risk factor. It just does not make sense to me. What is the logic behind Nessus doing this?

Tony
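Tony's triage rule amounts to a simple filter over the two independent ratings: escalate anything whose severity OR risk factor is at its top value. A minimal sketch in Python, assuming illustrative field names and rating strings (these are not the actual field names in a Nessus report):

```python
# Hypothetical findings; "severity" and "risk_factor" are independent axes
# (impact vs. likelihood of exploitation), so any combination can occur.
findings = [
    {"id": 1, "severity": "High", "risk_factor": "Medium"},   # big impact, harder to exploit
    {"id": 2, "severity": "Low",  "risk_factor": "Serious"},  # small impact, trivial to exploit
    {"id": 3, "severity": "Low",  "risk_factor": "Low"},
]

def needs_review(finding):
    """Escalate if either dimension is at its top rating."""
    return finding["severity"] == "High" or finding["risk_factor"] == "Serious"

urgent = [f["id"] for f in findings if needs_review(f)]
print(urgent)  # -> [1, 2]
```

Note that finding 2 is caught even though its severity is low, which is exactly the case Tony asks about: likelihood and impact are scored separately, so a trivially exploitable but low-damage issue can still warrant early attention.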
