I wanted to throw this question out to a broad range of security
professionals because I have been struggling with it for quite some
time.  The question is simple, but the answers elude me: how does one
measure the success of a security program?

I find it relatively simple to identify a risk and mitigate it with
technology, but when corporate culture and business 'needs' butt heads
with security requirements, I find myself losing more often than not.
Simple things, from DMZ environments versus punch-throughs to forcing
patches on developers: they are easy to understand and implement, and
cost isn't the obstacle; it's plain and simple 'time is money'.  Yet
'time is money' rarely seems to come into play when we're rebuilding a
box because of NIMDA or some other tragedy du jour.

OK, that's mostly bitching about life.  Where I'm trying to go with
this is: if you develop a sound security program and implement it both
tactically and strategically, how do you really measure its success?
The number of incidents may go down, but even with a solid plan, the
sheer number of new exploits and the fast rate of virus propagation
may push incident numbers up.  Raw incident counts really aren't a
measure of success or failure in my book.  Any suggestions,
recommendations, or general information would be tremendously helpful!
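
To put numbers on what I mean, here's a rough sketch in Python.  The
figures and the per-exploit normalization are entirely made up; it's
just one way of illustrating why raw counts mislead:

# Rough sketch: raw incident counts vs. incidents normalized by
# threat volume.  All numbers here are hypothetical.

years = {
    # year: (incidents handled, new exploits/viruses published)
    2000: (12, 400),
    2001: (18, 1100),   # raw incident count went up...
}

for year, (incidents, exploits) in sorted(years.items()):
    rate = incidents / exploits  # incidents per published exploit
    print(f"{year}: {incidents} incidents, {rate:.4f} per new exploit")

# Output:
# 2000: 12 incidents, 0.0300 per new exploit
# 2001: 18 incidents, 0.0164 per new exploit
#
# Raw counts say we got worse; the normalized rate says the program
# held up against a threat pool nearly three times larger.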

Cheers,

Leds!

-- 
There's nothing wrong with Windows until you install it........
