> Reading comprehension fail.  Tomas's point is that yes, often there *is* an
> engineering solution.  But if you invest $250K in an engineering solution
> for a
> problem that only risks $100K loss, you're being stupid.  At that point,
> just
> making a note that you have a potential $100K liability and getting on with
> your life *is* the proper way to manage that risk.
>
> (Of course, if the engineering solution only costs $10K, then yes it
> should be
> pursued.  But only when it costs less than just ignoring the risk).
>
>
This is still an oversimplification, though. The missing factor is how
likely that risk actually is to materialize. A $10K solution to a
$200K problem that will "probably never happen" is still often seen as
money being thrown away.

Even if a serious risk analysis and quantification actually takes place (a 1
in 10,000 chance per year over a 20-year service lifetime, blah blah blah),
it may still be seen as not worth fixing. Never mind the fact that the risks
themselves evolve: a 1 in 10,000 security event from before
everything was connected to the internet is now closer to a 1 in 100 or
even 1 in 10.
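To make that concrete, here's a minimal expected-value sketch using the hypothetical numbers from this thread (a $200K loss, a $10K fix, a 20-year lifetime); the model (independent years, at most one total-loss event) is my own simplifying assumption, not anything rigorous:

```python
def expected_loss(annual_probability: float, loss: float, years: int) -> float:
    """Expected loss over the service lifetime, assuming independent
    years and at most one (total) loss event."""
    p_at_least_one = 1 - (1 - annual_probability) ** years
    return p_at_least_one * loss

LOSS = 200_000
FIX_COST = 10_000
LIFETIME = 20

# Pre-internet estimate: 1 in 10,000 per year.
# Expected loss is roughly $400 -- the $10K fix looks like money thrown away.
print(expected_loss(1 / 10_000, LOSS, LIFETIME))

# Post-internet estimate: 1 in 100 per year.
# Expected loss is roughly $36K -- the very same $10K fix is now a bargain.
print(expected_loss(1 / 100, LOSS, LIFETIME))
```

The point being: nothing about the system changed, only the threat environment, and the "rational" answer flips.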

IMHO, some of the most effective cyber-security regulation efforts
basically fix this through little more than amplifying the cost of failure
to where it can't be ignored - case in point, HIPAA and PCI DSS. Both are
designed to be potentially open-ended money pits for companies that get
breached - hopefully restoring the fear of risk to where it needs to be,
while also spelling out a set of good practices that, if followed, might
let companies off the hook for the sort of failures they can't reasonably
defend against.

These aren't perfect rules, but they push in a good general direction as
far as making companies take risk seriously.
_______________________________________________
Fun and Misc security discussion for OT posts.
https://linuxbox.org/cgi-bin/mailman/listinfo/funsec
Note: funsec is a public and open mailing list.
