> On Jul 7, 2016, at 4:20 PM, Clayton Daley <clayton.da...@gmail.com> wrote:
> 
> I don't mean to jump down your throat here; the tone is definitely harsher 
> than I would like, but I want it to be very clear why I have such strong 
> feelings about upgrading security-critical dependencies.
> 
> I don't take it personally.

Whew :).

> I do a little coding (hello startup), but I'm actually the guy who:
> - Had to develop our Policies and Procedures (P&P) in conjunction with our 
> Compliance consultant
> - Has to work with our lawyers to negotiate Information Security and Business 
> Associate agreements with customers
> - Has to provide implementation details at the request of our customers' 
> security groups (I spent all day working through a 160-item self-assessment, 
> so it was top-of-mind for me)
> When there's a vulnerability, you can fast-track an upgrade because there's a 
> non-theoretical risk to doing nothing.  The problem is an "optional" version 
> bump.  It's all CYA.  If I don't follow my P&P, the federal government, state 
> government, and customers all have (extra) grounds to sue my company 
> (cofounder, so literally *mine*).  If the consequences of waiting are a 
> transient Twisted bug or a delayed feature that depends on something in a 
> blocked version, it's an easy choice.

The problem with this perspective is that it inappropriately assigns risk by 
default to upgrading but no risk by default to not-upgrading.  For example, it 
is well known that various adversaries stockpile 0-day vulnerabilities in 
popular libraries.  Of course new releases don't ever empty this stockpile, but 
they quite often reduce it.

Fixes for this type of secret vulnerability are often not identified as "high 
severity", because severity classifications are frequently incorrect, almost 
always in the direction of being rated lower than they ought to be.  See, for 
example, this paper: 
https://www.usenix.org/legacy/event/hotos09/tech/full_papers/arnold/arnold_html/
It shows that (over the range of data collected, 2006-2008) there was _always_ 
a non-zero number of misclassified bugs affecting the security of "stable" 
kernel versions.  The same is almost certainly true of OpenSSL.  Not to 
mention that being stuck on an old OpenSSL means being stuck without 
fundamental improvements such as TLS 1.3.

In other words, it may be possible to show that there is absolutely always a 
vulnerability being fixed by new versions, even when you are pretty sure there 
isn't.

I understand that certain regulatory regimes do still give a huge financial 
incentive to bias your change-management decisions towards the "status quo"; 
my earlier comment indicated that this is starting to change, not that the 
process is complete.  Even if you were to agree with me completely, it might 
not be reasonable to risk the entire future of your business on a fast upgrade 
cadence if you are liable for the risks of upgrading but not liable for the 
risks of not-upgrading.

However, in a situation with perverse incentives like that, an equally 
significant risk is building a process that punishes even preparing to make a 
change.  Inasmuch as it's feasible, you should always have a codebase that is 
ready to roll out upgraded versions of every dependency, as if the regulators 
would allow the upgrades, because when security researchers identify after the 
fact that a vulnerability is high-impact, you don't want to have to make big 
changes or retrofit your tooling or your codebase in the moment of that 
impact.  Presumably all the governments and the customers could still sue you 
if you hadn't managed to fix e.g. Heartbleed after <some span of time that a 
layperson would think is unreasonable>.

Another great example of why you want to be ready to upgrade: if you can run 
your tests against a new Twisted in its pre-release week and report a 
regression (or better yet, run continuously against trunk and report 
regressions that affect your code as they occur) then you can offload the work 
of actually keeping your application running onto us, and force us to avoid 
ever releasing a version that breaks you.  But if you only identify bugs years 
after the fact, there's no longer anything we can do except fix them with the 
same priority as everything else.
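
To make that concrete, here is a rough sketch of the kind of setup I mean, 
assuming you use tox; "myproject", the GitHub trunk URL, and trial as the test 
runner are placeholders for whatever your project actually uses:

    # Hypothetical tox.ini for a project that depends on Twisted; a minimal
    # sketch only.  The "twisted-release" environment tests against the latest
    # PyPI release, and "twisted-trunk" tests against Twisted's development
    # trunk, so a breaking change shows up while it can still be reported
    # (and fixed) before it ships in a release.
    [tox]
    envlist = twisted-release, twisted-trunk

    [testenv]
    deps =
        release: Twisted
        trunk: git+https://github.com/twisted/twisted#egg=Twisted
    commands =
        trial myproject

With something like that running in CI, "does the next Twisted break us?" is 
one extra build rather than a migration project years later.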

-glyph

_______________________________________________
Twisted-Python mailing list
Twisted-Python@twistedmatrix.com
http://twistedmatrix.com/cgi-bin/mailman/listinfo/twisted-python
