Zane Bitter wrote:
> We have around 60 repos using the 'mccabe' package to do cyclomatic
> complexity checks in flake8 in their gate jobs.[1] Based on advice
> received at the time[2], the version of mccabe that we use in the gate
> is effectively pinned to 0.2.1 by hacking (the project, not... never
> mind).[3]
>
> This is starting to cause problems, because version 0.2.1 does not
> work with Python 3. By 'does not work' I don't mean that it refuses to
> run; I mean that it returns completely different numbers. (As far as I
> can tell they're always the same or lower.) tox creates the pep8
> environment using the version of python that tox itself is running in,
> so as distros increasingly prefer to install packages using python3,
> people will increasingly be running flake8 under python3, and they'll
> run into situations like the one I had today where their pep8 tests
> pass locally but fail in the gate (which still runs under python2).
> When the gate switches to python3, as I assume it eventually must,
> we'll get bogus results.
>
> Unfortunately, moving to a later version is complicated by the fact
> that 0.2.1 was... not good software. There is exactly one regression
> test.[4] And if you're about to ask whether it was to check the
> trivial case that the function:
>
>     def foo():
>         return None
>
> has a cyclomatic complexity of 1, then the joke is on you, because it
> actually returns 2 for that function.
>
> Later versions have more tests and appear to be consistent with each
> other and across python versions, but they return completely different
> results from 0.2.1. And the results are mostly higher, so if we bump
> the requirements many of those 60 projects will break.
>
> (I could also go into detail on how the numbers that later versions
> report are also wrong in different ways, but I'll save it for another
> day.)
>
> Here's how the allegedly most complex function in Heat[5] shakes out,
> for example:
>
>     mccabe    py27    py36
>     0.2.1       14       6
>     0.3.1       23      23
>     0.6.1       23      23
>
> I don't have a solution to suggest, but I burned most of my day
> digging into this, so I just wanted to let y'all know how screwed we
> are.

Thanks for digging into this, Zane. I'd say we have two options, based
on how useful running those tests is:
1/ If they are useful, we should bump the version, adjusting
max-complexity per repo at the same time to match its most complex
function (with some slack factor; see the sketch below for one way to
find that number). We could then encourage more projects to use
cyclomatic complexity checks and fix the biggest offenders using cycle
goals.

2/ If they are not useful, we should just drop the cyclomatic
complexity tests altogether rather than rely on wrong results and
waste CPU cycles.
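For option 1, the tricky part is picking the per-repo number. Here is a
rough sketch of one way to find it -- untested, no error handling, and
the script and its names are mine rather than an existing tool; it just
reuses the PathGraphingAstVisitor pattern shown in mccabe's README:

    # Sketch: walk a source tree with mccabe's visitor API and print the
    # highest-complexity functions, so a repo can pick a max-complexity
    # value with some slack on top. Files that fail to parse will raise.
    import ast
    import os
    import sys

    import mccabe


    def file_complexities(path):
        """Yield (complexity, qualified name, path) for each function."""
        with open(path) as f:
            tree = compile(f.read(), path, 'exec', ast.PyCF_ONLY_AST)
        visitor = mccabe.PathGraphingAstVisitor()
        visitor.preorder(tree, visitor)
        for graph in visitor.graphs.values():
            yield graph.complexity(), graph.entity, path


    def main(root):
        scores = []
        for dirpath, _subdirs, filenames in os.walk(root):
            for name in filenames:
                if name.endswith('.py'):
                    scores.extend(
                        file_complexities(os.path.join(dirpath, name)))
        # The top entry (plus slack) is a candidate max-complexity value.
        for score in sorted(scores, reverse=True)[:10]:
            print('%3d  %s  (%s)' % score)


    if __name__ == '__main__':
        main(sys.argv[1])

The top score would then go into each repo's flake8 config as
max-complexity. As Zane's table shows, the numbers depend on both the
mccabe version and the python running it, so whatever value we pick has
to be measured with the combination the gate actually uses.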
So I'd like to hear from the 60+ repos on whether using those tests
leads to any good results :)

--
Thierry Carrez (ttx)