Brad Roberts wrote:
* graphing of issue trends
That's a crock <g>.

Uh, whatever.  Most of the rest of us humans respond much better to
pictures and trends than to raw numbers.  Show me some visual
indication of the quality of my code (ignoring the arguments about
the validity of such graphs) and I can pretty much guarantee that
I'll work to improve that measure.  Nearly everyone I've ever worked
with behaves similarly... once they agree that the statistic being
measured is useful.  One of the best examples is the percentage of code
covered by unit tests.  The same applies to the number of
non-false-positive issues discovered through static analysis.



A long time ago, the company I worked for decided to put up a huge chart
on the wall that everyone could see, and every day the current bug count
was plotted on it. The idea was to show a downward trend.

It wasn't very long (a few days) before this scheme completely backfired:

1. engineers stopped submitting new bug reports

2. the engineers and QA would argue about what was a bug and what wasn't

3. multiple bugs would get combined into one bug report so they only
counted as one

4. if a bug was "X is not implemented", then implementing X might replace
that one bug with 3 or 4 new bugs against X. Therefore, X did not get implemented.

5. there was a great rush to submit half-assed fixes before the daily
count was made

6. people would invent bugs for which they would simultaneously submit fixes (look ma, I fixed all these bugs!)

7. arguing about it started to consume a large fraction of the
engineering day, including the managers' time, since they were always
called in to resolve the disputes

In other words, everyone figured out they were being judged on the
graph, not the quality of the product, and quickly changed their
behavior to "work the graph" rather than improve the quality.

To the chagrin of the QA staff, management finally tore down the chart.

Note that nobody involved in this was a moron. They all knew exactly what was happening; it was simply irresistible.

> A specific case of this:  At Informix we had a step in our build process
> that ran lint (yes, it's ancient, but this was a decade ago and the
> practice was at least a decade old before I got there).  Any new warnings
> weren't tolerated.  The build automatically reported any delta over the
> previous build.  It was standard practice and kept the code pretty darned
> clean.

I think that's something different - it's not graphing or trending the data.
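
For what it's worth, that kind of "no new warnings" gate is simple to
automate: count the warnings, compare against a baseline recorded from the
previous build, fail on any increase, and ratchet the baseline down
otherwise.  Here's a rough sketch in Python; the lint command, the output
parsing, and the baseline file are placeholders of mine, not what Informix
actually ran:

#!/usr/bin/env python3
"""Sketch of a "no new lint warnings" build gate (hypothetical setup)."""
import subprocess
import sys
from pathlib import Path

BASELINE_FILE = Path("lint_baseline.txt")  # hypothetical baseline location
LINT_CMD = ["splint", "src/"]              # stand-in for whatever lint is used


def count_warnings() -> int:
    # Run the linter and count lines that look like warnings; real parsing
    # depends entirely on the tool, so this is only a placeholder heuristic.
    result = subprocess.run(LINT_CMD, capture_output=True, text=True)
    return sum("warning" in line.lower() for line in result.stdout.splitlines())


def main() -> int:
    current = count_warnings()
    previous = int(BASELINE_FILE.read_text()) if BASELINE_FILE.exists() else current

    delta = current - previous
    print(f"lint warnings: {current} (delta {delta:+d} vs. previous build)")

    if delta > 0:
        print("new warnings introduced -- failing the build")
        return 1

    # Ratchet the baseline down so improvements stick.
    BASELINE_FILE.write_text(str(current))
    return 0


if __name__ == "__main__":
    sys.exit(main())

Committing the baseline file alongside the build scripts keeps the ratchet
honest: the count can only go down.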
