Back in 1988, I was tasked with a code review for what seemed like 150 
programmers. Now I am very lazy; being a good programmer, I am willing to 
spend all day automating a one-hour job. They were all using an in-house 
OOPS language. I performed a cluster analysis of metrics using a Comal 
program augmented with C packages (the tool set I used as an actuary). What I 
discovered at the time was that most of the programmers' metrics fell into a 
tight cluster around a common centroid, and a very few were outliers. The 
problems seemed to fall into two groups: spaghetti code written by people who 
cut and paste without comprehension, and code from people unable to create 
useful abstractions, who instead produced a plethora of classes and 
functions. Some of the programmers had to be moved to QA; some got pay raises 
and stock options.
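
If anyone wants to repeat that kind of experiment today, here is a minimal 
Go sketch of the idea; the metric fields and the distance-from-centroid 
check are placeholders for whatever your own tooling measures, not the 
program I used back then.

    package main

    import (
    	"fmt"
    	"math"
    )

    // progMetrics is one programmer's metric vector. The fields are
    // invented for illustration; use whatever your tooling actually emits.
    type progMetrics struct {
    	name   string
    	values []float64 // e.g. avg function length, comment ratio, % duplicated lines
    }

    // centroid returns the component-wise mean of all metric vectors.
    func centroid(team []progMetrics) []float64 {
    	c := make([]float64, len(team[0].values))
    	for _, p := range team {
    		for i, v := range p.values {
    			c[i] += v
    		}
    	}
    	for i := range c {
    		c[i] /= float64(len(team))
    	}
    	return c
    }

    // distance is plain Euclidean distance between two metric vectors.
    func distance(a, b []float64) float64 {
    	var sum float64
    	for i := range a {
    		d := a[i] - b[i]
    		sum += d * d
    	}
    	return math.Sqrt(sum)
    }

    func main() {
    	team := []progMetrics{
    		{"alice", []float64{18, 0.20, 3}},
    		{"bob", []float64{22, 0.15, 5}},
    		{"carol", []float64{95, 0.02, 40}}, // the kind of outlier that warrants a closer look
    	}
    	c := centroid(team)
    	for _, p := range team {
    		fmt.Printf("%-6s distance from centroid: %.1f\n", p.name, distance(p.values, c))
    	}
    }

In practice you would feed it numbers from gocyclo, go vet, or similar 
tools, and use a real clustering algorithm rather than eyeballing distances.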

With Go, all of our code reviews are manual; the very nature of the 
language promotes the development of useful, orthogonal abstractions.

If you are working with a large group of programmers, I do believe in using 
heuristics, but be aware of their limitations. The Heisenberg Uncertainty 
Principle says that you cannot measure something without changing it: once 
it is known that you are measuring something, that measurement becomes 
useless.

I do believe in a 5% to 10% duplication of effort. I will give the same 
assignment to two different programmers and see who does the best job in the 
least time. "Best job" does tend to be rather subjective. "Least time" is 
easy; it is a matter of time sheets. "Least time" is also dangerous: I prefer 
a meticulous programmer with complete documentation and a proven, bug-free 
product, but if someone can do a quality job faster, you had best measure it 
and compensate accordingly so you do not lose that person.

I wish you the best of luck, but keep in mind that not everyone writes 
idiomatic Go; most programmers will have a prior history with other 
languages that may affect how they code. Poorly designed heuristics may 
penalize good programmers for not doing things in an inferior way.
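
As a concrete illustration of that last point (and of Carl's first example 
below), here is a sketch assuming the metric is a naive branch count; the 
function names and the config path are made up:

    package main

    import (
    	"fmt"
    	"os"
    )

    // readConfigCareless ignores the error entirely. A naive
    // branch-counting metric scores this as "simpler", even though
    // it is worse code.
    func readConfigCareless(path string) []byte {
    	data, _ := os.ReadFile(path)
    	return data
    }

    // readConfigIdiomatic handles the error the way Go encourages.
    // The `if err != nil` adds a branch, so a crude cyclomatic-complexity
    // score goes up, yet this is the version you want in production.
    func readConfigIdiomatic(path string) ([]byte, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return nil, fmt.Errorf("reading config %q: %w", path, err)
    	}
    	return data, nil
    }

    func main() {
    	if _, err := readConfigIdiomatic("config.json"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    	_ = readConfigCareless("config.json")
    }

Any scoring scheme that rewards the first version over the second is 
measuring the wrong thing.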

On Thursday, June 6, 2019 at 2:58:19 PM UTC-5, Carl wrote:
>
> I'd like to know what people are using to measure the quality of their Go 
> code bases and why. Specifically, I'm asking if:
>
>    1. There is a way to score a code base on quality that works with 
>    idiomatic Go
>    2. Use this metric in a CI system and fail the build if the quality 
>    falls
>    3. Use this metric in a team to detect trends over time and hot spots 
>    where technical debt tends to accumulate
>
> It strikes me that Go is quite different to the usual set of popular 
> languages and that even the most basic measures (cyclomatic complexity for 
> example) may not be a good fit. 
>
> For example:
>
>    1. Adding proper error handling increases measured complexity, but is 
>    actually preferable to not handling errors
>    2. A switch case where a case statement contains one or more commas is 
>    actually more maintainable, but would probably have the same score as 
>    individual case statements on separate lines
>
> There are already a lot of tools that analyse Go code in several different 
> ways - but which ones really work in the long term? 
>
> I'd like to ask the community which ones you've had good experiences with 
> - a test of a good tool could be that a better score results in more 
> idiomatic, maintainable code.
>
