On 07/02/15 at 09:24, Cédric Krier wrote:
On 07 Feb 12:34, Sharoon Thomas wrote:
>On 02/06, Sergi Almacellas Abellana wrote:
> >Hi,
> >
> >As a PoC i just added flake8 check to tox [1] on one of my personal
> >projects. The output is available on drone [2].
> >
> >I thought it might be interesting to have this check on all Tryton modules too.
> >What do you think?
>
>We at Openlabs have been using this for our modules for two years.
>Having a flake8 check brings consistency to the codebase and makes it
>easier for new contributors to understand the expected coding standards,
>instead of relying on a human to point them out.
We already have it on codereview, so nothing new.
And such a check after commit is just too late.

I propose it because I imagine that at some point in the future we will have tests run on codereview, so it's easier to have all the information in one place.
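For reference, the flake8 hook I added in [1] boils down to something like the following tox environment (a sketch; the actual env name and settings in my project may differ):

```ini
# tox.ini -- minimal flake8 environment (sketch, not the exact config from [1])
[tox]
envlist = py27, flake8

[testenv:flake8]
# flake8 is the only dependency this env needs
deps = flake8
# check every Python file in the module
commands = flake8 .
```

Running `tox -e flake8` then gives the same pass/fail result locally and on the CI server.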


> >Also this will allow to deprecate reviewbot once all codereviews are tested
> >on drone.
>Having flake8 tests on CI is certainly better than having it on the review
>tool.
No, it is too late, and false positives will be such a pain.
Moreover, our reviewbot is doing a great job by commenting on flake8
issues inline instead of producing a plain report somewhere else.

But it also reports false positives (on unchanged code), so it's the same problem.

Also, what is the point of having such a thing in the code except
making it harder to read:

     https://bitbucket.org/pokoli/trytond-buti/commits/60d7a13855325f9069d85db15ab7421edb959922#L__init__.pyT5

> >I also managed to run coverage to ensure that module coverage is over a
> >specific % (75% on my case, but can be customized per module). Patch is
> >here[3] (and here[4] is run by default). The output is also available on
> >drone [5]. Do you think this is also interesting to add?
>
>Coverage % can be a vanity metric, but we have been trying to achieve
>100% coverage for critical modules and significantly high coverage
>for the others. This has certainly improved the quality of the code, and if
>the tests are written well (not just for the sake of coverage) there are
>hardly any maintenance issues.
Based on what? Where do you find a correlation between coverage and
quality? How do you achieve "not just for the sake of coverage"? Which
threshold is it?

What is the point of having failures like these:

     https://coveralls.io/r/openlabs/nereid-webshop
     https://travis-ci.org/openlabs/nereid-webshop/jobs/49836710

As usual, you are just following the masses without thinking about exactly
what it implies and what it brings.
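For the record, the per-module threshold from [3] amounts to coverage.py's own minimum-coverage setting; the actual patch may configure this differently, but a sketch would be:

```ini
# .coveragerc -- sketch: fail the coverage report when the module's
# total coverage drops below the chosen threshold (75% here, but it
# can be customized per module)
[report]
fail_under = 75
```

With this in place, `coverage report` exits non-zero when coverage is below the threshold, which is what makes the CI build fail.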


--
Sergi Almacellas Abellana
www.koolpi.com
Twitter: @pokoli_srk
