Re: [xwiki-devs] [VOTE] Add new check to measure quality of tests

2018-03-15 Thread Ecaterina Moraru (Valica)
+1

Thanks,
Caty



Re: [xwiki-devs] [VOTE] Add new check to measure quality of tests

2018-03-15 Thread Eduard Moraru
Sounds interesting,
+1.

Thanks,
Eduard



Re: [xwiki-devs] [VOTE] Add new check to measure quality of tests

2018-03-15 Thread Alex Cotiugă
+1

Thanks,
Alex



Re: [xwiki-devs] [VOTE] Add new check to measure quality of tests

2018-03-15 Thread Guillaume Delhumeau
+1




-- 
Guillaume Delhumeau (guillaume.delhum...@xwiki.com)
Research & Development Engineer at XWiki SAS
Committer on the XWiki.org project


Re: [xwiki-devs] [VOTE] Add new check to measure quality of tests

2018-03-15 Thread Thomas Mortagne
+1




-- 
Thomas Mortagne


[xwiki-devs] [VOTE] Add new check to measure quality of tests

2018-03-15 Thread Vincent Massol
Hi devs,

As part of the STAMP research project, we’ve developed a new tool (Descartes, 
based on Pitest) to measure the quality of tests. It generates a mutation score 
for your tests, indicating how good the tests are. Technically, Descartes performs 
some extreme mutations on the code under test (e.g. removing the content of void 
methods, returning true for methods that return a boolean, etc. - see 
https://github.com/STAMP-project/pitest-descartes). If a test continues to pass, 
it means it is not killing the mutant, and thus the mutation score decreases.

So in short:
* Jacoco/Clover: measure how much of the code is tested
* Pitest/Descartes: measure how good the tests are

Both provide a percentage value.

I’m proposing to compute the current mutation scores for xwiki-commons and 
xwiki-rendering, and to fail the build when new code is added that reduces the 
mutation score below the threshold (exactly the same threshold strategy as we 
use for Jacoco).
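For illustration, the check would be wired into the Maven build roughly as 
below. This is only a sketch following the pitest-descartes README: the plugin 
versions and the actual threshold value per module would still need to be decided.

<plugin>
  <groupId>org.pitest</groupId>
  <artifactId>pitest-maven</artifactId>
  <version>1.3.2</version>
  <configuration>
    <!-- Use the Descartes extreme-mutation engine instead of Pitest's default (Gregor) -->
    <mutationEngine>descartes</mutationEngine>
    <!-- Fail the build when the mutation score drops below this percentage,
         in the same spirit as our Jacoco coverage threshold -->
    <mutationThreshold>60</mutationThreshold>
  </configuration>
  <dependencies>
    <dependency>
      <groupId>eu.stamp-project</groupId>
      <artifactId>descartes</artifactId>
      <version>1.2</version>
    </dependency>
  </dependencies>
</plugin>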

I consider this an experiment to push the limits of software engineering a bit 
further. I don’t know how well it will work. I propose to do the work, test this 
for 2-3 months, and then decide whether to keep it (i.e. whether the gains it 
brings outweigh the problems it causes).

Here’s my +1 to try this out.

Some links:
* pitest: http://pitest.org/
* descartes: https://github.com/STAMP-project/pitest-descartes
* http://massol.myxwiki.org/xwiki/bin/view/Blog/ControllingTestQuality
* http://massol.myxwiki.org/xwiki/bin/view/Blog/MutationTestingDescartes

If you’re curious, you can see a screenshot of a mutation score report at 
http://massol.myxwiki.org/xwiki/bin/download/Blog/MutationTestingDescartes/report.png

Please cast your votes.

Thanks
-Vincent

[xwiki-devs] [XWiki Day] BFD#170

2018-03-15 Thread Alex Cotiugă
Hello devs,

This Thursday is BFD#170:
http://dev.xwiki.org/xwiki/bin/view/Community/XWikiDays#HBugfixingdays

Our current status is:
* -30 bugs over the last 120 days (4 months), i.e. we need to close 30 bugs for
the number of closed bugs to equal the number of created bugs
* -95 bugs over the last 365 days (1 year)
* -99 bugs over the last 500 days (between 1 and 2 years)
* -315 bugs over the last 1600 days (4.3 years)
* -711 bugs since the beginning of XWiki

See https://jira.xwiki.org/secure/Dashboard.jspa?selectPageId=10352


Here's the BFD#170 dashboard to follow the progress during the day:
https://jira.xwiki.org/secure/Dashboard.jspa?selectPageId=14090

Happy Bug Fixing Day,
Alex