[ 
https://issues.apache.org/jira/browse/SUREFIRE-2004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17519521#comment-17519521
 ] 

Alexander Kriegisch edited comment on SUREFIRE-2004 at 4/8/22 11:26 AM:
------------------------------------------------------------------------

[~tibordigana], I did *not* say the issue has anything to do with long release 
cycles. I said that due to those long cycles, chances are that a release fixing 
this issue is still far away, because it did not get in before the M6 cut-off 
date. My statement does not say anything about your workload - I see you fixed 
111 issues for M6, which is impressive - or how high a priority this issue has 
for you. I simply expressed my sympathy and understanding for another person 
who tried to get it into M6, because I was in the same situation, having had 
to wait a long time for a release with some of my own Surefire issues.

Of course, you do not want to make a long-awaited milestone release unstable at 
the last minute by introducing an unstable fix. We all appreciate your 
diligence and quality focus as a Surefire maintainer. *I am simply assuming 
that your decision to keep it out of M6 was correct,* because you know the code 
base better than anyone else currently active in this project.

Having said that, smaller feedback cycles and more frequent releases would 
still be good. If a milestone takes almost two years to finish, even if (and 
also because) you are only one person, it is simply too big. You work as fast 
and to as high a standard as you can, no doubt about that. Therefore, you can do at 
least two things (maybe even both at the same time):

* Make your milestones smaller. "Release early, release often", the good old 
Linux principle. You can benefit from more frequent user feedback, amplifying 
your learning process for the next release and simultaneously providing 
business value to users in smaller increments. Win-win.

* Involve more collaborators, e.g. by managing contributions in a different, 
more collaborative and less tiresome way, maximising the work *not* done by 
yourself and accepting contributions, even if it means that you might have to 
do some more polishing. That would still be quicker than micro-managing 
contributors until they have changed every detail the way you would have 
implemented it yourself. You would get more work done per unit of time that 
way. If PRs were less time-consuming and bureaucratic, the danger of 
disheartening contributors and making them stop contributing after the first 
few tries would also be smaller. Not everyone can afford to focus as much on 
this project as you can, i.e. if PR reviews require many iterations, you only 
get one-time contributors. That does not scale well. People who contribute more 
often also tend to learn and improve the quality of their contributions over 
time. For yourself, many iterations of reviewing, discussing and re-reviewing 
are also wasteful, because each iteration requires a context switch away from 
what you did before and what you want to do next. You lose focus. It would be 
better to get a PR off the table quickly, actively helping to finish it. Once 
it is merged, it is off the table; it does not dangle around for weeks or 
months, having to be rebased often or ending in an ugly merge. You can forget 
about it and focus on your next piece of work. The gap between touch time and 
cycle time for each given piece of work should be as small as possible; 
everything beyond touch time is waste. Can you afford waste, given your limited 
resources?



> Empty report for single-module project with 'aggregate=true'
> ------------------------------------------------------------
>
>                 Key: SUREFIRE-2004
>                 URL: https://issues.apache.org/jira/browse/SUREFIRE-2004
>             Project: Maven Surefire
>          Issue Type: Bug
>          Components: Maven Surefire Report Plugin
>    Affects Versions: 2.4, 3.0.0-M5
>            Reporter: Alexander Kriegisch
>            Priority: Major
>             Fix For: waiting-for-apache-feedback
>
>
> Using either {{-Daggregate=true}} on CLI or {{<aggregate>true</aggregate>}} 
> in the plugin configuration leads to an empty report (i.e. zero tests 
> reported) when e.g. executing
> {code:none}
> mvn -Dmaven.test.failure.ignore=true -Daggregate=true clean verify 
> surefire-report:report-only
> {code}
> in the context of a single-module project. As soon as I make the root module 
> pom-packaged and move the tests into a child module, the aggregate report 
> works.
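> The steps above can also be reproduced via the plugin configuration variant. 
> A minimal sketch (hedged example; the coordinates are the standard ones for 
> the Maven Surefire Report Plugin, version as used above):
> {code:xml}
> <reporting>
>   <plugins>
>     <plugin>
>       <groupId>org.apache.maven.plugins</groupId>
>       <artifactId>maven-surefire-report-plugin</artifactId>
>       <version>3.0.0-M5</version>
>       <configuration>
>         <!-- in a single-module project this yields the empty report described above -->
>         <aggregate>true</aggregate>
>       </configuration>
>     </plugin>
>   </plugins>
> </reporting>
> {code}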
> FYI, if I do not define the plugin and its version in my POM at all, the 
> default version 2.4 used by Maven on my workstation has the same problem, so 
> this does not seem to be a 3.0.0-M5 issue only.
> ----
> Background info about how and why I actually stumbled across this problem: I 
> have an OSS multi-module project with lots of expensive UI tests. The full 
> build can take 2.5 hours. I wanted to test a few CLI settings before creating 
> an additional GitHub CI build workflow which can be run on demand and always 
> runs all tests in all modules (ignoring errors and failures), no matter what. 
> In the end, it is supposed to create a single-file aggregate HTML report 
> which can easily be attached to the build and is later available for 
> download, if the user so chooses, in order to analyse failing tests 
> comfortably without having to scroll through build logs. You get the 
> picture, I guess. In the original project, there is a pom-packaged root POM, 
> so the problem described in this issue does not occur there. I simply created 
> a single-module dummy project in order to verify the effect of certain build 
> options quickly, without having to wait for the slow original build to 
> finish. Eventually, I noticed the issue described above.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)
