Hi Dale,

Well, I guess it should all work with the properties the way you currently 
set them. I have just run into trouble a little too often with groupId 
variables before, especially when it comes to releasing: the 
maven-release-plugin magic sometimes had problems doing its thing. Things get 
totally out of hand as soon as variables go into the parent declaration (I 
think for this reason you can no longer even use variables in the parent 
element in more recent Maven versions). I would feel more comfortable with 
using “${project.groupId}” instead, but I guess we’ll see what happens when 
releasing. From experience, I like to keep properties out of the artifactId 
and groupId as much as possible. Using them in the version element is normal 
practice, but in the other elements they have caused me a lot of trouble.
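
To make the parent problem concrete, this is the kind of construct that used 
to blow up on me (the coordinates here are made up for illustration, not our 
actual ones):

    <!-- variables in the parent declaration: the release plugin chokes on
         this, and more recent Maven versions reject it outright -->
    <parent>
      <groupId>${edgent.groupId}</groupId>
      <artifactId>edgent-parent</artifactId>
      <version>${edgent.version}</version>
    </parent>

Maven has to resolve the parent before it can interpolate properties (which 
may themselves be defined in the parent), so expressions in there are a 
chicken-and-egg problem.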

In general, we could enforce different types of rules on our pom by using 
jQAssistant (https://jqassistant.org/). I have used it to define rules in 
some larger projects that are automatically enforced.

I re-checked the “fail-at-end” behavior and you are right: if a module fails, 
that particular module will not be deployed. But I am currently not 100% sure 
what happens to the modules that depend on a failing module. It will be one of 
these two options: either those modules build against the last deployed 
SNAPSHOT version, which would be bad because the changes of the current build 
would be lost, or they build and are tested against the freshly built but 
failing module, which makes it very difficult to find the reason for failures 
(the tests will fail in module B even if the cause is in module A). I don’t 
quite like either of these two options, and the only way to avoid them 
completely is by not failing the build at the end.

I have also noticed these random failures, but I have had them on my Mac too, 
so I wouldn’t say it’s an Apache Infra problem but rather a problem in our 
test suite. I would prefer a system property to increase the timeouts when 
running on a high-load CI server.
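
On the CI server, that property could then be wired up in a profile, roughly 
like this (the profile and the property name are made up; see the Java sketch 
further down in the thread):

    <!-- hypothetical CI profile: pass a longer timeout to the test JVMs -->
    <profile>
      <id>ci</id>
      <build>
        <plugins>
          <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-surefire-plugin</artifactId>
            <configuration>
              <systemPropertyVariables>
                <edgent.test.start.timeout.ms>20000</edgent.test.start.timeout.ms>
              </systemPropertyVariables>
            </configuration>
          </plugin>
        </plugins>
      </build>
    </profile>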

In general, the build shouldn’t fail at all, and at the current pace at which 
new code is finding its way into the repo, I guess we would be fine with the 
build failing if something was broken by a commit. I would agree with you if 
we had a problem of concurrent commits breaking things all the time, but 
that’s currently not the case.

Chris




From: Dale LaBossiere <dml.apa...@gmail.com>
Date: Tuesday, October 24, 2017 at 15:49
To: Christofer Dutz <christofer.d...@c-ware.de>
Subject: Re: shouldn't we only define one groupId per-platform?


On Oct 23, 2017, at 6:09 PM, Christofer Dutz 
<christofer.d...@c-ware.de> wrote:
...
I just updated and noticed you tweaked the groupIds of the modules quite a bit. 
May I ask why? Right now, the groupId of an artifact is inherited from the 
parent. Was there a reason to switch the other dependencies to a property?
There were a few types of changes that I made.  I’m happy to clarify each as 
needed but in general I believe the changes are logically correct and they make 
it more likely that things stay that way.

e.g., I had encountered two places where the dependency spec was wrong because 
of a pom copy-paste error from J7 to Android - an Android project’s pom 
declared deps on Edgent Java 7 projects when it should not have.  The 
general rule is: other than for the retrolambda or class-copy stuff, an Edgent 
project’s deps on other Edgent projects must be for the same platform, i.e. 
for the same groupId as the declaring project.  I think the change to use 
${project.groupId} in the dependency decls, instead of o.a.e.<platform>, 
correctly captures that “rule”.
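
To illustrate, with a made-up artifactId (versions omitted; assume they’re 
managed in a parent):

    <!-- in a platforms/android module; before: platform hard-coded,
         silently wrong after the J7 -> Android copy-paste -->
    <dependency>
      <groupId>org.apache.edgent.java7</groupId>
      <artifactId>edgent-api-topology</artifactId>
    </dependency>

    <!-- after: always the declaring project's own platform groupId -->
    <dependency>
      <groupId>${project.groupId}</groupId>
      <artifactId>edgent-api-topology</artifactId>
    </dependency>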

There were other inconsistencies as I recall.  I believe there were some poms 
which explicitly declared the project’s groupId instead of inheriting from the 
parent as they should have (of course there are some poms like 
platforms/java7/pom where this is required; but I think there were some where 
it was explicitly declared and not required).  The net is that I think I 
removed unnecessary explicit decls and didn’t add any unnecessary explicit 
declarations.

Of course if you saw a change that you didn’t think was logically accurate then 
that’s something else :-)  Please pick out an example and I can clarify/verify.


...
Then it seems the usage of String.join is not downgradable to Java 7, so I 
replaced that with something from Apache Commons.
Whoops, missed that one.  Thanks.  Guess the --fail-fast behavior on Jenkins 
kept that test from ever running, so there was no chance to notice it.
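
For reference, a minimal sketch of the substitution, assuming it was 
commons-lang3 that got pulled in (the class below is illustrative):

    import java.util.Arrays;
    import java.util.List;

    import org.apache.commons.lang3.StringUtils;

    public class JoinExample {
        public static void main(String[] args) {
            List<String> parts = Arrays.asList("a", "b", "c");

            // String.join(CharSequence, Iterable) only exists since Java 8:
            // String joined = String.join(",", parts);

            // Java-7-compatible equivalent from Apache Commons Lang:
            String joined = StringUtils.join(parts, ",");
            System.out.println(joined);  // prints a,b,c
        }
    }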



The update to have Jenkins fail at the end has a negative side-effect, as it 
would deploy SNapshot versions of artifacts whose tests may have failed.

Just for clarification, are you sure it would deploy the artifacts if there 
were failures?  My expectation is that after a failure, dependent 
tasks/goals/phases still will not get performed, but other non-dependent 
tasks will still be run.  I guess I can imagine that J8 could pass and deploy 
and then J7 fail and not get deployed.  Not ideal.  But seeing Jenkins runs 
showing only a small fraction of the tests run (due to --fail-fast) also isn’t 
helpful.  And it makes it hard to make enough progress through the tests to 
more quickly identify those that seem to be more sensitive than others.

But in general it seems we’re in a tough place if we want/need Jenkins builds 
to fully succeed.

It’s effectively impossible to get clean test runs due to the load on the 
Jenkins build servers combined with all of the tests inherently having 
timeout-related constraints… normally not too aggressive (e.g., a test expects 
to see the correct tuples/events on a stream within 10sec).  That said, there 
seems to be at least one place in the runtime where it expects a state change 
to a topology (e.g., “start”) to take no more than 4sec; this may also be the 
source of some of the failures.  I suppose a change could be made to that 
runtime code to enable overriding that value with a Java property (or using 
the edgent.build.ci property to affect the value it uses).  And perhaps a 
similar thing could be done in the test harness (automatically increasing the 
timeout value if edgent.build.ci is set).  I may experiment with that to see 
if that approach helps/works.  Maybe this is the best approach, as opposed to 
the shorter term desire to get a better sense of overall project health via 
--fail-at-end… though I still think that --fail-at-end may be useful.
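
A rough sketch of what that override could look like (the property names and 
the scale factor are assumptions, not existing Edgent API; edgent.build.ci is 
the property we already use):

    /** Hypothetical timeout helper; the property names are assumptions. */
    public final class TestTimeouts {
        private static final long DEFAULT_START_TIMEOUT_MS = 4_000L;

        private TestTimeouts() {}

        /** Timeout for a topology state change, overridable and CI-aware. */
        public static long startTimeoutMillis() {
            // An explicit override wins: -Dedgent.test.start.timeout.ms=20000
            String override = System.getProperty("edgent.test.start.timeout.ms");
            if (override != null) {
                return Long.parseLong(override);
            }
            // Otherwise bump the default when running on CI: -Dedgent.build.ci=true
            if (Boolean.getBoolean("edgent.build.ci")) {
                return DEFAULT_START_TIMEOUT_MS * 4;  // assumed scale factor
            }
            return DEFAULT_START_TIMEOUT_MS;
        }
    }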

— Dale

