Hi all,

while having a look at the build I noticed some further things I'd like to
address.

It seems that you want to take the path of configuring everything in the master 
pom and have the configuration applied through inheritance to the sub-modules.
Especially the "antlr" and "scala" profiles.

First of all, you currently have to activate them manually. If you add an
activation rule, they will auto-enable themselves ... I've done that in the PR I
just submitted.
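For reference, such an activation rule could key off the presence of the Scala source directory, so the profile only kicks in where it's actually needed (profile id and directory layout here are just assumptions):

```xml
<profile>
  <id>scala</id>
  <!-- Auto-activate whenever the module actually contains Scala sources -->
  <activation>
    <file>
      <exists>${basedir}/src/main/scala</exists>
    </file>
  </activation>
  <!-- scala plugin configuration goes here -->
</profile>
```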

But I would rather ask to reconsider doing it 100% this way.

The problem is that things "magically" happen, and this sometimes scares off new
folks joining in.
I usually prefer to configure the plugins in a central pluginManagement section 
but to explicitly define the plugins in the poms they are used in.
This makes things a lot clearer for newcomers. It also lets IDEs like IntelliJ
pick the configuration up more easily.
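The pattern I mean looks roughly like this (plugin version is just a placeholder): the parent pom pins the version and configuration in pluginManagement, and every module that actually uses the plugin declares it explicitly:

```xml
<!-- Parent pom: pin version and configuration centrally -->
<build>
  <pluginManagement>
    <plugins>
      <plugin>
        <groupId>org.antlr</groupId>
        <artifactId>antlr4-maven-plugin</artifactId>
        <version>4.7.2</version>
      </plugin>
    </plugins>
  </pluginManagement>
</build>

<!-- Child pom: explicitly opt in, inheriting version and configuration -->
<build>
  <plugins>
    <plugin>
      <groupId>org.antlr</groupId>
      <artifactId>antlr4-maven-plugin</artifactId>
    </plugin>
  </plugins>
</build>
```

This way, opening a single module in an IDE shows exactly which plugins run in it, without having to mentally resolve profile inheritance from the parent.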

Another thing ... you are naming the scala artifacts with a "_2.11" suffix in
the artifact id, and then you are using the jar plugin to append the scala
version to the finalName.
However, this has no effect on the maven dependency resolution. The artifact
will always be deployed as "_2.11" ... yet you are referencing the artifacts
using a property which can be set to "_2.12", for example.
I am a bit confused how this could ever have worked before.
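To illustrate the point: a jar-plugin configuration like the following only renames the file in target/, while install/deploy and dependency resolution still use the coordinates from the artifactId (the property name here is an assumption):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-jar-plugin</artifactId>
  <configuration>
    <!-- Only changes the local file name, e.g. foo_2.12-1.0.jar ...
         the artifact is still installed/deployed under its unchanged
         artifactId, "_2.11" suffix and all -->
    <finalName>${project.artifactId}_${scala.binary.version}-${project.version}</finalName>
  </configuration>
</plugin>
```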

For the Mahout project we had a similar situation. A somewhat clean way to solve
this problem is to not use the "_2.11" in the artifact id, but to provide a
maven classifier instead ... so whenever you have a maven dependency on a scala
artifact you also provide the classifier. But to be honest ... this is still not
100% great.
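With the classifier approach, a dependency on a scala artifact would look roughly like this (group/artifact names and the classifier scheme are just for illustration):

```xml
<dependency>
  <groupId>org.example</groupId>
  <artifactId>some-scala-module</artifactId>
  <version>${project.version}</version>
  <!-- Scala binary version carried as a classifier
       instead of an artifactId suffix -->
  <classifier>scala_${scala.binary.version}</classifier>
</dependency>
```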

Also, it seems that the ANTLR4 runtime and plugin versions differ greatly.
Usually you should use the same plugin version as the runtime, or you could run
into incompatibility issues.
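Keeping the two in lockstep is easiest with a single property (the version number is just an example):

```xml
<properties>
  <antlr4.version>4.7.2</antlr4.version>
</properties>

<!-- The code-generating plugin ... -->
<plugin>
  <groupId>org.antlr</groupId>
  <artifactId>antlr4-maven-plugin</artifactId>
  <version>${antlr4.version}</version>
</plugin>

<!-- ... and the runtime reference the exact same version -->
<dependency>
  <groupId>org.antlr</groupId>
  <artifactId>antlr4-runtime</artifactId>
  <version>${antlr4.version}</version>
</dependency>
```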

Also, the build currently doesn't work on anything above Java 1.8, as some of
the referenced dependencies are too old to support newer versions. We should
consider updating these dependencies.
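Until that happens, the restriction could at least be made explicit so the build fails fast with a clear message instead of cryptic errors (a sketch using the enforcer plugin; the version is a placeholder):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <version>3.0.0</version>
  <executions>
    <execution>
      <id>enforce-java</id>
      <goals>
        <goal>enforce</goal>
      </goals>
      <configuration>
        <rules>
          <!-- Fail early if anything other than Java 1.8 is used -->
          <requireJavaVersion>
            <version>[1.8,1.9)</version>
          </requireJavaVersion>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>
```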

Last but not least, it took me a while to realize that you need to have
HADOOP_HOME set, and in my case (on Windows) I also needed to manually install
the winutils.exe stuff.
Neither the Java 1.8 requirement nor HADOOP_HOME was mentioned in the README or
Builds file. Therefore I added something that I also did in PLC4X for the first
time: a Groovy script that programmatically checks all known prerequisites and
fails the build with useful information if something's missing. You can disable
this check by activating the profile "skip-prerequisite-check".

Hope you like the proposed changes.

Chris
