On Wed, 31 Aug 2016 07:52:20 +0200, Stephen Connolly <stephen.alan.conno...@gmail.com> wrote:

I've been thinking about what to call the "consumer Pom"...

I think this is actually not a project object model, but the project
dependency trees

It should list each side artifact and its dependency tree (see the sketch
after the list below)...

So for example:

* the javadoc artifacts should (in an ideal world) depend on the corresponding
javadoc artifacts of their dependencies, because we expect {@link} references

* the source artifacts normally do not depend on anything else, but for an
über jar (which, yes, is a bad pattern) you would actually be correct to
depend on the bundled artifacts' source jars... so the concept still makes
sense

* the test jar artifact would have the full test dependency tree exposed, as
this would allow for test reuse
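
Purely to make that concrete (the element and attribute names below are
invented for illustration, not a proposed format), a .pdt could carry one
tree per attached artifact, something like:

<projectDependencyTrees groupId="..." artifactId="..." version="...">
  <artifact classifier="javadoc" type="jar">
    <!-- javadoc of each dependency, so {@link} references can resolve -->
    <dependency groupId="..." artifactId="..." classifier="javadoc"
                type="jar" version="..."/>
  </artifact>
  <artifact classifier="sources" type="jar">
    <!-- normally empty; an über jar would list the bundled source jars -->
  </artifact>
  <artifact classifier="tests" type="jar">
    <!-- full test-scope tree, enabling test reuse -->
    <dependency groupId="..." artifactId="..." type="jar" version="...">
      ...
    </dependency>
  </artifact>
</projectDependencyTrees>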

Now I guess the question is whether .pdt or .adt (artifact dependency trees)
is already too entrenched in some other domain, such that we'd want to avoid
using one of those extensions.

Next steps:

* start fleshing out a schema for the .pdt files
* start fleshing out a spec for the repository layout (it should be "parsable"
by modelVersion 4.0.0 aware clients, but we need to decide how to expose new
features); a possible layout is sketched below
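
As a strawman for the layout part (nothing here is decided, the file names
are just an assumption), the .pdt could simply be deployed next to the files
we publish today, so modelVersion 4.0.0 aware clients keep resolving the
.pom they already understand:

org/example/foo/1.0.0/foo-1.0.0.pom          (unchanged, for 4.0.0 clients)
org/example/foo/1.0.0/foo-1.0.0.pdt          (new, carries the dependency trees)
org/example/foo/1.0.0/foo-1.0.0.jar
org/example/foo/1.0.0/foo-1.0.0-sources.jar
org/example/foo/1.0.0/foo-1.0.0-javadoc.jar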


+1 for .pdt and the described next steps

Robert

On Tuesday 30 August 2016, Stephen Connolly <stephen.alan.conno...@gmail.com>
wrote:

On 29 August 2016 at 23:27, Christian Schulte <c...@schulte.it> wrote:

On 08/30/16 at 00:16, Paul Benedict wrote:
> I see a deployed faulty "consumer pom" to be more harmful than
> generating it locally on demand. At least with the local one I can
> upgrade my client to fix a dependency calculation. There will be no
> such relief in the case of your proposal.

It's not my proposal, but I agree with what is proposed. This whole
discussion started because users have requested to revert commits due to
compatibility issues. They want to keep such "faulty" behaviour. If they
want to fix it, they can deploy a new version using a more recent Maven
version. The older Maven version will then also see this new behaviour.
If the consumer pom contains the complete resolved dependency tree, the
code interpreting that data is not much more than downloading some files
from some repository. Yes, repository information needs to be part of
that consumer pom as well. So: the resolved dependency tree, including
the repository information saying where to get the resolved artifacts
from. And we also need to find a way to handle version ranges.
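
Just to sketch that (element names here are made up for illustration, not a
format proposal), the consumer pom would pair the resolved coordinates with
the repositories they were resolved from:

<repositories>
  <repository id="central" url="https://repo.maven.apache.org/maven2"/>
</repositories>
<dependencies>
  <dependency groupId="org.example" artifactId="lib" type="jar"
              version="1.2.3" repository="central">
    <dependency groupId="org.example" artifactId="lib-core" type="jar"
                version="2.0.1" repository="central"/>
  </dependency>
</dependencies>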


So the way I see this, different types of artifacts have different tree
requirements.

I may have one type of artifact (i.e. jar) where having multiple versions of
the same artifact on the same classpath at the same time really is not
supported.

We currently view artifact dependency resolution as building a linear path
from the dependency tree, based on the assumption of a single version of any
dependency (if two versions of the same artifact appear in the tree, conflict
mediation picks one and the other never makes it onto the classpath).

In the JavaScript world... and even in the OSGi world... that assumption
is simply not true.

This implies - to me - that the consumer pom should contain the tree that
was used at build time... *as a tree*

By all means, each node could include the version range as well as the
resolved version, e.g.

<dependency groupId="..." artifactId="..." architecture="..."
            classifier="..." type="..." version="[1.0,2.0)"
            resolvedVersion="1.5.0">
  <dependency groupId="..." artifactId="..." architecture="..."
              classifier="..." type="..." version="..."
              resolvedVersion="...">
    ...
  </dependency>
  ...
  <dependency groupId="..." artifactId="..." architecture="..."
              classifier="..." type="..." version="..."
              resolvedVersion="..."/>
</dependency>

The child elements are the dependencies of the resolved version.

Now the consumer of the consumer pom has all the dependency information
used at build time as well as the information to perform substitutions...

This means that if I - as a consumer - am already pulling in a different
resolvedVersion (but one valid within the advertised range) of a child
dependency, and I am using the tree to produce a single-version classpath, I
can prune the tree and know exactly which dependencies are no longer required.
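
A made-up example of that pruning: suppose the build-time tree recorded

<dependency groupId="org.example" artifactId="lib" version="[1.0,2.0)"
            resolvedVersion="1.5.0">
  <dependency groupId="org.example" artifactId="lib-extra"
              version="[3.0,4.0)" resolvedVersion="3.1.0"/>
</dependency>

and I already have org.example:lib at 1.7.0 on my classpath. 1.7.0 satisfies
[1.0,2.0), so I can substitute it for 1.5.0 - and because the child elements
describe 1.5.0's dependencies, I also know I can drop the recorded lib-extra
3.1.0 subtree unless 1.7.0's own tree brings it back.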

If I - as a consumer - need to produce a multi-version classpath - because I
am producing for an OSGi container - I can build the correct tree rather than
being forced to look at a flattened classpath that may not be aligned with
the requirements of the dependency system I am using.

If I - as a consumer - decide that I want to push the ranges to newer
versions, we still have the range information to allow for range
validation... but by default I will be using the versions that were resolved
at build time and consequently tested by the builder.

HTH

-Stephen



Regards,
--
Christian






---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscr...@maven.apache.org
For additional commands, e-mail: dev-h...@maven.apache.org
