Re: Model Version 5.0.0

2013-11-24 Thread Stephen Connolly
On Sunday, 24 November 2013, Jason van Zyl wrote:


 On Nov 23, 2013, at 5:44 PM, Stephen Connolly 
 stephen.alan.conno...@gmail.com wrote:

  Before I forget, here are some of my thoughts on moving towards Model
  Version 5.0.0
 
 The pom that we build with need not be the pom that gets deployed...
 thus the pom that is built with need not be the same format as the pom
 that gets deployed.
 

 Can you explain why you think this is useful? To me all the information
 that is carried with the POM after deployment is primarily for the
 consumption of tools, and there are a lot of tools that expect more than
 the dependency information. Removing all other elements in the POM is
 equivalent to being massively backward incompatible for an API. And if the
 subsequent consumption after deployment is primarily by programs, then why
 does it matter what gets deployed. I don't really see much benefit, but it
 will create all sorts of technical problems where we need multiple readers
 and all that entails and the massive number of problems this will cause
 people who have created tooling, especially IDE integration.


I am not saying that we remove *all* other elements. I am saying that we
don't really need as many of them.

There are a lot of elements that have questionable utility...

How often are the developers and contributors tags correct?

Do we really need to know the distributionManagement?

On the other hand there are some tags that have utility: SCM, URL, name,
description, dependencies (to name a few off the top of my head)

I am not saying that the above are a complete list. I am saying that this
gives us an opportunity to look at this and see what we really want in the
pom.


  Only with <packaging>pom</packaging> do we actually need things like the
  plugins section in the deployed pom, because it is a reality that for
  non-pom packaging we just want the transitive dependencies.
 
  Now there is the extensions issue where you might be registering a
  different file type that has different rules with respect to the
  classpath... but I am unsure if we actually consider those when
 evaluating
  the dependency tree... and in any case, once we accept that the deployed
  pom is not the same as the pom used to build (for non-pom packaging at
  least) we can transform that dependency tree using the exact rules that
  have to be known at build time thus closing the extensions issue.
 
  For projects with <packaging>pom</packaging> in fact we are only
 deploying
  small files so perhaps we can deploy two pom files... the one that exposes
  the standard dependency stuff and then a second one that is used for
 build
  inheritance.
 
  My vision is thus that we deploy between 2 and three pom files for every
  project.
 
  For jar/war/ear/... we deploy:
  * a modelVersion 4.0.0 pom as .pom (only lists dependencies)
  * a modelVersion 5.0.0 pom as -v5.pom (only lists dependencies but allows
  for new scopes)
 
  For pom we deploy
  * a modelVersion 4.0.0 pom as .pom (only lists dependencies)
  * a modelVersion 5.0.0 pom as -v5.pom (only lists dependencies but allows
  for new scopes)
  * the pom itself
 
  When building a pom, your parent pom must be of a modelVersion >= your
  modelVersion.

 Thanks,

 Jason

 --
 Jason van Zyl
 Founder,  Apache Maven
 http://twitter.com/jvanzyl
 -









-- 
Sent from my phone


Re: Model Version 5.0.0

2013-11-24 Thread Stephen Connolly
On Sunday, 24 November 2013, Igor Fedorenko wrote:



 On 11/23/2013, 23:08, Jason van Zyl wrote:


 On Nov 23, 2013, at 5:44 PM, Stephen Connolly
 stephen.alan.conno...@gmail.com wrote:

  Before I forget, here are some of my thoughts on moving towards
 Model Version 5.0.0

 The pom that we build with need not be the pom that gets
 deployed... thus the pom that is built with need not be the same
 format as the pom that gets deployed.


 Can you explain why you think this is useful? To me all the
 information that is carried with the POM after deployment is
 primarily for the consumption of tools, and there are a lot of tools
 that expect more than the dependency information. Removing all other
 elements in the POM is equivalent to being massively backward
 incompatible for an API. And if the subsequent consumption after
 deployment is primarily by programs, then why does it matter what
 gets deployed. I don't really see much benefit, but will create all
 sorts of technical problems where we need multiple readers and all
 that entails and the massive number of problems this will cause
 people who have created tooling, especially IDE integration. 


 The way I see it, what is deployed describes how the artifact needs to
 be consumed. This is the artifact's public API, if you will; it will be
 consumed by a wide range of tools that resolve dependencies from Maven
 repositories and the descriptor format should be very stable. Most likely
 we have no choice but to use (a subset of) the current 4.0.0 model version.


I would be very sad if we are limited to a subset.

There are some critical concepts that in my view are missing from pom files.

Number one on my hit list is a provides concept.

Where you declare that an artifact *provides* the same api as another GAV.

Technically you'd need to be able to specify this both at the root of a pom
and also flag specific dependencies (because the api they provide was not
specified when that pom was deployed)

Thus the Geronimo specs poms could all provide the corresponding JavaEE
specs and excludes issues or other hacks would no longer be required.

Look at the issues you will have if you use the excludes wildcards in your
pom... Namely *anyone* who uses your artifact as a dependency will need to
be using Maven 3 or newer... does Ivy read those wildcards correctly? Does
sbt? Does Buildr?

They are a tempting siren... And from another PoV they will force others to
follow... *but* if we are forcing them to follow should we not pick a nicer
format to follow... Not one consisting of many layers of hacks?

The modelVersion 4.0.0 pom is deployed to the repo (in my scheme) so that
legacy clients can still make some sense... If a modelVersion 5.0.0 feature
cannot be mapped down to 4.0.0... Well we try our best and that's what you
get... We should make sure that people stuck with older clients can read
something semi-sensible and then layer their hacks as normal to get the
behaviour they need.

Thus provides (which is not a scope but a GAV) can be modelled by having
the modelVersion 4.0.0 pom list the collapsed dependencies with the
appropriate excludes added (without wildcards)

Other concepts cannot be mapped, so they get dropped.
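To make the provides idea concrete, here is a hypothetical sketch of what such a pom fragment might look like. The <provides> element and the coordinates are invented for illustration; no such schema exists yet:

```xml
<!-- hypothetical modelVersion 5.0.0 fragment: the artifact declares that
     it provides the same API as another GAV; element names are invented -->
<project>
  <modelVersion>5.0.0</modelVersion>
  <groupId>org.apache.geronimo.specs</groupId>
  <artifactId>geronimo-servlet_3.0_spec</artifactId>
  <version>1.0</version>
  <provides>
    <provide>
      <groupId>javax.servlet</groupId>
      <artifactId>javax.servlet-api</artifactId>
      <version>3.0.1</version>
    </provide>
  </provides>
</project>
```

When down-mapping to 4.0.0, a tool could then collapse this into plain dependencies plus the explicit, non-wildcard excludes described above.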


 How the artifact is produced, on the other hand, is the artifact's
 implementation detail. It is perfectly reasonable for a project to
 require a minimal version of Maven, for example. Or use a completely
 different format, not related to pom at all.


Exactly... The pom used to build can be written in JSON or whatever domain
specific DSL you want... We deploy a modelVersion 5.0.0 pom as XML because
it will be read by machines.

Now for the reason I think deploying a pom as xml may be a good thing...
XSLT

Suppose we specify an XSLT GAV that will down-map the pom to a modelVersion
5.0.0 pom... Now we can actually deploy a modelVersion 7.3.5 pom to the one
GAVCT and a modelVersion 5.0.0 client reads it, sees it is a modelVersion
that is not understood, sees the GAV of the XSLT, pulls it down and
transforms the model into the version it can parse

Will it be able to parse all the info in the original pom? Nope... It's an
older client... Older clients should not expect to grok all the subtleties
of newer poms... But it should get the general shape
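As a sketch of what such a down-mapping stylesheet might look like (purely hypothetical: real poms are namespaced, and the provides element it drops is an invented example of a newer-model feature):

```xml
<!-- hypothetical down-mapping stylesheet: copies the pom unchanged except
     for rewriting modelVersion and dropping elements unknown to the older
     model (the "provides" element here is invented for illustration) -->
<xsl:stylesheet version="1.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="modelVersion">
    <modelVersion>5.0.0</modelVersion>
  </xsl:template>
  <!-- drop elements the older model does not understand -->
  <xsl:template match="provides"/>
  <!-- identity copy for everything else -->
  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>
</xsl:stylesheet>
```

An older client would only need a generic XSLT processor plus whatever stylesheet GAV the newer pom names; it never needs to know the newer schema itself.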

In all of the above, <packaging>pom</packaging> is special... We just
deploy that as is in whatever format (JSON/DSL/XML/groovy/etc) as the
-build.pom

So 4.0.0 = .pom
5.0.0 onward (XSLT down versioning) = -dep.pom
And as a parent = -build.pom

Modern clients can ask for the -dep.pom first... And fall back to the .pom
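The fallback described above might be sketched like so. This is a hypothetical illustration, not Maven's actual resolver; the fetch(path) helper is invented and stands in for an HTTP GET that returns the file content or None on a 404:

```python
# Hypothetical sketch of the proposed client-side fallback: try the new
# descriptor name first, then the legacy one. The fetch(path) helper is
# invented for illustration; it returns file content or None when missing.
def resolve_pom(fetch, base):
    for suffix in ("-dep.pom", ".pom"):
        pom = fetch(base + suffix)
        if pom is not None:
            return suffix, pom
    raise LookupError("no pom found for " + base)
```

A legacy repository simply never has the -dep.pom, so modern clients degrade to the plain .pom automatically.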

It's not perfect, but it should not be the hell of the 3.0.0-to-4.0.0
transition, the fear of which has prevented forward progress since.


 By separating consumption and production metadata formats, we'll be
 able to evolve production format more aggressively. For example, it
 would be nice to have Tycho-specific configuration markup inside build
 section. This is not currently possible because all poms must be
 compatible 

Re: Model Version 5.0.0

2013-11-24 Thread Stephen Connolly
On Sunday, 24 November 2013, Manfred Moser wrote:


  By separating consumption and production metadata formats, we'll be
  able to evolve production format more aggressively. For example, it
  would be nice to have Tycho-specific configuration markup inside build
  section. This is not currently possible because all poms must be
  compatible with the same model.

 I like the idea of consumption specifics. It would be great if we could
 agree/define some sort of standard on how to declare suitability of
 artifacts for certain deployment scenarios ..
 e.g. is the jar suitable for Java 6, 7, 8, 9 or what, what about running on
 Android, or on some embedded Java version profile.

 I believe that the previous approach of using classifiers is just
 not powerful enough. And I also agree that we should potentially just
 stick to the existing format.

 E.g. nothing stops us from declaring a standard for e.g. a bunch of
 properties like

 <properties>
  <runtime.android>true</runtime.android>
  <runtime.java6>true</runtime.java6>
 </properties>

 or

 <properties>
  <runtime.android>false</runtime.android>
  <runtime.java6>false</runtime.java6>
  <runtime.java7>true</runtime.java7>
 </properties>


How is that any different from having a modelVersion 5.0.0? (Other than not
giving the benefit of a schema change)

We still have to get the horde of non-maven pom parsers to become aware of
these conventions and no xml schema to assist their correct
implementation... Plus we'd need to be sure we are not accidentally
introducing a keyword such as enum that loads of people have used as a
variable name for their Enumerators...


 Of course we should put more thought into this but declaring a standard
 sooner rather than later could help a lot with the oncoming wave of
 libraries that will not work for Java 6 anymore and others going forward
 with e.g. Java 8 only and so on.

 Manfred


 -
 To unsubscribe, e-mail: dev-unsubscr...@maven.apache.org
 For additional commands, e-mail: dev-h...@maven.apache.org



-- 
Sent from my phone


Re: release maven-verifier

2013-11-24 Thread Robert Scholte

Does it contain a fix for testing the log4j branch of Maven?

See for example  
https://builds.apache.org/job/core-integration-testing-maven-3-jdk-1.6-log4j2/457/console


Running integration tests for Maven 4J2)
	using Maven executable:  
/home/jenkins/jenkins-slave/workspace/core-integration-testing-maven-3-jdk-1.6-log4j2/apache-maven-3-SNAPSHOT/bin/mvn
Bootstrap(Bootstrap)SKIPPED -  
Maven version 4J2) not in range [2.0,)
mng5530MojoExecutionScope(_copyfiles)...SKIPPED -  
Maven version 4J2) not in range [3.1.2,)
mng5482AetherNotFound(PluginDependency).SKIPPED -  
Maven version 4J2) not in range [3.1-A,)
mng5482AetherNotFound(PluginSite)...SKIPPED -  
Maven version 4J2) not in range [3.1-A,)
mng5445LegacyStringSearchModelInterpolator(it)..SKIPPED -  
Maven version 4J2) not in range [3.1,)
mng5387ArtifactReplacementPlugin(ArtifactReplacementExecution)SKIPPED -  
Maven version 4J2) not in range [3.1,)
mng5382Jsr330Plugin(Jsr330PluginExecution)..SKIPPED -  
Maven version 4J2) not in range [3.1-alpha,)
mng5338FileOptionToDirectory(FileOptionToADirectory)SKIPPED -  
Maven version 4J2) not in range [3.1-A,)



Somehow the version is extracted as 4J2), which makes this job quite  
useless.


Robert


Op Sun, 24 Nov 2013 06:33:23 +0100 schreef Igor Fedorenko  
i...@ifedorenko.com:



Hello,

I'd like to release maven-verifier 1.5 and switch core ITs to the new
version some time next week. This is to pick up generics changes and
IDE integration hooks I just introduced. Any objections?

--
Regards,
Igor




RE: Model Version 5.0.0

2013-11-24 Thread Martin Gainty


Date: Sat, 23 Nov 2013 23:47:55 -0500
 From: i...@ifedorenko.com
 To: dev@maven.apache.org
 Subject: Re: Model Version 5.0.0
 
 
 
 On 11/23/2013, 23:08, Jason van Zyl wrote:
 
  On Nov 23, 2013, at 5:44 PM, Stephen Connolly
  stephen.alan.conno...@gmail.com wrote:
 
  Before I forget, here are some of my thoughts on moving towards
  Model Version 5.0.0
 
  The pom that we build with need not be the pom that gets
  deployed... thus the pom that is built with need not be the same
  format as the pom that gets deployed.
 
 
  Can you explain why you think this is useful? To me all the
 information that is carried with the POM after deployment is
  primarily for the consumption of tools, and there are a lot of tools
  that expect more than the dependency information. Removing all other
  elements in the POM is equivalent to being massively backward
  incompatible for an API. And if the subsequent consumption after
  deployment is primarily by programs, then why does it matter what
  gets deployed. I don't really see much benefit, but will create all
  sorts of technical problems where we need multiple readers and all
  that entails and the massive number of problems this will cause
  people who have created tooling, especially IDE integration. 

MGgood point!..which reader is default? and which version of reader to use?

MGthe permutations of every reader for both format types produce daunting 
numbers

MGIgor ..can I assume your fallback Model would be 4.0.0?
 
 The way I see it, what is deployed describes how the artifact needs to
 be consumed. This is artifact's public API, if you will, it will be
 consumed by wide range of tools that resolve dependencies from Maven
 repositories and descriptor format should be very stable. Most likely
 we have no choice but to use (a subset of) the current 4.0.0 model version.
MGIgor..so you agree with previous paragraph?


 How the artifact is produced, on the other hand, is artifact's
 implementation detail. It is perfectly reasonable for a project to
 require minimal version of Maven, for example. Or use completely
 different format, not related to pom at all.
MGHow would new format be described?
MGHow would new format be described in archetypes?

 By separating consumption and production metadata formats, we'll be
MGHow would op migrate from 'consumption metadata format' to 'production 
metadata format'?

MGSans namespace identification (pointing to XSDs) as suggested by Paul
MGHow would the plugin know which format to implement (consumption vs 
production?)


 able to evolve production format more aggressively. For example, it
 would be nice to have Tycho-specific configuration markup inside build
 section. This is not currently possible because all poms must be
 compatible with the same model.

MGTycho is the latest Eclipse but don't forget Europa, Ganymede, Helios, Indigo and 
Juno..once you are done refactoring Eclipse

http://wiki.eclipse.org/Older_Versions_Of_Eclipse
MGwhat about MyEclipse which is based on Helios?
http://www.myeclipseide.com/module-htmlpages-display-pid-342.html
MGOnce Eclipse (and MyEclipse) refactorings are completed what about the 
thousands of users who use Idea or Netbeans?
MGUnless every IDE and every IDE variant is accommodated you could be 
spending 40 hours a week for months
MGat a time to refactor plugin changes to every version of every IDE...are you 
volunteering to be that refactoring resource?
 --
 Regards,
 Igor

MGRegards, Martin
 
 
 
  Only with <packaging>pom</packaging> do we actually need things like the
  plugins section in the deployed pom, because it is a reality that for
  non-pom packaging we just want the transitive dependencies.
 
  Now there is the extensions issue where you might be registering a
  different file type that has different rules with respect to the
  classpath... but I am unsure if we actually consider those when evaluating
  the dependency tree... and in any case, once we accept that the deployed
  pom is not the same as the pom used to build (for non-pom packaging at
  least) we can transform that dependency tree using the exact rules that
  have to be known at build time thus closing the extensions issue.
 
  For projects with <packaging>pom</packaging> in fact we are only deploying
  small files so perhaps we can deploy two pom files... the one that exposes
  the standard dependency stuff and then a second one that is used for build
  inheritance.
 
  My vision is thus that we deploy between 2 and three pom files for every
  project.
 
  For jar/war/ear/... we deploy:
  * a modelVersion 4.0.0 pom as .pom (only lists dependencies)
  * a modelVersion 5.0.0 pom as -v5.pom (only lists dependencies but allows
  for new scopes)
 
  For pom we deploy
  * a modelVersion 4.0.0 pom as .pom (only lists dependencies)
  * a modelVersion 5.0.0 pom as -v5.pom (only lists dependencies but allows
  for new scopes)
  * the pom itself
 
  When building a pom, your parent pom must be of a modelVersion >= your
  

Re: Model Version 5.0.0

2013-11-24 Thread Igor Fedorenko

I think we are saying the same thing -- we evolve project model used
during the build but deploy both the new and backwards compatible models.

One quick note on representing dependencies as provided/required
capabilities. Although I like this idea in general, I believe it will
require a completely new repository layout to be able to efficiently
find capability providers. A single repository-wide metadata index file,
the approach implemented in P2 for example, won't scale for repositories
of the size of Central, so most likely the new repository layout will
require an active server-side component to assist dependency resolution.

--
Regards,
Igor

On 11/24/2013, 4:25, Stephen Connolly wrote:

On Sunday, 24 November 2013, Igor Fedorenko wrote:




On 11/23/2013, 23:08, Jason van Zyl wrote:



On Nov 23, 2013, at 5:44 PM, Stephen Connolly
stephen.alan.conno...@gmail.com wrote:

  Before I forget, here are some of my thoughts on moving towards

Model Version 5.0.0

The pom that we build with need not be the pom that gets
deployed... thus the pom that is built with need not be the same
format as the pom that gets deployed.



Can you explain why you think this is useful? To me all the
information that is carried with the POM after deployment is
primarily for the consumption of tools, and there are a lot of tools
that expect more than the dependency information. Removing all other
elements in the POM is equivalent to being massively backward
incompatible for an API. And if the subsequent consumption after
deployment is primarily by programs, then why does it matter what
gets deployed. I don't really see much benefit, but will create all
sorts of technical problems where we need multiple readers and all
that entails and the massive number of problems this will cause
people who have created tooling, especially IDE integration. 



The way I see it, what is deployed describes how the artifact needs to
be consumed. This is artifact's public API, if you will, it will be
consumed by wide range of tools that resolve dependencies from Maven
repositories and descriptor format should be very stable. Most likely
we have no choice but to use (a subset of) the current 4.0.0 model version.



I would be very sad if we are limited to a subset.

There are some critical concepts that in my view are missing from pom files.

Number one on my hit list is a provides concept.

Where you declare that an artifact *provides* the same api as another GAV.

Technically you'd need to be able to specify this both at the root of a pom
and also flag specific dependencies (because the api they provide was not
specified when that pom was deployed)

Thus the Geronimo specs poms could all provide the corresponding JavaEE
specs and excludes issues or other hacks would no longer be required.

Look at the issues you will have if you use the excludes wildcards in your
pom... Namely *anyone* who uses your artifact as a dependency will need to
be using Maven 3 or newer... does ivy read those wildcards correctly? Does
sbt? Does Buildr?

They are a tempting siren... And from another PoV they will force others to
follow... *but* if we are forcing them to follow should we not pick a nicer
format to follow... Not one consisting of many layers of hacks?

The modelVersion 4.0.0 pom is deployed to the repo (in my scheme) so that
legacy clients can still make some sense... If a modelVersion 5.0.0 feature
cannot be mapped down to 4.0.0... Well we try our best and that's what you
get... We should make sure that people stuck with older clients can read
something semi-sensible and then layer their hacks as normal to get the
behaviour they need.

Thus provides (which is not a scope but a GAV) can be modelled by having
the modelVersion 4.0.0 pom list the collapsed dependencies with the
appropriate excludes added (without wildcards)

Other concepts cannot be mapped, so they get dropped.



How the artifact is produced, on the other hand, is artifact's
implementation detail. It is perfectly reasonable for a project to
require minimal version of Maven, for example. Or use completely
different format, not related to pom at all.



Exactly... The pom used to build can be written in JSON or whatever domain
specific DSL you want... We deploy a modelVersion 5.0.0 pom as XML because
it will be read by machines.

Now for the reason I think deploying a pom as xml may be a good thing...
XSLT

Suppose we specify an XSLT GAV that will down-map the pom to a modelVersion
5.0.0 pom... Now we can actually deploy a modelVersion 7.3.5 pom to the one
GAVCT and a modelVersion 5.0.0 client reads it, sees it is a modelVersion
that is not understood, sees the GAV of the XSLT, pulls it down and
transforms the model into the version it can parse

Will it be able to parse all the info in the original pom? Nope... It's an
older client... Older clients should not expect to grok all the subtleties
of newer poms... But it should get the general shape

In all of the above, <packaging>pom</packaging> is 

Re: release maven-verifier

2013-11-24 Thread Igor Fedorenko

I haven't looked at log4j2 branch, but master passes all ITs with the
latest verifier 1.5-SNAPSHOT.

--
Regards,
Igor

On 11/24/2013, 5:24, Robert Scholte wrote:

Does it contain a fix for testing the log4j branch of Maven?

See for example
https://builds.apache.org/job/core-integration-testing-maven-3-jdk-1.6-log4j2/457/console


Running integration tests for Maven 4J2)
 using Maven executable:
/home/jenkins/jenkins-slave/workspace/core-integration-testing-maven-3-jdk-1.6-log4j2/apache-maven-3-SNAPSHOT/bin/mvn

Bootstrap(Bootstrap)SKIPPED -
Maven version 4J2) not in range [2.0,)
mng5530MojoExecutionScope(_copyfiles)...SKIPPED -
Maven version 4J2) not in range [3.1.2,)
mng5482AetherNotFound(PluginDependency).SKIPPED -
Maven version 4J2) not in range [3.1-A,)
mng5482AetherNotFound(PluginSite)...SKIPPED -
Maven version 4J2) not in range [3.1-A,)
mng5445LegacyStringSearchModelInterpolator(it)..SKIPPED -
Maven version 4J2) not in range [3.1,)
mng5387ArtifactReplacementPlugin(ArtifactReplacementExecution)SKIPPED -
Maven version 4J2) not in range [3.1,)
mng5382Jsr330Plugin(Jsr330PluginExecution)..SKIPPED -
Maven version 4J2) not in range [3.1-alpha,)
mng5338FileOptionToDirectory(FileOptionToADirectory)SKIPPED -
Maven version 4J2) not in range [3.1-A,)


Somehow the version is extracted as 4J2), which makes this job quite
useless.

Robert


Op Sun, 24 Nov 2013 06:33:23 +0100 schreef Igor Fedorenko
i...@ifedorenko.com:


Hello,

I'd like to release maven-verifier 1.5 and switch core ITs to the new
version some time next week. This is to pick up generics changes and
IDE integration hooks I just introduced. Any objections?

--
Regards,
Igor




Re: Model Version 5.0.0

2013-11-24 Thread Jason van Zyl

On Nov 23, 2013, at 11:47 PM, Igor Fedorenko i...@ifedorenko.com wrote:

 
 
 On 11/23/2013, 23:08, Jason van Zyl wrote:
 
 On Nov 23, 2013, at 5:44 PM, Stephen Connolly
 stephen.alan.conno...@gmail.com wrote:
 
 Before I forget, here are some of my thoughts on moving towards
 Model Version 5.0.0
 
 The pom that we build with need not be the pom that gets
 deployed... thus the pom that is built with need not be the same
 format as the pom that gets deployed.
 
 
 Can you explain why you think this is useful? To me all the
 information that is carried with the POM after deployment is
 primarily for the consumption of tools, and there are a lot of tools
 that expect more than the dependency information. Removing all other
 elements in the POM is equivalent to being massively backward
 incompatible for an API. And if the subsequent consumption after
 deployment is primarily by programs, then why does it matter what
 gets deployed. I don't really see much benefit, but will create all
 sorts of technical problems where we need multiple readers and all
 that entails and the massive number of problems this will cause
 people who have created tooling, especially IDE integration. 
 
 The way I see it, what is deployed describes how the artifact needs to
 be consumed. This is artifact's public API, if you will, it will be
 consumed by wide range of tools that resolve dependencies from Maven
 repositories and descriptor format should be very stable. Most likely
 we have no choice but to use (a subset of) the current 4.0.0 model version.
 
 How the artifact is produced, on the other hand, is artifact's
 implementation detail. It is perfectly reasonable for a project to
 require minimal version of Maven, for example. Or use completely
 different format, not related to pom at all.
 
 By separating consumption and production metadata formats, we'll be
 able to evolve production format more aggressively. For example, it
 would be nice to have Tycho-specific configuration markup inside build
 section. This is not currently possible because all poms must be
 compatible with the same model.
 

I think this sounds nice in theory but losing the information about how an 
artifact is produced is not a good idea. I also don't think having a bunch of 
different tools to read one format or another is manageable. I think that 
making readers that are more accepting of different versions and accommodating 
different elements is another approach. Keeping it all together forces you to 
think about the implications of a change.

I think general extensibility of the format might be useful but in a general 
reader. Right now specific tools can work around this issue by having a plugin 
define specifics for a type. While not ideal it works but is more akin to a 
general extension mechanism that works with a single type of accommodating 
reader.
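One way to picture the "accommodating reader" Jason describes is a dispatcher that inspects the modelVersion before committing to a parser. A hypothetical sketch, not Maven's actual model reader; all names are invented:

```python
# Hypothetical sketch of a version-tolerant "accommodating reader":
# look at modelVersion first, then dispatch to whichever parser claims
# that version. Not Maven's actual reader; names are invented.
import xml.etree.ElementTree as ET

def read_model(xml_text, parsers):
    root = ET.fromstring(xml_text)
    # tolerate both namespaced and plain poms when locating modelVersion
    version_el = next(
        (e for e in root.iter() if e.tag.endswith("modelVersion")), None)
    version = version_el.text.strip() if version_el is not None else "4.0.0"
    parser = parsers.get(version)
    if parser is None:
        raise ValueError("no reader for modelVersion " + version)
    return parser(root)
```

A single entry point like this keeps the per-version parsing logic together, which is the "think about the implications of a change" property Jason argues for.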

I think splitting building vs consumption will open a huge can of worms. Now 
I'm all for being able to aggressively change the format, but I would rather 
have a single document per version of the model. Possibly think about a 
future-proof version and just continue to publish a model version 4.0.0 
alongside it indefinitely. I'm not sure how build vs consumption actually helps us 
evolve the model.

 --
 Regards,
 Igor
 
 
 
 Only with <packaging>pom</packaging> do we actually need things like the
 plugins section in the deployed pom, because it is a reality that for
 non-pom packaging we just want the transitive dependencies.
 
 Now there is the extensions issue where you might be registering a
 different file type that has different rules with respect to the
 classpath... but I am unsure if we actually consider those when evaluating
 the dependency tree... and in any case, once we accept that the deployed
 pom is not the same as the pom used to build (for non-pom packaging at
 least) we can transform that dependency tree using the exact rules that
 have to be known at build time thus closing the extensions issue.
 
 For projects with <packaging>pom</packaging> in fact we are only deploying
 small files so perhaps we can deploy two pom files... the one that exposes
 the standard dependency stuff and then a second one that is used for build
 inheritance.
 
 My vision is thus that we deploy between 2 and three pom files for every
 project.
 
 For jar/war/ear/... we deploy:
 * a modelVersion 4.0.0 pom as .pom (only lists dependencies)
 * a modelVersion 5.0.0 pom as -v5.pom (only lists dependencies but allows
 for new scopes)
 
 For pom we deploy
 * a modelVersion 4.0.0 pom as .pom (only lists dependencies)
 * a modelVersion 5.0.0 pom as -v5.pom (only lists dependencies but allows
 for new scopes)
 * the pom itself
 
 When building a pom, your parent pom must be of a modelVersion >= your
 modelVersion.
 
 Thanks,
 
 Jason
 
 --
 Jason van Zyl
 Founder,  Apache Maven
 http://twitter.com/jvanzyl
 

Re: Model Version 5.0.0

2013-11-24 Thread Jason van Zyl

On Nov 24, 2013, at 12:19 AM, Manfred Moser manf...@mosabuam.com wrote:

 
 By separating consumption and production metadata formats, we'll be
 able to evolve production format more aggressively. For example, it
 would be nice to have Tycho-specific configuration markup inside build
 section. This is not currently possible because all poms must be
 compatible with the same model.
 
 I like the idea of consumption specifics. It would be great if we could
 agree/define some sort of standard on how to declare suitability of
 artifacts for certain deployment scenarios ..

I don't believe this requires separate documents to support this.

 e.g. is the jar suitable for Java 6, 7, 8, 9 or what, what about running on
 Android, or on some embedded Java version profile.
 
 I believe that the previous approach of using classifiers is just
 not powerful enough. And I also agree that we should potentially just
 stick to the existing format.
 
 E.g. nothing stops us from declaring a standard for e.g. a bunch of
 properties like

 <properties>
 <runtime.android>true</runtime.android>
 <runtime.java6>true</runtime.java6>
 </properties>

 or

 <properties>
 <runtime.android>false</runtime.android>
 <runtime.java6>false</runtime.java6>
 <runtime.java7>true</runtime.java7>
 </properties>
 
 Of course we should put more thought into this but declaring a standard
 sooner rather than later could help a lot with the oncoming wave of
 libraries that will not work for Java 6 anymore and others going forward
 with e.g. Java 8 only and so on.
 
 Manfred
 
 
 

Thanks,

Jason

--
Jason van Zyl
Founder,  Apache Maven
http://twitter.com/jvanzyl
-









Re: Model Version 5.0.0

2013-11-24 Thread Benson Margulies
It seems to me that this thread is mixing two topics.

Topic #1: How do we move to pom 5.0, given a giant ecosystem of crappy
XML-parsing POM consumers?

Topic #2: To what extent does the pom mix a 'description of contract'
(dependencies, etc) with a 'specification of build'?

On the first topic, there was a wiki page months ago that explored a
scheme for writing both a v4 pom and a v5 pom when deploying from a v5
project, so that old tools could see and consume what they understand.
To the extent that this scheme made sense, it can be adopted without
(necessarily) touching the second.

On the second topic, I'm in agreement that there should be a clear
separation between describing a contract and other things. For
example, why is it a good idea for deployed poms to reference parents,
rather than being self-contained? Why is it a good idea for deployed
poms to include profiles? Why is it a good thing for deployed poms to
include parameter references, thereby in some cases accidentally
changing their semantics due to collisions with the consuming
application's pom? The full 'here's how to build' pom, in my view, is
part of the source, and should be deployed with the source, and any
tool that can usefully analyze the details (plugins, pluginManagement,
etc) is welcome to do so. Having written this, it also seems to me
that one compromise could be that v5 deployed poms could be
self-contained and complete, but organized so as to be clear about the two
categories of contents.
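For what it's worth, a consumption-only deployed pom in that spirit might look like the following sketch. The coordinates are made up, and it deliberately restricts itself to existing model 4.0.0 elements, with the parent resolved away and no profiles or build section:

```xml
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <!-- fully resolved: no <parent>, no ${property} references -->
  <groupId>com.example</groupId>
  <artifactId>widget</artifactId>
  <version>1.0.0</version>
  <name>Widget</name>
  <description>Consumption-only descriptor: the contract, not the build.</description>
  <scm>
    <url>https://example.org/scm/widget</url>
  </scm>
  <dependencies>
    <!-- only what consumers need for resolution -->
    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-api</artifactId>
      <version>1.7.5</version>
    </dependency>
  </dependencies>
  <!-- no <build>, <profiles> or <distributionManagement>: those stay with the source -->
</project>
```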



On Sun, Nov 24, 2013 at 9:29 AM, Igor Fedorenko i...@ifedorenko.com wrote:
 I think we are saying the same thing -- we evolve project model used
 during the build but deploy both the new and backwards compatible models.

 One quick note on representing dependencies as provided/required
 capabilities. Although I like this idea in general, I believe it will
 require a completely new repository layout to be able to efficiently
 find capability providers. A single repository-wide metadata index file
 (the approach implemented in P2, for example) won't scale for repositories
 of the size of Central, so most likely the new repository layout will
 require an active server-side component to assist dependency resolution.

 --
 Regards,
 Igor


 On 11/24/2013, 4:25, Stephen Connolly wrote:

 On Sunday, 24 November 2013, Igor Fedorenko wrote:



 On 11/23/2013, 23:08, Jason van Zyl wrote:


 On Nov 23, 2013, at 5:44 PM, Stephen Connolly
 stephen.alan.conno...@gmail.com wrote:

   Before I forget, here are some of my thoughts on moving towards

 Model Version 5.0.0

 The pom that we build with need not be the pom that gets
 deployed... thus the pom that is built with need not be the same
 format as the pom that gets deployed.


 Can you explain why you think this is useful? To me all the
 information that is carried with the POM after deployment is
 primarily for the consumption of tools, and there are a lot of tools
 that expect more than the dependency information. Removing all other
 elements in the POM is equivalent to being massively backward
 incompatible for an API. And if the subsequent consumption after
 deployment is primarily by programs, then why does it matter what
 gets deployed. I don't really see much benefit, but will create all
 sorts of technical problems where we need multiple readers and all
 that entails and the massive number of problems this will cause
 people who have created tooling, especially IDE integration. 


 The way I see it, what is deployed describes how the artifact needs to
 be consumed. This is the artifact's public API, if you will; it will be
 consumed by a wide range of tools that resolve dependencies from Maven
 repositories, and the descriptor format should be very stable. Most likely
 we have no choice but to use (a subset of) the current 4.0.0 model version.



 I would be very sad if we are limited to a subset.

 There are some critical concepts that in my view are missing from pom
 files.

 Number one on my hit list is a provides concept.

 Where you declare that an artifact *provides* the same api as another GAV.

 Technically you'd need to be able to specify this both at the root of a
 pom
 and also flag specific dependencies (because the api they provide was not
 specified when that pom was deployed)

 Thus the Geronimo specs poms could all *provide* the corresponding JavaEE
 specs, and the excludes hacks would no longer be required.

 Look at the issues you will have if you use the excludes wildcards in your
 pom... Namely *anyone* who uses your artifact as a dependency will need to
 be using Maven 3 or newer... does ivy read those wildcards correctly? Does
 sbt? Does Buildr?

 They are a tempting siren... And from another PoV they will force others
 to
 follow... *but* if we are forcing them to follow should we not pick a
 nicer
 format to follow... Not one consisting of many layers of hacks?

 The modelVersion 4.0.0 pom is deployed to the repo (in my scheme) so that
 legacy clients can still make some sense... If 

Re: Model Version 5.0.0

2013-11-24 Thread Stephen Connolly
On Sunday, 24 November 2013, Igor Fedorenko wrote:

 I think we are saying the same thing -- we evolve project model used
 during the build but deploy both the new and backwards compatible models.

 One quick note on representing dependencies as provided/required
 capabilities.


I think it needs to be deterministic, which means it should not need an
active server-side assist.

A pom could declare

<provides>
  <provide gav="javax:servlet-api:3.0"/>
</provides>

That means if you declare *that* pom as a dependency of yours it will (by
being nearer in the graph) replace any servlet-api dependencies in your
graph.

You can also do similar with dependencies, eg

<dependency gav="org.slf4j:log4j-over-slf4j:1.7">
  <provides>
    <provide gav="log4j:log4j:1.2"/>
  </provides>
</dependency>

Either form is deterministic and does not introduce dynamic resolution into
the model... But they make the things people want to do a lot easier IMHO

Although I like this idea in general, I believe it will
 require completely new repository layout to be able to efficiently
 find capability providers. Single repository-wide metadata index file,
 the approach implemented in P2 for example, won't scale for repositories
 of the size of Central, so most likely the new repository layout will
 require active server-side component to assist dependency resolution.

 --
 Regards,
 Igor

 On 11/24/2013, 4:25, Stephen Connolly wrote:

 On Sunday, 24 November 2013, Igor Fedorenko wrote:



 On 11/23/2013, 23:08, Jason van Zyl wrote:


 On Nov 23, 2013, at 5:44 PM, Stephen Connolly
 stephen.alan.conno...@gmail.com wrote:

   Before I forget, here are some of my thoughts on moving towards

 Model Version 5.0.0

 The pom that we build with need not be the pom that gets
 deployed... thus the pom that is built with need not be the same
 format as the pom that gets deployed.


  Can you explain why you think this is useful? To me all the
 information that is carried with the POM after deployment is
 primarily for the consumption of tools, and there are a lot of tools
 that expect more than the dependency information. Removing all other
 elements in the POM is equivalent to being massively backward
 incompatible for an API. And if the subsequent consumption after
 deployment is primarily by programs, then why does it matter what
 gets deployed. I don't really see much benefit, but will create all
 sorts of technical problems where we need multiple readers and all
 that entails and the massive number of problems this will cause
 people who have created tooling, especially IDE integration. 


 The way I see it, what is deployed describes how the artifact needs to
 be consumed. This is artifact's public API, if you will, it will be
 consumed by wide range of tools that resolve dependencies from Maven
 repositories and descriptor format should be very stable. Mostly likely
 we have no choice but use (a subset of) the current 4.0.0 model version.



 I would be very sad if we are limited to a subset.

 There are some critical concepts that in my view are missing from pom
 files.

 Number one on my hit list is a provides concept.

 Where you declare that an artifact *provides* the same api as another GAV.

 Technically you'd need to be able to specify this both at the root of a pom
 and also flag specific dependencies (because the api they provide was not
 specified when that pom was deployed)

 Thus the Geronimo specs poms could all provides the corresponding JavaEE
 specs and excludes issues or other hacks would no longer be required.

 Look at the issues you will have if you use the excludes wildcards in your
 pom... Namely *anyone* who uses your artifact as a dependency will need to
 be using Maven 3 or newer... does ivy read those wildcards correctly? Does
 sbt? Does Buildr?

 They are a tempting siren... And from another PoV they will force others to
 follow... *but* if we are forcing them to follow should we not pick a nicer
 format to follow... Not one consisting of many layers of hacks?

 The modelVersion 4.0.0 pom is deployed to the repo (in my scheme) so that
 legacy clients can still make some sense... If a modelVersion 5.0.0 feature
 cannot be mapped down to 4.0.0... Well we try our best and that's what you
 get... We should make sure that people stuck with older clients can read
 something semi-sensible and then layer their hacks as normal to get the
 behaviour they need.

 Thus provides (which is not a scope but a GAV) can be modelled by having
 the modelVersion 4.0.0 pom list the collapsed dependencies with the
 appropriate excludes added (without wildcards)

 Other concepts cannot be mapped, so they get dropped.


  How the artifact is produced, on the other hand, is artifact's
 implementation detail. It is perfectly reasonable for a project to
 require minimal version of Maven, for example. Or use completely
 different format, not related to pom at all.



 Exactly... The pom used to build can be written in JSON or whatever domain
 specific DSL you 

Re: Model Version 5.0.0

2013-11-24 Thread Jason van Zyl

On Nov 24, 2013, at 3:59 AM, Stephen Connolly stephen.alan.conno...@gmail.com 
wrote:

 On Sunday, 24 November 2013, Jason van Zyl wrote:
 
 
 On Nov 23, 2013, at 5:44 PM, Stephen Connolly 
 stephen.alan.conno...@gmail.com javascript:; wrote:
 
 Before I forget, here are some of my thoughts on moving towards Model
 Version 5.0.0
 
   The pom that we build with need not be the pom that gets deployed...
   thus the pom that is built with need not be the same format as the pom
   that gets deployed.
 
 
 Can you explain why you think this is useful? To me all the information
 that is carried with the POM after deployment is primarily for the
 consumption of tools, and there are a lot of tools that expect more than
 the dependency information. Removing all other elements in the POM is
 equivalent to being massively backward incompatible for an API. And if the
 subsequent consumption after deployment is primarily by programs, then why
 does it matter what gets deployed. I don't really see much benefit, but
 will create all sorts of technical problems where we need multiple readers
 and all that entails and the massive number of problems this will cause
 people who have created tooling, especially IDE integration.
 
 
 I am not saying that we remove *all* other elements. I am saying that we
 don't really need as many of them.
 
 There are a lot of elements that have questionable utility...

That may be, but they are there, and you have no idea what they are being used 
for. Again, having a new, future-proof, extensible format is great, but we'll 
continue to publish model 4.0.0 poms. I think that's fine; I'm still not 
convinced that producing multiple documents is useful.

 
 How often are the developers and contributors tags correct?
 

We honestly have no idea.

 Do we really need to know the distributionManagement?
 

For provenance, possibly.

 On the other hand there are some tags that have utility: SCM, URL, name,
 description, dependencies (to name a few off the top of my head)
 
 I am not saying that the above are a complete list. I am saying that this
 gives us an opportunity to look at this and see what we really want in the
 pom.
 

Honestly, I think it would be better to start with concrete desires for 
additions, like what Manfred described or the Tycho-specific things Igor 
suggested, and just figure out what an ideal format might look like with the 
two things that I think are most important: extensibility and future-proofing. 
I don't think starting with the tech required for an ill-defined set of actual 
use cases is a good idea.

Most things that need to be changed can be transformed into a model 4.0.0 POM. 
For things that can't, we can still use the technique of having a pseudo plugin 
that contains configuration that can change the way the core behaves. A global 
artifact swap, for example: flip commons-logging to SLF4J. A contrived example, 
but this can be specified in a pseudo plugin configuration. A participant can 
change the model before execution, and it provides a way for people to try 
different things without changing the core directly or the pom model. I think 
it's a good way to try features, and you just need to be using Maven 3. If we 
find that we like particular features, the configuration in the pseudo plugin 
can be graduated to the official model and we can move code from participants 
into the core.

Anyone can try this technique today for a feature they want in Maven.
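To make the technique concrete, here is a hypothetical sketch of such a pseudo plugin declaration. The plugin coordinates and configuration element names are invented for illustration; the idea is that a build participant reads this configuration and rewrites the model before execution:

```xml
<build>
  <plugins>
    <!-- hypothetical pseudo plugin: it carries configuration only;
         a lifecycle participant reads it and mutates the model -->
    <plugin>
      <groupId>org.apache.maven.experiments</groupId>
      <artifactId>model-extras-maven-plugin</artifactId>
      <version>0.1.0</version>
      <configuration>
        <!-- global artifact swap: commons-logging to SLF4J -->
        <replacements>
          <replacement>
            <from>commons-logging:commons-logging</from>
            <to>org.slf4j:jcl-over-slf4j:1.7.5</to>
          </replacement>
        </replacements>
      </configuration>
    </plugin>
  </plugins>
</build>
```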

 
 Only with <packaging>pom</packaging> do we actually need things like the
 plugins section in the deployed pom, because it is a reality that for
 non-pom packaging we just want the transitive dependencies.
 
 Now there is the extensions issue where you might be registering a
 different file type that has different rules with respect to the
 classpath... but I am unsure if we actually consider those when
 evaluating
 the dependency tree... and in any case, once we accept that the deployed
 pom is not the same as the pom used to build (for non-pom packaging at
 least) we can transform that dependency tree using the exact rules that
 have to be known at build time thus closing the extensions issue.
 
 For projects with <packaging>pom</packaging> we are in fact only deploying
 small files, so perhaps we can deploy two pom files: the one that exposes
 the standard dependency stuff and then a second one that is used for build
 inheritance.
 
 My vision is thus that we deploy between two and three pom files for every
 project.
 
 For jar/war/ear/... we deploy:
 * a modelVersion 4.0.0 pom as .pom (only lists dependencies)
 * a modelVersion 5.0.0 pom as -v5.pom (only lists dependencies but allows
 for new scopes)
 
 For pom we deploy
 * a modelVersion 4.0.0 pom as .pom (only lists dependencies)
 * a modelVersion 5.0.0 pom as -v5.pom (only lists dependencies but allows
 for new scopes)
 * the pom itself
 
 When building a pom, your parent pom must be of a modelVersion <= your
 modelVersion.
 
 Thanks,
 
 Jason
 
 

Re: Model Version 5.0.0

2013-11-24 Thread Igor Fedorenko

How do you find all artifacts that provide gav="javax:servlet-api:3.0"?
One option is to download the entire repository index to the client, but the
Central index will likely be hundreds of megabytes, which makes this
approach impractical. The only other option is to keep the index on the
server and have a server-side helper answer index queries.

--
Regards,
Igor

On 11/24/2013, 10:38, Stephen Connolly wrote:

On Sunday, 24 November 2013, Igor Fedorenko wrote:


I think we are saying the same thing -- we evolve project model used
during the build but deploy both the new and backwards compatible models.

One quick note on representing dependencies as provided/required
capabilities.



I think it needs to be deterministic, which means it should not need an
active server-side assist.

A pom could declare

<provides>
   <provide gav="javax:servlet-api:3.0"/>
</provides>

That means if you declare *that* pom as a dependency of yours it will (by
being nearer in the graph) replace any servlet-api dependencies in your
graph.

You can also do similar with dependencies, eg

<dependency gav="org.slf4j:log4j-over-slf4j:1.7">
   <provides>
     <provide gav="log4j:log4j:1.2"/>
   </provides>
</dependency>

Either form is deterministic and does not introduce dynamic resolution into
the model... But they make the things people want to do a lot easier IMHO

Although I like this idea in general, I believe it will

require completely new repository layout to be able to efficiently
find capability providers. Single repository-wide metadata index file,
the approach implemented in P2 for example, won't scale for repositories
of the size of Central, so most likely the new repository layout will
require active server-side component to assist dependency resolution.

--
Regards,
Igor

On 11/24/2013, 4:25, Stephen Connolly wrote:

On Sunday, 24 November 2013, Igor Fedorenko wrote:



On 11/23/2013, 23:08, Jason van Zyl wrote:


On Nov 23, 2013, at 5:44 PM, Stephen Connolly
stephen.alan.conno...@gmail.com wrote:

   Before I forget, here are some of my thoughts on moving towards

Model Version 5.0.0

The pom that we build with need not be the pom that gets
deployed... thus the pom that is built with need not be the same
format as the pom that gets deployed.


  Can you explain why you think this is useful? To me all the
information that is carried with the POM after deployment is
primarily for the consumption of tools, and there are a lot of tools
that expect more than the dependency information. Removing all other
elements in the POM is equivalent to being massively backward
incompatible for an API. And if the subsequent consumption after
deployment is primarily by programs, then why does it matter what
gets deployed. I don't really see much benefit, but will create all
sorts of technical problems where we need multiple readers and all
that entails and the massive number of problems this will cause
people who have created tooling, especially IDE integration. 


The way I see it, what is deployed describes how the artifact needs to
be consumed. This is artifact's public API, if you will, it will be
consumed by wide range of tools that resolve dependencies from Maven
repositories and descriptor format should be very stable. Mostly likely
we have no choice but use (a subset of) the current 4.0.0 model version.



I would be very sad if we are limited to a subset.

There are some critical concepts that in my view are missing from pom
files.

Number one on my hit list is a provides concept.

Where you declare that an artifact *provides* the same api as another GAV.

Technically you'd need to be able to specify this both at the root of a pom
and also flag specific dependencies (because the api they provide was not
specified when that pom was deployed)

Thus the Geronimo specs poms could all provides the corresponding JavaEE
specs and excludes issues or other hacks would no longer be required.

Look at the issues you will have if you use the excludes wildcards in your
pom... Namely *anyone* who uses your artifact as a dependency will need to
be using Maven 3 or newer... does ivy read those wildcards correctly? Does
sbt? Does Buildr?

They are a tempting siren... And from another PoV they will force others to
follow... *but* if we are forcing them to follow should we not pick a nicer
format to follow... Not one consisting of many layers of hacks?

The modelVersion 4.0.0 pom is deployed to the repo (in my scheme) so that
legacy clients can still make some sense... If a modelVersion 5.0.0 feature
cannot be mapped down to 4.0.0... Well we try our best and that's what you
get... We should make it sure that people stuck with older clients can read
something semi-sensible and then layer their hacks as normal to get the
behaviour they need.

Thus provides (which is not a scope but a GAV) can be modelled by having
the modelVersion 4.0.0 pom list the collapsed dependencies with the
appropriate excludes added (without wildcards)

Other concepts cannot be mapped, so they get 

Re: Model Version 5.0.0

2013-11-24 Thread Benson Margulies
I have one more remark to contribute to this.

In my view, the first step should be to make a 4.0-beta version of
Maven that has a '5.0.0' pom that is _identical_ to the 4.0.0 pom. The
difference is that we will document, after the fashion of HTML5, our
intent to change it over time. We can then adopt any ideas for a
better POM as small increments. Maybe we don't call it 4.0 (no beta)
until we have incremented to the point of serious new value.


On Sun, Nov 24, 2013 at 10:48 AM, Igor Fedorenko i...@ifedorenko.com wrote:
 How do you find all artifacts that provide gav=javax:servlet-api:3.0?
 One option is to download entire repository index to the client, but
 Central index will likely be in 100x of megabytes, which makes this
 approach impractical. The only other option is to keep the index on the
 server and have server-side helper to answer index queries.

 --
 Regards,
 Igor


 On 11/24/2013, 10:38, Stephen Connolly wrote:

 On Sunday, 24 November 2013, Igor Fedorenko wrote:

 I think we are saying the same thing -- we evolve project model used
 during the build but deploy both the new and backwards compatible models.

 One quick note on representing dependencies as provided/required
 capabilities.



 I think it needs to be deterministic, which means it should not need an
 active server-side assist.

 A pom could declare

 <provides>
   <provide gav="javax:servlet-api:3.0"/>
 </provides>

 That means if you declare *that* pom as a dependency of yours it will (by
 being nearer in the graph) replace any servlet-api dependencies in your
 graph.

 You can also do similar with dependencies, eg

 <dependency gav="org.slf4j:log4j-over-slf4j:1.7">
   <provides>
     <provide gav="log4j:log4j:1.2"/>
   </provides>
 </dependency>

 Either form is deterministic and does not introduce dynamic resolution
 into
 the model... But they make the things people want to do a lot easier IMHO

 Although I like this idea in general, I believe it will

 require completely new repository layout to be able to efficiently
 find capability providers. Single repository-wide metadata index file,
 the approach implemented in P2 for example, won't scale for repositories
 of the size of Central, so most likely the new repository layout will
 require active server-side component to assist dependency resolution.

 --
 Regards,
 Igor

 On 11/24/2013, 4:25, Stephen Connolly wrote:

 On Sunday, 24 November 2013, Igor Fedorenko wrote:



 On 11/23/2013, 23:08, Jason van Zyl wrote:


 On Nov 23, 2013, at 5:44 PM, Stephen Connolly
 stephen.alan.conno...@gmail.com wrote:

Before I forget, here are some of my thoughts on moving towards

 Model Version 5.0.0

 The pom that we build with need not be the pom that gets
 deployed... thus the pom that is built with need not be the same
 format as the pom that gets deployed.


   Can you explain why you think this is useful? To me all the
 information that is carried with the POM after deployment is
 primarily for the consumption of tools, and there are a lot of tools
 that expect more than the dependency information. Removing all other
 elements in the POM is equivalent to being massively backward
 incompatible for an API. And if the subsequent consumption after
 deployment is primarily by programs, then why does it matter what
 gets deployed. I don't really see much benefit, but will create all
 sorts of technical problems where we need multiple readers and all
 that entails and the massive number of problems this will cause
 people who have created tooling, especially IDE integration. 


 The way I see it, what is deployed describes how the artifact needs to
 be consumed. This is artifact's public API, if you will, it will be
 consumed by wide range of tools that resolve dependencies from Maven
 repositories and descriptor format should be very stable. Mostly likely
 we have no choice but use (a subset of) the current 4.0.0 model version.



 I would be very sad if we are limited to a subset.

 There are some critical concepts that in my view are missing from pom
 files.

 Number one on my hit list is a provides concept.

 Where you declare that an artifact *provides* the same api as another
 GAV.

 Technically you'd need to be able to specify this both at the root of a
 pom
 and also flag specific dependencies (because the api they provide was not
 specified when that pom was deployed)

 Thus the Geronimo specs poms could all provides the corresponding
 JavaEE
 specs and excludes issues or other hacks would no longer be required.

 Look at the issues you will have if you use the excludes wildcards in
 your
 pom... Namely *anyone* who uses your artifact as a dependency will need
 to
 be using Maven 3 or newer... does ivy read those wildcards correctly?
 Does
 sbt? Does Buildr?

 They are a tempting siren... And from another PoV they will force others
 to
 follow... *but* if we are forcing them to follow should we not pick a
 nicer
 format to follow... Not one consisting of many layers of hacks?

 The 

Re: Model Version 5.0.0

2013-11-24 Thread Stephen Connolly
On Sunday, 24 November 2013, Igor Fedorenko wrote:

 How do you find all artifacts that provide gav=javax:servlet-api:3.0?


You don't need to.

You just need to treat it as a global excludes on javax:servlet-api

The difference is that it also excludes any other poms that get pulled in
transitively and also have the same provides...

You only need to look at the poms that are resolved via the current pom for
which we are evaluating the dependency tree.
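In other words, a provides declaration could be flattened into the generated model 4.0.0 pom as ordinary per-dependency exclusions. A hypothetical sketch (the coordinates are made up):

```xml
<!-- v5: org.example:my-container declares <provide gav="javax:servlet-api:3.0"/> -->
<!-- generated 4.0.0 pom: every other dependency that would pull in
     javax:servlet-api gets an explicit, non-wildcard exclusion -->
<dependencies>
  <dependency>
    <groupId>org.example</groupId>
    <artifactId>my-container</artifactId>
    <version>1.0</version>
  </dependency>
  <dependency>
    <groupId>org.example</groupId>
    <artifactId>some-lib</artifactId>
    <version>2.0</version>
    <exclusions>
      <!-- excluded because my-container provides the servlet API -->
      <exclusion>
        <groupId>javax</groupId>
        <artifactId>servlet-api</artifactId>
      </exclusion>
    </exclusions>
  </dependency>
</dependencies>
```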


 One option is to download entire repository index to the client, but
 Central index will likely be in 100x of megabytes, which makes this
 approach impractical. The only other option is to keep the index on the
 server and have server-side helper to answer index queries.

 --
 Regards,
 Igor

 On 11/24/2013, 10:38, Stephen Connolly wrote:

 On Sunday, 24 November 2013, Igor Fedorenko wrote:

  I think we are saying the same thing -- we evolve project model used
 during the build but deploy both the new and backwards compatible models.

 One quick note on representing dependencies as provided/required
 capabilities.



 I think it needs to be deterministic, which means it should not need an
 active server-side assist.

 A pom could declare

 <provides>
   <provide gav="javax:servlet-api:3.0"/>
 </provides>

 That means if you declare *that* pom as a dependency of yours it will (by
 being nearer in the graph) replace any servlet-api dependencies in your
 graph.

 You can also do similar with dependencies, eg

 <dependency gav="org.slf4j:log4j-over-slf4j:1.7">
   <provides>
     <provide gav="log4j:log4j:1.2"/>
   </provides>
 </dependency>

 Either form is deterministic and does not introduce dynamic resolution into
 the model... But they make the things people want to do a lot easier IMHO

 Although I like this idea in general, I believe it will

 require completely new repository layout to be able to efficiently
 find capability providers. Single repository-wide metadata index file,
 the approach implemented in P2 for example, won't scale for repositories
 of the size of Central, so most likely the new repository layout will
 require active server-side component to assist dependency resolution.

 --
 Regards,
 Igor

 On 11/24/2013, 4:25, Stephen Connolly wrote:

 On Sunday, 24 November 2013, Igor Fedorenko wrote:



 On 11/23/2013, 23:08, Jason van Zyl wrote:


 On Nov 23, 2013, at 5:44 PM, Stephen Connolly
 stephen.alan.conno...@gmail.com wrote:

Before I forget, here are some of my thoughts on moving towards

 Model Version 5.0.0

 The pom that we build with need not be the pom that gets
 deployed... thus the pom that is built with need not be the same
 format as the pom that gets deployed.


   Can you explain why you think this is useful? To me all the
 information that is carried with the POM after deployment is
 primarily for the consumption of tools, and there are a lot of tools
 that expect more than the dependency information. Removing all other
 elements in the POM is equivalent to being massively backward
 incompatible for an API. And if the subsequent consumption after
 deployment is primarily by programs, then why does it matter what
 gets deployed. I don't really see much benefit, but will create all
 sorts of technical problems where we need multiple readers and all
 that entails and the massive number of problems this will cause
 people who have created tooling, especially IDE integration. 


 The way I see it, what is deployed describes how the artifact needs to
 be consumed. This is artifact's public API, if you will, it will be
 consumed by wide range of tools that resolve dependencies from Maven
 repositories and descriptor format should be very stable. Mostly likely
 we have no choice but use (a subset of) the current 4.0.0 model version.



 I would be very sad if we are limited to a subset.

 There are some critical concepts that in my view are missing from pom
 files.

 Number one on my hit list is a provides concept.

 Where you declare that an artifact *provides* the same api as another GAV.

 Technically you'd need to be able to specify this both at the root of a pom
 and also flag specific dependencies (because the api they provide was not
 specified when that pom was deployed)

 Thus the Geronimo specs poms could all provides the corresponding JavaEE
 specs and excludes issues or other hacks would no longer be required.

 Look at the issues you will have if you use the excludes wildcards in your
 pom... Namely *anyone* who uses your artifact as a dependency will need to
 be using Maven 3 or newer... does ivy read those wildcards correctly? Does
 sbt? Does Buildr?

 They are a tempting siren... And from another PoV they will force others to
 follow... *but* if we are forcing them to follow should we not pick a nicer
 format to follow... Not one consisting of many layers of hacks?

 The modelVersion 4.0.0 pom is deployed to the repo (in my scheme) so that
 legacy clients can still make some sense... If a modelVersion 5.0.0 feature

 

Re: Model Version 5.0.0

2013-11-24 Thread Stephen Connolly
On Sunday, 24 November 2013, Jason van Zyl wrote:


 On Nov 24, 2013, at 3:59 AM, Stephen Connolly 
 stephen.alan.conno...@gmail.com javascript:; wrote:

  On Sunday, 24 November 2013, Jason van Zyl wrote:
 
 
  On Nov 23, 2013, at 5:44 PM, Stephen Connolly 
  stephen.alan.conno...@gmail.com javascript:; javascript:; wrote:
 
  Before I forget, here are some of my thoughts on moving towards Model
  Version 5.0.0
 
The pom that we build with need not be the pom that gets deployed...
thus the pom that is built with need not be the same format as the
 pom
that gets deployed.
 
 
  Can you explain why you think this is useful? To me all the information
  that is carried with the POM after deployment is primarily for the
  consumption of tools, and there are a lot of tools that expect more than
  the dependency information. Removing all other elements in the POM is
  equivalent to being massively backward incompatible for an API. And if
 the
  subsequent consumption after deployment is primarily by programs, then
 why
  does it matter what gets deployed. I don't really see much benefit, but
  will create all sorts of technical problems where we need multiple
 readers
  and all that entails and the massive number of problems this will cause
  people who have created tooling, especially IDE integration.
 
 
  I am not saying that we remove *all* other elements. I am saying that we
  don't really need as many of them.
 
  There are a lot of elements that have questionable utility...

 That may be, but they are there and you have no idea what they are being
 used for and again to have a new, future proof, extensible format is great.
 But we'll continue to publish model 4.0.0 poms. I think that's fine, I'm
 still not convinced making multiple documents is useful.


Well, I would favour us re-examining the elements to see if there is a good
reason to keep them in the non-build pom.

It may be that all of them have good reason for keeping, but my suspicion
is that there are a few that no longer have value.



 
  How often are the developers and contributors tags correct?
 

 We honestly have no idea.


Is there merit in keeping them? They need manual updating... I would
imagine it is only the rare ones that are actually correct.



  Do we really need to know the distributionManagement?
 

 For provenance possibly.


In most cases it will be via Sonatype... unless we get somewhere with John
Casey's ideas for a fully decentralised repository system, that is ;-)


  On the other hand there are some tags that have utility: SCM, URL, name,
  description, dependencies (to name a few off the top of my head)
 
  I am not saying that the above are a complete list. I am saying that this
  gives us an opportunity to look at this and see what we really want in the
  pom.
 

 Honestly I think it would be better to start with concrete desires for
 additions like what Manfred described, or Tycho-specific things Igor
 suggested, and just figure out what an ideal format might look like with
 the two things that I think are most important: extensibility and future
 proofing. I don't think trying to start with the tech required for an
 ill-defined set of actual use cases is a good idea.

 Most things that need to be changed can be transformed into a model 4.0.0
 POM. For things that can't we can still use the technique of having a
 pseudo plugin that contains configuration that can change the way the core
 behaves. A global artifact swap for example: flip commons-logging to SLF4J.
 A contrived example but this can be specified in a pseudo plugin
 configuration. A participant can change the model before execution and it
 provides a way that people can try different things without changing the
 core directly or the pom model. I think it's a good way to try features and
 you just need to be using Maven 3. If we find that we like particular
 features the configuration in the pseudo plugin can be graduated to the
 official model and we can move code from participants into the core.

 Anyone can try this technique today for a feature they want in Maven.
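
As a concrete illustration of the pseudo-plugin technique described above
(all coordinates and element names here are hypothetical, invented for this
sketch; no such plugin exists), the swap could be parked as configuration
for a build participant to read:

```xml
<plugin>
  <!-- hypothetical coordinates: this "plugin" carries no mojos, it only
       parks configuration that a core build participant reads -->
  <groupId>org.apache.maven.experiments</groupId>
  <artifactId>model-tweaks-maven-plugin</artifactId>
  <version>0.1-SNAPSHOT</version>
  <configuration>
    <!-- the participant rewrites the model before execution,
         swapping commons-logging for the SLF4J bridge -->
    <swaps>
      <swap>
        <from>commons-logging:commons-logging</from>
        <to>org.slf4j:jcl-over-slf4j:1.7.5</to>
      </swap>
    </swaps>
  </configuration>
</plugin>
```

If the feature proves useful, the same configuration shape could later be
promoted into the official model.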


The issue is that such a solution is non-portable.

Unless the deployed model is different from the on-disk model, *consumers*
will not get the correct model.

This is why the excludes-with-wildcards idea is a bad plan... It only works
as long as either:
* the deployed pom has wildcards expanded,
or
* all consumers understand wildcards (which will basically break any
consumers using Maven 2.x or the Ant tasks, or perhaps Ivy, Buildr, other
Ruby-based clients, etc.)
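
For reference, the wildcard exclusion form at issue looks roughly like this
in a model 4.0.0 pom (coordinates illustrative); a Maven 2.x client or Ivy
reads the `*` as a literal groupId and excludes nothing:

```xml
<dependency>
  <groupId>org.example</groupId>
  <artifactId>widget</artifactId>
  <version>1.0</version>
  <exclusions>
    <!-- "*" is only meaningful to clients that implement wildcard
         expansion; older consumers silently ignore the intent -->
    <exclusion>
      <groupId>*</groupId>
      <artifactId>*</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```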

Yes, we can shoehorn the functionality into a plugin's configuration... but
that makes writing such features near impossible, as you now need to
reparse the effective model for all projects in your transitive tree... and
you're fighting with Aether all the way (because before you have your
chance, Aether has resolved the model without your tweaks).

The reality is that one of Maven's core features is dependency
management... Thus it 

Re: Model Version 5.0.0

2013-11-24 Thread Jason van Zyl

On Nov 24, 2013, at 10:28 AM, Benson Margulies bimargul...@gmail.com wrote:

 It seems to me that this thread is mixing two topics.
 
  Topic #1: How do we move to pom 5.0, given a giant ecosystem of crappy
 XML-parsing POM consumers?
 
 Topic #2: To what extent does the pom mix a 'description of contract'
 (dependencies, etc) with a 'specification of build'?
 
 On the first topic, there was a wiki page months ago that explored a
 scheme for writing both a v4 pom and a v5 pom when deploying from a v5
 project, so that old tools could see and consume what they understand.
 To the extent that this scheme made sense, it can be adopted without
 (necessarily) touching the second.
 

If you are referring to this:

https://cwiki.apache.org/confluence/display/MAVEN/Moving+forward+with+the+POM+data+model

Then I think what this document lacks are the use cases that drive anything. 
Without actually having some target features you cannot understand what you 
require technically. 

I think from the discussion thus far we have the following features:

- API provides (from Stephen)
- Runtime requirements (from Manfred)
- Global excludes (much asked for feature)
- Global swaps (much asked for feature)

Additionally, by way of requirements:
- Are we going to allow for extensibility?
- Are we going to be future proof?
- Are we going to provide backward compatibility?

I believe this is where we start. Many of the answers about how the 
implementation will look will be driven by specific features and answers to 
requirements questions.

 On the second topic, I'm in agreement that there should be a clear
 separation between describing a contract and other things. For
 example, why is it a good idea for deployed poms to reference parents,
 rather than being self-contained? Why is it a good idea for deployed
 poms to include profiles? Why is it a good thing for deployed poms to
  include parameter references, thereby in some cases accidentally
 changing their semantics due to collisions with the consuming
 application's pom? The full 'here's how to build' pom, in my view, is
 part of the source, and should be deployed with the source, and any
 tool that can usefully analyze the details (plugins, pluginManagement,
 etc) is welcome to do so. Having written this, it also seems to me
 that one compromise could be that v5 deployed poms could be
  self-contained, complete, but organized so as to be clear as to the two
 categories of contents.
 
 
 
 On Sun, Nov 24, 2013 at 9:29 AM, Igor Fedorenko i...@ifedorenko.com wrote:
 I think we are saying the same thing -- we evolve project model used
 during the build but deploy both the new and backwards compatible models.
 
 One quick note on representing dependencies as provided/required
 capabilities. Although I like this idea in general, I believe it will
 require completely new repository layout to be able to efficiently
 find capability providers. Single repository-wide metadata index file,
 the approach implemented in P2 for example, won't scale for repositories
 of the size of Central, so most likely the new repository layout will
  require an active server-side component to assist dependency resolution.
 
 --
 Regards,
 Igor
 
 
 On 11/24/2013, 4:25, Stephen Connolly wrote:
 
 On Sunday, 24 November 2013, Igor Fedorenko wrote:
 
 
 
 On 11/23/2013, 23:08, Jason van Zyl wrote:
 
 
 On Nov 23, 2013, at 5:44 PM, Stephen Connolly
 stephen.alan.conno...@gmail.com wrote:
 
  Before I forget, here are some of my thoughts on moving towards
 
 Model Version 5.0.0
 
 The pom that we build with need not be the pom that gets
 deployed... thus the pom that is built with need not be the same
 format as the pom that gets deployed.
 
 
 Can you explain why you think this is useful? To me all the
 information that is carried with the POM after deployment is
 primarily for the consumption of tools, and there are a lot of tools
 that expect more than the dependency information. Removing all other
 elements in the POM is equivalent to being massively backward
 incompatible for an API. And if the subsequent consumption after
 deployment is primarily by programs, then why does it matter what
 gets deployed. I don't really see much benefit, but will create all
 sorts of technical problems where we need multiple readers and all
 that entails and the massive number of problems this will cause
 people who have created tooling, especially IDE integration. 
 
 
 The way I see it, what is deployed describes how the artifact needs to
 be consumed. This is artifact's public API, if you will, it will be
 consumed by wide range of tools that resolve dependencies from Maven
  repositories and the descriptor format should be very stable. Most likely
  we have no choice but to use (a subset of) the current 4.0.0 model version.
 
 
 
 I would be very sad if we are limited to a subset.
 
 There are some critical concepts that in my view are missing from pom
 files.
 
 Number one on my hit list is a provides concept.
 
 Where you declare that 

Re: release maven-verifier

2013-11-24 Thread Hervé BOUTEMY
I think I fixed it, just by keeping the verifier's version detection from
being confused when parsing content between parentheses.
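
(For illustration only, not the actual maven-verifier code: the guard
amounts to stripping parenthesised content from the version banner before
matching, along these lines. Class and method names are hypothetical.)

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class VersionBannerSketch {
    // Hypothetical sketch: drop any "(...)" groups from the banner first,
    // so text like "(log4j2)" cannot be mistaken for part of the version.
    public static String extractVersion(String banner) {
        String cleaned = banner.replaceAll("\\([^)]*\\)", " ");
        Matcher m = Pattern
            .compile("\\d+\\.\\d+(\\.\\d+)?(-[A-Za-z0-9]+)?")
            .matcher(cleaned);
        return m.find() ? m.group() : null;
    }

    public static void main(String[] args) {
        System.out.println(extractVersion(
            "Apache Maven 3.1.2-SNAPSHOT (log4j2)"));
    }
}
```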

Regards,

Hervé

Le dimanche 24 novembre 2013 11:24:42 Robert Scholte a écrit :
 Does it contain a fix for testing the log4j branch of Maven?
 
 See for example
 https://builds.apache.org/job/core-integration-testing-maven-3-jdk-1.6-log4j
 2/457/console
 
 Running integration tests for Maven 4J2)
   using Maven executable:
 /home/jenkins/jenkins-slave/workspace/core-integration-testing-maven-3-jdk-1
 .6-log4j2/apache-maven-3-SNAPSHOT/bin/mvn
 Bootstrap(Bootstrap)SKIPPED - Maven
 version 4J2) not in range [2.0,)
 mng5530MojoExecutionScope(_copyfiles)...SKIPPED -
 Maven version 4J2) not in range [3.1.2,)
 mng5482AetherNotFound(PluginDependency).SKIPPED -
 Maven version 4J2) not in range [3.1-A,)
 mng5482AetherNotFound(PluginSite)...SKIPPED -
 Maven version 4J2) not in range [3.1-A,)
 mng5445LegacyStringSearchModelInterpolator(it)..SKIPPED -
 Maven version 4J2) not in range [3.1,)
 mng5387ArtifactReplacementPlugin(ArtifactReplacementExecution)SKIPPED -
 Maven version 4J2) not in range [3.1,)
 mng5382Jsr330Plugin(Jsr330PluginExecution)..SKIPPED -
 Maven version 4J2) not in range [3.1-alpha,)
 mng5338FileOptionToDirectory(FileOptionToADirectory)SKIPPED -
 Maven version 4J2) not in range [3.1-A,)
 
 
 Somehow the version is extracted as 4J2), which makes this job quite
 useless.
 
 Robert
 
 
 Op Sun, 24 Nov 2013 06:33:23 +0100 schreef Igor Fedorenko
 
 i...@ifedorenko.com:
  Hello,
  
  I'd like to release maven-verifier 1.5 and switch core ITs to the new
  version some time next week. This is to pick up generics changes and
  IDE integration hooks I just introduced. Any objections?
  
  --
  Regards,
  Igor
  
  -
  To unsubscribe, e-mail: dev-unsubscr...@maven.apache.org
  For additional commands, e-mail: dev-h...@maven.apache.org
 
 -
 To unsubscribe, e-mail: dev-unsubscr...@maven.apache.org
 For additional commands, e-mail: dev-h...@maven.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@maven.apache.org
For additional commands, e-mail: dev-h...@maven.apache.org



Re: [VOTE] Apache Maven SCM 1.9

2013-11-24 Thread Dominik Bartholdi
Hi everyone,
I think I solved all the issues we had on Windows with the jgit-provider.
@Robert can you have another try now?
The build https://builds.apache.org/job/maven-scm/ currently fails, but this is 
related to an issue with the upload to the snapshot repository at 
https://repository.apache.org/content/repositories/snapshots/
regards Domi


On 29.10.2013, at 09:27, Olivier Lamy ol...@apache.org wrote:

 for the record vote cancel.
 
 
 On 29 October 2013 17:20, Domi d...@fortysix.ch wrote:
 I was pointed to Matthias Sohn (jgit committer); let's see if he has an idea
 before we do a release of this.
 His first thought was the WindowCache.reconfigure() - but Robert already 
 fixed that.
 /Domi
 
 Am 28.10.2013 um 20:51 schrieb Robert Scholte rfscho...@apache.org:
 
 @Kristian: Brilliant data!
 
 @Dennis: the statistics have changed[1]. I managed to fix it a bit, but as 
 Kristian mentioned: some parts are out of reach and can't be closed by our 
 code (let's avoid reflection!).
 
 I believe that in this case the Windows behavior is the preferred one: if 
 you open a stream, you should close it too.
 Anyhow, we need a fix from JGit.
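
 The "if you open a stream, you should close it too" rule is the whole fix;
 a minimal sketch (file and class names hypothetical, not jgit's actual
 code) of what the provider code needs to guarantee so Windows can delete
 the file afterwards:

 ```java
 import java.io.File;
 import java.io.IOException;
 import java.io.RandomAccessFile;

 public class CloseBeforeDelete {
     public static void main(String[] args) throws IOException {
         File idx = new File("demo-pack.idx");
         // try-with-resources closes the handle when the block exits;
         // an open handle is exactly what blocks deletion on Windows
         try (RandomAccessFile raf = new RandomAccessFile(idx, "rw")) {
             raf.writeInt(0xCAFEBABE);
         }
         System.out.println(idx.delete()); // handle released, so delete works
     }
 }
 ```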
 
 Since JGit is not yet part of the Maven SCM Standard Providers I think
 we are safe.
 Users need to explicitly add this provider if they want to use it.
 So a non Windows compatible warning on the website is fine by me.
 
 Robert
 
 [1] https://builds.apache.org/job/maven-scm-windows/
 
 Op Mon, 28 Oct 2013 16:15:06 +0100 schreef Dennis Lundberg 
 denn...@apache.org:
 
 Thanks a lot Kristian!
 
 Do I understand you correctly that the leak is in the jgit Checkout 
 command?
 If so, there are probably more leaks in there since 9 of our tests
 fail, each testing a different command. Some tests do succeed though.
 
 So how do we proceed with this?
 Submit patches for jgit?
 Release maven-scm as is? If so we need to inform our users about the
 current limitations.
 
 
 I agree that Windows sometimes sucks when it comes to handling files,
 but this is a double-edged sword. It does help us find problems like
 these, that might otherwise pop up in a Windows production environment
 after we release.
 
 Also having failing tests for one platform isn't very likely to
 attract new developers from that platform. Turning it into a
 never-ending downward spiral.
 
 
 On Mon, Oct 28, 2013 at 8:22 AM, Kristian Rosenvold
 kristian.rosenv...@gmail.com wrote:
 Finding this kind of leak with my graciously provided OSS license of
 YJP is like stealing candy from children.
 
 export MAVEN_OPTS="-Xms512m -Xmx2084m -XX:MaxPermSize=512m -agentpath:C:/java/yjp-12.0.6/bin/win64/yjpagent.dll=onexit=snapshot"
 c:/java/apache-maven-3.1.1/bin/mvn $@
 
 Run test with forkMode never.
 
 Click on the inspections tag, run all inspections.
 
 
 A quick run with jprofiler on the surefire fork reveals that the
 un-closed file is allocated here. This even works on linux :)
 
 Kristian
 
 
 
 java.io.RandomAccessFile.<init>(File, String)
 org.eclipse.jgit.internal.storage.file.PackFile.doOpen()
 org.eclipse.jgit.internal.storage.file.PackFile.beginWindowCache()
 org.eclipse.jgit.internal.storage.file.WindowCache.load(PackFile, long)
 org.eclipse.jgit.internal.storage.file.WindowCache.getOrLoad(PackFile, 
 long)
 org.eclipse.jgit.internal.storage.file.WindowCache.get(PackFile, long)
 org.eclipse.jgit.internal.storage.file.WindowCursor.pin(PackFile, long)
 org.eclipse.jgit.internal.storage.file.WindowCursor.copy(PackFile,
 long, byte[], int, int)
 org.eclipse.jgit.internal.storage.file.PackFile.readFully(long,
 byte[], int, int, WindowCursor)
 org.eclipse.jgit.internal.storage.file.PackFile.load(WindowCursor, long)
 org.eclipse.jgit.internal.storage.file.PackFile.get(WindowCursor, 
 AnyObjectId)
 org.eclipse.jgit.internal.storage.file.ObjectDirectory.openObject1(WindowCursor,
 AnyObjectId)
 org.eclipse.jgit.internal.storage.file.FileObjectDatabase.openObjectImpl1(WindowCursor,
 AnyObjectId)
 org.eclipse.jgit.internal.storage.file.FileObjectDatabase.openObject(WindowCursor,
 AnyObjectId)
 org.eclipse.jgit.internal.storage.file.WindowCursor.open(AnyObjectId, int)
 org.eclipse.jgit.lib.ObjectReader.open(AnyObjectId)
 org.eclipse.jgit.revwalk.RevWalk.parseAny(AnyObjectId)
 org.eclipse.jgit.revwalk.RevWalk.parseCommit(AnyObjectId)
 org.eclipse.jgit.api.CloneCommand.parseCommit(Repository, Ref)
 org.eclipse.jgit.api.CloneCommand.checkout(Repository, FetchResult)
 org.eclipse.jgit.api.CloneCommand.call()
 org.apache.maven.scm.provider.git.jgit.command.checkout.JGitCheckOutCommand.executeCheckOutCommand(ScmProviderRepository,
 ScmFileSet, ScmVersion, boolean)
 org.apache.maven.scm.command.checkout.AbstractCheckOutCommand.executeCommand(ScmProviderRepository,
 ScmFileSet, CommandParameters)
 org.apache.maven.scm.command.AbstractCommand.execute(ScmProviderRepository,
 ScmFileSet, CommandParameters)
 org.apache.maven.scm.provider.git.AbstractGitScmProvider.executeCommand(GitCommand,
 ScmProviderRepository, ScmFileSet, 

Re: Model Version 5.0.0

2013-11-24 Thread Stephen Connolly
On 24 November 2013 17:44, Jason van Zyl ja...@tesla.io wrote:


 On Nov 24, 2013, at 10:28 AM, Benson Margulies bimargul...@gmail.com
 wrote:

  It seems to me that this thread is mixing two topics.
 
  Topic #1: How do we move to pom 5.0, given a giant ecosystem of crappy
  XML-parsing POM consumers?
 
  Topic #2: To what extent does the pom mix a 'description of contract'
  (dependencies, etc) with a 'specification of build'?
 
  On the first topic, there was a wiki page months ago that explored a
  scheme for writing both a v4 pom and a v5 pom when deploying from a v5
  project, so that old tools could see and consume what they understand.
  To the extent that this scheme made sense, it can be adopted without
  (necessarily) touching the second.
 

 If you are referring to this:


 https://cwiki.apache.org/confluence/display/MAVEN/Moving+forward+with+the+POM+data+model

 Then I think what this document lacks are the use cases that drive
 anything. Without actually having some target features you cannot
 understand what you require technically.

 I think from the discussion thus far we have the following features:

 - API provides (from Stephen)
 - Runtime requirements (from Manfred)
 - Global excludes (much asked for feature)
 - Global swaps (much asked for feature)


Additionally, I think we should refine scopes... there are some that are
likely missing and some, such as `system`, that should be removed.

Platform dependency *could* be handled by a dependency, e.g.

<dependency gav="java:java:1.8:platform"/>

could indicate that you need java 8 to run...

The question though is how you handle multiple potential platforms, e.g.
works on java 1.6 or android...

That may require a change to the dependency model... some sort of
dependency group... whereby any one of the deps in the group can satisfy
the need...

A potentially better solution would be a specific platform section... but
is the more generic dep based solution perhaps better?
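
To make that concrete, one heavily hypothetical sketch of what a model
5.0.0 dependency group might look like (the element names and the any-of
semantics are invented here for illustration, not an existing proposal):

```xml
<!-- hypothetical model 5.0.0 syntax: the group is satisfied
     if ANY one member dependency can be satisfied -->
<dependencyGroup satisfiedBy="any">
  <dependency gav="java:java:[1.6,):platform"/>
  <dependency gav="android:android:[2.3,):platform"/>
</dependencyGroup>
```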



 Additionally by requirements:
 - Are we going to allow for extensibility?
 - Are we going to be future proof?
 - Are we going to provide backward compatibility?

 I believe this is where we start. Many of the answers about how the
 implementation will look will be driven by specific features and answers to
 requirements questions.


Another point is that if we don't acknowledge that we need to rev the spec
(and this may be the only chance to rev the spec for a while), we won't see
the features we need.

Hacking the 4.0.0 pom will only make baby steps and lead to hacky
solutions... opening up the chance to rev the pom spec and schema opens up
the potential for other ideas.



  On the second topic, I'm in agreement that there should be a clear
  separation between describing a contract and other things. For
  example, why is it a good idea for deployed poms to reference parents,
  rather than being self-contained? Why is it a good idea for deployed
  poms to include profiles? Why is it a good thing for deployed poms to
   include parameter references, thereby in some cases accidentally
  changing their semantics due to collisions with the consuming
  application's pom? The full 'here's how to build' pom, in my view, is
  part of the source, and should be deployed with the source, and any
  tool that can usefully analyze the details (plugins, pluginManagement,
  etc) is welcome to do so. Having written this, it also seems to me
  that one compromise could be that v5 deployed poms could be
   self-contained, complete, but organized so as to be clear as to the two
  categories of contents.
 
 
 
  On Sun, Nov 24, 2013 at 9:29 AM, Igor Fedorenko i...@ifedorenko.com
 wrote:
  I think we are saying the same thing -- we evolve project model used
  during the build but deploy both the new and backwards compatible
 models.
 
  One quick note on representing dependencies as provided/required
  capabilities. Although I like this idea in general, I believe it will
  require completely new repository layout to be able to efficiently
  find capability providers. Single repository-wide metadata index file,
  the approach implemented in P2 for example, won't scale for repositories
  of the size of Central, so most likely the new repository layout will
  require active server-side component to assist dependency resolution.
 
  --
  Regards,
  Igor
 
 
  On 11/24/2013, 4:25, Stephen Connolly wrote:
 
  On Sunday, 24 November 2013, Igor Fedorenko wrote:
 
 
 
  On 11/23/2013, 23:08, Jason van Zyl wrote:
 
 
  On Nov 23, 2013, at 5:44 PM, Stephen Connolly
  stephen.alan.conno...@gmail.com wrote:
 
   Before I forget, here are some of my thoughts on moving towards
 
  Model Version 5.0.0
 
  The pom that we build with need not be the pom that gets
  deployed... thus the pom that is built with need not be the same
  format as the pom that gets deployed.
 
 
  Can you explain why you think this is useful? To me all the
  information that is carried with the POM after deployment is
  primarily for the consumption 

Re: [VOTE] Apache Maven SCM 1.9

2013-11-24 Thread Robert Scholte

We're getting closer, only one error left:

Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.926 sec <<< FAILURE! - in org.apache.maven.scm.provider.git.jgit.command.tag.JGitTagCommandTckTest

testTagCommandTest(org.apache.maven.scm.provider.git.jgit.command.tag.JGitTagCommandTckTest)  Time elapsed: 1.817 sec <<< ERROR!
java.io.IOException: Could not delete file F:\java-workspace\apache-maven-scm\maven-scm\maven-scm-providers\maven-scm-providers-git\maven-scm-provider-jgit\target\scm-test\updating-copy\.git\objects\pack\pack-3ecde7a8782b53b94510513a4b1275d7e33392a9.idx
        at org.eclipse.jgit.util.FileUtils.delete(FileUtils.java:180)
        at org.eclipse.jgit.util.FileUtils.delete(FileUtils.java:147)
        at org.eclipse.jgit.util.FileUtils.delete(FileUtils.java:149)
        at org.eclipse.jgit.util.FileUtils.delete(FileUtils.java:149)
        at org.eclipse.jgit.util.FileUtils.delete(FileUtils.java:149)
        at org.apache.maven.scm.provider.git.jgit.command.tag.JGitTagCommandTckTest.deleteDirectory(JGitTagCommandTckTest.java:52)

Results :

Tests in error:
  JGitTagCommandTckTest>ScmTckTestCase.setUp:106->ScmTestCase.setUp:71->deleteDirectory:52 » IO

Even when Maven has finished I still can't delete these files.

Robert

Op Sun, 24 Nov 2013 19:30:22 +0100 schreef Dominik Bartholdi  
d...@fortysix.ch:



Hi everyone,
I think I solved all the issues we had on windows with the jgit-provider
@Robert can you have another try now?
The build https://builds.apache.org/job/maven-scm/ currently fails, but  
this is related to an issue with the upload to the snapshot repository  
at https://repository.apache.org/content/repositories/snapshots/

regards Domi


On 29.10.2013, at 09:27, Olivier Lamy ol...@apache.org wrote:


for the record vote cancel.


On 29 October 2013 17:20, Domi d...@fortysix.ch wrote:
I was pointed to Matthias Sohn (jgit commiter) let's see if he has an  
idea, before we do a release of this.
His first thought was the WindowCache.reconfigure() - but Robert  
already fixed that.

/Domi

Am 28.10.2013 um 20:51 schrieb Robert Scholte  
rfscho...@apache.org:


@Kristian: Brilliant data!

@Dennis: the statistics have changed[1]. I managed to fix it a bit,  
but as Kristian mentioned: some parts are out of reach and can't be  
closed by our code (let's avoid reflection!).


I believe that in this case the Windows behavior is the preferred  
one: if you open a stream, you should close it too.

Anyhow, we need a fix from JGit.

Since the JGit is not yet part of the Maven SCM Standard Providers I  
think we are safe.

Users need to explicitly add this provider if they want to use it.
So a non Windows compatible warning on the website is fine by me.

Robert

[1] https://builds.apache.org/job/maven-scm-windows/

Op Mon, 28 Oct 2013 16:15:06 +0100 schreef Dennis Lundberg  
denn...@apache.org:



Thanks a lot Kristian!

Do I understand you correctly that the leak is in the jgit Checkout  
command?

If so, there are probably more leaks in there since 9 of our tests
fail, each testing a different command. Some tests do succeed though.

So how do we proceed with this?
Submit patches for jgit?
Release maven-scm as is? If so we need to inform our users about the
current limitations.


I agree that Windows sometimes sucks when it comes to handling files,
but this is a double-edged sword. It does help us find problems like
these, that might otherwise pop up in a Windows production  
environment

after we release.

Also having failing tests for one platform isn't very likely to
attract new developers from that platform. Turning it into a
never-ending downward spiral.


On Mon, Oct 28, 2013 at 8:22 AM, Kristian Rosenvold
kristian.rosenv...@gmail.com wrote:
Finding this kind of leaks with my graciously provided OSS license  
of

YJP is like stealing candy from children

export MAVEN_OPTS=-Xms512m -Xmx2084m -XX:MaxPermSize=512m
-agentpath:C:/java/yjp-12.0.6/bin/win64/yjpagent.dll=onexit=snapshot
c:/java/apache-maven-3.1.1/bin/mvn $@

Run test with forkMode never.

Click on the inspections tag, run all inspections.


A quick run with jprofiler on the surefire fork reveals that the
un-closed file is allocated here. This even works on linux :)

Kristian



java.io.RandomAccessFile.<init>(File, String)
org.eclipse.jgit.internal.storage.file.PackFile.doOpen()
org.eclipse.jgit.internal.storage.file.PackFile.beginWindowCache()
org.eclipse.jgit.internal.storage.file.WindowCache.load(PackFile,  
long)
org.eclipse.jgit.internal.storage.file.WindowCache.getOrLoad(PackFile,  
long)
org.eclipse.jgit.internal.storage.file.WindowCache.get(PackFile,  
long)
org.eclipse.jgit.internal.storage.file.WindowCursor.pin(PackFile,  
long)

org.eclipse.jgit.internal.storage.file.WindowCursor.copy(PackFile,
long, byte[], int, int)
org.eclipse.jgit.internal.storage.file.PackFile.readFully(long,
byte[], int, int, WindowCursor)
org.eclipse.jgit.internal.storage.file.PackFile.load(WindowCursor,  
long)

RE: Model Version 5.0.0

2013-11-24 Thread Robert Patrick
 Additionally, I think we should refine scopes... there are some that are 
 likely missing and some, such as `system` that should be removed.

Pardon my ignorance, but while I understand the negative implications of
using system-scoped dependencies, I believe there are at least a few use
cases where they are required. For example, we have a plugin that depends
on tools.jar from the JDK. We currently use a system-scoped dependency for
this. If you were to remove system-scoped dependencies, how would you
propose that people handle use cases such as this?
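
For context, the tools.jar use case is conventionally written with a
system-scoped dependency like the following (version illustrative):

```xml
<dependency>
  <groupId>com.sun</groupId>
  <artifactId>tools</artifactId>
  <version>1.7.0</version>
  <scope>system</scope>
  <!-- java.home points at the JRE, so step up one level to the JDK -->
  <systemPath>${java.home}/../lib/tools.jar</systemPath>
</dependency>
```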

 
-Original Message-
From: Stephen Connolly [mailto:stephen.alan.conno...@gmail.com] 
Sent: Sunday, November 24, 2013 1:34 PM
To: Maven Developers List
Subject: Re: Model Version 5.0.0

On 24 November 2013 17:44, Jason van Zyl ja...@tesla.io wrote:


 On Nov 24, 2013, at 10:28 AM, Benson Margulies bimargul...@gmail.com
 wrote:

  It seems to me that this thread is mixing two topics.
 
  Topic #1: How do we move to pom 5.0, given a giant ecosystem of 
  crappy XML-parsing POM consumers?
 
  Topic #2: To what extent does the pom mix a 'description of contract'
  (dependencies, etc) with a 'specification of build'?
 
  On the first topic, there was a wiki page months ago that explored a 
  scheme for writing both a v4 pom and a v5 pom when deploying from a 
  v5 project, so that old tools could see and consume what they understand.
  To the extent that this scheme made sense, it can be adopted without
  (necessarily) touching the second.
 

 If you are referring to this:


 https://cwiki.apache.org/confluence/display/MAVEN/Moving+forward+with+
 the+POM+data+model

 Then I think what this document lacks are the use cases that drive 
 anything. Without actually having some target features you cannot 
 understand what you require technically.

 I think from the discussion thus far we have the following features:

 - API provides (from Stephen)
 - Runtime requirements (from Manfred)
 - Global excludes (much asked for feature)
 - Global swaps (much asked for feature)


Additionally, I think we should refine scopes... there are some that are likely 
missing and some, such as `system`, that should be removed.

Platform dependency *could* be handled by a dependency, e.g.

<dependency gav="java:java:1.8:platform"/>

could indicate that you need java 8 to run...

The question though is how you handle multiple potential platforms, e.g.
works on java 1.6 or android...

That may require a change to the dependency model... some sort of dependency 
group... whereby any one of the deps in the group can satisfy the need...

A potentially better solution would be a specific platform section... but is 
the more generic dep based solution perhaps better?



 Additionally by requirements:
 - Are we going to allow for extensibility?
 - Are we going to be future proof?
 - Are we going to provide backward compatibility?

 I believe this is where we start. Many of the answers about how the 
 implementation will look will be driven by specific features and 
 answers to requirements questions.


Another point is that if we don't acknowledge that we need to rev the spec
(and this may be the only chance to rev the spec for a while), we won't see
the features we need.

Hacking the 4.0.0 pom will only make baby steps and lead to hacky solutions... 
opening up the chance to rev the pom spec and schema opens up the potential for 
other ideas



  On the second topic, I'm in agreement that there should be a clear 
  separation between describing a contract and other things. For 
  example, why is it a good idea for deployed poms to reference 
  parents, rather than being self-contained? Why is it a good idea for 
  deployed poms to include profiles? Why is it a good thing for 
  deployed poms to include parameter references, thereby in some cases 
   accidentally changing their semantics due to collisions with the 
  consuming application's pom? The full 'here's how to build' pom, in 
  my view, is part of the source, and should be deployed with the 
  source, and any tool that can usefully analyze the details (plugins, 
  pluginManagement,
  etc) is welcome to do so. Having written this, it also seems to me 
  that one compromise could be that v5 deployed poms could be 
   self-contained, complete, but organized so as to be clear as to the two 
  categories of contents.
 
 
 
  On Sun, Nov 24, 2013 at 9:29 AM, Igor Fedorenko 
  i...@ifedorenko.com
 wrote:
  I think we are saying the same thing -- we evolve project model 
  used during the build but deploy both the new and backwards 
  compatible
 models.
 
  One quick note on representing dependencies as provided/required 
  capabilities. Although I like this idea in general, I believe it 
  will require completely new repository layout to be able to 
  efficiently find capability providers. Single repository-wide 
  metadata index file, the approach implemented in P2 for example, 
  won't scale for repositories of the size of Central, so 

Re: [VOTE] Apache Maven SCM 1.9

2013-11-24 Thread Robert Scholte

Hmm, maybe I cheered too early. A second run gave me 6 errors.
Still unsure what is keeping a lock on the files.
Both 'mvn clean' and 'rmdir /S target' fail.

F:\java-workspace\apache-maven-scm\maven-scm\maven-scm-providers\maven-scm-providers-git\maven-scm-provider-jgit>rmdir /S target
target. Weet u het zeker (J/N)? j
target\scm-test\WORKIN~1\GIT~1\objects\pack\pack-3ecde7a8782b53b94510513a4b1275d7e33392a9.idx - Toegang geweigerd.
target\scm-test\WORKIN~1\GIT~1\objects\pack\pack-3ecde7a8782b53b94510513a4b1275d7e33392a9.pack - Het proces heeft geen toegang tot het bestand omdat het door een ander proces wordt gebruikt.

translations:
- Are you sure (Y/N)
- Access denied
- The process has no access to the file because it is used by another  
process.


Robert


Op Sun, 24 Nov 2013 20:43:35 +0100 schreef Robert Scholte  
rfscho...@apache.org:



We're getting closer, only one error left:

Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 1.926  
sec  FA
ILURE! - in  
org.apache.maven.scm.provider.git.jgit.command.tag.JGitTagCommandTck

Test
testTagCommandTest(org.apache.maven.scm.provider.git.jgit.command.tag.JGitTagCom
mandTckTest)  Time elapsed: 1.817 sec   ERROR!
java.io.IOException: Could not delete file  
F:\java-workspace\apache-maven-scm\ma

ven-scm\maven-scm-providers\maven-scm-providers-git\maven-scm-provider-jgit\targ
et\scm-test\updating-copy\.git\objects\pack\pack-3ecde7a8782b53b94510513a4b1275d
7e33392a9.idx
 at org.eclipse.jgit.util.FileUtils.delete(FileUtils.java:180)
 at org.eclipse.jgit.util.FileUtils.delete(FileUtils.java:147)
 at org.eclipse.jgit.util.FileUtils.delete(FileUtils.java:149)
 at org.eclipse.jgit.util.FileUtils.delete(FileUtils.java:149)
 at org.eclipse.jgit.util.FileUtils.delete(FileUtils.java:149)
 at  
org.apache.maven.scm.provider.git.jgit.command.tag.JGitTagCommandTckT

est.deleteDirectory(JGitTagCommandTckTest.java:52)


Results :

Tests in error:
   JGitTagCommandTckTest>ScmTckTestCase.setUp:106->ScmTestCase.setUp:71->deleteDirectory:52 » IO

Even when Maven has finished I still can't delete these files.

Robert

On Sun, 24 Nov 2013 19:30:22 +0100, Dominik Bartholdi  
d...@fortysix.ch wrote:



Hi everyone,
I think I solved all the issues we had on windows with the jgit-provider
@Robert can you have another try now?
The build https://builds.apache.org/job/maven-scm/ currently fails, but  
this is related to an issue with the upload to the snapshot repository  
at https://repository.apache.org/content/repositories/snapshots/

regards Domi


On 29.10.2013, at 09:27, Olivier Lamy ol...@apache.org wrote:


For the record: vote cancelled.


On 29 October 2013 17:20, Domi d...@fortysix.ch wrote:
I was pointed to Matthias Sohn (jgit committer); let's see if he has an  
idea before we do a release of this.
His first thought was the WindowCache.reconfigure() - but Robert  
already fixed that.

/Domi

On 28.10.2013 at 20:51, Robert Scholte  
rfscho...@apache.org wrote:


@Kristian: Brilliant data!

@Dennis: the statistics have changed[1]. I managed to fix it a bit,  
but as Kristian mentioned: some parts are out of reach and can't be  
closed by our code (let's avoid reflection!).


I believe that in this case the Windows behavior is the preferred  
one: if you open a stream, you should close it too.

Anyhow, we need a fix from JGit.

Since JGit is not yet part of the Maven SCM Standard Providers I  
think we are safe.

Users need to explicitly add this provider if they want to use it.
So a "not Windows-compatible" warning on the website is fine by me.

Robert

[1] https://builds.apache.org/job/maven-scm-windows/

On Mon, 28 Oct 2013 16:15:06 +0100, Dennis Lundberg  
denn...@apache.org wrote:



Thanks a lot Kristian!

Do I understand you correctly that the leak is in the jgit Checkout  
command?

If so, there are probably more leaks in there since 9 of our tests
fail, each testing a different command. Some tests do succeed  
though.


So how do we proceed with this?
Submit patches for jgit?
Release maven-scm as is? If so we need to inform our users about the
current limitations.


I agree that Windows sometimes sucks when it comes to handling files,
but this is a double-edged sword. It does help us find problems like
these, that might otherwise pop up in a Windows production environment
after we release.

Also, having failing tests for one platform isn't very likely to
attract new developers from that platform, turning it into a
never-ending downward spiral.


On Mon, Oct 28, 2013 at 8:22 AM, Kristian Rosenvold
kristian.rosenv...@gmail.com wrote:
Finding this kind of leak with my graciously provided OSS license of
YJP is like stealing candy from children

export MAVEN_OPTS="-Xms512m -Xmx2084m -XX:MaxPermSize=512m
-agentpath:C:/java/yjp-12.0.6/bin/win64/yjpagent.dll=onexit=snapshot"
c:/java/apache-maven-3.1.1/bin/mvn $@

Run test with forkMode never.

Click on the inspections tab, run all inspections.



Re: Model Version 5.0.0

2013-11-24 Thread Stephen Connolly
On 24 November 2013 19:42, Robert Patrick robert.patr...@oracle.com wrote:

  Additionally, I think we should refine scopes... there are some that are
 likely missing and some, such as `system` that should be removed.

 Pardon my ignorance, but while I understand the negative implications of
 using system-scoped dependencies, I believe there are at least a few
 use cases where they are required.  For example, we have a plugin that
 depends on tools.jar from the JDK.  We currently use a system-scoped
 dependency for this.  If you were to remove system-scoped dependencies, how
 would you propose that people handle use cases such as this?


I think that we need a less Java centric concept for this... or at least to
rework it...

Perhaps it is part of the platform specification, since the ext directory
is really a function of the platform in some senses. The current design is
really supposed to only work with ${java.home} paths... and didn't even
work with those for OSX at least until Oracle took over JDK on OSX... too
much abuse of this as a hack leads me to think that its current incarnation
is just bad design... and I would rather strip out bad design if we have
the chance... of course I am but one voice... if others make compelling
arguments against then we can let the project committers vote and decide in
the absence of consensus... but until we get to that point I think we need
a healthy debate... this is a subject we have ignored to our peril for far
far too long



 -Original Message-
 From: Stephen Connolly [mailto:stephen.alan.conno...@gmail.com]
 Sent: Sunday, November 24, 2013 1:34 PM
 To: Maven Developers List
 Subject: Re: Model Version 5.0.0

 On 24 November 2013 17:44, Jason van Zyl ja...@tesla.io wrote:

 
  On Nov 24, 2013, at 10:28 AM, Benson Margulies bimargul...@gmail.com
  wrote:
 
   It seems to me that this thread is mixing two topics.
  
   Topic #1: How do we move to pom 5.0, given a giant ecosystem of
   crappy XML-parsing POM consumers?
  
   Topic #2: To what extent does the pom mix a 'description of contract'
   (dependencies, etc) with a 'specification of build'?
  
   On the first topic, there was a wiki page months ago that explored a
   scheme for writing both a v4 pom and a v5 pom when deploying from a
   v5 project, so that old tools could see and consume what they
 understand.
   To the extent that this scheme made sense, it can be adopted without
   (necessarily) touching the second.
  
 
  If you are referring to this:
 
 
  https://cwiki.apache.org/confluence/display/MAVEN/Moving+forward+with+the+POM+data+model
 
  Then I think what this document lacks are the use cases that drive
  anything. Without actually having some target features you cannot
  understand what you require technically.
 
  I think from the discussion thus far we have the following features:
 
  - API provides (from Stephen)
  - Runtime requirements (from Manfred)
  - Global excludes (much asked for feature)
  - Global swaps (much asked for feature)
 

 Additionally, I think we should refine scopes... there are some that are
 likely missing and some, such as `system` that should be removed.

 Platform dependency *could* be handled by dependency, e.g.

 <dependency gav="java:java:1.8:platform"/>

 could indicate that you need java 8 to run...

 The question though is how you handle multiple potential platforms, e.g.
 works on java 1.6 or android...

 That may require a change to the dependency model... some sort of
 dependency group... whereby any one of the deps in the group can satisfy
 the need...

 A potentially better solution would be a specific platform section... but
 is the more generic dep based solution perhaps better?
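
A hypothetical sketch of what such a dependency group might look like (the <dependencyGroup> element, the satisfiedBy attribute, the gav shorthand and the platform scope are all illustrative assumptions, not an existing Maven schema):

```xml
<!-- Hypothetical v5 pom fragment: any one member of the group satisfies
     the requirement. None of these elements exist in the 4.0.0 model. -->
<dependencies>
  <dependencyGroup satisfiedBy="any">
    <dependency gav="java:java:1.6:platform"/>
    <dependency gav="android:android:4.4:platform"/>
  </dependencyGroup>
</dependencies>
```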



  Additionally by requirements:
  - Are we going to allow for extensibility?
  - Are we going to be future proof?
  - Are we going to provide backward compatibility?
 
  I believe this is where we start. Many of the answers about how the
  implementation will look will be driven by specific features and
  answers to requirements questions.
 

 Another point is that if we don't ack that we need to rev the spec and
 this may be the only chance to rev the spec for a while, we won't see the
 features we need.

 Hacking the 4.0.0 pom will only make baby steps and lead to hacky
 solutions... opening up the chance to rev the pom spec and schema opens up
 the potential for other ideas


 
   On the second topic, I'm in agreement that there should be a clear
   separation between describing a contract and other things. For
   example, why is it a good idea for deployed poms to reference
   parents, rather than being self-contained? Why is it a good idea for
   deployed poms to include profiles? Why is it a good thing for
   deployed poms to include parameter references, thereby in some cases
   accidentally changing their semantics due to collisions with the
   consuming application's pom? The full 'here's how to build' pom, in
   my view, is part of the source, and 

Re: Model Version 5.0.0

2013-11-24 Thread Hervé BOUTEMY
On Sunday 24 November 2013 10:26:13, Jason van Zyl wrote:
 On Nov 24, 2013, at 12:19 AM, Manfred Moser manf...@mosabuam.com wrote:
  By separating consumption and production metadata formats, we'll be
  able to evolve production format more aggressively. For example, it
  would be nice to have Tycho-specific configuration markup inside build
  section. This is not currently possible because all poms must be
  compatible with the same model.
  
  I like the idea of consumption specifics. It would be great if we could
  agree/define some sort of standard on how to declare suitability for
  artifacts for certain deployment scenarios ..
 
 I don't believe this requires separate documents to support this.

true, this does not require separate documents

but having separate documents helps separating concerns: building a project vs 
consuming its artifacts

and the descriptor for consumption will be:
- a lot shorter than the descriptor for building: no plugins, for example
- build-agnostic

I'm pretty sure that separating descriptors will help us move forward and even 
design things better

Regards,

Hervé


-
To unsubscribe, e-mail: dev-unsubscr...@maven.apache.org
For additional commands, e-mail: dev-h...@maven.apache.org



Re: Model Version 5.0.0

2013-11-24 Thread Hervé BOUTEMY
don't we have toolchains for such a case?

Regards,

Hervé

On Sunday 24 November 2013 20:13:38, Stephen Connolly wrote:
 On 24 November 2013 19:42, Robert Patrick robert.patr...@oracle.com wrote:
   Additionally, I think we should refine scopes... there are some that are
  
  likely missing and some, such as `system` that should be removed.
  
  Pardon my ignorance but while I understand the negative implications of
  using system-scoped dependencies, I believe there are cases at least a few
  use cases where they are required.  For example, we have a plugin that
  depends on tools.jar from the JDK.  We currently use a system-scoped
  dependency for this.  If you were to remove system-scoped dependencies,
  how
  would you propose that people handle use cases such as this?
 
 I think that we need a less Java centric concept for this... or at least to
 rework it...
 
 Perhaps it is part of the platform specification, since the ext directory
 is really a function of the platform in some senses. The current design is
 really supposed to only work with ${java.home} paths... and didn't even
 work with those for OSX at least until Oracle took over JDK on OSX... too
 much abuse of this as a hack leads me to think that its current incarnation
 is just bad design... and I would rather strip out bad design if we have
 the chance... of course I am but one voice... if others make compelling
 arguments against then we can let the project committers vote and decide in
 the absence of consensus... but until we get to that point I think we need
 a healthy debate... this is a subject we have ignored to our peril for far
 far too long
 
  -Original Message-
  From: Stephen Connolly [mailto:stephen.alan.conno...@gmail.com]
  Sent: Sunday, November 24, 2013 1:34 PM
  To: Maven Developers List
  Subject: Re: Model Version 5.0.0
  
  On 24 November 2013 17:44, Jason van Zyl ja...@tesla.io wrote:
   On Nov 24, 2013, at 10:28 AM, Benson Margulies bimargul...@gmail.com
   
   wrote:
It seems to me that this thread is mixing two topics.

Topic #1: How do we move to pom 5.0, given a giant ecosystem of
crappy XML-parsing POM consumers?

Topic #2: To what extent does the pom mix a 'description of contract'
(dependencies, etc) with a 'specification of build'?

On the first topic, there was a wiki page months ago that explored a
scheme for writing both a v4 pom and a v5 pom when deploying from a
v5 project, so that old tools could see and consume what they
  
  understand.
  
To the extent that this scheme made sense, it can be adopted without
(necessarily) touching the second.
   
   If you are referring to this:
   
   
    https://cwiki.apache.org/confluence/display/MAVEN/Moving+forward+with+the+POM+data+model
   
   Then I think what this document lacks are the use cases that drive
   anything. Without actually having some target features you cannot
   understand what you require technically.
   
   I think from the discussion thus far we have the following features:
   
   - API provides (from Stephen)
   - Runtime requirements (from Manfred)
   - Global excludes (much asked for feature)
   - Global swaps (much asked for feature)
  
  Additionally, I think we should refine scopes... there are some that are
  likely missing and some, such as `system` that should be removed.
  
  Platform dependency *could* be handled by dependency, e.g.
  
   <dependency gav="java:java:1.8:platform"/>
  
  could indicate that you need java 8 to run...
  
  The question though is how you handle multiple potential platforms, e.g.
  works on java 1.6 or android...
  
  That may require a change to the dependency model... some sort of
  dependency group... whereby any one of the deps in the group can satisfy
  the need...
  
  A potentially better solution would be a specific platform section... but
  is the more generic dep based solution perhaps better?
  
   Additionally by requirements:
   - Are we going to allow for extensibility?
   - Are we going to be future proof?
   - Are we going to provide backward compatibility?
   
   I believe this is where we start. Many of the answers about how the
   implementation will look will be driven by specific features and
   answers to requirements questions.
  
  Another point is that if we don't ack that we need to rev the spec and
  this may be the only chance to rev the spec for a while, we won't see the
  features we need.
  
  Hacking the 4.0.0 pom will only make baby steps and lead to hacky
  solutions... opening up the chance to rev the pom spec and schema opens up
  the potential for other ideas
  
On the second topic, I'm in agreement that there should be a clear
separation between describing a contract and other things. For
example, why is it a good idea for deployed poms to reference
parents, rather than being self-contained? Why is it a good idea for
deployed poms to include profiles? Why is it a good thing for

Re: Model Version 5.0.0

2013-11-24 Thread Hervé BOUTEMY
Ah OK, I better understand your intent with provides: it's not a way to find 
implementers (as Igor and I expected), but a way to avoid collisions

I hadn't thought of such an approach until now: I need to think more about it

but at first glance, I find your idea better than what I feared previously :)

Regards,

Hervé

On Sunday 24 November 2013 16:16:33, Stephen Connolly wrote:
 On Sunday, 24 November 2013, Igor Fedorenko wrote:
  How do you find all artifacts that provide gav=javax:servlet-api:3.0?
 
 You don't need to.
 
 You just need to treat it as a global excludes on javax:servlet-api
 
 The difference is that it also excludes any other poms that get pulled in
 transitively and also have the same provides...
 
 You only need to look at the poms that are resolved via the current pom for
 which we are evaluating the dependency tree
 
  One option is to download entire repository index to the client, but
  Central index will likely be in 100x of megabytes, which makes this
  approach impractical. The only other option is to keep the index on the
  server and have server-side helper to answer index queries.
  
  --
  Regards,
  Igor
  
  On 11/24/2013, 10:38, Stephen Connolly wrote:
  
  On Sunday, 24 November 2013, Igor Fedorenko wrote:
   I think we are saying the same thing -- we evolve project model used
  
  during the build but deploy both the new and backwards compatible models.
  
  One quick note on representing dependencies as provided/required
  capabilities.
  
  
  
  I think it needs to be deterministic, which means it should not need an
  active server-side assist.
  
   A pom could declare
   
      <provides>
         <provide gav="javax:servlet-api:3.0"/>
      </provides>
   
   That means if you declare *that* pom as a dependency of yours it will (by
   being nearer in the graph) replace any servlet-api dependencies in your
   graph.
   
   You can also do similar with dependencies, e.g.
   
      <dependency gav="org.slf4j:log4j-over-slf4j:1.7">
         <provides>
            <provide gav="log4j:log4j:1.2"/>
         </provides>
      </dependency>
  
  Either form is deterministic and does not introduce dynamic resolution
  into
  the model... But they make the things people want to do a lot easier IMHO
  
  Although I like this idea in general, I believe it will
  
  require completely new repository layout to be able to efficiently
  find capability providers. Single repository-wide metadata index file,
  the approach implemented in P2 for example, won't scale for repositories
  of the size of Central, so most likely the new repository layout will
  require active server-side component to assist dependency resolution.
  
  --
  Regards,
  Igor
  
  On 11/24/2013, 4:25, Stephen Connolly wrote:
  
  On Sunday, 24 November 2013, Igor Fedorenko wrote:
  
  
  
  On 11/23/2013, 23:08, Jason van Zyl wrote:
  
  
  On Nov 23, 2013, at 5:44 PM, Stephen Connolly
  
  stephen.alan.conno...@gmail.com wrote:
 Before I forget, here are some of my thoughts on moving towards
  
  Model Version 5.0.0
  
  The pom that we build with need not be the pom that gets
  deployed... thus the pom that is built with need not be the same
  format as the pom that gets deployed.
  
Can you explain why you think this is useful? To me all the
  
  information that is carried with the POM after deployment is
  primarily for the consumption of tools, and there are a lot of tools
  that expect more than the dependency information. Removing all other
  elements in the POM is equivalent to being massively backward
  incompatible for an API. And if the subsequent consumption after
  deployment is primarily by programs, then why does it matter what
  gets deployed. I don't really see much benefit, but will create all
  sorts of technical problems where we need multiple readers and all
  that entails and the massive number of problems this will cause
  people who have created tooling, especially IDE integration. 
  
  
  The way I see it, what is deployed describes how the artifact needs to
  be consumed. This is artifact's public API, if you will, it will be
  consumed by wide range of tools that resolve dependencies from Maven
   repositories and the descriptor format should be very stable. Most likely
   we have no choice but to use (a subset of) the current 4.0.0 model version.
  
  
  
  I would be very sad if we are limited to a subset.
  
  There are some critical concepts that in my view are missing from pom
  files.
  
  Number one on my hit list is a provides concept.
  
  Where you declare that an artifact *provides* the same api as another GAV.
  
  Technically you'd need to be able to specify this both at the root of a
  pom
  and also flag specific dependencies (because the api they provide was not
  specified when that pom was deployed)
  
   Thus the Geronimo specs poms could all provide the corresponding JavaEE
   specs, and excludes tricks or other hacks would no longer be required.
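   
   As a concrete sketch of that idea (hypothetical markup following the
   provides syntax proposed earlier in this thread; the coordinates and
   elements are illustrative, not a published schema):
   
```xml
<!-- Hypothetical: a Geronimo spec pom declares that it provides the same
     API as the corresponding JavaEE spec GAV, so resolvers can drop the
     latter from the graph without manual excludes. -->
<project modelVersion="5.0.0">
  <gav>org.apache.geronimo.specs:geronimo-servlet_3.0_spec:1.0</gav>
  <provides>
    <provide gav="javax:servlet-api:3.0"/>
  </provides>
</project>
```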
  
  Look at the issues you will have if you 

Re: Model Version 5.0.0

2013-11-24 Thread Hervé BOUTEMY
[...]
 I think this sounds nice in theory but losing the information about how an
 artifact is produced is not a good idea.
for consumers, plugin information is really bloat

 I also don't think having a bunch
 of different tools to read one format or another is manageable.
we can have multiple tools only if the format is simple: the format for consumption 
will probably be simpler than the format for build

 I think
 that making readers that are more accepting of different versions and
 accommodating different elements is another approach. Keeping it all
 together forces you to think about the implications of a change.
it keeps consumers tied to the tool the artifact producer used to build: not 
ideal either

 
 I think general extensibility of the format might be useful but in a general
 reader. Right now specific tools can work around this issue by having a
 plugin define specifics for a type. While not ideal it works but is more
 akin to a general extension mechanism that works with a single type of
 accommodating reader.
 
 I think splitting building vs consumption will open a huge can of worms.
for sure, this is a big change
but I'm convinced this is worth trying

 Now
 I'm all for being able to aggressively change the format, but I would
 rather have a single document per version of the model. Possibly think
 about a future-proof version and just continue to publish a model
 version 4.0.0 alongside it indefinitely. I'm not sure how build vs
 consumption actually helps us evolve the model.
I didn't try to classify our wishes for evolution between consumption and build.
Evolution of the consumption format will always be hard, because there is a full 
ecosystem for consumption.
Evolution of the build format will be easier, since you can simply require someone to use 
a newer version of your tool if he wants to build some project instead of 
consuming its pre-built artifacts

Regards,

Hervé

  --
  Regards,
  Igor
  
   Only with <packaging>pom</packaging> do we actually need things like the
   plugins section in the deployed pom, because it is a reality that for
   non-pom packaging we just want the transitive dependencies.
  
  Now there is the extensions issue where you might be registering a
  different file type that has different rules with respect to the
  classpath... but I am unsure if we actually consider those when
  evaluating
  the dependency tree... and in any case, once we accept that the deployed
  pom is not the same as the pom used to build (for non-pom packaging at
  least) we can transform that dependency tree using the exact rules that
  have to be known at build time thus closing the extensions issue.
  
   For projects with <packaging>pom</packaging> in fact we are only deploying
   small files so perhaps we can deploy two pom files... the one that exposes
   the standard dependency stuff and then a second one that is used for build
   inheritance.
  
   My vision is thus that we deploy between two and three pom files for every
   project.
   
   For jar/war/ear/... we deploy:
   * a modelVersion 4.0.0 pom as .pom (only lists dependencies)
   * a modelVersion 5.0.0 pom as -v5.pom (only lists dependencies but allows
   for new scopes)
   
   For pom we deploy
   * a modelVersion 4.0.0 pom as .pom (only lists dependencies)
   * a modelVersion 5.0.0 pom as -v5.pom (only lists dependencies but allows
   for new scopes)
   * the pom itself
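   
   To illustrate, the consumer-facing -v5.pom for a jar might contain little
   more than coordinates and dependencies (a sketch; modelVersion as an
   attribute and the gav/scope shorthand are assumptions, not a published
   schema):
   
```xml
<!-- Hypothetical -v5.pom: dependency information only, no build section. -->
<project modelVersion="5.0.0">
  <groupId>org.example</groupId>
  <artifactId>example-lib</artifactId>
  <version>1.0</version>
  <dependencies>
    <dependency gav="org.slf4j:slf4j-api:1.7" scope="compile"/>
  </dependencies>
</project>
```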
  
   When building a pom, your parent pom must be of a modelVersion >= your
   modelVersion.
  
  Thanks,
  
  Jason
  
  --
  Jason van Zyl
  Founder,  Apache Maven
  http://twitter.com/jvanzyl
  -
  
  -
  To unsubscribe, e-mail: dev-unsubscr...@maven.apache.org
  For additional commands, e-mail: dev-h...@maven.apache.org
 
 Thanks,
 
 Jason
 
 --
 Jason van Zyl
 Founder,  Apache Maven
 http://twitter.com/jvanzyl
 -


-
To unsubscribe, e-mail: dev-unsubscr...@maven.apache.org
For additional commands, e-mail: dev-h...@maven.apache.org



Re: Model Version 5.0.0

2013-11-24 Thread Hervé BOUTEMY
On Sunday 24 November 2013 16:58:33, Stephen Connolly wrote:
 Given that deployed poms can be generated by Gradle, Buildr, etc... It
 makes no sense to include build information in the pom (unless it is a
 parent pom)
if you think about it, a parent pom is pure build configuration for sharing 
between multiple modules

it's not used by pure artifact consumers
it's used by builders for sharing

Regards,

Hervé

-
To unsubscribe, e-mail: dev-unsubscr...@maven.apache.org
For additional commands, e-mail: dev-h...@maven.apache.org



Re: Model Version 5.0.0

2013-11-24 Thread Barrie Treloar
On 25 November 2013 03:28, Stephen Connolly
stephen.alan.conno...@gmail.com wrote:
[del]
 Given that we have decided that the reporting stuff possibly was a
 mistake... Well let's drop that

 Given that profiles do not make sense in deployed poms... Drop them too

 We think repositories is evil... Let's drop that... We've dropped build
 and reporting= no need for pluginRepositories at all so.

I'm in favour of cleaning out elements that cause problems when they
are tweaked in a non-Maven way.
The emails to the users list would be reduced and there is less chance
of causing confusion.

Applying the current best practices and baking them into the poms is
a good thing.

-
To unsubscribe, e-mail: dev-unsubscr...@maven.apache.org
For additional commands, e-mail: dev-h...@maven.apache.org



Re: Model Version 5.0.0

2013-11-24 Thread Stephen Connolly
That's why I say parent poms are deployed in three formats: 4.0.0, 5.0.0+
and build. And you specify that your parent pom must be >= the modelVersion
of the child pom so that it can evolve as needed

On Sunday, 24 November 2013, Hervé BOUTEMY wrote:

 On Sunday 24 November 2013 16:58:33, Stephen Connolly wrote:
  Given that deployed poms can be generated by Gradle, Buildr, etc... It
  makes no sense to include build information in the pom (unless it is a
  parent pom)
 if you think at it, a parent pom is a pure build configuration for sharing
 between multiple modules

 it's not used by pure artifact consumers
 it's used by builders for sharing

 Regards,

 Hervé

 -
 To unsubscribe, e-mail: dev-unsubscr...@maven.apache.org
 For additional commands, e-mail: dev-h...@maven.apache.org



-- 
Sent from my phone


Re: Model Version 5.0.0

2013-11-24 Thread Stephen Connolly
Toolchains don't help at runtime... But for system scope you likely need a
level above to inject the deps into the ext folder (or use a system
property for the JVM startup options)... But the point is these are deps
required, but required outside of the scope that is reasonably managed by
Maven. They fit more as platform extensions to my mind


On Sunday, 24 November 2013, Hervé BOUTEMY wrote:

 don't we have toolchains for such a case?

 Regards,

 Hervé

 On Sunday 24 November 2013 20:13:38, Stephen Connolly wrote:
  On 24 November 2013 19:42, Robert Patrick robert.patr...@oracle.com
 wrote:
Additionally, I think we should refine scopes... there are some that
 are
  
   likely missing and some, such as `system` that should be removed.
  
   Pardon my ignorance but while I understand the negative implications of
   using system-scoped dependencies, I believe there are cases at least a
 few
   use cases where they are required.  For example, we have a plugin that
   depends on tools.jar from the JDK.  We currently use a system-scoped
   dependency for this.  If you were to remove system-scoped dependencies,
   how
   would you propose that people handle use cases such as this?
 
  I think that we need a less Java centric concept for this... or at least
 to
  rework it...
 
  Perhaps it is part of the platform specification, since the ext directory
  is really a function of the platform in some senses. The current design
 is
  really supposed to only work with ${java.home} paths... and didn't even
  work with those for OSX at least until Oracle took over JDK on OSX... too
  much abuse of this as a hack leads me to think that its current
 incarnation
  is just bad design... and I would rather strip out bad design if we have
  the chance... of course I am but one voice... if others make compelling
  arguments against then we can let the project committers vote and decide
 in
  the absence of consensus... but until we get to that point I think we
 need
  a healthy debate... this is a subject we have ignored to our peril for
 far
  far too long
 
   -Original Message-
   From: Stephen Connolly [mailto:stephen.alan.conno...@gmail.com]
   Sent: Sunday, November 24, 2013 1:34 PM
   To: Maven Developers List
   Subject: Re: Model Version 5.0.0
  
   On 24 November 2013 17:44, Jason van Zyl ja...@tesla.io wrote:
On Nov 24, 2013, at 10:28 AM, Benson Margulies 
 bimargul...@gmail.com
   
wrote:
 It seems to me that this thread is mixing two topics.

 Topic #1: How do we move to pom 5.0, given a giant ecosystem of
 crappy XML-parsing POM consumers?

 Topic #2: To what extent does the pom mix a 'description of
 contract'
 (dependencies, etc) with a 'specification of build'?

 On the first topic, there was a wiki page months ago that explored
 a
 scheme for writing both a v4 pom and a v5 pom when deploying from a
 v5 project, so that old tools could see and consume what they
  
   understand.
  
 To the extent that this scheme made sense, it can be adopted
 without
 (necessarily) touching the second.
   
If you are referring to this:
   
   
   
  https://cwiki.apache.org/confluence/display/MAVEN/Moving+forward+with+the+POM+data+model
   
Then I think what this document lacks are the use cases that drive
anything. Without actually having some target features you cannot
understand what you require technically.
   
I think from the discussion thus far we have the following features:
   
- API provides (from Stephen)
- Runtime requirements (from Manfred)
- Global excludes (much asked for feature)
- Global swaps (much asked for feature)
  
   Additionally, I think we should refine scopes... there are some that
 are
   likely missing and some, such as `system` that should be removed.
  
   Platform dependency *could* be handled by dependency, e.g.
  
    <dependency gav="java:java:1.8:platform"/>
  
   could indicate that you need java 8 to run.



-- 
Sent from my phone


usage of hidden.edu.emory.mathcs.backport.java.util.concurrent

2013-11-24 Thread Sergey Bondarenko
Good afternoon,

I have caught a deadlock in Maven several times, when it was executing
TestNG tests (see the thread dump attached).
It was happening in
hidden.edu.emory.mathcs.backport.java.util.concurrent.LinkedBlockingQueue.

Do you think it is a defect in this concurrency back-port?

Is there any reason why Maven uses the back-port when running in Java 7?
Should not it use default Java implementation instead?

Thanks a lot for any feedback,
Sergey

-
To unsubscribe, e-mail: dev-unsubscr...@maven.apache.org
For additional commands, e-mail: dev-h...@maven.apache.org

Re: usage of hidden.edu.emory.mathcs.backport.java.util.concurrent

2013-11-24 Thread Olivier Lamy
Within Maven core or a plugin? I'm not sure Maven core uses that..

BTW Hard to know without any logs and/or stack trace or/and the Maven
version you are using.




On 22 November 2013 06:16, Sergey Bondarenko ente...@gmail.com wrote:
 Good afternoon,

 I have caught a deadlock in Maven several times, when it was executing
 TestNG tests (see the thread dump attached).
 It was happening in
 hidden.edu.emory.mathcs.backport.java.util.concurrent.LinkedBlockingQueue.

 Do you think it is a defect in this concurrency back-port?

 Is there any reason why Maven uses the back-port when running in Java 7?
 Should not it use default Java implementation instead?

 Thanks a lot for any feedback,
 Sergey


 -
 To unsubscribe, e-mail: dev-unsubscr...@maven.apache.org
 For additional commands, e-mail: dev-h...@maven.apache.org



-- 
Olivier Lamy
Ecetera: http://ecetera.com.au
http://twitter.com/olamy | http://linkedin.com/in/olamy

-
To unsubscribe, e-mail: dev-unsubscr...@maven.apache.org
For additional commands, e-mail: dev-h...@maven.apache.org



[VOTE] Apache Maven Shade Plugin 2.2

2013-11-24 Thread Olivier Lamy
Hi,
I'd like to release Apache Maven Shade plugin 2.2

We fixed 1 issue:
http://jira.codehaus.org/secure/ReleaseNote.jspa?projectId=11540&version=18768

Staging repository:
https://repository.apache.org/content/repositories/maven-002/

Source release:
https://repository.apache.org/content/repositories/maven-002/org/apache/maven/plugins/maven-shade-plugin/2.2/maven-shade-plugin-2.2-source-release.zip

Staging site: 
http://maven.apache.org/plugins-archives/maven-shade-plugin-LATEST/

Vote open for 72 hours.

[+1]
[0]
[-1]

Thanks
-- 
Olivier Lamy
Ecetera: http://ecetera.com.au
http://twitter.com/olamy | http://linkedin.com/in/olamy




Re: Model Version 5.0.0

2013-11-24 Thread Manfred Moser
 On Sunday, 24 November 2013, Manfred Moser wrote:


  By separating consumption and production metadata formats, we'll
 be
  able to evolve production format more aggressively. For example, it
  would be nice to have Tycho-specific configuration markup inside
 build
  section. This is not currently possible because all poms must be
  compatible with the same model.

 I like the idea of consumption specifics. It would be great if we could
 agree/define some sort of standard for declaring the suitability of
 artifacts for certain deployment scenarios,
 e.g. is the jar suitable for Java 6, 7, 8, or 9? What about running on
 Android, or on some embedded Java version profile?

 I believe that the previous approach of using classifiers is just
 not powerful enough. And I also agree that we should potentially just
 stick to the existing format.

 E.g. nothing stops us from declaring a standard for a bunch of
 properties like

 <properties>
  <runtime.android>true</runtime.android>
  <runtime.java6>true</runtime.java6>
 </properties>

 or

 <properties>
  <runtime.android>false</runtime.android>
  <runtime.java6>false</runtime.java6>
  <runtime.java7>true</runtime.java7>
 </properties>


 How is that any different from having a modelVersion 5.0.0? (Other than
 not
 giving the benefit of a schema change)


It probably isn't different ;-)

manfred
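As a rough illustration of how a consumer tool might act on runtime-suitability properties like the ones discussed above, here is a minimal sketch. The class and method names are hypothetical, not any real Maven API; the convention assumed is that an absent `runtime.*` key means "not declared suitable".

```java
import java.util.Map;

// Hypothetical sketch: a consumer tool filtering artifacts by the proposed
// runtime.* suitability properties. Names are illustrative only.
public class RuntimeSuitability {

    // Returns true when the POM's properties declare the artifact suitable
    // for the given runtime key (e.g. "runtime.java7"); an absent key is
    // treated as not declared suitable.
    static boolean isSuitable(Map<String, String> pomProperties, String runtimeKey) {
        return Boolean.parseBoolean(pomProperties.getOrDefault(runtimeKey, "false"));
    }

    public static void main(String[] args) {
        Map<String, String> props = Map.of(
                "runtime.android", "false",
                "runtime.java7", "true");
        System.out.println(isSuitable(props, "runtime.java7"));  // true
        System.out.println(isSuitable(props, "runtime.android")); // false
    }
}
```

The design question the thread raises is exactly whether such an ad-hoc property convention differs in practice from a schema change in a new modelVersion.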




Re: usage of hidden.edu.emory.mathcs.backport.java.util.concurrent

2013-11-24 Thread Sergey Bondarenko
Hi Olivier,

The package is part of apache-maven-2.2.1/lib/maven-2.2.1-uber.jar.
It looks like it is part of Maven core (at least it is part of the
standard distribution).

I do not know why the attached thread dump did not work for you, so I am
sending it as a pastebin: http://pastebin.com/T1MAkwL7

There is no package like this in Maven 3.0.4, so maybe that's just an old
version.

I am getting the same deadlock at least once a week when running
TestNG-based UI tests (WebDriver) using Surefire.
Chromedriver crashes, and that leads to the deadlock in Maven. Note that
I am connecting to Chromedriver through the Selenium standalone server,
so the code that deadlocks runs in a separate VM and has nothing to do
with Chromedriver's native calls.

If I use Maven 3.0.4 and get the same Chromedriver crash, I do not get a
deadlock in Maven, so the issue is specific to version 2.2.1.
It sounds like a bug in
hidden.edu.emory.mathcs.backport.java.util.concurrent.LinkedBlockingQueue,
so if the Maven 2 line is still supported, maybe it makes sense to fix
the error there, or to get rid of this package altogether ...

What do you think about it?

Thanks,
Sergey


2013/11/24 Olivier Lamy ol...@apache.org





Re: usage of hidden.edu.emory.mathcs.backport.java.util.concurrent

2013-11-24 Thread Olivier Lamy
Sounds more like an issue in the Surefire plugin.
Try using the latest version of the plugin (2.16).



On 25 November 2013 15:29, Sergey Bondarenko ente...@gmail.com wrote:



-- 
Olivier Lamy
Ecetera: http://ecetera.com.au
http://twitter.com/olamy | http://linkedin.com/in/olamy




Re: usage of hidden.edu.emory.mathcs.backport.java.util.concurrent

2013-11-24 Thread Sergey Bondarenko
I use Surefire 2.16, and the problem is reproducible with that version. Why
do you think it is Surefire? Isn't that package part of Maven Core?

Thanks,
Sergey


2013/11/24 Olivier Lamy ol...@apache.org





Re: usage of hidden.edu.emory.mathcs.backport.java.util.concurrent

2013-11-24 Thread Olivier Lamy
Reading your stack trace:

no, that's not part of Maven core.
Maven core defines a Surefire version that you can override.
So, as core 2.2.1 is a bit old, it comes with an old Surefire version,
unless you override that in your POM.
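For reference, the override Olivier describes is the standard way to pin a plugin version in a project's POM; 2.16 matches the Surefire version mentioned earlier in the thread:

```xml
<build>
  <plugins>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-surefire-plugin</artifactId>
      <version>2.16</version>
    </plugin>
  </plugins>
</build>
```

Without such a declaration, an old Maven core falls back to whatever Surefire version it bundles.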



On 25 November 2013 16:32, Sergey Bondarenko ente...@gmail.com wrote:



-- 
Olivier Lamy
Ecetera: http://ecetera.com.au
http://twitter.com/olamy | http://linkedin.com/in/olamy




Re: Model Version 5.0.0

2013-11-24 Thread Kristian Rosenvold
IMO publishing to central/Archiva would involve publishing the richest
format available. Based on user-agent identification (or the lack of a
given request parameter indicating an old-style client), the repository
should be able to down-transform a v5 pom to a v4 pom on the fly. We're
not going to lose semantic backward compatibility with any of the
changes I've seen suggested yet, are we?

Also, did I miss the bit where someone explained why the whole "how to
build" section cannot be stripped away upon publication? I don't
understand why that means we need multiple files.

I'm exposed to the competition at @dayjob these days, and I must say I
think reducing verbosity and duplication is /the/ most important feature
of a v5 pom for me.

Kristian
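Kristian's on-the-fly down-transform idea could be sketched as simple server-side content negotiation. Everything here is an assumption for illustration: the user-agent prefixes, the notion that a hypothetical future Maven client would advertise v5 support, and the class/method names.

```java
// Hypothetical sketch of repository-side model-version negotiation:
// old-style clients get a down-transformed 4.0.0 POM, while a (purely
// hypothetical) v5-aware client gets the richest format. The user-agent
// strings below are illustrative assumptions, not a real protocol.
public class PomNegotiation {

    // Decide which POM model version to serve for a given User-Agent header.
    static String modelVersionFor(String userAgent) {
        if (userAgent != null && userAgent.startsWith("Apache-Maven/4")) {
            return "5.0.0"; // assumed future client that understands v5
        }
        return "4.0.0"; // unknown or old-style client: serve down-transformed POM
    }

    public static void main(String[] args) {
        System.out.println(modelVersionFor("Apache-Maven/3.0.4")); // 4.0.0
        System.out.println(modelVersionFor("Apache-Maven/4.1.0")); // 5.0.0
    }
}
```

The key property Kristian relies on is that the transform is lossy only in build-time detail, not in the dependency semantics consumers need.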



2013/11/25 Manfred Moser manf...@mosabuam.com
