[ https://issues.apache.org/jira/browse/HADOOP-9991?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13774389#comment-13774389 ]
Steve Loughran commented on HADOOP-9991:
----------------------------------------

Proposed fixes:
# Turn on Maven enforcement in the build to highlight inconsistencies coming in from below (e.g. Avro's SLF4J dependency).
# Fix those inconsistencies by excluding the conflicting artifacts pulled in by dependencies.
# Add explicit imports and scope limits on all dependencies, with version numbers we manage.
# Tighten the downstream exported dependencies, so that hadoop-client only declares dependencies on the JARs it really needs (not, say, JUnit).
# Enumerate later versions of JARs that we can easily migrate to simply by incrementing version numbers, and do that in trunk, with the enforcer identifying more dependency problems to address.
# Identify low-cost updates (ideally those with patches already in, like the JetS3t/S3 patch) and selectively apply them, again fixing problems as they surface.

I'd push this all at trunk, though items 1-4 could be backported to 2.x once complete.

> Fix up Hadoop Poms for enforced dependencies, roll up JARs to latest versions
> -----------------------------------------------------------------------------
>
>                 Key: HADOOP-9991
>                 URL: https://issues.apache.org/jira/browse/HADOOP-9991
>             Project: Hadoop Common
>          Issue Type: Improvement
>          Components: build
>    Affects Versions: 2.3.0, 2.1.1-beta
>            Reporter: Steve Loughran
>
> If you try using Hadoop downstream with a classpath shared with HBase and
> Accumulo, you soon discover how messy the dependencies are.
> Hadoop's side of this problem is
> # not being up to date with some of the external releases of common JARs
> # not locking down/excluding inconsistent versions of artifacts provided down
> the dependency graph
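For item 1, a minimal sketch of what turning on enforcement could look like in the parent POM — the `dependencyConvergence` rule fails the build whenever two paths through the dependency graph resolve an artifact to different versions. The plugin version and execution id here are illustrative, not taken from the actual Hadoop build:

```xml
<!-- Illustrative fragment for a parent pom.xml: fail the build on
     version divergence anywhere in the transitive dependency graph. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-enforcer-plugin</artifactId>
  <version>1.3.1</version> <!-- assumed version for illustration -->
  <executions>
    <execution>
      <id>enforce-dependency-convergence</id>
      <goals>
        <goal>enforce</goal>
      </goals>
      <configuration>
        <rules>
          <dependencyConvergence/>
        </rules>
      </configuration>
    </execution>
  </executions>
</plugin>
```

Running `mvn enforcer:enforce` (or any build phase the execution is bound to) then lists each converging conflict, which is how inconsistencies like the Avro/SLF4J one would be surfaced.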
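Items 2 and 3 amount to pinning versions in `<dependencyManagement>` and excluding the transitive artifacts that conflict. A hedged sketch, using the Avro/SLF4J example from the comment — the property name and whether SLF4J is the right artifact to exclude here are assumptions for illustration:

```xml
<!-- Illustrative fragment: manage the Avro version centrally and
     exclude its SLF4J dependency so Hadoop's own SLF4J version wins. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.apache.avro</groupId>
      <artifactId>avro</artifactId>
      <version>${avro.version}</version> <!-- hypothetical property -->
      <exclusions>
        <exclusion>
          <groupId>org.slf4j</groupId>
          <artifactId>slf4j-api</artifactId>
        </exclusion>
      </exclusions>
    </dependency>
  </dependencies>
</dependencyManagement>
```

The same pattern extends to item 4: modules such as hadoop-client declare only the managed dependencies they actually export, with test-only artifacts like JUnit kept at `<scope>test</scope>` so they never leak downstream.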