Re: Hadoop in Fedora updated to 2.2.0

2013-10-31 Thread Steve Loughran
On 30 October 2013 18:43, Robert Rati rr...@redhat.com wrote:

 I've updated the version of Hadoop in Fedora 20 to 2.2.0.  This means
 Hadoop 2.2.0 will be included in the official release of Fedora 20.

 Hadoop on Fedora is running against numerous updated dependencies,
 including:
 Java 7 (OpenJDK IcedTea)
 Jetty 9
 Tomcat 7
 Jets3t 0.9.0

 I've logged/updated jiras for all the changes we've made that could be
 useful to the Hadoop project:

 https://issues.apache.org/jira/browse/HADOOP-9594
 https://issues.apache.org/jira/browse/MAPREDUCE-5431
 https://issues.apache.org/jira/browse/HADOOP-9611
 https://issues.apache.org/jira/browse/HADOOP-9613
 https://issues.apache.org/jira/browse/HADOOP-9623
 https://issues.apache.org/jira/browse/HDFS-5411
 https://issues.apache.org/jira/browse/HADOOP-10067
 https://issues.apache.org/jira/browse/HDFS-5075
 https://issues.apache.org/jira/browse/HADOOP-10068
 https://issues.apache.org/jira/browse/HADOOP-10075
 https://issues.apache.org/jira/browse/HADOOP-10076
 https://issues.apache.org/jira/browse/HADOOP-9849


most (all?) of these are pom changes


 Most of the changes are minor.  There are 2 big updates though: Jetty 9
 (which requires java 7) and tomcat 7.  These are also the most difficult
 patches to rebase when hadoop produces a new release.


that's not going to go into the 2.x branch. Java 6 is still a common platform
that people are using, because historically java7 (or any leading-edge java
version) has been buggy.

that said, our QA team did test hadoop 2 & HDP-2 at scale on java7 and
openjdk 7, so it all works; it's just that committing to java7-only is a big
decision.


 It would be great to get some feedback on these proposed changes and
 discuss how/when/if these could make it into a Hadoop release.

 Rob




Build failed in Jenkins: Hadoop-Common-trunk #938

2013-10-31 Thread Apache Jenkins Server
See https://builds.apache.org/job/Hadoop-Common-trunk/938/changes

Changes:

[tucu] YARN-1343. NodeManagers additions/restarts are not reported as node 
updates in AllocateResponse responses to AMs. (tucu)

[cnauroth] HDFS-4633. Change attribution in CHANGES.txt to version 2.2.1.

[vinodkv] YARN-1321. Changed NMTokenCache to support both singleton and an 
instance usage. Contributed by Alejandro Abdelnur.

[cnauroth] HDFS-5065. Change attribution in CHANGES.txt to version 2.2.1.

[cnauroth] YARN-1358. TestYarnCLI fails on Windows due to line endings. 
Contributed by Chuan Liu.

[cnauroth] YARN-1357. TestContainerLaunch.testContainerEnvVariables fails on 
Windows. Contributed by Chuan Liu.

[cnauroth] HDFS-5386. Add feature documentation for datanode caching. 
Contributed by Colin Patrick McCabe.

[cnauroth] HDFS-5432. TestDatanodeJsp fails on Windows due to assumption that 
loopback address resolves to host name localhost. Contributed by Chris Nauroth.

[wang] HDFS-5433. When reloading fsimage during checkpointing, we should clear 
existing snapshottable directories. Contributed by Aaron T. Myers.

--
[...truncated 57176 lines...]
Adding reference: maven.local.repository
[DEBUG] Initialize Maven Ant Tasks
parsing buildfile 
jar:file:/home/jenkins/.m2/repository/org/apache/maven/plugins/maven-antrun-plugin/1.7/maven-antrun-plugin-1.7.jar!/org/apache/maven/ant/tasks/antlib.xml
 with URI = 
jar:file:/home/jenkins/.m2/repository/org/apache/maven/plugins/maven-antrun-plugin/1.7/maven-antrun-plugin-1.7.jar!/org/apache/maven/ant/tasks/antlib.xml
 from a zip file
parsing buildfile 
jar:file:/home/jenkins/.m2/repository/org/apache/ant/ant/1.8.2/ant-1.8.2.jar!/org/apache/tools/ant/antlib.xml
 with URI = 
jar:file:/home/jenkins/.m2/repository/org/apache/ant/ant/1.8.2/ant-1.8.2.jar!/org/apache/tools/ant/antlib.xml
 from a zip file
Class org.apache.maven.ant.tasks.AttachArtifactTask loaded from parent loader 
(parentFirst)
 +Datatype attachartifact org.apache.maven.ant.tasks.AttachArtifactTask
Class org.apache.maven.ant.tasks.DependencyFilesetsTask loaded from parent 
loader (parentFirst)
 +Datatype dependencyfilesets org.apache.maven.ant.tasks.DependencyFilesetsTask
Setting project property: test.build.dir - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/target/test-dir
Setting project property: test.exclude.pattern - _
Setting project property: hadoop.assemblies.version - 3.0.0-SNAPSHOT
Setting project property: test.exclude - _
Setting project property: distMgmtSnapshotsId - apache.snapshots.https
Setting project property: project.build.sourceEncoding - UTF-8
Setting project property: java.security.egd - file:///dev/urandom
Setting project property: distMgmtSnapshotsUrl - 
https://repository.apache.org/content/repositories/snapshots
Setting project property: distMgmtStagingUrl - 
https://repository.apache.org/service/local/staging/deploy/maven2
Setting project property: avro.version - 1.7.4
Setting project property: test.build.data - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/target/test-dir
Setting project property: commons-daemon.version - 1.0.13
Setting project property: hadoop.common.build.dir - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/../../hadoop-common-project/hadoop-common/target
Setting project property: testsThreadCount - 4
Setting project property: maven.test.redirectTestOutputToFile - true
Setting project property: jdiff.version - 1.0.9
Setting project property: distMgmtStagingName - Apache Release Distribution 
Repository
Setting project property: project.reporting.outputEncoding - UTF-8
Setting project property: build.platform - Linux-i386-32
Setting project property: protobuf.version - 2.5.0
Setting project property: failIfNoTests - false
Setting project property: protoc.path - ${env.HADOOP_PROTOC_PATH}
Setting project property: jersey.version - 1.9
Setting project property: distMgmtStagingId - apache.staging.https
Setting project property: distMgmtSnapshotsName - Apache Development Snapshot 
Repository
Setting project property: ant.file - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/pom.xml
[DEBUG] Setting properties with prefix: 
Setting project property: project.groupId - org.apache.hadoop
Setting project property: project.artifactId - hadoop-common-project
Setting project property: project.name - Apache Hadoop Common Project
Setting project property: project.description - Apache Hadoop Common Project
Setting project property: project.version - 3.0.0-SNAPSHOT
Setting project property: project.packaging - pom
Setting project property: project.build.directory - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/target
Setting project property: project.build.outputDirectory - 
https://builds.apache.org/job/Hadoop-Common-trunk/ws/trunk/hadoop-common-project/target/classes
Setting 

Implementing a custom hadoop key and value - need Help

2013-10-31 Thread unmesha sreeveni
This is my post from Stack Overflow, but I am not getting any response there.

I need to emit a 2D double array as both key and value from the mapper. There
are questions posted on Stack Overflow about this, but they are not answered.
We have to create a custom datatype, but how? I am new to custom datatypes
and have no idea where to start. I am doing some matrix multiplication on a
given dataset, and after that I need to emit the value of A*Atrns, which will
be a matrix, and Atrns*D, which will also be a matrix. So how do I emit these
matrices from the mapper, with each value corresponding to its key? I think
for that we need to use WritableComparable.



public class MatrixWritable implements WritableComparable<MatrixWritable> {

    private double[][] value;

    public MatrixWritable() {
        // TODO Auto-generated constructor stub
        set(new double[0][0]);
    }

    public MatrixWritable(double[][] value) {
        // TODO Auto-generated constructor stub
        this.value = value;
    }

    public void set(double[][] value) {
        this.value = value;
    }

    public double[][] getValue() {
        return value;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        System.out.println("write");
        int row = 0;
        int col = 0;
        for (int i = 0; i < value.length; i++) {
            row = value.length;
            for (int j = 0; j < value[i].length; j++) {
                col = value[i].length;
            }
        }
        out.writeInt(row);
        out.writeInt(col);

        System.out.println("\nTotal no of observations: " + row + ":" + col);

        for (int i = 0; i < row; i++) {
            for (int j = 0; j < col; j++) {
                out.writeDouble(value[i][j]);
            }
        }
        // printing array
        for (int vali = 0; vali < value.length; vali++) {
            for (int valj = 0; valj < value[0].length; valj++) {
                System.out.print(value[vali][valj] + "\t");
            }
            System.out.println();
        }
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        int row = in.readInt();
        int col = in.readInt();

        // note: this declares a local array that shadows the field
        double[][] value = new double[row][col];
        for (int i = 0; i < row; i++) {
            for (int j = 0; j < col; j++) {
                value[i][j] = in.readDouble();
            }
        }
    }

    @Override
    public int hashCode() {
        // left empty in the original post (see question 4 below)
    }

    @Override
    public boolean equals(Object o) {
        // left empty in the original post (see question 4 below)
    }

    @Override
    public int compareTo(MatrixWritable o) {
        // TODO Auto-generated method stub
        return 0;
    }

    @Override
    public String toString() {
        return Arrays.toString(value);
    }
}

I wrote the matrix write, readFields, and toString methods.

1. But my toString is not returning anything. Why is that?

2. How do I print these values within the Reducer? I tried the following
(emitting only the custom value, just for checking):

public class MyReducer extends
        Reducer<MatrixWritable, MatrixWritable, IntWritable, Text> {

    public void reduce(Iterable<MatrixWritable> key,
            Iterable<MatrixWritable> values, Context context) {
        for (MatrixWritable c : values) {
            System.out.println("print value " + c.toString());
        }
    }
}

but nothing is printing. When I tried to print value[0].length inside the
toString() method, it throws an ArrayIndexOutOfBoundsException. Am I doing
anything wrong? I also need to print my data as a matrix, so I tried:

public String toString() {

    String separator = ", ";
    StringBuffer result = new StringBuffer();

    // iterate over the first dimension
    for (int i = 0; i < value.length; i++) {
        // iterate over the second dimension
        for (int j = 0; j < value[i].length; j++) {
            result.append(value[i][j]);
            System.out.print(value[i][j]);
            result.append(separator);
        }
        // remove the last separator
        result.setLength(result.length() - separator.length());
        // add a line break.
        result.append("\n");
    }

    return result.toString();
}

Again my output is empty.

3. In order to emit a key as a custom datatype too, compareTo is necessary,
right?

4. So what should I include in the compareTo, hashCode, and equals methods,
and what are these methods intended for?

Any ideas? Please suggest a solution.

-- 
Thanks & Regards

Unmesha Sreeveni U.B
Junior Developer
Amrita Center For Cyber Security
Amritapuri.

www.amrita.edu/cyber/


Re: Hadoop in Fedora updated to 2.2.0

2013-10-31 Thread Andre Kelpe
On Thu, Oct 31, 2013 at 9:44 AM, Steve Loughran ste...@hortonworks.com wrote:


 that's not going to go into the 2.x branch. Java 6 is still a common platform
 that people are using, because historically java7 (or any leading-edge java
 version) has been buggy.

Given that Java 6 is end-of-life, this is becoming more and more of a
problem. Are there any plans to retire Java 6 in Hadoop?

 that said, our QA team did test hadoop 2 & HDP-2 at scale on java7 and
 openjdk 7, so it all works; it's just that committing to java7-only is a big
 decision.

If openjdk 7 works without hiccups, deployment becomes much easier, since it
is already in all the Linux distributions and you can avoid doing the Oracle
license dance during deployment.

-- André


Re: Implementing a custom hadoop key and value - need Help

2013-10-31 Thread Amr Shahin
Check this out:
http://developer.yahoo.com/hadoop/tutorial/module5.html#writable-notes
It shows how to create a custom Writable. If you have Hadoop: The Definitive
Guide, there is a really good explanation of custom data types.

Happy Halloween
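
For what it's worth, two things in the quoted code look like the likely
culprits: readFields() assigns the decoded matrix to a local variable
instead of the field (so a deserialized object stays empty, which would
explain the empty toString() output and the ArrayIndexOutOfBoundsException),
and reduce() declares its key as Iterable<MatrixWritable>, which doesn't
match Reducer's reduce(KEYIN, Iterable<VALUEIN>, Context) signature and so
never overrides the real method. A minimal sketch of the value type along
the lines of that tutorial (the class name and layout here are illustrative,
not from the original post):

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.util.Arrays;

import org.apache.hadoop.io.Writable;

// Implement WritableComparable<DoubleMatrixWritable> instead (adding
// compareTo/hashCode/equals) if the matrix must also serve as a key.
public class DoubleMatrixWritable implements Writable {

    private double[][] matrix = new double[0][0];

    public void set(double[][] m) { this.matrix = m; }
    public double[][] get() { return matrix; }

    @Override
    public void write(DataOutput out) throws IOException {
        int rows = matrix.length;
        int cols = (rows == 0) ? 0 : matrix[0].length;
        out.writeInt(rows);
        out.writeInt(cols);
        for (int i = 0; i < rows; i++) {
            for (int j = 0; j < cols; j++) {
                out.writeDouble(matrix[i][j]);
            }
        }
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        int rows = in.readInt();
        int cols = in.readInt();
        matrix = new double[rows][cols];  // assign the field, not a local
        for (int i = 0; i < rows; i++) {
            for (int j = 0; j < cols; j++) {
                matrix[i][j] = in.readDouble();
            }
        }
    }

    @Override
    public String toString() {
        // Arrays.toString() on a double[][] prints array references like
        // "[[D@1b6d..."; deepToString() renders the nested contents.
        return Arrays.deepToString(matrix);
    }
}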


On Thu, Oct 31, 2013 at 3:03 PM, unmesha sreeveni unmeshab...@gmail.com wrote:

 this is my post from stackoverflow
 but i am not getting any response.

 [snip]


Re: Hadoop in Fedora updated to 2.2.0

2013-10-31 Thread Robert Rati

https://issues.apache.org/jira/browse/HADOOP-9594
https://issues.apache.org/jira/browse/MAPREDUCE-5431
https://issues.apache.org/jira/browse/HADOOP-9611
https://issues.apache.org/jira/browse/HADOOP-9613
https://issues.apache.org/jira/browse/HADOOP-9623
https://issues.apache.org/jira/browse/HDFS-5411
https://issues.apache.org/jira/browse/HADOOP-10067
https://issues.apache.org/jira/browse/HDFS-5075
https://issues.apache.org/jira/browse/HADOOP-10068
https://issues.apache.org/jira/browse/HADOOP-10075
https://issues.apache.org/jira/browse/HADOOP-10076
https://issues.apache.org/jira/browse/HADOOP-9849



most (all?) of these are pom changes


A good number of them are basically pom changes to update to newer versions 
of dependencies.  A few, such as commons-math3, required code changes as 
well because of a namespace change.  Some are minor code changes to enhance 
compatibility with newer dependencies.  Even the tomcat update is mostly 
changes in pom files.
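
To give a concrete (if hedged) picture of what the commons-math3 namespace
change amounts to in code, here is a small sketch; the class and method are
ones picked for illustration, not necessarily the ones Hadoop touches:

// commons-math 2.x used:  import org.apache.commons.math.util.MathUtils;
// commons-math3 renames the package (and moved some utilities around):
import org.apache.commons.math3.util.ArithmeticUtils;

public class GcdExample {
    public static void main(String[] args) {
        // gcd() lives in ArithmeticUtils in 3.x (it was on MathUtils in 2.x)
        System.out.println(ArithmeticUtils.gcd(12, 18)); // prints 6
    }
}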



Most of the changes are minor.  There are 2 big updates though: Jetty 9
(which requires java 7) and tomcat 7.  These are also the most difficult
patches to rebase when hadoop produces a new release.



that's not going to go into the 2.x branch. Java 6 is still a common platform
that people are using, because historically java7 (or any leading-edge java
version) has been buggy.

that said, our QA team did test hadoop 2 & HDP-2 at scale on java7 and
openjdk 7, so it all works; it's just that committing to java7-only is a big
decision.


I realize moving to java 7 is a big decision, and I wasn't trying to imply 
this should happen without discussion and planning, just that it would be 
nice to have the discussion and see where things land.  It can also help 
minimize work.  There is an open bz for updating jetty to jetty 8 (the last 
version that works on java 6), but if there are plans to move to java 7, 
maybe it makes sense to jump straight to jetty 9 and not test a new version 
of jetty twice.


With Hadoop in Fedora running on these newer deps, there is a test bed to 
play with that can give some level of confidence before taking the plunge on 
any major change.


Rob


Re: test-patch failing with OOM errors in javah

2013-10-31 Thread Jason Lowe
I don't think the OOM error below indicates it needs more heap space, as 
it's complaining about the inability to create a new native thread.  That is 
usually caused by a lack of available virtual address space or by hitting 
process ulimits.


What's most likely going on is that the jenkins user is hitting a process 
ulimit.  This can occur if processes have leaked from previous build/test 
runs and are using a large number of threads, or if a large number of 
processes have leaked overall.  Could someone with access to the build 
machines check whether that is indeed the case?  If it has, bonus points for 
identifying the source of the leak. ;-)
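
One way to see that this flavour of OOM is about native threads rather than
heap is a small repro sketch like the following (illustrative only, not
taken from the build):

// Spawns sleeping threads until the OS refuses a new native thread.
// The resulting OutOfMemoryError ("unable to create new native thread")
// fires even when the Java heap still has plenty of room.
public class ThreadLimitDemo {
    public static void main(String[] args) {
        long count = 0;
        try {
            while (true) {
                Thread t = new Thread(new Runnable() {
                    public void run() {
                        try { Thread.sleep(Long.MAX_VALUE); }
                        catch (InterruptedException ignored) { }
                    }
                });
                t.start();
                count++;
            }
        } catch (OutOfMemoryError e) {
            System.out.println("Thread creation failed after " + count
                + " threads: " + e.getMessage());
        }
    }
}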


Thanks!

Jason

On 10/30/2013 05:39 PM, Roman Shaposhnik wrote:

I can take a look sometime later today. In the meantime I can only
say that I've been running into the 1GB limit in a few builds as
of late. These days I just go with 2GB by default.

Thanks,
Roman.

On Wed, Oct 30, 2013 at 3:33 PM, Alejandro Abdelnur t...@cloudera.com wrote:

The following is happening in builds for MAPREDUCE and YARN patches.
I've seen the failures on the hadoop5 and hadoop7 machines. I've increased
Maven memory to 1GB (export MAVEN_OPTS=-Xmx1024m in the jenkins
jobs) but some failures still persist:
https://builds.apache.org/job/PreCommit-MAPREDUCE-Build/4159/

Does anybody have an idea of what may be going on?



thx


[INFO] --- native-maven-plugin:1.0-alpha-7:javah (default) @ hadoop-common ---
[INFO] /bin/sh -c cd
/home/jenkins/jenkins-slave/workspace/PreCommit-MAPREDUCE-Build/trunk/hadoop-common-project/hadoop-common
 /home/jenkins/tools/java/latest/bin/javah -d
/home/jenkins/jenkins-slave/workspace/PreCommit-MAPREDUCE-Build/trunk/hadoop-common-project/hadoop-common/target/native/javah
-classpath 

[jira] [Created] (HADOOP-10078) KerberosAuthenticator always does SPNEGO

2013-10-31 Thread Robert Kanter (JIRA)
Robert Kanter created HADOOP-10078:
--

 Summary: KerberosAuthenticator always does SPNEGO
 Key: HADOOP-10078
 URL: https://issues.apache.org/jira/browse/HADOOP-10078
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.3-alpha
Reporter: Robert Kanter
Assignee: Robert Kanter
Priority: Minor


HADOOP-8883 made this change to {{KerberosAuthenticator}}
{code:java}
@@ -158,7 +158,7 @@ public class KerberosAuthenticator implements Authenticator {
       conn.setRequestMethod(AUTH_HTTP_METHOD);
       conn.connect();

-      if (conn.getResponseCode() == HttpURLConnection.HTTP_OK) {
+      if (conn.getRequestProperty(AUTHORIZATION) != null && conn.getResponseCode() == HttpURLConnection.HTTP_OK) {
         LOG.debug("JDK performed authentication on our behalf.");
         // If the JDK already did the SPNEGO back-and-forth for
         // us, just pull out the token.
{code}
to fix OOZIE-1010.  However, as [~aklochkov] pointed out recently, this 
inadvertently made the if statement always false, because it turns out that 
the JDK excludes some headers, including the Authorization one that we're 
checking (see the discussion 
[here|https://issues.apache.org/jira/browse/HADOOP-8883?focusedCommentId=13807596&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13807596]).
This means that it was always calling either {{doSpnegoSequence(token);}} or 
{{getFallBackAuthenticator().authenticate(url, token);}}, which is actually 
the old behavior that existed before HADOOP-8855 changed it in the first 
place.
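
The header-hiding behavior is easy to see in isolation; a small sketch (the
URL is just a placeholder, and the exact behavior depends on the JDK build):

import java.io.IOException;
import java.net.HttpURLConnection;
import java.net.URL;

public class AuthHeaderDemo {
    public static void main(String[] args) throws IOException {
        HttpURLConnection conn = (HttpURLConnection)
            new URL("http://example.com/").openConnection();
        conn.setRequestProperty("Authorization", "Negotiate token");
        // The JDK treats Authorization as security-sensitive and hides it
        // from getRequestProperty(), so this prints null on affected JDKs,
        // which is why the patched condition above can never be true.
        System.out.println(conn.getRequestProperty("Authorization"));
    }
}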

In any case, I tried removing the Authorization check and Oozie still works 
with and without Kerberos; the NPE reported in OOZIE-1010 has since been 
properly fixed as a side effect of a similar issue in OOZIE-1368.



--
This message was sent by Atlassian JIRA
(v6.1#6144)


Re: Hadoop in Fedora updated to 2.2.0

2013-10-31 Thread Andrew Wang
I'm in agreement with Steve on this one. We're aware that Java 6 is EOL,
but we can't drop support for it during the lifetime of the 2.x line, since
that would be a (very) incompatible change. AFAIK a 3.x release fixing this
isn't on any of our horizons yet.

Best,
Andrew


On Thu, Oct 31, 2013 at 6:15 AM, Robert Rati rr...@redhat.com wrote:

 [snip]