[jira] [Commented] (HADOOP-8201) create the configure script for native compilation as part of the build

2012-03-23 Thread Matt Foley (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13236383#comment-13236383
 ] 

Matt Foley commented on HADOOP-8201:


Giri, isn't this the issue that causes MR to fail in 1.0.2-rc1 if Snappy 
compression is configured on?  If so, it needs to be marked as a bug, not an 
improvement.  Thanks.

 create the configure script for native compilation as part of the build
 ---

 Key: HADOOP-8201
 URL: https://issues.apache.org/jira/browse/HADOOP-8201
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 1.0.0, 1.0.1
Reporter: Giridharan Kesavan
Assignee: Giridharan Kesavan
Priority: Blocker
 Attachments: HADOOP-8201.patch


 The configure script is checked into svn and is not regenerated during the 
 build. Ideally the configure script should not be checked into svn; instead it 
 should be generated during the build using autoreconf.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8159) NetworkTopology: getLeaf should check for invalid topologies

2012-03-23 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13236561#comment-13236561
 ] 

Hudson commented on HADOOP-8159:


Integrated in Hadoop-Hdfs-trunk #993 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/993/])
HADOOP-8159. NetworkTopology: getLeaf should check for invalid topologies. 
Contributed by Colin Patrick McCabe (Revision 1304118)

 Result = FAILURE
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1304118
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java


 NetworkTopology: getLeaf should check for invalid topologies
 

 Key: HADOOP-8159
 URL: https://issues.apache.org/jira/browse/HADOOP-8159
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 0.23.3

 Attachments: HADOOP-8159-b1.005.patch, HADOOP-8159-b1.007.patch, 
 HADOOP-8159.005.patch, HADOOP-8159.006.patch, HADOOP-8159.007.patch, 
 HADOOP-8159.008.patch, HADOOP-8159.009.patch


 Currently, in NetworkTopology, getLeaf doesn't do much validation on the 
 InnerNode object itself, so we sometimes get a ClassCastException when the 
 network topology is invalid. We should produce a less confusing exception 
 message for this case.
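
A minimal sketch of the kind of check being proposed, with assumed names (this 
is not the attached patch): validate the node type before casting, so an 
invalid topology produces a descriptive error instead of a bare 
ClassCastException.

{code}
// Hypothetical helper -- method name and placement are assumptions,
// not the committed HADOOP-8159 change.
private static InnerNode checkInnerNode(Node node) {
  if (!(node instanceof InnerNode)) {
    throw new IllegalArgumentException("Invalid network topology: node "
        + node.getName() + " (" + node.getClass().getName()
        + ") was expected to be an inner node");
  }
  return (InnerNode) node;
}
{code}

getLeaf could route every downcast through a check like this and keep the rest 
of its logic unchanged.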

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8197) Configuration logs WARNs on every use of a deprecated key

2012-03-23 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13236562#comment-13236562
 ] 

Hudson commented on HADOOP-8197:


Integrated in Hadoop-Hdfs-trunk #993 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/993/])
HADOOP-8197. Configuration logs WARNs on every use of a deprecated key 
(tucu) (Revision 1303884)

 Result = FAILURE
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1303884
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestDeprecatedKeys.java


 Configuration logs WARNs on every use of a deprecated key
 -

 Key: HADOOP-8197
 URL: https://issues.apache.org/jira/browse/HADOOP-8197
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 0.23.3, 0.24.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Critical
 Fix For: 0.23.3

 Attachments: HADOOP-8197.patch, HADOOP-8197.patch


 The logic to print a warning only once per deprecated key does not work:
 {code}
 2012-03-21 22:32:58,121  WARN Configuration:661 - user.name is deprecated. 
 Instead, use mapreduce.job.user.name
 
 2012-03-21 22:32:58,123  WARN Configuration:661 - fs.default.name is 
 deprecated. Instead, use fs.defaultFS
 ...
 2012-03-21 22:32:58,130  WARN Configuration:661 - mapred.job.tracker is 
 deprecated. Instead, use mapreduce.jobtracker.address
 2012-03-21 22:32:58,351  WARN Configuration:345 - fs.default.name is 
 deprecated. Instead, use fs.defaultFS
 ...
 2012-03-21 22:32:58,843  WARN Configuration:661 - user.name is deprecated. 
 Instead, use mapreduce.job.user.name
 2012-03-21 22:32:58,844  WARN Configuration:661 - mapred.job.tracker is 
 deprecated. Instead, use mapreduce.jobtracker.address
 2012-03-21 22:32:58,844  WARN Configuration:661 - fs.default.name is 
 deprecated. Instead, use fs.defaultFS
 {code}
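
A minimal sketch of once-per-key warning logic, using hypothetical field and 
method names rather than the actual Configuration change in the attached patch:

{code}
// Sketch only: remember which deprecated keys have already been logged.
// Assumes java.util.Set, java.util.Collections and
// java.util.concurrent.ConcurrentHashMap are imported.
private static final Set<String> warnedKeys =
    Collections.newSetFromMap(new ConcurrentHashMap<String, Boolean>());

private static void warnOnceIfDeprecated(String oldKey, String newKey) {
  // Set.add() returns false when the key is already present, so each
  // deprecated key is warned about at most once per JVM.
  if (warnedKeys.add(oldKey)) {
    LOG.warn(oldKey + " is deprecated. Instead, use " + newKey);
  }
}
{code}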

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8200) Remove HADOOP_[JOBTRACKER|TASKTRACKER]_OPTS

2012-03-23 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13236560#comment-13236560
 ] 

Hudson commented on HADOOP-8200:


Integrated in Hadoop-Hdfs-trunk #993 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/993/])
HADOOP-8200. Remove HADOOP_[JOBTRACKER|TASKTRACKER]_OPTS. Contributed by 
Eli Collins (Revision 1304112)

 Result = FAILURE
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1304112
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/cluster_setup.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/packages/hadoop-setup-conf.sh
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/packages/templates/conf/hadoop-env.sh


 Remove HADOOP_[JOBTRACKER|TASKTRACKER]_OPTS 
 

 Key: HADOOP-8200
 URL: https://issues.apache.org/jira/browse/HADOOP-8200
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Minor
 Fix For: 0.23.3

 Attachments: hadoop-8200.txt


 The HADOOP_[JOBTRACKER|TASKTRACKER]_OPTS env variables are no longer in 
 trunk/23 since there's no MR1 implementation and the tests don't use them. 
 This makes the patch for HADOOP-8149 easier.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8159) NetworkTopology: getLeaf should check for invalid topologies

2012-03-23 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13236568#comment-13236568
 ] 

Hudson commented on HADOOP-8159:


Integrated in Hadoop-Hdfs-0.23-Build #206 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/206/])
HADOOP-8159. svn merge -c 1304118 from trunk (Revision 1304119)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1304119
Files : 
* /hadoop/common/branches/branch-0.23
* /hadoop/common/branches/branch-0.23/hadoop-common-project
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-auth
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/core
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/native
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/bin
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/conf
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-examples
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/c++
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/block_forensics
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/build-contrib.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/build.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/data_join
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/eclipse-plugin
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/index
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/vaidya
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/examples
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/java
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/fs
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/ipc
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/webapps/job
* /hadoop/common/branches/branch-0.23/hadoop-project
* /hadoop/common/branches/branch-0.23/hadoop-project/src/site


 NetworkTopology: getLeaf should check for invalid topologies
 

 Key: HADOOP-8159
 URL: https://issues.apache.org/jira/browse/HADOOP-8159
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 0.23.3

 Attachments: HADOOP-8159-b1.005.patch, HADOOP-8159-b1.007.patch, 
 HADOOP-8159.005.patch, HADOOP-8159.006.patch, HADOOP-8159.007.patch, 
 HADOOP-8159.008.patch, HADOOP-8159.009.patch


 Currently, in NetworkTopology, getLeaf doesn't do much validation on the 
 InnerNode object itself, so we sometimes get a ClassCastException when the 
 network topology is invalid. We should produce a less confusing exception 
 message for this case.

--
This message is automatically 

[jira] [Commented] (HADOOP-8197) Configuration logs WARNs on every use of a deprecated key

2012-03-23 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13236569#comment-13236569
 ] 

Hudson commented on HADOOP-8197:


Integrated in Hadoop-Hdfs-0.23-Build #206 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/206/])
Merge -r 1303883:1303884 from trunk to branch. FIXES: HADOOP-8197 (Revision 
1303886)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1303886
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestDeprecatedKeys.java


 Configuration logs WARNs on every use of a deprecated key
 -

 Key: HADOOP-8197
 URL: https://issues.apache.org/jira/browse/HADOOP-8197
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 0.23.3, 0.24.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Critical
 Fix For: 0.23.3

 Attachments: HADOOP-8197.patch, HADOOP-8197.patch


 The logic to print a warning only once per deprecated key does not work:
 {code}
 2012-03-21 22:32:58,121  WARN Configuration:661 - user.name is deprecated. 
 Instead, use mapreduce.job.user.name
 
 2012-03-21 22:32:58,123  WARN Configuration:661 - fs.default.name is 
 deprecated. Instead, use fs.defaultFS
 ...
 2012-03-21 22:32:58,130  WARN Configuration:661 - mapred.job.tracker is 
 deprecated. Instead, use mapreduce.jobtracker.address
 2012-03-21 22:32:58,351  WARN Configuration:345 - fs.default.name is 
 deprecated. Instead, use fs.defaultFS
 ...
 2012-03-21 22:32:58,843  WARN Configuration:661 - user.name is deprecated. 
 Instead, use mapreduce.job.user.name
 2012-03-21 22:32:58,844  WARN Configuration:661 - mapred.job.tracker is 
 deprecated. Instead, use mapreduce.jobtracker.address
 2012-03-21 22:32:58,844  WARN Configuration:661 - fs.default.name is 
 deprecated. Instead, use fs.defaultFS
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8200) Remove HADOOP_[JOBTRACKER|TASKTRACKER]_OPTS

2012-03-23 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13236567#comment-13236567
 ] 

Hudson commented on HADOOP-8200:


Integrated in Hadoop-Hdfs-0.23-Build #206 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/206/])
HADOOP-8200. svn merge -c 1304112 from trunk (Revision 1304113)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1304113
Files : 
* /hadoop/common/branches/branch-0.23
* /hadoop/common/branches/branch-0.23/hadoop-common-project
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-auth
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/cluster_setup.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/packages/hadoop-setup-conf.sh
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/packages/templates/conf/hadoop-env.sh
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/core
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/native
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/bin
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/conf
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-examples
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/c++
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/block_forensics
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/build-contrib.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/build.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/data_join
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/eclipse-plugin
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/index
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/vaidya
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/examples
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/java
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/fs
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/ipc
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/webapps/job
* /hadoop/common/branches/branch-0.23/hadoop-project
* /hadoop/common/branches/branch-0.23/hadoop-project/src/site


 Remove HADOOP_[JOBTRACKER|TASKTRACKER]_OPTS 
 

 Key: HADOOP-8200
 URL: https://issues.apache.org/jira/browse/HADOOP-8200
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Minor
 Fix For: 0.23.3

 Attachments: hadoop-8200.txt


 The HADOOP_[JOBTRACKER|TASKTRACKER]_OPTS env variables are no longer in 
 trunk/23 since there's no MR1 implementation and the tests don't use them. 
 This makes the patch for HADOOP-8149 easier.

--

[jira] [Commented] (HADOOP-8200) Remove HADOOP_[JOBTRACKER|TASKTRACKER]_OPTS

2012-03-23 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13236574#comment-13236574
 ] 

Hudson commented on HADOOP-8200:


Integrated in Hadoop-Mapreduce-0.23-Build #234 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Build/234/])
HADOOP-8200. svn merge -c 1304112 from trunk (Revision 1304113)

 Result = FAILURE
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1304113
Files : 
* /hadoop/common/branches/branch-0.23
* /hadoop/common/branches/branch-0.23/hadoop-common-project
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-auth
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/cluster_setup.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/packages/hadoop-setup-conf.sh
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/packages/templates/conf/hadoop-env.sh
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/core
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/native
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/bin
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/conf
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-examples
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/c++
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/block_forensics
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/build-contrib.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/build.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/data_join
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/eclipse-plugin
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/index
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/vaidya
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/examples
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/java
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/fs
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/ipc
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/webapps/job
* /hadoop/common/branches/branch-0.23/hadoop-project
* /hadoop/common/branches/branch-0.23/hadoop-project/src/site


 Remove HADOOP_[JOBTRACKER|TASKTRACKER]_OPTS 
 

 Key: HADOOP-8200
 URL: https://issues.apache.org/jira/browse/HADOOP-8200
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Minor
 Fix For: 0.23.3

 Attachments: hadoop-8200.txt


 The HADOOP_[JOBTRACKER|TASKTRACKER]_OPTS env variables are no longer in 
 trunk/23 since there's no MR1 implementation and the tests don't use them. 
 This makes the patch for HADOOP-8149 easier.

[jira] [Commented] (HADOOP-8159) NetworkTopology: getLeaf should check for invalid topologies

2012-03-23 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13236575#comment-13236575
 ] 

Hudson commented on HADOOP-8159:


Integrated in Hadoop-Mapreduce-0.23-Build #234 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Build/234/])
HADOOP-8159. svn merge -c 1304118 from trunk (Revision 1304119)

 Result = FAILURE
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1304119
Files : 
* /hadoop/common/branches/branch-0.23
* /hadoop/common/branches/branch-0.23/hadoop-common-project
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-auth
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/docs
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/core
* /hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/native
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/datanode
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/main/webapps/secondary
* 
/hadoop/common/branches/branch-0.23/hadoop-hdfs-project/hadoop-hdfs/src/test/hdfs
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/bin
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/conf
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/src/main/resources/mapred-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-mapreduce-examples
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-site/src/site/apt
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/c++
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/block_forensics
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/build-contrib.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/build.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/data_join
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/eclipse-plugin
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/index
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/contrib/vaidya
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/examples
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/java
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/fs
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/hdfs
* 
/hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/test/mapred/org/apache/hadoop/ipc
* /hadoop/common/branches/branch-0.23/hadoop-mapreduce-project/src/webapps/job
* /hadoop/common/branches/branch-0.23/hadoop-project
* /hadoop/common/branches/branch-0.23/hadoop-project/src/site


 NetworkTopology: getLeaf should check for invalid topologies
 

 Key: HADOOP-8159
 URL: https://issues.apache.org/jira/browse/HADOOP-8159
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 0.23.3

 Attachments: HADOOP-8159-b1.005.patch, HADOOP-8159-b1.007.patch, 
 HADOOP-8159.005.patch, HADOOP-8159.006.patch, HADOOP-8159.007.patch, 
 HADOOP-8159.008.patch, HADOOP-8159.009.patch


 Currently, in NetworkTopology, getLeaf doesn't do much validation on the 
 InnerNode object itself, so we sometimes get a ClassCastException when the 
 network topology is invalid. We should produce a less confusing exception 
 message for this case.

--
This message is 

[jira] [Commented] (HADOOP-8197) Configuration logs WARNs on every use of a deprecated key

2012-03-23 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13236576#comment-13236576
 ] 

Hudson commented on HADOOP-8197:


Integrated in Hadoop-Mapreduce-0.23-Build #234 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Build/234/])
Merge -r 1303883:1303884 from trunk to branch. FIXES: HADOOP-8197 (Revision 
1303886)

 Result = FAILURE
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1303886
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestDeprecatedKeys.java


 Configuration logs WARNs on every use of a deprecated key
 -

 Key: HADOOP-8197
 URL: https://issues.apache.org/jira/browse/HADOOP-8197
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 0.23.3, 0.24.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Critical
 Fix For: 0.23.3

 Attachments: HADOOP-8197.patch, HADOOP-8197.patch


 The logic to print a warning only once per deprecated key does not work:
 {code}
 2012-03-21 22:32:58,121  WARN Configuration:661 - user.name is deprecated. 
 Instead, use mapreduce.job.user.name
 
 2012-03-21 22:32:58,123  WARN Configuration:661 - fs.default.name is 
 deprecated. Instead, use fs.defaultFS
 ...
 2012-03-21 22:32:58,130  WARN Configuration:661 - mapred.job.tracker is 
 deprecated. Instead, use mapreduce.jobtracker.address
 2012-03-21 22:32:58,351  WARN Configuration:345 - fs.default.name is 
 deprecated. Instead, use fs.defaultFS
 ...
 2012-03-21 22:32:58,843  WARN Configuration:661 - user.name is deprecated. 
 Instead, use mapreduce.job.user.name
 2012-03-21 22:32:58,844  WARN Configuration:661 - mapred.job.tracker is 
 deprecated. Instead, use mapreduce.jobtracker.address
 2012-03-21 22:32:58,844  WARN Configuration:661 - fs.default.name is 
 deprecated. Instead, use fs.defaultFS
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8159) NetworkTopology: getLeaf should check for invalid topologies

2012-03-23 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13236603#comment-13236603
 ] 

Hudson commented on HADOOP-8159:


Integrated in Hadoop-Mapreduce-trunk #1028 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1028/])
HADOOP-8159. NetworkTopology: getLeaf should check for invalid topologies. 
Contributed by Colin Patrick McCabe (Revision 1304118)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1304118
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetworkTopology.java


 NetworkTopology: getLeaf should check for invalid topologies
 

 Key: HADOOP-8159
 URL: https://issues.apache.org/jira/browse/HADOOP-8159
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 0.23.3

 Attachments: HADOOP-8159-b1.005.patch, HADOOP-8159-b1.007.patch, 
 HADOOP-8159.005.patch, HADOOP-8159.006.patch, HADOOP-8159.007.patch, 
 HADOOP-8159.008.patch, HADOOP-8159.009.patch


 Currently, in NetworkTopology, getLeaf doesn't do much validation on the 
 InnerNode object itself, so we sometimes get a ClassCastException when the 
 network topology is invalid. We should produce a less confusing exception 
 message for this case.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8197) Configuration logs WARNs on every use of a deprecated key

2012-03-23 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8197?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13236604#comment-13236604
 ] 

Hudson commented on HADOOP-8197:


Integrated in Hadoop-Mapreduce-trunk #1028 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1028/])
HADOOP-8197. Configuration logs WARNs on every use of a deprecated key 
(tucu) (Revision 1303884)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1303884
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestDeprecatedKeys.java


 Configuration logs WARNs on every use of a deprecated key
 -

 Key: HADOOP-8197
 URL: https://issues.apache.org/jira/browse/HADOOP-8197
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 0.23.3, 0.24.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Critical
 Fix For: 0.23.3

 Attachments: HADOOP-8197.patch, HADOOP-8197.patch


 The logic to print a warning only once per deprecated key does not work:
 {code}
 2012-03-21 22:32:58,121  WARN Configuration:661 - user.name is deprecated. 
 Instead, use mapreduce.job.user.name
 
 2012-03-21 22:32:58,123  WARN Configuration:661 - fs.default.name is 
 deprecated. Instead, use fs.defaultFS
 ...
 2012-03-21 22:32:58,130  WARN Configuration:661 - mapred.job.tracker is 
 deprecated. Instead, use mapreduce.jobtracker.address
 2012-03-21 22:32:58,351  WARN Configuration:345 - fs.default.name is 
 deprecated. Instead, use fs.defaultFS
 ...
 2012-03-21 22:32:58,843  WARN Configuration:661 - user.name is deprecated. 
 Instead, use mapreduce.job.user.name
 2012-03-21 22:32:58,844  WARN Configuration:661 - mapred.job.tracker is 
 deprecated. Instead, use mapreduce.jobtracker.address
 2012-03-21 22:32:58,844  WARN Configuration:661 - fs.default.name is 
 deprecated. Instead, use fs.defaultFS
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8200) Remove HADOOP_[JOBTRACKER|TASKTRACKER]_OPTS

2012-03-23 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8200?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13236602#comment-13236602
 ] 

Hudson commented on HADOOP-8200:


Integrated in Hadoop-Mapreduce-trunk #1028 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1028/])
HADOOP-8200. Remove HADOOP_[JOBTRACKER|TASKTRACKER]_OPTS. Contributed by 
Eli Collins (Revision 1304112)

 Result = SUCCESS
eli : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1304112
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/docs/src/documentation/content/xdocs/cluster_setup.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/packages/hadoop-setup-conf.sh
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/packages/templates/conf/hadoop-env.sh


 Remove HADOOP_[JOBTRACKER|TASKTRACKER]_OPTS 
 

 Key: HADOOP-8200
 URL: https://issues.apache.org/jira/browse/HADOOP-8200
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Minor
 Fix For: 0.23.3

 Attachments: hadoop-8200.txt


 The HADOOP_[JOBTRACKER|TASKTRACKER]_OPTS env variables are no longer in 
 trunk/23 since there's no MR1 implementation and the tests don't use them. 
 This makes the patch for HADOOP-8149 easier.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7682) taskTracker could not start because Failed to set permissions to ttprivate to 0700

2012-03-23 Thread FKorning (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13236645#comment-13236645
 ] 

FKorning commented on HADOOP-7682:
--


There are a bunch of issues at work here.  I've patched this up locally
on my own 1.0.2-SNAPSHOT, but it takes a lot of yak-shaving to fix.



--- 

First you need to set up hadoop-1.0.1 including source, ant, ivy,
and cygwin with ssh/ssl and tcp_wrappers.

Then use sshd_config to create a cyg_server privileged user.
From an admin cygwin shell, you then have to edit the /etc/passwd
file and give that user a valid shell and user home, change the
password for the user, and finally generate ssh keys for the user
and copy the user's id_rsa.pub public key into ~/.ssh/authorized_keys.

If done right, you should be able to ssh cyg_server@localhost.


--- 

Now the main problem is a confusion between the hadoop shell scripts,
which expect unix paths like /tmp, and the hadoop java binaries, which
interpret this path as C:\tmp.

Unfortunately, neither Cygwin symlinks nor even Windows NT Junctions
are supported by the java io filesystem.  Thus the only way to get
around this is to enforce the cygwin paths to be identical to windows
paths.

I get around this by creating a circular symlink from /cygwin -> /.
To avoid confusion with C: drive mappings, all my paths are relative.
This means that windows \cygwin\tmp equals cygwin's /cygwin/tmp.

For pid files use /cygwin/tmp/
For tmp files use /cygwin/tmp/hadoop-${USER}/
For log files use /cygwin/tmp/hadoop-${USER}/logs/


--- 

First, the ssh slaves invocation wrapper is broken because it fails to
provide the user's ssh login, which cygwin openssh does not default to.


slaves.sh:

for slave in `cat "$HOSTLIST"|sed "s/#.*$//;/^$/d"`; do
 ssh -l $USER $HADOOP_SSH_OPTS $slave $"${@// /\\ }" \
   2>&1 | sed "s/^/$slave: /" &
 if [ "$HADOOP_SLAVE_SLEEP" != "" ]; then
   sleep $HADOOP_SLAVE_SLEEP
 fi
done


Next, the hadoop shell scripts are broken.  You need to fix the environment
for cygwin paths in hadoop-env.sh, and then make sure this file is invoked
by both hadoop-config.sh and the hadoop sh wrapper script itself. For me
its JRE java invocation was also broken, so I provide the whole script below.


hadoop-env.sh:

  HADOOP_PID_DIR=/cygwin/tmp/
  HADOOP_TMP_DIR=/cygwin/tmp/hadoop-${USER}
  HADOOP_LOG_DIR=/cygwin/tmp/hadoop-${USER}/logs



hadoop (sh):


#!/usr/bin/env bash

# Licensed to the Apache Software Foundation (ASF) under one or more
# contributor license agreements.  See the NOTICE file distributed with
# this work for additional information regarding copyright ownership.
# The ASF licenses this file to You under the Apache License, Version 2.0
# (the "License"); you may not use this file except in compliance with
# the License.  You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


# The Hadoop command script
#
# Environment Variables
#
#   JAVA_HOME        The java implementation to use.  Overrides JAVA_HOME.
#
#   HADOOP_CLASSPATH Extra Java CLASSPATH entries.
#
#   HADOOP_USER_CLASSPATH_FIRST      When defined, the HADOOP_CLASSPATH is
#                                    added in the beginning of the global
#                                    classpath. Can be defined, for example,
#                                    by doing
#                                    export HADOOP_USER_CLASSPATH_FIRST=true
#
#   HADOOP_HEAPSIZE  The maximum amount of heap to use, in MB.
#                    Default is 1000.
#
#   HADOOP_OPTS      Extra Java runtime options.
#
#   HADOOP_NAMENODE_OPTS       These options are added to HADOOP_OPTS
#   HADOOP_CLIENT_OPTS         when the respective command is run.
#   HADOOP_{COMMAND}_OPTS etc  HADOOP_JT_OPTS applies to JobTracker
#                              for e.g.  HADOOP_CLIENT_OPTS applies to
#                              more than one command (fs, dfs, fsck,
#                              dfsadmin etc)
#
#   HADOOP_CONF_DIR  Alternate conf dir. Default is ${HADOOP_HOME}/conf.
#
#   HADOOP_ROOT_LOGGER The root appender. Default is INFO,console
#

bin=`dirname "$0"`
bin=`cd "$bin"; pwd`

cygwin=false
case `uname` in
CYGWIN*) cygwin=true;;
esac


if [ -e "$bin"/../libexec/hadoop-config.sh ]; then
  . "$bin"/../libexec/hadoop-config.sh
else
  . "$bin"/hadoop-config.sh
fi


# if no args specified, show usage
if [ $# = 0 ]; then
  echo "Usage: hadoop [--config confdir] COMMAND"
  echo "where COMMAND is one of:"
  echo "  namenode -format     format the DFS filesystem"
  echo "  secondarynamenode    run the DFS secondary namenode"
  echo   namenode run the 

[jira] [Commented] (HADOOP-8201) create the configure script for native compilation as part of the build

2012-03-23 Thread Owen O'Malley (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8201?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13236755#comment-13236755
 ] 

Owen O'Malley commented on HADOOP-8201:
---

+1

We should systematically remove all of the autoconf/automake files and 
regenerate them in the build directory, but this is a step in the right 
direction.

 create the configure script for native compilation as part of the build
 ---

 Key: HADOOP-8201
 URL: https://issues.apache.org/jira/browse/HADOOP-8201
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 1.0.0, 1.0.1
Reporter: Giridharan Kesavan
Assignee: Giridharan Kesavan
Priority: Blocker
 Attachments: HADOOP-8201.patch


 The configure script is checked into svn and is not regenerated during the 
 build. Ideally the configure script should not be checked into svn; instead it 
 should be generated during the build using autoreconf.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8202) stopproxy() is not closing the proxies correctly

2012-03-23 Thread Hari Mankude (Created) (JIRA)
stopproxy() is not closing the proxies correctly


 Key: HADOOP-8202
 URL: https://issues.apache.org/jira/browse/HADOOP-8202
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.24.0
Reporter: Hari Mankude
Assignee: Hari Mankude
Priority: Minor


I was running testbackupnode and noticed that NNprotocol proxy was not being 
closed. Talked with Suresh and he observed that most of the protocols do not 
implement ProtocolTranslator and hence the logic in stopproxy() does not work. 
Instead, since all of them are closeable, Suresh suggested that closeable 
property should be used at close.
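
A rough sketch of the suggested direction, assuming the proxy (or its 
invocation handler) implements java.io.Closeable; the method shape and names 
are illustrative, not the attached patch:

{code}
// Sketch only: close a proxy via Closeable instead of relying on
// ProtocolTranslator. Assumes java.io.Closeable, java.io.IOException,
// java.lang.reflect.InvocationHandler and java.lang.reflect.Proxy.
public static void stopProxy(Object proxy) {
  if (proxy == null) {
    return;
  }
  try {
    if (proxy instanceof Closeable) {
      ((Closeable) proxy).close();     // e.g. translator wrappers
      return;
    }
    // Otherwise assume a java.lang.reflect dynamic proxy and close its
    // invocation handler if that is Closeable.
    InvocationHandler handler = Proxy.getInvocationHandler(proxy);
    if (handler instanceof Closeable) {
      ((Closeable) handler).close();
    }
  } catch (IOException e) {
    LOG.error("Closing proxy or invocation handler caused exception", e);
  }
}
{code}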


--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8202) stopproxy() is not closing the proxies correctly

2012-03-23 Thread Hari Mankude (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13236789#comment-13236789
 ] 

Hari Mankude commented on HADOOP-8202:
--

This jira is related to hadoop-7607

 stopproxy() is not closing the proxies correctly
 

 Key: HADOOP-8202
 URL: https://issues.apache.org/jira/browse/HADOOP-8202
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.24.0
Reporter: Hari Mankude
Assignee: Hari Mankude
Priority: Minor

 I was running testbackupnode and noticed that NNprotocol proxy was not being 
 closed. Talked with Suresh and he observed that most of the protocols do not 
 implement ProtocolTranslator and hence the logic in stopproxy() does not 
 work. Instead, since all of them are closeable, Suresh suggested that 
 closeable property should be used at close.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8202) stopproxy() is not closing the proxies correctly

2012-03-23 Thread Hari Mankude (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Mankude updated HADOOP-8202:
-

Status: Patch Available  (was: Open)

 stopproxy() is not closing the proxies correctly
 

 Key: HADOOP-8202
 URL: https://issues.apache.org/jira/browse/HADOOP-8202
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.24.0
Reporter: Hari Mankude
Assignee: Hari Mankude
Priority: Minor
 Attachments: HADOOP-8202.patch


 I was running testbackupnode and noticed that NNprotocol proxy was not being 
 closed. Talked with Suresh and he observed that most of the protocols do not 
 implement ProtocolTranslator and hence the logic in stopproxy() does not 
 work. Instead, since all of them are closeable, Suresh suggested that 
 closeable property should be used at close.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8202) stopproxy() is not closing the proxies correctly

2012-03-23 Thread Hari Mankude (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Mankude updated HADOOP-8202:
-

Attachment: HADOOP-8202.patch

 stopproxy() is not closing the proxies correctly
 

 Key: HADOOP-8202
 URL: https://issues.apache.org/jira/browse/HADOOP-8202
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.24.0
Reporter: Hari Mankude
Assignee: Hari Mankude
Priority: Minor
 Attachments: HADOOP-8202.patch


 I was running testbackupnode and noticed that NNprotocol proxy was not being 
 closed. Talked with Suresh and he observed that most of the protocols do not 
 implement ProtocolTranslator and hence the logic in stopproxy() does not 
 work. Instead, since all of them are closeable, Suresh suggested that 
 closeable property should be used at close.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8202) stopproxy() is not closing the proxies correctly

2012-03-23 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13236863#comment-13236863
 ] 

Hadoop QA commented on HADOOP-8202:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12519670/HADOOP-8202.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

-1 findbugs.  The patch appears to introduce 1 new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests:
  org.apache.hadoop.fs.viewfs.TestViewFsTrash

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/754//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/754//artifact/trunk/hadoop-common-project/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/754//console

This message is automatically generated.

 stopproxy() is not closing the proxies correctly
 

 Key: HADOOP-8202
 URL: https://issues.apache.org/jira/browse/HADOOP-8202
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.24.0
Reporter: Hari Mankude
Assignee: Hari Mankude
Priority: Minor
 Attachments: HADOOP-8202.patch


 I was running testbackupnode and noticed that NNprotocol proxy was not being 
 closed. Talked with Suresh and he observed that most of the protocols do not 
 implement ProtocolTranslator and hence the logic in stopproxy() does not 
 work. Instead, since all of them are closeable, Suresh suggested that 
 closeable property should be used at close.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8202) stopproxy() is not closing the proxies correctly

2012-03-23 Thread Hari Mankude (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Mankude updated HADOOP-8202:
-

Attachment: HADOOP-8202.patch

 stopproxy() is not closing the proxies correctly
 

 Key: HADOOP-8202
 URL: https://issues.apache.org/jira/browse/HADOOP-8202
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.24.0
Reporter: Hari Mankude
Assignee: Hari Mankude
Priority: Minor
 Attachments: HADOOP-8202.patch, HADOOP-8202.patch


 I was running testbackupnode and noticed that NNprotocol proxy was not being 
 closed. Talked with Suresh and he observed that most of the protocols do not 
 implement ProtocolTranslator and hence the logic in stopproxy() does not 
 work. Instead, since all of them are closeable, Suresh suggested that 
 closeable property should be used at close.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8202) stopproxy() is not closing the proxies correctly

2012-03-23 Thread Suresh Srinivas (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13236888#comment-13236888
 ] 

Suresh Srinivas commented on HADOOP-8202:
-

Does this fix the exception that you observed in the test, where proxy was not 
stopped?

 stopproxy() is not closing the proxies correctly
 

 Key: HADOOP-8202
 URL: https://issues.apache.org/jira/browse/HADOOP-8202
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.24.0
Reporter: Hari Mankude
Assignee: Hari Mankude
Priority: Minor
 Attachments: HADOOP-8202.patch, HADOOP-8202.patch


 I was running testbackupnode and noticed that NNprotocol proxy was not being 
 closed. Talked with Suresh and he observed that most of the protocols do not 
 implement ProtocolTranslator and hence the logic in stopproxy() does not 
 work. Instead, since all of them are closeable, Suresh suggested that 
 closeable property should be used at close.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8202) stopproxy() is not closing the proxies correctly

2012-03-23 Thread Hari Mankude (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13236893#comment-13236893
 ] 

Hari Mankude commented on HADOOP-8202:
--

bq. Does this fix the exception that you observed in the test, where proxy was 
not stopped?

Yes, it does.

 stopproxy() is not closing the proxies correctly
 

 Key: HADOOP-8202
 URL: https://issues.apache.org/jira/browse/HADOOP-8202
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.24.0
Reporter: Hari Mankude
Assignee: Hari Mankude
Priority: Minor
 Attachments: HADOOP-8202.patch, HADOOP-8202.patch


 I was running testbackupnode and noticed that NNprotocol proxy was not being 
 closed. Talked with Suresh and he observed that most of the protocols do not 
 implement ProtocolTranslator and hence the logic in stopproxy() does not 
 work. Instead, since all of them are closeable, Suresh suggested that 
 closeable property should be used at close.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8184) ProtoBuf RPC engine does not need its own reply packet - it can use the IPC layer reply packet.

2012-03-23 Thread Tsz Wo (Nicholas), SZE (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HADOOP-8184:
---

   Resolution: Fixed
Fix Version/s: 0.24.0
   0.23.3
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks Suresh for the review.

I have committed this.  Thanks, Sanjay!

 ProtoBuf RPC engine does not need its own reply packet - it can use the IPC 
 layer reply packet.
 --

 Key: HADOOP-8184
 URL: https://issues.apache.org/jira/browse/HADOOP-8184
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Sanjay Radia
Assignee: Sanjay Radia
 Fix For: 0.23.3, 0.24.0

 Attachments: rpcFixPBHeader2.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8184) ProtoBuf RPC engine does not need its own reply packet - it can use the IPC layer reply packet.

2012-03-23 Thread Tsz Wo (Nicholas), SZE (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8184?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsz Wo (Nicholas), SZE updated HADOOP-8184:
---

Component/s: ipc

 ProtoBuf RPC engine does not need its own reply packet - it can use the IPC 
 layer reply packet.
 --

 Key: HADOOP-8184
 URL: https://issues.apache.org/jira/browse/HADOOP-8184
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Sanjay Radia
Assignee: Sanjay Radia
 Fix For: 0.23.3, 0.24.0

 Attachments: rpcFixPBHeader2.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8203) Remove dependency on sun's jdk.

2012-03-23 Thread Owen O'Malley (Created) (JIRA)
Remove dependency on sun's jdk.
---

 Key: HADOOP-8203
 URL: https://issues.apache.org/jira/browse/HADOOP-8203
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Owen O'Malley
Assignee: Owen O'Malley


When the signal handlers were added, they introduced a dependency on 
sun.misc.Signal and sun.misc.SignalHandler. We can look these classes up by 
reflection and avoid the warning and also provide a soft-fail for non-Sun JVMs.
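
Below is a minimal sketch of the reflective lookup idea, assuming the standard 
sun.misc.Signal/SignalHandler API; it is illustrative only and not the actual 
HADOOP-8203 patch (class and method names here are made up for the example).

import java.lang.reflect.Constructor;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class ReflectiveSignalLogger {
  // Install a handler for the named signal if sun.misc.Signal is available;
  // otherwise log and continue (soft fail on non-Sun JVMs).
  public static void tryInstall(String signalName) {
    try {
      Class<?> signalClass = Class.forName("sun.misc.Signal");
      Class<?> handlerClass = Class.forName("sun.misc.SignalHandler");

      // Build a SignalHandler implementation without compiling against sun.misc.
      Object handler = Proxy.newProxyInstance(
          ReflectiveSignalLogger.class.getClassLoader(),
          new Class<?>[] { handlerClass },
          new InvocationHandler() {
            public Object invoke(Object p, Method method, Object[] args) {
              String name = method.getName();
              if ("handle".equals(name)) {
                System.err.println("Received signal: " + args[0]);
                return null;
              }
              if ("toString".equals(name)) return "ReflectiveSignalLogger";
              if ("hashCode".equals(name)) return System.identityHashCode(p);
              if ("equals".equals(name)) return p == args[0];
              return null;
            }
          });

      Constructor<?> ctor = signalClass.getConstructor(String.class);
      Object signal = ctor.newInstance(signalName);
      Method handle = signalClass.getMethod("handle", signalClass, handlerClass);
      handle.invoke(null, signal, handler);  // Signal.handle(new Signal(name), handler)
    } catch (Throwable t) {
      // Soft fail: JVMs without sun.misc.Signal simply skip signal handling.
      System.err.println("Signal handling not installed: " + t);
    }
  }

  public static void main(String[] args) {
    tryInstall("TERM");
  }
}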

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7030) new topology mapping implementations

2012-03-23 Thread Tom White (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-7030:
--

Attachment: HADOOP-7030.patch

New patch addressing Alejandro's feedback.

 new topology mapping implementations
 

 Key: HADOOP-7030
 URL: https://issues.apache.org/jira/browse/HADOOP-7030
 Project: Hadoop Common
  Issue Type: New Feature
Affects Versions: 0.20.1, 0.20.2, 0.21.0
Reporter: Patrick Angeles
Assignee: Tom White
 Attachments: HADOOP-7030-2.patch, HADOOP-7030.patch, 
 HADOOP-7030.patch, HADOOP-7030.patch, HADOOP-7030.patch, topology.patch


 The default ScriptBasedMapping implementation of DNSToSwitchMapping for 
 determining cluster topology has some drawbacks. Principally, it forks to an 
 OS-specific script.
 This issue proposes two new Java implementations of DNSToSwitchMapping. 
 TableMapping reads a two column text file that maps an IP or hostname to a 
 rack ID. Ip4RangeMapping reads a three column text file where each line 
 represents a start and end IP range plus a rack ID.
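
 A minimal sketch of the two-column table idea described above; the class name, 
 file format handling, and default rack value are illustrative assumptions, not 
 the TableMapping implementation from the attached patch.

 import java.io.BufferedReader;
 import java.io.FileReader;
 import java.io.IOException;
 import java.util.HashMap;
 import java.util.Map;

 public class SimpleTopologyTable {
   private final Map<String, String> hostToRack = new HashMap<String, String>();
   private static final String DEFAULT_RACK = "/default-rack";

   // Each non-comment line is expected to be: "<ip-or-hostname> <rack-id>"
   public void load(String fileName) throws IOException {
     BufferedReader in = new BufferedReader(new FileReader(fileName));
     try {
       String line;
       while ((line = in.readLine()) != null) {
         line = line.trim();
         if (line.isEmpty() || line.startsWith("#")) {
           continue;                      // skip blanks and comments
         }
         String[] cols = line.split("\\s+");
         if (cols.length == 2) {
           hostToRack.put(cols[0], cols[1]);
         }
       }
     } finally {
       in.close();
     }
   }

   // Unknown hosts fall back to a default rack rather than failing.
   public String resolve(String hostOrIp) {
     String rack = hostToRack.get(hostOrIp);
     return rack == null ? DEFAULT_RACK : rack;
   }
 }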

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8163) Improve ActiveStandbyElector to provide hooks for fencing old active

2012-03-23 Thread Bikas Saha (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13236905#comment-13236905
 ] 

Bikas Saha commented on HADOOP-8163:


How would the admin know that B is the errant controller?

Aside from the above.
I have not seen a detailed design for the failover controller. So I cannot make 
a precise comment but let me post a general idea.

I have a feeling that putting the fencing concept into the elector is diluting 
the distinction between the elector and the failover controller. In my mind, 
the elector is a distributed leader election library that signals candidates 
about being made leader or standby. In the ideal world, where the HA service 
behaves perfectly and does not execute any instruction unless it is a leader, 
we only need the elector. But the world is not ideal and we can have errant 
leaders who need to be fenced, etc. This is where the failover controller comes 
in. It manages the HA service by using the elector to do distributed leader 
selection and gets those notifications passed on to the HAService. In addition, 
it guards service sanity by making sure that the signal is passed only when it 
is safe to do so.
How about this slightly different alternative flow? The elector gets the leader 
lock. For all intents and purposes it is the new leader. It passes the signal to 
the failover controller with the breadcrumb of the last leader:
appClient->becomeActive(breadcrumb);
The failover controller now has to ensure that all previous masters are fenced 
before making its service the master. The breadcrumb is an optimization that 
lets it know that such an operation may not be necessary. If it is necessary, 
then it performs fencing. If fencing is successful, it calls 
elector->becameActive() or elector->transitionedToActive(), at which point the 
elector can overwrite the breadcrumb with its own info. I haven't thought 
through whether this should be called before or after a successful call to 
HAService->transitionToActive(), but my gut feeling is for the former.
This keeps the notion of fencing inside the controller instead of in both 
the elector and the controller.

Secondly, we are performing blocking calls on the ZKClient callback that 
happens on the ZK threads. It is advisable to not block ZK client threads for 
long. The create and delete methods might be ok but I would try to move the 
fencing operation and transitioning to active operations away from the ZK 
thread. i.e. when the FailoverController is notified about becoming master, it 
returns the call and then processes fencing/transitioning on some other 
thread/threadpool. The above flow allows for this.

Thirdly, how about using the setData(breadcrumb, appData, version)?
This replaces a 2 step operation (delete+create) with a 1 step operation (set) 
which is always a desirable thing in distributed transactions. The version 
number also gives an idea of the number of switches. The version also prevents 
a setData() from succeeding if someone else has already set it before you (may 
not be important here but is a good sanity check).

Let me know your thoughts on the above.
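
To illustrate the third point, here is a small sketch of a versioned setData() 
update using the standard ZooKeeper client API; the znode path and data are made 
up for the example and this is not code from any attached patch.

import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

public class BreadcrumbUpdate {
  // Overwrite the breadcrumb node in one step instead of delete+create.
  static void overwriteBreadcrumb(ZooKeeper zk, byte[] myInfo) throws Exception {
    Stat stat = new Stat();
    byte[] lastLeaderInfo = zk.getData("/election/breadcrumb", false, stat);
    System.err.println("Previous leader breadcrumb had " + lastLeaderInfo.length
        + " bytes, version " + stat.getVersion());
    try {
      // Succeeds only if the znode still has the version we just read;
      // the version count also reflects how many leadership switches occurred.
      zk.setData("/election/breadcrumb", myInfo, stat.getVersion());
    } catch (KeeperException.BadVersionException e) {
      // Someone else updated the breadcrumb in between; re-read and re-evaluate.
    }
  }
}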


 Improve ActiveStandbyElector to provide hooks for fencing old active
 

 Key: HADOOP-8163
 URL: https://issues.apache.org/jira/browse/HADOOP-8163
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ha
Affects Versions: 0.23.3, 0.24.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hadoop-8163.txt, hadoop-8163.txt, hadoop-8163.txt, 
 hadoop-8163.txt


 When a new node becomes active in an HA setup, it may sometimes have to take 
 fencing actions against the node that was formerly active. This JIRA extends 
 the ActiveStandbyElector which adds an extra non-ephemeral node into the ZK 
 directory, which acts as a second copy of the active node's information. 
 Then, if the active loses its ZK session, the next active to be elected may 
 easily locate the unfenced node to take the appropriate actions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8202) stopproxy() is not closing the proxies correctly

2012-03-23 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13236908#comment-13236908
 ] 

Aaron T. Myers commented on HADOOP-8202:


Hi Hari, a few comments:

# 'c = (Closeable) Proxy.getInvocationHandler(proxy);' - this could potentially 
cause an uncaught ClassCastException if the InvocationHandler itself doesn't 
implement Closeable (a defensive variant is sketched just below this list).
# Given the above, the error message at the bottom should perhaps also include 
"or invocation handler does not implement Closeable".
# 'LOG.error(... + or does not provide invocation handler for proxy class' - 
there should be a space after "class".
# 'LOG.error(Cannot close proxy since it is null );' - unnecessary whitespace 
at the end of the string.
# Seems like it shouldn't be too tough to write a test for this with some mock 
objects.
# Rather than have a single catch-all error message at the bottom, and return 
early to avoid it, I think it'd be better to only ever log a single error and 
include in that log message the relevant information about what caused the 
failure to close the proxy.
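
A minimal sketch of the defensive check suggested in point 1, guarding both 
against a non-proxy object and against a handler that is not Closeable; the 
class name is made up and this is not the actual RPC.stopProxy code from the 
patch.

import java.io.Closeable;
import java.io.IOException;
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

public class ProxyCloser {
  public static void stopProxy(Object proxy) {
    if (proxy == null) {
      System.err.println("Cannot close proxy since it is null");
      return;
    }
    if (!Proxy.isProxyClass(proxy.getClass())) {
      // Avoids the IllegalArgumentException ("not a proxy instance") seen in the test.
      System.err.println("Not a java.lang.reflect.Proxy instance: " + proxy.getClass());
      return;
    }
    InvocationHandler handler = Proxy.getInvocationHandler(proxy);
    if (handler instanceof Closeable) {
      try {
        ((Closeable) handler).close();   // no unchecked cast, no ClassCastException
      } catch (IOException e) {
        System.err.println("Closing proxy failed: " + e);
      }
    } else {
      System.err.println("Invocation handler " + handler
          + " does not implement Closeable; proxy class: " + proxy.getClass());
    }
  }
}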

Also:

bq. Does this fix the exception that you observed in the test, where proxy was 
not stopped?

What exception? In what test?

 stopproxy() is not closing the proxies correctly
 

 Key: HADOOP-8202
 URL: https://issues.apache.org/jira/browse/HADOOP-8202
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.24.0
Reporter: Hari Mankude
Assignee: Hari Mankude
Priority: Minor
 Attachments: HADOOP-8202.patch, HADOOP-8202.patch


 I was running testbackupnode and noticed that NNprotocol proxy was not being 
 closed. Talked with Suresh and he observed that most of the protocols do not 
 implement ProtocolTranslator and hence the logic in stopproxy() does not 
 work. Instead, since all of them are closeable, Suresh suggested that 
 closeable property should be used at close.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8184) ProtoBuf RPC engine does not need its own reply packet - it can use the IPC layer reply packet.

2012-03-23 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13236930#comment-13236930
 ] 

Hudson commented on HADOOP-8184:


Integrated in Hadoop-Common-0.23-Commit #719 (See 
[https://builds.apache.org/job/Hadoop-Common-0.23-Commit/719/])
svn merge -c 1304542 from trunk for HADOOP-8184. (Revision 1304546)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1304546
Files : 
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProtobufRpcEngine.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/proto/hadoop_rpc.proto


 ProtoBuf RPC engine does not need its own reply packet - it can use the IPC 
 layer reply packet.
 --

 Key: HADOOP-8184
 URL: https://issues.apache.org/jira/browse/HADOOP-8184
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Sanjay Radia
Assignee: Sanjay Radia
 Fix For: 0.23.3, 0.24.0

 Attachments: rpcFixPBHeader2.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8202) stopproxy() is not closing the proxies correctly

2012-03-23 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13236931#comment-13236931
 ] 

Aaron T. Myers commented on HADOOP-8202:


Also, given that implementing the ProtocolTranslator interface is necessary in 
order for {{RPC#getServerAddress}} to work, perhaps a better solution would be 
to leave {{RPC#stopProxy}} as it is now, and change the protocol translators 
that currently implement {{Closeable}} to instead implement 
{{ProtocolTranslator}}.

Thoughts?

 stopproxy() is not closing the proxies correctly
 

 Key: HADOOP-8202
 URL: https://issues.apache.org/jira/browse/HADOOP-8202
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.24.0
Reporter: Hari Mankude
Assignee: Hari Mankude
Priority: Minor
 Attachments: HADOOP-8202.patch, HADOOP-8202.patch


 I was running testbackupnode and noticed that NNprotocol proxy was not being 
 closed. Talked with Suresh and he observed that most of the protocols do not 
 implement ProtocolTranslator and hence the logic in stopproxy() does not 
 work. Instead, since all of them are closeable, Suresh suggested that 
 closeable property should be used at close.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8163) Improve ActiveStandbyElector to provide hooks for fencing old active

2012-03-23 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13236934#comment-13236934
 ] 

Todd Lipcon commented on HADOOP-8163:
-

Hi Bikas. I think your ideas have some merit, especially with regard to a fully 
general election framework. But since we only have one user of this framework 
at this point (HDFS) and we currently only support a single standby node, I 
would prefer to punt these changes to another JIRA as additional improvements. 
This will let us move forward with the high priority task of auto failover for 
HA NNs, rather than getting distracted making this extremely general.

bq. Secondly, we are performing blocking calls on the ZKClient callback that 
happens on the ZK threads. It is advisable to not block ZK client threads for 
long

This is only the case if you have other operations that are waiting on timely 
delivery of callbacks. In the case of the election framework, all of our 
notifications from ZK have to be received in-order and processed sequentially, 
or else we have a huge explosion of possible interactions to worry about. Doing 
blocking calls in the callbacks will _not_ result in lost ZK leases, etc. To 
quote from the ZK programmer's guide:

"All IO happens on the IO thread (using Java NIO). All event callbacks happen 
on the event thread. Session maintenance such as reconnecting to ZooKeeper 
servers and maintaining heartbeat is done on the IO thread. Responses for 
synchronous methods are also processed in the IO thread. All responses to 
asynchronous methods and watch events are processed on the event thread... 
Callbacks do not block the processing of the IO thread or the processing of the 
synchronous calls."

bq. Thirdly, how about using the setData(breadcrumb, appData, version)?

Let me see about making this change. Like you said, it's a good safety check.

 Improve ActiveStandbyElector to provide hooks for fencing old active
 

 Key: HADOOP-8163
 URL: https://issues.apache.org/jira/browse/HADOOP-8163
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ha
Affects Versions: 0.23.3, 0.24.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hadoop-8163.txt, hadoop-8163.txt, hadoop-8163.txt, 
 hadoop-8163.txt


 When a new node becomes active in an HA setup, it may sometimes have to take 
 fencing actions against the node that was formerly active. This JIRA extends 
 the ActiveStandbyElector which adds an extra non-ephemeral node into the ZK 
 directory, which acts as a second copy of the active node's information. 
 Then, if the active loses its ZK session, the next active to be elected may 
 easily locate the unfenced node to take the appropriate actions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8060) Add a capability to use of consistent checksums for append and copy

2012-03-23 Thread Kihwal Lee (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13236946#comment-13236946
 ] 

Kihwal Lee commented on HADOOP-8060:


bq. What about making the checksum type part of the FileSystem cache key

The checksum type is a dfs config item. We can't do that in FileSystem, which 
is in common. But FileSystem already has things like setVerifyChecksum() and 
getFileChecksum(), so we could make the checksum type a FileSystem-level 
config.

To address the issue of dynamically configurable properties, we could introduce 
a file system config digest method, which is kind of like hashCode(). The 
tricky part will be to get the hdfs part of the formula added to Configuration 
when, say, HdfsConfiguration.init() is called. Or maybe having each file system 
implement a digest method is better.

For this JIRA, I will just add the conf as part of the key. The equality check 
will be just a shallow comparison.
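
An illustrative sketch of what "conf as part of the cache key, with a shallow 
equality check" could look like; this is not the actual FileSystem.Cache.Key 
class and the field names are assumptions for the example.

import org.apache.hadoop.conf.Configuration;

class CacheKey {
  final String scheme;
  final String authority;
  final String ugi;                 // user identity, as in the existing key
  final Configuration conf;         // compared by reference only (shallow)

  CacheKey(String scheme, String authority, String ugi, Configuration conf) {
    this.scheme = scheme;
    this.authority = authority;
    this.ugi = ugi;
    this.conf = conf;
  }

  @Override
  public boolean equals(Object o) {
    if (!(o instanceof CacheKey)) {
      return false;
    }
    CacheKey that = (CacheKey) o;
    return scheme.equals(that.scheme)
        && authority.equals(that.authority)
        && ugi.equals(that.ugi)
        && conf == that.conf;       // shallow: same Configuration object
  }

  @Override
  public int hashCode() {
    // Identity hash for conf keeps hashCode consistent with the shallow equals.
    return scheme.hashCode() ^ authority.hashCode() ^ ugi.hashCode()
        ^ System.identityHashCode(conf);
  }
}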

 Add a capability to use of consistent checksums for append and copy
 ---

 Key: HADOOP-8060
 URL: https://issues.apache.org/jira/browse/HADOOP-8060
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, util
Affects Versions: 0.23.0, 0.23.1, 0.24.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 0.23.2, 0.24.0


 After the improved CRC32C checksum feature became default, some of use cases 
 involving data movement are no longer supported.  For example, when running 
 DistCp to copy from a file stored with the CRC32 checksum to a new cluster 
 with the CRC32C set to default checksum, the final data integrity check fails 
 because of mismatch in checksums.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8184) ProtoBuf RPC engine does not need its own reply packet - it can use the IPC layer reply packet.

2012-03-23 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13236947#comment-13236947
 ] 

Hudson commented on HADOOP-8184:


Integrated in Hadoop-Common-trunk-Commit #1920 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1920/])
HADOOP-8184.  ProtoBuf RPC engine uses the IPC layer reply packet.  
Contributed by Sanjay Radia (Revision 1304542)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1304542
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProtobufRpcEngine.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/proto/hadoop_rpc.proto


 ProtoBuf RPC engine does not need its own reply packet - it can use the IPC 
 layer reply packet.
 --

 Key: HADOOP-8184
 URL: https://issues.apache.org/jira/browse/HADOOP-8184
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Sanjay Radia
Assignee: Sanjay Radia
 Fix For: 0.23.3, 0.24.0

 Attachments: rpcFixPBHeader2.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8202) stopproxy() is not closing the proxies correctly

2012-03-23 Thread Suresh Srinivas (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13236948#comment-13236948
 ] 

Suresh Srinivas commented on HADOOP-8202:
-

bq. c = (Closeable) Proxy.getInvocationHandler(proxy); - this could 
potentially cause an uncaught ClassCastException, if the InvocationHandler 
itself doesn't implement Closeable.
Not necessary - all invocation handlers in Hadoop are RpcInvocationHandlers and 
they implement Closeable.

bq. Seems like it shouldn't be too tough to write a test for this with some 
mock objects.
I think this seems wholly unnecessary. Currently stopProxy is completely 
failing. I know that things have changed in this part of the code multiple 
times, and it has been broken for some time.

I am +1 without any tests.

bq. Rather than have a single catch-all error message at the bottom, and return 
early to avoid it, I think it'd be better to only ever log a single error, and 
include the relevant information which caused the failure to close the proxy in 
that log message.
This is your coding style. I am not sure if it should be followed by everyone.

Again, I am okay to commit this, once we have log statement from the existing 
test.

 stopproxy() is not closing the proxies correctly
 

 Key: HADOOP-8202
 URL: https://issues.apache.org/jira/browse/HADOOP-8202
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.24.0
Reporter: Hari Mankude
Assignee: Hari Mankude
Priority: Minor
 Attachments: HADOOP-8202.patch, HADOOP-8202.patch


 I was running testbackupnode and noticed that NNprotocol proxy was not being 
 closed. Talked with Suresh and he observed that most of the protocols do not 
 implement ProtocolTranslator and hence the logic in stopproxy() does not 
 work. Instead, since all of them are closeable, Suresh suggested that 
 closeable property should be used at close.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8202) stopproxy() is not closing the proxies correctly

2012-03-23 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13236949#comment-13236949
 ] 

Hadoop QA commented on HADOOP-8202:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12519675/HADOOP-8202.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests:
  org.apache.hadoop.fs.viewfs.TestViewFsTrash

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/755//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/755//console

This message is automatically generated.

 stopproxy() is not closing the proxies correctly
 

 Key: HADOOP-8202
 URL: https://issues.apache.org/jira/browse/HADOOP-8202
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.24.0
Reporter: Hari Mankude
Assignee: Hari Mankude
Priority: Minor
 Attachments: HADOOP-8202.patch, HADOOP-8202.patch


 I was running testbackupnode and noticed that NNprotocol proxy was not being 
 closed. Talked with Suresh and he observed that most of the protocols do not 
 implement ProtocolTranslator and hence the logic in stopproxy() does not 
 work. Instead, since all of them are closeable, Suresh suggested that 
 closeable property should be used at close.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8184) ProtoBuf RPC engine does not need its own reply packet - it can use the IPC layer reply packet.

2012-03-23 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13236950#comment-13236950
 ] 

Hudson commented on HADOOP-8184:


Integrated in Hadoop-Hdfs-trunk-Commit #1994 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/1994/])
HADOOP-8184.  ProtoBuf RPC engine uses the IPC layer reply packet.  
Contributed by Sanjay Radia (Revision 1304542)

 Result = SUCCESS
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1304542
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProtobufRpcEngine.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/proto/hadoop_rpc.proto


 ProtoBuf RPC engine does not need its own reply packet - it can use the IPC 
 layer reply packet.
 --

 Key: HADOOP-8184
 URL: https://issues.apache.org/jira/browse/HADOOP-8184
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Sanjay Radia
Assignee: Sanjay Radia
 Fix For: 0.23.3, 0.24.0

 Attachments: rpcFixPBHeader2.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8202) stopproxy() is not closing the proxies correctly

2012-03-23 Thread Suresh Srinivas (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13236952#comment-13236952
 ] 

Suresh Srinivas commented on HADOOP-8202:
-

bq. Also, given that implementing the ProtocolTranslator interface is necessary 
in order for RPC#getServerAddress to work, perhaps a better solution would be 
to leave RPC#stopProxy as it is now, and change the protocol translators that 
currently implement Closeable to instead implement ProtocolTranslator.

I want to look at ProtocolTranslator and why it is needed in detail. In 
this part of the code, ProtocolTranslator is clearly not required.

 stopproxy() is not closing the proxies correctly
 

 Key: HADOOP-8202
 URL: https://issues.apache.org/jira/browse/HADOOP-8202
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.24.0
Reporter: Hari Mankude
Assignee: Hari Mankude
Priority: Minor
 Attachments: HADOOP-8202.patch, HADOOP-8202.patch


 I was running testbackupnode and noticed that NNprotocol proxy was not being 
 closed. Talked with Suresh and he observed that most of the protocols do not 
 implement ProtocolTranslator and hence the logic in stopproxy() does not 
 work. Instead, since all of them are closeable, Suresh suggested that 
 closeable property should be used at close.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7030) new topology mapping implementations

2012-03-23 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13236960#comment-13236960
 ] 

Hadoop QA commented on HADOOP-7030:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12519688/HADOOP-7030.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 4 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests:
  org.apache.hadoop.ha.TestHealthMonitor

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/756//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/756//console

This message is automatically generated.

 new topology mapping implementations
 

 Key: HADOOP-7030
 URL: https://issues.apache.org/jira/browse/HADOOP-7030
 Project: Hadoop Common
  Issue Type: New Feature
Affects Versions: 0.20.1, 0.20.2, 0.21.0
Reporter: Patrick Angeles
Assignee: Tom White
 Attachments: HADOOP-7030-2.patch, HADOOP-7030.patch, 
 HADOOP-7030.patch, HADOOP-7030.patch, HADOOP-7030.patch, topology.patch


 The default ScriptBasedMapping implementation of DNSToSwitchMapping for 
 determining cluster topology has some drawbacks. Principally, it forks to an 
 OS-specific script.
 This issue proposes two new Java implementations of DNSToSwitchMapping. 
 TableMapping reads a two column text file that maps an IP or hostname to a 
 rack ID. Ip4RangeMapping reads a three column text file where each line 
 represents a start and end IP range plus a rack ID.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8194) viewfs: quota command does not report remaining quotas

2012-03-23 Thread John George (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John George updated HADOOP-8194:


Status: Open  (was: Patch Available)

 viewfs: quota command does not report remaining quotas
 --

 Key: HADOOP-8194
 URL: https://issues.apache.org/jira/browse/HADOOP-8194
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.2, 0.23.3
Reporter: John George
Assignee: John George
 Attachments: HADOOP-8194.patch, HADOOP-8194.patch


 The space and namespace quotas and remaining are not reported.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8194) viewfs: quota command does not report remaining quotas

2012-03-23 Thread John George (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John George updated HADOOP-8194:


 Target Version/s: 0.23.2, 0.23.3, 0.24.0  (was: 0.23.3)
Affects Version/s: 0.24.0

 viewfs: quota command does not report remaining quotas
 --

 Key: HADOOP-8194
 URL: https://issues.apache.org/jira/browse/HADOOP-8194
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.2, 0.23.3, 0.24.0
Reporter: John George
Assignee: John George
 Attachments: HADOOP-8194.patch, HADOOP-8194.patch


 The space and namespace quotas and remaining are not reported.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8194) viewfs: quota command does not report remaining quotas

2012-03-23 Thread John George (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John George updated HADOOP-8194:


Attachment: HADOOP-8194.patch

 viewfs: quota command does not report remaining quotas
 --

 Key: HADOOP-8194
 URL: https://issues.apache.org/jira/browse/HADOOP-8194
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.2, 0.23.3, 0.24.0
Reporter: John George
Assignee: John George
 Attachments: HADOOP-8194.patch, HADOOP-8194.patch


 The space and namespace quotas and remaining are not reported.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8194) viewfs: quota command does not report remaining quotas

2012-03-23 Thread John George (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13236963#comment-13236963
 ] 

John George commented on HADOOP-8194:
-

Uploading a new patch with a small new test and a fix.

 viewfs: quota command does not report remaining quotas
 --

 Key: HADOOP-8194
 URL: https://issues.apache.org/jira/browse/HADOOP-8194
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.2, 0.23.3, 0.24.0
Reporter: John George
Assignee: John George
 Attachments: HADOOP-8194.patch, HADOOP-8194.patch


 The space and namespace quotas and remaining are not reported.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8060) Add a capability to use of consistent checksums for append and copy

2012-03-23 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8060?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13236965#comment-13236965
 ] 

Todd Lipcon commented on HADOOP-8060:
-

Doing shallow conf comparison as part of the FS key seems a bit dangerous -- 
I'm guessing we'll end up with a lot of leakage issues in long running daemons 
like the NM/RM.

Anyone else have some other ideas how to deal with this? I don't think the 
CreateFlag idea is bad -- maybe better than futzing with the cache.

 Add a capability to use of consistent checksums for append and copy
 ---

 Key: HADOOP-8060
 URL: https://issues.apache.org/jira/browse/HADOOP-8060
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, util
Affects Versions: 0.23.0, 0.23.1, 0.24.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 0.23.2, 0.24.0


 After the improved CRC32C checksum feature became default, some of use cases 
 involving data movement are no longer supported.  For example, when running 
 DistCp to copy from a file stored with the CRC32 checksum to a new cluster 
 with the CRC32C set to default checksum, the final data integrity check fails 
 because of mismatch in checksums.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8163) Improve ActiveStandbyElector to provide hooks for fencing old active

2012-03-23 Thread Bikas Saha (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13236966#comment-13236966
 ] 

Bikas Saha commented on HADOOP-8163:


bq. But since we only have one user of this framework at this point (HDFS) and 
we currently only support a single standby node, I would prefer to punt these 
changes to another JIRA as additional improvements.
I would disagree here. The suggestion does not have much to do with HDFS or 
single standby or generality of the framework. It is about keeping fencing 
inside FailoverController instead of being shared with the elector. Clear 
separation of responsibilities.
I agree that the NN work is more important and without knowing more about the 
FailoverController/Automatic NN HA I cannot say how much work it would take to 
change the control flow as described above. My guess is that it would not be 
big but I might be wrong. In my experience, APIs, once made, are hard to change. 
It would be hard for someone to change the control flow later once important 
services like NN HA depend on the current flow. So punting it for the future 
would be quite a distant future indeed :P

bq. Doing blocking calls in the callbacks will not result in lost ZK leases, 
etc. To quote from the ZK programmer's guide:
I agree. The IO updates will be processed but the callback notification to the 
client might be impeded if the client is already blocking on the previous 
callbacks. I was more concerned about the latter. That is why I was suggesting 
to not do fencing on the client callback. Though I agree that in the current 
patch these calls have to be made synchronously for correctness.
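
A sketch of the "do not fence on the ZK callback" idea, assuming a hand-off to a 
single-threaded executor; the class and method names are hypothetical and this 
is not the actual FailoverController or elector code.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class FailoverWorker {
  // A single worker thread preserves the order of notifications while keeping
  // the ZooKeeper event thread free to deliver further callbacks.
  private final ExecutorService executor = Executors.newSingleThreadExecutor();

  // Called from the elector's ZK callback; returns immediately.
  void onBecomeActive(final byte[] oldActiveBreadcrumb) {
    executor.submit(new Runnable() {
      public void run() {
        if (oldActiveBreadcrumb != null) {
          fenceOldActive(oldActiveBreadcrumb);   // potentially slow
        }
        transitionLocalServiceToActive();        // potentially slow
      }
    });
  }

  private void fenceOldActive(byte[] breadcrumb) { /* ... */ }

  private void transitionLocalServiceToActive() { /* ... */ }
}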



 Improve ActiveStandbyElector to provide hooks for fencing old active
 

 Key: HADOOP-8163
 URL: https://issues.apache.org/jira/browse/HADOOP-8163
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ha
Affects Versions: 0.23.3, 0.24.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hadoop-8163.txt, hadoop-8163.txt, hadoop-8163.txt, 
 hadoop-8163.txt


 When a new node becomes active in an HA setup, it may sometimes have to take 
 fencing actions against the node that was formerly active. This JIRA extends 
 the ActiveStandbyElector which adds an extra non-ephemeral node into the ZK 
 directory, which acts as a second copy of the active node's information. 
 Then, if the active loses its ZK session, the next active to be elected may 
 easily locate the unfenced node to take the appropriate actions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8184) ProtoBuf RPC engine does not need its own reply packet - it can use the IPC layer reply packet.

2012-03-23 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13236967#comment-13236967
 ] 

Hudson commented on HADOOP-8184:


Integrated in Hadoop-Mapreduce-trunk-Commit #1929 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1929/])
HADOOP-8184.  ProtoBuf RPC engine uses the IPC layer reply packet.  
Contributed by Sanjay Radia (Revision 1304542)

 Result = ABORTED
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1304542
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProtobufRpcEngine.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/proto/hadoop_rpc.proto


 ProtoBuf RPC engine does not need its own reply packet - it can use the IPC 
 layer reply packet.
 --

 Key: HADOOP-8184
 URL: https://issues.apache.org/jira/browse/HADOOP-8184
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Sanjay Radia
Assignee: Sanjay Radia
 Fix For: 0.23.3, 0.24.0

 Attachments: rpcFixPBHeader2.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8184) ProtoBuf RPC engine does not need its own reply packet - it can use the IPC layer reply packet.

2012-03-23 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8184?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13236968#comment-13236968
 ] 

Hudson commented on HADOOP-8184:


Integrated in Hadoop-Mapreduce-0.23-Commit #727 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Commit/727/])
svn merge -c 1304542 from trunk for HADOOP-8184. (Revision 1304546)

 Result = ABORTED
szetszwo : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1304546
Files : 
* /hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/ProtobufRpcEngine.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/Server.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/proto/hadoop_rpc.proto


 ProtoBuf RPC engine does not need its own reply packet - it can use the IPC 
 layer reply packet.
 --

 Key: HADOOP-8184
 URL: https://issues.apache.org/jira/browse/HADOOP-8184
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Reporter: Sanjay Radia
Assignee: Sanjay Radia
 Fix For: 0.23.3, 0.24.0

 Attachments: rpcFixPBHeader2.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8163) Improve ActiveStandbyElector to provide hooks for fencing old active

2012-03-23 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13236970#comment-13236970
 ] 

Todd Lipcon commented on HADOOP-8163:
-

bq. In my experience API's once made are hard to change. It would be hard for 
someone to change the control flow later once important services like NN HA 
depend on the current flow. So punting it for the future would be quite a 
distant future indeed

Given this is an internal API, there shouldn't be any resistance to changing it 
in the future. It's marked Private/Evolving, meaning that there aren't 
guarantees of compatibility to external consumers, and that even for internal 
consumers it's likely to change as use cases evolve. I'll file a follow-up JIRA 
to consider your recommended API changes, OK?


 Improve ActiveStandbyElector to provide hooks for fencing old active
 

 Key: HADOOP-8163
 URL: https://issues.apache.org/jira/browse/HADOOP-8163
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ha
Affects Versions: 0.23.3, 0.24.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hadoop-8163.txt, hadoop-8163.txt, hadoop-8163.txt, 
 hadoop-8163.txt


 When a new node becomes active in an HA setup, it may sometimes have to take 
 fencing actions against the node that was formerly active. This JIRA extends 
 the ActiveStandbyElector which adds an extra non-ephemeral node into the ZK 
 directory, which acts as a second copy of the active node's information. 
 Then, if the active loses its ZK session, the next active to be elected may 
 easily locate the unfenced node to take the appropriate actions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8194) viewfs: quota command does not report remaining quotas

2012-03-23 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13236977#comment-13236977
 ] 

Hadoop QA commented on HADOOP-8194:
---

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12519695/HADOOP-8194.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/757//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/757//console

This message is automatically generated.

 viewfs: quota command does not report remaining quotas
 --

 Key: HADOOP-8194
 URL: https://issues.apache.org/jira/browse/HADOOP-8194
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.2, 0.23.3, 0.24.0
Reporter: John George
Assignee: John George
 Attachments: HADOOP-8194.patch, HADOOP-8194.patch


 The space and namespace quotas and remaining are not reported.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8163) Improve ActiveStandbyElector to provide hooks for fencing old active

2012-03-23 Thread Todd Lipcon (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-8163:


Attachment: hadoop-8163.txt

New patch uses setData rather than delete/create for updating the breadcrumb 
node after fencing

 Improve ActiveStandbyElector to provide hooks for fencing old active
 

 Key: HADOOP-8163
 URL: https://issues.apache.org/jira/browse/HADOOP-8163
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ha
Affects Versions: 0.23.3, 0.24.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hadoop-8163.txt, hadoop-8163.txt, hadoop-8163.txt, 
 hadoop-8163.txt, hadoop-8163.txt


 When a new node becomes active in an HA setup, it may sometimes have to take 
 fencing actions against the node that was formerly active. This JIRA extends 
 the ActiveStandbyElector which adds an extra non-ephemeral node into the ZK 
 directory, which acts as a second copy of the active node's information. 
 Then, if the active loses its ZK session, the next active to be elected may 
 easily locate the unfenced node to take the appropriate actions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8193) Refactor FailoverController/HAAdmin code to add an abstract class for target services

2012-03-23 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8193?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13236984#comment-13236984
 ] 

Todd Lipcon commented on HADOOP-8193:
-

Also ran findbugs on common and HDFS; there were no additional warnings.

 Refactor FailoverController/HAAdmin code to add an abstract class for 
 target services
 ---

 Key: HADOOP-8193
 URL: https://issues.apache.org/jira/browse/HADOOP-8193
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ha
Affects Versions: 0.23.3, 0.24.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hadoop-8193.txt, hadoop-8193.txt


 In working on HADOOP-8077, HDFS-3084, and HDFS-3072, I ran into various 
 difficulties which are artifacts of the current design. A few of these:
 - the service name is resolved from the logical name (eg ns1.nn1) to an IP 
 address at the outer layer of DFSHAAdmin
 -- this means it's difficult to provide the logical name ns1.nn1 to fence 
 scripts (HDFS-3084)
 -- this means it's difficult to configure fencing method per-namespace (since 
 the FailoverController doesn't know what the namespace is) (HADOOP-8077)
 - the configuration for HA HDFS is weirdly split between core-site and 
 hdfs-site, even though most users see this as an HDFS feature. For example, 
 users expect to configure NN fencing configurations in hdfs-site, and expect 
 the keys to have a dfs.* prefix
 - proxies are constructed at the outer layer of the admin commands. This 
 means it's impossible for the inner layers (eg FailoverController.failover) 
 to re-construct proxies with different timeouts (HDFS-3072)
 The proposed refactor is to add a new interface (tentatively named 
 HAServiceTarget) which refers to target for one of the admin commands. An 
 instance of this class is responsible for creating proxies, creating fencers, 
 mapping back to a logical name, etc. The HDFS implementation of this class 
 can then provide different results based on the particular nameservice, can 
 use HDFS-specific configuration prefixes, etc. Using this class as the 
 argument for fencing methods also makes the API more evolvable in the future, 
 since we can add new getters to HAServiceTarget (whereas the current 
 InetSocketAddress is quite limiting)
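
 A rough sketch of the kind of target abstraction described above; the interface 
 and method names here are guesses for illustration, not the committed 
 HAServiceTarget API, and the service/fencer types are placeholders.

 import java.io.IOException;
 import java.net.InetSocketAddress;

 // Placeholder stand-ins for the real HA service protocol and fencer types.
 interface HaService { void transitionToActive() throws IOException; }
 interface Fencer { boolean fence(HaTarget target); }

 interface HaTarget {
   String getLogicalName();                  // e.g. "ns1.nn1", usable by fence scripts
   InetSocketAddress getAddress();           // resolved physical address
   HaService getProxy(int timeoutMs) throws IOException;  // proxy with caller-chosen timeout
   Fencer getFencer();                       // fencer built from per-nameservice config
 }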

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8202) stopproxy() is not closing the proxies correctly

2012-03-23 Thread Hari Mankude (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13236985#comment-13236985
 ] 

Hari Mankude commented on HADOOP-8202:
--

The test was TestBackupNode.  Including the relevant output from the test.

2012-03-23 12:45:31,277 INFO  namenode.NameNode 
(NameNodeRpcServer.java:errorReport(321)) - Error report from 
NamenodeRegistration(localhost:64139, role=Backup Node): Shutting down.

2012-03-23 12:45:31,277 INFO  namenode.FSEditLog 
(FSEditLog.java:releaseBackupStream(1030)) - Removing backup journal 
BackupJournalManager

2012-03-23 12:45:31,277 ERROR ipc.RPC (RPC.java:stopProxy(593)) - Tried to call 
RPC.stopProxy on an object that is not a proxy.
java.lang.IllegalArgumentException: not a proxy instance
at java.lang.reflect.Proxy.getInvocationHandler(Proxy.java:637)
at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:591)
at 
org.apache.hadoop.hdfs.server.namenode.EditLogBackupOutputStream.abort(EditLogBackupOutputStream.java:106)
at 
org.apache.hadoop.hdfs.server.namenode.JournalSet$JournalAndStream.abort(JournalSet.java:98)
at 
org.apache.hadoop.hdfs.server.namenode.JournalSet.remove(JournalSet.java:531)
at 
org.apache.hadoop.hdfs.server.namenode.FSEditLog.releaseBackupStream(FSEditLog.java:1031)
at 
org.apache.hadoop.hdfs.server.namenode.FSNamesystem.releaseBackupNode(FSNamesystem.java:4663)
at 
org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.errorReport(NameNodeRpcServer.java:323)
at 
org.apache.hadoop.hdfs.protocolPB.NamenodeProtocolServerSideTranslatorPB.errorReport(NamenodeProtocolServerSideTranslatorPB.java:125)
at 
org.apache.hadoop.hdfs.protocol.proto.NamenodeProtocolProtos$NamenodeProtocolService$2.callBlockingMethod(NamenodeProtocolProtos.java:8072)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:417)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:884)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1661)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1657)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at 
org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1205)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1655)
2012-03-23 12:45:31,277 ERROR ipc.RPC (RPC.java:stopProxy(603)) - Could not get 
invocation handler null for proxy class 
org.apache.hadoop.hdfs.protocolPB.JournalProtocolTranslatorPB, or 
invocation handler is not closeable.
2012-03-23 12:45:31,278 ERROR ipc.RPC (RPC.java:stopProxy(593)) - Tried to call 
RPC.stopProxy on an object that is not a proxy.
java.lang.IllegalArgumentException: not a proxy instance
at java.lang.reflect.Proxy.getInvocationHandler(Proxy.java:637)
at org.apache.hadoop.ipc.RPC.stopProxy(RPC.java:591)
at org.apache.hadoop.hdfs.server.namenode.BackupNode.stop(BackupNode.java:194)
at 
org.apache.hadoop.hdfs.server.namenode.TestBackupNode.testBackupNodeTailsEdits(TestBackupNode.java:169)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
  at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:44)
at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:15)
at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:41)
at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:20)
at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:28)
at 
org.junit.runners.BlockJUnit4ClassRunner.runNotIgnored(BlockJUnit4ClassRunner.java:79)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:71)
at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:49)
at org.junit.runners.ParentRunner$3.run(ParentRunner.java:193)
at org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:52)
at org.junit.runners.ParentRunner.runChildren(ParentRunner.java:191)
at org.junit.runners.ParentRunner.access$000(ParentRunner.java:42)
at org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:184)
at org.junit.runners.ParentRunner.run(ParentRunner.java:236)
at 

[jira] [Commented] (HADOOP-8192) Fix unit test failures with IBM's JDK

2012-03-23 Thread Kumar Ravi (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13236990#comment-13236990
 ] 

Kumar Ravi commented on HADOOP-8192:


While debugging this issue it was observed that the order in which the 
racksToBlocks HashMap gets populated seems to matter. According to Robert Evans 
and Devaraj Das, the design intent is that the order should not matter.

The reason order plays a role here is that getMoreSplits() stops iterating 
through the racks as soon as all the blocks are accounted for. Depending on 
which rack(s) each block is replicated on, and on when each rack is processed 
in the loop within getMoreSplits(), one can end up with different split counts 
and, as a result, fail the testcase in some situations.

Specifically, this testcase simulates 3 racks, each with a single datanode. 
Datanode 1 has replicas of all the blocks of all 3 files (file1, file2, and 
file3), Datanode 2 has all the blocks of file2 and file3, and Datanode 3 has 
all the blocks of only file3. As soon as Rack 1 is processed, every block is 
accounted for, so getMoreSplits() exits; the split count equals the number of 
racks processed up to that point. In this scenario, if Rack 1 is processed 
last, the split count is 3; if Rack 1 is processed first, the split count is 1. 
The testcase expects a return value of 3, which is what happens to be returned 
on the Sun JVM, but a value of 1 or 2 may be returned depending on when Rack 1 
gets processed.
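
(Illustration only: a simplified model of the loop described above, not the 
actual CombineFileInputFormat.getMoreSplits() code. With the racks held in a 
plain HashMap instead of the LinkedHashMap used here, the printed count would 
depend on JVM-specific iteration order.)

{code}
// Simplified model of the order dependence -- not the real getMoreSplits().
import java.util.*;

public class RackOrderDemo {
  public static void main(String[] args) {
    // rack -> blocks replicated on that rack (as in the testcase description)
    Map<String, Set<String>> racksToBlocks = new LinkedHashMap<>();
    racksToBlocks.put("rack3", new HashSet<>(Arrays.asList("f3b1")));
    racksToBlocks.put("rack2", new HashSet<>(Arrays.asList("f2b1", "f3b1")));
    racksToBlocks.put("rack1", new HashSet<>(Arrays.asList("f1b1", "f2b1", "f3b1")));

    Set<String> allBlocks = new HashSet<>(Arrays.asList("f1b1", "f2b1", "f3b1"));
    Set<String> seen = new HashSet<>();
    int splits = 0;
    for (Map.Entry<String, Set<String>> e : racksToBlocks.entrySet()) {
      seen.addAll(e.getValue());
      splits++;                          // one split per rack processed
      if (seen.containsAll(allBlocks)) {
        break;                           // all blocks accounted for -> stop early
      }
    }
    // With rack1 iterated last (as above) this prints 3; if rack1 were
    // iterated first, it would print 1 -- the JVM-dependent behaviour seen
    // when a plain HashMap decides the order.
    System.out.println("splits = " + splits);
  }
}
{code}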


 Fix unit test failures with IBM's JDK
 -

 Key: HADOOP-8192
 URL: https://issues.apache.org/jira/browse/HADOOP-8192
 Project: Hadoop Common
  Issue Type: Bug
 Environment: java version 1.6.0
 Java(TM) SE Runtime Environment (build pxi3260sr10-20111208_01(SR10))
 IBM J9 VM (build 2.4, JRE 1.6.0 IBM J9 2.4 Linux x86-32 
 jvmxi3260sr10-20111207_96808 (JIT enabled, AOT enabled)
 J9VM - 20111207_096808
 JIT  - r9_2007_21307ifx1
 GC   - 20110519_AA)
 JCL  - 2004_02
Reporter: Devaraj Das

 Some tests fail with IBM's JDK. They are 
 org.apache.hadoop.mapred.lib.TestCombineFileInputFormat, 
 org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat, 
 org.apache.hadoop.streaming.TestStreamingBadRecords, 
 org.apache.hadoop.mapred.TestCapacityScheduler. This jira is to track fixing 
 these.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8202) stopproxy() is not closing the proxies correctly

2012-03-23 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13236996#comment-13236996
 ] 

Aaron T. Myers commented on HADOOP-8202:


bq. Not necessary - all invocation handlers in Hadoop are RpcInvocationHandlers 
and they implement Closeable.

Can you guarantee that all future invocation handlers in Hadoop will implement 
Closeable?

bq. I think this seems wholly unnecessary. Currently stopProxy is completely 
failing. I know that multiple times things have changed in this part of the 
code, and has been broken for some time.

This is a great reason to add a test - so that such things don't regress in the 
future.

bq. This is your coding style. I am not sure if it should be followed by every 
one.

Sure, it's a preference, but that in itself isn't a good reason to not address 
the comment. Can you comment on why it's better to split the reasons for an 
error across several log statements?

bq. I want to look at the ProtocolTranslator and why it is needed in detail. In 
this part of the code clearly ProtocolTranslator is not required.

If you have a better way to implement RPC.getServerAddress, I'm happy to hear 
it.

bq. The test was TestBackupNode.

Thanks for the info, Hari.

 stopproxy() is not closing the proxies correctly
 

 Key: HADOOP-8202
 URL: https://issues.apache.org/jira/browse/HADOOP-8202
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.24.0
Reporter: Hari Mankude
Assignee: Hari Mankude
Priority: Minor
 Attachments: HADOOP-8202.patch, HADOOP-8202.patch


 I was running testbackupnode and noticed that NNprotocol proxy was not being 
 closed. Talked with Suresh and he observed that most of the protocols do not 
 implement ProtocolTranslator and hence the logic in stopproxy() does not 
 work. Instead, since all of them are closeable, Suresh suggested that 
 closeable property should be used at close.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8202) stopproxy() is not closing the proxies correctly

2012-03-23 Thread Hari Mankude (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Mankude updated HADOOP-8202:
-

Attachment: HADOOP-8202-1.patch

 stopproxy() is not closing the proxies correctly
 

 Key: HADOOP-8202
 URL: https://issues.apache.org/jira/browse/HADOOP-8202
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.24.0
Reporter: Hari Mankude
Assignee: Hari Mankude
Priority: Minor
 Attachments: HADOOP-8202-1.patch, HADOOP-8202.patch, HADOOP-8202.patch


 I was running testbackupnode and noticed that NNprotocol proxy was not being 
 closed. Talked with Suresh and he observed that most of the protocols do not 
 implement ProtocolTranslator and hence the logic in stopproxy() does not 
 work. Instead, since all of them are closeable, Suresh suggested that 
 closeable property should be used at close.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8202) stopproxy() is not closing the proxies correctly

2012-03-23 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13237005#comment-13237005
 ] 

Hadoop QA commented on HADOOP-8202:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12519700/HADOOP-8202-1.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/758//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/758//console

This message is automatically generated.

 stopproxy() is not closing the proxies correctly
 

 Key: HADOOP-8202
 URL: https://issues.apache.org/jira/browse/HADOOP-8202
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.24.0
Reporter: Hari Mankude
Assignee: Hari Mankude
Priority: Minor
 Attachments: HADOOP-8202-1.patch, HADOOP-8202.patch, HADOOP-8202.patch


 I was running testbackupnode and noticed that NNprotocol proxy was not being 
 closed. Talked with Suresh and he observed that most of the protocols do not 
 implement ProtocolTranslator and hence the logic in stopproxy() does not 
 work. Instead, since all of them are closeable, Suresh suggested that 
 closeable property should be used at close.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8163) Improve ActiveStandbyElector to provide hooks for fencing old active

2012-03-23 Thread Bikas Saha (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13237088#comment-13237088
 ] 

Bikas Saha commented on HADOOP-8163:


Looks good!
+1
Thanks!


 Improve ActiveStandbyElector to provide hooks for fencing old active
 

 Key: HADOOP-8163
 URL: https://issues.apache.org/jira/browse/HADOOP-8163
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ha
Affects Versions: 0.23.3, 0.24.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hadoop-8163.txt, hadoop-8163.txt, hadoop-8163.txt, 
 hadoop-8163.txt, hadoop-8163.txt


 When a new node becomes active in an HA setup, it may sometimes have to take 
 fencing actions against the node that was formerly active. This JIRA extends 
 the ActiveStandbyElector to add an extra non-ephemeral node to the ZK 
 directory, which acts as a second copy of the active node's information. 
 Then, if the active loses its ZK session, the next active to be elected can 
 easily locate the unfenced node and take the appropriate actions.
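
(Illustration only: a minimal sketch of the breadcrumb idea using the plain 
ZooKeeper client API. The znode paths and data layout below are assumptions, 
not the ones the actual ActiveStandbyElector uses.)

{code}
// Illustrative sketch of the "breadcrumb" idea only; the real
// ActiveStandbyElector znode names and data format may differ.
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class BreadcrumbSketch {
  static final String LOCK = "/election/ActiveStandbyElectorLock";   // ephemeral
  static final String BREADCRUMB = "/election/ActiveBreadcrumb";     // persistent

  static void becomeActive(ZooKeeper zk, byte[] myInfo)
      throws KeeperException, InterruptedException {
    // Ephemeral lock: disappears if this node's ZK session dies.
    zk.create(LOCK, myInfo, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);
    // Persistent breadcrumb: survives the session, so the next active
    // can read it and fence the old active before taking over.
    zk.create(BREADCRUMB, myInfo, ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
  }

  static byte[] readOldActive(ZooKeeper zk)
      throws KeeperException, InterruptedException {
    return zk.getData(BREADCRUMB, false, null);   // who needs fencing?
  }
}
{code}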

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8204) TestHealthMonitor fails occasionally

2012-03-23 Thread Tom White (Created) (JIRA)
TestHealthMonitor fails occasionally 
-

 Key: HADOOP-8204
 URL: https://issues.apache.org/jira/browse/HADOOP-8204
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Tom White


See e.g. 
https://builds.apache.org/job/PreCommit-HADOOP-Build/756//testReport/org.apache.hadoop.ha/TestHealthMonitor/testMonitor/

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8204) TestHealthMonitor fails occasionally

2012-03-23 Thread Todd Lipcon (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-8204:


Target Version/s: 0.23.3, 0.24.0

 TestHealthMonitor fails occasionally 
 -

 Key: HADOOP-8204
 URL: https://issues.apache.org/jira/browse/HADOOP-8204
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Tom White
Assignee: Todd Lipcon

 See e.g. 
 https://builds.apache.org/job/PreCommit-HADOOP-Build/756//testReport/org.apache.hadoop.ha/TestHealthMonitor/testMonitor/

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (HADOOP-8204) TestHealthMonitor fails occasionally

2012-03-23 Thread Todd Lipcon (Assigned) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8204?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon reassigned HADOOP-8204:
---

Assignee: Todd Lipcon

 TestHealthMonitor fails occasionally 
 -

 Key: HADOOP-8204
 URL: https://issues.apache.org/jira/browse/HADOOP-8204
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Tom White
Assignee: Todd Lipcon

 See e.g. 
 https://builds.apache.org/job/PreCommit-HADOOP-Build/756//testReport/org.apache.hadoop.ha/TestHealthMonitor/testMonitor/

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Assigned] (HADOOP-8139) Path does not allow metachars to be escaped

2012-03-23 Thread Daryn Sharp (Assigned) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8139?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp reassigned HADOOP-8139:
---

Assignee: (was: Daryn Sharp)

I do not have the resources or knowledge necessary to test on Windows.  I hope 
a Windows user will find one of my patches useful.

 Path does not allow metachars to be escaped
 ---

 Key: HADOOP-8139
 URL: https://issues.apache.org/jira/browse/HADOOP-8139
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.0, 0.24.0
Reporter: Daryn Sharp
Priority: Blocker
 Attachments: HADOOP-8139-2.patch, HADOOP-8139-3.patch, 
 HADOOP-8139-4.patch, HADOOP-8139-5.patch, HADOOP-8139-6.patch, 
 HADOOP-8139.patch, HADOOP-8139.patch


 Path converts \ into /, probably for windows support?  This means it's 
 impossible for the user to escape metachars in a path name.  Glob expansion 
 can have deadly results.
 Here are the most egregious examples. A user accidentally creates a path like 
 /user/me/*/file.  Now they want to remove it.
 {noformat}hadoop fs -rmr -skipTrash '/user/me/\*' becomes...
 hadoop fs -rmr -skipTrash /user/me/*{noformat}
 * User/Admin: Nuked their home directory or any given directory
 {noformat}hadoop fs -rmr -skipTrash '\*' becomes...
 hadoop fs -rmr -skipTrash /*{noformat}
 * User:  Deleted _everything_ they have access to on the cluster
 * Admin: *Nukes the entire cluster*
 Note: FsShell is shown for illustrative purposes, however the problem is in 
 the Path object, not FsShell.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7030) Add TableMapping topology implementation to read host to rack mapping from a file

2012-03-23 Thread Tom White (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-7030:
--

Target Version/s: 1.1.0, 0.23.3  (was: 0.23.3, 1.1.0)
 Summary: Add TableMapping topology implementation to read host to 
rack mapping from a file  (was: new topology mapping implementations)

 Add TableMapping topology implementation to read host to rack mapping from a 
 file
 -

 Key: HADOOP-7030
 URL: https://issues.apache.org/jira/browse/HADOOP-7030
 Project: Hadoop Common
  Issue Type: New Feature
Affects Versions: 0.20.1, 0.20.2, 0.21.0
Reporter: Patrick Angeles
Assignee: Tom White
 Attachments: HADOOP-7030-2.patch, HADOOP-7030.patch, 
 HADOOP-7030.patch, HADOOP-7030.patch, HADOOP-7030.patch, topology.patch


 The default ScriptBasedMapping implementation of DNSToSwitchMapping for 
 determining cluster topology has some drawbacks. Principally, it forks to an 
 OS-specific script.
 This issue proposes two new Java implementations of DNSToSwitchMapping. 
 TableMapping reads a two column text file that maps an IP or hostname to a 
 rack ID. Ip4RangeMapping reads a three column text file where each line 
 represents a start and end IP range plus a rack ID.
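
(Illustration only: a minimal table-driven resolver in the spirit of 
TableMapping. This is not the committed class; the whitespace-separated file 
format and the /default-rack fallback below are assumptions.)

{code}
// Minimal table-driven resolver sketch -- not the committed TableMapping
// implementation; file format and default-rack handling are assumptions.
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class SimpleTableMapping {
  private final Map<String, String> table = new HashMap<String, String>();

  public SimpleTableMapping(String fileName) throws IOException {
    BufferedReader in = new BufferedReader(new FileReader(fileName));
    try {
      String line;
      while ((line = in.readLine()) != null) {
        String[] cols = line.trim().split("\\s+");   // "host rack" per line
        if (cols.length == 2) {
          table.put(cols[0], cols[1]);
        }
      }
    } finally {
      in.close();
    }
  }

  /** Resolve hosts/IPs to rack IDs, in the style of DNSToSwitchMapping#resolve. */
  public List<String> resolve(List<String> names) {
    List<String> racks = new ArrayList<String>();
    for (String name : names) {
      String rack = table.get(name);
      racks.add(rack != null ? rack : "/default-rack");  // assumed fallback
    }
    return racks;
  }
}
{code}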

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-6387) FsShell -getmerge source file pattern is broken

2012-03-23 Thread Daryn Sharp (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6387?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-6387:


   Resolution: Duplicate
Fix Version/s: 0.23.2
 Assignee: Daryn Sharp  (was: XieXianshan)
   Status: Resolved  (was: Patch Available)

 FsShell -getmerge source file pattern is broken
 ---

 Key: HADOOP-6387
 URL: https://issues.apache.org/jira/browse/HADOOP-6387
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.0
Reporter: Eli Collins
Assignee: Daryn Sharp
Priority: Minor
 Fix For: 0.23.2, 0.24.0

 Attachments: HADOOP-6387.patch


 The FsShell -getmerge command doesn't work if the source file pattern 
 matches files. See below. If the current behavior is intended then we should 
 update the help documentation and java docs to match, but it would be nice if 
 the user could specify a set of files in a directory rather than just 
 directories.
 {code}
 $ hadoop fs -help getmerge
 -getmerge src localdst:  Get all the files in the directories that 
   match the source file pattern and merge and sort them to only
   one file on local fs. src is kept.
 $ hadoop fs -ls
 Found 3 items
 -rw-r--r--   1 eli supergroup  2 2009-11-23 17:39 /user/eli/1.txt
 -rw-r--r--   1 eli supergroup  2 2009-11-23 17:39 /user/eli/2.txt
 -rw-r--r--   1 eli supergroup  2 2009-11-23 17:39 /user/eli/3.txt
 $ hadoop fs -getmerge /user/eli/*.txt sorted.txt
 $ cat sorted.txt
 cat: sorted.txt: No such file or directory
 $ hadoop fs -getmerge /user/eli/* sorted.txt
 $ cat sorted.txt
 cat: sorted.txt: No such file or directory
 $ hadoop fs -getmerge /user/* sorted.txt
 $ cat sorted.txt 
 1
 2
 3
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7030) Add TableMapping topology implementation to read host to rack mapping from a file

2012-03-23 Thread Tom White (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-7030:
--

  Resolution: Fixed
   Fix Version/s: 0.23.3
Target Version/s: 1.1.0, 0.23.3  (was: 0.23.3, 1.1.0)
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

I just committed this. Thanks Patrick!

I opened HADOOP-8204 for the test failure, which was unrelated to the change.

 Add TableMapping topology implementation to read host to rack mapping from a 
 file
 -

 Key: HADOOP-7030
 URL: https://issues.apache.org/jira/browse/HADOOP-7030
 Project: Hadoop Common
  Issue Type: New Feature
Affects Versions: 0.20.1, 0.20.2, 0.21.0
Reporter: Patrick Angeles
Assignee: Tom White
 Fix For: 0.23.3

 Attachments: HADOOP-7030-2.patch, HADOOP-7030.patch, 
 HADOOP-7030.patch, HADOOP-7030.patch, HADOOP-7030.patch, topology.patch


 The default ScriptBasedMapping implementation of DNSToSwitchMapping for 
 determining cluster topology has some drawbacks. Principally, it forks to an 
 OS-specific script.
 This issue proposes two new Java implementations of DNSToSwitchMapping. 
 TableMapping reads a two column text file that maps an IP or hostname to a 
 rack ID. Ip4RangeMapping reads a three column text file where each line 
 represents a start and end IP range plus a rack ID.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7030) Add TableMapping topology implementation to read host to rack mapping from a file

2012-03-23 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13237130#comment-13237130
 ] 

Hudson commented on HADOOP-7030:


Integrated in Hadoop-Hdfs-0.23-Commit #711 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Commit/711/])
Merge -r 1304596:1304597 from trunk to branch-0.23. Fixes: HADOOP-7030 
(Revision 1304599)

 Result = SUCCESS
tomwhite : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1304599
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/TableMapping.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestTableMapping.java


 Add TableMapping topology implementation to read host to rack mapping from a 
 file
 -

 Key: HADOOP-7030
 URL: https://issues.apache.org/jira/browse/HADOOP-7030
 Project: Hadoop Common
  Issue Type: New Feature
Affects Versions: 0.20.1, 0.20.2, 0.21.0
Reporter: Patrick Angeles
Assignee: Tom White
 Fix For: 0.23.3

 Attachments: HADOOP-7030-2.patch, HADOOP-7030.patch, 
 HADOOP-7030.patch, HADOOP-7030.patch, HADOOP-7030.patch, topology.patch


 The default ScriptBasedMapping implementation of DNSToSwitchMapping for 
 determining cluster topology has some drawbacks. Principally, it forks to an 
 OS-specific script.
 This issue proposes two new Java implementations of DNSToSwitchMapping. 
 TableMapping reads a two column text file that maps an IP or hostname to a 
 rack ID. Ip4RangeMapping reads a three column text file where each line 
 represents a start and end IP range plus a rack ID.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7030) Add TableMapping topology implementation to read host to rack mapping from a file

2012-03-23 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13237137#comment-13237137
 ] 

Hudson commented on HADOOP-7030:


Integrated in Hadoop-Common-0.23-Commit #721 (See 
[https://builds.apache.org/job/Hadoop-Common-0.23-Commit/721/])
Merge -r 1304596:1304597 from trunk to branch-0.23. Fixes: HADOOP-7030 
(Revision 1304599)

 Result = SUCCESS
tomwhite : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1304599
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/TableMapping.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestTableMapping.java


 Add TableMapping topology implementation to read host to rack mapping from a 
 file
 -

 Key: HADOOP-7030
 URL: https://issues.apache.org/jira/browse/HADOOP-7030
 Project: Hadoop Common
  Issue Type: New Feature
Affects Versions: 0.20.1, 0.20.2, 0.21.0
Reporter: Patrick Angeles
Assignee: Tom White
 Fix For: 0.23.3

 Attachments: HADOOP-7030-2.patch, HADOOP-7030.patch, 
 HADOOP-7030.patch, HADOOP-7030.patch, HADOOP-7030.patch, topology.patch


 The default ScriptBasedMapping implementation of DNSToSwitchMapping for 
 determining cluster topology has some drawbacks. Principally, it forks to an 
 OS-specific script.
 This issue proposes two new Java implementations of DNSToSwitchMapping. 
 TableMapping reads a two column text file that maps an IP or hostname to a 
 rack ID. Ip4RangeMapping reads a three column text file where each line 
 represents a start and end IP range plus a rack ID.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8163) Improve ActiveStandbyElector to provide hooks for fencing old active

2012-03-23 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13237151#comment-13237151
 ] 

Aaron T. Myers commented on HADOOP-8163:


I just reviewed the diff from the latest patch to the one I last reviewed, and 
the changes look good.

+1 pending Jenkins.

 Improve ActiveStandbyElector to provide hooks for fencing old active
 

 Key: HADOOP-8163
 URL: https://issues.apache.org/jira/browse/HADOOP-8163
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ha
Affects Versions: 0.23.3, 0.24.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hadoop-8163.txt, hadoop-8163.txt, hadoop-8163.txt, 
 hadoop-8163.txt, hadoop-8163.txt


 When a new node becomes active in an HA setup, it may sometimes have to take 
 fencing actions against the node that was formerly active. This JIRA extends 
 the ActiveStandbyElector to add an extra non-ephemeral node to the ZK 
 directory, which acts as a second copy of the active node's information. 
 Then, if the active loses its ZK session, the next active to be elected can 
 easily locate the unfenced node and take the appropriate actions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7030) Add TableMapping topology implementation to read host to rack mapping from a file

2012-03-23 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13237149#comment-13237149
 ] 

Hudson commented on HADOOP-7030:


Integrated in Hadoop-Common-trunk-Commit #1922 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1922/])
HADOOP-7030. Add TableMapping topology implementation to read host to rack 
mapping from a file. Contributed by Patrick Angeles and tomwhite. (Revision 
1304597)

 Result = SUCCESS
tomwhite : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1304597
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/TableMapping.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestTableMapping.java


 Add TableMapping topology implementation to read host to rack mapping from a 
 file
 -

 Key: HADOOP-7030
 URL: https://issues.apache.org/jira/browse/HADOOP-7030
 Project: Hadoop Common
  Issue Type: New Feature
Affects Versions: 0.20.1, 0.20.2, 0.21.0
Reporter: Patrick Angeles
Assignee: Tom White
 Fix For: 0.23.3

 Attachments: HADOOP-7030-2.patch, HADOOP-7030.patch, 
 HADOOP-7030.patch, HADOOP-7030.patch, HADOOP-7030.patch, topology.patch


 The default ScriptBasedMapping implementation of DNSToSwitchMapping for 
 determining cluster topology has some drawbacks. Principally, it forks to an 
 OS-specific script.
 This issue proposes two new Java implementations of DNSToSwitchMapping. 
 TableMapping reads a two column text file that maps an IP or hostname to a 
 rack ID. Ip4RangeMapping reads a three column text file where each line 
 represents a start and end IP range plus a rack ID.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7030) Add TableMapping topology implementation to read host to rack mapping from a file

2012-03-23 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13237154#comment-13237154
 ] 

Hudson commented on HADOOP-7030:


Integrated in Hadoop-Hdfs-trunk-Commit #1996 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/1996/])
HADOOP-7030. Add TableMapping topology implementation to read host to rack 
mapping from a file. Contributed by Patrick Angeles and tomwhite. (Revision 
1304597)

 Result = SUCCESS
tomwhite : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1304597
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/TableMapping.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestTableMapping.java


 Add TableMapping topology implementation to read host to rack mapping from a 
 file
 -

 Key: HADOOP-7030
 URL: https://issues.apache.org/jira/browse/HADOOP-7030
 Project: Hadoop Common
  Issue Type: New Feature
Affects Versions: 0.20.1, 0.20.2, 0.21.0
Reporter: Patrick Angeles
Assignee: Tom White
 Fix For: 0.23.3

 Attachments: HADOOP-7030-2.patch, HADOOP-7030.patch, 
 HADOOP-7030.patch, HADOOP-7030.patch, HADOOP-7030.patch, topology.patch


 The default ScriptBasedMapping implementation of DNSToSwitchMapping for 
 determining cluster topology has some drawbacks. Principally, it forks to an 
 OS-specific script.
 This issue proposes two new Java implementations of DNSToSwitchMapping. 
 TableMapping reads a two column text file that maps an IP or hostname to a 
 rack ID. Ip4RangeMapping reads a three column text file where each line 
 represents a start and end IP range plus a rack ID.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8202) stopproxy() is not closing the proxies correctly

2012-03-23 Thread Suresh Srinivas (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13237207#comment-13237207
 ] 

Suresh Srinivas commented on HADOOP-8202:
-

The problem with the code as it was written is that it silently ignored the 
error: it just printed a log message and did not surface the failure. That is 
what is being fixed now.

bq. Can you guarantee that all future invocation handlers in Hadoop will 
implement Closeable?
Write the code so that either the proxy is closeable or it has an invocation 
handler that is closeable. If that is not the case, then it is a programming 
error! Throw a RuntimeException so it is found early and not silently ignored.

bq. This is a great reason to add a test - so that such things don't regress in 
the future.
HADOOP-7607 did not add tests either; hence this bug. I suggest we practice 
what we preach!

On a related note, I am also not happy that, for simple changes, we keep 
mandating unit tests. Sometimes it is okay to use judgement and not add 
unnecessary tests. Adding useless tests comes at a cost. When we did the 
federation feature, we spent half the time fixing poorly documented and lame 
tests. It is not that these tests were finding bugs; they simply did not work 
well with the code changes.

That said, Hari, if you want you can add tests.

bq. Sure, it's a preference, but that in itself isn't a good reason to not 
address the comment. Can you comment on why it's better to split the reasons 
for an error across several log statements?
You cannot argue matters of taste. Is the code incorrect? If not, I like this 
code better, because I do not have to handle the exception twice; I handle it 
in one single place around the closeable. I am observing that our code reviews 
are becoming too strict. Not every patch I review should look like the code I 
would write. As long as it is correct and follows coding standards, it should 
be good. I have been seeing comments these days asking whether a variable can 
be named ioe instead of e. I believe we should relax these.

bq. If you have a better way to implement RPC.getServerAddress, I'm happy to 
hear it.
Perhaps this is the only way; I need to look into it. But that is not required 
while stopping a proxy. It is orthogonal.

Hari, after thinking about it a bit, I believe we should throw 
HadoopIllegalArgumentException if either the proxy is not closeable or it does 
not have an invocation handler.
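
(Illustration only: a minimal sketch of the close logic being discussed, not 
the committed HADOOP-8202 patch. A plain IllegalArgumentException stands in 
for HadoopIllegalArgumentException to keep the sketch self-contained.)

{code}
// Sketch of the suggested behaviour only -- not the committed HADOOP-8202 code.
import java.io.Closeable;
import java.io.IOException;
import java.lang.reflect.Proxy;

public class StopProxySketch {
  public static void stop(Object proxy) throws IOException {
    if (proxy instanceof Closeable) {
      ((Closeable) proxy).close();                 // proxy itself is closeable
      return;
    }
    if (Proxy.isProxyClass(proxy.getClass())) {
      Object handler = Proxy.getInvocationHandler(proxy);
      if (handler instanceof Closeable) {
        ((Closeable) handler).close();             // close via its handler
        return;
      }
    }
    // Neither closeable nor a proxy with a closeable handler: treat it as a
    // programming error instead of silently logging it.
    throw new IllegalArgumentException(
        "Cannot close proxy " + proxy.getClass() + ": not closeable");
  }
}
{code}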


 stopproxy() is not closing the proxies correctly
 

 Key: HADOOP-8202
 URL: https://issues.apache.org/jira/browse/HADOOP-8202
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.24.0
Reporter: Hari Mankude
Assignee: Hari Mankude
Priority: Minor
 Attachments: HADOOP-8202-1.patch, HADOOP-8202.patch, HADOOP-8202.patch


 I was running testbackupnode and noticed that NNprotocol proxy was not being 
 closed. Talked with Suresh and he observed that most of the protocols do not 
 implement ProtocolTranslator and hence the logic in stopproxy() does not 
 work. Instead, since all of them are closeable, Suresh suggested that 
 closeable property should be used at close.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7030) Add TableMapping topology implementation to read host to rack mapping from a file

2012-03-23 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13237213#comment-13237213
 ] 

Hudson commented on HADOOP-7030:


Integrated in Hadoop-Mapreduce-trunk-Commit #1931 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/1931/])
HADOOP-7030. Add TableMapping topology implementation to read host to rack 
mapping from a file. Contributed by Patrick Angeles and tomwhite. (Revision 
1304597)

 Result = ABORTED
tomwhite : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1304597
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/TableMapping.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestTableMapping.java


 Add TableMapping topology implementation to read host to rack mapping from a 
 file
 -

 Key: HADOOP-7030
 URL: https://issues.apache.org/jira/browse/HADOOP-7030
 Project: Hadoop Common
  Issue Type: New Feature
Affects Versions: 0.20.1, 0.20.2, 0.21.0
Reporter: Patrick Angeles
Assignee: Tom White
 Fix For: 0.23.3

 Attachments: HADOOP-7030-2.patch, HADOOP-7030.patch, 
 HADOOP-7030.patch, HADOOP-7030.patch, HADOOP-7030.patch, topology.patch


 The default ScriptBasedMapping implementation of DNSToSwitchMapping for 
 determining cluster topology has some drawbacks. Principally, it forks to an 
 OS-specific script.
 This issue proposes two new Java implementations of DNSToSwitchMapping. 
 TableMapping reads a two column text file that maps an IP or hostname to a 
 rack ID. Ip4RangeMapping reads a three column text file where each line 
 represents a start and end IP range plus a rack ID.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8088) User-group mapping cache incorrectly does negative caching on transient failures

2012-03-23 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8088?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13237215#comment-13237215
 ] 

Hadoop QA commented on HADOOP-8088:
---

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12518767/hadoop-8088-trunk.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/760//console

This message is automatically generated.

 User-group mapping cache incorrectly does negative caching on transient 
 failures
 

 Key: HADOOP-8088
 URL: https://issues.apache.org/jira/browse/HADOOP-8088
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 0.20.205.0, 1.0.0, 1.1.0, 0.23.1, 0.24.0
Reporter: Kihwal Lee
 Fix For: 1.0.2, 0.23.2, 0.24.0

 Attachments: hadoop-8088-branch-1.patch, hadoop-8088-branch-1.patch, 
 hadoop-8088-trunk.patch, hadoop-8088-trunk.patch, hadoop-8088-trunk.patch


 We've seen a case where some getGroups() calls fail when the LDAP server or 
 the network is having transient failures. Looking at the code, the 
 shell-based and the JNI-based implementations swallow exceptions and return 
 an empty or partial list. The caller, Groups#getGroups(), adds this likely 
 empty list into the mapping cache for the user. This functions as 
 negative caching until the cache expires. I don't think we want negative 
 caching here, but even if we do, it should be intelligent enough to 
 distinguish transient failures from ENOENT. The log message in the JNI-based 
 impl also needs improvement: it should print what exception it encountered 
 instead of just saying one happened.
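
(Illustration only: a sketch of the distinction being asked for. The 
GroupLookup interface and class names below are hypothetical, not the real 
Groups/ShellBasedUnixGroupsMapping code.)

{code}
// Illustrative only -- not the real Groups/ShellBasedUnixGroupsMapping code.
import java.io.IOException;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class GroupCacheSketch {
  interface GroupLookup {                          // hypothetical provider
    List<String> getGroups(String user) throws IOException;
  }

  private final Map<String, List<String>> cache =
      new ConcurrentHashMap<String, List<String>>();
  private final GroupLookup lookup;

  public GroupCacheSketch(GroupLookup lookup) { this.lookup = lookup; }

  public List<String> getGroups(String user) throws IOException {
    List<String> cached = cache.get(user);
    if (cached != null) {
      return cached;
    }
    // Let transient failures (LDAP/network) propagate as exceptions instead
    // of being swallowed into an empty list that then gets cached.
    List<String> groups = lookup.getGroups(user);
    if (!groups.isEmpty()) {
      cache.put(user, groups);   // only cache a real (positive) answer
    }
    return groups;
  }
}
{code}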

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7689) Process cannot exit when there is many RPC readers more that actual client

2012-03-23 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7689?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13237219#comment-13237219
 ] 

Hadoop QA commented on HADOOP-7689:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12517661/HADOOP-7689.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/761//console

This message is automatically generated.

 Process cannot exit when there is many RPC readers more that actual client
 --

 Key: HADOOP-7689
 URL: https://issues.apache.org/jira/browse/HADOOP-7689
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.21.0
 Environment: local
Reporter: Denny Ye
  Labels: rpc
 Attachments: HADOOP-7689.patch


 I met a strange issue where the process cannot exit when I run RPC test cases 
 in my Eclipse.
 Conditions:
 1. Only one Server and one client (local)
 2. I have configured more readers (conf.setInt("ipc.server.read.threadpool.size", 
 5)) than there are clients.
 After any of these test cases, the process cannot exit. I tested several cases 
 and found the root cause.
 RPC serves the socket with readers (transferring binary data into Call 
 objects), and even after the thread pool is shut down, all the free readers 
 stay blocked at readSelector.select() (the Listener never uses them), so those 
 threads, and hence the process, can never exit.
 It can be fixed by invoking the corresponding selector for each reader; the 
 same thing was done in the 0.20 version.
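
(Illustration only: a small sketch of the kind of fix described above, not the 
actual org.apache.hadoop.ipc.Server reader code.)

{code}
// Sketch of the idea only -- not the actual org.apache.hadoop.ipc.Server code.
import java.io.IOException;
import java.nio.channels.Selector;

class ReaderSketch implements Runnable {
  private final Selector selector;
  private volatile boolean running = true;

  ReaderSketch() throws IOException {
    this.selector = Selector.open();
  }

  public void run() {
    while (running) {
      try {
        selector.select();        // a free reader blocks here indefinitely
        // ... drain selected keys and hand connections to Call processing ...
      } catch (IOException e) {
        break;                    // log and stop on I/O errors
      }
    }
    try {
      selector.close();           // clean up once the loop has exited
    } catch (IOException ignored) {
    }
  }

  /** Called on shutdown: without the wakeup() the thread never leaves select(). */
  void shutdown() {
    running = false;
    selector.wakeup();            // unblock select() so the thread can exit
  }
}
{code}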
  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7739) Reconcile FileUtil and SecureIOUtils APIs between 20x and trunk

2012-03-23 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7739?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13237225#comment-13237225
 ] 

Hadoop QA commented on HADOOP-7739:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12516944/HADOOP-7739.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/762//console

This message is automatically generated.

 Reconcile FileUtil and SecureIOUtils APIs between 20x and trunk
 ---

 Key: HADOOP-7739
 URL: https://issues.apache.org/jira/browse/HADOOP-7739
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 0.23.0
Reporter: Todd Lipcon
Assignee: Csaba Miklos
 Attachments: HADOOP-7739.patch


 The 0.20.20x has introduced various public APIs to these classes which aren't 
 in trunk. For example, FileUtil.setPermission exists in 20x but not trunk. If 
 people start to depend on these public APIs, they will break when they 
 upgrade.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8163) Improve ActiveStandbyElector to provide hooks for fencing old active

2012-03-23 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13237224#comment-13237224
 ] 

Hadoop QA commented on HADOOP-8163:
---

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12519696/hadoop-8163.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 6 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/759//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/759//console

This message is automatically generated.

 Improve ActiveStandbyElector to provide hooks for fencing old active
 

 Key: HADOOP-8163
 URL: https://issues.apache.org/jira/browse/HADOOP-8163
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ha
Affects Versions: 0.23.3, 0.24.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hadoop-8163.txt, hadoop-8163.txt, hadoop-8163.txt, 
 hadoop-8163.txt, hadoop-8163.txt


 When a new node becomes active in an HA setup, it may sometimes have to take 
 fencing actions against the node that was formerly active. This JIRA extends 
 the ActiveStandbyElector to add an extra non-ephemeral node to the ZK 
 directory, which acts as a second copy of the active node's information. 
 Then, if the active loses its ZK session, the next active to be elected can 
 easily locate the unfenced node and take the appropriate actions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7030) Add TableMapping topology implementation to read host to rack mapping from a file

2012-03-23 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7030?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13237227#comment-13237227
 ] 

Hudson commented on HADOOP-7030:


Integrated in Hadoop-Mapreduce-0.23-Commit #729 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Commit/729/])
Merge -r 1304596:1304597 from trunk to branch-0.23. Fixes: HADOOP-7030 
(Revision 1304599)

 Result = ABORTED
tomwhite : 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVNview=revrev=1304599
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/CommonConfigurationKeysPublic.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/TableMapping.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/resources/core-default.xml
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestTableMapping.java


 Add TableMapping topology implementation to read host to rack mapping from a 
 file
 -

 Key: HADOOP-7030
 URL: https://issues.apache.org/jira/browse/HADOOP-7030
 Project: Hadoop Common
  Issue Type: New Feature
Affects Versions: 0.20.1, 0.20.2, 0.21.0
Reporter: Patrick Angeles
Assignee: Tom White
 Fix For: 0.23.3

 Attachments: HADOOP-7030-2.patch, HADOOP-7030.patch, 
 HADOOP-7030.patch, HADOOP-7030.patch, HADOOP-7030.patch, topology.patch


 The default ScriptBasedMapping implementation of DNSToSwitchMapping for 
 determining cluster topology has some drawbacks. Principally, it forks to an 
 OS-specific script.
 This issue proposes two new Java implementations of DNSToSwitchMapping. 
 TableMapping reads a two column text file that maps an IP or hostname to a 
 rack ID. Ip4RangeMapping reads a three column text file where each line 
 represents a start and end IP range plus a rack ID.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-6871) When the value of a configuration key is set to its unresolved form, it causes the IllegalStateException in Configuration.get() stating that substitution depth is too

2012-03-23 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13237243#comment-13237243
 ] 

Hadoop QA commented on HADOOP-6871:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12513183/HADOOP-6871-3.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests:
  org.apache.hadoop.ha.TestHealthMonitor

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/764//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/764//console

This message is automatically generated.

 When the value of a configuration key is set to its unresolved form, it 
 causes the IllegalStateException in Configuration.get() stating that 
 substitution depth is too large.
 -

 Key: HADOOP-6871
 URL: https://issues.apache.org/jira/browse/HADOOP-6871
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Reporter: Arvind Prabhakar
 Attachments: HADOOP-6871-1.patch, HADOOP-6871-2.patch, 
 HADOOP-6871-3.patch, HADOOP-6871.patch


 When a configuration value is set to its unresolved expression string, it 
 leads to recursive substitution attempts in the 
 {{Configuration.substituteVars(String)}} method until the max substitution 
 check kicks in and raises an IllegalStateException indicating that the 
 substitution depth is too large. For example, the configuration key 
 {{foobar}} with a value set to {{$\{foobar\}}} will cause this behavior. 
 While this is not a usual use case, it can happen in build environments where 
 a property value is not specified yet is still passed into the test 
 mechanism, leading to failures due to this limitation.
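
(Illustration only: a toy version of a substitution loop showing how a 
self-referential value exhausts the depth limit. This is not the real 
Configuration.substituteVars implementation; the regex and the limit of 20 are 
assumptions.)

{code}
// Toy substitution loop -- not the real Configuration.substituteVars code.
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class SubstitutionSketch {
  private static final Pattern VAR = Pattern.compile("\\$\\{([^}]+)\\}");
  private static final int MAX_SUBST = 20;          // assumed limit

  static String substitute(Map<String, String> props, String expr) {
    for (int depth = 0; depth < MAX_SUBST; depth++) {
      Matcher m = VAR.matcher(expr);
      if (!m.find()) {
        return expr;                                // nothing left to expand
      }
      String val = props.get(m.group(1));
      if (val == null) {
        return expr;                                // unknown variable: leave as-is
      }
      // "foobar" -> "${foobar}" expands to itself, so this loop never makes
      // progress and the depth limit is what finally fires.
      expr = expr.substring(0, m.start()) + val + expr.substring(m.end());
    }
    throw new IllegalStateException("Variable substitution depth too large: "
        + MAX_SUBST + " " + expr);
  }

  public static void main(String[] args) {
    Map<String, String> props = new HashMap<String, String>();
    props.put("foobar", "${foobar}");               // the self-referential case
    substitute(props, props.get("foobar"));         // throws IllegalStateException
  }
}
{code}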

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7490) Add caching to NativeS3FileSystem

2012-03-23 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13237244#comment-13237244
 ] 

Hadoop QA commented on HADOOP-7490:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12516632/HADOOP-7490-2.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

+1 javadoc.  The javadoc tool did not generate any warning messages.

-1 javac.  The applied patch generated 1017 javac compiler warnings (more 
than the trunk's current 1014 warnings).

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

-1 findbugs.  The patch appears to introduce 2 new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/763//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/763//artifact/trunk/hadoop-common-project/patchprocess/newPatchFindbugsWarningshadoop-common.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/763//console

This message is automatically generated.

 Add caching to NativeS3FileSystem
 -

 Key: HADOOP-7490
 URL: https://issues.apache.org/jira/browse/HADOOP-7490
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 0.20.0
Reporter: Vaibhav Aggarwal
 Attachments: HADOOP-7490-2.patch, HADOOP-7490.patch


 I added ability to cache consecutive results from s3 in NativeS3FileSystem, 
 in order to improve performance.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7490) Add caching to NativeS3FileSystem

2012-03-23 Thread Vaibhav Aggarwal (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7490?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13237248#comment-13237248
 ] 

Vaibhav Aggarwal commented on HADOOP-7490:
--

Hi Robert,

This is the first time I am submitting a patch to Hadoop.
Could you please explain the next steps for me, based on the previous test 
results?

Thanks,
Vaibhav

 Add caching to NativeS3FileSystem
 -

 Key: HADOOP-7490
 URL: https://issues.apache.org/jira/browse/HADOOP-7490
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 0.20.0
Reporter: Vaibhav Aggarwal
 Attachments: HADOOP-7490-2.patch, HADOOP-7490.patch


 I added ability to cache consecutive results from s3 in NativeS3FileSystem, 
 in order to improve performance.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7294) FileUtil uses wrong stat command for FreeBSD

2012-03-23 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13237246#comment-13237246
 ] 

Hadoop QA commented on HADOOP-7294:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12512729/7294-trunk.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/767//console

This message is automatically generated.

 FileUtil uses wrong stat command for FreeBSD
 

 Key: HADOOP-7294
 URL: https://issues.apache.org/jira/browse/HADOOP-7294
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.21.0
 Environment: FreeBSD 8.0-STABLE
Reporter: Vitalii Tymchyshyn
 Attachments: 7294-trunk.patch, patch.diff


 I get the following exception when I try to use append:
 2011-05-16 17:07:54,648 ERROR 
 org.apache.hadoop.hdfs.server.datanode.DataNode: 
 DatanodeRegistration(10.112.0.207:50010, storageID=DS-1047171559-
 10.112.0.207-50010-1302796304164, infoPort=50075, ipcPort=50020):DataXceiver
 java.io.IOException: Failed to get link count on file 
 /var/data/hdfs/data/current/finalized/subdir26/subdir17/subdir55/blk_-1266943884751786595:
  message=null; error=stat: illegal option -- c; exit value=1
 at org.apache.hadoop.fs.FileUtil.createIOException(FileUtil.java:709)
 at org.apache.hadoop.fs.FileUtil.access$000(FileUtil.java:42)
 at 
 org.apache.hadoop.fs.FileUtil$HardLink.getLinkCount(FileUtil.java:682)
 at 
 org.apache.hadoop.hdfs.server.datanode.ReplicaInfo.unlinkBlock(ReplicaInfo.java:215)
 at 
 org.apache.hadoop.hdfs.server.datanode.FSDataset.append(FSDataset.java:1116)
 It seems that FreeBSD is treated like generic UNIX and so 'stat -c%h' is 
 called, while FreeBSD actually behaves much more like Mac OS (since they share 
 the same BSD roots):
 $ stat --help
 stat: illegal option -- -
 usage: stat [-FlLnqrsx] [-f format] [-t timefmt] [file ...]
 $ stat -f%l a_file
 1
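
A minimal sketch (not the attached patch) of the kind of platform check that 
would pick the right link-count invocation; the class and method names here are 
hypothetical:

{code}
public class LinkCountCommand {
  // GNU stat (Linux) reports the hard-link count with -c%h, while
  // BSD-derived stat (FreeBSD, Mac OS X) uses -f%l instead.
  public static String[] getLinkCountCommand(String file) {
    String os = System.getProperty("os.name").toLowerCase();
    if (os.contains("freebsd") || os.contains("mac")) {
      return new String[] {"stat", "-f%l", file};
    }
    return new String[] {"stat", "-c%h", file};
  }
}
{code}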

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-6453) Hadoop wrapper script shouldn't ignore an existing JAVA_LIBRARY_PATH

2012-03-23 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6453?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13237247#comment-13237247
 ] 

Hadoop QA commented on HADOOP-6453:
---

-1 overall.  Here are the results of testing the latest attachment 
  
http://issues.apache.org/jira/secure/attachment/12503143/HADOOP-6453-0.20v3.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/766//console

This message is automatically generated.

 Hadoop wrapper script shouldn't ignore an existing JAVA_LIBRARY_PATH
 

 Key: HADOOP-6453
 URL: https://issues.apache.org/jira/browse/HADOOP-6453
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 0.20.2, 0.21.0, 0.22.0
Reporter: Chad Metcalf
Assignee: Matt Foley
Priority: Minor
 Fix For: 0.22.1

 Attachments: HADOOP-6453-0.20.patch, HADOOP-6453-0.20v2.patch, 
 HADOOP-6453-0.20v3.patch, HADOOP-6453-trunkv2.patch, 
 HADOOP-6453-trunkv3.patch, HADOOP-6453.trunk.patch


 Currently the hadoop wrapper script assumes it is the only place that uses 
 JAVA_LIBRARY_PATH and initializes it to a blank value:
 JAVA_LIBRARY_PATH=''
 This prevents anyone from setting the variable outside of the hadoop wrapper 
 (say, in hadoop-config.sh) for their own native libraries.
 The fix is pretty simple: don't initialize it to '' and append the native 
 libs as normal.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-7030) Add TableMapping topology implementation to read host to rack mapping from a file

2012-03-23 Thread Tom White (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7030?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom White updated HADOOP-7030:
--

Attachment: HADOOP-7030-branch-1.patch

Attaching a backport for branch-1.

 Add TableMapping topology implementation to read host to rack mapping from a 
 file
 -

 Key: HADOOP-7030
 URL: https://issues.apache.org/jira/browse/HADOOP-7030
 Project: Hadoop Common
  Issue Type: New Feature
Affects Versions: 0.20.1, 0.20.2, 0.21.0
Reporter: Patrick Angeles
Assignee: Tom White
 Fix For: 0.23.3

 Attachments: HADOOP-7030-2.patch, HADOOP-7030-branch-1.patch, 
 HADOOP-7030.patch, HADOOP-7030.patch, HADOOP-7030.patch, HADOOP-7030.patch, 
 topology.patch


 The default ScriptBasedMapping implementation of DNSToSwitchMapping for 
 determining cluster topology has some drawbacks. Principally, it forks to an 
 OS-specific script.
 This issue proposes two new Java implementations of DNSToSwitchMapping. 
 TableMapping reads a two-column text file that maps an IP or hostname to a 
 rack ID. Ip4RangeMapping reads a three-column text file where each line 
 represents a start and end IP range plus a rack ID.
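
A rough sketch of the table-driven idea (illustrative only, not the attached 
patch; the exact DNSToSwitchMapping interface varies slightly across branches): 
load the two-column file into a map and fall back to the default rack for 
unknown hosts.

{code}
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.hadoop.net.DNSToSwitchMapping;
import org.apache.hadoop.net.NetworkTopology;

// Simplified host-to-rack mapping read from a "host rack" text file.
public class SimpleTableMapping implements DNSToSwitchMapping {
  private final Map<String, String> map = new HashMap<String, String>();

  public SimpleTableMapping(String fileName) throws IOException {
    BufferedReader in = new BufferedReader(new FileReader(fileName));
    try {
      String line;
      while ((line = in.readLine()) != null) {
        String[] columns = line.trim().split("\\s+");
        if (columns.length == 2) {
          map.put(columns[0], columns[1]);
        }
      }
    } finally {
      in.close();
    }
  }

  @Override
  public List<String> resolve(List<String> names) {
    List<String> racks = new ArrayList<String>(names.size());
    for (String name : names) {
      String rack = map.get(name);
      racks.add(rack != null ? rack : NetworkTopology.DEFAULT_RACK);
    }
    return racks;
  }
}
{code}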

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-7350) Use ServiceLoader to discover compression codec classes

2012-03-23 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7350?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13237250#comment-13237250
 ] 

Hadoop QA commented on HADOOP-7350:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12514395/HADOOP-7350.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/768//console

This message is automatically generated.

 Use ServiceLoader to discover compression codec classes
 ---

 Key: HADOOP-7350
 URL: https://issues.apache.org/jira/browse/HADOOP-7350
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf, io
Reporter: Tom White
Assignee: Tom White
 Attachments: HADOOP-7350.patch, HADOOP-7350.patch, HADOOP-7350.patch, 
 HADOOP-7350.patch, HADOOP-7350.patch, HADOOP-7350.patch, HADOOP-7350.patch


 By using a ServiceLoader users wouldn't have to add codec classes to 
 io.compression.codecs for codecs that aren't shipped with Hadoop (e.g. LZO), 
 since they would be automatically picked up from the classpath.
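
A hedged sketch of the ServiceLoader mechanism the patch relies on (not the 
patch itself): any codec that ships a 
META-INF/services/org.apache.hadoop.io.compress.CompressionCodec entry on the 
classpath is discovered automatically.

{code}
import java.util.ArrayList;
import java.util.List;
import java.util.ServiceLoader;

import org.apache.hadoop.io.compress.CompressionCodec;

public class CodecDiscovery {
  // Collect every CompressionCodec implementation registered via
  // META-INF/services on the classpath.
  public static List<Class<? extends CompressionCodec>> discover() {
    List<Class<? extends CompressionCodec>> codecs =
        new ArrayList<Class<? extends CompressionCodec>>();
    for (CompressionCodec codec : ServiceLoader.load(CompressionCodec.class)) {
      codecs.add(codec.getClass());
    }
    return codecs;
  }
}
{code}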

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-3886) javadoc for Reporter confused?

2012-03-23 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-3886?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13237253#comment-13237253
 ] 

Hadoop QA commented on HADOOP-3886:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12505205/HADOOP-3886.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+0 tests included.  The patch appears to be a documentation patch that 
doesn't require tests.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/771//console

This message is automatically generated.

 javadoc for Reporter confused?
 --

 Key: HADOOP-3886
 URL: https://issues.apache.org/jira/browse/HADOOP-3886
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 0.17.1, 0.23.0
Reporter: brien colwell
Assignee: Jingguo Yao
Priority: Minor
 Attachments: HADOOP-3886.patch


 The javadoc for Reporter says:
 In scenarios where the application takes an insignificant amount of time to 
 process individual key/value pairs
 Shouldn't this read /significant/ instead of insignificant?

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-1381) The distance between sync blocks in SequenceFiles should be configurable rather than hard coded to 2000 bytes

2012-03-23 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-1381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13237257#comment-13237257
 ] 

Hadoop QA commented on HADOOP-1381:
---

+1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12507830/HADOOP-1381.r5.diff
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 7 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

+1 core tests.  The patch passed unit tests in .

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/769//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/769//console

This message is automatically generated.

 The distance between sync blocks in SequenceFiles should be configurable 
 rather than hard coded to 2000 bytes
 -

 Key: HADOOP-1381
 URL: https://issues.apache.org/jira/browse/HADOOP-1381
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Affects Versions: 0.22.0
Reporter: Owen O'Malley
Assignee: Harsh J
 Fix For: 0.24.0

 Attachments: HADOOP-1381.r1.diff, HADOOP-1381.r2.diff, 
 HADOOP-1381.r3.diff, HADOOP-1381.r4.diff, HADOOP-1381.r5.diff


 Currently SequenceFiles put in sync blocks every 2000 bytes. It would be much 
 better if it was configurable with a much higher default (1mb or so?).
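
As a hedged illustration only (the key name and default below are hypothetical, 
not what the attached patches actually use), making the distance configurable 
boils down to reading it from the configuration with a higher default:

{code}
import org.apache.hadoop.conf.Configuration;

public class SyncIntervalConfig {
  // Hypothetical key and default, for illustration only.
  public static final String SYNC_INTERVAL_KEY = "io.seqfile.sync.interval";
  public static final int DEFAULT_SYNC_INTERVAL = 1024 * 1024; // ~1 MB

  public static int getSyncInterval(Configuration conf) {
    return conf.getInt(SYNC_INTERVAL_KEY, DEFAULT_SYNC_INTERVAL);
  }
}
{code}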

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-6802) Remove FS_CLIENT_BUFFER_DIR_KEY = fs.client.buffer.dir from CommonConfigurationKeys.java (not used, deprecated)

2012-03-23 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6802?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13237259#comment-13237259
 ] 

Hadoop QA commented on HADOOP-6802:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12516021/HADOOP-6802.txt
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests:
  org.apache.hadoop.fs.viewfs.TestViewFsTrash

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/765//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/765//console

This message is automatically generated.

 Remove FS_CLIENT_BUFFER_DIR_KEY = fs.client.buffer.dir from 
 CommonConfigurationKeys.java (not used, deprecated)
 -

 Key: HADOOP-6802
 URL: https://issues.apache.org/jira/browse/HADOOP-6802
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.0
Reporter: Erik Steffl
Assignee: Sho Shimauchi
  Labels: newbie
 Attachments: HADOOP-6802.txt, HADOOP-6802.txt


 In CommonConfigurationKeys.java:
 public static final String FS_CLIENT_BUFFER_DIR_KEY = "fs.client.buffer.dir";
 The variable FS_CLIENT_BUFFER_DIR_KEY and the string "fs.client.buffer.dir" are 
 not used anywhere (checked the Hadoop Common, HDFS and MapReduce projects), so 
 it seems they should be removed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8192) Fix unit test failures with IBM's JDK

2012-03-23 Thread Devaraj Das (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13237260#comment-13237260
 ] 

Devaraj Das commented on HADOOP-8192:
-

One option is to make the relevant hashmaps sorted in CombineFileInputFormat, 
and then fix the testcase to check the correct (and consistent) values in the 
asserts. Would that fix the testcase problem? The other option is to make the 
testcase more robust so that it can tolerate the fact that ordering could be 
different on different JVMs. 
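
A hedged illustration of the second option (not the actual test code): compare 
the expected and actual values as sets so the assertion no longer depends on 
HashMap iteration order, which differs between JVMs.

{code}
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

public class OrderIndependentCheck {
  // Instead of asserting that entries appear in one fixed order,
  // compare them as sets so any iteration order passes.
  public static boolean sameEntries(String[] expected, String[] actual) {
    Set<String> e = new HashSet<String>(Arrays.asList(expected));
    Set<String> a = new HashSet<String>(Arrays.asList(actual));
    return e.equals(a);
  }
}
{code}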

 Fix unit test failures with IBM's JDK
 -

 Key: HADOOP-8192
 URL: https://issues.apache.org/jira/browse/HADOOP-8192
 Project: Hadoop Common
  Issue Type: Bug
 Environment: java version 1.6.0
 Java(TM) SE Runtime Environment (build pxi3260sr10-20111208_01(SR10))
 IBM J9 VM (build 2.4, JRE 1.6.0 IBM J9 2.4 Linux x86-32 
 jvmxi3260sr10-20111207_96808 (JIT enabled, AOT enabled)
 J9VM - 20111207_096808
 JIT  - r9_2007_21307ifx1
 GC   - 20110519_AA)
 JCL  - 2004_02
Reporter: Devaraj Das

 Some tests fail with IBM's JDK. They are 
 org.apache.hadoop.mapred.lib.TestCombineFileInputFormat, 
 org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat, 
 org.apache.hadoop.streaming.TestStreamingBadRecords, 
 org.apache.hadoop.mapred.TestCapacityScheduler. This jira is to track fixing 
 these.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8079) Proposal for enhancements to Hadoop for Windows Server and Windows Azure development and runtime environments

2012-03-23 Thread Sanjay Radia (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8079?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13237261#comment-13237261
 ] 

Sanjay Radia commented on HADOOP-8079:
--

A fair number of tests are failing. I suggest that the team work in 
commit-then-review mode on branch-1-win and iterate to improve the solution 
and fix tests until there is a working branch. Comments in the jiras will be 
addressed. Following that, the team can post a set of small trunk patches to 
make review convenient.



 Proposal for enhancements to Hadoop for Windows Server and Windows Azure 
 development and runtime environments
 -

 Key: HADOOP-8079
 URL: https://issues.apache.org/jira/browse/HADOOP-8079
 Project: Hadoop Common
  Issue Type: Improvement
  Components: native
Affects Versions: 1.0.0
Reporter: Alexander Stojanovic
  Labels: hadoop
 Attachments: azurenative.zip, general-utils-windows.patch, 
 hadoop-8079.AzureBlobStore.patch, hadoop-8079.patch, hadoopcmdscripts.zip, 
 mapred-tasks.patch, microsoft-windowsazure-api-0.1.2.jar, security.patch, 
 windows-cmd-scripts.patch

   Original Estimate: 2,016h
  Remaining Estimate: 2,016h

 This JIRA is intended to capture discussion around proposed work to enhance 
 Apache Hadoop to run well on Windows.  Apache Hadoop has worked on Microsoft 
 Windows since its inception, but Windows support has never been a priority. 
 Currently Windows works as a development and testing platform for Hadoop, but 
 Hadoop is not natively integrated, full-featured or performance and 
 scalability tuned for Windows Server or Windows Azure.  We would like to 
 change this and engage in a dialog with the broader community on the 
 architectural design points for making Windows (enterprise and cloud) an 
 excellent runtime and deployment environment for Hadoop.  
  
 The Isotope team at Microsoft (names below) has developed an Apache Hadoop 
 1.0 patch set that addresses these performance, integration and feature gaps, 
 allowing Apache Hadoop to be used with Azure and Windows Server without 
 recourse to virtualization technologies such as Cygwin. We have significant 
 interest in the deployment of Hadoop across many multi-tenant, PaaS and IaaS 
 environments - which bring their own unique requirements. 
 Microsoft has recently completed a CCLA with Apache and would like to 
 contribute these enhancements back to the Apache Hadoop community.
 In the interest of improving Apache Hadoop so that it runs more smoothly on 
 all platforms, including Windows, we propose first contributing this work to 
 the Apache community by attaching it to this JIRA.  From there we would like 
 to work with the community to refine the patch set until it is ready to be 
 merged into the Apache trunk.
 Your feedback solicited,
  
 Alexander Stojanovic
 Min Wei
 David Lao
 Lengning Liu
 David Zhang
 Asad Khan

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8192) Fix unit test failures with IBM's JDK

2012-03-23 Thread Devaraj Das (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8192?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13237262#comment-13237262
 ] 

Devaraj Das commented on HADOOP-8192:
-

Basically somehow make the order of inserts/gets consistent across JVMs by 
using appropriate datastructures...

 Fix unit test failures with IBM's JDK
 -

 Key: HADOOP-8192
 URL: https://issues.apache.org/jira/browse/HADOOP-8192
 Project: Hadoop Common
  Issue Type: Bug
 Environment: java version 1.6.0
 Java(TM) SE Runtime Environment (build pxi3260sr10-20111208_01(SR10))
 IBM J9 VM (build 2.4, JRE 1.6.0 IBM J9 2.4 Linux x86-32 
 jvmxi3260sr10-20111207_96808 (JIT enabled, AOT enabled)
 J9VM - 20111207_096808
 JIT  - r9_2007_21307ifx1
 GC   - 20110519_AA)
 JCL  - 2004_02
Reporter: Devaraj Das

 Some tests fail with IBM's JDK. They are 
 org.apache.hadoop.mapred.lib.TestCombineFileInputFormat, 
 org.apache.hadoop.mapreduce.lib.input.TestCombineFileInputFormat, 
 org.apache.hadoop.streaming.TestStreamingBadRecords, 
 org.apache.hadoop.mapred.TestCapacityScheduler. This jira is to track fixing 
 these.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8202) stopproxy() is not closing the proxies correctly

2012-03-23 Thread Hari Mankude (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13237271#comment-13237271
 ] 

Hari Mankude commented on HADOOP-8202:
--

Including a patch that takes care of the concerns (a sketch of the close logic 
follows the list):

1. Throws HadoopIllegalArgumentException if the proxy is not closeable
2. Throws HadoopIllegalArgumentException if the invocation handler is not 
closeable
3. New test to catch the error scenario when a null proxy is used.
4. Modified the existing negative test so that it does not fail now that the 
exception is thrown.
5. Fixed javadoc comments
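
A rough sketch of that close logic (my reading of the description above, not 
the attached patch):

{code}
import java.io.Closeable;
import java.io.IOException;
import java.lang.reflect.Proxy;

import org.apache.hadoop.HadoopIllegalArgumentException;

public class ProxyCloser {
  // Prefer closing the proxy itself, then its invocation handler;
  // otherwise fail loudly instead of just logging a warning.
  public static void stop(Object proxy) throws IOException {
    if (proxy == null) {
      throw new HadoopIllegalArgumentException("Cannot close a null proxy");
    }
    if (proxy instanceof Closeable) {
      ((Closeable) proxy).close();
      return;
    }
    if (Proxy.isProxyClass(proxy.getClass())
        && Proxy.getInvocationHandler(proxy) instanceof Closeable) {
      ((Closeable) Proxy.getInvocationHandler(proxy)).close();
      return;
    }
    throw new HadoopIllegalArgumentException(
        "Cannot close proxy: neither the proxy nor its invocation handler "
        + "is closeable: " + proxy.getClass().getName());
  }
}
{code}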


 stopproxy() is not closing the proxies correctly
 

 Key: HADOOP-8202
 URL: https://issues.apache.org/jira/browse/HADOOP-8202
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.24.0
Reporter: Hari Mankude
Assignee: Hari Mankude
Priority: Minor
 Attachments: HADOOP-8202-1.patch, HADOOP-8202.patch, HADOOP-8202.patch


 I was running testbackupnode and noticed that NNprotocol proxy was not being 
 closed. Talked with Suresh and he observed that most of the protocols do not 
 implement ProtocolTranslator and hence the logic in stopproxy() does not 
 work. Instead, since all of them are closeable, Suresh suggested that 
 closeable property should be used at close.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8202) stopproxy() is not closing the proxies correctly

2012-03-23 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13237278#comment-13237278
 ] 

Aaron T. Myers commented on HADOOP-8202:


bq. The problem with the code that was written is, it silently ignored the 
error and just printed a log and did not indicate the error. This is what is 
being fixed now.

Note that when HADOOP-7607 was being implemented, it was a conscious decision 
to log a warning instead of throwing, so as to maintain backward compatibility 
with the previous implementation. See [this 
comment|https://issues.apache.org/jira/browse/HADOOP-7607?focusedCommentId=13097236&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13097236].
 We're now making a conscious decision to change that here, which is fine, but 
it should be noted.

bq. HADOOP-7607 did not add tests either. Hence this bug. I suggest, lets 
practice what we preach!

HADOOP-7607 didn't introduce this bug. HADOOP-7607 was just a refactor, which 
is why it had no tests.

This bug was introduced because of the following events:

# It used to be that all proxy objects used by client classes were directly 
referenced by RPC engines. During this time, the code in RPC#stopProxy worked 
just fine, but required users of proxy objects that wrapped other proxy objects 
to hold a reference to the underlying proxy object just for the purpose of 
calling RPC#stopProxy. This is why for a long time DFSClient had two references 
to ClientProtocol objects - one to call methods on, the other to call 
RPC#stopProxy on.
# HADOOP-7607 refactored the code in RPC#stopProxy to actually call close on 
the invocation handler of the proxy object directly, instead of going through 
the RPCEngine. This would allow the invocation handler to bubble this down to 
underlying proxies. This was committed in September of 2011.
# We began to introduce protocol translators for protobuf support in December 
of 2011. This caused the code in RPC#stopProxy to stop actually releasing 
underlying resources when RPC#stopProxy was called with a translator object 
provided as the argument, since the objects we were calling RPC#stopProxy on 
were no longer actual proxy objects, and hence Proxy#getInvocationHandler would 
fail. Unfortunately, no one noticed this until now. The introduction of 
protocol translators also caused RPC#getServerAddress to break, but again no 
one noticed. I introduced the ProtocolTranslator interface because I needed 
RPC#getServerAddress to work in order to implement HDFS-2792.
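
To make step 3 concrete, a hedged sketch of why the translators broke the old 
path: java.lang.reflect.Proxy#getInvocationHandler only accepts dynamic proxy 
instances, so the wrapper has to be unwrapped first (the accessor name below 
follows my understanding of the ProtocolTranslator interface and should be 
treated as an assumption).

{code}
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;

import org.apache.hadoop.ipc.ProtocolTranslator;

public class TranslatorUnwrap {
  // A translator is a plain wrapper object, not a dynamic proxy, so
  // Proxy.getInvocationHandler(translator) throws IllegalArgumentException.
  // Unwrapping first reaches the real proxy and its handler.
  public static InvocationHandler handlerOf(Object obj) {
    Object target = obj;
    if (target instanceof ProtocolTranslator) {
      // Assumed accessor on the ProtocolTranslator interface.
      target = ((ProtocolTranslator) target).getUnderlyingProxyObject();
    }
    return Proxy.getInvocationHandler(target);
  }
}
{code}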

bq. On a related note, I am also not happy for simple changes, we keep 
mandating adding unit tests. Some times, it is okay to use judgement call and 
not add unnecessary tests.

Sure, I agree with you. The question here is what constitutes a simple change.

FWIW, we often add tests for simple changes, not to prove that the new code 
works, but to prevent it from ever regressing in the future. See HDFS-3099 as 
an example.

bq. I am observing that our code reviews are becoming too strict. Not every 
patch I review should look like the code I would write. As long it is correct, 
follows coding standards, it should be good. I have been seeing some comments 
these days, to say, can we call variable name as ioe instead of e. I believe, 
we should relax these.

Sure, they're definitely just nits, but it's not like they're difficult to 
address. It almost certainly would take less time to address these comments 
than it does to argue about them.

 stopproxy() is not closing the proxies correctly
 

 Key: HADOOP-8202
 URL: https://issues.apache.org/jira/browse/HADOOP-8202
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.24.0
Reporter: Hari Mankude
Assignee: Hari Mankude
Priority: Minor
 Attachments: HADOOP-8202-1.patch, HADOOP-8202-2.patch, 
 HADOOP-8202.patch, HADOOP-8202.patch


 I was running testbackupnode and noticed that NNprotocol proxy was not being 
 closed. Talked with Suresh and he observed that most of the protocols do not 
 implement ProtocolTranslator and hence the logic in stopproxy() does not 
 work. Instead, since all of them are closeable, Suresh suggested that 
 closeable property should be used at close.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8202) stopproxy() is not closing the proxies correctly

2012-03-23 Thread Hari Mankude (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8202?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Hari Mankude updated HADOOP-8202:
-

Attachment: HADOOP-8202-2.patch

 stopproxy() is not closing the proxies correctly
 

 Key: HADOOP-8202
 URL: https://issues.apache.org/jira/browse/HADOOP-8202
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.24.0
Reporter: Hari Mankude
Assignee: Hari Mankude
Priority: Minor
 Attachments: HADOOP-8202-1.patch, HADOOP-8202-2.patch, 
 HADOOP-8202.patch, HADOOP-8202.patch


 I was running testbackupnode and noticed that NNprotocol proxy was not being 
 closed. Talked with Suresh and he observed that most of the protocols do not 
 implement ProtocolTranslator and hence the logic in stopproxy() does not 
 work. Instead, since all of them are closeable, Suresh suggested that 
 closeable property should be used at close.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8202) stopproxy() is not closing the proxies correctly

2012-03-23 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13237295#comment-13237295
 ] 

Aaron T. Myers commented on HADOOP-8202:


The latest patch looks good to me. +1 pending Jenkins.

 stopproxy() is not closing the proxies correctly
 

 Key: HADOOP-8202
 URL: https://issues.apache.org/jira/browse/HADOOP-8202
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.24.0
Reporter: Hari Mankude
Assignee: Hari Mankude
Priority: Minor
 Attachments: HADOOP-8202-1.patch, HADOOP-8202-2.patch, 
 HADOOP-8202.patch, HADOOP-8202.patch


 I was running testbackupnode and noticed that NNprotocol proxy was not being 
 closed. Talked with Suresh and he observed that most of the protocols do not 
 implement ProtocolTranslator and hence the logic in stopproxy() does not 
 work. Instead, since all of them are closeable, Suresh suggested that 
 closeable property should be used at close.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8202) stopproxy() is not closing the proxies correctly

2012-03-23 Thread Suresh Srinivas (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13237317#comment-13237317
 ] 

Suresh Srinivas commented on HADOOP-8202:
-

bq. HADOOP-7607
I take your point - I do not want to waste time on this.

bq. Sure, they're definitely just nits, but it's not like they're difficult to 
address. It almost certainly would take less time to address these comments 
than it does to argue about them.
In my reviews, I post this kind of comment as optional - as a suggestion. If 
the contributor wants to address it, then it is up to him. Perhaps marking such 
comments as optional/suggestions would be better than cycles of nits and time 
wasted on addressing them. We can make better use of the time.

 stopproxy() is not closing the proxies correctly
 

 Key: HADOOP-8202
 URL: https://issues.apache.org/jira/browse/HADOOP-8202
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.24.0
Reporter: Hari Mankude
Assignee: Hari Mankude
Priority: Minor
 Attachments: HADOOP-8202-1.patch, HADOOP-8202-2.patch, 
 HADOOP-8202.patch, HADOOP-8202.patch


 I was running testbackupnode and noticed that NNprotocol proxy was not being 
 closed. Talked with Suresh and he observed that most of the protocols do not 
 implement ProtocolTranslator and hence the logic in stopproxy() does not 
 work. Instead, since all of them are closeable, Suresh suggested that 
 closeable property should be used at close.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Created] (HADOOP-8205) Potential design improvements for ActiveStandbyElector API

2012-03-23 Thread Todd Lipcon (Created) (JIRA)
Potential design improvements for ActiveStandbyElector API
--

 Key: HADOOP-8205
 URL: https://issues.apache.org/jira/browse/HADOOP-8205
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ha
Reporter: Todd Lipcon


Bikas suggested some improvements to the API for ActiveStandbyElector in 
HADOOP-8163:
{quote}

I have a feeling that putting the fencing concept into the elector is diluting 
the distinctness between the elector and the failover controller. In my mind, 
the elector is a distributed leader election library that signals candidates 
about being made leader or standby. In the ideal world, where the HA service 
behaves perfectly and does not execute any instruction unless it is a leader, 
we only need the elector. But the world is not ideal and we can have errant 
leaders who need to be fenced, etc. Here is where the failover controller comes 
in. It manages the HA service by using the elector to do distributed leader 
selection and get those notifications passed on to the HAService. In addition, 
it guards service sanity by making sure that the signal is passed only when it 
is safe to do so. 
How about this slightly different alternative flow. The elector gets the 
leader lock. For all intents and purposes it is the new leader. It passes the 
signal to the failover controller with the breadcrumb of the last leader:
appClient->becomeActive(breadcrumb);
The failover controller now has to ensure that all previous masters are fenced 
before making its service the master. The breadcrumb is an optimization that 
lets it know that such an operation may not be necessary. If it is necessary, 
then it performs fencing. If fencing is successful, it calls
elector->becameActive() or elector->transitionedToActive()
at which point the elector can overwrite the breadcrumb with its own info. I 
haven't thought through whether this should be called before or after a 
successful call to HAService->transitionToActive(), but my gut feeling is for 
the former.
This keeps the notion of fencing inside the controller instead of being in both 
the elector and the controller.

Secondly, we are performing blocking calls in the ZKClient callback that 
happens on the ZK threads. It is advisable not to block ZK client threads for 
long. The create and delete methods might be ok, but I would try to move the 
fencing and transition-to-active operations away from the ZK thread; i.e. when 
the FailoverController is notified about becoming master, it returns from the 
call and then processes fencing/transitioning on some other thread/threadpool. 
The above flow allows for this.
{quote}
This JIRA is to further discuss/implement these suggestions.
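
A hedged interface sketch of the flow Bikas describes (the names are 
illustrative and do not claim to match the real ActiveStandbyElector or 
failover controller APIs):

{code}
// Illustrative shape of the proposed split of responsibilities.
interface ElectorCallback {
  // The elector won the leader lock; it hands over the old leader's
  // breadcrumb so the failover controller can decide whether fencing
  // is still needed before going active.
  void becomeActive(byte[] oldLeaderBreadcrumb);
  void becomeStandby();
}

interface Elector {
  // Called by the failover controller once fencing (if any) succeeds;
  // the elector then overwrites the breadcrumb with its own info.
  void transitionedToActive();
  void quitElection();
}
{code}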

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Updated] (HADOOP-8163) Improve ActiveStandbyElector to provide hooks for fencing old active

2012-03-23 Thread Todd Lipcon (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8163?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-8163:


   Resolution: Fixed
Fix Version/s: 0.24.0
   0.23.3
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed to 0.23 and trunk. Thanks for the reviews, Bikas and Aaron. I filed 
HADOOP-8205 to further discuss Bikas's suggestions.

 Improve ActiveStandbyElector to provide hooks for fencing old active
 

 Key: HADOOP-8163
 URL: https://issues.apache.org/jira/browse/HADOOP-8163
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ha
Affects Versions: 0.23.3, 0.24.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Fix For: 0.23.3, 0.24.0

 Attachments: hadoop-8163.txt, hadoop-8163.txt, hadoop-8163.txt, 
 hadoop-8163.txt, hadoop-8163.txt


 When a new node becomes active in an HA setup, it may sometimes have to take 
 fencing actions against the node that was formerly active. This JIRA extends 
 the ActiveStandbyElector which adds an extra non-ephemeral node into the ZK 
 directory, which acts as a second copy of the active node's information. 
 Then, if the active loses its ZK session, the next active to be elected may 
 easily locate the unfenced node to take the appropriate actions.
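
As a hedged illustration of the mechanism (not the committed code; the path and 
payload are made up), the extra non-ephemeral node can be written with the plain 
ZooKeeper API roughly like this:

{code}
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs.Ids;
import org.apache.zookeeper.ZooKeeper;

public class BreadcrumbWriter {
  // After winning the election (ephemeral lock node created), also record
  // who is active in a persistent "breadcrumb" node that survives session
  // loss, so the next active can locate and fence the old one.
  public static void writeBreadcrumb(ZooKeeper zk, String parent,
      byte[] activeInfo) throws KeeperException, InterruptedException {
    String path = parent + "/ActiveBreadCrumb";
    try {
      zk.create(path, activeInfo, Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
    } catch (KeeperException.NodeExistsException e) {
      // A previous active left its breadcrumb behind; overwrite it
      // (version -1 means "any version").
      zk.setData(path, activeInfo, -1);
    }
  }
}
{code}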

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8202) stopproxy() is not closing the proxies correctly

2012-03-23 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8202?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13237331#comment-13237331
 ] 

Hadoop QA commented on HADOOP-8202:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12519747/HADOOP-8202-2.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 3 new or modified tests.

+1 javadoc.  The javadoc tool did not generate any warning messages.

+1 javac.  The applied patch does not increase the total number of javac 
compiler warnings.

+1 eclipse:eclipse.  The patch built with eclipse:eclipse.

+1 findbugs.  The patch does not introduce any new Findbugs (version 1.3.9) 
warnings.

+1 release audit.  The applied patch does not increase the total number of 
release audit warnings.

-1 core tests.  The patch failed these unit tests:
  org.apache.hadoop.ha.TestHealthMonitor

+1 contrib tests.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/772//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/772//console

This message is automatically generated.

 stopproxy() is not closing the proxies correctly
 

 Key: HADOOP-8202
 URL: https://issues.apache.org/jira/browse/HADOOP-8202
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 0.24.0
Reporter: Hari Mankude
Assignee: Hari Mankude
Priority: Minor
 Attachments: HADOOP-8202-1.patch, HADOOP-8202-2.patch, 
 HADOOP-8202.patch, HADOOP-8202.patch


 I was running testbackupnode and noticed that NNprotocol proxy was not being 
 closed. Talked with Suresh and he observed that most of the protocols do not 
 implement ProtocolTranslator and hence the logic in stopproxy() does not 
 work. Instead, since all of them are closeable, Suresh suggested that 
 closeable property should be used at close.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8163) Improve ActiveStandbyElector to provide hooks for fencing old active

2012-03-23 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13237349#comment-13237349
 ] 

Hudson commented on HADOOP-8163:


Integrated in Hadoop-Common-trunk-Commit #1923 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1923/])
HADOOP-8163. Improve ActiveStandbyElector to provide hooks for fencing old 
active. Contributed by Todd Lipcon. (Revision 1304675)

 Result = SUCCESS
todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1304675
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestActiveStandbyElector.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestActiveStandbyElectorRealZK.java


 Improve ActiveStandbyElector to provide hooks for fencing old active
 

 Key: HADOOP-8163
 URL: https://issues.apache.org/jira/browse/HADOOP-8163
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ha
Affects Versions: 0.23.3, 0.24.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Fix For: 0.23.3, 0.24.0

 Attachments: hadoop-8163.txt, hadoop-8163.txt, hadoop-8163.txt, 
 hadoop-8163.txt, hadoop-8163.txt


 When a new node becomes active in an HA setup, it may sometimes have to take 
 fencing actions against the node that was formerly active. This JIRA extends 
 the ActiveStandbyElector which adds an extra non-ephemeral node into the ZK 
 directory, which acts as a second copy of the active node's information. 
 Then, if the active loses its ZK session, the next active to be elected may 
 easily locate the unfenced node to take the appropriate actions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8163) Improve ActiveStandbyElector to provide hooks for fencing old active

2012-03-23 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13237352#comment-13237352
 ] 

Hudson commented on HADOOP-8163:


Integrated in Hadoop-Hdfs-0.23-Commit #712 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Commit/712/])
HADOOP-8163. Improve ActiveStandbyElector to provide hooks for fencing old 
active. Contributed by Todd Lipcon. (Revision 1304676)

 Result = SUCCESS
todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1304676
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestActiveStandbyElector.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestActiveStandbyElectorRealZK.java


 Improve ActiveStandbyElector to provide hooks for fencing old active
 

 Key: HADOOP-8163
 URL: https://issues.apache.org/jira/browse/HADOOP-8163
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ha
Affects Versions: 0.23.3, 0.24.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Fix For: 0.23.3, 0.24.0

 Attachments: hadoop-8163.txt, hadoop-8163.txt, hadoop-8163.txt, 
 hadoop-8163.txt, hadoop-8163.txt


 When a new node becomes active in an HA setup, it may sometimes have to take 
 fencing actions against the node that was formerly active. This JIRA extends 
 the ActiveStandbyElector which adds an extra non-ephemeral node into the ZK 
 directory, which acts as a second copy of the active node's information. 
 Then, if the active loses its ZK session, the next active to be elected may 
 easily locate the unfenced node to take the appropriate actions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8163) Improve ActiveStandbyElector to provide hooks for fencing old active

2012-03-23 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13237359#comment-13237359
 ] 

Hudson commented on HADOOP-8163:


Integrated in Hadoop-Hdfs-trunk-Commit #1997 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/1997/])
HADOOP-8163. Improve ActiveStandbyElector to provide hooks for fencing old 
active. Contributed by Todd Lipcon. (Revision 1304675)

 Result = SUCCESS
todd : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1304675
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ha/ActiveStandbyElector.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestActiveStandbyElector.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/ha/TestActiveStandbyElectorRealZK.java


 Improve ActiveStandbyElector to provide hooks for fencing old active
 

 Key: HADOOP-8163
 URL: https://issues.apache.org/jira/browse/HADOOP-8163
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ha
Affects Versions: 0.23.3, 0.24.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Fix For: 0.23.3, 0.24.0

 Attachments: hadoop-8163.txt, hadoop-8163.txt, hadoop-8163.txt, 
 hadoop-8163.txt, hadoop-8163.txt


 When a new node becomes active in an HA setup, it may sometimes have to take 
 fencing actions against the node that was formerly active. This JIRA extends 
 the ActiveStandbyElector which adds an extra non-ephemeral node into the ZK 
 directory, which acts as a second copy of the active node's information. 
 Then, if the active loses its ZK session, the next active to be elected may 
 easily locate the unfenced node to take the appropriate actions.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira



