[jira] [Commented] (HADOOP-10786) Fix UGI#reloginFromKeytab on Java 8

2014-11-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14204684#comment-14204684
 ] 

Hudson commented on HADOOP-10786:
-

SUCCESS: Integrated in Hadoop-Yarn-trunk #739 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/739/])
HADOOP-10786. Fix UGI#reloginFromKeytab on Java 8. Contributed by Stephen Chu. 
(wheat9: rev a37a993453c02048a618f71b5b9bc63b5a44dbf6)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
HADOOP-10786. Moved to hadoop-2.7.X. (acmurthy: rev 
14b87b70a8dfc03801dcf5f33caa7fd2cc589840)
* hadoop-common-project/hadoop-common/CHANGES.txt


 Fix UGI#reloginFromKeytab on Java 8
 ---

 Key: HADOOP-10786
 URL: https://issues.apache.org/jira/browse/HADOOP-10786
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.6.0
Reporter: Tobi Vollebregt
Assignee: Stephen Chu
 Fix For: 2.7.0

 Attachments: HADOOP-10786.2.patch, HADOOP-10786.3.patch, 
 HADOOP-10786.3.patch, HADOOP-10786.4.patch, HADOOP-10786.5.patch, 
 HADOOP-10786.patch


 Krb5LoginModule changed subtly in Java 8: in particular, if useKeyTab and 
 storeKey are specified, then only a KeyTab object is added to the Subject's 
 private credentials, whereas on Java <= 7 both a KeyTab and some number of 
 KerberosKey objects were added.
 The UGI constructor checks whether a keytab was used to log in by looking 
 for KerberosKey objects in the Subject's private credentials. If any are 
 present, isKeyTab is set to true; otherwise it is set to false.
 Thus, on Java 8 isKeyTab is always false given the current UGI 
 implementation, which makes UGI#reloginFromKeytab fail silently.
 The attached patch checks for a KeyTab object on the Subject instead of a 
 KerberosKey object. This fixes relogins from Kerberos keytabs on Oracle 
 Java 8, and works on Oracle Java 7 as well.
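The failing check and the fix described above can be sketched directly against the JDK's Subject API. This is an illustrative standalone sketch, not the actual UserGroupInformation code; the class and method names are made up for the example:

```java
import java.io.File;
import javax.security.auth.Subject;
import javax.security.auth.kerberos.KerberosKey;
import javax.security.auth.kerberos.KeyTab;

public class KeytabCheck {
    // Pre-patch UGI behavior: infer "logged in from a keytab" from KerberosKey
    // entries. On Java 8, Krb5LoginModule no longer stores these, so this
    // always returns false there.
    static boolean hasKerberosKeys(Subject subject) {
        return !subject.getPrivateCredentials(KerberosKey.class).isEmpty();
    }

    // Patched behavior: look for the KeyTab object itself, which both
    // Java 7 and Java 8 store in the Subject's private credentials.
    static boolean hasKeyTab(Subject subject) {
        return !subject.getPrivateCredentials(KeyTab.class).isEmpty();
    }

    public static void main(String[] args) {
        // Simulate a Java 8 keytab login: only a KeyTab in the credentials.
        // KeyTab.getInstance only records the path; the file is read lazily,
        // so no real keytab file is needed for this demonstration.
        Subject subject = new Subject();
        subject.getPrivateCredentials().add(KeyTab.getInstance(new File("user.keytab")));

        System.out.println("old check (KerberosKey): " + hasKerberosKeys(subject)); // false
        System.out.println("new check (KeyTab):      " + hasKeyTab(subject));       // true
    }
}
```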



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10786) Fix UGI#reloginFromKeytab on Java 8

2014-11-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14204812#comment-14204812
 ] 

Hudson commented on HADOOP-10786:
-

FAILURE: Integrated in Hadoop-Hdfs-trunk #1929 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1929/])
HADOOP-10786. Fix UGI#reloginFromKeytab on Java 8. Contributed by Stephen Chu. 
(wheat9: rev a37a993453c02048a618f71b5b9bc63b5a44dbf6)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
HADOOP-10786. Moved to hadoop-2.7.X. (acmurthy: rev 
14b87b70a8dfc03801dcf5f33caa7fd2cc589840)
* hadoop-common-project/hadoop-common/CHANGES.txt




[jira] [Created] (HADOOP-11288) yarn.resourcemanager.scheduler.class wrongly set in yarn-default.xml documentation

2014-11-10 Thread DeepakVohra (JIRA)
DeepakVohra created HADOOP-11288:


 Summary: yarn.resourcemanager.scheduler.class wrongly set in 
yarn-default.xml documentation
 Key: HADOOP-11288
 URL: https://issues.apache.org/jira/browse/HADOOP-11288
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.5.0
Reporter: DeepakVohra


The yarn.resourcemanager.scheduler.class property is wrongly set to 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.
 CapacityScheduler is not even supported. It should be 
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler. 





[jira] [Commented] (HADOOP-11288) yarn.resourcemanager.scheduler.class wrongly set in yarn-default.xml documentation

2014-11-10 Thread DeepakVohra (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14204864#comment-14204864
 ] 

DeepakVohra commented on HADOOP-11288:
--

http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-common/yarn-default.xml



[jira] [Commented] (HADOOP-10786) Fix UGI#reloginFromKeytab on Java 8

2014-11-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14204870#comment-14204870
 ] 

Hudson commented on HADOOP-10786:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1953 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1953/])
HADOOP-10786. Fix UGI#reloginFromKeytab on Java 8. Contributed by Stephen Chu. 
(wheat9: rev a37a993453c02048a618f71b5b9bc63b5a44dbf6)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/UserGroupInformation.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestUGILoginFromKeytab.java
HADOOP-10786. Moved to hadoop-2.7.X. (acmurthy: rev 
14b87b70a8dfc03801dcf5f33caa7fd2cc589840)
* hadoop-common-project/hadoop-common/CHANGES.txt




[jira] [Comment Edited] (HADOOP-11288) yarn.resourcemanager.scheduler.class wrongly set in yarn-default.xml documentation

2014-11-10 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14204893#comment-14204893
 ] 

Jason Lowe edited comment on HADOOP-11288 at 11/10/14 3:32 PM:
---

The CapacityScheduler is very much supported and is actively being developed.  
Its selection as the default scheduler is intentional; see YARN-137.


was (Author: jlowe):
The CapacityScheduler is very much supported, and is actively being developed.  
Its setting as the default scheduler is intentional, see YARN-137.



[jira] [Resolved] (HADOOP-11288) yarn.resourcemanager.scheduler.class wrongly set in yarn-default.xml documentation

2014-11-10 Thread Jason Lowe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Lowe resolved HADOOP-11288.
-
Resolution: Invalid

The CapacityScheduler is very much supported, and is actively being developed.  
Its setting as the default scheduler is intentional; see YARN-137.



[jira] [Commented] (HADOOP-8009) Create hadoop-client and hadoop-minicluster artifacts for downstream projects

2014-11-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14204910#comment-14204910
 ] 

ASF GitHub Bot commented on HADOOP-8009:


Github user rmetzger commented on a diff in the pull request:

https://github.com/apache/incubator-flink/pull/172#discussion_r20089597
  
--- Diff: flink-addons/flink-hbase/pom.xml ---
@@ -116,20 +109,74 @@ under the License.
 		</exclusions>
 	</dependency>
 </dependencies>
-<!-- <dependency>
-		<groupId>org.apache.hbase</groupId>
-		<artifactId>hbase-server</artifactId>
-		<version>${hbase.version}</version>
-	</dependency>
-	<dependency>
-		<groupId>org.apache.hbase</groupId>
-		<artifactId>hbase-client</artifactId>
-		<version>${hbase.version}</version>
-	</dependency> -->

-<!-- hadoop-client is available for yarn and non-yarn, so there is no need
-	to use profiles See ticket https://issues.apache.org/jira/browse/HADOOP-8009
-	for description of hadoop-clients -->
+<profiles>
+	<profile>
+		<id>hadoop-1</id>
+		<activation>
+			<property>
+				<!-- Please do not remove the 'hadoop1' comment. See ./tools/generate_specific_pom.sh -->
+				<!--hadoop1 -->
+				<name>!hadoop.profile</name>
+			</property>
+		</activation>
+		<properties>
+			<hbase.version>${hbase.hadoop1.version}</hbase.version>
+		</properties>
+	</profile>
+	<profile>
+		<id>hadoop-2</id>
+		<activation>
+			<property>
+				<!-- Please do not remove the 'hadoop1' comment. See ./tools/generate_specific_pom.sh -->
+				<!--hadoop2 -->
+				<name>hadoop.profile</name>
+				<value>2</value>
+			</property>
+		</activation>
+		<properties>
+			<hbase.version>${hbase.hadoop2.version}</hbase.version>
+		</properties>
+		<dependencies>
+			<!-- Force hadoop-common dependency -->
+			<dependency>
+				<groupId>org.apache.hadoop</groupId>
+				<artifactId>hadoop-common</artifactId>
+			</dependency>
+		</dependencies>
+	</profile>
+	<profile>
+		<id>cdh5.1.3</id>
--- End diff --

Why do we need this additional profile?
Can't users select the hadoop2 profile and then set the specific hadoop and 
hbase versions through properties, like `-Dhbase.version=0.98.1-cdh5` ?


 Create hadoop-client and hadoop-minicluster artifacts for downstream projects 
 --

 Key: HADOOP-8009
 URL: https://issues.apache.org/jira/browse/HADOOP-8009
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 1.0.0, 0.22.0, 0.23.0, 0.23.1, 0.24.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Critical
 Fix For: 1.0.1, 0.23.1

 Attachments: HADOOP-8009-branch-0_22.patch, 
 HADOOP-8009-branch-1-add.patch, HADOOP-8009-branch-1.patch, 
 HADOOP-8009-existing-releases.patch, HADOOP-8009.patch


 Using Hadoop from projects like Pig/Hive/Sqoop/Flume/Oozie or any in-house 
 system that interacts with Hadoop is quite challenging for the following 
 reasons:
 * *Different versions of Hadoop produce different artifacts:* Before Hadoop 
 0.23 there was a single artifact hadoop-core, starting with Hadoop 0.23 there 
 are several (common, hdfs, mapred*, yarn*)
 * *There are no 'client' artifacts:* Current artifacts include all JARs 
 needed to run the services, thus bringing into clients several JARs that are 
 not used for job submission/monitoring (servlet, jsp, tomcat, jersey, etc.)
 * *Doing testing on the client side is also quite challenging as more 
 artifacts have to be included than the dependencies define:* for example, the 
 history-server artifact has to be explicitly included. If using Hadoop 1 
 artifacts, jersey-server has to be explicitly included.
 * *3rd party dependencies change in Hadoop from 

[jira] [Commented] (HADOOP-11288) yarn.resourcemanager.scheduler.class wrongly set in yarn-default.xml documentation

2014-11-10 Thread DeepakVohra (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14204921#comment-14204921
 ] 

DeepakVohra commented on HADOOP-11288:
--

Why the Note?
Cloudera does not support the Capacity Scheduler in YARN.

http://www.cloudera.com/content/cloudera/en/documentation/cdh5/v5-0-0/CDH5-Installation-Guide/cdh5ig_mapreduce_to_yarn_migrate.html?scroll=concept_zzt_smy_xl_unique_2



[jira] [Commented] (HADOOP-8009) Create hadoop-client and hadoop-minicluster artifacts for downstream projects

2014-11-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14204923#comment-14204923
 ] 

ASF GitHub Bot commented on HADOOP-8009:


Github user fpompermaier commented on a diff in the pull request:

https://github.com/apache/incubator-flink/pull/172#discussion_r20090375
  
--- Diff: flink-addons/flink-hbase/pom.xml ---
@@ -116,20 +109,74 @@ under the License.
(same hunk as quoted in the earlier review comment, ending at:)
+	<profile>
+		<id>cdh5.1.3</id>
--- End diff --

Unfortunately Cloudera HBase 0.98.1-cdh5.1.3 requires hadoop-common 
2.3.0-cdh5.1.3, which in turn requires hadoop-core 2.3.0-mr1-cdh5.1.3. Without 
a dedicated profile for Cloudera it is not possible to manage this dependency 
properly.
I don't know why Cloudera did this...



[jira] [Commented] (HADOOP-11288) yarn.resourcemanager.scheduler.class wrongly set in yarn-default.xml documentation

2014-11-10 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14204937#comment-14204937
 ] 

Jason Lowe commented on HADOOP-11288:
-

That's something you'll need to bring up with Cloudera.  They are free to 
choose not to support the CapacityScheduler.  However, that decision does not 
make that scheduler an invalid setting in Apache Hadoop.
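For reference, the property under discussion is a per-cluster setting: a deployment that prefers the FairScheduler can override it in its own yarn-site.xml without any change to yarn-default.xml. A minimal sketch, using the class name already quoted in this thread:

```xml
<!-- yarn-site.xml: site-level override of the scheduler chosen by
     yarn-default.xml; no change to Apache Hadoop itself is needed. -->
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
</property>
```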



[jira] [Commented] (HADOOP-11288) yarn.resourcemanager.scheduler.class wrongly set in yarn-default.xml documentation

2014-11-10 Thread DeepakVohra (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14204942#comment-14204942
 ] 

DeepakVohra commented on HADOOP-11288:
--

Thanks Jason.



[jira] [Commented] (HADOOP-11288) yarn.resourcemanager.scheduler.class wrongly set in yarn-default.xml documentation

2014-11-10 Thread DeepakVohra (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14204944#comment-14204944
 ] 

DeepakVohra commented on HADOOP-11288:
--

The yarn.resourcemanager.scheduler.class property is set to
org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler
in CDH5 yarn-default.xml.
http://archive.cloudera.com/cdh5/cdh/5/hadoop/hadoop-yarn/hadoop-yarn-common/yarn-default.xml



[jira] [Commented] (HADOOP-11288) yarn.resourcemanager.scheduler.class wrongly set in yarn-default.xml documentation

2014-11-10 Thread DeepakVohra (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14204961#comment-14204961
 ] 

DeepakVohra commented on HADOOP-11288:
--

Though vendor implementations may choose a different default, why do some of 
the most commonly used implementations use a default other than the 
CapacityScheduler?

MapR has FIFO as default.
http://doc.mapr.com/display/MapR/Job+Scheduling




[jira] [Commented] (HADOOP-8009) Create hadoop-client and hadoop-minicluster artifacts for downstream projects

2014-11-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14204962#comment-14204962
 ] 

ASF GitHub Bot commented on HADOOP-8009:


Github user rmetzger commented on a diff in the pull request:

https://github.com/apache/incubator-flink/pull/172#discussion_r20093842
  
--- Diff: flink-addons/flink-hbase/pom.xml ---
@@ -116,20 +109,74 @@ under the License.
(same hunk as quoted in the earlier review comment, ending at:)
+	<profile>
+		<id>cdh5.1.3</id>
--- End diff --

What I dislike about the profile is that it's very specific about the 
version. We basically need to manually maintain the CDH versions and force 
users into specific CDH versions.

Would it be possible to add the `dependencyManagement` section with the 
hadoop-core dependency into the `hadoop2` profile and set the 
hadoop.core.version to hadoop-2.2.0 by default? 
That way users could specify their own Hadoop version if they want to build 
Flink against a particular CDH build.
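The dependencyManagement idea above might look roughly like this inside the hadoop-2 profile. This is a sketch only: the property name hadoop.core.version and the 2.2.0 default are taken from the comment itself, not from the actual Flink pom:

```xml
<profile>
	<id>hadoop-2</id>
	<properties>
		<!-- overridable on the command line, e.g.
		     mvn install -Dhadoop.core.version=2.3.0-mr1-cdh5.1.3 -->
		<hadoop.core.version>2.2.0</hadoop.core.version>
	</properties>
	<dependencyManagement>
		<dependencies>
			<dependency>
				<groupId>org.apache.hadoop</groupId>
				<artifactId>hadoop-core</artifactId>
				<version>${hadoop.core.version}</version>
			</dependency>
		</dependencies>
	</dependencyManagement>
</profile>
```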



[jira] [Created] (HADOOP-11289) Fix typo in RpcInfo log message

2014-11-10 Thread Charles Lamb (JIRA)
Charles Lamb created HADOOP-11289:
-

 Summary: Fix typo in RpcInfo log message
 Key: HADOOP-11289
 URL: https://issues.apache.org/jira/browse/HADOOP-11289
 Project: Hadoop Common
  Issue Type: Bug
  Components: net
Affects Versions: 2.7.0
Reporter: Charles Lamb
Assignee: Charles Lamb
Priority: Trivial


From RpcUtil.java:

LOG.info("Malfromed RPC request from " + e.getRemoteAddress());

s/Malfromed/malformed/





[jira] [Commented] (HADOOP-8009) Create hadoop-client and hadoop-minicluster artifacts for downstream projects

2014-11-10 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14204975#comment-14204975
 ] 

ASF GitHub Bot commented on HADOOP-8009:


Github user fpompermaier commented on a diff in the pull request:

https://github.com/apache/incubator-flink/pull/172#discussion_r20094591
  
--- Diff: flink-addons/flink-hbase/pom.xml ---
@@ -116,20 +109,74 @@ under the License.
(same hunk as quoted in the earlier review comment, ending at:)
+	<profile>
+		<id>cdh5.1.3</id>
--- End diff --

Yes, you could, but then you would have to introduce the hadoop.core.version 
variable in the root pom as well.


 Create hadoop-client and hadoop-minicluster artifacts for downstream projects 
 --

 Key: HADOOP-8009
 URL: https://issues.apache.org/jira/browse/HADOOP-8009
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 1.0.0, 0.22.0, 0.23.0, 0.23.1, 0.24.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Critical
 Fix For: 1.0.1, 0.23.1

 Attachments: HADOOP-8009-branch-0_22.patch, 
 HADOOP-8009-branch-1-add.patch, HADOOP-8009-branch-1.patch, 
 HADOOP-8009-existing-releases.patch, HADOOP-8009.patch


 Using Hadoop from projects like Pig/Hive/Sqoop/Flume/Oozie or any in-house 
 system that interacts with Hadoop is quite challenging for the following 
 reasons:
 * *Different versions of Hadoop produce different artifacts:* Before Hadoop 
 0.23 there was a single artifact hadoop-core, starting with Hadoop 0.23 there 
 are several (common, hdfs, mapred*, yarn*)
 * *There are no 'client' artifacts:* Current artifacts include all JARs 
 needed to run the services, thus bringing into clients several JARs that are 
 not used for job submission/monitoring (servlet, jsp, tomcat, jersey, etc.)
 * *Doing testing on the client side is also quite challenging as more 
 artifacts have to be included than the dependencies define:* for example, the 
 history-server artifact has to be explicitly included. If using Hadoop 1 
 artifacts, jersey-server has to be explicitly included.
 * *3rd party dependencies change in Hadoop from version to version:* This 
 makes things complicated for projects that have to deal with 

[jira] [Updated] (HADOOP-11289) Fix typo in RpcUtil log message

2014-11-10 Thread Charles Lamb (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Lamb updated HADOOP-11289:
--
Summary: Fix typo in RpcUtil log message  (was: Fix typo in RpcInfo log 
message)

 Fix typo in RpcUtil log message
 ---

 Key: HADOOP-11289
 URL: https://issues.apache.org/jira/browse/HADOOP-11289
 Project: Hadoop Common
  Issue Type: Bug
  Components: net
Affects Versions: 2.7.0
Reporter: Charles Lamb
Assignee: Charles Lamb
Priority: Trivial

 From RpcUtil.java:
 LOG.info("Malfromed RPC request from " + e.getRemoteAddress());
 s/Malfromed/malformed/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11288) yarn.resourcemanager.scheduler.class wrongly set in yarn-default.xml documentation

2014-11-10 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14204990#comment-14204990
 ] 

Jason Lowe commented on HADOOP-11288:
-

Different vendors may have different goals for a default scheduler or for which 
schedulers they want to support. Hortonworks uses the CapacityScheduler, for 
example. Again, these are vendor decisions, not Apache Hadoop decisions.

If someone wants to propose changing the default scheduler in Apache Hadoop to 
the FairScheduler, and has good reasons to do so, then that's something we can 
discuss on a separate JIRA. I am just pointing out that the CapacityScheduler is 
supported by the Apache Hadoop community and that having it as the 
default is not invalid.
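For reference, picking a scheduler is a single-property change in yarn-site.xml; a minimal sketch using the class names from this thread (this mirrors the yarn-default.xml entry under discussion, not a recommendation of either scheduler):

```xml
<!-- yarn-site.xml: select the ResourceManager scheduler explicitly. -->
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <!-- Apache Hadoop's default; replace with
       org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler
       to use the FairScheduler instead. -->
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
</property>
```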

 yarn.resourcemanager.scheduler.class wrongly set in yarn-default.xml 
 documentation
 --

 Key: HADOOP-11288
 URL: https://issues.apache.org/jira/browse/HADOOP-11288
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.5.0
Reporter: DeepakVohra

 The yarn.resourcemanager.scheduler.class property is wrongly set to 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.
  CapacityScheduler is not even supported. Should be 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11289) Fix typo in RpcUtil log message

2014-11-10 Thread Charles Lamb (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Lamb updated HADOOP-11289:
--
Attachment: HADOOP-11289.001.patch

Patch fixes the typo.

 Fix typo in RpcUtil log message
 ---

 Key: HADOOP-11289
 URL: https://issues.apache.org/jira/browse/HADOOP-11289
 Project: Hadoop Common
  Issue Type: Bug
  Components: net
Affects Versions: 2.7.0
Reporter: Charles Lamb
Assignee: Charles Lamb
Priority: Trivial
 Attachments: HADOOP-11289.001.patch


 From RpcUtil.java:
 LOG.info("Malfromed RPC request from " + e.getRemoteAddress());
 s/Malfromed/malformed/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11289) Fix typo in RpcUtil log message

2014-11-10 Thread Charles Lamb (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Charles Lamb updated HADOOP-11289:
--
Status: Patch Available  (was: Open)

 Fix typo in RpcUtil log message
 ---

 Key: HADOOP-11289
 URL: https://issues.apache.org/jira/browse/HADOOP-11289
 Project: Hadoop Common
  Issue Type: Bug
  Components: net
Affects Versions: 2.7.0
Reporter: Charles Lamb
Assignee: Charles Lamb
Priority: Trivial
 Attachments: HADOOP-11289.001.patch


 From RpcUtil.java:
 LOG.info("Malfromed RPC request from " + e.getRemoteAddress());
 s/Malfromed/malformed/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11288) yarn.resourcemanager.scheduler.class wrongly set in yarn-default.xml documentation

2014-11-10 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11288?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14205004#comment-14205004
 ] 

Steve Loughran commented on HADOOP-11288:
-

That's a decision made by those organisations, probably based on which one they 
have the most experience working with.

The ASF cares about the ASF source releases, which have the CapacityScheduler as 
the default. It is used in production in some of the largest Hadoop clusters. 
Others use the FairScheduler: that's their choice.

http://wiki.apache.org/hadoop/InvalidJiraIssues

 yarn.resourcemanager.scheduler.class wrongly set in yarn-default.xml 
 documentation
 --

 Key: HADOOP-11288
 URL: https://issues.apache.org/jira/browse/HADOOP-11288
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.5.0
Reporter: DeepakVohra

 The yarn.resourcemanager.scheduler.class property is wrongly set to 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler.
  CapacityScheduler is not even supported. Should be 
 org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11289) Fix typo in RpcUtil log message

2014-11-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14205060#comment-14205060
 ] 

Hadoop QA commented on HADOOP-11289:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12680603/HADOOP-11289.001.patch
  against trunk revision ab30d51.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-nfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5055//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5055//console

This message is automatically generated.

 Fix typo in RpcUtil log message
 ---

 Key: HADOOP-11289
 URL: https://issues.apache.org/jira/browse/HADOOP-11289
 Project: Hadoop Common
  Issue Type: Bug
  Components: net
Affects Versions: 2.7.0
Reporter: Charles Lamb
Assignee: Charles Lamb
Priority: Trivial
 Attachments: HADOOP-11289.001.patch


 From RpcUtil.java:
 LOG.info("Malfromed RPC request from " + e.getRemoteAddress());
 s/Malfromed/malformed/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11289) Fix typo in RpcUtil log message

2014-11-10 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14205144#comment-14205144
 ] 

Haohui Mai commented on HADOOP-11289:
-

+1. I'll commit it shortly.

 Fix typo in RpcUtil log message
 ---

 Key: HADOOP-11289
 URL: https://issues.apache.org/jira/browse/HADOOP-11289
 Project: Hadoop Common
  Issue Type: Bug
  Components: net
Affects Versions: 2.7.0
Reporter: Charles Lamb
Assignee: Charles Lamb
Priority: Trivial
 Attachments: HADOOP-11289.001.patch


 From RpcUtil.java:
 LOG.info("Malfromed RPC request from " + e.getRemoteAddress());
 s/Malfromed/malformed/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11252) RPC client write does not time out by default

2014-11-10 Thread Ming Ma (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11252?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14205152#comment-14205152
 ] 

Ming Ma commented on HADOOP-11252:
--

Should we use a name other than {{ipc.client.write.timeout}}, given that it can 
cover scenarios beyond RPC request write timeouts?

* HDFS-4858 covers the case where the RPC server is unplugged before the RPC 
call is delivered to the RPC server's TCP stack. That is where the write 
timeout applies.
* The RPC request has been delivered to the RPC server, but the client doesn't 
get any response. That could happen as in YARN-2714, where the RPC server 
swallows an OutOfMemoryError and just drops the response, or when the RPC 
request is still in the RPC server's call queue as the server is unplugged.

It seems like we want to define some end-to-end timeout, measured between the 
time the RPC client writes the RPC call to the client TCP stack and the time it 
reads the RPC response from the client TCP stack.
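The fallback being proposed, defaulting the RPC timeout to the ping interval instead of 0, can be sketched as follows. This is illustrative only: the class, the plain `Map` standing in for Hadoop's `Configuration`, and the default value are assumptions, not the actual Hadoop code.

```java
import java.util.HashMap;
import java.util.Map;

public class RpcTimeoutDefaults {
    // Hypothetical stand-ins for Hadoop's Configuration lookup.
    static final String PING_INTERVAL_KEY = "ipc.ping.interval";
    static final int PING_INTERVAL_DEFAULT = 60000; // ms, illustrative

    // Honor an explicit positive timeout; otherwise fall back to the ping
    // interval rather than 0, since 0 disables the socket timeout and leaves
    // writes to TCP-level retry (which can take 15-30 minutes).
    static int effectiveTimeout(int requestedTimeout, Map<String, Integer> conf) {
        if (requestedTimeout > 0) {
            return requestedTimeout;
        }
        return conf.getOrDefault(PING_INTERVAL_KEY, PING_INTERVAL_DEFAULT);
    }

    public static void main(String[] args) {
        Map<String, Integer> conf = new HashMap<>();
        conf.put(PING_INTERVAL_KEY, 30000);
        System.out.println(effectiveTimeout(0, conf));    // ping-interval fallback
        System.out.println(effectiveTimeout(5000, conf)); // explicit value wins
    }
}
```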

 RPC client write does not time out by default
 -

 Key: HADOOP-11252
 URL: https://issues.apache.org/jira/browse/HADOOP-11252
 Project: Hadoop Common
  Issue Type: Bug
  Components: ipc
Affects Versions: 2.5.0
Reporter: Wilfred Spiegelenburg
Priority: Critical

 The RPC client has a default timeout of 0 when no timeout is passed in. 
 This means that the network connection created will not time out when used to 
 write data. The issue has shown up in YARN-2578 and HDFS-4858. Write timeouts 
 then fall back to TCP-level retry (configured via tcp_retries2), with timeouts 
 between 15 and 30 minutes, which is too long for a default behaviour.
 Using 0 as the default value for the timeout is incorrect. We should use a sane 
 value, and the ipc.ping.interval configuration value is a logical choice for 
 it. The default behaviour should be changed from 0 to the value read for the 
 ping interval from the Configuration.
 Fixing it in common makes more sense than finding and changing all the other 
 points in the code that do not pass in a timeout.
 Offending code lines:
 https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java#L488
 and 
 https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/ipc/RPC.java#L350



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11289) Fix typo in RpcUtil log message

2014-11-10 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11289?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HADOOP-11289:

   Resolution: Fixed
Fix Version/s: 2.7.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk and branch-2. Thanks [~clamb] for the 
contribution.

 Fix typo in RpcUtil log message
 ---

 Key: HADOOP-11289
 URL: https://issues.apache.org/jira/browse/HADOOP-11289
 Project: Hadoop Common
  Issue Type: Bug
  Components: net
Affects Versions: 2.7.0
Reporter: Charles Lamb
Assignee: Charles Lamb
Priority: Trivial
 Fix For: 2.7.0

 Attachments: HADOOP-11289.001.patch


 From RpcUtil.java:
 LOG.info("Malfromed RPC request from " + e.getRemoteAddress());
 s/Malfromed/malformed/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11290) Typo on web page http://hadoop.apache.org/docs/r2.3.0/hadoop-project-dist/hadoop-common/NativeLibraries.html

2014-11-10 Thread Jason Pyeron (JIRA)
Jason Pyeron created HADOOP-11290:
-

 Summary: Typo on web page 
http://hadoop.apache.org/docs/r2.3.0/hadoop-project-dist/hadoop-common/NativeLibraries.html
 Key: HADOOP-11290
 URL: https://issues.apache.org/jira/browse/HADOOP-11290
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.3.0
Reporter: Jason Pyeron
Priority: Minor


Once you installed the prerequisite packages use the standard hadoop pom.xml 
file and pass along the native flag to build the native hadoop library:

   $ mvn package -Pdist,native -Dskiptests -Dtar


-Dskiptests

should be 

-DskipTests



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11289) Fix typo in RpcUtil log message

2014-11-10 Thread Charles Lamb (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14205180#comment-14205180
 ] 

Charles Lamb commented on HADOOP-11289:
---

Thank you [~wheat9] for the quick review and commit. I should mention, for the 
record, that the patch doesn't need any tests since it is a log-message typo fix.


 Fix typo in RpcUtil log message
 ---

 Key: HADOOP-11289
 URL: https://issues.apache.org/jira/browse/HADOOP-11289
 Project: Hadoop Common
  Issue Type: Bug
  Components: net
Affects Versions: 2.7.0
Reporter: Charles Lamb
Assignee: Charles Lamb
Priority: Trivial
 Fix For: 2.7.0

 Attachments: HADOOP-11289.001.patch


 From RpcUtil.java:
 LOG.info("Malfromed RPC request from " + e.getRemoteAddress());
 s/Malfromed/malformed/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11289) Fix typo in RpcUtil log message

2014-11-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11289?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14205182#comment-14205182
 ] 

Hudson commented on HADOOP-11289:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #6504 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6504/])
HADOOP-11289. Fix typo in RpcUtil log message. Contributed by Charles Lamb. 
(wheat9: rev eace218411a7733abb8dfca6aaa4eb0557e25e0c)
* 
hadoop-common-project/hadoop-nfs/src/main/java/org/apache/hadoop/oncrpc/RpcUtil.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 Fix typo in RpcUtil log message
 ---

 Key: HADOOP-11289
 URL: https://issues.apache.org/jira/browse/HADOOP-11289
 Project: Hadoop Common
  Issue Type: Bug
  Components: net
Affects Versions: 2.7.0
Reporter: Charles Lamb
Assignee: Charles Lamb
Priority: Trivial
 Fix For: 2.7.0

 Attachments: HADOOP-11289.001.patch


 From RpcUtil.java:
 LOG.info("Malfromed RPC request from " + e.getRemoteAddress());
 s/Malfromed/malformed/



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11257) Update hadoop jar documentation to warn against using it for launching yarn jars

2014-11-10 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-11257:
--
Summary: Update hadoop jar documentation to warn against using it for 
launching yarn jars  (was: Deprecate 'hadoop jar')

 Update hadoop jar documentation to warn against using it for launching yarn 
 jars
 --

 Key: HADOOP-11257
 URL: https://issues.apache.org/jira/browse/HADOOP-11257
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Allen Wittenauer
Assignee: Masatake Iwasaki
 Attachments: HADOOP-11257.1.patch, HADOOP-11257.1.patch, 
 HADOOP-11257.2.patch, HADOOP-11257.3.patch


 Given that 'hadoop jar' and 'yarn jar' work differently, we should mark 
 'hadoop jar' as deprecated in 2.7 and remove it in trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9576) Make NetUtils.wrapException throw EOFException instead of wrapping it as IOException

2014-11-10 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9576:
---
Attachment: HADOOP-9576-003.patch

updated patch which 
# follows Jian He's exception
# adds a test
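The behavior the patch targets, passing an EOFException through rather than wrapping it, can be sketched like this. The class and method names are illustrative, not the actual NetUtils.wrapException code or message format:

```java
import java.io.EOFException;
import java.io.IOException;

public class WrapExceptionSketch {
    // Sketch of the wrapping policy under discussion: EOFException is
    // returned as-is so callers can catch it and treat a mid-stream
    // connection loss specially; other IOExceptions are wrapped with
    // the remote endpoint's details.
    static IOException wrap(String destHost, int destPort, IOException e) {
        if (e instanceof EOFException) {
            return e; // preserve the concrete type for explicit handling
        }
        return new IOException(
                "Call to " + destHost + ":" + destPort + " failed: " + e, e);
    }

    public static void main(String[] args) {
        IOException eof = new EOFException("stream ended");
        // Same instance comes back for EOF; other errors get wrapped.
        System.out.println(wrap("nn1", 8020, eof) == eof);
        System.out.println(
                wrap("nn1", 8020, new IOException("boom")) instanceof EOFException);
    }
}
```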

 Make NetUtils.wrapException throw EOFException instead of wrapping it as 
 IOException
 

 Key: HADOOP-9576
 URL: https://issues.apache.org/jira/browse/HADOOP-9576
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.6.0
Reporter: Jian He
Assignee: Steve Loughran
 Attachments: HADOOP-9576-001.patch, HADOOP-9576-003.patch


 In case of EOFException, NetUtils currently wraps it as an IOException. We may 
 want to throw the EOFException as-is: since an EOFException can happen when the 
 connection is lost mid-stream, the client may want to handle such an exception 
 explicitly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9576) Make NetUtils.wrapException throw EOFException instead of wrapping it as IOException

2014-11-10 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9576:
---
Affects Version/s: (was: 1.2.1)
   2.6.0
   Status: Open  (was: Patch Available)

 Make NetUtils.wrapException throw EOFException instead of wrapping it as 
 IOException
 

 Key: HADOOP-9576
 URL: https://issues.apache.org/jira/browse/HADOOP-9576
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.6.0
Reporter: Jian He
Assignee: Steve Loughran
 Attachments: HADOOP-9576-001.patch, HADOOP-9576-003.patch


 In case of EOFException, NetUtils currently wraps it as an IOException. We may 
 want to throw the EOFException as-is: since an EOFException can happen when the 
 connection is lost mid-stream, the client may want to handle such an exception 
 explicitly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11257) Deprecate 'hadoop jar'

2014-11-10 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14205207#comment-14205207
 ] 

Colin Patrick McCabe commented on HADOOP-11257:
---

+1.  [~aw], any comments, or should we commit this?

 Deprecate 'hadoop jar'
 --

 Key: HADOOP-11257
 URL: https://issues.apache.org/jira/browse/HADOOP-11257
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Allen Wittenauer
Assignee: Masatake Iwasaki
 Attachments: HADOOP-11257.1.patch, HADOOP-11257.1.patch, 
 HADOOP-11257.2.patch, HADOOP-11257.3.patch


 Given that 'hadoop jar' and 'yarn jar' work differently, we should mark 
 'hadoop jar' as deprecated in 2.7 and remove it in trunk.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9576) Make NetUtils.wrapException throw EOFException instead of wrapping it as IOException

2014-11-10 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9576:
---
Target Version/s: 2.6.0  (was: 2.1.0-beta)
  Status: Patch Available  (was: Open)

 Make NetUtils.wrapException throw EOFException instead of wrapping it as 
 IOException
 

 Key: HADOOP-9576
 URL: https://issues.apache.org/jira/browse/HADOOP-9576
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.6.0
Reporter: Jian He
Assignee: Steve Loughran
 Attachments: HADOOP-9576-001.patch, HADOOP-9576-003.patch


 In case of EOFException, NetUtils currently wraps it as an IOException. We may 
 want to throw the EOFException as-is: since an EOFException can happen when the 
 connection is lost mid-stream, the client may want to handle such an exception 
 explicitly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11257) Update hadoop jar documentation to warn against using it for launching yarn jars

2014-11-10 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-11257:
--
Description: We should update the hadoop jar documentation to warn 
against using it for launching yarn jars.  (was: Given that 'hadoop jar' and 
'yarn jar' work differently, we should mark 'hadoop jar' as deprecated in 2.7 
and remove it in trunk.)

 Update hadoop jar documentation to warn against using it for launching yarn 
 jars
 --

 Key: HADOOP-11257
 URL: https://issues.apache.org/jira/browse/HADOOP-11257
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Allen Wittenauer
Assignee: Masatake Iwasaki
 Attachments: HADOOP-11257.1.patch, HADOOP-11257.1.patch, 
 HADOOP-11257.2.patch, HADOOP-11257.3.patch


 We should update the hadoop jar documentation to warn against using it for 
 launching yarn jars.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11291) Log the cause of SASL connection failures

2014-11-10 Thread Stephen Chu (JIRA)
Stephen Chu created HADOOP-11291:


 Summary: Log the cause of SASL connection failures
 Key: HADOOP-11291
 URL: https://issues.apache.org/jira/browse/HADOOP-11291
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.5.0
Reporter: Stephen Chu
Assignee: Stephen Chu
Priority: Minor


{{UGI#doAs}} will no longer log a PriviledgedActionException unless 
LOG.isDebugEnabled() == true. HADOOP-10015 made this change because it was 
decided that users calling {{UGI#doAs}} should be responsible for logging the 
error when catching an exception. Also, the log was confusing in certain 
situations (see more details in HADOOP-10015).

However, as Daryn noted, this log message was very helpful in cases of 
debugging security issues.

As an example, we used to see this in the DN logs before HADOOP-10015:
{code}
2014-10-20 11:28:02,112 WARN org.apache.hadoop.security.UserGroupInformation: 
PriviledgedActionException as:hdfs/hosta@realm.com (auth:KERBEROS) 
cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by 
GSSException: No valid credentials provided (Mechanism level: Generic error 
(description in e-text) (60) - NO PREAUTH)]
2014-10-20 11:28:02,112 WARN org.apache.hadoop.ipc.Client: Couldn't setup 
connection for hdfs/hosta@realm.com to hostB.com/101.01.010:8022
2014-10-20 11:28:02,112 WARN org.apache.hadoop.security.UserGroupInformation: 
PriviledgedActionException as:hdfs/hosta@realm.com (auth:KERBEROS) 
cause:java.io.IOException: Couldn't setup connection for 
hdfs/hosta@realm.com to hostB.com/101.01.010:8022
{code}

After the fix went in and the DN was upgraded, it only logs:
{code}
2014-10-20 14:11:40,712 WARN org.apache.hadoop.ipc.Client: Couldn't setup 
connection for hdfs/hosta@realm.com to hostB.com/101.01.010:8022
2014-10-20 14:11:40,713 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
Problem connecting to server: hostB.com/101.01.010:8022
{code}

It'd be good to add more logging information about the cause of a SASL 
connection failure.

Thanks to [~qwertymaniac] for reporting this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11291) Log the cause of SASL connection failures

2014-11-10 Thread Stephen Chu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Chu updated HADOOP-11291:
-
Labels: supportability  (was: )

 Log the cause of SASL connection failures
 -

 Key: HADOOP-11291
 URL: https://issues.apache.org/jira/browse/HADOOP-11291
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.5.0
Reporter: Stephen Chu
Assignee: Stephen Chu
Priority: Minor
  Labels: supportability

 {{UGI#doAs}} will no longer log a PriviledgedActionException unless 
 LOG.isDebugEnabled() == true. HADOOP-10015 made this change because it was 
 decided that users calling {{UGI#doAs}} should be responsible for logging the 
 error when catching an exception. Also, the log was confusing in certain 
 situations (see more details in HADOOP-10015).
 However, as Daryn noted, this log message was very helpful in cases of 
 debugging security issues.
 As an example, we used to see this in the DN logs before HADOOP-10015:
 {code}
 2014-10-20 11:28:02,112 WARN org.apache.hadoop.security.UserGroupInformation: 
 PriviledgedActionException as:hdfs/hosta@realm.com (auth:KERBEROS) 
 cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by 
 GSSException: No valid credentials provided (Mechanism level: Generic error 
 (description in e-text) (60) - NO PREAUTH)]
 2014-10-20 11:28:02,112 WARN org.apache.hadoop.ipc.Client: Couldn't setup 
 connection for hdfs/hosta@realm.com to hostB.com/101.01.010:8022
 2014-10-20 11:28:02,112 WARN org.apache.hadoop.security.UserGroupInformation: 
 PriviledgedActionException as:hdfs/hosta@realm.com (auth:KERBEROS) 
 cause:java.io.IOException: Couldn't setup connection for 
 hdfs/hosta@realm.com to hostB.com/101.01.010:8022
 {code}
 After the fix went in and the DN was upgraded, it only logs:
 {code}
 2014-10-20 14:11:40,712 WARN org.apache.hadoop.ipc.Client: Couldn't setup 
 connection for hdfs/hosta@realm.com to hostB.com/101.01.010:8022
 2014-10-20 14:11:40,713 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
 Problem connecting to server: hostB.com/101.01.010:8022
 {code}
 It'd be good to add more logging information about the cause of a SASL 
 connection failure.
 Thanks to [~qwertymaniac] for reporting this.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9576) Make NetUtils.wrapException throw EOFException instead of wrapping it as IOException

2014-11-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14205242#comment-14205242
 ] 

Hadoop QA commented on HADOOP-9576:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12586310/HADOOP-9576-001.patch
  against trunk revision eace218.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5056//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5056//console

This message is automatically generated.

 Make NetUtils.wrapException throw EOFException instead of wrapping it as 
 IOException
 

 Key: HADOOP-9576
 URL: https://issues.apache.org/jira/browse/HADOOP-9576
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.6.0
Reporter: Jian He
Assignee: Steve Loughran
 Attachments: HADOOP-9576-001.patch, HADOOP-9576-003.patch


 In case of EOFException, NetUtils currently wraps it as an IOException. We may 
 want to throw the EOFException as-is: since an EOFException can happen when the 
 connection is lost mid-stream, the client may want to handle such an exception 
 explicitly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11292) mvm package reports error when using Java 1.8

2014-11-10 Thread Chen He (JIRA)
Chen He created HADOOP-11292:


 Summary: mvm package reports error when using Java 1.8 
 Key: HADOOP-11292
 URL: https://issues.apache.org/jira/browse/HADOOP-11292
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Chen He
Assignee: Chen He


mvn package -Pdist -Dtar -DskipTests reports the following error on the latest 
trunk:

[INFO] BUILD FAILURE

[INFO] 

[INFO] Total time: 11.010 s

[INFO] Finished at: 2014-11-10T11:23:49-08:00

[INFO] Final Memory: 51M/555M

[INFO] 

[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-javadoc-plugin:2.8.1:jar (module-javadocs) on 
project hadoop-maven-plugins: MavenReportException: Error while creating 
archive:

[ERROR] Exit code: 1 - 
./hadoop/hadoop/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/util/Exec.java:45:
 error: unknown tag: String

[ERROR] * @param command List<String> containing command and all arguments

[ERROR] ^

[ERROR] 
./develop/hadoop/hadoop/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/util/Exec.java:46:
 error: unknown tag: String

[ERROR] * @param output List<String> in/out parameter to receive command output

[ERROR] ^

[ERROR] 
./hadoop/hadoop/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/util/FileSetUtils.java:50:
 error: unknown tag: File

[ERROR] * @return List<File> containing every element of the FileSet as a File

[ERROR] ^

[ERROR] 

[ERROR] Command line was: 
/Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home/bin/javadoc 
-J-Dhttp.proxySet=true -J-Dhttp.proxyHost=www-proxy.us.oracle.com 
-J-Dhttp.proxyPort=80 @options @packages

[ERROR] 

[ERROR] Refer to the generated Javadoc files in 
'./hadoop/hadoop/hadoop-maven-plugins/target' dir.

[ERROR] - [Help 1]

[ERROR] 

[ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
switch.

[ERROR] Re-run Maven using the -X switch to enable full debug logging.

[ERROR] 

[ERROR] For more information about the errors and possible solutions, please 
read the following articles:

[ERROR] [Help 1] 
http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException

[ERROR] 

[ERROR] After correcting the problems, you can resume the build with the command

[ERROR]   mvn <goals> -rf :hadoop-maven-plugins
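The errors above come from Java 8's stricter javadoc (doclint), which parses the raw `<String>` in the comments as an unknown HTML tag. One common fix, which the actual Hadoop patch may or may not use, is to escape generic types with `{@code ...}`. A self-contained sketch (the class and method are stubs, not the real Exec.java):

```java
import java.util.List;

public class ExecSketch {
    /**
     * Runs a command (stub). Writing the raw generic type in the javadoc,
     * as the original comments did, trips doclint's "unknown tag" error
     * under Java 8; wrapping it in {@code ...} avoids the HTML parsing.
     *
     * @param command {@code List<String>} containing command and all arguments
     * @return exit code: 0 if a command was given, 1 otherwise (stub logic)
     */
    static int run(List<String> command) {
        return (command == null || command.isEmpty()) ? 1 : 0;
    }

    public static void main(String[] args) {
        System.out.println(run(java.util.Arrays.asList("ls", "-l"))); // 0
    }
}
```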



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11291) Log the cause of SASL connection failures

2014-11-10 Thread Stephen Chu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Chu updated HADOOP-11291:
-
Attachment: HADOOP-11291.1.patch

Attaching a patch that adds the cause to the "Couldn't setup connection" log in 
handleSaslConnectionFailure.

Also, for troubleshooters it will be useful to have the stack trace in this 
situation, so the patch also prints the stack trace of the cause.
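The idea can be sketched as follows. The helper name and the message shape are assumptions for illustration, not the actual handleSaslConnectionFailure code:

```java
import java.io.PrintWriter;
import java.io.StringWriter;

public class SaslFailureLogging {
    // Build a warning message that carries both the cause and its stack
    // trace alongside the "Couldn't setup connection" line, so the SASL
    // failure is diagnosable without enabling debug logging.
    static String describeFailure(String user, String server, Throwable cause) {
        StringWriter stack = new StringWriter();
        cause.printStackTrace(new PrintWriter(stack, true));
        return "Couldn't setup connection for " + user + " to " + server
                + ": " + cause + "\n" + stack;
    }

    public static void main(String[] args) {
        System.out.println(describeFailure("hdfs/hosta@REALM", "hostB:8022",
                new RuntimeException("GSS initiate failed")));
    }
}
```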

 Log the cause of SASL connection failures
 -

 Key: HADOOP-11291
 URL: https://issues.apache.org/jira/browse/HADOOP-11291
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.5.0
Reporter: Stephen Chu
Assignee: Stephen Chu
Priority: Minor
  Labels: supportability
 Attachments: HADOOP-11291.1.patch


 {{UGI#doAs}} will no longer log a PriviledgedActionException unless 
 LOG.isDebugEnabled() == true. HADOOP-10015 made this change because it was 
 decided that users calling {{UGI#doAs}} should be responsible for logging the 
 error when catching an exception. Also, the log was confusing in certain 
 situations (see more details in HADOOP-10015).
 However, as Daryn noted, this log message was very helpful in cases of 
 debugging security issues.
 As an example, we used to see this in the DN logs before HADOOP-10015:
 {code}
 2014-10-20 11:28:02,112 WARN org.apache.hadoop.security.UserGroupInformation: 
 PriviledgedActionException as:hdfs/hosta@realm.com (auth:KERBEROS) 
 cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by 
 GSSException: No valid credentials provided (Mechanism level: Generic error 
 (description in e-text) (60) - NO PREAUTH)]
 2014-10-20 11:28:02,112 WARN org.apache.hadoop.ipc.Client: Couldn't setup 
 connection for hdfs/hosta@realm.com to hostB.com/101.01.010:8022
 2014-10-20 11:28:02,112 WARN org.apache.hadoop.security.UserGroupInformation: 
 PriviledgedActionException as:hdfs/hosta@realm.com (auth:KERBEROS) 
 cause:java.io.IOException: Couldn't setup connection for 
 hdfs/hosta@realm.com to hostB.com/101.01.010:8022
 {code}
 After the fix went in, the DN was upgraded, and only logs:
 {code}
 2014-10-20 14:11:40,712 WARN org.apache.hadoop.ipc.Client: Couldn't setup 
 connection for hdfs/hosta@realm.com to hostB.com/101.01.010:8022
 2014-10-20 14:11:40,713 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
 Problem connecting to server: hostB.com/101.01.010:8022
 {code}
 It'd be good to add more logging information about the cause of a SASL 
 connection failure.
 Thanks to [~qwertymaniac] for reporting this.





[jira] [Updated] (HADOOP-11291) Log the cause of SASL connection failures

2014-11-10 Thread Stephen Chu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11291?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Stephen Chu updated HADOOP-11291:
-
Status: Patch Available  (was: Open)

 Log the cause of SASL connection failures
 -

 Key: HADOOP-11291
 URL: https://issues.apache.org/jira/browse/HADOOP-11291
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.5.0
Reporter: Stephen Chu
Assignee: Stephen Chu
Priority: Minor
  Labels: supportability
 Attachments: HADOOP-11291.1.patch


 {{UGI#doAs}} will no longer log a PriviledgedActionException unless 
 LOG.isDebugEnabled() == true. HADOOP-10015 made this change because it was 
 decided that users calling {{UGI#doAs}} should be responsible for logging the 
 error when catching an exception. Also, the log was confusing in certain 
 situations (see more details in HADOOP-10015).
 However, as Daryn noted, this log message was very helpful in cases of 
 debugging security issues.
 As an example, we used to see this in the DN logs before HADOOP-10015:
 {code}
 2014-10-20 11:28:02,112 WARN org.apache.hadoop.security.UserGroupInformation: 
 PriviledgedActionException as:hdfs/hosta@realm.com (auth:KERBEROS) 
 cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by 
 GSSException: No valid credentials provided (Mechanism level: Generic error 
 (description in e-text) (60) - NO PREAUTH)]
 2014-10-20 11:28:02,112 WARN org.apache.hadoop.ipc.Client: Couldn't setup 
 connection for hdfs/hosta@realm.com to hostB.com/101.01.010:8022
 2014-10-20 11:28:02,112 WARN org.apache.hadoop.security.UserGroupInformation: 
 PriviledgedActionException as:hdfs/hosta@realm.com (auth:KERBEROS) 
 cause:java.io.IOException: Couldn't setup connection for 
 hdfs/hosta@realm.com to hostB.com/101.01.010:8022
 {code}
 After the fix went in, the DN was upgraded, and only logs:
 {code}
 2014-10-20 14:11:40,712 WARN org.apache.hadoop.ipc.Client: Couldn't setup 
 connection for hdfs/hosta@realm.com to hostB.com/101.01.010:8022
 2014-10-20 14:11:40,713 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
 Problem connecting to server: hostB.com/101.01.010:8022
 {code}
 It'd be good to add more logging information about the cause of a SASL 
 connection failure.
 Thanks to [~qwertymaniac] for reporting this.





[jira] [Commented] (HADOOP-11292) mvm package reports error when using Java 1.8

2014-11-10 Thread Chen He (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14205257#comment-14205257
 ] 

Chen He commented on HADOOP-11292:
--

This is because JDK 1.8 introduces doclint, which applies strict javadoc rules. 

 mvm package reports error when using Java 1.8 
 

 Key: HADOOP-11292
 URL: https://issues.apache.org/jira/browse/HADOOP-11292
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Chen He
Assignee: Chen He

 mvn package -Pdist -Dtar -DskipTests reports following error based on latest 
 trunk:
 [INFO] BUILD FAILURE
 [INFO] 
 
 [INFO] Total time: 11.010 s
 [INFO] Finished at: 2014-11-10T11:23:49-08:00
 [INFO] Final Memory: 51M/555M
 [INFO] 
 
 [ERROR] Failed to execute goal 
 org.apache.maven.plugins:maven-javadoc-plugin:2.8.1:jar (module-javadocs) on 
 project hadoop-maven-plugins: MavenReportException: Error while creating 
 archive:
 [ERROR] Exit code: 1 - 
 ./hadoop/hadoop/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/util/Exec.java:45:
  error: unknown tag: String
 [ERROR] * @param command List<String> containing command and all arguments
 [ERROR] ^
 [ERROR] 
 ./develop/hadoop/hadoop/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/util/Exec.java:46:
  error: unknown tag: String
 [ERROR] * @param output List<String> in/out parameter to receive command 
 output
 [ERROR] ^
 [ERROR] 
 ./hadoop/hadoop/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/util/FileSetUtils.java:50:
  error: unknown tag: File
 [ERROR] * @return List<File> containing every element of the FileSet as a File
 [ERROR] ^
 [ERROR] 
 [ERROR] Command line was: 
 /Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home/bin/javadoc 
 -J-Dhttp.proxySet=true -J-Dhttp.proxyHost=www-proxy.us.oracle.com 
 -J-Dhttp.proxyPort=80 @options @packages
 [ERROR] 
 [ERROR] Refer to the generated Javadoc files in 
 './hadoop/hadoop/hadoop-maven-plugins/target' dir.
 [ERROR] -> [Help 1]
 [ERROR] 
 [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
 switch.
 [ERROR] Re-run Maven using the -X switch to enable full debug logging.
 [ERROR] 
 [ERROR] For more information about the errors and possible solutions, please 
 read the following articles:
 [ERROR] [Help 1] 
 http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
 [ERROR] 
 [ERROR] After correcting the problems, you can resume the build with the 
 command
 [ERROR]   mvn goals -rf :hadoop-maven-plugins





[jira] [Updated] (HADOOP-9576) Make NetUtils.wrapException throw EOFException instead of wrapping it as IOException

2014-11-10 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9576:
---
Attachment: (was: HADOOP-9576-001.patch)

 Make NetUtils.wrapException throw EOFException instead of wrapping it as 
 IOException
 

 Key: HADOOP-9576
 URL: https://issues.apache.org/jira/browse/HADOOP-9576
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.6.0
Reporter: Jian He
Assignee: Steve Loughran
 Attachments: HADOOP-9576-003.patch


 In case of EOFException, NetUtils currently wraps it as an IOException. We may 
 want to throw the EOFException as-is: an EOFException can occur when the 
 connection is lost mid-stream, and the client may want to handle that case 
 explicitly.
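The requested behavior can be sketched as follows; wrapException here is a simplified stand-in for the real NetUtils method, under the assumption that an EOFException should pass through unwrapped while other IOExceptions get wrapped with the destination for context.

```java
import java.io.EOFException;
import java.io.IOException;

// Illustrative sketch only, not the actual NetUtils.wrapException code.
public class WrapDemo {
    public static IOException wrapException(String destination, IOException e) {
        if (e instanceof EOFException) {
            // Connection lost mid-stream: let the caller catch EOFException directly.
            return e;
        }
        // Otherwise wrap, adding the destination so the caller knows which host failed.
        return new IOException("Failed on connection to " + destination, e);
    }

    public static void main(String[] args) {
        System.out.println(wrapException("hostB:8022", new EOFException()).getClass());
    }
}
```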





[jira] [Updated] (HADOOP-9576) Make NetUtils.wrapException throw EOFException instead of wrapping it as IOException

2014-11-10 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9576:
---
Status: Patch Available  (was: Open)

 Make NetUtils.wrapException throw EOFException instead of wrapping it as 
 IOException
 

 Key: HADOOP-9576
 URL: https://issues.apache.org/jira/browse/HADOOP-9576
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.6.0
Reporter: Jian He
Assignee: Steve Loughran
 Attachments: HADOOP-9576-003.patch


 In case of EOFException, NetUtils currently wraps it as an IOException. We may 
 want to throw the EOFException as-is: an EOFException can occur when the 
 connection is lost mid-stream, and the client may want to handle that case 
 explicitly.





[jira] [Updated] (HADOOP-9576) Make NetUtils.wrapException throw EOFException instead of wrapping it as IOException

2014-11-10 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-9576:
---
Status: Open  (was: Patch Available)

Jenkins applied the 2013 patch. Deleting that .patch and resubmitting.

 Make NetUtils.wrapException throw EOFException instead of wrapping it as 
 IOException
 

 Key: HADOOP-9576
 URL: https://issues.apache.org/jira/browse/HADOOP-9576
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.6.0
Reporter: Jian He
Assignee: Steve Loughran
 Attachments: HADOOP-9576-003.patch


 In case of EOFException, NetUtils currently wraps it as an IOException. We may 
 want to throw the EOFException as-is: an EOFException can occur when the 
 connection is lost mid-stream, and the client may want to handle that case 
 explicitly.





[jira] [Created] (HADOOP-11293) Factor OSType out from Shell

2014-11-10 Thread Yongjun Zhang (JIRA)
Yongjun Zhang created HADOOP-11293:
--

 Summary: Factor OSType out from Shell
 Key: HADOOP-11293
 URL: https://issues.apache.org/jira/browse/HADOOP-11293
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang


Currently the code that detects the OS type is located in Shell.java. Code that 
needs to check the OS type refers to Shell, even if nothing else from Shell is 
needed. 

I am proposing to refactor OSType out into its own class, to make the OSType 
easier to access and the dependency cleaner.
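A hedged sketch of what a standalone OSType might look like; the enum values and detection logic below are illustrative assumptions, not the eventual patch.

```java
// Hypothetical standalone OSType, factored out of Shell as proposed.
public class OSTypeDemo {
    public enum OSType { LINUX, WINDOWS, MAC, SOLARIS, FREEBSD, OTHER }

    // Detect the OS from a name string such as System.getProperty("os.name").
    public static OSType detect(String osName) {
        String os = osName.toLowerCase();
        if (os.contains("windows")) return OSType.WINDOWS;
        if (os.contains("mac")) return OSType.MAC;
        if (os.contains("sunos") || os.contains("solaris")) return OSType.SOLARIS;
        if (os.contains("freebsd")) return OSType.FREEBSD;
        if (os.contains("linux")) return OSType.LINUX;
        return OSType.OTHER;
    }

    public static void main(String[] args) {
        System.out.println(detect(System.getProperty("os.name")));
    }
}
```

With this, callers could check the OS type without depending on anything else in Shell.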
 





[jira] [Commented] (HADOOP-9576) Make NetUtils.wrapException throw EOFException instead of wrapping it as IOException

2014-11-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14205302#comment-14205302
 ] 

Hadoop QA commented on HADOOP-9576:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12680636/HADOOP-9576-003.patch
  against trunk revision eace218.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5057//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5057//console

This message is automatically generated.

 Make NetUtils.wrapException throw EOFException instead of wrapping it as 
 IOException
 

 Key: HADOOP-9576
 URL: https://issues.apache.org/jira/browse/HADOOP-9576
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.6.0
Reporter: Jian He
Assignee: Steve Loughran
 Attachments: HADOOP-9576-003.patch


 In case of EOFException, NetUtils currently wraps it as an IOException. We may 
 want to throw the EOFException as-is: an EOFException can occur when the 
 connection is lost mid-stream, and the client may want to handle that case 
 explicitly.





[jira] [Updated] (HADOOP-11292) mvm package reports error when using Java 1.8

2014-11-10 Thread Chen He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen He updated HADOOP-11292:
-
Attachment: HADOOP-11292.patch

 mvm package reports error when using Java 1.8 
 

 Key: HADOOP-11292
 URL: https://issues.apache.org/jira/browse/HADOOP-11292
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Chen He
Assignee: Chen He
 Attachments: HADOOP-11292.patch


 mvn package -Pdist -Dtar -DskipTests reports following error based on latest 
 trunk:
 [INFO] BUILD FAILURE
 [INFO] 
 
 [INFO] Total time: 11.010 s
 [INFO] Finished at: 2014-11-10T11:23:49-08:00
 [INFO] Final Memory: 51M/555M
 [INFO] 
 
 [ERROR] Failed to execute goal 
 org.apache.maven.plugins:maven-javadoc-plugin:2.8.1:jar (module-javadocs) on 
 project hadoop-maven-plugins: MavenReportException: Error while creating 
 archive:
 [ERROR] Exit code: 1 - 
 ./hadoop/hadoop/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/util/Exec.java:45:
  error: unknown tag: String
 [ERROR] * @param command List<String> containing command and all arguments
 [ERROR] ^
 [ERROR] 
 ./develop/hadoop/hadoop/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/util/Exec.java:46:
  error: unknown tag: String
 [ERROR] * @param output List<String> in/out parameter to receive command 
 output
 [ERROR] ^
 [ERROR] 
 ./hadoop/hadoop/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/util/FileSetUtils.java:50:
  error: unknown tag: File
 [ERROR] * @return List<File> containing every element of the FileSet as a File
 [ERROR] ^
 [ERROR] 
 [ERROR] Command line was: 
 /Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home/bin/javadoc 
 -J-Dhttp.proxySet=true -J-Dhttp.proxyHost=www-proxy.us.oracle.com 
 -J-Dhttp.proxyPort=80 @options @packages
 [ERROR] 
 [ERROR] Refer to the generated Javadoc files in 
 './hadoop/hadoop/hadoop-maven-plugins/target' dir.
 [ERROR] -> [Help 1]
 [ERROR] 
 [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
 switch.
 [ERROR] Re-run Maven using the -X switch to enable full debug logging.
 [ERROR] 
 [ERROR] For more information about the errors and possible solutions, please 
 read the following articles:
 [ERROR] [Help 1] 
 http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
 [ERROR] 
 [ERROR] After correcting the problems, you can resume the build with the 
 command
 [ERROR]   mvn goals -rf :hadoop-maven-plugins





[jira] [Commented] (HADOOP-11291) Log the cause of SASL connection failures

2014-11-10 Thread Stephen Chu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14205335#comment-14205335
 ] 

Stephen Chu commented on HADOOP-11291:
--

No tests added because this is a minor logging change.

TestZKFailoverControllerStress failure is unrelated to this change. There's an 
outstanding JIRA for it at HADOOP-10668.

 Log the cause of SASL connection failures
 -

 Key: HADOOP-11291
 URL: https://issues.apache.org/jira/browse/HADOOP-11291
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.5.0
Reporter: Stephen Chu
Assignee: Stephen Chu
Priority: Minor
  Labels: supportability
 Attachments: HADOOP-11291.1.patch


 {{UGI#doAs}} will no longer log a PriviledgedActionException unless 
 LOG.isDebugEnabled() == true. HADOOP-10015 made this change because it was 
 decided that users calling {{UGI#doAs}} should be responsible for logging the 
 error when catching an exception. Also, the log was confusing in certain 
 situations (see more details in HADOOP-10015).
 However, as Daryn noted, this log message was very helpful in cases of 
 debugging security issues.
 As an example, we used to see this in the DN logs before HADOOP-10015:
 {code}
 2014-10-20 11:28:02,112 WARN org.apache.hadoop.security.UserGroupInformation: 
 PriviledgedActionException as:hdfs/hosta@realm.com (auth:KERBEROS) 
 cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by 
 GSSException: No valid credentials provided (Mechanism level: Generic error 
 (description in e-text) (60) - NO PREAUTH)]
 2014-10-20 11:28:02,112 WARN org.apache.hadoop.ipc.Client: Couldn't setup 
 connection for hdfs/hosta@realm.com to hostB.com/101.01.010:8022
 2014-10-20 11:28:02,112 WARN org.apache.hadoop.security.UserGroupInformation: 
 PriviledgedActionException as:hdfs/hosta@realm.com (auth:KERBEROS) 
 cause:java.io.IOException: Couldn't setup connection for 
 hdfs/hosta@realm.com to hostB.com/101.01.010:8022
 {code}
 After the fix went in, the DN was upgraded, and only logs:
 {code}
 2014-10-20 14:11:40,712 WARN org.apache.hadoop.ipc.Client: Couldn't setup 
 connection for hdfs/hosta@realm.com to hostB.com/101.01.010:8022
 2014-10-20 14:11:40,713 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
 Problem connecting to server: hostB.com/101.01.010:8022
 {code}
 It'd be good to add more logging information about the cause of a SASL 
 connection failure.
 Thanks to [~qwertymaniac] for reporting this.





[jira] [Commented] (HADOOP-11292) mvm package reports error when using Java 1.8

2014-11-10 Thread Chen He (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14205338#comment-14205338
 ] 

Chen He commented on HADOOP-11292:
--

There are some rules like:

no self-closed HTML tags, such as <br/> or <a id="x"/>
no unclosed HTML tags, such as <ul> without matching </ul>
no invalid HTML end tags, such as </br>
no invalid HTML attributes, based on doclint's interpretation of W3C HTML 4.01
no duplicate HTML id attribute
no empty HTML href attribute
no incorrectly nested headers, such as class documentation must have <h3>, not 
<h4>
no invalid HTML tags, such as List<String> (where you forgot to escape using 
&lt;)
no broken @link references
no broken @param references, they must match the actual parameter name
no broken @throws references, the first word must be a class name

.. etc.
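As an illustration of the "unknown tag: String" errors in the build output, generics in javadoc must be escaped with &lt; or wrapped in {@code ...} so doclint does not parse them as HTML tags. The class below is a stand-in for illustration only, not the actual Exec code.

```java
import java.util.List;

// Hypothetical stand-in showing doclint-clean javadoc for a generic parameter.
public class JavadocDemo {
    /**
     * Runs a command.
     *
     * @param command {@code List<String>} containing the command and all arguments
     * @return 0 if a command was given, 1 otherwise (placeholder logic)
     */
    public static int run(List<String> command) {
        return command.isEmpty() ? 1 : 0;
    }
}
```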

 mvm package reports error when using Java 1.8 
 

 Key: HADOOP-11292
 URL: https://issues.apache.org/jira/browse/HADOOP-11292
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Chen He
Assignee: Chen He
 Attachments: HADOOP-11292.patch


 mvn package -Pdist -Dtar -DskipTests reports following error based on latest 
 trunk:
 [INFO] BUILD FAILURE
 [INFO] 
 
 [INFO] Total time: 11.010 s
 [INFO] Finished at: 2014-11-10T11:23:49-08:00
 [INFO] Final Memory: 51M/555M
 [INFO] 
 
 [ERROR] Failed to execute goal 
 org.apache.maven.plugins:maven-javadoc-plugin:2.8.1:jar (module-javadocs) on 
 project hadoop-maven-plugins: MavenReportException: Error while creating 
 archive:
 [ERROR] Exit code: 1 - 
 ./hadoop/hadoop/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/util/Exec.java:45:
  error: unknown tag: String
 [ERROR] * @param command List<String> containing command and all arguments
 [ERROR] ^
 [ERROR] 
 ./develop/hadoop/hadoop/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/util/Exec.java:46:
  error: unknown tag: String
 [ERROR] * @param output List<String> in/out parameter to receive command 
 output
 [ERROR] ^
 [ERROR] 
 ./hadoop/hadoop/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/util/FileSetUtils.java:50:
  error: unknown tag: File
 [ERROR] * @return List<File> containing every element of the FileSet as a File
 [ERROR] ^
 [ERROR] 
 [ERROR] Command line was: 
 /Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home/bin/javadoc 
 -J-Dhttp.proxySet=true -J-Dhttp.proxyHost=www-proxy.us.oracle.com 
 -J-Dhttp.proxyPort=80 @options @packages
 [ERROR] 
 [ERROR] Refer to the generated Javadoc files in 
 './hadoop/hadoop/hadoop-maven-plugins/target' dir.
 [ERROR] -> [Help 1]
 [ERROR] 
 [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
 switch.
 [ERROR] Re-run Maven using the -X switch to enable full debug logging.
 [ERROR] 
 [ERROR] For more information about the errors and possible solutions, please 
 read the following articles:
 [ERROR] [Help 1] 
 http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
 [ERROR] 
 [ERROR] After correcting the problems, you can resume the build with the 
 command
 [ERROR]   mvn goals -rf :hadoop-maven-plugins





[jira] [Updated] (HADOOP-11292) mvm package reports error when using Java 1.8

2014-11-10 Thread Chen He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen He updated HADOOP-11292:
-
Status: Patch Available  (was: Open)

 mvm package reports error when using Java 1.8 
 

 Key: HADOOP-11292
 URL: https://issues.apache.org/jira/browse/HADOOP-11292
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Chen He
Assignee: Chen He
 Attachments: HADOOP-11292.patch


 mvn package -Pdist -Dtar -DskipTests reports following error based on latest 
 trunk:
 [INFO] BUILD FAILURE
 [INFO] 
 
 [INFO] Total time: 11.010 s
 [INFO] Finished at: 2014-11-10T11:23:49-08:00
 [INFO] Final Memory: 51M/555M
 [INFO] 
 
 [ERROR] Failed to execute goal 
 org.apache.maven.plugins:maven-javadoc-plugin:2.8.1:jar (module-javadocs) on 
 project hadoop-maven-plugins: MavenReportException: Error while creating 
 archive:
 [ERROR] Exit code: 1 - 
 ./hadoop/hadoop/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/util/Exec.java:45:
  error: unknown tag: String
 [ERROR] * @param command List<String> containing command and all arguments
 [ERROR] ^
 [ERROR] 
 ./develop/hadoop/hadoop/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/util/Exec.java:46:
  error: unknown tag: String
 [ERROR] * @param output List<String> in/out parameter to receive command 
 output
 [ERROR] ^
 [ERROR] 
 ./hadoop/hadoop/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/util/FileSetUtils.java:50:
  error: unknown tag: File
 [ERROR] * @return List<File> containing every element of the FileSet as a File
 [ERROR] ^
 [ERROR] 
 [ERROR] Command line was: 
 /Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home/bin/javadoc 
 -J-Dhttp.proxySet=true -J-Dhttp.proxyHost=www-proxy.us.oracle.com 
 -J-Dhttp.proxyPort=80 @options @packages
 [ERROR] 
 [ERROR] Refer to the generated Javadoc files in 
 './hadoop/hadoop/hadoop-maven-plugins/target' dir.
 [ERROR] -> [Help 1]
 [ERROR] 
 [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
 switch.
 [ERROR] Re-run Maven using the -X switch to enable full debug logging.
 [ERROR] 
 [ERROR] For more information about the errors and possible solutions, please 
 read the following articles:
 [ERROR] [Help 1] 
 http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
 [ERROR] 
 [ERROR] After correcting the problems, you can resume the build with the 
 command
 [ERROR]   mvn goals -rf :hadoop-maven-plugins





[jira] [Commented] (HADOOP-11292) mvm package reports error when using Java 1.8

2014-11-10 Thread Chen He (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14205319#comment-14205319
 ] 

Chen He commented on HADOOP-11292:
--

Just disable doclint.
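For reference, one common way to do that (an assumption about how a fix might look, not the contents of the attached patch) is to pass -Xdoclint:none to the JDK 8 javadoc tool through the maven-javadoc-plugin configuration:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-javadoc-plugin</artifactId>
  <configuration>
    <!-- JDK 8's doclint rejects the existing javadoc; this turns it off -->
    <additionalparam>-Xdoclint:none</additionalparam>
  </configuration>
</plugin>
```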

 mvm package reports error when using Java 1.8 
 

 Key: HADOOP-11292
 URL: https://issues.apache.org/jira/browse/HADOOP-11292
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Chen He
Assignee: Chen He
 Attachments: HADOOP-11292.patch


 mvn package -Pdist -Dtar -DskipTests reports following error based on latest 
 trunk:
 [INFO] BUILD FAILURE
 [INFO] 
 
 [INFO] Total time: 11.010 s
 [INFO] Finished at: 2014-11-10T11:23:49-08:00
 [INFO] Final Memory: 51M/555M
 [INFO] 
 
 [ERROR] Failed to execute goal 
 org.apache.maven.plugins:maven-javadoc-plugin:2.8.1:jar (module-javadocs) on 
 project hadoop-maven-plugins: MavenReportException: Error while creating 
 archive:
 [ERROR] Exit code: 1 - 
 ./hadoop/hadoop/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/util/Exec.java:45:
  error: unknown tag: String
 [ERROR] * @param command List<String> containing command and all arguments
 [ERROR] ^
 [ERROR] 
 ./develop/hadoop/hadoop/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/util/Exec.java:46:
  error: unknown tag: String
 [ERROR] * @param output List<String> in/out parameter to receive command 
 output
 [ERROR] ^
 [ERROR] 
 ./hadoop/hadoop/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/util/FileSetUtils.java:50:
  error: unknown tag: File
 [ERROR] * @return List<File> containing every element of the FileSet as a File
 [ERROR] ^
 [ERROR] 
 [ERROR] Command line was: 
 /Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home/bin/javadoc 
 -J-Dhttp.proxySet=true -J-Dhttp.proxyHost=www-proxy.us.oracle.com 
 -J-Dhttp.proxyPort=80 @options @packages
 [ERROR] 
 [ERROR] Refer to the generated Javadoc files in 
 './hadoop/hadoop/hadoop-maven-plugins/target' dir.
 [ERROR] -> [Help 1]
 [ERROR] 
 [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
 switch.
 [ERROR] Re-run Maven using the -X switch to enable full debug logging.
 [ERROR] 
 [ERROR] For more information about the errors and possible solutions, please 
 read the following articles:
 [ERROR] [Help 1] 
 http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
 [ERROR] 
 [ERROR] After correcting the problems, you can resume the build with the 
 command
 [ERROR]   mvn goals -rf :hadoop-maven-plugins





[jira] [Commented] (HADOOP-11291) Log the cause of SASL connection failures

2014-11-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11291?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14205333#comment-14205333
 ] 

Hadoop QA commented on HADOOP-11291:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12680641/HADOOP-11291.1.patch
  against trunk revision eace218.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ha.TestZKFailoverControllerStress

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5058//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5058//console

This message is automatically generated.

 Log the cause of SASL connection failures
 -

 Key: HADOOP-11291
 URL: https://issues.apache.org/jira/browse/HADOOP-11291
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.5.0
Reporter: Stephen Chu
Assignee: Stephen Chu
Priority: Minor
  Labels: supportability
 Attachments: HADOOP-11291.1.patch


 {{UGI#doAs}} will no longer log a PriviledgedActionException unless 
 LOG.isDebugEnabled() == true. HADOOP-10015 made this change because it was 
 decided that users calling {{UGI#doAs}} should be responsible for logging the 
 error when catching an exception. Also, the log was confusing in certain 
 situations (see more details in HADOOP-10015).
 However, as Daryn noted, this log message was very helpful in cases of 
 debugging security issues.
 As an example, we used to see this in the DN logs before HADOOP-10015:
 {code}
 2014-10-20 11:28:02,112 WARN org.apache.hadoop.security.UserGroupInformation: 
 PriviledgedActionException as:hdfs/hosta@realm.com (auth:KERBEROS) 
 cause:javax.security.sasl.SaslException: GSS initiate failed [Caused by 
 GSSException: No valid credentials provided (Mechanism level: Generic error 
 (description in e-text) (60) - NO PREAUTH)]
 2014-10-20 11:28:02,112 WARN org.apache.hadoop.ipc.Client: Couldn't setup 
 connection for hdfs/hosta@realm.com to hostB.com/101.01.010:8022
 2014-10-20 11:28:02,112 WARN org.apache.hadoop.security.UserGroupInformation: 
 PriviledgedActionException as:hdfs/hosta@realm.com (auth:KERBEROS) 
 cause:java.io.IOException: Couldn't setup connection for 
 hdfs/hosta@realm.com to hostB.com/101.01.010:8022
 {code}
 After the fix went in, the DN was upgraded, and only logs:
 {code}
 2014-10-20 14:11:40,712 WARN org.apache.hadoop.ipc.Client: Couldn't setup 
 connection for hdfs/hosta@realm.com to hostB.com/101.01.010:8022
 2014-10-20 14:11:40,713 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: 
 Problem connecting to server: hostB.com/101.01.010:8022
 {code}
 It'd be good to add more logging information about the cause of a SASL 
 connection failure.
 Thanks to [~qwertymaniac] for reporting this.





[jira] [Updated] (HADOOP-11292) mvn package reports error when using Java 1.8

2014-11-10 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HADOOP-11292:

Summary: mvn package reports error when using Java 1.8   (was: mvm 
package reports error when using Java 1.8 )

 mvn package reports error when using Java 1.8 
 

 Key: HADOOP-11292
 URL: https://issues.apache.org/jira/browse/HADOOP-11292
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Chen He
Assignee: Chen He
 Attachments: HADOOP-11292.patch


 mvn package -Pdist -Dtar -DskipTests reports following error based on latest 
 trunk:
 [INFO] BUILD FAILURE
 [INFO] 
 
 [INFO] Total time: 11.010 s
 [INFO] Finished at: 2014-11-10T11:23:49-08:00
 [INFO] Final Memory: 51M/555M
 [INFO] 
 
 [ERROR] Failed to execute goal 
 org.apache.maven.plugins:maven-javadoc-plugin:2.8.1:jar (module-javadocs) on 
 project hadoop-maven-plugins: MavenReportException: Error while creating 
 archive:
 [ERROR] Exit code: 1 - 
 ./hadoop/hadoop/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/util/Exec.java:45:
  error: unknown tag: String
 [ERROR] * @param command List&lt;String&gt; containing command and all arguments
 [ERROR] ^
 [ERROR] 
 ./develop/hadoop/hadoop/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/util/Exec.java:46:
  error: unknown tag: String
 [ERROR] * @param output List&lt;String&gt; in/out parameter to receive command output
 [ERROR] ^
 [ERROR] 
 ./hadoop/hadoop/hadoop-maven-plugins/src/main/java/org/apache/hadoop/maven/plugin/util/FileSetUtils.java:50:
  error: unknown tag: File
 [ERROR] * @return List&lt;File&gt; containing every element of the FileSet as a File
 [ERROR] ^
 [ERROR] 
 [ERROR] Command line was: 
 /Library/Java/JavaVirtualMachines/jdk1.8.0_25.jdk/Contents/Home/bin/javadoc 
 -J-Dhttp.proxySet=true -J-Dhttp.proxyHost=www-proxy.us.oracle.com 
 -J-Dhttp.proxyPort=80 @options @packages
 [ERROR] 
 [ERROR] Refer to the generated Javadoc files in 
 './hadoop/hadoop/hadoop-maven-plugins/target' dir.
 [ERROR] -> [Help 1]
 [ERROR] 
 [ERROR] To see the full stack trace of the errors, re-run Maven with the -e 
 switch.
 [ERROR] Re-run Maven using the -X switch to enable full debug logging.
 [ERROR] 
 [ERROR] For more information about the errors and possible solutions, please 
 read the following articles:
 [ERROR] [Help 1] 
 http://cwiki.apache.org/confluence/display/MAVEN/MojoExecutionException
 [ERROR] 
 [ERROR] After correcting the problems, you can resume the build with the 
 command
 [ERROR]   mvn &lt;goals&gt; -rf :hadoop-maven-plugins





[jira] [Commented] (HADOOP-11292) mvm package reports error when using Java 1.8

2014-11-10 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14205324#comment-14205324
 ] 

Andrew Wang commented on HADOOP-11292:
--

Hey Chen, do you have a sense for how hard it'd be to just clean up these 
errors? Ignoring is okay, but it seems better to fix the javadoc if it's not 
too much work.
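As a hedged sketch of one common clean-up (not necessarily what the HADOOP-11292 patch does), the offending javadoc can wrap generic types in {@code ...} so Java 8's stricter doclint no longer parses the angle brackets of `List<String>` as an unknown HTML tag. The class and method names below are illustrative, not the real `Exec.java` API:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

final class ExecDoc {
    /**
     * Runs a command. This javadoc passes Java 8 doclint because the
     * generic type is wrapped in {@code ...} rather than written with
     * raw angle brackets, which doclint treats as HTML tags.
     *
     * @param command {@code List<String>} containing the command and all arguments
     * @param output  {@code List<String>} in/out parameter to receive command output
     */
    static void run(List<String> command, List<String> output) {
        // Trivial body: only the javadoc form matters for this example.
        output.add(String.join(" ", command));
    }
}
```

Escaping as `&lt;String&gt;` works too; `{@code ...}` is usually preferred because it also renders in code font.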



[jira] [Commented] (HADOOP-11292) mvn package reports error when using Java 1.8

2014-11-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14205347#comment-14205347
 ] 

Hadoop QA commented on HADOOP-11292:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12680651/HADOOP-11292.patch
  against trunk revision eace218.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in .

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5059//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5059//console

This message is automatically generated.



[jira] [Updated] (HADOOP-11238) Group cache expiry causes namenode slowdown

2014-11-10 Thread Chris Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Li updated HADOOP-11238:
--
Status: Patch Available  (was: Open)

 Group cache expiry causes namenode slowdown
 ---

 Key: HADOOP-11238
 URL: https://issues.apache.org/jira/browse/HADOOP-11238
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.5.1
Reporter: Chris Li
Assignee: Chris Li
Priority: Minor
 Attachments: HADOOP-11238.patch


 Our namenode pauses for 12-60 seconds several times every hour. During these 
 pauses, no new requests can come in.
 Around the time of pauses, we have log messages such as:
 2014-10-22 13:24:22,688 WARN org.apache.hadoop.security.Groups: Potential 
 performance problem: getGroups(user=x) took 34507 milliseconds.
 The current theory is:
 1. Groups has a cache that is refreshed periodically. Each entry has a cache 
 expiry.
 2. When a cache entry expires, multiple threads can see this expiration and 
 then we have a thundering herd effect where all these threads hit the wire 
 and overwhelm our LDAP servers (we are using ShellBasedUnixGroupsMapping with 
 sssd, how this happens has yet to be established)
 3. Group resolution queries begin to take longer; I've observed them taking 1.2 
 seconds instead of the usual 0.01-0.03 seconds when measuring in the shell with 
 `time groups myself`
 4. If there is mutual exclusion somewhere along this path, a 1 second pause 
 could lead to a 60 second pause as all the threads compete for the resource. 
 The exact cause hasn't been established
 Potential solutions include:
 1. Increasing group cache time, which will make the issue less frequent
 2. Rolling evictions of the cache so we prevent the large spike in LDAP 
 queries
 3. Gate the cache refresh so that only one thread is responsible for 
 refreshing the cache
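Solution 3 above can be sketched as follows. This is an illustrative gate, not the attached HADOOP-11238 patch: on expiry, a single winning thread refreshes the entry while every other thread keeps serving the stale value, so one expiration cannot trigger a thundering herd against the group-mapping backend (names like `GatedGroupCache` and the `loader` function are assumptions for the example):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.Function;

final class GatedGroupCache {
    private static final class Entry {
        volatile String groups;              // cached group list
        volatile long expiresAt;             // expiry time in millis
        final AtomicBoolean refreshing = new AtomicBoolean(false);
        Entry(String groups, long expiresAt) { this.groups = groups; this.expiresAt = expiresAt; }
    }

    private final Map<String, Entry> cache = new ConcurrentHashMap<>();
    private final long ttlMillis;
    private final Function<String, String> loader;   // the expensive lookup (e.g. LDAP/sssd)

    GatedGroupCache(long ttlMillis, Function<String, String> loader) {
        this.ttlMillis = ttlMillis;
        this.loader = loader;
    }

    String getGroups(String user) {
        Entry e = cache.get(user);
        if (e == null) {
            // First lookup for this user has to block once.
            Entry fresh = new Entry(loader.apply(user), System.currentTimeMillis() + ttlMillis);
            Entry prior = cache.putIfAbsent(user, fresh);
            return (prior == null ? fresh : prior).groups;
        }
        // Gate: only the thread that wins the CAS pays the refresh cost;
        // all other threads keep reading the stale value below.
        if (System.currentTimeMillis() >= e.expiresAt && e.refreshing.compareAndSet(false, true)) {
            try {
                e.groups = loader.apply(user);
                e.expiresAt = System.currentTimeMillis() + ttlMillis;
            } finally {
                e.refreshing.set(false);
            }
        }
        return e.groups;
    }
}
```

The trade-off is serving slightly stale groups during a refresh, which is usually acceptable compared with a 12-60 second namenode pause.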





[jira] [Updated] (HADOOP-11238) Group cache expiry causes namenode slowdown

2014-11-10 Thread Chris Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11238?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Li updated HADOOP-11238:
--
Attachment: HADOOP-11238.patch

Uploading patch



[jira] [Created] (HADOOP-11294) Nfs3FileAttributes should not change the values of nlink and size in the constructor

2014-11-10 Thread Brandon Li (JIRA)
Brandon Li created HADOOP-11294:
---

 Summary: Nfs3FileAttributes should not change the values of nlink 
and size in the constructor 
 Key: HADOOP-11294
 URL: https://issues.apache.org/jira/browse/HADOOP-11294
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Brandon Li
Assignee: Brandon Li
Priority: Minor


Instead, it should just take the values passed in.
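A minimal before/after sketch of the kind of change this summary describes; the class and field names below are hypothetical, not the real Nfs3FileAttributes API:

```java
// Hypothetical sketch: the fix is for the constructor to be a pure
// pass-through for nlink and size (and, per the later retitle, rdev),
// with no internal recomputation of the caller's values.
final class FileAttrsSketch {
    final int nlink;   // hard-link count, stored exactly as given
    final long size;   // file size in bytes, stored exactly as given

    FileAttrsSketch(int nlink, long size) {
        this.nlink = nlink;   // previously: overridden inside the constructor
        this.size = size;     // previously: derived rather than taken as-is
    }
}
```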





[jira] [Updated] (HADOOP-11294) Nfs3FileAttributes should not change the values of rdev, nlink and size in the constructor

2014-11-10 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HADOOP-11294:

Summary: Nfs3FileAttributes should not change the values of rdev, nlink and 
size in the constructor   (was: Nfs3FileAttributes should not change the values 
of nlink and size in the constructor )



[jira] [Updated] (HADOOP-11294) Nfs3FileAttributes should not change the values of rdev, nlink and size in the constructor

2014-11-10 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HADOOP-11294:

Affects Version/s: 2.2.0



[jira] [Updated] (HADOOP-11294) Nfs3FileAttributes should not change the values of rdev, nlink and size in the constructor

2014-11-10 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HADOOP-11294:

Component/s: nfs



[jira] [Commented] (HADOOP-11238) Group cache expiry causes namenode slowdown

2014-11-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14205437#comment-14205437
 ] 

Hadoop QA commented on HADOOP-11238:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12680662/HADOOP-11238.patch
  against trunk revision eace218.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.security.ssl.TestReloadingX509TrustManager

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5060//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5060//console

This message is automatically generated.



[jira] [Commented] (HADOOP-11292) mvn package reports error when using Java 1.8

2014-11-10 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14205444#comment-14205444
 ] 

Haohui Mai commented on HADOOP-11292:
-

+1 for fixing the javadoc instead of disabling the lint.



[jira] [Updated] (HADOOP-11294) Nfs3FileAttributes should not change the values of rdev, nlink and size in the constructor

2014-11-10 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HADOOP-11294:

Attachment: HADOOP-11294.001.patch



[jira] [Updated] (HADOOP-11294) Nfs3FileAttributes should not change the values of rdev, nlink and size in the constructor

2014-11-10 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HADOOP-11294:

Status: Patch Available  (was: Open)



[jira] [Commented] (HADOOP-9576) Make NetUtils.wrapException throw EOFException instead of wrapping it as IOException

2014-11-10 Thread Jian He (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14205505#comment-14205505
 ] 

Jian He commented on HADOOP-9576:
-

+1, committing

 Make NetUtils.wrapException throw EOFException instead of wrapping it as 
 IOException
 

 Key: HADOOP-9576
 URL: https://issues.apache.org/jira/browse/HADOOP-9576
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.6.0
Reporter: Jian He
Assignee: Steve Loughran
 Attachments: HADOOP-9576-003.patch


 In case of EOFException, NetUtils currently wraps it as an IOException. We may 
 want to throw the EOFException as-is: since an EOFException can occur when the 
 connection is lost mid-stream, the client may want to handle that exception 
 explicitly.
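A hedged sketch of the behavior being requested; the method name and signature here are illustrative, not the real NetUtils.wrapException API:

```java
import java.io.EOFException;
import java.io.IOException;

final class WrapSketch {
    // Illustrative only: pass an EOFException through unchanged so a client
    // can catch the lost-connection case directly, and wrap everything else
    // with the destination for context.
    static IOException wrap(IOException cause, String destination) {
        if (cause instanceof EOFException) {
            return cause;  // connection dropped mid-stream: rethrow as-is
        }
        return new IOException("Failed on connection to " + destination, cause);
    }
}
```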





[jira] [Commented] (HADOOP-11294) Nfs3FileAttributes should not change the values of rdev, nlink and size in the constructor

2014-11-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14205517#comment-14205517
 ] 

Hadoop QA commented on HADOOP-11294:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12680672/HADOOP-11294.001.patch
  against trunk revision eace218.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-nfs hadoop-hdfs-project/hadoop-hdfs-nfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5061//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/5061//console

This message is automatically generated.



[jira] [Commented] (HADOOP-11294) Nfs3FileAttributes should not change the values of rdev, nlink and size in the constructor

2014-11-10 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14205544#comment-14205544
 ] 

Haohui Mai commented on HADOOP-11294:
-

+1. I'll commit it shortly.



[jira] [Updated] (HADOOP-11294) Nfs3FileAttributes should not change the values of rdev, nlink and size in the constructor

2014-11-10 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HADOOP-11294:

   Resolution: Fixed
Fix Version/s: 2.7.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk and branch-2. Thanks [~brandonli] for the 
contribution.



[jira] [Commented] (HADOOP-11294) Nfs3FileAttributes should not change the values of rdev, nlink and size in the constructor

2014-11-10 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14205579#comment-14205579
 ] 

Brandon Li commented on HADOOP-11294:
-

Thank you, [~wheat9].



[jira] [Updated] (HADOOP-11260) Patch up Jetty to disable SSLv3

2014-11-10 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-11260:
--
Fix Version/s: (was: 2.6.0)
   2.5.2

Included this in 2.5.2 as well. 

 Patch up Jetty to disable SSLv3
 ---

 Key: HADOOP-11260
 URL: https://issues.apache.org/jira/browse/HADOOP-11260
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.5.1
Reporter: Karthik Kambatla
Assignee: Mike Yoder
Priority: Blocker
 Fix For: 2.5.2

 Attachments: HADOOP-11260.001.patch, HADOOP-11260.002.patch


 Hadoop uses an older version of Jetty that allows SSLv3. We should fix it up. 





[jira] [Commented] (HADOOP-11238) Group cache expiry causes namenode slowdown

2014-11-10 Thread Benoy Antony (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11238?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14205602#comment-14205602
 ] 

Benoy Antony commented on HADOOP-11238:
---

Chris, could you please describe the solution?

 Group cache expiry causes namenode slowdown
 ---

 Key: HADOOP-11238
 URL: https://issues.apache.org/jira/browse/HADOOP-11238
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.5.1
Reporter: Chris Li
Assignee: Chris Li
Priority: Minor
 Attachments: HADOOP-11238.patch


 Our namenode pauses for 12-60 seconds several times every hour. During these 
 pauses, no new requests can come in.
 Around the time of pauses, we have log messages such as:
 2014-10-22 13:24:22,688 WARN org.apache.hadoop.security.Groups: Potential 
 performance problem: getGroups(user=x) took 34507 milliseconds.
 The current theory is:
 1. Groups has a cache that is refreshed periodically. Each entry has a cache 
 expiry.
 2. When a cache entry expires, multiple threads can see this expiration and 
 then we have a thundering herd effect where all these threads hit the wire 
 and overwhelm our LDAP servers (we are using ShellBasedUnixGroupsMapping with 
 sssd, how this happens has yet to be established)
 3. group resolution queries begin to take longer, I've observed it taking 1.2 
 seconds instead of the usual 0.01-0.03 seconds when measuring in the shell 
 `time groups myself`
 4. If there is mutual exclusion somewhere along this path, a 1 second pause 
 could lead to a 60 second pause as all the threads compete for the resource. 
 The exact cause hasn't been established
 Potential solutions include:
 1. Increasing group cache time, which will make the issue less frequent
 2. Rolling evictions of the cache so we prevent the large spike in LDAP 
 queries
 3. Gate the cache refresh so that only one thread is responsible for 
 refreshing the cache
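Option 3 above can be sketched as follows. This is illustrative only (the class name and structure are hypothetical, not Hadoop's Groups implementation): one thread wins a compare-and-set and reloads the expired entry while other callers keep serving the stale value, which avoids the thundering herd against LDAP.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.function.Function;

/**
 * Sketch: serve stale entries while exactly one thread refreshes an
 * expired entry, so an expiry never turns into a herd of lookups.
 */
public class StaleWhileRefreshCache<K, V> {

  private static class Entry<V> {
    volatile V value;
    volatile long expiresAt;
    final AtomicBoolean refreshing = new AtomicBoolean(false);
    Entry(V value, long expiresAt) { this.value = value; this.expiresAt = expiresAt; }
  }

  private final ConcurrentHashMap<K, Entry<V>> map = new ConcurrentHashMap<>();
  private final Function<K, V> loader;
  private final long ttlMillis;

  public StaleWhileRefreshCache(Function<K, V> loader, long ttlMillis) {
    this.loader = loader;
    this.ttlMillis = ttlMillis;
  }

  public V get(K key) {
    long now = System.currentTimeMillis();
    Entry<V> e = map.get(key);
    if (e == null) {
      // First lookup: load synchronously (callers may race here once).
      V v = loader.apply(key);
      map.putIfAbsent(key, new Entry<>(v, now + ttlMillis));
      return v;
    }
    if (now >= e.expiresAt && e.refreshing.compareAndSet(false, true)) {
      // Only the thread that wins the CAS pays the reload cost;
      // everyone else keeps reading the stale value below.
      try {
        e.value = loader.apply(key);
        e.expiresAt = System.currentTimeMillis() + ttlMillis;
      } finally {
        e.refreshing.set(false);
      }
    }
    return e.value;
  }
}
```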



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11293) Factor OSType out from Shell

2014-11-10 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HADOOP-11293:
---
Status: Patch Available  (was: Open)

 Factor OSType out from Shell
 

 Key: HADOOP-11293
 URL: https://issues.apache.org/jira/browse/HADOOP-11293
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang
 Attachments: HADOOP-11293.001.patch


 Currently the code that detects the OS type is located in Shell.java. Code 
 that needs to check the OS type refers to Shell, even if nothing else from 
 Shell is needed. 
 I am proposing to refactor OSType out into its own class, to make the OSType 
 easier to access and the dependency cleaner.
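A minimal sketch of what the factored-out class might look like (the class name, constants, and detection strings are illustrative; the attached patch may differ):

```java
/**
 * Standalone OS-type detection, derived from the os.name system
 * property the same way Shell.java currently does it.
 */
public enum OSType {
  LINUX, WINDOWS, MAC, SOLARIS, FREEBSD, OTHER;

  public static final OSType CURRENT = detect();
  public static final boolean IS_WINDOWS = CURRENT == WINDOWS;
  public static final boolean IS_LINUX = CURRENT == LINUX;

  private static OSType detect() {
    String os = System.getProperty("os.name", "").toLowerCase();
    if (os.contains("windows")) return WINDOWS;
    if (os.contains("linux")) return LINUX;
    if (os.contains("mac")) return MAC;
    if (os.contains("sunos") || os.contains("solaris")) return SOLARIS;
    if (os.contains("freebsd")) return FREEBSD;
    return OTHER;
  }
}
```

Callers would then write {{OSType.IS_WINDOWS}} instead of pulling in all of Shell.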
  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11293) Factor OSType out from Shell

2014-11-10 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11293?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HADOOP-11293:
---
Attachment: HADOOP-11293.001.patch

 Factor OSType out from Shell
 

 Key: HADOOP-11293
 URL: https://issues.apache.org/jira/browse/HADOOP-11293
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang
 Attachments: HADOOP-11293.001.patch


 Currently the code that detects the OS type is located in Shell.java. Code 
 that needs to check the OS type refers to Shell, even if nothing else from 
 Shell is needed. 
 I am proposing to refactor OSType out into its own class, to make the OSType 
 easier to access and the dependency cleaner.
  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11293) Factor OSType out from Shell

2014-11-10 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14205612#comment-14205612
 ] 

Yongjun Zhang commented on HADOOP-11293:


Submitted patch rev 001. This is a massive change that touches a lot of files, 
but I think it would make the code cleaner.

Hi [~cmccabe], thanks for your encouragement when I mentioned that a change 
like this would be nice, so I went ahead and made it. Would you please take a 
look at the patch when you have time? Thanks.



 Factor OSType out from Shell
 

 Key: HADOOP-11293
 URL: https://issues.apache.org/jira/browse/HADOOP-11293
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang
 Attachments: HADOOP-11293.001.patch


 Currently the code that detects the OS type is located in Shell.java. Code 
 that needs to check the OS type refers to Shell, even if nothing else from 
 Shell is needed. 
 I am proposing to refactor OSType out into its own class, to make the OSType 
 easier to access and the dependency cleaner.
  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-9576) Make NetUtils.wrapException throw EOFException instead of wrapping it as IOException

2014-11-10 Thread Jian He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jian He updated HADOOP-9576:

  Resolution: Fixed
   Fix Version/s: 2.7.0
Target Version/s: 2.7.0  (was: 2.6.0)
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

Committed to trunk and branch-2.  thanks Steve !

 Make NetUtils.wrapException throw EOFException instead of wrapping it as 
 IOException
 

 Key: HADOOP-9576
 URL: https://issues.apache.org/jira/browse/HADOOP-9576
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.6.0
Reporter: Jian He
Assignee: Steve Loughran
 Fix For: 2.7.0

 Attachments: HADOOP-9576-003.patch


 In case of EOFException, NetUtils currently wraps it as a generic 
 IOException. We may want to rethrow the EOFException as-is: it can occur 
 when the connection is lost mid-call, and the client may want to handle 
 that case explicitly.
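A sketch of the proposed behavior (heavily simplified; the real NetUtils#wrapException maps many more exception types and rewrites the message to include connection details):

```java
import java.io.EOFException;
import java.io.IOException;

/**
 * Simplified illustration: return EOFException unchanged so callers can
 * catch it by its specific type, and wrap everything else.
 */
public class WrapExceptionSketch {
  public static IOException wrap(String host, int port, IOException e) {
    if (e instanceof EOFException) {
      return e; // preserve the specific type for the caller
    }
    return new IOException("Call to " + host + ":" + port + " failed: " + e, e);
  }
}
```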



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9576) Make NetUtils.wrapException throw EOFException instead of wrapping it as IOException

2014-11-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14205760#comment-14205760
 ] 

Hudson commented on HADOOP-9576:


FAILURE: Integrated in Hadoop-trunk-Commit #6507 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6507/])
HADOOP-9576. Changed NetUtils#wrapException to throw EOFException instead of 
wrapping it as IOException. Contributed by Steve Loughran (jianhe: rev 
86bf8c7193013834f67e03bd67a320cc080ef32c)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/net/NetUtils.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/net/TestNetUtils.java


 Make NetUtils.wrapException throw EOFException instead of wrapping it as 
 IOException
 

 Key: HADOOP-9576
 URL: https://issues.apache.org/jira/browse/HADOOP-9576
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.6.0
Reporter: Jian He
Assignee: Steve Loughran
 Fix For: 2.7.0

 Attachments: HADOOP-9576-003.patch


 In case of EOFException, NetUtils currently wraps it as a generic 
 IOException. We may want to rethrow the EOFException as-is: it can occur 
 when the connection is lost mid-call, and the client may want to handle 
 that case explicitly.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11293) Factor OSType out from Shell

2014-11-10 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14205826#comment-14205826
 ] 

Colin Patrick McCabe commented on HADOOP-11293:
---

Good idea.

How about calling this {{CurrentOperatingSystem}} instead of {{OSTypeUtil}}?   
{{OSTypeUtil}} suggests that this is a class with utility methods.  But it's 
not, really.  Also, perhaps we should rename {{WINDOWS}} to {{IS_WINDOWS}}, and 
so forth.

Also there are a bunch of unrelated whitespace changes in this patch-- let's 
get rid of those.

 Factor OSType out from Shell
 

 Key: HADOOP-11293
 URL: https://issues.apache.org/jira/browse/HADOOP-11293
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang
 Attachments: HADOOP-11293.001.patch


 Currently the code that detects the OS type is located in Shell.java. Code 
 that needs to check the OS type refers to Shell, even if nothing else from 
 Shell is needed. 
 I am proposing to refactor OSType out into its own class, to make the OSType 
 easier to access and the dependency cleaner.
  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HADOOP-11293) Factor OSType out from Shell

2014-11-10 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14205826#comment-14205826
 ] 

Colin Patrick McCabe edited comment on HADOOP-11293 at 11/11/14 2:15 AM:
-

Good idea.

How about calling this {{CurrentOperatingSystem}} instead of {{OSTypeUtil}}?   
{{OSTypeUtil}} suggests that this is a class with utility methods.  But it's 
not, really.  Also, perhaps we should rename {{WINDOWS}} to {{IS_WINDOWS}}, and 
so forth.

[edit: perhaps naming this class {{OperatingSystem}} would work as well, if we 
use the IS_ methods everywhere.]

Also there are a bunch of unrelated whitespace changes in this patch-- let's 
get rid of those.


was (Author: cmccabe):
Good idea.

How about calling this {{CurrentOperatingSystem}} instead of {{OSTypeUtil}}?   
{{OSTypeUtil}} suggests that this is a class with utility methods.  But it's 
not, really.  Also, perhaps we should rename {{WINDOWS}} to {{IS_WINDOWS}}, and 
so forth.

Also there are a bunch of unrelated whitespace changes in this patch-- let's 
get rid of those.

 Factor OSType out from Shell
 

 Key: HADOOP-11293
 URL: https://issues.apache.org/jira/browse/HADOOP-11293
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang
 Attachments: HADOOP-11293.001.patch


 Currently the code that detects the OS type is located in Shell.java. Code 
 that needs to check the OS type refers to Shell, even if nothing else from 
 Shell is needed. 
 I am proposing to refactor OSType out into its own class, to make the OSType 
 easier to access and the dependency cleaner.
  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Assigned] (HADOOP-11044) FileSystem counters can overflow for large number of readOps, largeReadOps, writeOps

2014-11-10 Thread Swapnil Daingade (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11044?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Swapnil Daingade reassigned HADOOP-11044:
-

Assignee: Swapnil Daingade

 FileSystem counters can overflow for large number of readOps, largeReadOps, 
 writeOps
 

 Key: HADOOP-11044
 URL: https://issues.apache.org/jira/browse/HADOOP-11044
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.5.0, 2.4.1
Reporter: Swapnil Daingade
Assignee: Swapnil Daingade
Priority: Minor
 Attachments: 11044.patch4, 11044.patch6


 The org.apache.hadoop.fs.FileSystem.Statistics.StatisticsData class defines 
 readOps, largeReadOps, and writeOps as int. Also, the 
 org.apache.hadoop.fs.FileSystem.Statistics class has methods like 
 getReadOps(), getLargeReadOps() and getWriteOps() that return int. These int 
 values can overflow if they exceed 2^31-1, showing negative values. It would 
 be nice if these could be changed to long.
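The overflow is easy to demonstrate (illustrative class, not the Hadoop source): past 2^31 - 1 an int counter wraps to a negative number, while a long counter keeps counting correctly.

```java
/**
 * Why int op counters are a problem: Integer.MAX_VALUE + 1 wraps to
 * Integer.MIN_VALUE, so a busy FileSystem would report negative readOps.
 * Widening to long defers overflow far beyond any realistic op count.
 */
public class CounterOverflowDemo {
  public static int incInt(int ops) { return ops + 1; }     // wraps at 2^31 - 1
  public static long incLong(long ops) { return ops + 1; }  // safe in practice
}
```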



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11293) Factor OSType out from Shell

2014-11-10 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11293?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14205845#comment-14205845
 ] 

Yongjun Zhang commented on HADOOP-11293:


Hi [~cmccabe],

Thanks for the good suggestions! Can we use {{CurrentOS}} to make it shorter? 
I will prepend all the flags with {{IS_}}.

All the unrelated whitespace changes were added by Eclipse to separate the 
imports into different sections, which makes the code easier to read. Do you 
think it's OK not to remove them?

Thanks again!





 Factor OSType out from Shell
 

 Key: HADOOP-11293
 URL: https://issues.apache.org/jira/browse/HADOOP-11293
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Reporter: Yongjun Zhang
Assignee: Yongjun Zhang
 Attachments: HADOOP-11293.001.patch


 Currently the code that detects the OS type is located in Shell.java. Code 
 that need to check OS type refers to Shell, even if no other stuff of Shell 
 is needed. 
 I am proposing to refactor OSType out to  its own class, so to make the 
 OSType easier to access and the dependency cleaner.
  



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10895) HTTP KerberosAuthenticator fallback should have a flag to disable it

2014-11-10 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14205888#comment-14205888
 ] 

Yongjun Zhang commented on HADOOP-10895:


Hi [~acmurthy], thanks for your patience and understanding too.

Hi [~tucu00], 
I hope rev 008 addresses your comments; would you please take a look? If you 
think it does not yet, would you please answer my question at 
https://issues.apache.org/jira/browse/HADOOP-10895?focusedCommentId=14203225&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14203225
? Thanks a lot.


 HTTP KerberosAuthenticator fallback should have a flag to disable it
 

 Key: HADOOP-10895
 URL: https://issues.apache.org/jira/browse/HADOOP-10895
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.4.1
Reporter: Alejandro Abdelnur
Assignee: Yongjun Zhang
Priority: Blocker
 Attachments: HADOOP-10895.001.patch, HADOOP-10895.002.patch, 
 HADOOP-10895.003.patch, HADOOP-10895.003v1.patch, HADOOP-10895.003v2.patch, 
 HADOOP-10895.003v2improved.patch, HADOOP-10895.004.patch, 
 HADOOP-10895.005.patch, HADOOP-10895.006.patch, HADOOP-10895.007.patch, 
 HADOOP-10895.008.patch


 Per review feedback in HADOOP-10771, {{KerberosAuthenticator}} and the 
 delegation token version coming in with HADOOP-10771 should have a flag to 
 disable fallback to pseudo, similarly to the one that was introduced in 
 Hadoop RPC client with HADOOP-9698.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11290) Typo on web page http://hadoop.apache.org/docs/r2.3.0/hadoop-project-dist/hadoop-common/NativeLibraries.html

2014-11-10 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11290?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14205963#comment-14205963
 ] 

Akira AJISAKA commented on HADOOP-11290:


Already fixed by HADOOP-10972. You'll see the doc is fixed in the next release 
(2.6.0).
Thanks for the report!

 Typo on web page 
 http://hadoop.apache.org/docs/r2.3.0/hadoop-project-dist/hadoop-common/NativeLibraries.html
 

 Key: HADOOP-11290
 URL: https://issues.apache.org/jira/browse/HADOOP-11290
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.3.0
Reporter: Jason Pyeron
Priority: Minor

 Once you installed the prerequisite packages use the standard hadoop pom.xml 
 file and pass along the native flag to build the native hadoop library:
$ mvn package -Pdist,native -Dskiptests -Dtar
 -Dskiptests
 should be 
 -DskipTests



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-11290) Typo on web page http://hadoop.apache.org/docs/r2.3.0/hadoop-project-dist/hadoop-common/NativeLibraries.html

2014-11-10 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA resolved HADOOP-11290.

Resolution: Duplicate

 Typo on web page 
 http://hadoop.apache.org/docs/r2.3.0/hadoop-project-dist/hadoop-common/NativeLibraries.html
 

 Key: HADOOP-11290
 URL: https://issues.apache.org/jira/browse/HADOOP-11290
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.3.0
Reporter: Jason Pyeron
Priority: Minor

 Once you installed the prerequisite packages use the standard hadoop pom.xml 
 file and pass along the native flag to build the native hadoop library:
$ mvn package -Pdist,native -Dskiptests -Dtar
 -Dskiptests
 should be 
 -DskipTests



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11295) RPC Reader thread can't be shutdowned if RPCCallQueue is full

2014-11-10 Thread Ming Ma (JIRA)
Ming Ma created HADOOP-11295:


 Summary: RPC Reader thread can't be shutdowned if RPCCallQueue is 
full
 Key: HADOOP-11295
 URL: https://issues.apache.org/jira/browse/HADOOP-11295
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Ming Ma


If the RPC server is asked to stop while the RPCCallQueue is full, 
{{reader.join()}} will just wait there. That is because:

1. The reader thread is blocked on {{callQueue.put(call);}}.
2. When the RPC server is asked to stop, it interrupts all handler threads, 
so no thread will drain the callQueue.
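One way to make the reader responsive to shutdown (a hypothetical sketch, not a committed fix) is to replace the unbounded put with a bounded offer loop that re-checks a stop flag between attempts:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;

/**
 * Sketch: instead of blocking forever in callQueue.put(call), the reader
 * waits at most 100 ms per offer attempt and re-checks a running flag,
 * so a stop request gets through even when the queue stays full.
 */
public class ReaderLoopSketch {
  public static boolean enqueue(BlockingQueue<String> callQueue,
                                String call,
                                AtomicBoolean running) throws InterruptedException {
    while (running.get()) {
      if (callQueue.offer(call, 100, TimeUnit.MILLISECONDS)) {
        return true;
      }
      // Queue still full after the timeout: loop and re-check the flag.
    }
    return false; // server is stopping; caller drops the call
  }
}
```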



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)