[jira] [Commented] (HADOOP-8906) paths with multiple globs are unreliable

2012-10-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13473865#comment-13473865
 ] 

Hadoop QA commented on HADOOP-8906:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12548681/HADOOP-8906.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  
org.apache.hadoop.hdfs.server.namenode.ha.TestStandbyCheckpoints

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1607//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1607//console

This message is automatically generated.

 paths with multiple globs are unreliable
 

 Key: HADOOP-8906
 URL: https://issues.apache.org/jira/browse/HADOOP-8906
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HADOOP-8906.patch, HADOOP-8906.patch, HADOOP-8906.patch, 
 HADOOP-8906.patch


 Let's say we have a structure of $date/$user/stuff/file.  Multiple 
 globs are unreliable unless every directory in the structure exists.
 These work:
 date*/user
 date*/user/stuff
 date*/user/stuff/file
 These fail:
 date*/user/*
 date*/user/*/*
 date*/user/stu*
 date*/user/stu*/*
 date*/user/stu*/file
 date*/user/stuff/*
 date*/user/stuff/f*
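
 For reference, a minimal sketch (not from the attached patches; the directory 
 layout and setup are assumptions) of how such patterns are exercised through 
 FileSystem#globStatus:
 {code}
 // Hypothetical repro sketch: the layout is assumed, not from the patch.
 import org.apache.hadoop.conf.Configuration;
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.FileSystem;
 import org.apache.hadoop.fs.Path;

 public class MultiGlobRepro {
   public static void main(String[] args) throws Exception {
     FileSystem fs = FileSystem.get(new Configuration());
     // Reliable: only the first component is a glob.
     FileStatus[] ok = fs.globStatus(new Path("date*/user/stuff/file"));
     // Unreliable before the fix: a second glob deeper in the path.
     FileStatus[] bad = fs.globStatus(new Path("date*/user/stu*/file"));
     System.out.println("ok=" + (ok == null ? "null" : String.valueOf(ok.length))
         + " bad=" + (bad == null ? "null" : String.valueOf(bad.length)));
   }
 }
 {code}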

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8913) hadoop-metrics2.properties should give units in comment for sampling period

2012-10-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13473873#comment-13473873
 ] 

Hudson commented on HADOOP-8913:


Integrated in Hadoop-Mapreduce-trunk-Commit #2867 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2867/])
HADOOP-8913. hadoop-metrics2.properties should give units in comment for 
sampling period. Contributed by Sandy Ryza. (Revision 1396904)

 Result = FAILURE
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1396904
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/conf/hadoop-metrics2.properties
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/conf/hadoop-metrics2.properties


 hadoop-metrics2.properties should give units in comment for sampling period
 ---

 Key: HADOOP-8913
 URL: https://issues.apache.org/jira/browse/HADOOP-8913
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Affects Versions: 2.0.1-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
Priority: Minor
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: HADOOP-8913.patch


 the default hadoop-metrics2.properties contains the lines
 #default sampling period
 *.period=10
 it should be made clear that the units are seconds
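
 One plausible form of the clarified comment (a sketch; the committed wording 
 may differ):
 {noformat}
 # The default sampling period, in seconds.
 *.period=10
 {noformat}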

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8878) uppercase namenode hostname causes hadoop dfs calls with webhdfs filesystem and fsck to fail when security is on

2012-10-11 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8878:


   Resolution: Fixed
Fix Version/s: 2.0.3-alpha
   1.2.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

+1 for the patch. I committed it to trunk, branch-2 and branch-1. Thank you 
Arpit.

 uppercase namenode hostname causes hadoop dfs calls with webhdfs filesystem 
 and fsck to fail when security is on
 

 Key: HADOOP-8878
 URL: https://issues.apache.org/jira/browse/HADOOP-8878
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3, 1.1.0, 1.2.0, 3.0.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Fix For: 1.2.0, 2.0.3-alpha

 Attachments: HADOOP-8878.branch-1.patch, HADOOP-8878.branch-1.patch, 
 HADOOP-8878.branch-1.patch, HADOOP-8878.patch, HADOOP-8878.patch, 
 HADOOP-8878.patch


 This was noticed on a secure cluster where the namenode had an upper case 
 hostname and the following command was issued
 hadoop dfs -ls webhdfs://NN:PORT/PATH
 the above command failed because delegation token retrieval failed.
 Upon looking at the kerberos logs it was determined that we tried to get the 
 ticket for kerberos principal with upper case hostnames and that host did not 
 exist in kerberos. We should convert the hostnames to lower case. Take a look 
 at HADOOP-7988 where the same fix was applied on a different class.
 I have noticed this issue exists on branch-1. Will investigate trunk and 
 branch-2 and update accordingly.
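
 To illustrate the proposed fix, a minimal sketch of the lowercasing idea 
 (class and method names here are assumptions, not the committed patch):
 {code}
 import java.util.Locale;

 public class PrincipalNameSketch {
   // Build a Kerberos service principal from a lowercased hostname so the
   // KDC lookup does not fail for an uppercase DNS name.
   static String servicePrincipal(String service, String hostname) {
     // Locale.US avoids locale-sensitive case-mapping surprises.
     return service + "/" + hostname.toLowerCase(Locale.US);
   }

   public static void main(String[] args) {
     // Prints: host/namenode.example.com
     System.out.println(servicePrincipal("host", "NAMENODE.EXAMPLE.COM"));
   }
 }
 {code}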

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8868) FileUtil#chmod should normalize the path before calling into shell APIs

2012-10-11 Thread Suresh Srinivas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Suresh Srinivas updated HADOOP-8868:


Status: Open  (was: Patch Available)

 FileUtil#chmod should normalize the path before calling into shell APIs
 ---

 Key: HADOOP-8868
 URL: https://issues.apache.org/jira/browse/HADOOP-8868
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HADOOP-8868.branch-1-win.chmod.patch


 We have seen cases where paths passed in from FileUtil#chmod to Shell APIs 
 can contain both forward and backward slashes on Windows.
 This causes problems, since some Windows APIs do not work well with mixed 
 slashes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8878) uppercase namenode hostname causes hadoop dfs calls with webhdfs filesystem and fsck to fail when security is on

2012-10-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13473911#comment-13473911
 ] 

Hudson commented on HADOOP-8878:


Integrated in Hadoop-Mapreduce-trunk-Commit #2868 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Commit/2868/])
HADOOP-8878. Uppercase namenode hostname causes hadoop dfs calls with 
webhdfs filesystem and fsck to fail when security is on. Contributed by Arpit 
Gupta. (Revision 1396922)

 Result = FAILURE
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1396922
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/KerberosAuthenticator.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosUtil.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestKerberosUtil.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 uppercase namenode hostname causes hadoop dfs calls with webhdfs filesystem 
 and fsck to fail when security is on
 

 Key: HADOOP-8878
 URL: https://issues.apache.org/jira/browse/HADOOP-8878
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3, 1.1.0, 1.2.0, 3.0.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Fix For: 1.2.0, 2.0.3-alpha

 Attachments: HADOOP-8878.branch-1.patch, HADOOP-8878.branch-1.patch, 
 HADOOP-8878.branch-1.patch, HADOOP-8878.patch, HADOOP-8878.patch, 
 HADOOP-8878.patch


 This was noticed on a secure cluster where the namenode had an upper case 
 hostname and the following command was issued
 hadoop dfs -ls webhdfs://NN:PORT/PATH
 the above command failed because delegation token retrieval failed.
 Upon looking at the kerberos logs it was determined that we tried to get the 
 ticket for kerberos principal with upper case hostnames and that host did not 
 exist in kerberos. We should convert the hostnames to lower case. Take a look 
 at HADOOP-7988 where the same fix was applied on a different class.
 I have noticed this issue exists on branch-1. Will investigate trunk and 
 branch-2 and update accordingly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-6616) Improve documentation for rack awareness

2012-10-11 Thread Adam Faris (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6616?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Adam Faris updated HADOOP-6616:
---

Attachment: hadoop-6616.patch.3

Sorry for the delay but wanted to rethink the examples to explain how this 
works.  I updated the bash script to show how simple topology scripts could be. 
 I removed the perl example as both the perl and bash script were doing the 
same thing of splitting the IP on dots.  Finally the python script has been 
updated to print the network instead of relying on matching host names in a 
contrived example.

-- Thanks, Adam
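
In the spirit of that comment, a minimal bash topology script (a sketch only; 
the examples in the attached patch may differ):
{noformat}
#!/usr/bin/env bash
# Map each IP the namenode passes in to a rack derived from its first
# three octets, e.g. 10.1.2.3 -> /rack-10-1-2
for ip in "$@"; do
  echo "/rack-${ip%.*}" | tr '.' '-'
done
{noformat}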

 Improve documentation for rack awareness
 

 Key: HADOOP-6616
 URL: https://issues.apache.org/jira/browse/HADOOP-6616
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation
Reporter: Jeff Hammerbacher
  Labels: newbie
 Attachments: hadoop-6616.patch, hadoop-6616.patch.2, 
 hadoop-6616.patch.3


 The current documentation for rack awareness 
 (http://hadoop.apache.org/common/docs/r0.20.0/cluster_setup.html#Hadoop+Rack+Awareness)
  should be augmented to include a sample script.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8868) FileUtil#chmod should normalize the path before calling into shell APIs

2012-10-11 Thread Ivan Mitic (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13473914#comment-13473914
 ] 

Ivan Mitic commented on HADOOP-8868:


Thanks Bikas for the review.

bq. So we are using JAVA API to resolve the path to a normalized form? Ideally 
the FileUtil method could take File arguments instead of strings but we'd like 
to avoid changing the public API.
Right. Having File APIs would be great (as Java polishes this up nicely), 
however, for compat reasons this makes more sense.

bq. In what cases can we get a mix of slashes on the string path?
In one case, a path with only forward slashes was sent to winutils. The path 
was longer than MAX_PATH (260 chars), so it was prepended with \\?\ to tell the 
OS that the path exceeds 260 characters. In this scenario we had both forward 
and backward slashes. Now, it makes more sense to do the slash conversion in 
Java than in winutils, given that Java provides better/tested cross-platform 
support. Make sense?
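
A minimal sketch of that normalization idea (assumed, not the attached patch):
{code}
import java.io.File;

public class SlashNormalizationSketch {
  // java.io.File rewrites separator characters to the platform's form in
  // its constructor, so on Windows a mixed-slash path comes out with
  // backslashes only, before it is ever handed to winutils.
  static String normalize(String path) {
    return new File(path).getPath();
  }

  public static void main(String[] args) {
    // On Windows this prints C:\hadoop\tmp\dir
    System.out.println(normalize("C:/hadoop\\tmp/dir"));
  }
}
{code}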

 FileUtil#chmod should normalize the path before calling into shell APIs
 ---

 Key: HADOOP-8868
 URL: https://issues.apache.org/jira/browse/HADOOP-8868
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1-win
Reporter: Ivan Mitic
Assignee: Ivan Mitic
 Attachments: HADOOP-8868.branch-1-win.chmod.patch


 We have seen cases where paths passed in from FileUtil#chmod to Shell APIs 
 can contain both forward and backward slashes on Windows.
 This causes problems, since some Windows APIs do not work well with mixed 
 slashes.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8913) hadoop-metrics2.properties should give units in comment for sampling period

2012-10-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13473916#comment-13473916
 ] 

Hudson commented on HADOOP-8913:


Integrated in Hadoop-Common-trunk-Commit #2844 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2844/])
HADOOP-8913. hadoop-metrics2.properties should give units in comment for 
sampling period. Contributed by Sandy Ryza. (Revision 1396904)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1396904
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/conf/hadoop-metrics2.properties
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/conf/hadoop-metrics2.properties


 hadoop-metrics2.properties should give units in comment for sampling period
 ---

 Key: HADOOP-8913
 URL: https://issues.apache.org/jira/browse/HADOOP-8913
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Affects Versions: 2.0.1-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
Priority: Minor
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: HADOOP-8913.patch


 the default hadoop-metrics2.properties contains the lines
 #default sampling period
 *.period=10
 it should be made clear that the units are seconds

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8878) uppercase namenode hostname causes hadoop dfs calls with webhdfs filesystem and fsck to fail when security is on

2012-10-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13473915#comment-13473915
 ] 

Hudson commented on HADOOP-8878:


Integrated in Hadoop-Common-trunk-Commit #2844 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/2844/])
HADOOP-8878. Uppercase namenode hostname causes hadoop dfs calls with 
webhdfs filesystem and fsck to fail when security is on. Contributed by Arpit 
Gupta. (Revision 1396922)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1396922
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/KerberosAuthenticator.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosUtil.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestKerberosUtil.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 uppercase namenode hostname causes hadoop dfs calls with webhdfs filesystem 
 and fsck to fail when security is on
 

 Key: HADOOP-8878
 URL: https://issues.apache.org/jira/browse/HADOOP-8878
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3, 1.1.0, 1.2.0, 3.0.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Fix For: 1.2.0, 2.0.3-alpha

 Attachments: HADOOP-8878.branch-1.patch, HADOOP-8878.branch-1.patch, 
 HADOOP-8878.branch-1.patch, HADOOP-8878.patch, HADOOP-8878.patch, 
 HADOOP-8878.patch


 This was noticed on a secure cluster where the namenode had an upper case 
 hostname and the following command was issued
 hadoop dfs -ls webhdfs://NN:PORT/PATH
 the above command failed because delegation token retrieval failed.
 Upon looking at the kerberos logs it was determined that we tried to get the 
 ticket for kerberos principal with upper case hostnames and that host did not 
 exist in kerberos. We should convert the hostnames to lower case. Take a look 
 at HADOOP-7988 where the same fix was applied on a different class.
 I have noticed this issue exists on branch-1. Will investigate trunk and 
 branch-2 and update accordingly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8878) uppercase namenode hostname causes hadoop dfs calls with webhdfs filesystem and fsck to fail when security is on

2012-10-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13473919#comment-13473919
 ] 

Hudson commented on HADOOP-8878:


Integrated in Hadoop-Hdfs-trunk-Commit #2906 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/2906/])
HADOOP-8878. Uppercase namenode hostname causes hadoop dfs calls with 
webhdfs filesystem and fsck to fail when security is on. Contributed by Arpit 
Gupta. (Revision 1396922)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1396922
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/KerberosAuthenticator.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosUtil.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestKerberosUtil.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 uppercase namenode hostname causes hadoop dfs calls with webhdfs filesystem 
 and fsck to fail when security is on
 

 Key: HADOOP-8878
 URL: https://issues.apache.org/jira/browse/HADOOP-8878
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3, 1.1.0, 1.2.0, 3.0.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Fix For: 1.2.0, 2.0.3-alpha

 Attachments: HADOOP-8878.branch-1.patch, HADOOP-8878.branch-1.patch, 
 HADOOP-8878.branch-1.patch, HADOOP-8878.patch, HADOOP-8878.patch, 
 HADOOP-8878.patch


 This was noticed on a secure cluster where the namenode had an upper case 
 hostname and the following command was issued
 hadoop dfs -ls webhdfs://NN:PORT/PATH
 the above command failed because delegation token retrieval failed.
 Upon looking at the kerberos logs it was determined that we tried to get the 
 ticket for kerberos principal with upper case hostnames and that host did not 
 exist in kerberos. We should convert the hostnames to lower case. Take a look 
 at HADOOP-7988 where the same fix was applied on a different class.
 I have noticed this issue exists on branch-1. Will investigate trunk and 
 branch-2 and update accordingly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-7895) HADOOP_LOG_DIR has to be set explicitly when running from the tarball

2012-10-11 Thread Jianbin Wei (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7895?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13473937#comment-13473937
 ] 

Jianbin Wei commented on HADOOP-7895:
-

Is this a duplication of HADOOP-8433 
(https://issues.apache.org/jira/browse/HADOOP-8433)?

 HADOOP_LOG_DIR has to be set explicitly when running from the tarball
 -

 Key: HADOOP-7895
 URL: https://issues.apache.org/jira/browse/HADOOP-7895
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 0.23.0
Reporter: Eli Collins

 When running bin and sbin commands from the tarball, if HADOOP_LOG_DIR is not 
 explicitly set in hadoop-env.sh it doesn't use HADOOP_HOME/logs by default 
 like it used to; instead it picks a wrong dir:
 {noformat}
 localhost: mkdir: cannot create directory `/eli': Permission denied
 localhost: chown: cannot access `/eli/eli': No such file or directory
 {noformat}
 We should have it default to HADOOP_HOME/logs or at least fail with a message 
 if the dir doesn't exist or the env var isn't set.
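
 One way to restore the old default in hadoop-env.sh (a sketch, not the 
 project's agreed fix):
 {noformat}
 # Fall back to $HADOOP_HOME/logs when HADOOP_LOG_DIR is not set explicitly.
 export HADOOP_LOG_DIR=${HADOOP_LOG_DIR:-$HADOOP_HOME/logs}
 {noformat}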

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8874) HADOOP_HOME and -Dhadoop.home (from hadoop wrapper script) are not uniformly handled

2012-10-11 Thread John Gordon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Gordon updated HADOOP-8874:


Attachment: fix_home.patch

This patch adds a consistency layer for HADOOP_HOME lookups and provides 
abstractions to qualify bin paths of hadoop binary components.

 HADOOP_HOME and -Dhadoop.home (from hadoop wrapper script) are not uniformly 
 handled
 

 Key: HADOOP-8874
 URL: https://issues.apache.org/jira/browse/HADOOP-8874
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: scripts, security
Affects Versions: 1-win
 Environment: Called from external process with -D flag vs HADOOP_HOME 
 set.
Reporter: John Gordon
  Labels: security
 Fix For: 1-win

 Attachments: fix_home.patch


 There is a -D flag to set hadoop.home, which is specified in the hadoop 
 wrapper scripts.  This is particularly useful if you want SxS execution of 
 two or more versions of hadoop (e.g. rolling upgrade).  However, it isn't 
 honored at all.  HADOOP_HOME is used in 3-4 places to find non-java hadoop 
 components such as schedulers, scripts, shared libraries, or with the Windows 
 changes -- binaries.
 Ideally, these should all resolve the path in a consistent manner, and 
 callers should have a similar onus applied when trying to resolve an invalid 
 path to their components.  This is particularly relevant to scripts or 
 binaries that may have security impact, as absolute path resolution is 
 generally safer and more stable than relative path resolution.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8874) HADOOP_HOME and -Dhadoop.home (from hadoop wrapper script) are not uniformly handled

2012-10-11 Thread John Gordon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8874?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Gordon updated HADOOP-8874:


Status: Patch Available  (was: Open)

This patch adds consistent error handling and reporting when HADOOP_HOME is not 
set, and refactors references to support the -Dhadoop.home.dir option as well 
as HADOOP_HOME.  Previously, hadoop.home.dir appeared to be an option but was 
not honored.  When the variable was not set, paths were often simply 
concatenated with a null string, which made it more difficult than 
necessary to root-cause config issues.
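
A minimal sketch of the lookup order described above (names assumed; not the 
attached patch):
{code}
public class HadoopHomeSketch {
  // Prefer -Dhadoop.home.dir, fall back to HADOOP_HOME, and fail loudly
  // instead of silently concatenating paths with a null home directory.
  static String getHadoopHome() {
    String home = System.getProperty("hadoop.home.dir");
    if (home == null) {
      home = System.getenv("HADOOP_HOME");
    }
    if (home == null) {
      throw new IllegalStateException(
          "Neither -Dhadoop.home.dir nor HADOOP_HOME is set");
    }
    return home;
  }
}
{code}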

 HADOOP_HOME and -Dhadoop.home (from hadoop wrapper script) are not uniformly 
 handled
 

 Key: HADOOP-8874
 URL: https://issues.apache.org/jira/browse/HADOOP-8874
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: scripts, security
Affects Versions: 1-win
 Environment: Called from external process with -D flag vs HADOOP_HOME 
 set.
Reporter: John Gordon
  Labels: security
 Fix For: 1-win

 Attachments: fix_home.patch


 There is a -D flag to set hadoop.home, which is specified in the hadoop 
 wrapper scripts.  This is particularly useful if you want SxS execution of 
 two or more versions of hadoop (e.g. rolling upgrade).  However, it isn't 
 honored at all.  HADOOP_HOME is used in 3-4 places to find non-java hadoop 
 components such as schedulers, scripts, shared libraries, or with the Windows 
 changes -- binaries.
 Ideally, these should all resolve the path in a consistent manner, and 
 callers should have a similar onus applied when trying to resolve an invalid 
 path to their components.  This is particularly relevant to scripts or 
 binaries that may have security impact, as absolute path resolution is 
 generally safer and more stable than relative path resolution.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8874) HADOOP_HOME and -Dhadoop.home (from hadoop wrapper script) are not uniformly handled

2012-10-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13474048#comment-13474048
 ] 

Hadoop QA commented on HADOOP-8874:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12548728/fix_home.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1609//console

This message is automatically generated.

 HADOOP_HOME and -Dhadoop.home (from hadoop wrapper script) are not uniformly 
 handled
 

 Key: HADOOP-8874
 URL: https://issues.apache.org/jira/browse/HADOOP-8874
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: scripts, security
Affects Versions: 1-win
 Environment: Called from external process with -D flag vs HADOOP_HOME 
 set.
Reporter: John Gordon
  Labels: security
 Fix For: 1-win

 Attachments: fix_home.patch


 There is a -D flag to set hadoop.home, which is specified in the hadoop 
 wrapper scripts.  This is particularly useful if you want SxS execution of 
 two or more versions of hadoop (e.g. rolling upgrade).  However, it isn't 
 honored at all.  HADOOP_HOME is used in 3-4 places to find non-java hadoop 
 components such as schedulers, scripts, shared libraries, or with the Windows 
 changes -- binaries.
 Ideally, these should all resolve the path in a consistent manner, and 
 callers should have a similar onus applied when trying to resolve an invalid 
 path to their components.  This is particularly relevant to scripts or 
 binaries that may have security impact, as absolute path resolution is 
 generally safer and more stable than relative path resolution.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8916) make it possible to build hadoop tarballs without java5+ forrest

2012-10-11 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-8916:
--

 Summary: make it possible to build hadoop tarballs without java5+ 
forrest
 Key: HADOOP-8916
 URL: https://issues.apache.org/jira/browse/HADOOP-8916
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 1.0.3
Reporter: Steve Loughran
Priority: Trivial


Although you can build hadoop binaries without java5 & Forrest, you can't do 
the tarballs as {{tar}} depends on {{packaged}}, which depends on the {{docs}} 
and {{cn-docs}}, which both depend on the forrest/java5 checks. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8916) make it possible to build hadoop tarballs without java5+ forrest

2012-10-11 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13474080#comment-13474080
 ] 

Steve Loughran commented on HADOOP-8916:


the two docs targets are nominally conditional on the {{if=forrest.home}} 
check, but as they have an explicit dependency on the {{forrest.check}} target, 
that guard is moot: the build fails before it gets that far.
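
Schematically (target names follow the comment; the exact wiring in build.xml 
is assumed), an {{if}} guard only skips the target's own body, while its 
{{depends}} list still runs first:
{noformat}
<target name="forrest.check" unless="forrest.home">
  <fail message="forrest.home is not set"/>
</target>

<!-- The if= guard never matters: forrest.check runs, and fails, first. -->
<target name="docs" depends="forrest.check" if="forrest.home">
  <echo message="building docs"/>
</target>
{noformat}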



 make it possible to build hadoop tarballs without java5+ forrest
 

 Key: HADOOP-8916
 URL: https://issues.apache.org/jira/browse/HADOOP-8916
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 1.0.3
Reporter: Steve Loughran
Priority: Trivial
   Original Estimate: 1m
  Remaining Estimate: 1m

 Although you can build hadoop binaries without java5 & Forrest, you can't do 
 the tarballs as {{tar}} depends on {{packaged}}, which depends on the 
 {{docs}} and {{cn-docs}}, which both depend on the forrest/java5 checks. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8916) make it possible to build hadoop tarballs without java5+ forrest

2012-10-11 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-8916:
---

Fix Version/s: 1.1.1
   Status: Patch Available  (was: Open)

 make it possible to build hadoop tarballs without java5+ forrest
 

 Key: HADOOP-8916
 URL: https://issues.apache.org/jira/browse/HADOOP-8916
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 1.0.3
Reporter: Steve Loughran
Priority: Trivial
 Fix For: 1.1.1

 Attachments: HADOOP-8916.patch

   Original Estimate: 1m
  Remaining Estimate: 1m

 Although you can build hadoop binaries without java5 & Forrest, you can't do 
 the tarballs as {{tar}} depends on {{packaged}}, which depends on the 
 {{docs}} and {{cn-docs}}, which both depend on the forrest/java5 checks. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8916) make it possible to build hadoop tarballs without java5+ forrest

2012-10-11 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-8916:
---

Attachment: HADOOP-8916.patch

The patch removes the checks, and also sets docs up to depend on init, which it 
ought to have done already.

 make it possible to build hadoop tarballs without java5+ forrest
 

 Key: HADOOP-8916
 URL: https://issues.apache.org/jira/browse/HADOOP-8916
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 1.0.3
Reporter: Steve Loughran
Priority: Trivial
 Fix For: 1.1.1

 Attachments: HADOOP-8916.patch

   Original Estimate: 1m
  Remaining Estimate: 1m

 Although you can build hadoop binaries without java5 & Forrest, you can't do 
 the tarballs as {{tar}} depends on {{packaged}}, which depends on the 
 {{docs}} and {{cn-docs}}, which both depend on the forrest/java5 checks. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8916) make it possible to build hadoop tarballs without java5+ forrest

2012-10-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13474089#comment-13474089
 ] 

Hadoop QA commented on HADOOP-8916:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12548734/HADOOP-8916.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1610//console

This message is automatically generated.

 make it possible to build hadoop tarballs without java5+ forrest
 

 Key: HADOOP-8916
 URL: https://issues.apache.org/jira/browse/HADOOP-8916
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 1.0.3
Reporter: Steve Loughran
Priority: Trivial
 Fix For: 1.1.1

 Attachments: HADOOP-8916.patch

   Original Estimate: 1m
  Remaining Estimate: 1m

 Although you can build hadoop binaries without java5 & Forrest, you can't do 
 the tarballs as {{tar}} depends on {{packaged}}, which depends on the 
 {{docs}} and {{cn-docs}}, which both depend on the forrest/java5 checks. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8878) uppercase namenode hostname causes hadoop dfs calls with webhdfs filesystem and fsck to fail when security is on

2012-10-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13474118#comment-13474118
 ] 

Hudson commented on HADOOP-8878:


Integrated in Hadoop-Hdfs-trunk #1192 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1192/])
HADOOP-8878. Uppercase namenode hostname causes hadoop dfs calls with 
webhdfs filesystem and fsck to fail when security is on. Contributed by Arpit 
Gupta. (Revision 1396922)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1396922
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/KerberosAuthenticator.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosUtil.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestKerberosUtil.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 uppercase namenode hostname causes hadoop dfs calls with webhdfs filesystem 
 and fsck to fail when security is on
 

 Key: HADOOP-8878
 URL: https://issues.apache.org/jira/browse/HADOOP-8878
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3, 1.1.0, 1.2.0, 3.0.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Fix For: 1.2.0, 2.0.3-alpha

 Attachments: HADOOP-8878.branch-1.patch, HADOOP-8878.branch-1.patch, 
 HADOOP-8878.branch-1.patch, HADOOP-8878.patch, HADOOP-8878.patch, 
 HADOOP-8878.patch


 This was noticed on a secure cluster where the namenode had an upper case 
 hostname and the following command was issued
 hadoop dfs -ls webhdfs://NN:PORT/PATH
 the above command failed because delegation token retrieval failed.
 Upon looking at the kerberos logs it was determined that we tried to get the 
 ticket for kerberos principal with upper case hostnames and that host did not 
 exist in kerberos. We should convert the hostnames to lower case. Take a look 
 at HADOOP-7988 where the same fix was applied on a different class.
 I have noticed this issue exists on branch-1. Will investigate trunk and 
 branch-2 and update accordingly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8913) hadoop-metrics2.properties should give units in comment for sampling period

2012-10-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13474124#comment-13474124
 ] 

Hudson commented on HADOOP-8913:


Integrated in Hadoop-Hdfs-trunk #1192 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1192/])
HADOOP-8913. hadoop-metrics2.properties should give units in comment for 
sampling period. Contributed by Sandy Ryza. (Revision 1396904)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1396904
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/conf/hadoop-metrics2.properties
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/conf/hadoop-metrics2.properties


 hadoop-metrics2.properties should give units in comment for sampling period
 ---

 Key: HADOOP-8913
 URL: https://issues.apache.org/jira/browse/HADOOP-8913
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Affects Versions: 2.0.1-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
Priority: Minor
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: HADOOP-8913.patch


 the default hadoop-metrics2.properties contains the lines
 #default sampling period
 *.period=10
 it should be made clear that the units are seconds

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8878) uppercase namenode hostname causes hadoop dfs calls with webhdfs filesystem and fsck to fail when security is on

2012-10-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13474134#comment-13474134
 ] 

Hudson commented on HADOOP-8878:


Integrated in Hadoop-Mapreduce-trunk #1223 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1223/])
HADOOP-8878. Uppercase namenode hostname causes hadoop dfs calls with 
webhdfs filesystem and fsck to fail when security is on. Contributed by Arpit 
Gupta. (Revision 1396922)

 Result = FAILURE
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1396922
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/client/KerberosAuthenticator.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/KerberosUtil.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestKerberosUtil.java
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt


 uppercase namenode hostname causes hadoop dfs calls with webhdfs filesystem 
 and fsck to fail when security is on
 

 Key: HADOOP-8878
 URL: https://issues.apache.org/jira/browse/HADOOP-8878
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3, 1.1.0, 1.2.0, 3.0.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Fix For: 1.2.0, 2.0.3-alpha

 Attachments: HADOOP-8878.branch-1.patch, HADOOP-8878.branch-1.patch, 
 HADOOP-8878.branch-1.patch, HADOOP-8878.patch, HADOOP-8878.patch, 
 HADOOP-8878.patch


 This was noticed on a secure cluster where the namenode had an upper case 
 hostname and the following command was issued
 hadoop dfs -ls webhdfs://NN:PORT/PATH
 the above command failed because delegation token retrieval failed.
 Upon looking at the kerberos logs it was determined that we tried to get the 
 ticket for kerberos principal with upper case hostnames and that host did not 
 exist in kerberos. We should convert the hostnames to lower case. Take a look 
 at HADOOP-7988 where the same fix was applied on a different class.
 I have noticed this issue exists on branch-1. Will investigate trunk and 
 branch-2 and update accordingly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8913) hadoop-metrics2.properties should give units in comment for sampling period

2012-10-11 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8913?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13474140#comment-13474140
 ] 

Hudson commented on HADOOP-8913:


Integrated in Hadoop-Mapreduce-trunk #1223 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1223/])
HADOOP-8913. hadoop-metrics2.properties should give units in comment for 
sampling period. Contributed by Sandy Ryza. (Revision 1396904)

 Result = FAILURE
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1396904
Files : 
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/conf/hadoop-metrics2.properties
* 
/hadoop/common/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/conf/hadoop-metrics2.properties


 hadoop-metrics2.properties should give units in comment for sampling period
 ---

 Key: HADOOP-8913
 URL: https://issues.apache.org/jira/browse/HADOOP-8913
 Project: Hadoop Common
  Issue Type: Bug
  Components: metrics
Affects Versions: 2.0.1-alpha
Reporter: Sandy Ryza
Assignee: Sandy Ryza
Priority: Minor
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: HADOOP-8913.patch


 the default hadoop-metrics2.properties contains the lines
 #default sampling period
 *.period=10
 it should be made clear that the units are seconds

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8911) CRLF characters in source and text files

2012-10-11 Thread Suresh Srinivas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8911?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13474180#comment-13474180
 ] 

Suresh Srinivas commented on HADOOP-8911:
-

Raja, for some reason when Jenkins is trying to compile findbugs it fails with:
{noformat}
==
==
Determining number of patched Findbugs warnings.
==
==


dirname: invalid option -- '^M'
Try `dirname --help' for more information.
{noformat}

see the report here - 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1608/consoleFull
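
The '^M' is a carriage return leaking into the shell. A hedged one-liner to 
spot such files in a checkout (an aside, not part of the build):
{noformat}
# List tracked, non-binary files containing CRLF line endings.
git grep -Il $'\r'
{noformat}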

 CRLF characters in source and text files
 

 Key: HADOOP-8911
 URL: https://issues.apache.org/jira/browse/HADOOP-8911
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 3.0.0, 1-win
Reporter: Raja Aluri
 Attachments: HADOOP-8911.branch-1-win.patch, 
 HADOOP-8911.branch-2.patch, HADOOP-8911.trunk.patch, HADOOP-8911.trunk.patch


 Source code in hadoop-common repo has a bunch of files that have CRLF endings.
 With more development happening on windows there is a higher chance of more 
 CRLF files getting into the source tree.
 I would like to avoid that by creating a .gitattributes file which prevents 
 sources from having CRLF entries in text files.
 But before adding the .gitattributes file we need to normalize the existing 
 tree, so that when people sync after the .gitattributes change they won't 
 end up with a bunch of modified files in their workspace.
 I am adding a couple of links here to give more of a primer on what exactly 
 the issue is and how we are trying to fix it.
 # http://git-scm.com/docs/gitattributes#_checking_out_and_checking_in
 # 
 http://stackoverflow.com/questions/170961/whats-the-best-crlf-handling-strategy-with-git
 I will submit a separate bug and patch for .gitattributes
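
 For illustration, a minimal .gitattributes along those lines (a sketch; the 
 actual file is left to the follow-up JIRA):
 {noformat}
 # Normalize line endings for text files on check-in.
 * text=auto
 # Shell scripts must stay LF; Windows command files stay CRLF.
 *.sh  text eol=lf
 *.cmd text eol=crlf
 {noformat}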

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8906) paths with multiple globs are unreliable

2012-10-11 Thread Daryn Sharp (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Daryn Sharp updated HADOOP-8906:


Attachment: HADOOP-8906.patch

Address a corner case where a custom filter can return the wrong result.  If 
the glob has no pattern, do just one stat instead of one for each component.

 paths with multiple globs are unreliable
 

 Key: HADOOP-8906
 URL: https://issues.apache.org/jira/browse/HADOOP-8906
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HADOOP-8906.patch, HADOOP-8906.patch, HADOOP-8906.patch, 
 HADOOP-8906.patch, HADOOP-8906.patch


 Let's say we have a structure of $date/$user/stuff/file.  Multiple 
 globs are unreliable unless every directory in the structure exists.
 These work:
 date*/user
 date*/user/stuff
 date*/user/stuff/file
 These fail:
 date*/user/*
 date*/user/*/*
 date*/user/stu*
 date*/user/stu*/*
 date*/user/stu*/file
 date*/user/stuff/*
 date*/user/stuff/f*

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8906) paths with multiple globs are unreliable

2012-10-11 Thread Robert Parker (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8906?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Parker updated HADOOP-8906:
--

Attachment: HADOOP-8906-branch_0.23.patch

patch for branch 0.23

 paths with multiple globs are unreliable
 

 Key: HADOOP-8906
 URL: https://issues.apache.org/jira/browse/HADOOP-8906
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HADOOP-8906-branch_0.23.patch, HADOOP-8906.patch, 
 HADOOP-8906.patch, HADOOP-8906.patch, HADOOP-8906.patch, HADOOP-8906.patch


 Let's say we have a structure of $date/$user/stuff/file.  Multiple 
 globs are unreliable unless every directory in the structure exists.
 These work:
 date*/user
 date*/user/stuff
 date*/user/stuff/file
 These fail:
 date*/user/*
 date*/user/*/*
 date*/user/stu*
 date*/user/stu*/*
 date*/user/stu*/file
 date*/user/stuff/*
 date*/user/stuff/f*

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8906) paths with multiple globs are unreliable

2012-10-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13474218#comment-13474218
 ] 

Hadoop QA commented on HADOOP-8906:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12548747/HADOOP-8906-branch_0.23.patch
  against trunk revision .

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1612//console

This message is automatically generated.

 paths with multiple globs are unreliable
 

 Key: HADOOP-8906
 URL: https://issues.apache.org/jira/browse/HADOOP-8906
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HADOOP-8906-branch_0.23.patch, HADOOP-8906.patch, 
 HADOOP-8906.patch, HADOOP-8906.patch, HADOOP-8906.patch, HADOOP-8906.patch


 Let's say we have a structure of $date/$user/stuff/file.  Multiple 
 globs are unreliable unless every directory in the structure exists.
 These work:
 date*/user
 date*/user/stuff
 date*/user/stuff/file
 These fail:
 date*/user/*
 date*/user/*/*
 date*/user/stu*
 date*/user/stu*/*
 date*/user/stu*/file
 date*/user/stuff/*
 date*/user/stuff/f*

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8882) uppercase namenode host name causes fsck to fail when useKsslAuth is on

2012-10-11 Thread Arpit Gupta (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Gupta updated HADOOP-8882:


Attachment: HADOOP-8882.branch-1.patch

updated the patch to use the method KerberosUtil.getServicePrincipal.

That method uses Locale.US in toLowerCase

 uppercase namenode host name causes fsck to fail when useKsslAuth is on
 ---

 Key: HADOOP-8882
 URL: https://issues.apache.org/jira/browse/HADOOP-8882
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3, 1.1.0, 1.2.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-8882.branch-1.patch, HADOOP-8882.branch-1.patch


 {code}
  public static void fetchServiceTicket(URL remoteHost) throws IOException {
 if(!UserGroupInformation.isSecurityEnabled())
   return;
 
 String serviceName = "host/" + remoteHost.getHost();
 {code}
 the hostname should be converted to lower case. Saw this in branch 1, will 
 look at trunk and update the bug accordingly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Created] (HADOOP-8917) add LOCALE.US to toLowerCase in SecurityUtil.replacePattern

2012-10-11 Thread Arpit Gupta (JIRA)
Arpit Gupta created HADOOP-8917:
---

 Summary: add LOCALE.US to toLowerCase in 
SecurityUtil.replacePattern
 Key: HADOOP-8917
 URL: https://issues.apache.org/jira/browse/HADOOP-8917
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.2.0, 3.0.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta


see 
https://issues.apache.org/jira/browse/HADOOP-8878?focusedCommentId=13472245&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13472245
 for more details

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8906) paths with multiple globs are unreliable

2012-10-11 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13474251#comment-13474251
 ] 

Jason Lowe commented on HADOOP-8906:


The corner case is now handled, but the new tests added don't test for it.  
There should be a test for a non-globbed path for an existing file with the 
false filter, but all the false-filter tests check for either a globbed path or 
non-existent files.

I tried adding a test locally with the false filter for "/" and noticed that it 
didn't return null.  Instead it returned "/" because the filter isn't applied 
in the special cases of "/" and "", which seems wrong.  It turns out that the 
existing code also had this bug, so I suppose it's at least consistent with the 
previous version's behavior.


 paths with multiple globs are unreliable
 

 Key: HADOOP-8906
 URL: https://issues.apache.org/jira/browse/HADOOP-8906
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HADOOP-8906-branch_0.23.patch, HADOOP-8906.patch, 
 HADOOP-8906.patch, HADOOP-8906.patch, HADOOP-8906.patch, HADOOP-8906.patch


 Let's say we have a structure of $date/$user/stuff/file.  Multiple 
 globs are unreliable unless every directory in the structure exists.
 These work:
 date*/user
 date*/user/stuff
 date*/user/stuff/file
 These fail:
 date*/user/*
 date*/user/*/*
 date*/user/stu*
 date*/user/stu*/*
 date*/user/stu*/file
 date*/user/stuff/*
 date*/user/stuff/f*

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8906) paths with multiple globs are unreliable

2012-10-11 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13474256#comment-13474256
 ] 

Daryn Sharp commented on HADOOP-8906:
-

Yes, I had that case but somehow accidentally removed it in the final patch.

The case of a non-glob path with a user-supplied filter is an interesting one.  
"null" means the path isn't a glob AND doesn't exist.  When an existing 
non-glob path is removed by the filter, then arguably it should return an 
empty array, since it's not that the path doesn't exist but that the filter had 
no matches.  In essence, perhaps a user filter means the query is always a 
glob?  I can see it going either way.

 paths with multiple globs are unreliable
 

 Key: HADOOP-8906
 URL: https://issues.apache.org/jira/browse/HADOOP-8906
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HADOOP-8906-branch_0.23.patch, HADOOP-8906.patch, 
 HADOOP-8906.patch, HADOOP-8906.patch, HADOOP-8906.patch, HADOOP-8906.patch


 Let's say we have a structure of $date/$user/stuff/file.  Multiple 
 globs are unreliable unless every directory in the structure exists.
 These work:
 date*/user
 date*/user/stuff
 date*/user/stuff/file
 These fail:
 date*/user/*
 date*/user/*/*
 date*/user/stu*
 date*/user/stu*/*
 date*/user/stu*/file
 date*/user/stuff/*
 date*/user/stuff/f*

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8882) uppercase namenode host name causes fsck to fail when useKsslAuth is on

2012-10-11 Thread Arpit Gupta (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13474259#comment-13474259
 ] 

Arpit Gupta commented on HADOOP-8882:
-

Here is the test-patch output:

{code}
[exec] 
 [exec] -1 overall.  
 [exec] 
 [exec] +1 @author.  The patch does not contain any @author tags.
 [exec] 
 [exec] -1 tests included.  The patch doesn't appear to include any new 
or modified tests.
 [exec] Please justify why no tests are needed for 
this patch.
 [exec] 
 [exec] +1 javadoc.  The javadoc tool did not generate any warning 
messages.
 [exec] 
 [exec] +1 javac.  The applied patch does not increase the total number 
of javac compiler warnings.
 [exec] 
 [exec] -1 findbugs.  The patch appears to introduce 9 new Findbugs 
(version 1.3.9) warnings.
 [exec] 
 [exec] 
 [exec] 
 [exec] 
 [exec] 
==
 [exec] 
==
 [exec] Finished build.
 [exec] 
==
 [exec] 
==
{code}

Findbugs warnings are not related to this patch.

No tests added, as TestSecurityUtil has appropriate coverage.

 uppercase namenode host name causes fsck to fail when useKsslAuth is on
 ---

 Key: HADOOP-8882
 URL: https://issues.apache.org/jira/browse/HADOOP-8882
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.3, 1.1.0, 1.2.0
Reporter: Arpit Gupta
Assignee: Arpit Gupta
 Attachments: HADOOP-8882.branch-1.patch, HADOOP-8882.branch-1.patch


 {code}
  public static void fetchServiceTicket(URL remoteHost) throws IOException {
 if(!UserGroupInformation.isSecurityEnabled())
   return;
 
 String serviceName = "host/" + remoteHost.getHost();
 {code}
 the hostname should be converted to lower case. Saw this in branch 1, will 
 look at trunk and update the bug accordingly.
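
 A minimal sketch of the suggested fix (assumed shape, not the committed patch):
 {code}
 String serviceName = "host/" + remoteHost.getHost().toLowerCase();
 {code}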

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8906) paths with multiple globs are unreliable

2012-10-11 Thread Jason Lowe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13474267#comment-13474267
 ] 

Jason Lowe commented on HADOOP-8906:


bq. In essence, perhaps a user filter means the query is always a glob? I can 
see it going either way.

Yes, I thought about that as well.  Maybe it would be more consistent to return 
empty instead of null in that case, but I was erring on the side of caution to 
maintain compatibility with the previous version's behavior.  It all comes down 
to what a result of null really means.  If it's being used to check for globs 
in the path then arguably we should continue to return null because someone 
could be using/abusing globStatus(path, falseFilter) to check for globs in a 
path even if the path exists in the filesystem.
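
For illustration, the use/abuse pattern mentioned above might look like the following sketch (it relies on the current behavior of returning null for a filtered-out non-glob path):

{code}
// A filter that rejects everything.
PathFilter falseFilter = new PathFilter() {
  public boolean accept(Path p) { return false; }
};
// Under the current semantics, null can only mean the path had no glob,
// so a non-null (even if empty) result implies the path contained a glob.
boolean hasGlob = (fs.globStatus(somePath, falseFilter) != null);
{code}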

 paths with multiple globs are unreliable
 

 Key: HADOOP-8906
 URL: https://issues.apache.org/jira/browse/HADOOP-8906
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HADOOP-8906-branch_0.23.patch, HADOOP-8906.patch, 
 HADOOP-8906.patch, HADOOP-8906.patch, HADOOP-8906.patch, HADOOP-8906.patch


 Let's say we have a structure of $date/$user/stuff/file.  Multiple 
 globs are unreliable unless every directory in the structure exists.
 These work:
 date*/user
 date*/user/stuff
 date*/user/stuff/file
 These fail:
 date*/user/*
 date*/user/*/*
 date*/user/stu*
 date*/user/stu*/*
 date*/user/stu*/file
 date*/user/stuff/*
 date*/user/stuff/f*

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8916) make it possible to build hadoop tarballs without java5+ forrest

2012-10-11 Thread Eli Collins (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8916?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13474290#comment-13474290
 ] 

Eli Collins commented on HADOOP-8916:
-

+1

 make it possible to build hadoop tarballs without java5+ forrest
 

 Key: HADOOP-8916
 URL: https://issues.apache.org/jira/browse/HADOOP-8916
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 1.0.3
Reporter: Steve Loughran
Priority: Trivial
 Fix For: 1.1.1

 Attachments: HADOOP-8916.patch

   Original Estimate: 1m
  Remaining Estimate: 1m

 Although you can build hadoop binaries without java5 & Forrest, you can't do 
 the tarballs as {{tar}} depends on {{packaged}}, which depends on the 
 {{docs}} and {{cn-docs}}, which both depend on the forrest/java5 checks. 

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Commented] (HADOOP-8906) paths with multiple globs are unreliable

2012-10-11 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8906?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13474297#comment-13474297
 ] 

Hadoop QA commented on HADOOP-8906:
---

{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12548745/HADOOP-8906.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  The javadoc tool did not generate any 
warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 1.3.9) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1611//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/1611//console

This message is automatically generated.

 paths with multiple globs are unreliable
 

 Key: HADOOP-8906
 URL: https://issues.apache.org/jira/browse/HADOOP-8906
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Attachments: HADOOP-8906-branch_0.23.patch, HADOOP-8906.patch, 
 HADOOP-8906.patch, HADOOP-8906.patch, HADOOP-8906.patch, HADOOP-8906.patch


 Let's say we have a structure of $date/$user/stuff/file.  Multiple 
 globs are unreliable unless every directory in the structure exists.
 These work:
 date*/user
 date*/user/stuff
 date*/user/stuff/file
 These fail:
 date*/user/*
 date*/user/*/*
 date*/user/stu*
 date*/user/stu*/*
 date*/user/stu*/file
 date*/user/stuff/*
 date*/user/stuff/f*

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8738) junit JAR is showing up in the distro

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8738?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-8738:
--

Fix Version/s: (was: 2.0.2-alpha)
   2.0.3-alpha

 junit JAR is showing up in the distro
 -

 Key: HADOOP-8738
 URL: https://issues.apache.org/jira/browse/HADOOP-8738
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.2-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Critical
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-8738.patch


 It seems that with the move of the YARN module to trunk/ level, the test scope on 
 junit got lost. This causes the junit JAR to show up in the TAR.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8643) hadoop-client should exclude hadoop-annotations from hadoop-common dependency

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8643?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-8643:
--

Fix Version/s: (was: 2.0.2-alpha)
   2.0.3-alpha

 hadoop-client should exclude hadoop-annotations from hadoop-common dependency
 -

 Key: HADOOP-8643
 URL: https://issues.apache.org/jira/browse/HADOOP-8643
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.2-alpha
Reporter: Alejandro Abdelnur
Priority: Minor
 Fix For: 2.0.3-alpha

 Attachments: hadoop-8643.txt


 When reviewing HADOOP-8370 I missed that changing the scope to compile for 
 hadoop-annotations in hadoop-common would make hadoop-annotations 
 bubble up in hadoop-client. Because of this we need to exclude it explicitly.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8500) Javadoc jars contain entire target directory

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8500?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-8500:
--

Fix Version/s: (was: 2.0.2-alpha)
   2.0.3-alpha

 Javadoc jars contain entire target directory
 

 Key: HADOOP-8500
 URL: https://issues.apache.org/jira/browse/HADOOP-8500
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.0-alpha
 Environment: N/A
Reporter: EJ Ciramella
Priority: Minor
 Fix For: 2.0.3-alpha

 Attachments: HADOOP-8500.patch, site-redo.tar

   Original Estimate: 24h
  Remaining Estimate: 24h

 The javadoc jars contain the contents of the target directory - which 
 includes classes and all sorts of binary files that it shouldn't.
 Sometimes the resulting javadoc jar is 10X bigger than it should be.
 The fix is to reconfigure maven to use api as its destDir for javadoc 
 generation.
 I have a patch/diff incoming.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8631) The description of net.topology.table.file.name in core-default.xml is misleading

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8631?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-8631:
--

Fix Version/s: (was: 2.0.2-alpha)
   2.0.3-alpha

 The description of net.topology.table.file.name in core-default.xml is 
 misleading
 -

 Key: HADOOP-8631
 URL: https://issues.apache.org/jira/browse/HADOOP-8631
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.0.0-alpha, 2.0.1-alpha, 2.0.2-alpha
Reporter: Han Xiao
Priority: Minor
 Fix For: 2.0.3-alpha

 Attachments: core-default.xml.patch


 The net.topology.table.file.name is used when 
 net.topology.node.switch.mapping.impl property is set to 
 org.apache.hadoop.net.TableMapping.
 However, the description in core-default.xml refers to the 
 net.topology.script.file.name property being set to 
 org.apache.hadoop.net.TableMapping.
 This could mislead users into a wrong configuration.
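
 The correct pairing, per the description above, is sketched below (the file path is illustrative):
 {code:xml}
 <!-- TableMapping is selected via the mapping impl property; the table
      file is what net.topology.table.file.name points at -->
 <property>
   <name>net.topology.node.switch.mapping.impl</name>
   <value>org.apache.hadoop.net.TableMapping</value>
 </property>
 <property>
   <name>net.topology.table.file.name</name>
   <value>/etc/hadoop/topology.table</value>
 </property>
 {code}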

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8345) HttpServer adds SPNEGO filter mapping but does not register the SPNEGO filter

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8345?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-8345:
--

Fix Version/s: (was: 2.0.2-alpha)
   2.0.3-alpha

 HttpServer adds SPNEGO filter mapping but does not register the SPNEGO filter
 -

 Key: HADOOP-8345
 URL: https://issues.apache.org/jira/browse/HADOOP-8345
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0-alpha
Reporter: Alejandro Abdelnur
Priority: Critical
 Fix For: 2.0.3-alpha


 It seems the mapping was added to fulfill HDFS requirements, where the SPNEGO 
 filter is registered.
 The registration of the SPNEGO filter should instead be done at the common level, 
 so it is available to all components using HttpServer when security is ON.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8369) Failing tests in branch-2

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8369?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-8369:
--

Fix Version/s: (was: 2.0.2-alpha)
   2.0.3-alpha

 Failing tests in branch-2
 -

 Key: HADOOP-8369
 URL: https://issues.apache.org/jira/browse/HADOOP-8369
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Arun C Murthy
 Fix For: 2.0.3-alpha


 Running org.apache.hadoop.io.compress.TestCodec
 Tests run: 20, Failures: 1, Errors: 0, Skipped: 0, Time elapsed: 27.789 sec 
  FAILURE!
 --
 Running org.apache.hadoop.fs.viewfs.TestFSMainOperationsLocalFileSystem
 Tests run: 98, Failures: 0, Errors: 98, Skipped: 0, Time elapsed: 1.633 sec 
  FAILURE!
 --
 Running org.apache.hadoop.fs.viewfs.TestViewFsTrash
 Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 0.658 sec  
 FAILURE!
 
 TestCodec failed since I didn't pass -Pnative; the test could be improved to 
 ensure snappy tests are skipped if native hadoop isn't present.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Updated] (HADOOP-8160) HardLink.getLinkCount() is getting stuck in eclipse ( Cygwin) for long file names, due to MS-Dos style Path.

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8160?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy updated HADOOP-8160:
--

Fix Version/s: (was: 2.0.2-alpha)
   2.0.3-alpha

 HardLink.getLinkCount() is getting stuck in eclipse ( Cygwin) for long file 
 names, due to MS-Dos style Path.
 

 Key: HADOOP-8160
 URL: https://issues.apache.org/jira/browse/HADOOP-8160
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 0.23.1, 0.24.0
 Environment: Cygwin
Reporter: Vinay
Assignee: Vinay
Priority: Minor
 Fix For: 3.0.0, 2.0.3-alpha

 Attachments: HADOOP-8160.patch

   Original Estimate: 2m
  Remaining Estimate: 2m

 HardLink.getLinkCount() is getting stuck in cygwin for long file names, due 
 to MS-DOS style path.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8403) bump up POMs version to 2.0.1-SNAPSHOT

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8403.
-


 bump up POMs version to 2.0.1-SNAPSHOT
 --

 Key: HADOOP-8403
 URL: https://issues.apache.org/jira/browse/HADOOP-8403
 Project: Hadoop Common
  Issue Type: Task
  Components: build
Affects Versions: 2.0.0-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.0.2-alpha

 Attachments: HADOOP-8403.patch, HADOOP-8403.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8748) Move dfsclient retry to a util class

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8748.
-


 Move dfsclient retry to a util class
 

 Key: HADOOP-8748
 URL: https://issues.apache.org/jira/browse/HADOOP-8748
 Project: Hadoop Common
  Issue Type: Improvement
  Components: io
Reporter: Arun C Murthy
Assignee: Arun C Murthy
Priority: Minor
 Fix For: 1.1.0, 2.0.2-alpha

 Attachments: HADOOP-8748_branch1.patch, HADOOP-8748_branch1.patch, 
 HADOOP-8748.patch, HADOOP-8748.patch


 HDFS-3504 introduced mechanisms to retry RPCs. I want to move that to common 
 to allow MAPREDUCE-4603 to share it too. Should be a trivial patch.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8725) MR is broken when security is off

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8725.
-


 MR is broken when security is off
 -

 Key: HADOOP-8725
 URL: https://issues.apache.org/jira/browse/HADOOP-8725
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 0.23.3, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Blocker
 Fix For: 0.23.3, 2.0.2-alpha

 Attachments: HADOOP-8725.patch


 HADOOP-8225 broke MR when security is off.  MR was changed to stop re-reading 
 the credentials that UGI had already read, and to stop putting those tokens 
 back into the UGI where they already were.  UGI only reads a credentials file 
 when security is enabled, but MR uses tokens (ie. job token) even when 
 security is disabled...

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8547) Package hadoop-pipes examples/bin directory (again)

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8547?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8547.
-


 Package hadoop-pipes examples/bin directory (again)
 ---

 Key: HADOOP-8547
 URL: https://issues.apache.org/jira/browse/HADOOP-8547
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.0.2-alpha

 Attachments: HDFS-3589.001.patch


 It looks like since MAPREDUCE-4267, we're no longer exporting the hadoop-pipes 
 examples/bin directory to hadoop-dist as part of a mvn package build.  This 
 seems unintentional, so we should export those binaries again.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8484) Prevent Configuration getter methods that are passed a default value from throwing RuntimeException

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8484.
-


 Prevent Configuration getter methods that are passed a default value from 
 throwing RuntimeException
 ---

 Key: HADOOP-8484
 URL: https://issues.apache.org/jira/browse/HADOOP-8484
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.0.0-alpha
Reporter: Ahmed Radwan
 Fix For: 2.0.2-alpha


 Configuration getter methods that are passed default values can throw 
 RuntimeExceptions if the value provided is invalid (e.g. 
 NumberFormatException).
 In many cases such an exception results in more serious consequences (failure to 
 start a service, see for example NodeManager DeletionService). This can be 
 avoided by returning the default value and just printing a warning message.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8611) Allow fall-back to the shell-based implementation when JNI-based users-group mapping fails

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8611?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8611.
-


 Allow fall-back to the shell-based implementation when JNI-based users-group 
 mapping fails
 --

 Key: HADOOP-8611
 URL: https://issues.apache.org/jira/browse/HADOOP-8611
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.0.3, 0.23.0, 2.0.0-alpha
Reporter: Kihwal Lee
Assignee: Robert Parker
 Fix For: 0.23.3, 2.0.2-alpha

 Attachments: HADOOP-8611-branch1.patch, HADOOP-8611-branch1.patch, 
 HADOOP-8611.patch, HADOOP-8611.patch, HADOOP-8611.patch


 When the JNI-based users-group mapping is enabled, the process/command will 
 fail if the native library, libhadoop.so, cannot be found. This mostly 
 happens on the client side, where users may use hadoop programmatically. Instead of 
 failing, falling back to the shell-based implementation is desirable. 
 Depending on how the cluster is configured, use of the native netgroup mapping 
 cannot be substituted by the shell-based default. For this reason, this 
 behavior must be configurable, with the default being disabled.
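
 A sketch of how such a default-off switch might be expressed (the property name below is hypothetical, for illustration only):
 {code:xml}
 <property>
   <!-- hypothetical property name; the point is an explicit, default-off switch -->
   <name>hadoop.security.group.mapping.jni.fallback-to-shell</name>
   <value>false</value>
 </property>
 {code}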

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8458) Add management hook to AuthenticationHandler to enable delegation token operations support

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8458.
-


 Add management hook to AuthenticationHandler to enable delegation token 
 operations support
 --

 Key: HADOOP-8458
 URL: https://issues.apache.org/jira/browse/HADOOP-8458
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Affects Versions: 2.0.0-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.0.2-alpha

 Attachments: HADOOP-8458.patch, HADOOP-8458.patch


 Currently hadoop-auth AuthenticationHandler only authenticates a request.
 While it can easily be extended to authenticate delegation tokens, it cannot 
 handle the delegation token get/renew/cancel operations.
 The motivation of this new feature is that the above delegation token 
 operations should be handled by a security component (hadoop-auth) instead of 
 a functional component (httpfs implementation). Ideally we should have a 
 complete separation of concerns between delegation token management and 
 FileSystem/MapReduce/YARN API, but we don't. This change is a step in that 
 direction for HTTP-based services (like HttpFS).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8550) hadoop fs -touchz automatically created parent directories

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8550?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8550.
-


 hadoop fs -touchz automatically created parent directories
 --

 Key: HADOOP-8550
 URL: https://issues.apache.org/jira/browse/HADOOP-8550
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.3, 2.0.0-alpha, 3.0.0
Reporter: Robert Joseph Evans
Assignee: John George
 Fix For: 0.23.3, 2.0.2-alpha

 Attachments: HADOOP-8550.patch, HADOOP-8550.patch, HADOOP-8550.patch, 
 HADOOP-8550.patch, HADOOP-8550.patch


 Recently many of the fsShell commands were updated to be more POSIX 
 compliant.  touchz appears to have been missed, or has regressed.  If it has 
 regressed then the target version should be 0.23.3.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8368) Use CMake rather than autotools to build native code

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8368.
-


 Use CMake rather than autotools to build native code
 

 Key: HADOOP-8368
 URL: https://issues.apache.org/jira/browse/HADOOP-8368
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.0.2-alpha

 Attachments: HADOOP-8368.001.patch, HADOOP-8368.005.patch, 
 HADOOP-8368.006.patch, HADOOP-8368.007.patch, HADOOP-8368.008.patch, 
 HADOOP-8368.009.patch, HADOOP-8368.010.patch, HADOOP-8368.012.half.patch, 
 HADOOP-8368.012.patch, HADOOP-8368.012.rm.patch, 
 HADOOP-8368.014.trimmed.patch, HADOOP-8368.015.trimmed.patch, 
 HADOOP-8368.016.trimmed.patch, HADOOP-8368.018.trimmed.patch, 
 HADOOP-8368.020.rm.patch, HADOOP-8368.020.trimmed.patch, 
 HADOOP-8368.021.trimmed.patch, HADOOP-8368.023.trimmed.patch, 
 HADOOP-8368.024.trimmed.patch, HADOOP-8368.025.trimmed.patch, 
 HADOOP-8368.026.rm.patch, HADOOP-8368.026.trimmed.patch, 
 HADOOP-8368.028.rm.patch, HADOOP-8368.028.trimmed.patch, 
 HADOOP-8368.029.patch, HADOOP-8368.030.patch, HADOOP-8368.030.patch, 
 HADOOP-8368.030.rm.patch, HADOOP-8368.030.trimmed.patch, 
 HADOOP-8368-b2.001.patch, HADOOP-8368-b2.001.rm.patch, 
 HADOOP-8368-b2.001.trimmed.patch, HADOOP-8368-b2.002.rm.patch, 
 HADOOP-8368-b2.002.trimmed.patch, HADOOP-8368-b2.003.rm.patch, 
 HADOOP-8368-b2.003.trimmed.patch


 It would be good to use cmake rather than autotools to build the native 
 (C/C++) code in Hadoop.
 Rationale:
 1. automake depends on shell scripts, which often have problems running on 
 different operating systems.  It would be extremely difficult, and perhaps 
 impossible, to use autotools under Windows.  Even if it were possible, it 
 might require horrible workarounds like installing cygwin.  Even on Linux 
 variants like Ubuntu 12.04, there are major build issues because /bin/sh is 
 the Dash shell, rather than the Bash shell as it is in other Linux versions.  
 It is currently impossible to build the native code under Ubuntu 12.04 
 because of this problem.
 CMake has robust cross-platform support, including Windows.  It does not use 
 shell scripts.
 2. automake error messages are very confusing.  For example, "autoreconf: 
 cannot empty /tmp/ar0.4849: Is a directory" or "Can't locate object method 
 "path" via package "Autom4te..." are common error messages.  In order to even 
 start debugging automake problems you need to learn shell, m4, sed, and a 
 bunch of other things.  With CMake, all you have to learn is the syntax of 
 CMakeLists.txt, which is simple.
 CMake can do all the stuff autotools can, such as making sure that required 
 libraries are installed.  There is a Maven plugin for CMake as well.
 3. Different versions of autotools can have very different behaviors.  For 
 example, the version installed under openSUSE defaults to putting libraries 
 in /usr/local/lib64, whereas the version shipped with Ubuntu 11.04 defaults 
 to installing the same libraries under /usr/local/lib.  (This is why the FUSE 
 build is currently broken when using OpenSUSE.)  This is another source of 
 build failures and complexity.  If things go wrong, you will often get an 
 error message which is incomprehensible to normal humans (see point #2).
 CMake allows you to specify the minimum_required_version of CMake that a 
 particular CMakeLists.txt will accept.  In addition, CMake maintains strict 
 backwards compatibility between different versions.  This prevents build bugs 
 due to version skew.
 4. autoconf, automake, and libtool are large and rather slow.  This adds to 
 build time.
 For all these reasons, I think we should switch to CMake for compiling native 
 (C/C++) code in Hadoop.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8438) hadoop-validate-setup.sh refers to examples jar file which doesn't exist

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8438?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8438.
-


 hadoop-validate-setup.sh refers to examples jar file which doesn't exist
 

 Key: HADOOP-8438
 URL: https://issues.apache.org/jira/browse/HADOOP-8438
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha, 3.0.0
Reporter: Devaraj K
Assignee: Devaraj K
 Fix For: 3.0.0, 2.0.2-alpha

 Attachments: HADOOP-8438.patch


 hadoop-validate-setup.sh is trying to find the file with the name 
 hadoop-examples-\*.jar and it is failing to find because the examples jar is 
 renamed to hadoop-mapreduce-examples-\*.jar.
 {code:xml}
 linux-rj72:/home/hadoop/hadoop-3.0.0-SNAPSHOT/sbin # 
 ./hadoop-validate-setup.sh
 find: `/usr/share/hadoop': No such file or directory
 Did not find hadoop-examples-*.jar under '/home/hadoop-3.0.0-SNAPSHOT or 
 '/usr/share/hadoop'
 linux-rj72:/home/hadoop-3.0.0-SNAPSHOT/sbin #
 {code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8692) TestLocalDirAllocator fails intermittently with JDK7

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8692?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8692.
-


 TestLocalDirAllocator fails intermittently with JDK7
 

 Key: HADOOP-8692
 URL: https://issues.apache.org/jira/browse/HADOOP-8692
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
Assignee: Trevor Robinson
  Labels: java7
 Fix For: 0.23.3, 2.0.2-alpha

 Attachments: HADOOP-8692.patch


 Failed tests:   test0[0](org.apache.hadoop.fs.TestLocalDirAllocator): 
 Checking for build/test/temp/RELATIVE1 in 
 build/test/temp/RELATIVE0/block2860496281880890121.tmp - FAILED!
   test0[1](org.apache.hadoop.fs.TestLocalDirAllocator): Checking for 
 /data/md0/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE1
  in 
 /data/md0/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/ABSOLUTE0/block7540717865042594902.tmp
  - FAILED!
   test0[2](org.apache.hadoop.fs.TestLocalDirAllocator): Checking for 
 file:/data/md0/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED1
  in 
 /data/md0/hadoop-common/hadoop-common-project/hadoop-common/build/test/temp/QUALIFIED0/block591739547204821805.tmp
  - FAILED!
 The recently added {{testRemoveContext()}} (MAPREDUCE-4379) does not clean up 
 after itself, so if it runs before test0 (due to undefined test ordering on 
 JDK7), test0 fails. This can be fixed by wrapping it with {code}try { ... } 
 finally { rmBufferDirs(); }{code}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8060) Add a capability to discover and set checksum types per file.

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8060.
-


 Add a capability to discover and set checksum types per file.
 -

 Key: HADOOP-8060
 URL: https://issues.apache.org/jira/browse/HADOOP-8060
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, util
Affects Versions: 0.23.0, 0.23.1, 0.24.0
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 0.23.3, 2.0.2-alpha


 After the improved CRC32C checksum feature became the default, some use cases 
 involving data movement are no longer supported.  For example, when running 
 DistCp to copy from a file stored with the CRC32 checksum to a new cluster 
 with the CRC32C set to default checksum, the final data integrity check fails 
 because of mismatch in checksums.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8699) some common testcases create core-site.xml in test-classes making other testcases to fail

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8699?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8699.
-


 some common testcases create core-site.xml in test-classes making other 
 testcases to fail
 -

 Key: HADOOP-8699
 URL: https://issues.apache.org/jira/browse/HADOOP-8699
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.2-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Critical
 Fix For: 2.0.2-alpha

 Attachments: HADOOP-8699.patch, HADOOP-8699.patch


 Some of the testcases (HADOOP-8581, MAPREDUCE-4417) create core-site.xml 
 files on the fly in test-classes, overriding the core-site.xml that is part 
 of the test/resources.
 Things fail/pass depending on the order testcases are run (which seems 
 dependent on the platform/jvm you are using).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8599) Non empty response from FileSystem.getFileBlockLocations when asking for data beyond the end of file

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8599?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8599.
-


 Non empty response from FileSystem.getFileBlockLocations when asking for data 
 beyond the end of file 
 -

 Key: HADOOP-8599
 URL: https://issues.apache.org/jira/browse/HADOOP-8599
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 1.0.3, 0.23.1, 2.0.0-alpha
Reporter: Andrey Klochkov
Assignee: Andrey Klochkov
 Fix For: 0.23.3, 2.0.2-alpha

 Attachments: HADOOP-8859-branch-0.23.patch


 When FileSystem.getFileBlockLocations(file,start,len) is called with start 
 argument equal to the file size, the response is not empty. There is a test 
 TestGetFileBlockLocations.testGetFileBlockLocations2 which uses randomly 
 generated start and len arguments when calling 
 FileSystem.getFileBlockLocations and the test fails randomly (when the 
 generated start value equals to the file size).
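
 A minimal sketch of the failing call (the expected result is an empty array):
 {code}
 // start == file length, i.e. asking for data beyond the end of the file
 FileStatus stat = fs.getFileStatus(path);
 BlockLocation[] locs = fs.getFileBlockLocations(stat, stat.getLen(), 10);
 // before the fix this came back non-empty; an empty array is expected here
 {code}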

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8400) All commands warn Kerberos krb5 configuration not found when security is not enabled

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8400.
-


 All commands warn Kerberos krb5 configuration not found when security is 
 not enabled
 --

 Key: HADOOP-8400
 URL: https://issues.apache.org/jira/browse/HADOOP-8400
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Alejandro Abdelnur
 Fix For: 2.0.2-alpha

 Attachments: HADOOP-8400.patch


 Post HADOOP-8086 I get "Kerberos krb5 configuration not found, setting 
 default realm to empty" warnings when running Hadoop commands even though I 
 don't have kerb enabled.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8543) Invalid pom.xml files on 0.23 branch

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8543?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8543.
-


 Invalid pom.xml files on 0.23 branch
 

 Key: HADOOP-8543
 URL: https://issues.apache.org/jira/browse/HADOOP-8543
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 0.23.3
 Environment: FreeBSD 8.2, 64bit, Artifactory
Reporter: Radim Kolar
Assignee: Radim Kolar
  Labels: build
 Fix For: 0.23.3, 2.0.2-alpha

 Attachments: hadoop-invalid-pom-023-2.txt, 
 hadoop-invalid-pom-023-3.txt, hadoop-invalid-pom-023.txt


 This is a backport of HADOOP-8268 to the 0.23 branch. It fixes invalid pom.xml 
 files, which allows them to be uploaded into the Artifactory maven repository 
 manager, and adds schema declarations, which allows XML validating tools to be used.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8648) libhadoop: native CRC32 validation crashes when io.bytes.per.checksum=1

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8648?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8648.
-


 libhadoop:  native CRC32 validation crashes when io.bytes.per.checksum=1
 

 Key: HADOOP-8648
 URL: https://issues.apache.org/jira/browse/HADOOP-8648
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 2.0.2-alpha

 Attachments: HADOOP-8648.001.patch, HADOOP-8648.002.patch, 
 HADOOP-8648.003.patch, HADOOP-8648.004.patch, HADOOP-8648.005.patch


 The native CRC32 code, found in {{pipelined_crc32c}}, crashes when chunksize 
 is set to 1.
 {code}
 12:27:14,886  INFO NativeCodeLoader:50 - Loaded the native-hadoop library
 #
 # A fatal error has been detected by the Java Runtime Environment:
 #
 #  SIGSEGV (0xb) at pc=0x7fa00ee5a340, pid=24100, tid=140326058854144
 #
 # JRE version: 6.0_29-b11
 # Java VM: Java HotSpot(TM) 64-Bit Server VM (20.4-b02 mixed mode linux-amd64 
 compressed oops)
 # Problematic frame:
 # C  [libhadoop.so.1.0.0+0x8340]  pipelined_crc32c+0xa0
 #
 # An error report file with more information is saved as:
 # /h/hs_err_pid24100.log
 #
 # If you would like to submit a bug report, please visit:
 #   http://java.sun.com/webapps/bugreport/crash.jsp
 #
 Aborted
 {code}
 The Java CRC code works fine in this case.
 Choosing blocksize=1 is a __very__ odd choice.  It means that we're storing a 
 4-byte checksum for every byte. 
 {code}
 -rw-r--r--  1 cmccabe users  49398 Aug  3 11:33 blk_4702510289566780538
 -rw-r--r--  1 cmccabe users 197599 Aug  3 11:33 
 blk_4702510289566780538_1199.meta
 {code}
 However, obviously crashing is never the right thing to do.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8689) Make trash a server side configuration option

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8689?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8689.
-


 Make trash a server side configuration option
 -

 Key: HADOOP-8689
 URL: https://issues.apache.org/jira/browse/HADOOP-8689
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Eli Collins
 Fix For: 2.0.2-alpha

 Attachments: hadoop-8689.txt, hadoop-8689.txt


 Per ATM's suggestion in HADOOP-8598 for v2, let's make {{fs.trash.interval}} 
 configurable server side. If it is not configured server side, then the client 
 side configuration is used. The {{fs.trash.checkpoint.interval}} option is 
 already server side as the emptier runs in the NameNode. Clients may manually 
 run an emptier via hadoop org.apache.hadoop.fs.Trash but it's OK if it uses a 
 separate interval. 
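
 A sketch of the server-side setting described above (the value is illustrative, in minutes):
 {code:xml}
 <!-- in the NameNode's core-site.xml; 1440 minutes = 1 day -->
 <property>
   <name>fs.trash.interval</name>
   <value>1440</value>
 </property>
 {code}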

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8624) ProtobufRpcEngine should log all RPCs if TRACE logging is enabled

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8624?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8624.
-


 ProtobufRpcEngine should log all RPCs if TRACE logging is enabled
 -

 Key: HADOOP-8624
 URL: https://issues.apache.org/jira/browse/HADOOP-8624
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Affects Versions: 3.0.0, 2.0.2-alpha
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
 Fix For: 3.0.0, 2.0.2-alpha

 Attachments: hadoop-8624.txt


 Since all RPC requests/responses are now ProtoBufs, it's easy to add a TRACE 
 level logging output for ProtobufRpcEngine that actually shows the full 
 content of all calls. This is very handy especially when writing/debugging 
 unit tests, but might also be useful to enable at runtime for short periods 
 of time to debug certain production issues.
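
 Enabling it would presumably be a one-line log4j change, e.g.:
 {code}
 # log4j.properties: trace all RPC requests/responses through ProtobufRpcEngine
 log4j.logger.org.apache.hadoop.ipc.ProtobufRpcEngine=TRACE
 {code}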

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8362) Improve exception message when Configuration.set() is called with a null key or value

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8362?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8362.
-


 Improve exception message when Configuration.set() is called with a null key 
 or value
 -

 Key: HADOOP-8362
 URL: https://issues.apache.org/jira/browse/HADOOP-8362
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Affects Versions: 2.0.0-alpha
Reporter: Todd Lipcon
Assignee: madhukara phatak
Priority: Trivial
  Labels: newbie
 Fix For: 2.0.2-alpha

 Attachments: HADOOP-8362.10.patch, HADOOP-8362-1.patch, 
 HADOOP-8362-2.patch, HADOOP-8362-3.patch, HADOOP-8362-4.patch, 
 HADOOP-8362-5.patch, HADOOP-8362-6.patch, HADOOP-8362-7.patch, 
 HADOOP-8362-8.patch, HADOOP-8362.9.patch, HADOOP-8362.patch


 Currently, calling Configuration.set(...) with a null value results in a 
 NullPointerException within Properties.setProperty. We should check for null 
 key/value and throw a better exception.
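
 A sketch of the improved check (illustrative, not the committed patch; {{getProps()}} stands for Configuration's internal properties accessor):
 {code}
 public void set(String name, String value) {
   if (name == null) {
     throw new IllegalArgumentException("Property name must not be null");
   }
   if (value == null) {
     throw new IllegalArgumentException(
         "The value of property " + name + " must not be null");
   }
   getProps().setProperty(name, value);  // no longer NPEs with a vague trace
 }
 {code}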

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8488) test-patch.sh gives +1 even if the native build fails.

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8488?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8488.
-


 test-patch.sh gives +1 even if the native build fails.
 --

 Key: HADOOP-8488
 URL: https://issues.apache.org/jira/browse/HADOOP-8488
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.0.2-alpha

 Attachments: HADOOP-8488.001.patch


 It seems that Jenkins doesn't fail the build if the native part of the build 
 doesn't succeed.  This should be fixed!

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8794) Modifiy bin/hadoop to point to HADOOP_YARN_HOME

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8794.
-


 Modifiy bin/hadoop to point to HADOOP_YARN_HOME
 ---

 Key: HADOOP-8794
 URL: https://issues.apache.org/jira/browse/HADOOP-8794
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.0, 2.0.1-alpha
Reporter: Vinod Kumar Vavilapalli
Assignee: Vinod Kumar Vavilapalli
 Fix For: 2.0.2-alpha

 Attachments: HADOOP-8794-20120912.txt, HADOOP-8794-20120923.txt


 YARN-9 renames YARN_HOME to HADOOP_YARN_HOME. bin/hadoop script also needs to 
 do the same.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8632) Configuration leaking class-loaders

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8632.
-


 Configuration leaking class-loaders
 ---

 Key: HADOOP-8632
 URL: https://issues.apache.org/jira/browse/HADOOP-8632
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.0.0-alpha
Reporter: Costin Leau
Assignee: Costin Leau
 Fix For: 3.0.0, 2.0.2-alpha

 Attachments: 
 0001-wrapping-classes-with-WeakRefs-in-CLASS_CACHE.patch, HADOOP-8632.patch, 
 HADOOP-8632-trunk-no-tabs.patch, HADOOP-8632-trunk.patch


 The newly introduced CACHE_CLASSES leaks class loaders causing associated 
 classes to not be reclaimed.
 One solution is to remove the cache itself since each class loader 
 implementation caches the classes it loads automatically and preventing an 
 exception from being raised is just a micro-optimization that, as one can 
 tell, causes bugs instead of improving anything.
 In fact, I would argue that in a highly-concurrent environment, the WeakHashMap 
 synchronization/lookup probably costs more than creating the exception itself.
 Another is to prevent the leak from occurring, by inserting the loaded class 
 into the WeakHashMap wrapped in a WeakReference. Otherwise the class has a 
 strong reference to its classloader (the key), meaning neither gets GC'ed.
 And since the cache_class is static, even if the originating Configuration 
 instance gets GC'ed, its classloader won't.
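
 A simplified sketch of the WeakReference wrapping described above (assumes java.util and java.lang.ref imports):
 {code}
 // Values hold the class only weakly, so the Class -> ClassLoader strong
 // reference no longer pins the WeakHashMap key (the classloader).
 private static final Map<ClassLoader, Map<String, WeakReference<Class<?>>>>
     CACHE_CLASSES =
         new WeakHashMap<ClassLoader, Map<String, WeakReference<Class<?>>>>();
 {code}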

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8431) Running distcp wo args throws IllegalArgumentException

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8431?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8431.
-


 Running distcp wo args throws IllegalArgumentException
 --

 Key: HADOOP-8431
 URL: https://issues.apache.org/jira/browse/HADOOP-8431
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Sandy Ryza
  Labels: newbie
 Fix For: 2.0.2-alpha

 Attachments: diff1.txt


 Running distcp w/o args results in the following:
 {noformat}
 hadoop-3.0.0-SNAPSHOT $ ./bin/hadoop distcp
 12/05/23 18:49:04 ERROR tools.DistCp: Invalid arguments: 
 java.lang.IllegalArgumentException: Target path not specified
   at org.apache.hadoop.tools.OptionsParser.parse(OptionsParser.java:86)
   at org.apache.hadoop.tools.DistCp.run(DistCp.java:102)
   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
   at org.apache.hadoop.tools.DistCp.main(DistCp.java:368)
 Invalid arguments: Target path not specified
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8361) Avoid out-of-memory problems when deserializing strings

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8361?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8361.
-


 Avoid out-of-memory problems when deserializing strings
 ---

 Key: HADOOP-8361
 URL: https://issues.apache.org/jira/browse/HADOOP-8361
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.0.2-alpha

 Attachments: HADOOP-8361.001.patch, HADOOP-8361.002.patch, 
 HADOOP-8361.003.patch, HADOOP-8361.004.patch, HADOOP-8361.005.patch, 
 HADOOP-8361.006.patch, HADOOP-8361.007.patch


 In HDFS, we want to be able to read the edit log without crashing on an OOM 
 condition.  Unfortunately, we currently cannot do this, because there are no 
 limits on the length of certain data types we pull from the edit log.  We 
 often read strings without setting any upper limit on the length we're 
 prepared to accept.
 It's not that we don't have limits on strings-- for example, HDFS limits the 
 maximum path length to 8000 UCS-2 characters.  Linux limits the maximum user 
 name length to either 64 or 128 bytes, depending on what version you are 
 running.  It's just that we're not exposing these limits to the 
 deserialization functions that need to be aware of them.
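
 A sketch of what an exposed limit could look like (names and signature illustrative):
 {code}
 public static String readString(DataInput in, int maxLength) throws IOException {
   int length = WritableUtils.readVInt(in);
   if (length < 0 || length > maxLength) {
     throw new IOException("string length " + length +
         " is outside the allowed range [0, " + maxLength + "]");
   }
   byte[] bytes = new byte[length];
   in.readFully(bytes);
   return new String(bytes, "UTF-8");
 }
 {code}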

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8563) don't package hadoop-pipes examples/bin

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8563.
-


 don't package hadoop-pipes examples/bin
 ---

 Key: HADOOP-8563
 URL: https://issues.apache.org/jira/browse/HADOOP-8563
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.0.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.0.2-alpha

 Attachments: HADOOP-8563.001.patch


 Let's not package hadoop-pipes examples/bin

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8614) IOUtils#skipFully hangs forever on EOF

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8614?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8614.
-


 IOUtils#skipFully hangs forever on EOF
 --

 Key: HADOOP-8614
 URL: https://issues.apache.org/jira/browse/HADOOP-8614
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 0.23.3, 2.0.2-alpha

 Attachments: HADOOP-8614.001.patch


 IOUtils#skipFully contains this code:
 {code}
 public static void skipFully(InputStream in, long len) throws IOException {
   while (len > 0) {
     long ret = in.skip(len);
     if (ret < 0) {
       throw new IOException("Premature EOF from inputStream");
     }
     len -= ret;
   }
 }
 {code}
 The Java documentation is silent about what exactly skip is supposed to do in 
 the event of EOF.  However, I looked at both InputStream#skip and 
 ByteArrayInputStream#skip, and they both simply return 0 on EOF (no 
 exception).  So it seems safe to assume that this is the standard Java way of 
 doing things in an InputStream.
 Currently IOUtils#skipFully will loop forever if you ask it to skip past EOF!
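
 A sketch of a fix consistent with that behavior: a 0 return from skip() is probed with read() to tell a slow stream apart from EOF (uses java.io.EOFException; this is a sketch, not the committed patch):
 {code}
 public static void skipFully(InputStream in, long len) throws IOException {
   while (len > 0) {
     long ret = in.skip(len);
     if (ret == 0) {
       // skip() may return 0 without being at EOF; read() disambiguates
       if (in.read() == -1) {
         throw new EOFException("Premature EOF from inputStream");
       }
       ret = 1;
     }
     len -= ret;
   }
 }
 {code}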

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8581) add support for HTTPS to the web UIs

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8581?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8581.
-


 add support for HTTPS to the web UIs
 

 Key: HADOOP-8581
 URL: https://issues.apache.org/jira/browse/HADOOP-8581
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Affects Versions: 2.0.0-alpha
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Fix For: 2.0.2-alpha

 Attachments: HADOOP-8581.patch, HADOOP-8581.patch, HADOOP-8581.patch, 
 HADOOP-8581.patch, HADOOP-8581.patch, HADOOP-8581.patch, HADOOP-8581.patch


 HDFS/MR web UIs don't work over HTTPS, there are places where 'http://' is 
 hardcoded.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-7703) WebAppContext should also be stopped and cleared

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-7703.
-


 WebAppContext should also be stopped and cleared
 

 Key: HADOOP-7703
 URL: https://issues.apache.org/jira/browse/HADOOP-7703
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.24.0
Reporter: Devaraj K
Assignee: Devaraj K
 Fix For: 2.0.2-alpha

 Attachments: HADOOP-7703.patch


 1. If listener stop method throws any exception then the webserver stop 
 method will not be called
 {code}
 public void stop() throws Exception {
   listener.close();
   webServer.stop();
 }
 {code}
 2. also, WebAppContext stores all the context attributes, which does not get 
 cleared if only webServer is stopped.
 so following calls are necessary to ensure clean and complete stop.
 {code}
 webAppContext.clearAttributes();
 webAppContext.stop();
 {code}
 3. Also the WebAppContext display name can be the name passed to HttpServer 
 instance.
 {code}
 webAppContext.setDisplayName(name);
 {code}
 instead of
 {code}
 webAppContext.setDisplayName(WepAppsContext);
 {code}
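
 Putting the three points together, the stop sequence might look like this (a sketch):
 {code}
 public void stop() throws Exception {
   try {
     listener.close();
   } finally {
     // runs even if closing the listener throws (point 1)
     webAppContext.clearAttributes();  // point 2
     webAppContext.stop();
     webServer.stop();
   }
 }
 {code}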

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8654) TextInputFormat delimiter bug:- Input Text portion ends with Delimiter starts with same char/char sequence

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8654?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8654.
-


 TextInputFormat delimiter  bug:- Input Text portion ends with  Delimiter 
 starts with same char/char sequence
 -

 Key: HADOOP-8654
 URL: https://issues.apache.org/jira/browse/HADOOP-8654
 Project: Hadoop Common
  Issue Type: Bug
  Components: util
Affects Versions: 0.20.204.0, 1.0.3, 0.21.0, 2.0.0-alpha
 Environment: Linux
Reporter: Gelesh
  Labels: patch
 Fix For: 3.0.0, 2.0.2-alpha

 Attachments: HADOOP-8654.patch, MAPREDUCE-4512.txt

   Original Estimate: 1m
  Remaining Estimate: 1m

 The TextInputFormat delimiter bug occurs when a portion of the input text 
 ends with the same character (or character sequence) that the delimiter 
 starts with, immediately followed by the rest of the delimiter sequence.
 e.g. delimiter = "record"
 and Text = record 1:- name = Gelesh e mail = gelesh.had...@gmail.com 
 Location Bangalore record 2: name = sdf  ..  location =Bangalorrecord 3: name 
 ...
 Here the string =Bangalorrecord 3: satisfies two conditions:
 1) it contains the delimiter "record";
 2) the character (or character sequence) immediately before the delimiter 
 (i.e. 'r') matches the first character (or character sequence) of the 
 delimiter (i.e. =Bangalor ends with, and the delimiter starts with, the 
 same character 'r').
 In this case the delimiter is not detected by the program, resulting in an 
 improper value text in the map that still contains the delimiter.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8766) FileContextMainOperationsBaseTest should randomize the root dir

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8766.
-


 FileContextMainOperationsBaseTest should randomize the root dir 
 

 Key: HADOOP-8766
 URL: https://issues.apache.org/jira/browse/HADOOP-8766
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Colin Patrick McCabe
  Labels: newbie
 Fix For: 2.0.2-alpha

 Attachments: HADOOP-8766.001.patch


 FileContextMainOperationsBaseTest should randomize the name of the root 
 directory it creates. It currently hardcodes LOCAL_FS_ROOT_URI to 
 {{/tmp/test}}.
 This causes the job to fail if it clashes with another job that also uses 
 that path, e.g.:
 {noformat}
 org.apache.hadoop.fs.FileAlreadyExistsException: Parent path is not a 
 directory: file:/tmp/test
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:362)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:373)
 at org.apache.hadoop.fs.FileSystem.primitiveMkdir(FileSystem.java:931)
 at 
 org.apache.hadoop.fs.DelegateToFileSystem.mkdir(DelegateToFileSystem.java:143)
 at org.apache.hadoop.fs.FilterFs.mkdir(FilterFs.java:189)
 at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:706)
 at org.apache.hadoop.fs.FileContext$4.next(FileContext.java:703)
 at 
 org.apache.hadoop.fs.FileContext$FSLinkResolver.resolve(FileContext.java:2333)
 at org.apache.hadoop.fs.FileContext.mkdir(FileContext.java:703)
 at 
 org.apache.hadoop.fs.FileContextMainOperationsBaseTest.testWorkingDirectory(FileContextMainOperationsBaseTest.java:178)
 {noformat}
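 A minimal sketch of the randomization, assuming a UUID suffix is acceptable 
 (names here are illustrative, not the committed patch):
 {code}
 import java.util.UUID;

 public class RandomTestRoot {
   // Unique per run, so concurrent jobs no longer collide on /tmp/test.
   static final String LOCAL_FS_ROOT_URI =
       "file:///tmp/test-" + UUID.randomUUID();

   public static void main(String[] args) {
     System.out.println(LOCAL_FS_ROOT_URI);
   }
 }
 {code}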

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8737) cmake: always use JAVA_HOME to find libjvm.so, jni.h, jni_md.h

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8737?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8737.
-


 cmake: always use JAVA_HOME to find libjvm.so, jni.h, jni_md.h
 --

 Key: HADOOP-8737
 URL: https://issues.apache.org/jira/browse/HADOOP-8737
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.0.2-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.0.2-alpha

 Attachments: HADOOP-8737.001.patch, HADOOP-8737.002.patch


 We should always use the {{libjvm.so}}, {{jni.h}}, and {{jni_md.h}} under 
 {{JAVA_HOME}}, rather than trying to look for them in system paths.  Since we 
 compile with Maven, we know that we'll have a valid {{JAVA_HOME}} at all 
 times.  There is no point digging in system paths, and it can lead to host 
 contamination if the user has multiple JVMs installed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-7967) Need generalized multi-token filesystem support

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7967?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-7967.
-


 Need generalized multi-token filesystem support
 ---

 Key: HADOOP-7967
 URL: https://issues.apache.org/jira/browse/HADOOP-7967
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs, security
Affects Versions: 0.23.1, 0.24.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Fix For: 0.23.3, 2.0.2-alpha

 Attachments: HADOOP-7967-2.patch, HADOOP-7967-3.patch, 
 HADOOP-7967-4.patch, HADOOP-7967-compat.patch, hadoop7967-deltas.patch, 
 hadoop7967-javadoc.patch, HADOOP-7967.newapi.2.patch, 
 HADOOP-7967.newapi.3.patch, HADOOP-7967.newapi.4.patch, 
 HADOOP-7967.newapi.5.patch, HADOOP-7967.newapi.patch, HADOOP-7967.patch


 Multi-token filesystem support and its interaction with the MR 
 {{TokenCache}} is problematic.  The {{TokenCache}} assumes it can know 
 whether the tokens for a filesystem are available, which it can't possibly 
 know for multi-token filesystems.  Filtered filesystems are also 
 problematic, such as har on viewfs.  When mergeFs is implemented, it too 
 will become a problem with the current implementation.  Currently 
 {{FileSystem}} will leak tokens even when some tokens are already present.
 The decision for token acquisition, and which tokens, should be pushed all 
 the way down into the {{FileSystem}} level.  The {{TokenCache}} should be 
 ignorant and simply request tokens from each {{FileSystem}}.
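 A hedged interface sketch of that direction (hypothetical names, not the 
 committed API):
 {code}
 import java.io.IOException;
 import org.apache.hadoop.security.Credentials;

 interface DelegationTokenSource {
   // Each filesystem acquires whatever tokens it needs -- including
   // tokens for filesystems it wraps (e.g. har over viewfs) -- adding
   // only those not already present in the credentials.
   void collectDelegationTokens(String renewer, Credentials creds)
       throws IOException;
 }
 {code}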

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8316) Audit logging should be disabled by default

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8316?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8316.
-


 Audit logging should be disabled by default
 ---

 Key: HADOOP-8316
 URL: https://issues.apache.org/jira/browse/HADOOP-8316
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Eli Collins
 Fix For: 2.0.2-alpha

 Attachments: hadoop-8316.txt


 HADOOP-7633 made hdfs, mr and security audit logging on by default (INFO 
 level) in the log4j.properties used for the packages; this then got copied 
 over to the non-packaging log4j.properties in HADOOP-8216 (which made them 
 consistent).
 It seems like we should keep the v1.x setting, which is disabled (WARN 
 level) by default. There's a performance overhead to audit logging, and 
 HADOOP-7633 provided no rationale (just "We should add the audit logs as 
 part of default confs") as to why they were enabled for the packages.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8390) TestFileSystemCanonicalization fails with JDK7

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8390?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8390.
-


 TestFileSystemCanonicalization fails with JDK7
 --

 Key: HADOOP-8390
 URL: https://issues.apache.org/jira/browse/HADOOP-8390
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
 Environment: Apache Maven 3.0.4 (r1232337; 2012-01-17 02:44:56-0600)
 Maven home: /usr/local/apache-maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-24-generic, arch: amd64, family: unix
 Ubuntu 12.04 LTS
Reporter: Trevor Robinson
Assignee: Trevor Robinson
  Labels: java7
 Fix For: 0.23.3, 2.0.2-alpha

 Attachments: HADOOP-8390-BeforeClass.patch, HADOOP-8390.patch


 Failed tests:
  testShortAuthority(org.apache.hadoop.fs.TestFileSystemCanonicalization):
 expected:<myfs://host.a.b:123> but was:<myfs://host:123>
  testPartialAuthority(org.apache.hadoop.fs.TestFileSystemCanonicalization):
 expected:<myfs://host.a.b:123> but was:<myfs://host.a:123>
 Passes on same machine with JDK 1.6.0_32.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8340) SNAPSHOT build versions should compare as less than their eventual final release

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8340?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8340.
-


 SNAPSHOT build versions should compare as less than their eventual final 
 release
 

 Key: HADOOP-8340
 URL: https://issues.apache.org/jira/browse/HADOOP-8340
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Affects Versions: 2.0.0-alpha
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor
 Fix For: 2.0.2-alpha

 Attachments: hadoop-8340.txt, hadoop-8340.txt


 We recently added a utility function to compare two version strings, based 
 on splitting on '.'s and comparing each component. However, it considers a 
 version like "2.0.0-SNAPSHOT" to be greater than "2.0.0". This isn't right, 
 since SNAPSHOT builds come before the final release.
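 A hedged sketch of the intended ordering (hypothetical helper, not the 
 actual Hadoop utility; it assumes purely numeric components plus an 
 optional -SNAPSHOT suffix):
 {code}
 public class VersionCompare {
   // Negative if a < b. A "-SNAPSHOT" build of X.Y.Z sorts before the
   // final X.Y.Z release; numeric components compare as before.
   static int compareVersions(String a, String b) {
     String[] ap = a.replace("-SNAPSHOT", "").split("\\.");
     String[] bp = b.replace("-SNAPSHOT", "").split("\\.");
     for (int i = 0; i < Math.max(ap.length, bp.length); i++) {
       int ai = i < ap.length ? Integer.parseInt(ap[i]) : 0;
       int bi = i < bp.length ? Integer.parseInt(bp[i]) : 0;
       if (ai != bi) return ai - bi;
     }
     // Same numeric parts: the SNAPSHOT side is the lesser one.
     boolean aSnap = a.endsWith("-SNAPSHOT");
     boolean bSnap = b.endsWith("-SNAPSHOT");
     return (aSnap ? 0 : 1) - (bSnap ? 0 : 1);
   }

   public static void main(String[] args) {
     System.out.println(compareVersions("2.0.0-SNAPSHOT", "2.0.0")); // < 0
   }
 }
 {code}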

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8633) Interrupted FsShell copies may leave tmp files

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8633?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8633.
-


 Interrupted FsShell copies may leave tmp files
 --

 Key: HADOOP-8633
 URL: https://issues.apache.org/jira/browse/HADOOP-8633
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.0, 2.0.0-alpha, 3.0.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Critical
 Fix For: 0.23.3, 2.0.2-alpha

 Attachments: HADOOP-8633.patch


 Interrupting a copy, e.g. via SIGINT, may cause tmp files to not be 
 removed.  If the user is copying large files, the remnants will eat into 
 the user's quota.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8586) Fixup a bunch of SPNEGO misspellings

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8586?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8586.
-


 Fixup a bunch of SPNEGO misspellings
 

 Key: HADOOP-8586
 URL: https://issues.apache.org/jira/browse/HADOOP-8586
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Eli Collins
 Fix For: 1.2.0, 2.0.2-alpha

 Attachments: hadoop-8586-b1.txt, hadoop-8586.txt


 SPNEGO is misspelled as "SPENGO" in a bunch of places.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8703) distcpV2: turn CRC checking off for 0 byte size

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8703.
-


 distcpV2: turn CRC checking off for 0 byte size
 ---

 Key: HADOOP-8703
 URL: https://issues.apache.org/jira/browse/HADOOP-8703
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.3
Reporter: Dave Thompson
Assignee: Dave Thompson
 Fix For: 0.23.3, 2.0.2-alpha

 Attachments: HADOOP-8703-branch-0.23.patch, 
 HADOOP-8703-branch-0.23.patch


 DistcpV2 (hadoop-tools/hadoop-distcp/..) can fail from a checksum failure, 
 sometimes when copying a 0 byte file. The root cause may have to do with 
 the inconsistent nature of HDFS when creating 0 byte files; however, distcp 
 can avoid this issue by not checking the CRC when the size is zero.
 This issue was reported as part of HADOOP-8233, though it seems like a 
 better idea to treat this particular aspect on its own.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8827) Upgrade jets3t to the latest

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8827?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8827.
-


 Upgrade jets3t to the latest
 

 Key: HADOOP-8827
 URL: https://issues.apache.org/jira/browse/HADOOP-8827
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/s3
Affects Versions: 2.0.0-alpha
 Environment: Ubuntu 12.04 LTS 64bit
Reporter: Zack
  Labels: hadoop
 Fix For: 2.0.2-alpha

   Original Estimate: 72h
  Remaining Estimate: 72h

 As of hadoop-2.0.1-alpha, the bundled jets3t is 0.6.1, but the latest 
 jets3t is already at 0.9.0 (http://jets3t.s3.amazonaws.com/downloads.html). 
 Perhaps it would be good to upgrade to a more up-to-date version.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8541) Better high-percentile latency metrics

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8541?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8541.
-


 Better high-percentile latency metrics
 --

 Key: HADOOP-8541
 URL: https://issues.apache.org/jira/browse/HADOOP-8541
 Project: Hadoop Common
  Issue Type: Improvement
  Components: metrics
Affects Versions: 2.0.0-alpha
Reporter: Andrew Wang
Assignee: Andrew Wang
 Fix For: 2.0.2-alpha

 Attachments: hadoop-8541-1.patch, hadoop-8541-2.patch, 
 hadoop-8541-3.patch, hadoop-8541-4.patch, hadoop-8541-5.patch, 
 hadoop-8541-6.patch


 Based on discussion in HBASE-6261 and with some HDFS devs, I'd like to make 
 better high-percentile latency metrics a part of hadoop-common.
 I've already got a working implementation of [1], an efficient algorithm for 
 estimating quantiles on a stream of values. It allows you to specify 
 arbitrary quantiles to track (e.g. 50th, 75th, 90th, 95th, 99th), along with 
 tight error bounds. This estimator can be snapshotted and reset periodically 
 to get a feel for how these percentiles are changing over time.
 I propose creating a new MutableQuantiles class that does this. [1] isn't 
 completely without overhead (~1MB memory for reasonably sized windows), which 
 is why I hesitate to add it to the existing MutableStat class.
 [1] Cormode, Korn, Muthukrishnan, and Srivastava. Effective Computation of 
 Biased Quantiles over Data Streams in ICDE 2005.
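 A naive stand-in to illustrate the snapshot/reset cycle (exact quantiles 
 over a bounded window; the proposed MutableQuantiles uses the approximate 
 estimator from [1] instead, so this is only a sketch of the usage pattern):
 {code}
 import java.util.ArrayList;
 import java.util.Collections;
 import java.util.List;

 class WindowedQuantiles {
   private final List<Long> window = new ArrayList<Long>();

   synchronized void add(long value) {
     window.add(value);
   }

   // Report the requested quantile (e.g. 0.99) and reset the window,
   // mirroring the periodic snapshot-and-reset described above.
   synchronized long snapshotAndReset(double q) {
     if (window.isEmpty()) return 0;
     Collections.sort(window);
     int idx = Math.min(window.size() - 1, (int) (q * window.size()));
     long v = window.get(idx);
     window.clear();
     return v;
   }
 }
 {code}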

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8406) CompressionCodecFactory.CODEC_PROVIDERS iteration is thread-unsafe

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8406?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8406.
-


 CompressionCodecFactory.CODEC_PROVIDERS iteration is thread-unsafe
 --

 Key: HADOOP-8406
 URL: https://issues.apache.org/jira/browse/HADOOP-8406
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 2.0.0-alpha
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Fix For: 2.0.2-alpha

 Attachments: hadoop-8406.txt


 CompressionCodecFactory defines CODEC_PROVIDERS as:
 {code}
   private static final ServiceLoader<CompressionCodec> CODEC_PROVIDERS =
 ServiceLoader.load(CompressionCodec.class);
 {code}
 but this is a lazy collection which is thread-unsafe to iterate. We either 
 need to synchronize when we iterate over it, or we need to materialize it 
 at class-loading time by copying it into a non-lazy collection.
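 A hedged sketch of the second option (materialize once up front; not the 
 attached patch):
 {code}
 import java.util.ArrayList;
 import java.util.Collections;
 import java.util.List;
 import java.util.ServiceLoader;

 class Materialize {
   // Copy the lazy ServiceLoader into an immutable list once, so later
   // iteration is read-only and needs no synchronization.
   static <T> List<T> loadAll(Class<T> clazz) {
     List<T> providers = new ArrayList<T>();
     for (T provider : ServiceLoader.load(clazz)) {
       providers.add(provider);
     }
     return Collections.unmodifiableList(providers);
   }
 }
 {code}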

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8697) TestWritableName fails intermittently with JDK7

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8697?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8697.
-


 TestWritableName fails intermittently with JDK7
 ---

 Key: HADOOP-8697
 URL: https://issues.apache.org/jira/browse/HADOOP-8697
 Project: Hadoop Common
  Issue Type: Bug
  Components: test
Affects Versions: 0.23.3, 3.0.0, 2.0.2-alpha
 Environment: Apache Maven 3.0.4
 Maven home: /usr/share/maven
 Java version: 1.7.0_04, vendor: Oracle Corporation
 Java home: /usr/lib/jvm/jdk1.7.0_04/jre
 Default locale: en_US, platform encoding: ISO-8859-1
 OS name: linux, version: 3.2.0-25-generic, arch: amd64, family: unix
Reporter: Trevor Robinson
Assignee: Trevor Robinson
  Labels: java7
 Fix For: 0.23.3, 2.0.2-alpha

 Attachments: HADOOP-8697.patch


 On JDK7, {{testAddName}} can run before {{testSetName}}, which causes it to 
 fail with:
 {noformat}
 testAddName(org.apache.hadoop.io.TestWritableName): WritableName can't load 
 class: mystring
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8538) CMake builds fail on ARM

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8538?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8538.
-


 CMake builds fail on ARM
 

 Key: HADOOP-8538
 URL: https://issues.apache.org/jira/browse/HADOOP-8538
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
 Environment: gcc version 4.6.3 (Ubuntu/Linaro 4.6.3-1ubuntu5)
Reporter: Trevor Robinson
Assignee: Trevor Robinson
 Fix For: 2.0.2-alpha

 Attachments: hadoop-cmake.patch


 CMake native builds fail with this error:
 cc1: error: unrecognized command line option '-m32'
 -m32 is only defined by GCC for x86, PowerPC, and SPARC.
 The following files specify -m32 when the JVM data model is 32-bit:
 hadoop-common-project/hadoop-common/src/CMakeLists.txt
 hadoop-hdfs-project/hadoop-hdfs/src/CMakeLists.txt
 hadoop-tools/hadoop-pipes/src/CMakeLists.txt
 hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-nodemanager/src/CMakeLists.txt
 This is a partial regression of HDFS-1920.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8626) Typo in default setting for hadoop.security.group.mapping.ldap.search.filter.user

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8626.
-


 Typo in default setting for 
 hadoop.security.group.mapping.ldap.search.filter.user
 -

 Key: HADOOP-8626
 URL: https://issues.apache.org/jira/browse/HADOOP-8626
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.0.0-alpha
Reporter: Jonathan Natkins
Assignee: Jonathan Natkins
 Fix For: 2.0.2-alpha

 Attachments: HADOOP-8626.patch


 (&(objectClass=user)(sAMAccountName={0}) should have a trailing 
 parenthesis at the end
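 A hedged Java illustration of the corrected filter (note the trailing 
 parenthesis; setting the key explicitly via Configuration is just for 
 demonstration):
 {code}
 import org.apache.hadoop.conf.Configuration;

 public class LdapFilterFix {
   public static void main(String[] args) {
     Configuration conf = new Configuration();
     // Corrected default: the group-mapping filter now balances its parens.
     conf.set("hadoop.security.group.mapping.ldap.search.filter.user",
              "(&(objectClass=user)(sAMAccountName={0}))");
     System.out.println(conf.get(
         "hadoop.security.group.mapping.ldap.search.filter.user"));
   }
 }
 {code}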

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8499) Lower min.user.id to 500 for the tests

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8499?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8499.
-


 Lower min.user.id to 500 for the tests
 --

 Key: HADOOP-8499
 URL: https://issues.apache.org/jira/browse/HADOOP-8499
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
Priority: Minor
 Fix For: 2.0.2-alpha

 Attachments: HADOOP-8499.002.patch


 On Linux platforms where user IDs start at 500 rather than 1000, the build 
 is currently broken.  This includes CentOS, RHEL, Fedora, SuSE, and 
 probably most other Linux platforms.  It does happen to work on Debian and 
 Ubuntu, which explains why Jenkins hasn't caught it yet.
 Other users will see something like this:
 {code}
 [INFO] Requested user cmccabe has id 500, which is below the minimum allowed 
 1000
 [INFO] FAIL: test-container-executor
 [INFO] 
 [INFO] 1 of 1 test failed
 [INFO] Please report to mapreduce-...@hadoop.apache.org
 [INFO] 
 [INFO] make[1]: *** [check-TESTS] Error 1
 [INFO] make[1]: Leaving directory 
 `/home/cmccabe/hadoop4/hadoop-mapreduce-project/hadoop-yarn/hadoop-yarn
 -server/hadoop-yarn-server-nodemanager/target/native/container-executor'
 {code}
 And then the build fails.  Since native unit tests are currently unskippable 
 (HADOOP-8480) this makes the project unbuildable.
 The easy solution to this is to relax the constraint for the unit test.  
 Since the unit test already writes its own configuration file, we just need 
 to change it there.
 In general, I believe that it would make sense to change this to 500 across 
 the board.  I'm not aware of any Linuxes that create system users with IDs 
 higher than or equal to 500.  System user IDs tend to be below 200.
 However, if we do nothing else, we should at least fix the build by relaxing 
 the constraint for unit tests.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8710) Remove ability for users to easily run the trash emptier

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8710?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8710.
-


 Remove ability for users to easily run the trash emptier
 

 Key: HADOOP-8710
 URL: https://issues.apache.org/jira/browse/HADOOP-8710
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Affects Versions: 1.0.0, 2.0.0-alpha
Reporter: Eli Collins
Assignee: Eli Collins
 Fix For: 2.0.2-alpha

 Attachments: hadoop-8710.txt


 Users can currently run the emptier via {{hadoop 
 org.apache.hadoop.fs.Trash}}, which seems error prone: nothing in that 
 command suggests it runs the emptier, and nothing asks you before deleting 
 the trash of all users (that the current user is capable of deleting). 
 Given that the trash emptier runs server side (e.g. on the NN), let's 
 remove the ability to easily run it client side.  Marking this as an 
 incompatible change, since someone expecting the hadoop command with this 
 class specified to empty trash will no longer be able to (they'll need to 
 create their own class that does this).

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8721) ZKFC should not retry 45 times when attempting a graceful fence during a failover

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8721?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8721.
-


 ZKFC should not retry 45 times when attempting a graceful fence during a 
 failover
 -

 Key: HADOOP-8721
 URL: https://issues.apache.org/jira/browse/HADOOP-8721
 Project: Hadoop Common
  Issue Type: Bug
  Components: auto-failover, ha
Affects Versions: 2.0.0-alpha
Reporter: suja s
Assignee: Vinay
Priority: Critical
 Fix For: 2.0.2-alpha

 Attachments: HDFS-3561-2.patch, HDFS-3561-3.patch, HDFS-3561.patch


 Scenario:
 Active NN on machine1
 Standby NN on machine2
 Machine1 is isolated from the network (machine1 network cable unplugged).
 After the zk session timeout, the ZKFC on the machine2 side gets a 
 notification that NN1 is gone.
 ZKFC tries to fail over, making NN2 active.
 As part of this, during fencing it tries to connect to machine1 and kill 
 NN1 (the sshfence technique is configured).
 This connection retry happens 45 times (as it takes 
 ipc.client.connect.max.socket.retries).
 Also, after that the standby NN is not able to take over as active 
 (because of the fencing failure).
 Suggestion: If the ZKFC is not able to reach the other NN for a specified 
 time/number of retries, it can consider that NN dead and instruct the 
 other NN to take over as active, as there is no chance of the other NN 
 (NN1) retaining its active state after the zk session timeout when it is 
 isolated from the network.
 From the ZKFC log:
 {noformat}
 2012-06-21 17:46:14,378 INFO org.apache.hadoop.ipc.Client: Retrying connect 
 to server: HOST-xx-xx-xx-102/xx.xx.xx.102:65110. Already tried 22 time(s).
 2012-06-21 17:46:35,378 INFO org.apache.hadoop.ipc.Client: Retrying connect 
 to server: HOST-xx-xx-xx-102/xx.xx.xx.102:65110. Already tried 23 time(s).
 2012-06-21 17:46:56,378 INFO org.apache.hadoop.ipc.Client: Retrying connect 
 to server: HOST-xx-xx-xx-102/xx.xx.xx.102:65110. Already tried 24 time(s).
 2012-06-21 17:47:17,378 INFO org.apache.hadoop.ipc.Client: Retrying connect 
 to server: HOST-xx-xx-xx-102/xx.xx.xx.102:65110. Already tried 25 time(s).
 2012-06-21 17:47:38,382 INFO org.apache.hadoop.ipc.Client: Retrying connect 
 to server: HOST-xx-xx-xx-102/xx.xx.xx.102:65110. Already tried 26 time(s).
 2012-06-21 17:47:59,382 INFO org.apache.hadoop.ipc.Client: Retrying connect 
 to server: HOST-xx-xx-xx-102/xx.xx.xx.102:65110. Already tried 27 time(s).
 2012-06-21 17:48:20,386 INFO org.apache.hadoop.ipc.Client: Retrying connect 
 to server: HOST-xx-xx-xx-102/xx.xx.xx.102:65110. Already tried 28 time(s).
 2012-06-21 17:48:41,386 INFO org.apache.hadoop.ipc.Client: Retrying connect 
 to server: HOST-xx-xx-xx-102/xx.xx.xx.102:65110. Already tried 29 time(s).
 2012-06-21 17:49:02,386 INFO org.apache.hadoop.ipc.Client: Retrying connect 
 to server: HOST-xx-xx-xx-102/xx.xx.xx.102:65110. Already tried 30 time(s).
 2012-06-21 17:49:23,386 INFO org.apache.hadoop.ipc.Client: Retrying connect 
 to server: HOST-xx-xx-xx-102/xx.xx.xx.102:65110. Already tried 31 time(s).
 {noformat}
  

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8700) Move the checksum type constants to an enum

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8700.
-


 Move the checksum type constants to an enum
 ---

 Key: HADOOP-8700
 URL: https://issues.apache.org/jira/browse/HADOOP-8700
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Tsz Wo (Nicholas), SZE
Priority: Minor
 Fix For: 0.23.3, 2.0.2-alpha

 Attachments: c8700_20120815b.patch, c8700_20120815.patch, 
 hadoop-8700-branch-0.23.patch.txt


 In DataChecksum, there are constants for crc types, crc names and crc sizes.  
 We should move them to an enum for better coding style.
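 A hedged enum sketch of the idea (field and constant names are 
 illustrative, not the committed API):
 {code}
 public enum ChecksumType {
   NULL("NULL", 0),
   CRC32("CRC32", 4),
   CRC32C("CRC32C", 4);

   public final String checksumName;  // crc name
   public final int size;             // bytes per checksum

   ChecksumType(String checksumName, int size) {
     this.checksumName = checksumName;
     this.size = size;
   }
 }
 {code}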

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8660) TestPseudoAuthenticator failing with NPE

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8660?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8660.
-


 TestPseudoAuthenticator failing with NPE
 

 Key: HADOOP-8660
 URL: https://issues.apache.org/jira/browse/HADOOP-8660
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Alejandro Abdelnur
 Fix For: 2.0.2-alpha

 Attachments: HADOOP-8660.patch


 This test started failing recently, on top of trunk:
 testAuthenticationAnonymousAllowed(org.apache.hadoop.security.authentication.client.TestPseudoAuthenticator)
   Time elapsed: 0.241 sec  <<< ERROR!
 java.lang.NullPointerException
 at 
 org.apache.hadoop.security.authentication.client.PseudoAuthenticator.authenticate(PseudoAuthenticator.java:75)
 at 
 org.apache.hadoop.security.authentication.client.AuthenticatedURL.openConnection(AuthenticatedURL.java:232)
 at 
 org.apache.hadoop.security.authentication.client.AuthenticatorTestCase._testAuthentication(AuthenticatorTestCase.java:127)
 at 
 org.apache.hadoop.security.authentication.client.TestPseudoAuthenticator.testAuthenticationAnonymousAllowed(TestPseudoAuthenticator.java:65)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8249) invalid hadoop-auth cookies should trigger authentication if info is avail before returning HTTP 401

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8249?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8249.
-


 invalid hadoop-auth cookies should trigger authentication if info is avail 
 before returning HTTP 401
 

 Key: HADOOP-8249
 URL: https://issues.apache.org/jira/browse/HADOOP-8249
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 1.1.0, 2.0.0-alpha
Reporter: bc Wong
Assignee: Alejandro Abdelnur
 Fix For: 1.2.0, 2.0.2-alpha

 Attachments: HADOOP-8249.patch, HDFS-3198_branch-1.patch


 WebHdfs gives out cookies.  But when the client passes them back, it'd 
 sometimes reject them and return an HTTP 401 instead ("sometimes" as in 
 after a restart).  The interesting thing is that if the client doesn't 
 pass the cookie back, WebHdfs will be totally happy.
 The correct behaviour should be to ignore the cookie if it looks invalid, 
 and attempt to proceed with the request handling.
 I haven't tried HttpFs to see whether it handles restart better.
 Reproducing it with curl:
 {noformat}
 
 ## Initial curl. Storing cookie to file.
 
 [root@vbox2 ~]# curl -c /tmp/webhdfs.cookie -i 
 'http://localhost:50070/webhdfs/v1/?op=LISTSTATUS&user.name=bcwalrus'
 HTTP/1.1 200 OK
 Content-Type: application/json
 Expires: Thu, 01-Jan-1970 00:00:00 GMT
 Set-Cookie: 
 hadoop.auth="u=bcwalrus&p=bcwalrus&t=simple&e=1333614686366&s=z2w5xpFlufnnEoOHxVRiXqxwtqM=";Path=/
 Content-Length: 597
 Server: Jetty(6.1.26)
 {"FileStatuses":{"FileStatus":[
 {"accessTime":0,"blockSize":0,"group":"supergroup","length":0,"modificationTime":1333577906198,"owner":"mapred","pathSuffix":"tmp","permission":"1777","replication":0,"type":"DIRECTORY"},
 {"accessTime":0,"blockSize":0,"group":"supergroup","length":0,"modificationTime":1333577511848,"owner":"hdfs","pathSuffix":"user","permission":"1777","replication":0,"type":"DIRECTORY"},
 {"accessTime":0,"blockSize":0,"group":"supergroup","length":0,"modificationTime":1333428745116,"owner":"mapred","pathSuffix":"var","permission":"755","replication":0,"type":"DIRECTORY"}
 ]}}
 
 ## Another curl. Using the cookie jar.
 
 [root@vbox2 ~]# curl -b /tmp/webhdfs.cookie -i 
 'http://localhost:50070/webhdfs/v1/?op=LISTSTATUS&user.name=bcwalrus'
 HTTP/1.1 200 OK
 Content-Type: application/json
 Content-Length: 597
 Server: Jetty(6.1.26)
 {"FileStatuses":{"FileStatus":[
 {"accessTime":0,"blockSize":0,"group":"supergroup","length":0,"modificationTime":1333577906198,"owner":"mapred","pathSuffix":"tmp","permission":"1777","replication":0,"type":"DIRECTORY"},
 {"accessTime":0,"blockSize":0,"group":"supergroup","length":0,"modificationTime":1333577511848,"owner":"hdfs","pathSuffix":"user","permission":"1777","replication":0,"type":"DIRECTORY"},
 {"accessTime":0,"blockSize":0,"group":"supergroup","length":0,"modificationTime":1333428745116,"owner":"mapred","pathSuffix":"var","permission":"755","replication":0,"type":"DIRECTORY"}
 ]}}
 
 ## Restart NN.
 
 [root@vbox2 ~]# /etc/init.d/hadoop-hdfs-namenode restart
 Stopping Hadoop namenode:  [  OK  ]
 stopping namenode
 Starting Hadoop namenode:  [  OK  ]
 starting namenode, logging to 
 /var/log/hadoop-hdfs/hadoop-hdfs-namenode-vbox2.out
 
 ## Curl using cookie jar gives error.
 
 [root@vbox2 ~]# curl -b /tmp/webhdfs.cookie -i 
 'http://localhost:50070/webhdfs/v1/?op=LISTSTATUS&user.name=bcwalrus'
 HTTP/1.1 401 org.apache.hadoop.security.authentication.util.SignerException: 
 Invalid signature
 Content-Type: text/html; charset=iso-8859-1
 Set-Cookie: hadoop.auth=;Path=/;Expires=Thu, 01-Jan-1970 00:00:00 GMT
 Cache-Control: must-revalidate,no-cache,no-store
 Content-Length: 1520
 Server: Jetty(6.1.26)
 <html>
 <head>
 <meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1"/>
 <title>Error 401 
 org.apache.hadoop.security.authentication.util.SignerException: Invalid 
 signature</title>
 </head>
 <body><h2>HTTP ERROR 401</h2>
 <p>Problem accessing /webhdfs/v1/. Reason:
 <pre>org.apache.hadoop.security.authentication.util.SignerException: 
 Invalid signature</pre></p><hr /><i><small>Powered by 
 Jetty://</small></i><br/>
 ...
 
 ## Curl without cookie jar is ok.
 
 [root@vbox2 ~]# curl -i 
 'http://localhost:50070/webhdfs/v1/?op=LISTSTATUS&user.name=bcwalrus'
 HTTP/1.1 200 OK
 Content-Type: application/json
 Expires: Thu, 01-Jan-1970 00:00:00 

[jira] [Closed] (HADOOP-6802) Remove FS_CLIENT_BUFFER_DIR_KEY = fs.client.buffer.dir from CommonConfigurationKeys.java (not used, deprecated)

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6802?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-6802.
-


 Remove FS_CLIENT_BUFFER_DIR_KEY = fs.client.buffer.dir from 
 CommonConfigurationKeys.java (not used, deprecated)
 -

 Key: HADOOP-6802
 URL: https://issues.apache.org/jira/browse/HADOOP-6802
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf, fs
Affects Versions: 0.23.0
Reporter: Erik Steffl
Assignee: Sho Shimauchi
  Labels: newbie
 Fix For: 2.0.2-alpha

 Attachments: HADOOP-6802.txt, HADOOP-6802.txt


 In CommonConfigurationKeys.java:
 public static final String  FS_CLIENT_BUFFER_DIR_KEY = "fs.client.buffer.dir";
 The variable FS_CLIENT_BUFFER_DIR_KEY and the string "fs.client.buffer.dir" 
 are not used anywhere (checked the Hadoop Common, Hdfs and Mapred 
 projects), so it seems they should be removed.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8749) HADOOP-8031 changed the way in which relative xincludes are handled in Configuration.

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8749?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8749.
-


 HADOOP-8031 changed the way in which relative xincludes are handled in 
 Configuration.
 -

 Key: HADOOP-8749
 URL: https://issues.apache.org/jira/browse/HADOOP-8749
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Reporter: Ahmed Radwan
Assignee: Ahmed Radwan
 Fix For: 2.0.2-alpha

 Attachments: HADOOP-8749.patch, HADOOP-8749_rev2.patch, 
 HADOOP-8749_rev3.patch, HADOOP-8749_rev4.patch


 The patch from HADOOP-8031 changed the xml parsing to use 
 DocumentBuilder#parse(InputStream uri.openStream()) instead of 
 DocumentBuilder#parse(String uri.toString()).  I looked into the 
 implementations of javax.xml.parsers.DocumentBuilder and 
 org.xml.sax.InputSource, and there is a difference when the DocumentBuilder 
 parse(String) method is used versus parse(InputStream).
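 A hedged illustration of the difference (standard JAXP calls; the URL 
 argument is a placeholder):
 {code}
 import java.net.URL;
 import javax.xml.parsers.DocumentBuilder;
 import javax.xml.parsers.DocumentBuilderFactory;

 public class XIncludeBase {
   public static void main(String[] args) throws Exception {
     URL uri = new URL(args[0]);
     DocumentBuilderFactory f = DocumentBuilderFactory.newInstance();
     f.setNamespaceAware(true);
     f.setXIncludeAware(true);
     DocumentBuilder b = f.newDocumentBuilder();
     // parse(String) supplies the URI as the document's systemId, so
     // relative xi:include hrefs have a base URI to resolve against.
     b.parse(uri.toString());
     // parse(InputStream) supplies no systemId, so relative includes
     // can no longer be resolved the same way.
     b.parse(uri.openStream());
   }
 }
 {code}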

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8485) Don't hardcode Apache Hadoop 0.23 in the docs

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8485?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8485.
-


 Don't hardcode Apache Hadoop 0.23 in the docs
 ---

 Key: HADOOP-8485
 URL: https://issues.apache.org/jira/browse/HADOOP-8485
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.0.0-alpha
Reporter: Eli Collins
Assignee: Eli Collins
Priority: Minor
 Fix For: 2.0.2-alpha

 Attachments: hadoop-8485.txt


 The docs currently hardcode the strings "Apache Hadoop 0.23" and 
 "hadoop-0.20.205" in the main page.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8408) MR doesn't work with a non-default ViewFS mount table and security enabled

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8408?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8408.
-


 MR doesn't work with a non-default ViewFS mount table and security enabled
 --

 Key: HADOOP-8408
 URL: https://issues.apache.org/jira/browse/HADOOP-8408
 Project: Hadoop Common
  Issue Type: Bug
  Components: viewfs
Affects Versions: 2.0.0-alpha
Reporter: Aaron T. Myers
Assignee: Aaron T. Myers
 Fix For: 2.0.2-alpha

 Attachments: HADOOP-8408-amendment.patch, 
 HADOOP-8408-amendment.patch, HDFS-8408.patch


 With security enabled, if one sets up a ViewFS mount table using the 
 default mount table name, everything works as expected. However, if you 
 try to create a ViewFS mount table with a non-default name, you'll end up 
 getting an error like the following (in this case "vfs-cluster" was the 
 name of the mount table) when running an MR job:
 {noformat}
 java.lang.IllegalArgumentException: java.net.UnknownHostException: vfs-cluster
 {noformat}

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8770) NN should not RPC to self to find trash defaults (causes deadlock)

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8770?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8770.
-


 NN should not RPC to self to find trash defaults (causes deadlock)
 --

 Key: HADOOP-8770
 URL: https://issues.apache.org/jira/browse/HADOOP-8770
 Project: Hadoop Common
  Issue Type: Bug
  Components: trash
Affects Versions: 2.0.2-alpha
Reporter: Todd Lipcon
Assignee: Eli Collins
Priority: Blocker
 Fix For: 2.0.2-alpha

 Attachments: hdfs-3876.txt, hdfs-3876.txt, hdfs-3876.txt, 
 hdfs-3876.txt


 When transitioning a SBN to active, I ran into the following situation:
 - the TrashPolicy first gets loaded by an IPC Server Handler thread. The 
 {{initialize}} function then tries to make an RPC to the same node to find 
 out the defaults.
 - This is happening inside the NN write lock (since it's part of the active 
 initialization). Hence, all of the other handler threads are already blocked 
 waiting to get the NN lock.
 - Since no handler threads are free, the RPC blocks forever and the NN never 
 enters active state.
 We need to have a general policy that the NN should never make RPCs to itself 
 for any reason, due to potential for deadlocks like this.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8110) TestViewFsTrash occasionally fails

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8110?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8110.
-


 TestViewFsTrash occasionally fails
 --

 Key: HADOOP-8110
 URL: https://issues.apache.org/jira/browse/HADOOP-8110
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.3, 0.24.0
Reporter: Tsz Wo (Nicholas), SZE
Assignee: Jason Lowe
 Fix For: 0.23.3, 2.0.2-alpha

 Attachments: HADOOP-8110.patch, HADOOP-8110.patch, HADOOP-8110.patch


 {noformat}
 junit.framework.AssertionFailedError: -expunge failed expected:<0> but was:<1>
   at junit.framework.Assert.fail(Assert.java:47)
   at junit.framework.Assert.failNotEquals(Assert.java:283)
   at junit.framework.Assert.assertEquals(Assert.java:64)
   at junit.framework.Assert.assertEquals(Assert.java:195)
   at org.apache.hadoop.fs.TestTrash.trashShell(TestTrash.java:322)
   at 
 org.apache.hadoop.fs.viewfs.TestViewFsTrash.testTrash(TestViewFsTrash.java:73)
   ...
 {noformat}
 There are quite a few TestViewFsTrash failures recently.  E.g. [build #624 
 for 
 trunk|https://builds.apache.org/job/PreCommit-HADOOP-Build/624//testReport/org.apache.hadoop.fs.viewfs/TestViewFsTrash/testTrash/]
  and [build #2 for 
 0.23-PB|https://builds.apache.org/view/G-L/view/Hadoop/job/Hadoop-Common-0.23-PB-Build/2/testReport/junit/org.apache.hadoop.fs.viewfs/TestViewFsTrash/testTrash/].

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira


[jira] [Closed] (HADOOP-8239) Extend MD5MD5CRC32FileChecksum to show the actual checksum type being used

2012-10-11 Thread Arun C Murthy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8239?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun C Murthy closed HADOOP-8239.
-


 Extend MD5MD5CRC32FileChecksum to show the actual checksum type being used
 --

 Key: HADOOP-8239
 URL: https://issues.apache.org/jira/browse/HADOOP-8239
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs
Reporter: Kihwal Lee
Assignee: Kihwal Lee
 Fix For: 0.23.3, 2.0.2-alpha

 Attachments: hadoop-8239-after-hadoop-8240.patch.txt, 
 hadoop-8239-after-hadoop-8240.patch.txt, 
 hadoop-8239-before-hadoop-8240.patch.txt, 
 hadoop-8239-before-hadoop-8240.patch.txt, hadoop-8239-branch-0.23.patch.txt, 
 hadoop-8239-trunk-branch2.patch.txt, hadoop-8239-trunk-branch2.patch.txt, 
 hadoop-8239-trunk-branch2.patch.txt


 In order to support HADOOP-8060, MD5MD5CRC32FileChecksum needs to be 
 extended to carry the information on the actual checksum type being used. 
 Interoperability between the extended version and branch-1 should be 
 guaranteed when FileSystem.getFileChecksum() is called over hftp, webhdfs 
 or httpfs.

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira

