[jira] [Commented] (HADOOP-4885) Try to restore failed replicas of Name Node storage (at checkpoint time)

2012-03-14 Thread Brandon Li (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-4885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13229004#comment-13229004
 ] 

Brandon Li commented on HADOOP-4885:


Hi Eli,
Thanks for the comments!
The code base in branch-1 is slightly different from 0.21. 
Adding directories to removedStorageDirs in the original patch is already in 
branch-1. 
I didn't get your second question: my patch uses addStorageDir too. 
The same test, with minor modifications (e.g., comparing MD5 instead of length 
for edits files), is included in the backport patch.

Thanks.

 Try to restore failed replicas of Name Node storage (at checkpoint time)
 

 Key: HADOOP-4885
 URL: https://issues.apache.org/jira/browse/HADOOP-4885
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Boris Shkolnik
Assignee: Boris Shkolnik
 Fix For: 0.21.0

 Attachments: HADOOP-4885-1.patch, HADOOP-4885-3.patch, 
 HADOOP-4885-3.patch, HADOOP-4885.branch-1.patch, 
 HADOOP-4885.branch-1.patch.2, HADOOP-4885.patch, HADOOP-4885.patch




--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators: 
https://issues.apache.org/jira/secure/ContactAdministrators!default.jspa
For more information on JIRA, see: http://www.atlassian.com/software/jira




[jira] [Commented] (HADOOP-8165) Code contribution of GlusterFS implementation of Hadoop FileSystem Interface

2012-03-14 Thread Venky Shankar (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8165?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13229006#comment-13229006
 ] 

Venky Shankar commented on HADOOP-8165:
---

Eli,

We have tested our plugin with Hadoop version 0.20.2. I see you have moved the 
target version of this bug to 0.24.0. Is it ok if I send the patch against 
Hadoop 0.20.2 
(http://svn.apache.org/repos/asf/hadoop/common/tags/release-0.20.2/) ?

Thanks,
-Venky

 Code contribution of GlusterFS implementation of Hadoop FileSystem Interface
 

 Key: HADOOP-8165
 URL: https://issues.apache.org/jira/browse/HADOOP-8165
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs
Reporter: Venky Shankar
Assignee: Venky Shankar
 Attachments: glusterfs-hadoop-code.tar.gz, glusterfs-hadoop-jar.tar.gz


 GlusterFS is a software-only, highly available, scalable, centrally managed 
 storage pool for public and private cloud environments. GlusterFS has been 
 integrated with Hadoop using Hadoop's FileSystem interface.
 This ticket is filed so as to get our code/libs to be included with Hadoop.





[jira] [Commented] (HADOOP-5528) Binary partitioner

2012-03-14 Thread Klaas Bosteels (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-5528?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13229047#comment-13229047
 ] 

Klaas Bosteels commented on HADOOP-5528:


Hey Arun, since you seem to have been adding the typed bytes / binary streaming 
patches to 1.0, I wanted to suggest including this one as well because then 1.0 
would be fully compatible with Dumbo out of the box...

 Binary partitioner
 --

 Key: HADOOP-5528
 URL: https://issues.apache.org/jira/browse/HADOOP-5528
 Project: Hadoop Common
  Issue Type: New Feature
Reporter: Klaas Bosteels
Assignee: Klaas Bosteels
 Fix For: 0.21.0

 Attachments: 5528_20090401.patch, HADOOP-5528-0.18.patch, 
 HADOOP-5528.patch, HADOOP-5528.patch, HADOOP-5528.patch, HADOOP-5528.patch, 
 HADOOP-5528.patch


 It would be useful to have a {{BinaryPartitioner}} that partitions 
 {{BinaryComparable}} keys by hashing a configurable part of the bytes array 
 corresponding to each key.
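For illustration, a partitioner of the kind described could look like the
sketch below. This is hypothetical code, not the attached patch: the class
name, constructor, and the choice of hash are all assumptions (the hash
follows the familiar 31-multiplier scheme), with the configurable slice
given as an offset and length into each key's byte array.

```java
// Hypothetical sketch of a binary partitioner: hash only the configurable
// [offset, offset + length) slice of each key's byte array, so keys that
// share that slice land in the same partition.
public class BinaryPartitionerSketch {
    private final int offset;  // assumed config: start of the hashed slice
    private final int length;  // assumed config: slice length

    public BinaryPartitionerSketch(int offset, int length) {
        this.offset = offset;
        this.length = length;
    }

    public int getPartition(byte[] key, int numPartitions) {
        int end = Math.min(key.length, offset + length);
        int hash = 1;
        for (int i = offset; i < end; i++) {
            hash = 31 * hash + key[i];  // standard 31-multiplier byte hash
        }
        // Mask off the sign bit so the modulo result is non-negative.
        return (hash & Integer.MAX_VALUE) % numPartitions;
    }

    public static void main(String[] args) {
        BinaryPartitionerSketch p = new BinaryPartitionerSketch(0, 4);
        // Keys sharing their first 4 bytes go to the same partition.
        System.out.println(p.getPartition("abcdXX".getBytes(), 10)
                == p.getPartition("abcdYY".getBytes(), 10));
    }
}
```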





[jira] [Commented] (HADOOP-8164) Handle paths using back slash as path separator for windows only

2012-03-14 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13229173#comment-13229173
 ] 

Hudson commented on HADOOP-8164:


Integrated in Hadoop-Mapreduce-0.23-Build #225 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Build/225/])
HADOOP-8164. Merging change 1300290 from trunk to 0.23 (Revision 1300292)

 Result = FAILURE
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1300292
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Path.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestPath.java


 Handle paths using back slash as path separator for windows only
 

 Key: HADOOP-8164
 URL: https://issues.apache.org/jira/browse/HADOOP-8164
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Affects Versions: 0.23.0, 0.24.0
Reporter: Suresh Srinivas
Assignee: Daryn Sharp
 Fix For: 0.24.0, 0.23.2, 0.23.3

 Attachments: HADOOP-8139-6.patch, HADOOP-8164.patch


 Please see the description in HADOOP-8139. Using escape character back slash 
 as path separator could cause accidental deletion of data. This jira for now 
 supports back slash only for windows. Eventually HADOOP-8139 will remove the 
 support for back slash based paths.





[jira] [Commented] (HADOOP-8139) Path does not allow metachars to be escaped

2012-03-14 Thread Suresh Srinivas (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13229177#comment-13229177
 ] 

Suresh Srinivas commented on HADOOP-8139:
-

bq. Should this jira be resolved as won't fix 
This bug should still be fixed. As discussed earlier, we need to remove support 
for back slash based paths, make the changes needed in RLFS, and test it on 
Windows. Alexander is currently working on the 1.0 experimental branch. Not sure 
if he will be able to do this in the 0.23 time frame.

 Path does not allow metachars to be escaped
 ---

 Key: HADOOP-8139
 URL: https://issues.apache.org/jira/browse/HADOOP-8139
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.0, 0.24.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HADOOP-8139-2.patch, HADOOP-8139-3.patch, 
 HADOOP-8139-4.patch, HADOOP-8139-5.patch, HADOOP-8139-6.patch, 
 HADOOP-8139.patch, HADOOP-8139.patch


 Path converts \ into /, probably for windows support?  This means it's 
 impossible for the user to escape metachars in a path name.  Glob expansion 
 can have deadly results.
 Here are the most egregious examples. A user accidentally creates a path like 
 /user/me/*/file.  Now they want to remove it.
 {noformat}hadoop fs -rmr -skipTrash '/user/me/\*' becomes...
 hadoop fs -rmr -skipTrash /user/me/*{noformat}
 * User/Admin: Nuked their home directory or any given directory
 {noformat}hadoop fs -rmr -skipTrash '\*' becomes...
 hadoop fs -rmr -skipTrash /*{noformat}
 * User:  Deleted _everything_ they have access to on the cluster
 * Admin: *Nukes the entire cluster*
 Note: FsShell is shown for illustrative purposes, however the problem is in 
 the Path object, not FsShell.
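The root cause can be shown in a few lines. This is a hedged sketch, not
Hadoop's actual Path code; it only illustrates the conversion described
above, in which the escaping backslash is destroyed before glob expansion
ever sees it.

```java
// Sketch of the problematic normalization: converting '\' to '/'
// makes it impossible to escape glob metachars in a path name.
public class PathEscapeSketch {
    static String normalize(String p) {
        return p.replace('\\', '/');  // the windows-support conversion
    }

    public static void main(String[] args) {
        // The user tried to escape the glob metachar with '\*' ...
        System.out.println(normalize("\\*"));
        // ... but it prints "/*", which globs everything under the root,
        // matching the second {noformat} example above.
    }
}
```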





[jira] [Commented] (HADOOP-8164) Handle paths using back slash as path separator for windows only

2012-03-14 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13229205#comment-13229205
 ] 

Hudson commented on HADOOP-8164:


Integrated in Hadoop-Mapreduce-trunk #1019 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1019/])
HADOOP-8164. Back slash as path separator is handled for Windows only. 
Contributed by Daryn Sharp. (Revision 1300290)

 Result = SUCCESS
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1300290
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Path.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestPath.java


 Handle paths using back slash as path separator for windows only
 

 Key: HADOOP-8164
 URL: https://issues.apache.org/jira/browse/HADOOP-8164
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Affects Versions: 0.23.0, 0.24.0
Reporter: Suresh Srinivas
Assignee: Daryn Sharp
 Fix For: 0.24.0, 0.23.2, 0.23.3

 Attachments: HADOOP-8139-6.patch, HADOOP-8164.patch


 Please see the description in HADOOP-8139. Using escape character back slash 
 as path separator could cause accidental deletion of data. This jira for now 
 supports back slash only for windows. Eventually HADOOP-8139 will remove the 
 support for back slash based paths.





[jira] [Created] (HADOOP-8169) javadoc generation fails with java.lang.OutOfMemoryError: Java heap space

2012-03-14 Thread Thomas Graves (Created) (JIRA)
javadoc generation fails with java.lang.OutOfMemoryError: Java heap space
-

 Key: HADOOP-8169
 URL: https://issues.apache.org/jira/browse/HADOOP-8169
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 0.23.3
Reporter: Thomas Graves
Assignee: Thomas Graves
Priority: Critical


Building the docs (mvn package -Pdocs -Dtar -DskipTests) on branch-0.23 results 
in a javadoc java.lang.OutOfMemoryError: Java heap space. Note this seems to 
happen only when building with 32-bit Java; 64-bit works fine.





[jira] [Commented] (HADOOP-8164) Handle paths using back slash as path separator for windows only

2012-03-14 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13229257#comment-13229257
 ] 

Hudson commented on HADOOP-8164:


Integrated in Hadoop-Hdfs-trunk #984 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/984/])
HADOOP-8164. Back slash as path separator is handled for Windows only. 
Contributed by Daryn Sharp. (Revision 1300290)

 Result = FAILURE
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1300290
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Path.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestPath.java


 Handle paths using back slash as path separator for windows only
 

 Key: HADOOP-8164
 URL: https://issues.apache.org/jira/browse/HADOOP-8164
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Affects Versions: 0.23.0, 0.24.0
Reporter: Suresh Srinivas
Assignee: Daryn Sharp
 Fix For: 0.24.0, 0.23.2, 0.23.3

 Attachments: HADOOP-8139-6.patch, HADOOP-8164.patch


 Please see the description in HADOOP-8139. Using escape character back slash 
 as path separator could cause accidental deletion of data. This jira for now 
 supports back slash only for windows. Eventually HADOOP-8139 will remove the 
 support for back slash based paths.





[jira] [Resolved] (HADOOP-8020) Reduce the allowed Javadoc warnings from 13 to 11

2012-03-14 Thread Jonathan Eagles (Resolved) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Eagles resolved HADOOP-8020.
-

Resolution: Invalid

 Reduce the allowed Javadoc warnings from 13 to 11
 -

 Key: HADOOP-8020
 URL: https://issues.apache.org/jira/browse/HADOOP-8020
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 0.23.1
Reporter: Jonathan Eagles
Assignee: Jonathan Eagles
 Attachments: HADOOP-8020.patch, patchJavadocWarnings.txt.hadoop-trunk


 OK_JAVADOC_WARNINGS is set too high in 
 hadoop-common-project/dev-support/test-patch.properties
 {noformat}
 $ cd hadoop-common-project/
 $ mvn clean test javadoc:javadoc -DskipTests -Pdocs -DHadoopPatchProcess 
  > ~/patchJavadocWarnings.txt.hadoop-trunk 2>&1
 $ grep '\[WARNING\]' ~/patchJavadocWarnings.txt.hadoop-trunk | awk '/Javadoc 
 Warnings/,EOF' | grep warning | awk 'BEGIN {total = 0} {total += 1} END 
 {print total}'
 11
 {noformat}
 {noformat}
 $ cat dev-support/test-patch.properties
 OK_RELEASEAUDIT_WARNINGS=0
 OK_FINDBUGS_WARNINGS=0
 OK_JAVADOC_WARNINGS=13
 {noformat}
 This will let 2 new javadoc warnings in and still +1 the build





[jira] [Updated] (HADOOP-8169) javadoc generation fails with java.lang.OutOfMemoryError: Java heap space

2012-03-14 Thread Thomas Graves (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-8169:
--

Status: Patch Available  (was: Open)

 javadoc generation fails with java.lang.OutOfMemoryError: Java heap space
 -

 Key: HADOOP-8169
 URL: https://issues.apache.org/jira/browse/HADOOP-8169
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 0.23.3
Reporter: Thomas Graves
Assignee: Thomas Graves
Priority: Critical
 Attachments: HADOOP-8169.patch


 building the docs (mvn package -Pdocs -Dtar -DskipTests) on branch-0.23 
 results in a javadoc java.lang.OutOfMemoryError: Java heap space. Note this 
 seems to only happen when building with 32 bit java, 64 bit works fine.





[jira] [Updated] (HADOOP-8169) javadoc generation fails with java.lang.OutOfMemoryError: Java heap space

2012-03-14 Thread Thomas Graves (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Graves updated HADOOP-8169:
--

Attachment: HADOOP-8169.patch

set maxmemory to 512m
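For reference, giving the javadoc tool its own heap would presumably look
something like the pom fragment below. This is an assumption about the
shape of the change, not the contents of HADOOP-8169.patch; the exact
plugin block and its location in hadoop-project-dist/pom.xml may differ.

```xml
<!-- Sketch: cap/raise the javadoc tool's heap so 32-bit builds don't OOM. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-javadoc-plugin</artifactId>
  <configuration>
    <maxmemory>512m</maxmemory>
  </configuration>
</plugin>
```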

 javadoc generation fails with java.lang.OutOfMemoryError: Java heap space
 -

 Key: HADOOP-8169
 URL: https://issues.apache.org/jira/browse/HADOOP-8169
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 0.23.3
Reporter: Thomas Graves
Assignee: Thomas Graves
Priority: Critical
 Attachments: HADOOP-8169.patch


 building the docs (mvn package -Pdocs -Dtar -DskipTests) on branch-0.23 
 results in a javadoc java.lang.OutOfMemoryError: Java heap space. Note this 
 seems to only happen when building with 32 bit java, 64 bit works fine.





[jira] [Commented] (HADOOP-8164) Handle paths using back slash as path separator for windows only

2012-03-14 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8164?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13229262#comment-13229262
 ] 

Hudson commented on HADOOP-8164:


Integrated in Hadoop-Hdfs-0.23-Build #197 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Build/197/])
HADOOP-8164. Merging change 1300290 from trunk to 0.23 (Revision 1300292)

 Result = FAILURE
suresh : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1300292
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystem.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/Path.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestPath.java


 Handle paths using back slash as path separator for windows only
 

 Key: HADOOP-8164
 URL: https://issues.apache.org/jira/browse/HADOOP-8164
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Affects Versions: 0.23.0, 0.24.0
Reporter: Suresh Srinivas
Assignee: Daryn Sharp
 Fix For: 0.24.0, 0.23.2, 0.23.3

 Attachments: HADOOP-8139-6.patch, HADOOP-8164.patch


 Please see the description in HADOOP-8139. Using escape character back slash 
 as path separator could cause accidental deletion of data. This jira for now 
 supports back slash only for windows. Eventually HADOOP-8139 will remove the 
 support for back slash based paths.





[jira] [Commented] (HADOOP-8169) javadoc generation fails with java.lang.OutOfMemoryError: Java heap space

2012-03-14 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13229268#comment-13229268
 ] 

Hadoop QA commented on HADOOP-8169:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12518331/HADOOP-8169.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

-1 tests included.  The patch doesn't appear to include any new or modified 
tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/710//console

This message is automatically generated.

 javadoc generation fails with java.lang.OutOfMemoryError: Java heap space
 -

 Key: HADOOP-8169
 URL: https://issues.apache.org/jira/browse/HADOOP-8169
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 0.23.3
Reporter: Thomas Graves
Assignee: Thomas Graves
Priority: Critical
 Attachments: HADOOP-8169.patch


 building the docs (mvn package -Pdocs -Dtar -DskipTests) on branch-0.23 
 results in a javadoc java.lang.OutOfMemoryError: Java heap space. Note this 
 seems to only happen when building with 32 bit java, 64 bit works fine.





[jira] [Commented] (HADOOP-8169) javadoc generation fails with java.lang.OutOfMemoryError: Java heap space

2012-03-14 Thread Thomas Graves (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13229276#comment-13229276
 ] 

Thomas Graves commented on HADOOP-8169:
---

The patch is to the hadoop-project-dist directory, so Jenkins doesn't apply it 
correctly.

 javadoc generation fails with java.lang.OutOfMemoryError: Java heap space
 -

 Key: HADOOP-8169
 URL: https://issues.apache.org/jira/browse/HADOOP-8169
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 0.23.3
Reporter: Thomas Graves
Assignee: Thomas Graves
Priority: Critical
 Attachments: HADOOP-8169.patch


 building the docs (mvn package -Pdocs -Dtar -DskipTests) on branch-0.23 
 results in a javadoc java.lang.OutOfMemoryError: Java heap space. Note this 
 seems to only happen when building with 32 bit java, 64 bit works fine.





[jira] [Updated] (HADOOP-8169) javadoc generation fails with java.lang.OutOfMemoryError: Java heap space

2012-03-14 Thread Robert Joseph Evans (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8169?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Joseph Evans updated HADOOP-8169:


   Resolution: Fixed
Fix Version/s: 0.23.3
   Status: Resolved  (was: Patch Available)

Thanks Tom.  +1 the patch is very small and I verified that the build still 
works.  I checked this into trunk and branch-0.23.

 javadoc generation fails with java.lang.OutOfMemoryError: Java heap space
 -

 Key: HADOOP-8169
 URL: https://issues.apache.org/jira/browse/HADOOP-8169
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 0.23.3
Reporter: Thomas Graves
Assignee: Thomas Graves
Priority: Critical
 Fix For: 0.23.3

 Attachments: HADOOP-8169.patch


 building the docs (mvn package -Pdocs -Dtar -DskipTests) on branch-0.23 
 results in a javadoc java.lang.OutOfMemoryError: Java heap space. Note this 
 seems to only happen when building with 32 bit java, 64 bit works fine.





[jira] [Commented] (HADOOP-8169) javadoc generation fails with java.lang.OutOfMemoryError: Java heap space

2012-03-14 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13229302#comment-13229302
 ] 

Hudson commented on HADOOP-8169:


Integrated in Hadoop-Hdfs-trunk-Commit #1947 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/1947/])
HADOOP-8169. javadoc generation fails with java.lang.OutOfMemoryError: Java 
heap space (tgraves via bobby) (Revision 1300619)

 Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1300619
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-project-dist/pom.xml


 javadoc generation fails with java.lang.OutOfMemoryError: Java heap space
 -

 Key: HADOOP-8169
 URL: https://issues.apache.org/jira/browse/HADOOP-8169
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 0.23.3
Reporter: Thomas Graves
Assignee: Thomas Graves
Priority: Critical
 Fix For: 0.23.3

 Attachments: HADOOP-8169.patch


 building the docs (mvn package -Pdocs -Dtar -DskipTests) on branch-0.23 
 results in a javadoc java.lang.OutOfMemoryError: Java heap space. Note this 
 seems to only happen when building with 32 bit java, 64 bit works fine.





[jira] [Commented] (HADOOP-8169) javadoc generation fails with java.lang.OutOfMemoryError: Java heap space

2012-03-14 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13229303#comment-13229303
 ] 

Hudson commented on HADOOP-8169:


Integrated in Hadoop-Common-trunk-Commit #1872 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1872/])
HADOOP-8169. javadoc generation fails with java.lang.OutOfMemoryError: Java 
heap space (tgraves via bobby) (Revision 1300619)

 Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1300619
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/trunk/hadoop-project-dist/pom.xml


 javadoc generation fails with java.lang.OutOfMemoryError: Java heap space
 -

 Key: HADOOP-8169
 URL: https://issues.apache.org/jira/browse/HADOOP-8169
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 0.23.3
Reporter: Thomas Graves
Assignee: Thomas Graves
Priority: Critical
 Fix For: 0.23.3

 Attachments: HADOOP-8169.patch


 building the docs (mvn package -Pdocs -Dtar -DskipTests) on branch-0.23 
 results in a javadoc java.lang.OutOfMemoryError: Java heap space. Note this 
 seems to only happen when building with 32 bit java, 64 bit works fine.





[jira] [Commented] (HADOOP-8169) javadoc generation fails with java.lang.OutOfMemoryError: Java heap space

2012-03-14 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13229307#comment-13229307
 ] 

Hudson commented on HADOOP-8169:


Integrated in Hadoop-Hdfs-0.23-Commit #669 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Commit/669/])
svn merge -c 1300619 from trunk to branch-0.23 FIXES: HADOOP-8169. javadoc 
generation fails with java.lang.OutOfMemoryError: Java heap space (tgraves via 
bobby) (Revision 1300620)

 Result = SUCCESS
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1300620
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-project-dist/pom.xml


 javadoc generation fails with java.lang.OutOfMemoryError: Java heap space
 -

 Key: HADOOP-8169
 URL: https://issues.apache.org/jira/browse/HADOOP-8169
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 0.23.3
Reporter: Thomas Graves
Assignee: Thomas Graves
Priority: Critical
 Fix For: 0.23.3

 Attachments: HADOOP-8169.patch


 building the docs (mvn package -Pdocs -Dtar -DskipTests) on branch-0.23 
 results in a javadoc java.lang.OutOfMemoryError: Java heap space. Note this 
 seems to only happen when building with 32 bit java, 64 bit works fine.





[jira] [Commented] (HADOOP-8167) Configuration deprecation logic breaks backwards compatibility

2012-03-14 Thread Tom White (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13229312#comment-13229312
 ] 

Tom White commented on HADOOP-8167:
---

#2 seems like the best option in this case. +1 for the patch.

 Configuration deprecation logic breaks backwards compatibility
 --

 Key: HADOOP-8167
 URL: https://issues.apache.org/jira/browse/HADOOP-8167
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 0.24.0, 0.23.3
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Blocker
 Fix For: 0.23.3

 Attachments: HADOOP-8167.patch


 The deprecated Configuration logic works as follows:
 For a dK deprecated key in favor of nK:
 * on set(dK, V), it stores (nK,V)
 * on get(dK) it does a reverseLookup of dK to nK and looks for get(nK)
 While this works fine for single set/get operations, the iterator() method 
 that returns an iterator of all config key/values, returns only the new keys.
 This breaks applications that did a set(dK, V) and expect, when iterating 
 over the configuration to find (dK, V).
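The set/get rewriting described above can be sketched with a toy map. This
is hypothetical code, not Hadoop's Configuration class; the key names dK
and nK are the placeholders from the description. It shows why single
set/get works while iteration only ever sees the new key.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the deprecation behavior: setting a deprecated key dK
// stores the value under the new key nK, so iterating over the stored
// properties never yields dK.
public class DeprecationSketch {
    static final Map<String, String> DEPRECATED = new HashMap<>();
    static { DEPRECATED.put("dK", "nK"); }  // placeholder key names

    final Map<String, String> props = new HashMap<>();

    void set(String key, String value) {
        // A deprecated key is rewritten to its replacement on set().
        props.put(DEPRECATED.getOrDefault(key, key), value);
    }

    String get(String key) {
        // get() maps the deprecated key the same way, so get("dK") works.
        return props.get(DEPRECATED.getOrDefault(key, key));
    }

    public static void main(String[] args) {
        DeprecationSketch conf = new DeprecationSketch();
        conf.set("dK", "V");
        System.out.println(conf.get("dK"));       // prints V
        System.out.println(conf.props.keySet());  // prints [nK] -- dK is gone
    }
}
```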





[jira] [Created] (HADOOP-8170) LdapGroupsMapping should support a configurable search limit

2012-03-14 Thread Jonathan Natkins (Created) (JIRA)
LdapGroupsMapping should support a configurable search limit


 Key: HADOOP-8170
 URL: https://issues.apache.org/jira/browse/HADOOP-8170
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Jonathan Natkins


LDAP servers can be configured with search limits, which would cause the 
LdapGroupsMapping class to throw an exception if it returned more results than 
the search limit allowed. If a user belonged to a very large number of groups, 
or the search limit was set to a fairly low number, this could result in some 
undesirable behavior.

The LdapGroupsMapping should be augmented to page results, if necessary.





[jira] [Updated] (HADOOP-8170) LdapGroupsMapping should support a configurable search limit

2012-03-14 Thread Jonathan Natkins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8170?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Natkins updated HADOOP-8170:
-

Issue Type: Improvement  (was: Bug)

 LdapGroupsMapping should support a configurable search limit
 

 Key: HADOOP-8170
 URL: https://issues.apache.org/jira/browse/HADOOP-8170
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Jonathan Natkins

 LDAP servers can be configured with search limits, which would cause the 
 LdapGroupsMapping class to throw an exception if it returned more results 
 than the search limit allowed. If a user belonged to a very large number of 
 groups, or the search limit was set to a fairly low number, this could result 
 in some undesirable behavior.
 The LdapGroupsMapping should be augmented to page results, if necessary.





[jira] [Updated] (HADOOP-8121) Active Directory Group Mapping Service

2012-03-14 Thread Jonathan Natkins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Natkins updated HADOOP-8121:
-

Attachment: HADOOP-8121.patch

I've added some documentation to hdfs_permissions_guide.xml to note that the 
implementation exists, and point to the javadocs for more information.

I've also added some additional information on the topics we've discussed to 
the javadocs, and filed HADOOP-8170 to track the search limit improvement.

Additionally, I've updated the patch to hopefully deal with the findbugs 
warnings that popped up last time.

 Active Directory Group Mapping Service
 --

 Key: HADOOP-8121
 URL: https://issues.apache.org/jira/browse/HADOOP-8121
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Reporter: Jonathan Natkins
Assignee: Jonathan Natkins
 Attachments: HADOOP-8121.patch, HADOOP-8121.patch, HADOOP-8121.patch, 
 HADOOP-8121.patch, HADOOP-8121.patch, HADOOP-8121.patch, HADOOP-8121.patch, 
 HADOOP-8121.patch, HADOOP-8121.patch, HADOOP-8121.patch


 Planning on building a group mapping service that will go and talk directly 
 to an Active Directory setup to get group memberships





[jira] [Commented] (HADOOP-8169) javadoc generation fails with java.lang.OutOfMemoryError: Java heap space

2012-03-14 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8169?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13229354#comment-13229354
 ] 

Hudson commented on HADOOP-8169:


Integrated in Hadoop-Mapreduce-0.23-Commit #686 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-0.23-Commit/686/])
svn merge -c 1300619 from trunk to branch-0.23 FIXES: HADOOP-8169. javadoc 
generation fails with java.lang.OutOfMemoryError: Java heap space (tgraves via 
bobby) (Revision 1300620)

 Result = ABORTED
bobby : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1300620
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* /hadoop/common/branches/branch-0.23/hadoop-project-dist/pom.xml


 javadoc generation fails with java.lang.OutOfMemoryError: Java heap space
 -

 Key: HADOOP-8169
 URL: https://issues.apache.org/jira/browse/HADOOP-8169
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 0.23.3
Reporter: Thomas Graves
Assignee: Thomas Graves
Priority: Critical
 Fix For: 0.23.3

 Attachments: HADOOP-8169.patch


 building the docs (mvn package -Pdocs -Dtar -DskipTests) on branch-0.23 
 results in a javadoc java.lang.OutOfMemoryError: Java heap space. Note this 
 seems to only happen when building with 32 bit java, 64 bit works fine.





[jira] [Commented] (HADOOP-8167) Configuration deprecation logic breaks backwards compatibility

2012-03-14 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13229370#comment-13229370
 ] 

Hudson commented on HADOOP-8167:


Integrated in Hadoop-Hdfs-trunk-Commit #1948 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Commit/1948/])
HADOOP-8167. Configuration deprecation logic breaks backwards compatibility 
(tucu) (Revision 1300642)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1300642
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfigurationDeprecation.java


 Configuration deprecation logic breaks backwards compatibility
 --

 Key: HADOOP-8167
 URL: https://issues.apache.org/jira/browse/HADOOP-8167
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 0.24.0, 0.23.3
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Blocker
 Fix For: 0.23.3

 Attachments: HADOOP-8167.patch


 The deprecated Configuration logic works as follows:
 For a dK deprecated key in favor of nK:
 * on set(dK, V), it stores (nK,V)
 * on get(dK) it does a reverseLookup of dK to nK and looks for get(nK)
 While this works fine for single set/get operations, the iterator() method, 
 which returns all config key/values, returns only the new keys.
 This breaks applications that did a set(dK, V) and expect, when iterating 
 over the configuration, to find (dK, V).





[jira] [Commented] (HADOOP-8167) Configuration deprecation logic breaks backwards compatibility

2012-03-14 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13229374#comment-13229374
 ] 

Hudson commented on HADOOP-8167:


Integrated in Hadoop-Common-0.23-Commit #679 (See 
[https://builds.apache.org/job/Hadoop-Common-0.23-Commit/679/])
Merge -r 1300641:1300642 from trunk to branch. FIXES: HADOOP-8167 (Revision 
1300644)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1300644
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfigurationDeprecation.java


 Configuration deprecation logic breaks backwards compatibility
 --

 Key: HADOOP-8167
 URL: https://issues.apache.org/jira/browse/HADOOP-8167
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 0.24.0, 0.23.3
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Blocker
 Fix For: 0.23.3

 Attachments: HADOOP-8167.patch


 The deprecated Configuration logic works as follows:
 For a dK deprecated key in favor of nK:
 * on set(dK, V), it stores (nK,V)
 * on get(dK) it does a reverseLookup of dK to nK and looks for get(nK)
 While this works fine for single set/get operations, the iterator() method, 
 which returns all config key/values, returns only the new keys.
 This breaks applications that did a set(dK, V) and expect, when iterating 
 over the configuration, to find (dK, V).





[jira] [Commented] (HADOOP-8167) Configuration deprecation logic breaks backwards compatibility

2012-03-14 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13229394#comment-13229394
 ] 

Hudson commented on HADOOP-8167:


Integrated in Hadoop-Hdfs-0.23-Commit #670 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-0.23-Commit/670/])
Merge -r 1300641:1300642 from trunk to branch. FIXES: HADOOP-8167 (Revision 
1300644)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1300644
Files : 
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
/hadoop/common/branches/branch-0.23/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfigurationDeprecation.java


 Configuration deprecation logic breaks backwards compatibility
 --

 Key: HADOOP-8167
 URL: https://issues.apache.org/jira/browse/HADOOP-8167
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 0.24.0, 0.23.3
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Blocker
 Fix For: 0.23.3

 Attachments: HADOOP-8167.patch


 The deprecated Configuration logic works as follows:
 For a dK deprecated key in favor of nK:
 * on set(dK, V), it stores (nK,V)
 * on get(dK) it does a reverseLookup of dK to nK and looks for get(nK)
 While this works fine for single set/get operations, the iterator() method, 
 which returns all config key/values, returns only the new keys.
 This breaks applications that did a set(dK, V) and expect, when iterating 
 over the configuration, to find (dK, V).





[jira] [Commented] (HADOOP-8167) Configuration deprecation logic breaks backwards compatibility

2012-03-14 Thread Hudson (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8167?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13229396#comment-13229396
 ] 

Hudson commented on HADOOP-8167:


Integrated in Hadoop-Common-trunk-Commit #1873 (See 
[https://builds.apache.org/job/Hadoop-Common-trunk-Commit/1873/])
HADOOP-8167. Configuration deprecation logic breaks backwards compatibility 
(tucu) (Revision 1300642)

 Result = SUCCESS
tucu : http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1300642
Files : 
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/conf/Configuration.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/conf/TestConfigurationDeprecation.java


 Configuration deprecation logic breaks backwards compatibility
 --

 Key: HADOOP-8167
 URL: https://issues.apache.org/jira/browse/HADOOP-8167
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 0.24.0, 0.23.3
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
Priority: Blocker
 Fix For: 0.23.3

 Attachments: HADOOP-8167.patch


 The deprecated Configuration logic works as follows:
 For a dK deprecated key in favor of nK:
 * on set(dK, V), it stores (nK,V)
 * on get(dK) it does a reverseLookup of dK to nK and looks for get(nK)
 While this works fine for single set/get operations, the iterator() method, 
 which returns all config key/values, returns only the new keys.
 This breaks applications that did a set(dK, V) and expect, when iterating 
 over the configuration, to find (dK, V).





[jira] [Commented] (HADOOP-8149) cap space usage of default log4j rolling policy

2012-03-14 Thread Eli Collins (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8149?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13229428#comment-13229428
 ] 

Eli Collins commented on HADOOP-8149:
-

bq. Have you noticed cases of data loss issues. Because I have not heard of 
these instances.

No, me neither. But we do frequently see the issue of partitions filling up; 
people expect this to work out of the box.

bq. They should set the logger to RFA only when HADOOP_*_LOGGER is not already 
set

Sounds like a plan. Pat, how about updating the patch to (a) add back the DRFA 
log4j.properties section and (b) only set HADOOP_*_LOGGER if it's not already 
set, so it can be overridden in hadoop-env.sh?

 cap space usage of default log4j rolling policy 
 

 Key: HADOOP-8149
 URL: https://issues.apache.org/jira/browse/HADOOP-8149
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Reporter: Patrick Hunt
Assignee: Patrick Hunt
 Attachments: HADOOP-8149.patch, HADOOP-8149.patch


 I've seen several critical production issues because logs are not 
 automatically removed after some time and accumulate. Changes to Hadoop's 
 default log4j file appender would help with this.
 I recommend we move to an appender which:
 1) caps the max file size (configurable)
 2) caps the max number of files to keep (configurable)
 3) uses rolling file appender rather than DRFA, see the warning here:
 http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/DailyRollingFileAppender.html
 Specifically: DailyRollingFileAppender has been observed to exhibit 
 synchronization issues and data loss.
 We'd lose (based on the default log4j configuration) the daily rolling 
 aspect, however increase reliability.
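
A sketch of what points 1) and 2) might look like in log4j 1.2 properties form (the appender name, file size, and backup count are illustrative assumptions, not the proposed patch):

```properties
# Hypothetical size-capped RollingFileAppender; values are illustrative.
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hadoop.log.dir}/${hadoop.log.file}
# Cap the max file size (point 1) and the number of files kept (point 2).
log4j.appender.RFA.MaxFileSize=256MB
log4j.appender.RFA.MaxBackupIndex=20
log4j.appender.RFA.layout=org.apache.log4j.PatternLayout
log4j.appender.RFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
```

Total disk usage is then bounded at roughly MaxFileSize * (MaxBackupIndex + 1) per log.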





[jira] [Commented] (HADOOP-4885) Try to restore failed replicas of Name Node storage (at checkpoint time)

2012-03-14 Thread Tsz Wo (Nicholas), SZE (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-4885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13229496#comment-13229496
 ] 

Tsz Wo (Nicholas), SZE commented on HADOOP-4885:


+1  The patch also looks good to me.  I will commit this in HDFS-3075.

 Try to restore failed replicas of Name Node storage (at checkpoint time)
 

 Key: HADOOP-4885
 URL: https://issues.apache.org/jira/browse/HADOOP-4885
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Boris Shkolnik
Assignee: Boris Shkolnik
 Fix For: 0.21.0

 Attachments: HADOOP-4885-1.patch, HADOOP-4885-3.patch, 
 HADOOP-4885-3.patch, HADOOP-4885.branch-1.patch, 
 HADOOP-4885.branch-1.patch.2, HADOOP-4885.branch-1.patch.3, 
 HADOOP-4885.patch, HADOOP-4885.patch








[jira] [Commented] (HADOOP-8139) Path does not allow metachars to be escaped

2012-03-14 Thread Daryn Sharp (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13229557#comment-13229557
 ] 

Daryn Sharp commented on HADOOP-8139:
-

@Alexander - Yes, nearly every FsShell command supports unix-style globs; 
tail is the only exception that comes to mind.  The problem is that the subbing 
of \ to / rendered it impossible to quote metachars.  For instance, \* turned 
into /*!

One of the problems I encountered when trying to modify RLFS is that it seems, 
based only on reading the code, that local files may be returned with \ or with 
/.  If true, we probably don't want to blindly convert \ to /.

If we don't want the as-backwards-compatible-as-possible solution I tried to 
implement, we can try modifying RLFS to sub \ to / during File-to-Path 
conversion if File.separatorChar != Path.SEPARATOR.  RLFS will also need to 
override globStatus to convert \ to ^ for quoting of metachars.

I think this is doable as long as we are willing to sacrifice c:\dir\... in 23.
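
The quoting loss is reproducible with a plain string substitution (a hypothetical stand-in, not the actual org.apache.hadoop.fs.Path code):

```java
// Hypothetical illustration of the hazard discussed above: blindly
// substituting '\' with '/' destroys metachar quoting.
class BackslashSub {
    // Mimics the separator normalization, presumably done for Windows support.
    static String toUnixSeparators(String path) {
        return path.replace('\\', '/');
    }

    public static void main(String[] args) {
        String quoted = "\\*";                    // user typed '\*': a literal '*'
        String subbed = toUnixSeparators(quoted); // becomes "/*": a glob over /
        System.out.println(quoted + " -> " + subbed);
    }
}
```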

 Path does not allow metachars to be escaped
 ---

 Key: HADOOP-8139
 URL: https://issues.apache.org/jira/browse/HADOOP-8139
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.23.0, 0.24.0
Reporter: Daryn Sharp
Assignee: Daryn Sharp
Priority: Blocker
 Attachments: HADOOP-8139-2.patch, HADOOP-8139-3.patch, 
 HADOOP-8139-4.patch, HADOOP-8139-5.patch, HADOOP-8139-6.patch, 
 HADOOP-8139.patch, HADOOP-8139.patch


 Path converts \ into /, probably for windows support?  This means it's 
 impossible for the user to escape metachars in a path name.  Glob expansion 
 can have deadly results.
 Here are the most egregious examples. A user accidentally creates a path like 
 /user/me/*/file.  Now they want to remove it.
 {noformat}hadoop fs -rmr -skipTrash '/user/me/\*' becomes...
 hadoop fs -rmr -skipTrash /user/me/*{noformat}
 * User/Admin: Nuked their home directory or any given directory
 {noformat}hadoop fs -rmr -skipTrash '\*' becomes...
 hadoop fs -rmr -skipTrash /*{noformat}
 * User:  Deleted _everything_ they have access to on the cluster
 * Admin: *Nukes the entire cluster*
 Note: FsShell is shown for illustrative purposes, however the problem is in 
 the Path object, not FsShell.





[jira] [Updated] (HADOOP-8149) cap space usage of default log4j rolling policy

2012-03-14 Thread Patrick Hunt (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8149?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Patrick Hunt updated HADOOP-8149:
-

Status: Open  (was: Patch Available)

Sounds reasonable. Note that in this patch DRFAS was renamed to RFAS, and some 
of the appenders (DRFAS/JSA/MRAUDIT/RMSUMMARY) changed from DailyRolling to 
just Rolling. Are these fine, or should some action be taken w.r.t. backward 
compatibility?

 cap space usage of default log4j rolling policy 
 

 Key: HADOOP-8149
 URL: https://issues.apache.org/jira/browse/HADOOP-8149
 Project: Hadoop Common
  Issue Type: Improvement
  Components: conf
Reporter: Patrick Hunt
Assignee: Patrick Hunt
 Attachments: HADOOP-8149.patch, HADOOP-8149.patch


 I've seen several critical production issues because logs are not 
 automatically removed after some time and accumulate. Changes to Hadoop's 
 default log4j file appender would help with this.
 I recommend we move to an appender which:
 1) caps the max file size (configurable)
 2) caps the max number of files to keep (configurable)
 3) uses rolling file appender rather than DRFA, see the warning here:
 http://logging.apache.org/log4j/1.2/apidocs/org/apache/log4j/DailyRollingFileAppender.html
 Specifically: DailyRollingFileAppender has been observed to exhibit 
 synchronization issues and data loss.
 We'd lose (based on the default log4j configuration) the daily rolling 
 aspect, however increase reliability.





[jira] [Commented] (HADOOP-8163) Improve ActiveStandbyElector to provide hooks for fencing old active

2012-03-14 Thread Hari Mankude (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13229574#comment-13229574
 ] 

Hari Mankude commented on HADOOP-8163:
--

Hi Todd,

The question I had was: how is creation of the info znode prevented when the 
client does not hold the ephemeral lock znode?  Is this ensured in the ZK 
client or at the ZooKeeper server?

 Improve ActiveStandbyElector to provide hooks for fencing old active
 

 Key: HADOOP-8163
 URL: https://issues.apache.org/jira/browse/HADOOP-8163
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ha
Affects Versions: 0.24.0, 0.23.3
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hadoop-8163.txt


 When a new node becomes active in an HA setup, it may sometimes have to take 
 fencing actions against the node that was formerly active. This JIRA extends 
 the ActiveStandbyElector which adds an extra non-ephemeral node into the ZK 
 directory, which acts as a second copy of the active node's information. 
 Then, if the active loses its ZK session, the next active to be elected may 
 easily locate the unfenced node to take the appropriate actions.





[jira] [Commented] (HADOOP-8163) Improve ActiveStandbyElector to provide hooks for fencing old active

2012-03-14 Thread Todd Lipcon (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13229589#comment-13229589
 ] 

Todd Lipcon commented on HADOOP-8163:
-

bq. The question I had was how is the info znode creation prevented when the 
client does not have the ephemeral lock znode? Is this ensured in the zk client 
or at the zookeeper?

This is ensured by ZooKeeper. The only reason the ephemeral node would 
disappear is if the session was expired. This means the leader has marked the 
session as such -- and thus, you can no longer issue commands under that same 
session.

To be sure, I just double checked with Pat Hunt from the ZK team. Apparently 
there was a rare race condition bug ZOOKEEPER-1208 fixed in 3.3.4/3.4.0 about 
this exact case: 
https://issues.apache.org/jira/browse/ZOOKEEPER-1208?focusedCommentId=13149787&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13149787
... but since Hadoop will probably need the krb5 auth from ZK 3.4, it seems a 
reasonable requirement to need at least that version.

 Improve ActiveStandbyElector to provide hooks for fencing old active
 

 Key: HADOOP-8163
 URL: https://issues.apache.org/jira/browse/HADOOP-8163
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ha
Affects Versions: 0.24.0, 0.23.3
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hadoop-8163.txt


 When a new node becomes active in an HA setup, it may sometimes have to take 
 fencing actions against the node that was formerly active. This JIRA extends 
 the ActiveStandbyElector which adds an extra non-ephemeral node into the ZK 
 directory, which acts as a second copy of the active node's information. 
 Then, if the active loses its ZK session, the next active to be elected may 
 easily locate the unfenced node to take the appropriate actions.





[jira] [Updated] (HADOOP-8159) NetworkTopology: getLeaf should check for invalid topologies

2012-03-14 Thread Colin Patrick McCabe (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-8159:
-

Attachment: HADOOP-8159.003.patch

* Add unit test of rejecting a bad topology with a sane error message

 NetworkTopology: getLeaf should check for invalid topologies
 

 Key: HADOOP-8159
 URL: https://issues.apache.org/jira/browse/HADOOP-8159
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 1.0.0

 Attachments: HADOOP-8159.002.patch, HADOOP-8159.003.patch


 Currently, in NetworkTopology, getLeaf doesn't do too much validation on the 
 InnerNode object itself. This results in us getting ClassCastException 
 sometimes when the network topology is invalid. We should have a less 
 confusing exception message for this case.
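
The kind of check being proposed can be sketched in isolation (hypothetical node types, not the actual NetworkTopology classes):

```java
// Hypothetical sketch of the validation described above: verify a node is
// really an inner (non-leaf) node before casting, and fail with a clear
// message instead of a bare ClassCastException.
class TopologyCheck {
    interface Node { String getName(); }
    static class LeafNode implements Node {
        public String getName() { return "/rack1/host1"; }
    }
    static class InnerNode implements Node {
        public String getName() { return "/rack1"; }
    }

    // Returns the node as an InnerNode, or throws a descriptive exception.
    static InnerNode asInnerNode(Node n) {
        if (!(n instanceof InnerNode)) {
            throw new IllegalArgumentException("Invalid network topology: " +
                n.getName() + " is a leaf where an inner node was expected");
        }
        return (InnerNode) n;
    }
}
```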





[jira] [Commented] (HADOOP-8171) add -force option to namenode -format command

2012-03-14 Thread Arpit Gupta (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8171?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13229593#comment-13229593
 ] 

Arpit Gupta commented on HADOOP-8171:
-

I will post patches for 1.0 and 0.24 shortly.

 add -force option to namenode -format command
 -

 Key: HADOOP-8171
 URL: https://issues.apache.org/jira/browse/HADOOP-8171
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 0.24.0, 1.0.2
Reporter: Arpit Gupta
Assignee: Arpit Gupta

 Currently the bin/hadoop namenode -format prompts the user for a Y/N to set 
 up the directories in the local file system.
 We should add a -force option which, when present, skips the prompt.





[jira] [Created] (HADOOP-8171) add -force option to namenode -format command

2012-03-14 Thread Arpit Gupta (Created) (JIRA)
add -force option to namenode -format command
-

 Key: HADOOP-8171
 URL: https://issues.apache.org/jira/browse/HADOOP-8171
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 0.24.0, 1.0.2
Reporter: Arpit Gupta
Assignee: Arpit Gupta


Currently the bin/hadoop namenode -format prompts the user for a Y/N to set up 
the directories in the local file system.

We should add a -force option which, when present, skips the prompt.





[jira] [Commented] (HADOOP-8168) empty-string owners or groups causes {{MissingFormatWidthException}} in o.a.h.fs.shell.Ls.ProcessPath()

2012-03-14 Thread Daryn Sharp (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13229614#comment-13229614
 ] 

Daryn Sharp commented on HADOOP-8168:
-

I think you meant {{Math.min}}.  Although I'd suggest maybe something like this 
to avoid spurious whitespace:
{code}
fmt.append((maxOwner > 0) ? "%-" + maxOwner + "s " : "%s");
{code}
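
The %-0s failure and the width guard can be reproduced with java.util.Formatter alone (a standalone sketch, not the FsShell code):

```java
// Standalone sketch of the zero-width failure and the guard suggested above.
class WidthDemo {
    // Build a left-justified column format, falling back to plain %s
    // when the column width is 0 (e.g. empty owners/groups).
    static String ownerFormat(int maxOwner) {
        return (maxOwner > 0) ? "%-" + maxOwner + "s " : "%s";
    }

    public static void main(String[] args) {
        System.out.println(String.format(ownerFormat(5), "hdfs") + "|");
        try {
            // The '-' flag requires a width, so "%-0s" is rejected.
            String.format("%-0s", "");
        } catch (java.util.MissingFormatWidthException e) {
            System.out.println("zero width rejected: " + e.getMessage());
        }
        // The guarded form degrades gracefully for empty owners/groups.
        System.out.println(String.format(ownerFormat(0), "") + "|");
    }
}
```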


 empty-string owners or groups causes {{MissingFormatWidthException}} in 
 o.a.h.fs.shell.Ls.ProcessPath()
 ---

 Key: HADOOP-8168
 URL: https://issues.apache.org/jira/browse/HADOOP-8168
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.24.0, 0.23.1
Reporter: Eugene Koontz
 Attachments: HADOOP-8168.patch


 In {{adjustColumnWidths()}}, we set the member variable {{lineFormat}}, which 
 is used by {{ProcessPath()}} to print directory entries. Owners and groups 
 are formatted using the formatting conversion {{%-Xs}}, where X is the max 
 length of the owner or group. However, when trying this with an S3 URL, I 
 found that the owner and group were the empty string (""). This caused X to be 0, which 
 means that the formatting conversion is set to {{%-0s}}. This caused a 
 {{MissingFormatWidthException}} to be thrown when the formatting string was 
 used in {{ProcessPath()}}. 
 Formatting conversions are described here: 
 http://docs.oracle.com/javase/1.6.0/docs/api/java/util/Formatter.html#intFlags
 The specific exception thrown (a subtype of {{IllegalFormatException}}) is 
 described here:
 http://docs.oracle.com/javase/1.6.0/docs/api/java/util/MissingFormatWidthException.html





[jira] [Updated] (HADOOP-8159) NetworkTopology: getLeaf should check for invalid topologies

2012-03-14 Thread Colin Patrick McCabe (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-8159:
-

Attachment: HADOOP-8159.004.patch

rebase patch against trunk

 NetworkTopology: getLeaf should check for invalid topologies
 

 Key: HADOOP-8159
 URL: https://issues.apache.org/jira/browse/HADOOP-8159
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 1.0.0

 Attachments: HADOOP-8159.002.patch, HADOOP-8159.003.patch, 
 HADOOP-8159.004.patch


 Currently, in NetworkTopology, getLeaf doesn't do too much validation on the 
 InnerNode object itself. This results in us getting ClassCastException 
 sometimes when the network topology is invalid. We should have a less 
 confusing exception message for this case.





[jira] [Updated] (HADOOP-8159) NetworkTopology: getLeaf should check for invalid topologies

2012-03-14 Thread Colin Patrick McCabe (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-8159:
-

Attachment: (was: HADOOP-8159.002.patch)

 NetworkTopology: getLeaf should check for invalid topologies
 

 Key: HADOOP-8159
 URL: https://issues.apache.org/jira/browse/HADOOP-8159
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 1.0.0

 Attachments: HADOOP-8159.004.patch


 Currently, in NetworkTopology, getLeaf doesn't do too much validation on the 
 InnerNode object itself. This results in us getting ClassCastException 
 sometimes when the network topology is invalid. We should have a less 
 confusing exception message for this case.





[jira] [Updated] (HADOOP-8159) NetworkTopology: getLeaf should check for invalid topologies

2012-03-14 Thread Colin Patrick McCabe (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-8159:
-

Attachment: (was: HADOOP-8159.003.patch)

 NetworkTopology: getLeaf should check for invalid topologies
 

 Key: HADOOP-8159
 URL: https://issues.apache.org/jira/browse/HADOOP-8159
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 1.0.0

 Attachments: HADOOP-8159.004.patch


 Currently, in NetworkTopology, getLeaf doesn't do too much validation on the 
 InnerNode object itself. This results in us getting ClassCastException 
 sometimes when the network topology is invalid. We should have a less 
 confusing exception message for this case.





[jira] [Updated] (HADOOP-8159) NetworkTopology: getLeaf should check for invalid topologies

2012-03-14 Thread Colin Patrick McCabe (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-8159:
-

Attachment: (was: HADOOP-8159.004.patch)

 NetworkTopology: getLeaf should check for invalid topologies
 

 Key: HADOOP-8159
 URL: https://issues.apache.org/jira/browse/HADOOP-8159
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 1.0.0

 Attachments: HADOOP-8159-b1.005.patch, HADOOP-8159.005.patch


 Currently, in NetworkTopology, getLeaf doesn't do too much validation on the 
 InnerNode object itself. This results in us getting ClassCastException 
 sometimes when the network topology is invalid. We should have a less 
 confusing exception message for this case.





[jira] [Updated] (HADOOP-8159) NetworkTopology: getLeaf should check for invalid topologies

2012-03-14 Thread Colin Patrick McCabe (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8159?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-8159:
-

Attachment: HADOOP-8159.005.patch

 NetworkTopology: getLeaf should check for invalid topologies
 

 Key: HADOOP-8159
 URL: https://issues.apache.org/jira/browse/HADOOP-8159
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 1.0.0

 Attachments: HADOOP-8159-b1.005.patch, HADOOP-8159.005.patch


 Currently, in NetworkTopology, getLeaf doesn't do too much validation on the 
 InnerNode object itself. This results in us getting ClassCastException 
 sometimes when the network topology is invalid. We should have a less 
 confusing exception message for this case.





[jira] [Commented] (HADOOP-8159) NetworkTopology: getLeaf should check for invalid topologies

2012-03-14 Thread Eli Collins (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8159?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13229662#comment-13229662
 ] 

Eli Collins commented on HADOOP-8159:
-

- Why report the entire topology as JSON in the message? This string is likely 
to be very big on a real cluster, right? It would probably be better to make it 
accessible via the NN web UI (we can punt that to a separate change).
- An error message like "Invalid network location for Datanode: a Datanode 
given a rack-level network location cannot be at rack level" is probably 
clearer than referencing class names not visible to the user.
- In testCreateInvalidTopology, in the catch clause, let's assert the contents 
of e.getMessage, i.e. that it indicates that h3 is the bad node.
- How about adding test cases for other invalid topologies (e.g. a 3- or 
4-level-deep topology, null, empty string)?
- Nit: the bracket for catch InvalidTopologyException goes on the same line.
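For context, a leaf-depth check of the sort being discussed could look roughly 
like the following. This is a standalone sketch with simplified stand-in names 
(TopologySketch, a string-based add method), not the actual NetworkTopology 
code or the patch under review:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: reject nodes whose network-location depth differs from the first
// node added, rather than letting a later InnerNode cast throw a confusing
// ClassCastException. Names are simplified stand-ins, not the real API.
class TopologySketch {
  static class InvalidTopologyException extends RuntimeException {
    InvalidTopologyException(String msg) { super(msg); }
  }

  private int expectedDepth = -1;          // depth of the first node added
  private final List<String> leaves = new ArrayList<>();

  private static int depthOf(String location) {
    // "/rack1" has depth 1, "/dc1/rack1" has depth 2, and so on.
    if (location == null || !location.startsWith("/")) {
      throw new InvalidTopologyException("Malformed location: " + location);
    }
    return location.split("/").length - 1;
  }

  void add(String name, String location) {
    int d = depthOf(location);
    if (expectedDepth == -1) {
      expectedDepth = d;
    } else if (d != expectedDepth) {
      // Name the offending node so the message is actionable.
      throw new InvalidTopologyException("Node " + name + " at " + location
          + " has depth " + d + ", expected depth " + expectedDepth);
    }
    leaves.add(location + "/" + name);
  }
}
```

With this kind of check, a node like h3 registered at a deeper level fails 
fast with a message naming h3, which is the assertion requested above.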



 NetworkTopology: getLeaf should check for invalid topologies
 

 Key: HADOOP-8159
 URL: https://issues.apache.org/jira/browse/HADOOP-8159
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 1.0.0
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Fix For: 1.0.0

 Attachments: HADOOP-8159-b1.005.patch, HADOOP-8159.005.patch


 Currently, in NetworkTopology, getLeaf doesn't do too much validation on the 
 InnerNode object itself. This results in us getting ClassCastException 
 sometimes when the network topology is invalid. We should have a less 
 confusing exception message for this case.





[jira] [Commented] (HADOOP-8168) empty-string owners or groups causes {{MissingFormatWidthException}} in o.a.h.fs.shell.Ls.ProcessPath()

2012-03-14 Thread Daryn Sharp (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8168?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13229710#comment-13229710
 ] 

Daryn Sharp commented on HADOOP-8168:
-

Ignore the -Math.min-. I'm tired.

 empty-string owners or groups causes {{MissingFormatWidthException}} in 
 o.a.h.fs.shell.Ls.ProcessPath()
 ---

 Key: HADOOP-8168
 URL: https://issues.apache.org/jira/browse/HADOOP-8168
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.24.0, 0.23.1
Reporter: Eugene Koontz
 Attachments: HADOOP-8168.patch


 In {{adjustColumnWidths()}}, we set the member variable {{lineFormat}}, which 
 is used by {{ProcessPath()}} to print directory entries. Owners and groups 
 are formatted using the formatting conversion {{%-Xs}}, where X is the max 
 length of the owner or group. However, when trying this with an S3 URL, I 
 found that the owner and group were empty (). This caused X to be 0, which 
 means that the formatting conversion is set to {{%-0s}}. This caused a 
 {{MissingFormatWidthException}} to be thrown when the formatting string was 
 used in {{ProcessPath()}}. 
 Formatting conversions are described here: 
 http://docs.oracle.com/javase/1.6.0/docs/api/java/util/Formatter.html#intFlags
 The specific exception thrown (a subtype of {{IllegalFormatException}}) is 
 described here:
 http://docs.oracle.com/javase/1.6.0/docs/api/java/util/MissingFormatWidthException.html
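The failure mode above can be reproduced directly: in "%-0s" the '0' is parsed 
as the zero-pad flag, leaving the '-' flag with no width, which is exactly what 
MissingFormatWidthException reports. Below is a minimal sketch of a guard; 
ownerColumn() is a hypothetical helper for illustration, not the actual Ls.java 
fix in the attached patch:

```java
// Clamp the column width to at least 1 so the generated format string is
// never "%-0s" (the '0' would be consumed as the zero-pad flag, leaving the
// '-' flag without a width, which throws MissingFormatWidthException).
class LsFormatSketch {
  static String ownerColumn(String owner, int maxLen) {
    int width = Math.max(1, maxLen);   // empty owners/groups give maxLen == 0
    return String.format("%-" + width + "s", owner == null ? "" : owner);
  }
}
```

With an empty S3 owner the column degrades to a single space instead of 
throwing, while non-empty owners format as before.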





[jira] [Commented] (HADOOP-8163) Improve ActiveStandbyElector to provide hooks for fencing old active

2012-03-14 Thread Hari Mankude (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8163?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13229711#comment-13229711
 ] 

Hari Mankude commented on HADOOP-8163:
--

Thanks, Todd, for the answers.

I have a suggestion. One of the issues with automatic failover is how a NN 
becomes active when the other node is not available, or when both NNs are 
restarting. If there is a persistent info znode, it provides a very powerful 
invariant.

1. If the info znode is present, then only the owner of the info znode can be 
made active, by preference. Alternatively, the other NN can be made active, 
provided it can capture all the state from the previously active node. If the 
other NN cannot get all the edit logs from the previous active, it remains in 
standby state. An admin-initiated action can force the takeover, with possible 
data loss.

2. If the info znode is absent, then it is fair game and either node can be 
made active.

For this, a couple of high-level changes would be necessary in the patch:
1. The info znode is not cleaned up by the active NN during a clean shutdown 
(sure, this results in unnecessary fencing).
2. After an ephemeral lock takeover, and before the previous info znode can be 
deleted, state equalization (by fetching all the edit logs) is done by 
become_active() on the new node.
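The invariant sketched above can be summarized as a small decision function. 
This is a plain-Java sketch: the boolean inputs stand in for real checks (ZK 
reads, edit-log comparison) and are assumptions for illustration, not part of 
the ActiveStandbyElector API:

```java
// Sketch of the proposed failover invariant around a persistent info znode.
class FailoverPolicySketch {
  enum Decision { BECOME_ACTIVE, STAY_STANDBY }

  static Decision decide(boolean infoZnodeExists,
                         boolean iOwnInfoZnode,
                         boolean caughtUpWithOldActive,
                         boolean adminForced) {
    if (!infoZnodeExists) {
      return Decision.BECOME_ACTIVE;   // rule 2: no info znode, fair game
    }
    if (iOwnInfoZnode) {
      return Decision.BECOME_ACTIVE;   // rule 1: owner is preferred
    }
    // The other NN may take over only with full state, or via an admin
    // override (which, per rule 1, may lose data).
    if (caughtUpWithOldActive || adminForced) {
      return Decision.BECOME_ACTIVE;
    }
    return Decision.STAY_STANDBY;
  }
}
```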

 Improve ActiveStandbyElector to provide hooks for fencing old active
 

 Key: HADOOP-8163
 URL: https://issues.apache.org/jira/browse/HADOOP-8163
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ha
Affects Versions: 0.24.0, 0.23.3
Reporter: Todd Lipcon
Assignee: Todd Lipcon
 Attachments: hadoop-8163.txt


 When a new node becomes active in an HA setup, it may sometimes have to take 
 fencing actions against the node that was formerly active. This JIRA extends 
 the ActiveStandbyElector which adds an extra non-ephemeral node into the ZK 
 directory, which acts as a second copy of the active node's information. 
 Then, if the active loses its ZK session, the next active to be elected may 
 easily locate the unfenced node to take the appropriate actions.





[jira] [Updated] (HADOOP-6941) Support non-SUN JREs in UserGroupInformation

2012-03-14 Thread Devaraj Das (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-6941:


Attachment: 6941-1.patch

Updated patch.

 Support non-SUN JREs in UserGroupInformation
 

 Key: HADOOP-6941
 URL: https://issues.apache.org/jira/browse/HADOOP-6941
 Project: Hadoop Common
  Issue Type: Bug
 Environment: SLES 11, Apache Harmony 6 and SLES 11, IBM Java 6
Reporter: Stephen Watt
Assignee: Luke Lu
 Fix For: 0.24.0

 Attachments: 6941-1.patch, HADOOP-6941.patch, hadoop-6941.patch


 Attempting to format the namenode or attempting to start Hadoop using Apache 
 Harmony or the IBM Java JREs results in the following exception:
 10/09/07 16:35:05 ERROR namenode.NameNode: java.lang.NoClassDefFoundError: 
 com.sun.security.auth.UnixPrincipal
   at 
 org.apache.hadoop.security.UserGroupInformation.clinit(UserGroupInformation.java:223)
   at java.lang.J9VMInternals.initializeImpl(Native Method)
   at java.lang.J9VMInternals.initialize(J9VMInternals.java:200)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setConfigurationParameters(FSNamesystem.java:420)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.init(FSNamesystem.java:391)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1240)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1348)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1368)
 Caused by: java.lang.ClassNotFoundException: 
 com.sun.security.auth.UnixPrincipal
   at java.net.URLClassLoader.findClass(URLClassLoader.java:421)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:652)
   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:346)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:618)
   ... 8 more
 This is a regression, as previous versions of Hadoop worked with these JREs.
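One way to avoid the hard dependency above is to select the principal class by 
JVM vendor and resolve it lazily via reflection. The sketch below illustrates 
that idea; the IBM class name is an assumption for illustration, and the 
attached patch should be consulted for the names it actually uses:

```java
// Sketch: choose the JAAS Unix principal class by JVM vendor instead of
// hard-coding com.sun.security.auth.UnixPrincipal, which only exists on
// Sun/Oracle JREs.
class PrincipalClassSketch {
  static String unixPrincipalClassName(String javaVendor) {
    if (javaVendor != null && javaVendor.toUpperCase().contains("IBM")) {
      return "com.ibm.security.auth.UsernamePrincipal"; // assumed IBM JDK name
    }
    return "com.sun.security.auth.UnixPrincipal";
  }

  // Resolve via reflection so merely loading this class never triggers
  // NoClassDefFoundError on a non-Sun JRE.
  static Class<?> loadUnixPrincipal() throws ClassNotFoundException {
    return Class.forName(
        unixPrincipalClassName(System.getProperty("java.vendor")));
  }
}
```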





[jira] [Updated] (HADOOP-6941) Support non-SUN JREs in UserGroupInformation

2012-03-14 Thread Devaraj Das (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6941?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Devaraj Das updated HADOOP-6941:


Status: Patch Available  (was: Open)

 Support non-SUN JREs in UserGroupInformation
 

 Key: HADOOP-6941
 URL: https://issues.apache.org/jira/browse/HADOOP-6941
 Project: Hadoop Common
  Issue Type: Bug
 Environment: SLES 11, Apache Harmony 6 and SLES 11, IBM Java 6
Reporter: Stephen Watt
Assignee: Luke Lu
 Fix For: 0.24.0

 Attachments: 6941-1.patch, HADOOP-6941.patch, hadoop-6941.patch


 Attempting to format the namenode or attempting to start Hadoop using Apache 
 Harmony or the IBM Java JREs results in the following exception:
 10/09/07 16:35:05 ERROR namenode.NameNode: java.lang.NoClassDefFoundError: 
 com.sun.security.auth.UnixPrincipal
   at 
 org.apache.hadoop.security.UserGroupInformation.clinit(UserGroupInformation.java:223)
   at java.lang.J9VMInternals.initializeImpl(Native Method)
   at java.lang.J9VMInternals.initialize(J9VMInternals.java:200)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setConfigurationParameters(FSNamesystem.java:420)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.init(FSNamesystem.java:391)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1240)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1348)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1368)
 Caused by: java.lang.ClassNotFoundException: 
 com.sun.security.auth.UnixPrincipal
   at java.net.URLClassLoader.findClass(URLClassLoader.java:421)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:652)
   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:346)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:618)
   ... 8 more
 This is a regression, as previous versions of Hadoop worked with these JREs.





[jira] [Commented] (HADOOP-4885) Try to restore failed replicas of Name Node storage (at checkpoint time)

2012-03-14 Thread Eli Collins (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-4885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13229774#comment-13229774
 ] 

Eli Collins commented on HADOOP-4885:
-

bq. I didn't get your second question: my patch uses addStorageDir too. 

What I meant was the trunk patch does the following, which is much shorter:
{code}
sd.clearDirectory();
addStorageDir(sd);
{code}

and leverages the fact that the checkpoint populates the directory. Why not use 
the same approach here?

- I'd test with a real NFS mount and disconnect/reconnect the network. I found 
some bugs that way when backporting this a while back. I also discovered 
HDFS-2701, HDFS-2702, and HDFS-2703 by testing with a real build instead of the 
unit tests. 
- Nit: s/may should be mounted/may be a network mount/

 Try to restore failed replicas of Name Node storage (at checkpoint time)
 

 Key: HADOOP-4885
 URL: https://issues.apache.org/jira/browse/HADOOP-4885
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Boris Shkolnik
Assignee: Boris Shkolnik
 Fix For: 0.21.0

 Attachments: HADOOP-4885-1.patch, HADOOP-4885-3.patch, 
 HADOOP-4885-3.patch, HADOOP-4885.branch-1.patch, 
 HADOOP-4885.branch-1.patch.2, HADOOP-4885.branch-1.patch.3, 
 HADOOP-4885.patch, HADOOP-4885.patch








[jira] [Commented] (HADOOP-4885) Try to restore failed replicas of Name Node storage (at checkpoint time)

2012-03-14 Thread Tsz Wo (Nicholas), SZE (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-4885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13229792#comment-13229792
 ] 

Tsz Wo (Nicholas), SZE commented on HADOOP-4885:


Hi Eli,

Brandon addressed all [your earlier 
comments|https://issues.apache.org/jira/browse/HADOOP-4885?focusedCommentId=13228915&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13228915]
 last night.  I did not see your further comments, so I committed the patch.

You made some good points in [your previous 
comment|https://issues.apache.org/jira/browse/HADOOP-4885?focusedCommentId=13229774&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13229774].
  As always, we could file a JIRA for them.

Does that sound good?

 Try to restore failed replicas of Name Node storage (at checkpoint time)
 

 Key: HADOOP-4885
 URL: https://issues.apache.org/jira/browse/HADOOP-4885
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Boris Shkolnik
Assignee: Boris Shkolnik
 Fix For: 0.21.0

 Attachments: HADOOP-4885-1.patch, HADOOP-4885-3.patch, 
 HADOOP-4885-3.patch, HADOOP-4885.branch-1.patch, 
 HADOOP-4885.branch-1.patch.2, HADOOP-4885.branch-1.patch.3, 
 HADOOP-4885.patch, HADOOP-4885.patch








[jira] [Commented] (HADOOP-8121) Active Directory Group Mapping Service

2012-03-14 Thread Hadoop QA (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13229808#comment-13229808
 ] 

Hadoop QA commented on HADOOP-8121:
---

-1 overall.  Here are the results of testing the latest attachment 
  http://issues.apache.org/jira/secure/attachment/12518340/HADOOP-8121.patch
  against trunk revision .

+1 @author.  The patch does not contain any @author tags.

+1 tests included.  The patch appears to include 4 new or modified tests.

-1 patch.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/711//console

This message is automatically generated.

 Active Directory Group Mapping Service
 --

 Key: HADOOP-8121
 URL: https://issues.apache.org/jira/browse/HADOOP-8121
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Reporter: Jonathan Natkins
Assignee: Jonathan Natkins
 Attachments: HADOOP-8121.patch, HADOOP-8121.patch, HADOOP-8121.patch, 
 HADOOP-8121.patch, HADOOP-8121.patch, HADOOP-8121.patch, HADOOP-8121.patch, 
 HADOOP-8121.patch, HADOOP-8121.patch, HADOOP-8121.patch


 Planning on building a group mapping service that will go and talk directly 
 to an Active Directory setup to get group memberships





[jira] [Commented] (HADOOP-4885) Try to restore failed replicas of Name Node storage (at checkpoint time)

2012-03-14 Thread Brandon Li (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-4885?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13229819#comment-13229819
 ] 

Brandon Li commented on HADOOP-4885:


The format-addStorageDir solution makes the failed directory active 
immediately, even though it is not in a real active state. The state is visible 
from the NN UI and JMX. If the checkpoint fails, the fake Active state can be 
misleading.

The copy-over solution may do some extra work, but it puts the recovered 
storage directories in a real active state. 


I agree those 3 JIRA issues you mentioned should also be backported to branch 
1.02 (the backport patch here is for branch-1, not 1.02). 

Good point about the network mount problem. :-)  
It is also a problem with the original patch: format-addStorageDir creates the 
storage directory if it doesn't exist. However, if the storage directory is a 
mount point, it shouldn't be created automatically. HDFS-3095 has been filed 
for this issue.


 Try to restore failed replicas of Name Node storage (at checkpoint time)
 

 Key: HADOOP-4885
 URL: https://issues.apache.org/jira/browse/HADOOP-4885
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Boris Shkolnik
Assignee: Boris Shkolnik
 Fix For: 0.21.0

 Attachments: HADOOP-4885-1.patch, HADOOP-4885-3.patch, 
 HADOOP-4885-3.patch, HADOOP-4885.branch-1.patch, 
 HADOOP-4885.branch-1.patch.2, HADOOP-4885.branch-1.patch.3, 
 HADOOP-4885.patch, HADOOP-4885.patch








[jira] [Commented] (HADOOP-6941) Support non-SUN JREs in UserGroupInformation

2012-03-14 Thread Luke Lu (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-6941?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13229839#comment-13229839
 ] 

Luke Lu commented on HADOOP-6941:
-

Thanks for the patch, Devaraj! Can you tell me which versions of the IBM JDK 
you've tested with? The current IBM JDK for Java 6 is J9 SR10. The patch lgtm, 
btw.

 Support non-SUN JREs in UserGroupInformation
 

 Key: HADOOP-6941
 URL: https://issues.apache.org/jira/browse/HADOOP-6941
 Project: Hadoop Common
  Issue Type: Bug
 Environment: SLES 11, Apache Harmony 6 and SLES 11, IBM Java 6
Reporter: Stephen Watt
Assignee: Luke Lu
 Fix For: 0.24.0

 Attachments: 6941-1.patch, HADOOP-6941.patch, hadoop-6941.patch


 Attempting to format the namenode or attempting to start Hadoop using Apache 
 Harmony or the IBM Java JREs results in the following exception:
 10/09/07 16:35:05 ERROR namenode.NameNode: java.lang.NoClassDefFoundError: 
 com.sun.security.auth.UnixPrincipal
   at 
 org.apache.hadoop.security.UserGroupInformation.clinit(UserGroupInformation.java:223)
   at java.lang.J9VMInternals.initializeImpl(Native Method)
   at java.lang.J9VMInternals.initialize(J9VMInternals.java:200)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.setConfigurationParameters(FSNamesystem.java:420)
   at 
 org.apache.hadoop.hdfs.server.namenode.FSNamesystem.init(FSNamesystem.java:391)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.format(NameNode.java:1240)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1348)
   at 
 org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1368)
 Caused by: java.lang.ClassNotFoundException: 
 com.sun.security.auth.UnixPrincipal
   at java.net.URLClassLoader.findClass(URLClassLoader.java:421)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:652)
   at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:346)
   at java.lang.ClassLoader.loadClass(ClassLoader.java:618)
   ... 8 more
 This is a regression, as previous versions of Hadoop worked with these JREs.





[jira] [Commented] (HADOOP-7682) taskTracker could not start because Failed to set permissions to ttprivate to 0700

2012-03-14 Thread tigar (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13229842#comment-13229842
 ] 

tigar commented on HADOOP-7682:
---

Same problem with Cygwin. :(

 taskTracker could not start because Failed to set permissions to ttprivate 
 to 0700
 --

 Key: HADOOP-7682
 URL: https://issues.apache.org/jira/browse/HADOOP-7682
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.20.203.0, 0.20.205.0, 1.0.0
 Environment: OS:WindowsXP SP3 , Filesystem :NTFS, cygwin 1.7.9-1, 
 jdk1.6.0_05
Reporter: Magic Xie

 ERROR org.apache.hadoop.mapred.TaskTracker:Can not start task tracker because 
 java.io.IOException:Failed to set permissions of 
 path:/tmp/hadoop-cyg_server/mapred/local/ttprivate to 0700
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.checkReturnValue(RawLocalFileSystem.java:525)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:499)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:318)
 at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:183)
 at org.apache.hadoop.mapred.TaskTracker.initialize(TaskTracker.java:635)
 at org.apache.hadoop.mapred.TaskTracker.(TaskTracker.java:1328)
 at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:3430)
 Since hadoop0.20.203 when the TaskTracker initialize, it checks the 
 permission(TaskTracker Line 624) of 
 (org.apache.hadoop.mapred.TaskTracker.TT_LOG_TMP_DIR,org.apache.hadoop.mapred.TaskTracker.TT_PRIVATE_DIR,
  
 org.apache.hadoop.mapred.TaskTracker.TT_PRIVATE_DIR).RawLocalFileSystem(http://svn.apache.org/viewvc/hadoop/common/tags/release-0.20.203.0/src/core/org/apache/hadoop/fs/RawLocalFileSystem.java?view=markup)
  calls setPermission (line 481) to deal with it. setPermission works fine on 
 *nix; however, it does not always work on Windows.
 setPermission calls setReadable of java.io.File in line 498, but according 
 to Table 1 below, provided by Oracle, setReadable(false) will always return 
 false on Windows, the same as setExecutable(false).
 http://java.sun.com/developer/technicalArticles/J2SE/Desktop/javase6/enhancements/
 Is this why the task tracker failed to set permissions of ttprivate to 
 0700?
 Hadoop 0.20.202 works fine in the same environment. 
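The java.io.File behavior described above can be made visible by checking the 
setters' return values. The sketch below treats a failure as fatal only off 
Windows; it illustrates the failure mode and is not the eventual Hadoop fix:

```java
import java.io.File;
import java.io.IOException;

// Approximate chmod 0700 with java.io.File: clear all bits, then grant
// rwx to the owner only, checking every setter's return value.
class PermissionSketch {
  static final boolean IS_WINDOWS =
      System.getProperty("os.name").toLowerCase().startsWith("windows");

  static void setOwnerOnly(File f) throws IOException {
    // Use '&=' (not '&&') so every setter is attempted and checked.
    boolean ok = f.setReadable(false, false);
    ok &= f.setWritable(false, false);
    ok &= f.setExecutable(false, false);
    ok &= f.setReadable(true, true);
    ok &= f.setWritable(true, true);
    ok &= f.setExecutable(true, true);
    if (!ok && !IS_WINDOWS) {
      // On *nix a false return is a real failure; on Windows,
      // setReadable(false) / setExecutable(false) always return false.
      throw new IOException("Failed to set permissions of " + f + " to 0700");
    }
  }
}
```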





[jira] [Commented] (HADOOP-7682) taskTracker could not start because Failed to set permissions to ttprivate to 0700

2012-03-14 Thread tigar (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13229848#comment-13229848
 ] 

tigar commented on HADOOP-7682:
---

PS: my Hadoop version is 1.0.1.

 taskTracker could not start because Failed to set permissions to ttprivate 
 to 0700
 --

 Key: HADOOP-7682
 URL: https://issues.apache.org/jira/browse/HADOOP-7682
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs
Affects Versions: 0.20.203.0, 0.20.205.0, 1.0.0
 Environment: OS:WindowsXP SP3 , Filesystem :NTFS, cygwin 1.7.9-1, 
 jdk1.6.0_05
Reporter: Magic Xie

 ERROR org.apache.hadoop.mapred.TaskTracker:Can not start task tracker because 
 java.io.IOException:Failed to set permissions of 
 path:/tmp/hadoop-cyg_server/mapred/local/ttprivate to 0700
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.checkReturnValue(RawLocalFileSystem.java:525)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:499)
 at 
 org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:318)
 at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:183)
 at org.apache.hadoop.mapred.TaskTracker.initialize(TaskTracker.java:635)
 at org.apache.hadoop.mapred.TaskTracker.(TaskTracker.java:1328)
 at org.apache.hadoop.mapred.TaskTracker.main(TaskTracker.java:3430)
 Since hadoop0.20.203 when the TaskTracker initialize, it checks the 
 permission(TaskTracker Line 624) of 
 (org.apache.hadoop.mapred.TaskTracker.TT_LOG_TMP_DIR,org.apache.hadoop.mapred.TaskTracker.TT_PRIVATE_DIR,
  
 org.apache.hadoop.mapred.TaskTracker.TT_PRIVATE_DIR).RawLocalFileSystem(http://svn.apache.org/viewvc/hadoop/common/tags/release-0.20.203.0/src/core/org/apache/hadoop/fs/RawLocalFileSystem.java?view=markup)
  calls setPermission (line 481) to deal with it. setPermission works fine on 
 *nix; however, it does not always work on Windows.
 setPermission calls setReadable of java.io.File in line 498, but according 
 to Table 1 below, provided by Oracle, setReadable(false) will always return 
 false on Windows, the same as setExecutable(false).
 http://java.sun.com/developer/technicalArticles/J2SE/Desktop/javase6/enhancements/
 Is this why the task tracker failed to set permissions of ttprivate to 
 0700?
 Hadoop 0.20.202 works fine in the same environment. 





[jira] [Updated] (HADOOP-8121) Active Directory Group Mapping Service

2012-03-14 Thread Jonathan Natkins (Updated) (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8121?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jonathan Natkins updated HADOOP-8121:
-

Attachment: HADOOP-8121.patch

Not sure what happened with that last patch run. I tested the patch, and it 
seemed to apply just fine locally. Attaching a new one to try to kick it off 
again.

 Active Directory Group Mapping Service
 --

 Key: HADOOP-8121
 URL: https://issues.apache.org/jira/browse/HADOOP-8121
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Reporter: Jonathan Natkins
Assignee: Jonathan Natkins
 Attachments: HADOOP-8121.patch, HADOOP-8121.patch, HADOOP-8121.patch, 
 HADOOP-8121.patch, HADOOP-8121.patch, HADOOP-8121.patch, HADOOP-8121.patch, 
 HADOOP-8121.patch, HADOOP-8121.patch, HADOOP-8121.patch, HADOOP-8121.patch


 Planning on building a group mapping service that will go and talk directly 
 to an Active Directory setup to get group memberships
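As a hedged sketch of what such lookups might involve (the attribute names objectClass and sAMAccountName are standard Active Directory schema, but the helper names below are hypothetical and not from the attached patches): any service issuing LDAP queries against AD has to build RFC 4515 search filters, escaping user-supplied values before embedding them.

```java
public class AdFilters {
    // RFC 4515 escaping for values embedded in an LDAP search filter.
    static String escape(String v) {
        StringBuilder sb = new StringBuilder();
        for (char c : v.toCharArray()) {
            switch (c) {
                case '*':  sb.append("\\2a"); break;
                case '(':  sb.append("\\28"); break;
                case ')':  sb.append("\\29"); break;
                case '\\': sb.append("\\5c"); break;
                case '\0': sb.append("\\00"); break;
                default:   sb.append(c);
            }
        }
        return sb.toString();
    }

    // Filter locating the user entry whose group memberships we want.
    static String userFilter(String user) {
        return "(&(objectClass=user)(sAMAccountName=" + escape(user) + "))";
    }

    public static void main(String[] args) {
        System.out.println(userFilter("natty"));
        // prints (&(objectClass=user)(sAMAccountName=natty))
    }
}
```

The escaping step matters because the username comes from outside the service; without it, a value containing "*" or ")" would change the meaning of the filter.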





[jira] [Commented] (HADOOP-8121) Active Directory Group Mapping Service

2012-03-14 Thread Aaron T. Myers (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8121?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13229896#comment-13229896
 ] 

Aaron T. Myers commented on HADOOP-8121:


Patch application failed because test-patch doesn't support cross-sub-project 
patches, and this patch changes code in Common and docs in HDFS.

How about you just upload a patch for the docs in HDFS, and a separate patch 
for the Common code changes? That should make test-patch happy.

 Active Directory Group Mapping Service
 --

 Key: HADOOP-8121
 URL: https://issues.apache.org/jira/browse/HADOOP-8121
 Project: Hadoop Common
  Issue Type: New Feature
  Components: security
Reporter: Jonathan Natkins
Assignee: Jonathan Natkins
 Attachments: HADOOP-8121.patch, HADOOP-8121.patch, HADOOP-8121.patch, 
 HADOOP-8121.patch, HADOOP-8121.patch, HADOOP-8121.patch, HADOOP-8121.patch, 
 HADOOP-8121.patch, HADOOP-8121.patch, HADOOP-8121.patch, HADOOP-8121.patch


 Planning on building a group mapping service that will go and talk directly 
 to an Active Directory setup to get group memberships





[jira] [Commented] (HADOOP-7771) NPE when running hdfs dfs -copyToLocal, -get etc

2012-03-14 Thread Uma Maheswara Rao G (Commented) (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-7771?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13229908#comment-13229908
 ] 

Uma Maheswara Rao G commented on HADOOP-7771:
-

{quote}
Do you feel it would be useful to retain the temp file?
{quote}
Users will not know about the temp file paths, right? Then there is no point in 
retaining temp files. The only concern is that in the older version, even if a 
write failed, other clients could still read the file up to where it had been 
written successfully, and later it might get recovered and closed.
But now we completely invalidate those finalized blocks on write failure.

Another point: what if the client shuts down abruptly? I think the temp files 
will then never be cleaned up, right?


Regards,
Uma

 NPE when running hdfs dfs -copyToLocal, -get etc
 

 Key: HADOOP-7771
 URL: https://issues.apache.org/jira/browse/HADOOP-7771
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.23.0, 0.24.0
Reporter: John George
Assignee: John George
Priority: Blocker
 Fix For: 0.23.0, 0.24.0

 Attachments: HADOOP-7771.patch, HADOOP-7771.patch, HADOOP-7771.patch, 
 HADOOP-7771.patch, HADOOP-7771.patch, HADOOP-7771.patch


 NPE when running hdfs dfs -copyToLocal if the destination directory does not 
 exist. The behavior in branch-0.20-security is to create the directory and 
 copy/get the contents from source.
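A minimal sketch of the guard the copy path needs, using plain java.io.File rather than the actual FsShell code (the helper name ensureDestDir is hypothetical): create the missing destination directory up front instead of later dereferencing a null directory listing, which is where an NPE like this comes from.

```java
import java.io.File;
import java.io.IOException;

public class CopyDest {
    // Create the destination directory if it is missing, so the copy
    // never operates on a non-existent directory (the source of the NPE).
    static File ensureDestDir(File dst) throws IOException {
        if (!dst.exists() && !dst.mkdirs()) {
            throw new IOException("failed to create destination " + dst);
        }
        if (!dst.isDirectory()) {
            throw new IOException(dst + " exists but is not a directory");
        }
        return dst;
    }

    public static void main(String[] args) throws IOException {
        File dst = new File(System.getProperty("java.io.tmpdir"), "copy-dest-demo");
        System.out.println(ensureDestDir(dst).isDirectory()); // prints "true"
        dst.delete();
    }
}
```

This matches the branch-0.20-security behavior described above (create the directory, then copy into it) and fails with a clear IOException rather than an NPE when creation is impossible.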




