[jira] [Updated] (HADOOP-10872) org.apache.hadoop.fs.shell.TestPathData failed intermittently with "Mkdirs failed to create d1"

2014-07-22 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HADOOP-10872:
---

Attachment: HADOOP-10872.002.patch

Attaching patch 002, which resolves the issue.


> org.apache.hadoop.fs.shell.TestPathData failed intermittently with "Mkdirs 
> failed to create d1"
> ---
>
> Key: HADOOP-10872
> URL: https://issues.apache.org/jira/browse/HADOOP-10872
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.5.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HADOOP-10872.001.dbg.patch, HADOOP-10872.001.dbg.patch, 
> HADOOP-10872.002.patch
>
>
> A bunch of TestPathData tests failed intermittently, e.g.
> https://builds.apache.org/job/PreCommit-HDFS-Build/7416//testReport/
> Example failure log:
> {code}
> Failed
> org.apache.hadoop.fs.shell.TestPathData.testUnqualifiedUriContents
> Failing for the past 1 build (Since Failed#7416 )
> Took 0.46 sec.
> Error Message
> Mkdirs failed to create d1
> Stacktrace
> java.io.IOException: Mkdirs failed to create d1
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:440)
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:426)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:906)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:887)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:849)
>   at org.apache.hadoop.fs.FileSystem.createNewFile(FileSystem.java:1149)
>   at 
> org.apache.hadoop.fs.shell.TestPathData.initialize(TestPathData.java:54)
> {code}
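One common cause of intermittent "Mkdirs failed" errors like the one above is several tests (or concurrent builds) racing on a shared relative path such as {{d1}}. A hedged sketch of the usual remedy, giving each test run its own unique scratch directory (the helper name and the {{test.build.data}} property fallback here are illustrative, not the actual fix in the attached patch):

```java
import java.io.File;
import java.io.IOException;
import java.util.UUID;

// Hypothetical helper: each caller gets a fresh directory, so concurrent
// test runs cannot collide on a shared relative path like "d1".
public class TestDirUtil {
    public static File uniqueTestDir(String testName) throws IOException {
        // Prefer a build-configured location; fall back to the system temp dir.
        File base = new File(System.getProperty("test.build.data",
                System.getProperty("java.io.tmpdir")));
        File dir = new File(base, testName + "-" + UUID.randomUUID());
        if (!dir.mkdirs()) {
            throw new IOException("Mkdirs failed to create " + dir);
        }
        return dir;
    }

    public static void main(String[] args) throws IOException {
        File d = uniqueTestDir("TestPathData");
        System.out.println(d.isDirectory());
    }
}
```

Because the directory name embeds a random UUID, two builds on the same Jenkins slave can no longer trip over each other's {{d1}}.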



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10882) Move DirectBufferPool into common util

2014-07-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10882?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14071369#comment-14071369
 ] 

Hadoop QA commented on HADOOP-10882:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12657253/hadoop-10882.txt
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.ipc.TestIPC
  
org.apache.hadoop.hdfs.server.namenode.ha.TestPipelinesFailover

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4344//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4344//console

This message is automatically generated.

> Move DirectBufferPool into common util
> --
>
> Key: HADOOP-10882
> URL: https://issues.apache.org/jira/browse/HADOOP-10882
> Project: Hadoop Common
>  Issue Type: Task
>  Components: util
>Affects Versions: 2.6.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Minor
> Attachments: hadoop-10882.txt
>
>
> MAPREDUCE-2841 uses a direct buffer pool to pass data back and forth between 
> native and Java code. The branch has an implementation which appears to be 
> derived from the one in HDFS. Instead of copy-pasting, we should move the 
> HDFS DirectBufferPool into Common so that MR can make use of it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10872) org.apache.hadoop.fs.shell.TestPathData failed intermittently with "Mkdirs failed to create d1"

2014-07-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14071363#comment-14071363
 ] 

Hadoop QA commented on HADOOP-10872:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12657272/HADOOP-10872.001.dbg.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ipc.TestIPC

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4346//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4346//console

This message is automatically generated.

> org.apache.hadoop.fs.shell.TestPathData failed intermittently with "Mkdirs 
> failed to create d1"
> ---
>
> Key: HADOOP-10872
> URL: https://issues.apache.org/jira/browse/HADOOP-10872
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.5.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HADOOP-10872.001.dbg.patch, HADOOP-10872.001.dbg.patch
>
>
> A bunch of TestPathData tests failed intermittently, e.g.
> https://builds.apache.org/job/PreCommit-HDFS-Build/7416//testReport/
> Example failure log:
> {code}
> Failed
> org.apache.hadoop.fs.shell.TestPathData.testUnqualifiedUriContents
> Failing for the past 1 build (Since Failed#7416 )
> Took 0.46 sec.
> Error Message
> Mkdirs failed to create d1
> Stacktrace
> java.io.IOException: Mkdirs failed to create d1
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:440)
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:426)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:906)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:887)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:849)
>   at org.apache.hadoop.fs.FileSystem.createNewFile(FileSystem.java:1149)
>   at 
> org.apache.hadoop.fs.shell.TestPathData.initialize(TestPathData.java:54)
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Assigned] (HADOOP-10334) make user home directory customizable

2014-07-22 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10334?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang reassigned HADOOP-10334:
--

Assignee: Yongjun Zhang

> make user home directory customizable
> -
>
> Key: HADOOP-10334
> URL: https://issues.apache.org/jira/browse/HADOOP-10334
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.2.0
>Reporter: Kevin Odell
>Assignee: Yongjun Zhang
>Priority: Minor
>
> The path is currently hardcoded:
> {code}
> public Path getHomeDirectory() {
>   return makeQualified(new Path("/user/" + dfs.ugi.getShortUserName()));
> }
> {code}
> It would be nice to have that as a customizable value.
> Thank you
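A minimal sketch of what "customizable" could look like: read the home-directory prefix from configuration, falling back to the current {{/user}} default. The config key name and the plain-Map stand-in for Hadoop's {{Configuration}} are illustrative assumptions, not the committed API:

```java
import java.util.HashMap;
import java.util.Map;

// Sketch only: derive the home directory from a configurable prefix
// instead of the hardcoded "/user/". Key name is hypothetical.
public class HomeDirExample {
    static final String HOME_DIR_PREFIX_KEY = "dfs.user.home.dir.prefix";
    static final String HOME_DIR_PREFIX_DEFAULT = "/user";

    public static String homeDirectory(Map<String, String> conf, String shortUserName) {
        String prefix = conf.getOrDefault(HOME_DIR_PREFIX_KEY, HOME_DIR_PREFIX_DEFAULT);
        return prefix + "/" + shortUserName;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        System.out.println(homeDirectory(conf, "alice")); // /user/alice
        conf.put(HOME_DIR_PREFIX_KEY, "/home");
        System.out.println(homeDirectory(conf, "alice")); // /home/alice
    }
}
```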



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10872) org.apache.hadoop.fs.shell.TestPathData failed intermittently with "Mkdirs failed to create d1"

2014-07-22 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HADOOP-10872:
---

Attachment: HADOOP-10872.001.dbg.patch

> org.apache.hadoop.fs.shell.TestPathData failed intermittently with "Mkdirs 
> failed to create d1"
> ---
>
> Key: HADOOP-10872
> URL: https://issues.apache.org/jira/browse/HADOOP-10872
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.5.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HADOOP-10872.001.dbg.patch, HADOOP-10872.001.dbg.patch
>
>
> A bunch of TestPathData tests failed intermittently, e.g.
> https://builds.apache.org/job/PreCommit-HDFS-Build/7416//testReport/
> Example failure log:
> {code}
> Failed
> org.apache.hadoop.fs.shell.TestPathData.testUnqualifiedUriContents
> Failing for the past 1 build (Since Failed#7416 )
> Took 0.46 sec.
> Error Message
> Mkdirs failed to create d1
> Stacktrace
> java.io.IOException: Mkdirs failed to create d1
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:440)
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:426)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:906)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:887)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:849)
>   at org.apache.hadoop.fs.FileSystem.createNewFile(FileSystem.java:1149)
>   at 
> org.apache.hadoop.fs.shell.TestPathData.initialize(TestPathData.java:54)
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10872) org.apache.hadoop.fs.shell.TestPathData failed intermittently with "Mkdirs failed to create d1"

2014-07-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14071327#comment-14071327
 ] 

Hadoop QA commented on HADOOP-10872:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12657264/HADOOP-10872.001.dbg.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ipc.TestIPC

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4345//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4345//console

This message is automatically generated.

> org.apache.hadoop.fs.shell.TestPathData failed intermittently with "Mkdirs 
> failed to create d1"
> ---
>
> Key: HADOOP-10872
> URL: https://issues.apache.org/jira/browse/HADOOP-10872
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.5.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HADOOP-10872.001.dbg.patch
>
>
> A bunch of TestPathData tests failed intermittently, e.g.
> https://builds.apache.org/job/PreCommit-HDFS-Build/7416//testReport/
> Example failure log:
> {code}
> Failed
> org.apache.hadoop.fs.shell.TestPathData.testUnqualifiedUriContents
> Failing for the past 1 build (Since Failed#7416 )
> Took 0.46 sec.
> Error Message
> Mkdirs failed to create d1
> Stacktrace
> java.io.IOException: Mkdirs failed to create d1
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:440)
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:426)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:906)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:887)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:849)
>   at org.apache.hadoop.fs.FileSystem.createNewFile(FileSystem.java:1149)
>   at 
> org.apache.hadoop.fs.shell.TestPathData.initialize(TestPathData.java:54)
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10872) org.apache.hadoop.fs.shell.TestPathData failed intermittently with "Mkdirs failed to create d1"

2014-07-22 Thread Yongjun Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10872?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14071301#comment-14071301
 ] 

Yongjun Zhang commented on HADOOP-10872:


Submitted patch 001 for debugging purposes; it is not a solution yet.

> org.apache.hadoop.fs.shell.TestPathData failed intermittently with "Mkdirs 
> failed to create d1"
> ---
>
> Key: HADOOP-10872
> URL: https://issues.apache.org/jira/browse/HADOOP-10872
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.5.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HADOOP-10872.001.dbg.patch
>
>
> A bunch of TestPathData tests failed intermittently, e.g.
> https://builds.apache.org/job/PreCommit-HDFS-Build/7416//testReport/
> Example failure log:
> {code}
> Failed
> org.apache.hadoop.fs.shell.TestPathData.testUnqualifiedUriContents
> Failing for the past 1 build (Since Failed#7416 )
> Took 0.46 sec.
> Error Message
> Mkdirs failed to create d1
> Stacktrace
> java.io.IOException: Mkdirs failed to create d1
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:440)
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:426)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:906)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:887)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:849)
>   at org.apache.hadoop.fs.FileSystem.createNewFile(FileSystem.java:1149)
>   at 
> org.apache.hadoop.fs.shell.TestPathData.initialize(TestPathData.java:54)
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10872) org.apache.hadoop.fs.shell.TestPathData failed intermittently with "Mkdirs failed to create d1"

2014-07-22 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HADOOP-10872:
---

Attachment: HADOOP-10872.001.dbg.patch

A patch to print debug messages.


> org.apache.hadoop.fs.shell.TestPathData failed intermittently with "Mkdirs 
> failed to create d1"
> ---
>
> Key: HADOOP-10872
> URL: https://issues.apache.org/jira/browse/HADOOP-10872
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.5.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HADOOP-10872.001.dbg.patch
>
>
> A bunch of TestPathData tests failed intermittently, e.g.
> https://builds.apache.org/job/PreCommit-HDFS-Build/7416//testReport/
> Example failure log:
> {code}
> Failed
> org.apache.hadoop.fs.shell.TestPathData.testUnqualifiedUriContents
> Failing for the past 1 build (Since Failed#7416 )
> Took 0.46 sec.
> Error Message
> Mkdirs failed to create d1
> Stacktrace
> java.io.IOException: Mkdirs failed to create d1
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:440)
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:426)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:906)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:887)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:849)
>   at org.apache.hadoop.fs.FileSystem.createNewFile(FileSystem.java:1149)
>   at 
> org.apache.hadoop.fs.shell.TestPathData.initialize(TestPathData.java:54)
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10872) org.apache.hadoop.fs.shell.TestPathData failed intermittently with "Mkdirs failed to create d1"

2014-07-22 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10872?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HADOOP-10872:
---

Status: Patch Available  (was: Open)

> org.apache.hadoop.fs.shell.TestPathData failed intermittently with "Mkdirs 
> failed to create d1"
> ---
>
> Key: HADOOP-10872
> URL: https://issues.apache.org/jira/browse/HADOOP-10872
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 2.5.0
>Reporter: Yongjun Zhang
>Assignee: Yongjun Zhang
> Attachments: HADOOP-10872.001.dbg.patch
>
>
> A bunch of TestPathData tests failed intermittently, e.g.
> https://builds.apache.org/job/PreCommit-HDFS-Build/7416//testReport/
> Example failure log:
> {code}
> Failed
> org.apache.hadoop.fs.shell.TestPathData.testUnqualifiedUriContents
> Failing for the past 1 build (Since Failed#7416 )
> Took 0.46 sec.
> Error Message
> Mkdirs failed to create d1
> Stacktrace
> java.io.IOException: Mkdirs failed to create d1
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:440)
>   at 
> org.apache.hadoop.fs.ChecksumFileSystem.create(ChecksumFileSystem.java:426)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:906)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:887)
>   at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:849)
>   at org.apache.hadoop.fs.FileSystem.createNewFile(FileSystem.java:1149)
>   at 
> org.apache.hadoop.fs.shell.TestPathData.initialize(TestPathData.java:54)
> {code}



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10884) Fix dead link in Configuration javadoc

2014-07-22 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HADOOP-10884:
--

 Summary: Fix dead link in Configuration javadoc
 Key: HADOOP-10884
 URL: https://issues.apache.org/jira/browse/HADOOP-10884
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: documentation
Affects Versions: 2.0.2-alpha
Reporter: Akira AJISAKA
Priority: Minor


In [Configuration 
javadoc|http://hadoop.apache.org/docs/r2.4.1/api/org/apache/hadoop/conf/Configuration.html],
 the link to core-site.xml is dead. We should fix it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10883) CompositeInputFormat javadoc is broken

2014-07-22 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10883?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-10883:
---

Issue Type: Bug  (was: Sub-task)
Parent: (was: HADOOP-10873)

> CompositeInputFormat javadoc is broken
> --
>
> Key: HADOOP-10883
> URL: https://issues.apache.org/jira/browse/HADOOP-10883
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.0.2-alpha
>Reporter: Akira AJISAKA
>Assignee: Akira AJISAKA
>Priority: Minor
>  Labels: newbie
>
> In [CompositeInputFormat 
> javadoc|https://hadoop.apache.org/docs/r2.4.1/api/org/apache/hadoop/mapred/join/CompositeInputFormat.html],
>  some part of the description is converted to hyperlink by {{@see}} tag.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10883) CompositeInputFormat javadoc is broken

2014-07-22 Thread Akira AJISAKA (JIRA)
Akira AJISAKA created HADOOP-10883:
--

 Summary: CompositeInputFormat javadoc is broken
 Key: HADOOP-10883
 URL: https://issues.apache.org/jira/browse/HADOOP-10883
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: documentation
Affects Versions: 2.0.2-alpha
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor


In [CompositeInputFormat 
javadoc|https://hadoop.apache.org/docs/r2.4.1/api/org/apache/hadoop/mapred/join/CompositeInputFormat.html],
 some part of the description is converted to hyperlink by {{@see}} tag.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10882) Move DirectBufferPool into common util

2014-07-22 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-10882:
-

Attachment: hadoop-10882.txt

> Move DirectBufferPool into common util
> --
>
> Key: HADOOP-10882
> URL: https://issues.apache.org/jira/browse/HADOOP-10882
> Project: Hadoop Common
>  Issue Type: Task
>  Components: util
>Affects Versions: 2.6.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Minor
> Attachments: hadoop-10882.txt
>
>
> MAPREDUCE-2841 uses a direct buffer pool to pass data back and forth between 
> native and Java code. The branch has an implementation which appears to be 
> derived from the one in HDFS. Instead of copy-pasting, we should move the 
> HDFS DirectBufferPool into Common so that MR can make use of it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10882) Move DirectBufferPool into common util

2014-07-22 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10882?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-10882:
-

Status: Patch Available  (was: Open)

> Move DirectBufferPool into common util
> --
>
> Key: HADOOP-10882
> URL: https://issues.apache.org/jira/browse/HADOOP-10882
> Project: Hadoop Common
>  Issue Type: Task
>  Components: util
>Affects Versions: 2.6.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Minor
> Attachments: hadoop-10882.txt
>
>
> MAPREDUCE-2841 uses a direct buffer pool to pass data back and forth between 
> native and Java code. The branch has an implementation which appears to be 
> derived from the one in HDFS. Instead of copy-pasting, we should move the 
> HDFS DirectBufferPool into Common so that MR can make use of it.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10881) Clarify usage of encryption and encrypted encryption key in KeyProviderCryptoExtension

2014-07-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14071243#comment-14071243
 ] 

Hudson commented on HADOOP-10881:
-

FAILURE: Integrated in Hadoop-trunk-Commit #5947 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5947/])
HADOOP-10881. Clarify usage of encryption and encrypted encryption key in 
KeyProviderCryptoExtension. (wang) (wang: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1612737)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/KeyProviderCryptoExtension.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/crypto/key/kms/KMSClientProvider.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/crypto/key/TestKeyProviderCryptoExtension.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSServerJSONUtils.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMS.java


> Clarify usage of encryption and encrypted encryption key in 
> KeyProviderCryptoExtension
> --
>
> Key: HADOOP-10881
> URL: https://issues.apache.org/jira/browse/HADOOP-10881
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Fix For: 3.0.0
>
> Attachments: hadoop-10881.001.patch, hadoop-10881.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10882) Move DirectBufferPool into common util

2014-07-22 Thread Todd Lipcon (JIRA)
Todd Lipcon created HADOOP-10882:


 Summary: Move DirectBufferPool into common util
 Key: HADOOP-10882
 URL: https://issues.apache.org/jira/browse/HADOOP-10882
 Project: Hadoop Common
  Issue Type: Task
  Components: util
Affects Versions: 2.6.0
Reporter: Todd Lipcon
Assignee: Todd Lipcon
Priority: Minor


MAPREDUCE-2841 uses a direct buffer pool to pass data back and forth between 
native and Java code. The branch has an implementation which appears to be 
derived from the one in HDFS. Instead of copy-pasting, we should move the HDFS 
DirectBufferPool into Common so that MR can make use of it.
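For readers unfamiliar with the class being moved, here is a hedged sketch of the general technique (not the exact HDFS implementation): pool direct {{ByteBuffer}}s keyed by capacity, holding them through weak references so the garbage collector can reclaim idle buffers under memory pressure:

```java
import java.lang.ref.WeakReference;
import java.nio.ByteBuffer;
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.ConcurrentMap;

// Sketch of a direct-buffer pool: allocateDirect() is expensive, so
// returned buffers are cached per size and handed back out on request.
public class SimpleDirectBufferPool {
    private final ConcurrentMap<Integer, Queue<WeakReference<ByteBuffer>>> pools =
            new ConcurrentHashMap<>();

    public ByteBuffer getBuffer(int size) {
        Queue<WeakReference<ByteBuffer>> q = pools.get(size);
        if (q != null) {
            WeakReference<ByteBuffer> ref;
            while ((ref = q.poll()) != null) {
                ByteBuffer buf = ref.get();      // may be null if GC'd
                if (buf != null) {
                    buf.clear();
                    return buf;
                }
            }
        }
        return ByteBuffer.allocateDirect(size);  // pool miss: allocate fresh
    }

    public void returnBuffer(ByteBuffer buf) {
        pools.computeIfAbsent(buf.capacity(), k -> new ConcurrentLinkedQueue<>())
             .add(new WeakReference<>(buf));
    }

    public static void main(String[] args) {
        SimpleDirectBufferPool pool = new SimpleDirectBufferPool();
        ByteBuffer b = pool.getBuffer(1024);
        System.out.println(b.isDirect());   // true
        pool.returnBuffer(b);
        System.out.println(pool.getBuffer(1024) == b); // true: reused from pool
    }
}
```

The weak references are the key design choice: the pool never pins memory, so a quiet period lets the GC shrink it automatically.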



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10881) Clarify usage of encryption and encrypted encryption key in KeyProviderCryptoExtension

2014-07-22 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-10881:
-

   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

Thanks for the reviews, Tucu; I committed this to trunk. The TestIPC failure looks 
unrelated.

> Clarify usage of encryption and encrypted encryption key in 
> KeyProviderCryptoExtension
> --
>
> Key: HADOOP-10881
> URL: https://issues.apache.org/jira/browse/HADOOP-10881
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Fix For: 3.0.0
>
> Attachments: hadoop-10881.001.patch, hadoop-10881.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10881) Clarify usage of encryption and encrypted encryption key in KeyProviderCryptoExtension

2014-07-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14071223#comment-14071223
 ] 

Hadoop QA commented on HADOOP-10881:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12657231/hadoop-10881.002.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms:

  org.apache.hadoop.ipc.TestIPC

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4343//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4343//console

This message is automatically generated.

> Clarify usage of encryption and encrypted encryption key in 
> KeyProviderCryptoExtension
> --
>
> Key: HADOOP-10881
> URL: https://issues.apache.org/jira/browse/HADOOP-10881
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hadoop-10881.001.patch, hadoop-10881.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10881) Clarify usage of encryption and encrypted encryption key in KeyProviderCryptoExtension

2014-07-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14071209#comment-14071209
 ] 

Hadoop QA commented on HADOOP-10881:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12657226/hadoop-10881.001.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-common-project/hadoop-kms:

  org.apache.hadoop.ipc.TestIPC

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4342//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4342//console

This message is automatically generated.

> Clarify usage of encryption and encrypted encryption key in 
> KeyProviderCryptoExtension
> --
>
> Key: HADOOP-10881
> URL: https://issues.apache.org/jira/browse/HADOOP-10881
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hadoop-10881.001.patch, hadoop-10881.002.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10855) Allow Text to be read with a known length

2014-07-22 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14071187#comment-14071187
 ] 

Hudson commented on HADOOP-10855:
-

FAILURE: Integrated in Hadoop-trunk-Commit #5946 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/5946/])
HADOOP-10855. Allow Text to be read with a known Length. Contributed by Todd 
Lipcon. (todd: 
http://svn.apache.org/viewcvs.cgi/?root=Apache-SVN&view=rev&rev=1612731)
* /hadoop/common/trunk/hadoop-common-project/hadoop-common/CHANGES.txt
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/io/Text.java
* 
/hadoop/common/trunk/hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/io/TestText.java


> Allow Text to be read with a known length
> -
>
> Key: HADOOP-10855
> URL: https://issues.apache.org/jira/browse/HADOOP-10855
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 2.6.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Minor
> Fix For: 2.6.0
>
> Attachments: hadoop-10855.txt, hadoop-10855.txt, hadoop-10855.txt
>
>
> For the native task work (MAPREDUCE-2841) it is useful to be able to store 
> strings in a different fashion than the default (varint-prefixed) 
> serialization. We should provide a "read" method in Text which takes an 
> already-known length to support this use case while still providing Text 
> objects back to the user.
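The idea behind reading with an externally-known length (as opposed to the default varint-prefixed form) can be sketched as follows. Note this is a hypothetical standalone helper for illustration, not the actual method added to `Text` by this patch:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class KnownLengthRead {
    // Hypothetical helper: the caller already knows how many bytes the
    // string occupies, so no varint length prefix is read from the stream.
    static String readStringWithKnownLength(DataInputStream in, int len)
            throws IOException {
        byte[] buf = new byte[len];
        in.readFully(buf);
        return new String(buf, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        // Serialize two strings back to back with no per-string prefix;
        // their lengths are stored elsewhere (here, simply remembered).
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(bos);
        byte[] a = "hello".getBytes(StandardCharsets.UTF_8);
        byte[] b = "world!".getBytes(StandardCharsets.UTF_8);
        out.write(a);
        out.write(b);

        DataInputStream in =
                new DataInputStream(new ByteArrayInputStream(bos.toByteArray()));
        System.out.println(readStringWithKnownLength(in, a.length)); // hello
        System.out.println(readStringWithKnownLength(in, b.length)); // world!
    }
}
```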





[jira] [Updated] (HADOOP-10855) Allow Text to be read with a known length

2014-07-22 Thread Todd Lipcon (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10855?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Todd Lipcon updated HADOOP-10855:
-

   Resolution: Fixed
Fix Version/s: 2.6.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

> Allow Text to be read with a known length
> -
>
> Key: HADOOP-10855
> URL: https://issues.apache.org/jira/browse/HADOOP-10855
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 2.6.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Minor
> Fix For: 2.6.0
>
> Attachments: hadoop-10855.txt, hadoop-10855.txt, hadoop-10855.txt
>
>
> For the native task work (MAPREDUCE-2841) it is useful to be able to store 
> strings in a different fashion than the default (varint-prefixed) 
> serialization. We should provide a "read" method in Text which takes an 
> already-known length to support this use case while still providing Text 
> objects back to the user.





[jira] [Commented] (HADOOP-10881) Clarify usage of encryption and encrypted encryption key in KeyProviderCryptoExtension

2014-07-22 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14071179#comment-14071179
 ] 

Alejandro Abdelnur commented on HADOOP-10881:
-

+1 pending jenkins

> Clarify usage of encryption and encrypted encryption key in 
> KeyProviderCryptoExtension
> --
>
> Key: HADOOP-10881
> URL: https://issues.apache.org/jira/browse/HADOOP-10881
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hadoop-10881.001.patch, hadoop-10881.002.patch
>
>






[jira] [Updated] (HADOOP-10881) Clarify usage of encryption and encrypted encryption key in KeyProviderCryptoExtension

2014-07-22 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-10881:
-

Attachment: hadoop-10881.002.patch

[~tucu00] gave me some offline review comments, fixed in this patch:

* Added javadoc for the getters in EncryptedKeyVersion
* Made deriveIV protected
* Removed some static imports

> Clarify usage of encryption and encrypted encryption key in 
> KeyProviderCryptoExtension
> --
>
> Key: HADOOP-10881
> URL: https://issues.apache.org/jira/browse/HADOOP-10881
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hadoop-10881.001.patch, hadoop-10881.002.patch
>
>






[jira] [Updated] (HADOOP-10881) Clarify usage of encryption and encrypted encryption key in KeyProviderCryptoExtension

2014-07-22 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-10881:
-

Attachment: hadoop-10881.001.patch

Patch attached. I did a bunch of renames within EncryptedKeyVersion and added 
javadoc. I also renamed and added comments within the generateEEK and decryptEEK 
methods, and found a bug along the way: we were initializing the decrypt cipher 
with the wrong key! Fixed this and added a test case as well.

> Clarify usage of encryption and encrypted encryption key in 
> KeyProviderCryptoExtension
> --
>
> Key: HADOOP-10881
> URL: https://issues.apache.org/jira/browse/HADOOP-10881
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hadoop-10881.001.patch
>
>






[jira] [Updated] (HADOOP-10881) Clarify usage of encryption and encrypted encryption key in KeyProviderCryptoExtension

2014-07-22 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10881?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-10881:
-

Status: Patch Available  (was: Open)

> Clarify usage of encryption and encrypted encryption key in 
> KeyProviderCryptoExtension
> --
>
> Key: HADOOP-10881
> URL: https://issues.apache.org/jira/browse/HADOOP-10881
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Attachments: hadoop-10881.001.patch
>
>






[jira] [Commented] (HADOOP-10876) The constructor of Path should not take an empty URL as a parameter

2014-07-22 Thread zhihai xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14071114#comment-14071114
 ] 

zhihai xu commented on HADOOP-10876:


When I run the TestIPC test locally, it passes.

---
 T E S T S
---
Running org.apache.hadoop.ipc.TestIPC
Tests run: 30, Failures: 0, Errors: 0, Skipped: 1, Time elapsed: 82.359 sec - 
in org.apache.hadoop.ipc.TestIPC

Results :

Tests run: 30, Failures: 0, Errors: 0, Skipped: 1



> The constructor of Path should not take an empty URL as a parameter
> ---
>
> Key: HADOOP-10876
> URL: https://issues.apache.org/jira/browse/HADOOP-10876
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: zhihai xu
> Attachments: HADOOP-10876.000.patch
>
>
> The constructor of Path should not take an empty URL as a parameter. As 
> discussed in HADOOP-10820, this JIRA changes the public Path(URI aUri) 
> constructor to check for an empty URI and throw an IllegalArgumentException.
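A minimal sketch of the kind of guard being proposed; the real change lands in org.apache.hadoop.fs.Path, and `checkNotEmpty` here is a hypothetical stand-in:

```java
import java.net.URI;

public class PathGuard {
    // Hypothetical stand-in for the check HADOOP-10876 adds to Path(URI):
    // reject an empty URI before it can silently resolve to the current
    // working directory.
    static URI checkNotEmpty(URI aUri) {
        if (aUri.toString().isEmpty()) {
            throw new IllegalArgumentException(
                "Can not create a Path from an empty URI");
        }
        return aUri;
    }

    public static void main(String[] args) {
        System.out.println(checkNotEmpty(URI.create("hdfs://nn/user/x")));
        try {
            checkNotEmpty(URI.create(""));
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```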





[jira] [Commented] (HADOOP-10876) The constructor of Path should not take an empty URL as a parameter

2014-07-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14071107#comment-14071107
 ] 

Hadoop QA commented on HADOOP-10876:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12657198/HADOOP-10876.000.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ipc.TestIPC

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4340//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4340//console


> The constructor of Path should not take an empty URL as a parameter
> ---
>
> Key: HADOOP-10876
> URL: https://issues.apache.org/jira/browse/HADOOP-10876
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: zhihai xu
> Attachments: HADOOP-10876.000.patch
>
>
> The constructor of Path should not take an empty URL as a parameter. As 
> discussed in HADOOP-10820, this JIRA changes the public Path(URI aUri) 
> constructor to check for an empty URI and throw an IllegalArgumentException.





[jira] [Commented] (HADOOP-10855) Allow Text to be read with a known length

2014-07-22 Thread Aaron T. Myers (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10855?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14071093#comment-14071093
 ] 

Aaron T. Myers commented on HADOOP-10855:
-

+1, the patch looks good to me. I agree that the test failures are unrelated - 
they're in orthogonal parts of the code and I also just ran them all locally 
and they passed with this patch applied.

Thanks a lot, Todd.

> Allow Text to be read with a known length
> -
>
> Key: HADOOP-10855
> URL: https://issues.apache.org/jira/browse/HADOOP-10855
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: io
>Affects Versions: 2.6.0
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
>Priority: Minor
> Attachments: hadoop-10855.txt, hadoop-10855.txt, hadoop-10855.txt
>
>
> For the native task work (MAPREDUCE-2841) it is useful to be able to store 
> strings in a different fashion than the default (varint-prefixed) 
> serialization. We should provide a "read" method in Text which takes an 
> already-known length to support this use case while still providing Text 
> objects back to the user.





[jira] [Commented] (HADOOP-10791) AuthenticationFilter should support externalizing the secret for signing and provide rotation support

2014-07-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14071084#comment-14071084
 ] 

Hadoop QA commented on HADOOP-10791:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12657210/HADOOP-10791.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 5 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-auth:

  
org.apache.hadoop.security.authentication.client.TestKerberosAuthenticator

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4341//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4341//console


> AuthenticationFilter should support externalizing the secret for signing and 
> provide rotation support
> -
>
> Key: HADOOP-10791
> URL: https://issues.apache.org/jira/browse/HADOOP-10791
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.4.1
>Reporter: Alejandro Abdelnur
>Assignee: Robert Kanter
> Attachments: HADOOP-10791.patch, HADOOP-10791.patch
>
>
> It should be possible to externalize the secret used to sign the hadoop-auth 
> cookies.
> In the case of WebHDFS the shared secret used by NN and DNs could be used. In 
> the case of Oozie HA, the secret could be stored in Oozie HA control data in 
> ZooKeeper.
> In addition, it is desirable for the secret to change periodically; this 
> means that the AuthenticationService should remember the previous secret for 
> the max duration of a hadoop-auth cookie.
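The "remember a previous secret" requirement amounts to signing with the current secret while still verifying against the previous one. A minimal sketch under those assumptions; `RollingSecret` is illustrative and not the actual AuthenticationFilter API:

```java
import java.util.Arrays;

public class RollingSecret {
    private byte[] current;
    private byte[] previous; // kept for the max lifetime of outstanding cookies

    RollingSecret(byte[] initial) {
        this.current = initial.clone();
    }

    // Rotate: the old secret remains valid for verification only.
    void roll(byte[] next) {
        previous = current;
        current = next.clone();
    }

    byte[] getCurrentSecret() {
        return current; // used to sign new cookies
    }

    // A cookie is accepted if it was signed with either secret.
    boolean isValidSecret(byte[] candidate) {
        return Arrays.equals(candidate, current)
            || (previous != null && Arrays.equals(candidate, previous));
    }

    public static void main(String[] args) {
        RollingSecret rs = new RollingSecret("alpha".getBytes());
        rs.roll("beta".getBytes());
        System.out.println("old accepted: " + rs.isValidSecret("alpha".getBytes()));
        System.out.println("new accepted: " + rs.isValidSecret("beta".getBytes()));
    }
}
```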





[jira] [Commented] (HADOOP-10881) Clarify usage of encryption and encrypted encryption key in KeyProviderCryptoExtension

2014-07-22 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10881?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14071044#comment-14071044
 ] 

Andrew Wang commented on HADOOP-10881:
--

One other question I had, generateEncryptedKey uses {{SecureRandom.getSeed}} to 
generate the IV. Why {{getSeed}} (which is also deprecated in favor of 
{{generateSeed}}), instead of another {{nextBytes}} call?
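For context, the distinction the comment points at: {{getSeed}} produces seed material (intended for seeding other PRNGs, and potentially blocking while gathering entropy), while {{nextBytes}} draws from the already-seeded generator. This is generic SecureRandom usage, not the KeyProviderCryptoExtension code itself:

```java
import java.security.SecureRandom;

public class IvDemo {
    // Generate a random IV the conventional way. SecureRandom.getSeed()
    // (questioned above) produces seed material and may block while
    // gathering entropy, so nextBytes() is the usual choice for IVs.
    static byte[] newIv(int len) {
        byte[] iv = new byte[len];
        new SecureRandom().nextBytes(iv);
        return iv;
    }

    public static void main(String[] args) {
        System.out.println("iv length: " + newIv(16).length); // iv length: 16
    }
}
```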

> Clarify usage of encryption and encrypted encryption key in 
> KeyProviderCryptoExtension
> --
>
> Key: HADOOP-10881
> URL: https://issues.apache.org/jira/browse/HADOOP-10881
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0
>Reporter: Andrew Wang
>Assignee: Andrew Wang
>






[jira] [Created] (HADOOP-10881) Clarify usage of encryption and encrypted encryption key in KeyProviderCryptoExtension

2014-07-22 Thread Andrew Wang (JIRA)
Andrew Wang created HADOOP-10881:


 Summary: Clarify usage of encryption and encrypted encryption key 
in KeyProviderCryptoExtension
 Key: HADOOP-10881
 URL: https://issues.apache.org/jira/browse/HADOOP-10881
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Andrew Wang
Assignee: Andrew Wang








[jira] [Updated] (HADOOP-10791) AuthenticationFilter should support externalizing the secret for signing and provide rotation support

2014-07-22 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated HADOOP-10791:
---

Attachment: HADOOP-10791.patch

New patch fixes the findbugs warnings.

> AuthenticationFilter should support externalizing the secret for signing and 
> provide rotation support
> -
>
> Key: HADOOP-10791
> URL: https://issues.apache.org/jira/browse/HADOOP-10791
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.4.1
>Reporter: Alejandro Abdelnur
>Assignee: Robert Kanter
> Attachments: HADOOP-10791.patch, HADOOP-10791.patch
>
>
> It should be possible to externalize the secret used to sign the hadoop-auth 
> cookies.
> In the case of WebHDFS the shared secret used by NN and DNs could be used. In 
> the case of Oozie HA, the secret could be stored in Oozie HA control data in 
> ZooKeeper.
> In addition, it is desirable for the secret to change periodically; this 
> means that the AuthenticationService should remember the previous secret for 
> the max duration of a hadoop-auth cookie.





[jira] [Commented] (HADOOP-10820) Empty entry in libjars results in working directory being recursively localized

2014-07-22 Thread zhihai xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14070996#comment-14070996
 ] 

zhihai xu commented on HADOOP-10820:


Hi [~knoguchi],
I just submitted the patch in HADOOP-10876 for review.
Thanks,
zhihai

> Empty entry in libjars results in working directory being recursively 
> localized
> ---
>
> Key: HADOOP-10820
> URL: https://issues.apache.org/jira/browse/HADOOP-10820
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Alex Holmes
>Priority: Minor
> Attachments: HADOOP-10820-1.patch, HADOOP-10820.patch
>
>
> An empty token (e.g. "a.jar,,b.jar") in the -libjars option causes the 
> current working directory to be recursively localized.
> Here's an example of this in action (using Hadoop 2.2.0):
> {code}
> # create a temp directory and touch three JAR files
> mkdir -p tmp/path && cd tmp && touch a.jar b.jar c.jar path/d.jar
> # Run an example job only specifying two of the JARs.
> # Include an empty entry in libjars.
> hadoop jar 
> /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar 
> pi -libjars a.jar,,c.jar 2 10
> # As the job is running examine the localized directory in HDFS.
> # Notice that not only are the two JAR's specified in libjars copied,
> # but in addition the contents of the working directory are also recursively 
> copied.
> $ hadoop fs -lsr 
> /tmp/hadoop-yarn/staging/aholmes/.staging/job_1404752711144_0018/libjars
> /tmp/hadoop-yarn/staging/aholmes/.staging/job_1404752711144_0018/libjars/a.jar
> /tmp/hadoop-yarn/staging/aholmes/.staging/job_1404752711144_0018/libjars/c.jar
> /tmp/hadoop-yarn/staging/aholmes/.staging/job_1404752711144_0018/libjars/tmp
> /tmp/hadoop-yarn/staging/aholmes/.staging/job_1404752711144_0018/libjars/tmp/a.jar
> /tmp/hadoop-yarn/staging/aholmes/.staging/job_1404752711144_0018/libjars/tmp/b.jar
> /tmp/hadoop-yarn/staging/aholmes/.staging/job_1404752711144_0018/libjars/tmp/c.jar
> /tmp/hadoop-yarn/staging/aholmes/.staging/job_1404752711144_0018/libjars/tmp/path
> /tmp/hadoop-yarn/staging/aholmes/.staging/job_1404752711144_0018/libjars/tmp/path/d.jar
> {code}
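The root cause is that an empty token resolves to the current working directory. A defensive client-side fix is simply to drop empty entries when splitting the option value; `splitLibJars` below is illustrative, not the actual option-parsing code:

```java
import java.util.ArrayList;
import java.util.List;

public class LibJars {
    // Split a comma-separated -libjars value, dropping empty tokens so
    // "a.jar,,c.jar" does not yield "" (which resolves to the working
    // directory and gets localized recursively).
    static List<String> splitLibJars(String value) {
        List<String> jars = new ArrayList<>();
        for (String token : value.split(",")) {
            if (!token.trim().isEmpty()) {
                jars.add(token.trim());
            }
        }
        return jars;
    }

    public static void main(String[] args) {
        System.out.println(splitLibJars("a.jar,,c.jar")); // [a.jar, c.jar]
    }
}
```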





[jira] [Updated] (HADOOP-10876) The constructor of Path should not take an empty URL as a parameter

2014-07-22 Thread zhihai xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhihai xu updated HADOOP-10876:
---

Status: Patch Available  (was: Open)

> The constructor of Path should not take an empty URL as a parameter
> ---
>
> Key: HADOOP-10876
> URL: https://issues.apache.org/jira/browse/HADOOP-10876
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: zhihai xu
> Attachments: HADOOP-10876.000.patch
>
>
> The constructor of Path should not take an empty URL as a parameter. As 
> discussed in HADOOP-10820, this JIRA changes the public Path(URI aUri) 
> constructor to check for an empty URI and throw an IllegalArgumentException.





[jira] [Updated] (HADOOP-10876) The constructor of Path should not take an empty URL as a parameter

2014-07-22 Thread zhihai xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10876?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

zhihai xu updated HADOOP-10876:
---

Attachment: HADOOP-10876.000.patch

> The constructor of Path should not take an empty URL as a parameter
> ---
>
> Key: HADOOP-10876
> URL: https://issues.apache.org/jira/browse/HADOOP-10876
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: zhihai xu
> Attachments: HADOOP-10876.000.patch
>
>
> The constructor of Path should not take an empty URL as a parameter. As 
> discussed in HADOOP-10820, this JIRA changes the public Path(URI aUri) 
> constructor to check for an empty URI and throw an IllegalArgumentException.





[jira] [Resolved] (HADOOP-6075) TestTaskTrackerMemoryManager fails with NPE

2014-07-22 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6075?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-6075.
--

Resolution: Incomplete

Closing as stale.

> TestTaskTrackerMemoryManager fails with NPE
> ---
>
> Key: HADOOP-6075
> URL: https://issues.apache.org/jira/browse/HADOOP-6075
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Amar Kamat
>
> Here is the error
> {code}
> null
> java.lang.NullPointerException
> at 
> org.apache.hadoop.mapred.TestTaskTrackerMemoryManager.testTasksBeyondLimits(TestTaskTrackerMemoryManager.java:256)
> {code}





[jira] [Resolved] (HADOOP-6069) Remove explicit dynamic loading of libz in native code

2014-07-22 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6069?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-6069.
--

Resolution: Won't Fix

We now dlopen() snappy, so the pendulum has swung the other way again.  Closing.

> Remove explicit dynamic loading of libz in native code
> --
>
> Key: HADOOP-6069
> URL: https://issues.apache.org/jira/browse/HADOOP-6069
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Todd Lipcon
>Assignee: Todd Lipcon
> Attachments: hadoop-6069-guts.txt, hadoop-6069.txt
>
>
> The native zlib code currently uses dlopen/dlsym to dynamically load libz. 
> This used to make sense when there was an lzo option (so you could load 
> libhadoop for lzo purposes without requiring libz as well). Now that 
> libhadoop only has zlib as an option, it makes sense to just add it as an ld 
> flag and let it be automatically loaded as a shlib dependency. I also doubt 
> that there are any distros where libz isn't required by the base system.





[jira] [Resolved] (HADOOP-6065) TestRunningTaskLimit doesnt work as expected

2014-07-22 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-6065.
--

Resolution: Incomplete

Closing this as stale.

> TestRunningTaskLimit doesnt work as expected
> 
>
> Key: HADOOP-6065
> URL: https://issues.apache.org/jira/browse/HADOOP-6065
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Amar Kamat
>
> I see the following code in TestRunningTaskLimit
> {code}
> JobConf jobConf = createWaitJobConf(mr, "job1", 20, 20);
> jobConf.setRunningMapLimit(5);
> jobConf.setRunningReduceLimit(3);
> 
> // Submit the job
> RunningJob rJob = (new JobClient(jobConf)).submitJob(jobConf);
> 
> // Wait 20 seconds for it to start up
> UtilsForTests.waitFor(2);
> 
> // Check the number of running tasks
> JobTracker jobTracker = mr.getJobTrackerRunner().getJobTracker();
> JobInProgress jip = jobTracker.getJob(rJob.getID());
> assertEquals(5, jip.runningMaps());
> assertEquals(3, jip.runningReduces());
> {code}
> This check is timing-based and might not work as expected. Instead, we can run 
> a job with > 5 maps (all waiting), wait for the job to reach a stable state, 
> and then test whether exactly 5 maps were scheduled.
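The suggested fix (wait for a condition rather than sleeping a fixed amount) is a common test pattern. A generic sketch, not the actual UtilsForTests API:

```java
import java.util.function.BooleanSupplier;

public class WaitFor {
    // Poll until the condition holds or the timeout expires, instead of
    // sleeping a fixed amount and hoping the cluster reached the state.
    static boolean waitFor(BooleanSupplier condition, long timeoutMs)
            throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true;
            }
            Thread.sleep(50);
        }
        return condition.getAsBoolean();
    }

    public static void main(String[] args) throws InterruptedException {
        long start = System.currentTimeMillis();
        // Condition becomes true after roughly 200 ms.
        boolean ok = waitFor(() -> System.currentTimeMillis() - start > 200, 5000);
        System.out.println("condition met: " + ok); // condition met: true
    }
}
```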





[jira] [Resolved] (HADOOP-6066) TestJobTrackerSafeMode might not work as expected

2014-07-22 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-6066.
--

Resolution: Fixed

Closing as stale.

> TestJobTrackerSafeMode might not work as expected
> -
>
> Key: HADOOP-6066
> URL: https://issues.apache.org/jira/browse/HADOOP-6066
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Amar Kamat
>Assignee: Amar Kamat
>
> It failed on trunk for me. Looks like the mapred.tasktracker.expiry.interval 
> is set to 5 sec, which is too low. I think we should set it more carefully.





[jira] [Resolved] (HADOOP-6050) History cleaner is started only on successful job completion and not on killed/failed job.

2014-07-22 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6050?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-6050.
--

Resolution: Incomplete

Closing this as stale.

> History cleaner is started only on successful job completion and not on 
> killed/failed job. 
> ---
>
> Key: HADOOP-6050
> URL: https://issues.apache.org/jira/browse/HADOOP-6050
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Amar Kamat
>
> Not sure if this is intentional, but I think we should rethink the way the 
> history cleaner works. A test case for the history cleaner is also missing.





[jira] [Updated] (HADOOP-10880) Move HTTP delegation tokens out of URL querystring to a header

2014-07-22 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-10880:


Priority: Blocker  (was: Major)

Making it a blocker so we don't release the new HTTP DT support without this.

> Move HTTP delegation tokens out of URL querystring to a header
> --
>
> Key: HADOOP-10880
> URL: https://issues.apache.org/jira/browse/HADOOP-10880
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.4.1
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
>Priority: Blocker
>
> Following up on a discussion in HADOOP-10799.
> Because URLs are often logged, delegation tokens may end up in LOG files 
> while they are still valid. 
> We should move the tokens to a header.
> We should still support tokens in the querystring for backwards compatibility.





[jira] [Commented] (HADOOP-10799) Refactor HTTP delegation token logic from httpfs into reusable code in hadoop-common.

2014-07-22 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14070896#comment-14070896
 ] 

Alejandro Abdelnur commented on HADOOP-10799:
-

[~daryn], created HADOOP-10880 for that, made it blocker.

> Refactor HTTP delegation token logic from httpfs into reusable code in 
> hadoop-common.
> -
>
> Key: HADOOP-10799
> URL: https://issues.apache.org/jira/browse/HADOOP-10799
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Attachments: HADOOP-10799.patch, HADOOP-10799.patch, 
> HADOOP-10799.patch, HADOOP-10799.patch, HADOOP-10799.patch, 
> HADOOP-10799.patch, HADOOP-10799.patch, HADOOP-10799.patch
>
>






[jira] [Commented] (HADOOP-10880) Move HTTP delegation tokens out of URL querystring to a header

2014-07-22 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10880?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14070894#comment-14070894
 ] 

Alejandro Abdelnur commented on HADOOP-10880:
-

We could do this in a backwards-compatible way:

* The new client-side DT logic would use a header by default.
* The new client-side DT logic would support a config switch to force 
using the querystring (to support old services).
* The server-side DT logic would check the header, with a fallback to the 
querystring (to support old clients).
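The server-side fallback in the last bullet reduces to: prefer the header value, else fall back to the query parameter. A minimal sketch; the method and parameter names here are illustrative, not the agreed-upon API:

```java
public class TokenResolver {
    // Prefer the token carried in the (hypothetical) header; fall back to
    // the querystring value so old clients keep working. Returns null when
    // no token was supplied at all.
    static String resolveToken(String headerValue, String queryParamValue) {
        if (headerValue != null && !headerValue.isEmpty()) {
            return headerValue;
        }
        return queryParamValue;
    }

    public static void main(String[] args) {
        System.out.println(resolveToken("tok-from-header", "tok-from-query"));
        System.out.println(resolveToken(null, "tok-from-query"));
    }
}
```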

> Move HTTP delegation tokens out of URL querystring to a header
> --
>
> Key: HADOOP-10880
> URL: https://issues.apache.org/jira/browse/HADOOP-10880
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.4.1
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
>
> Following up on a discussion in HADOOP-10799.
> Because URLs are often logged, delegation tokens may end up in LOG files 
> while they are still valid. 
> We should move the tokens to a header.
> We should still support tokens in the querystring for backwards compatibility.





[jira] [Created] (HADOOP-10880) Move HTTP delegation tokens out of URL querystring to a header

2014-07-22 Thread Alejandro Abdelnur (JIRA)
Alejandro Abdelnur created HADOOP-10880:
---

 Summary: Move HTTP delegation tokens out of URL querystring to a 
header
 Key: HADOOP-10880
 URL: https://issues.apache.org/jira/browse/HADOOP-10880
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.4.1
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur


Because URLs are often logged, delegation tokens may end up in LOG files while 
they are still valid. 

We should move the tokens to a header.

We should still support tokens in the querystring for backwards compatibility.





[jira] [Updated] (HADOOP-10880) Move HTTP delegation tokens out of URL querystring to a header

2014-07-22 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10880?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-10880:


Description: 
Following up on a discussion in HADOOP-10799.

Because URLs are often logged, delegation tokens may end up in LOG files while 
they are still valid. 

We should move the tokens to a header.

We should still support tokens in the querystring for backwards compatibility.

  was:
Because URLs are often logged, delegation tokens may end up in LOG files while 
they are still valid. 

We should move the tokens to a header.

We should still support tokens in the querystring for backwards compatibility.


> Move HTTP delegation tokens out of URL querystring to a header
> --
>
> Key: HADOOP-10880
> URL: https://issues.apache.org/jira/browse/HADOOP-10880
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.4.1
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
>
> Following up on a discussion in HADOOP-10799.
> Because URLs are often logged, delegation tokens may end up in LOG files 
> while they are still valid. 
> We should move the tokens to a header.
> We should still support tokens in the querystring for backwards compatibility.





[jira] [Updated] (HADOOP-5997) Many test jobs write to HDFS under /

2014-07-22 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-5997:
-

Assignee: (was: Ramya Sunil)

> Many test jobs write to HDFS under /
> 
>
> Key: HADOOP-5997
> URL: https://issues.apache.org/jira/browse/HADOOP-5997
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.20.0
>Reporter: Ramya Sunil
>  Labels: newbie
> Attachments: HADOOP-5997-branch20.patch
>
>
> Many test jobs such as testmapredsort, TestDFSIO and nnbench try to write to 
> HDFS under root. 
> If a user 'X' brings up the cluster and gives full access to user 'Y' under 
> /user/Y, user Y's test jobs still will not run because they demand access to 
> / which cannot be granted. Such jobs should be modified to write their temp 
> outputs under /user/Y and not directly under /





[jira] [Updated] (HADOOP-5997) Many test jobs write to HDFS under /

2014-07-22 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5997?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-5997:
-

Labels: newbie  (was: )

> Many test jobs write to HDFS under /
> 
>
> Key: HADOOP-5997
> URL: https://issues.apache.org/jira/browse/HADOOP-5997
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Affects Versions: 0.20.0
>Reporter: Ramya Sunil
>Assignee: Ramya Sunil
>  Labels: newbie
> Attachments: HADOOP-5997-branch20.patch
>
>
> Many test jobs such as testmapredsort, TestDFSIO and nnbench try to write to 
> HDFS under root. 
> If a user 'X' brings up the cluster and gives full access to user 'Y' under 
> /user/Y, user Y's test jobs still will not run because they demand access to 
> / which cannot be granted. Such jobs should be modified to write their temp 
> outputs under /user/Y and not directly under /



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-5991) gridmix-env-2 should not have fixed values for HADOOP_VERSION and HADOOP_HOME

2014-07-22 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-5991.
--

Resolution: Incomplete

Gridmix2 was replaced with gridmix3. Closing.

> gridmix-env-2 should not have fixed values for HADOOP_VERSION and HADOOP_HOME
> -
>
> Key: HADOOP-5991
> URL: https://issues.apache.org/jira/browse/HADOOP-5991
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: benchmarks
>Affects Versions: 0.20.0
>Reporter: Suman Sehgal
>Priority: Minor
>
> "gridmix-env-2" of gridmix2 has fixed, stale values for HADOOP_VERSION and 
> HADOOP_HOME that override the default environment settings. These lines 
> should either be commented out or left with no value (templates only), as is 
> the case with "gridmix-env" of gridmix.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-5988) Add a command to ' FsShell stat ' to get a file's block location information

2014-07-22 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5988?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-5988.
--

Resolution: Fixed

fsck already does this.

> Add a command to ' FsShell stat ' to get a file's block location information
> 
>
> Key: HADOOP-5988
> URL: https://issues.apache.org/jira/browse/HADOOP-5988
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: He Yongqiang
> Attachments: HADOOP-5988.patch, HADOOP-5988_v01.patch
>
>
> Adding an option to ' FsShell stat ' to get a file's block location 
> information will be very useful.
> we can print the block location information in this format:
> blockID:X  byte-range:-  location:dn1;dn2;
> blockID:X  byte-range:-  location:dn1;dn2;



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-5983) Namenode shouldn't read mapred-site.xml

2014-07-22 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5983?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-5983.
--

Resolution: Fixed

Closing this again.

> Namenode shouldn't read mapred-site.xml
> ---
>
> Key: HADOOP-5983
> URL: https://issues.apache.org/jira/browse/HADOOP-5983
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 1.1.0
>Reporter: Rajiv Chittajallu
>
> The name node seem to read mapred-site.xml and fails if it can't parse it.
> 2009-06-05 22:37:15,663 FATAL org.apache.hadoop.conf.Configuration: error 
> parsing conf file: org.xml.sax.SAXParseException: Error attempting to parse 
> XML file (href='/hadoop/conf/local/local-mapred-site.xml').
> 2009-06-05 22:37:15,664 ERROR 
> org.apache.hadoop.hdfs.server.namenode.NameNode: java.lang.RuntimeException: 
> org.xml.sax.SAXParseException: Error attempting to parse XML file 
> (href='/hadoop/conf/local/local-mapred-site.xml').
> In our config,  local-mapred-site.xml is included only in mapred-site.xml 
> which we don't push to the namenode.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-5962) fs tests should not be placed in hdfs.

2014-07-22 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5962?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-5962.
--

Resolution: Fixed

This has almost certainly been fixed with the mavenization, etc.

> fs tests should not be placed in hdfs.
> --
>
> Key: HADOOP-5962
> URL: https://issues.apache.org/jira/browse/HADOOP-5962
> Project: Hadoop Common
>  Issue Type: Task
>  Components: test
>Reporter: Tsz Wo Nicholas Sze
>
> The following tests are under the org.apache.hadoop.fs package but were moved 
> to hdfs sub-directory by HADOOP-5135:
> {noformat}
> ./org/apache/hadoop/fs/ftp/TestFTPFileSystem.java
> ./org/apache/hadoop/fs/loadGenerator/TestLoadGenerator.java
> ./org/apache/hadoop/fs/permission/TestStickyBit.java
> ./org/apache/hadoop/fs/TestGlobPaths.java
> ./org/apache/hadoop/fs/TestUrlStreamHandler.java
> {noformat}
> - Some of them are not related to hdfs, e.g. TestFTPFileSystem. These files 
> should be moved out of hdfs and should not use hdfs code.
> - Some of them are testing hdfs features, e.g. TestStickyBit. They should be 
> defined under org.apache.hadoop.hdfs package.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-5916) Standardize fall-back value of test.build.data for testing directories

2014-07-22 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5916?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-5916.
--

Resolution: Incomplete

I'm going to close this as stale.

> Standardize fall-back value of test.build.data for testing directories
> --
>
> Key: HADOOP-5916
> URL: https://issues.apache.org/jira/browse/HADOOP-5916
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Jakob Homan
> Attachments: test.build.data.txt
>
>
> Currently when the test.build.data property is not set, as happens when run 
> through some configurations of Eclipse, the fall-back value varies wildly.  
> Most calls default to /tmp, which is not good as it is beyond the scope of 
> the ant clean task and thus will not be deleted.  Others default to "." which 
> can drop the test files right in the current directory.  Speaking with 
> Konstantin, it seems the correct location should be "build/test/data" to 
> ensure any files that are created are within the scope of Ant's clean command.
> Attached is the current variation in this setting.
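The standardized fallback described above amounts to a one-line pattern. A minimal, self-contained sketch (the helper name is hypothetical, not the actual Hadoop test utility):

```java
import java.io.File;

public class TestDirFallback {
    // Hypothetical helper illustrating the proposed standard fallback:
    // consult test.build.data, defaulting to build/test/data so that
    // anything tests create stays within reach of "ant clean".
    static File getTestDir(String subdir) {
        String base = System.getProperty("test.build.data", "build/test/data");
        return new File(base, subdir);
    }

    public static void main(String[] args) {
        System.out.println(getTestDir("d1").getPath());
    }
}
```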



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-5945) Support running multiple DataNodes/TaskTrackers simultaneously in a single node

2014-07-22 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5945?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-5945.
--

Resolution: Not a Problem

> Support running multiple DataNodes/TaskTrackers simultaneously in a single 
> node
> ---
>
> Key: HADOOP-5945
> URL: https://issues.apache.org/jira/browse/HADOOP-5945
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: He Yongqiang
>
> We should support multiple datanodes/tasktrackers running on the same node, 
> provided they do not share the same port, local fs dir, etc. I think Hadoop 
> can be easily adapted to meet this.  
> I guess the first and major step is to modify the scripts to support starting 
> multiple datanode/tasktracker daemons on the same node.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-5917) Testpatch isn't catching newly introduced javac warnings

2014-07-22 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5917?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-5917.
--

Resolution: Incomplete

This is likely stale as well.

> Testpatch isn't catching newly introduced javac warnings
> 
>
> Key: HADOOP-5917
> URL: https://issues.apache.org/jira/browse/HADOOP-5917
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Jakob Homan
>Assignee: Giridharan Kesavan
>
> Testpatch doesn't seem to be catching newly introduced javac warnings, as 
> detailed in the results of the experiment below.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-5901) FileSystem.fixName() has unexpected behaviour

2014-07-22 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-5901?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14070791#comment-14070791
 ] 

Allen Wittenauer commented on HADOOP-5901:
--

This should really get revisited to finish this up.

> FileSystem.fixName() has unexpected behaviour
> -
>
> Key: HADOOP-5901
> URL: https://issues.apache.org/jira/browse/HADOOP-5901
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.21.0
>Reporter: Steve Loughran
>Assignee: Aaron Kimball
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-5901.2.patch, HADOOP-5901.3.patch, 
> HADOOP-5901.4.patch, HADOOP-5901.patch
>
>
> {{FileSystem.fixName()}} tries to patch up fs.default.name values, but I'm 
> not sure it helps that well. 
> Has it been warning about deprecated values for long enough for it to be 
> turned off? 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-5901) FileSystem.fixName() has unexpected behaviour

2014-07-22 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5901?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-5901:
-

Labels: newbie  (was: )

> FileSystem.fixName() has unexpected behaviour
> -
>
> Key: HADOOP-5901
> URL: https://issues.apache.org/jira/browse/HADOOP-5901
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.21.0
>Reporter: Steve Loughran
>Assignee: Aaron Kimball
>Priority: Minor
>  Labels: newbie
> Attachments: HADOOP-5901.2.patch, HADOOP-5901.3.patch, 
> HADOOP-5901.4.patch, HADOOP-5901.patch
>
>
> {{FileSystem.fixName()}} tries to patch up fs.default.name values, but I'm 
> not sure it helps that well. 
> Has it been warning about deprecated values for long enough for it to be 
> turned off? 



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-5894) ChecksumFileSystem is ignoring 0 byte CRC files

2014-07-22 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5894?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-5894:
-

Labels: newbie  (was: )

> ChecksumFileSystem is ignoring 0 byte CRC files
> ---
>
> Key: HADOOP-5894
> URL: https://issues.apache.org/jira/browse/HADOOP-5894
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Reporter: Jothi Padmanabhan
>Priority: Minor
>  Labels: newbie
>
> We saw an issue where the original file is not empty but the corresponding 
> CRC file is empty (do not know why, could be because the process that wrote 
> the file crashed in between). While reading, fs.open got a EOFException when 
> trying to read the checksum version from the CRC file and crc validation was 
> disabled for the file. Since the original intention was to have the CRC 
> validations for this file, should we just fail here instead of ignoring the 
> exception?  
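The fail-fast alternative the question points at can be sketched in a few lines. This is a self-contained illustration, not the actual ChecksumFileSystem code, and the method name is hypothetical:

```java
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

public class CrcHeaderCheck {
    // Hypothetical sketch: read the checksum version from a .crc file.
    // Old behaviour: an EOFException on an empty .crc file is swallowed
    // and CRC validation is silently disabled. Proposed behaviour: fail,
    // since the writer clearly intended checksums to exist.
    static short readCrcVersion(InputStream crcFile) throws IOException {
        try {
            return new DataInputStream(crcFile).readShort();
        } catch (EOFException e) {
            throw new IOException("Empty or truncated CRC file", e);
        }
    }
}
```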



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-5889) Allow writing to output directories that exist, as long as they are empty

2014-07-22 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5889?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-5889.
--

Resolution: Incomplete

I'm just going to close this as stale, especially given s3 allegedly works.

> Allow writing to output directories that exist, as long as they are empty
> -
>
> Key: HADOOP-5889
> URL: https://issues.apache.org/jira/browse/HADOOP-5889
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 0.18.3
>Reporter: Ian Nowland
> Attachments: HADOOP-5889-0.patch
>
>
> The current behavior in FileOutputFormat.checkOutputSpecs is to fail if the 
> path specified by mapred.output.dir exists at the start of the job. This is 
> to protect from accidentally overwriting existing data. There seems no harm 
> then in slightly relaxing this check to allow the case for the output to 
> exist if it is an empty directory.
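The relaxed check amounts to testing for a non-empty listing rather than bare existence. A minimal sketch, with java.io.File standing in for the Hadoop FileSystem API and a hypothetical method name:

```java
import java.io.File;
import java.io.IOException;

public class OutputSpecCheck {
    // Hypothetical sketch of the relaxed checkOutputSpecs() rule:
    // fail only when the output directory exists AND contains entries,
    // instead of failing whenever it exists at all.
    static void checkOutputDir(File out) throws IOException {
        if (out.exists()) {
            String[] children = out.list();
            if (children == null || children.length > 0) {
                throw new IOException(
                    "Output directory " + out + " already exists and is not empty");
            }
        }
    }
}
```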
> At a minimum this would allow outputting to the root of S3N buckets, which is 
> currently impossible (https://issues.apache.org/jira/browse/HADOOP-5805).



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-5888) Improve test run time by avoiding 0.0.0.0 lookups

2014-07-22 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5888?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-5888.
--

Resolution: Fixed

I'm going to close this as half-fixed/half-won't-fix: some tests need to stay 
on 0.0.0.0 to make sure we test that configuration, since we give it to users.

> Improve test run time by avoiding 0.0.0.0 lookups
> -
>
> Key: HADOOP-5888
> URL: https://issues.apache.org/jira/browse/HADOOP-5888
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Todd Lipcon
> Attachments: hadoop-5888.txt
>
>
> HADOOP-3694 discusses the fact that 0.0.0.0 is slower to reverse than 
> 127.0.0.1 on a lot of systems. The replacing of 0.0.0.0 with 127.0.0.1 was 
> only partially completed in that ticket. This ticket is to track continued 
> work on that front.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-5874) Add the crc32 sum checks to sort validator

2014-07-22 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-5874?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14070773#comment-14070773
 ] 

Allen Wittenauer commented on HADOOP-5874:
--

Ping!  Was this actually done?

> Add the crc32 sum checks to sort validator
> --
>
> Key: HADOOP-5874
> URL: https://issues.apache.org/jira/browse/HADOOP-5874
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Owen O'Malley
>
> It was very useful to compare the sum of the crc32 of the key/value pairs 
> before and after the sort.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10879) Rename *-env.sh in the tree to *-env.sh.example

2014-07-22 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-10879:
-

 Summary: Rename *-env.sh in the tree to *-env.sh.example
 Key: HADOOP-10879
 URL: https://issues.apache.org/jira/browse/HADOOP-10879
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Allen Wittenauer


With HADOOP-9902 in place, we no longer have to ship *-env.sh files as such 
and can provide only examples.  This goes a long way toward being able to 
upgrade the binaries in place, since we would no longer overwrite those files 
upon extraction.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-5772) Implement a 'refreshable' configuration system with right access-controls etc.

2014-07-22 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5772?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-5772.
--

Resolution: Duplicate

> Implement a 'refreshable' configuration system with right access-controls etc.
> --
>
> Key: HADOOP-5772
> URL: https://issues.apache.org/jira/browse/HADOOP-5772
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: conf
>Reporter: Arun C Murthy
>
> We have various bits and pieces of code to refresh certain configuration 
> files, various components to restrict access to who can actually refresh the 
> configs etc. 
> I propose we start thinking about a simple system to support this as a 
> first-class citizen in Hadoop Core...



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10799) Refactor HTTP delegation token logic from httpfs into reusable code in hadoop-common.

2014-07-22 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10799?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14070759#comment-14070759
 ] 

Daryn Sharp commented on HADOOP-10799:
--

Haven't reviewed, but if you haven't already, can you please ensure that the 
ability to use a query-string param is confined to webhdfs? If we allow it to 
spread to new services, we will never be able to remove it.

> Refactor HTTP delegation token logic from httpfs into reusable code in 
> hadoop-common.
> -
>
> Key: HADOOP-10799
> URL: https://issues.apache.org/jira/browse/HADOOP-10799
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: security
>Affects Versions: 3.0.0
>Reporter: Alejandro Abdelnur
>Assignee: Alejandro Abdelnur
> Attachments: HADOOP-10799.patch, HADOOP-10799.patch, 
> HADOOP-10799.patch, HADOOP-10799.patch, HADOOP-10799.patch, 
> HADOOP-10799.patch, HADOOP-10799.patch, HADOOP-10799.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-5814) NativeS3FileSystem doesn't report progress when writing

2014-07-22 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5814?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-5814:
-

Resolution: Incomplete
Status: Resolved  (was: Patch Available)

I suspect this is stale.  If not, please open a new jira. Thanks!

> NativeS3FileSystem doesn't report progress when writing
> ---
>
> Key: HADOOP-5814
> URL: https://issues.apache.org/jira/browse/HADOOP-5814
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/s3
>Reporter: Tom White
>  Labels: S3Native
> Attachments: HADOOP-5814(1).patch, HADOOP-5814.patch
>
>
> This results in timeouts since the whole file is uploaded in the close 
> method. See 
> http://www.mail-archive.com/core-user@hadoop.apache.org/msg09881.html.
> One solution is to keep a reference to the Progressable passed in to the 
> NativeS3FsOutputStream's constructor, and progress it during writes, and 
> while copying the backup file to S3 in the close method.
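The suggested fix is mechanical: hold on to the Progressable and tick it on every write. A self-contained sketch (Progressable is re-declared here as a stand-in for org.apache.hadoop.util.Progressable, and the class name is hypothetical):

```java
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class ProgressingOutputStream extends FilterOutputStream {
    // Minimal stand-in for org.apache.hadoop.util.Progressable.
    public interface Progressable { void progress(); }

    private final Progressable progress;

    // Keep the Progressable passed to the constructor so the framework
    // sees liveness while data is written and later copied to S3.
    public ProgressingOutputStream(OutputStream out, Progressable progress) {
        super(out);
        this.progress = progress;
    }

    @Override
    public void write(int b) throws IOException {
        out.write(b);
        progress.progress();    // report progress on every write
    }

    @Override
    public void write(byte[] b, int off, int len) throws IOException {
        out.write(b, off, len); // bulk write, bypassing byte-by-byte default
        progress.progress();
    }
}
```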



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-6766) Spill can fail with bad call to Random

2014-07-22 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6766?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-6766.
--

Resolution: Cannot Reproduce

> Spill can fail with bad call to Random
> --
>
> Key: HADOOP-6766
> URL: https://issues.apache.org/jira/browse/HADOOP-6766
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 0.20.2
>Reporter: Peter Arthur Ciccolo
>Priority: Minor
>
> java.lang.IllegalArgumentException: n must be positive
> at java.util.Random.nextInt(Random.java:250)
> at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:243)
> at 
> org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:289)
> at 
> org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:124)
> at 
> org.apache.hadoop.mapred.MapOutputFile.getSpillFileForWrite(MapOutputFile.java:107)
> at 
> org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:1221)
> at 
> org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1129)
> at 
> org.apache.hadoop.mapred.MapTask$NewOutputCollector.close(MapTask.java:549)
> at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:623)
> at org.apache.hadoop.mapred.MapTask.run(MapTask.java:305)
> at org.apache.hadoop.mapred.Child.main(Child.java:159)
> confChanged assumes that the list of dirs it creates 
> (LocalDirAllocator.java:215) has at least one element in it by the end of the 
> function. If, for each local dir, either the conditional on line 221 is 
> false, or the call to DiskChecker.checkDir() throws an exception, this 
> assumption will not hold. In this case, dirIndexRandomizer.nextInt() is 
> called on the number of elements in dirs, which is 0. Since 
> dirIndexRandomizer (195) is an instance of Random(), it needs a positive 
> (non-zero) argument to nextInt().
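The guard the description implies is small. A minimal, self-contained sketch of the logic (class and method names are hypothetical stand-ins for the LocalDirAllocator internals):

```java
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

public class DirPicker {
    private static final Random DIR_INDEX_RANDOMIZER = new Random();

    // Stand-in for the per-dir checks (the conditional and
    // DiskChecker.checkDir()): here, any non-blank name counts as usable.
    static boolean isUsable(String dir) {
        return dir != null && !dir.trim().isEmpty();
    }

    // Stand-in for confChanged(): filter the configured dirs, then pick a
    // random starting index. The reported bug: if every dir is rejected,
    // dirs is empty and nextInt(0) throws IllegalArgumentException
    // ("n must be positive"). The isEmpty() guard is the proposed fix.
    static String pickDir(List<String> configured) throws IOException {
        List<String> dirs = new ArrayList<>();
        for (String d : configured) {
            if (isUsable(d)) {
                dirs.add(d);
            }
        }
        if (dirs.isEmpty()) {
            throw new IOException("No usable local directories");
        }
        return dirs.get(DIR_INDEX_RANDOMIZER.nextInt(dirs.size()));
    }
}
```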



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-5793) High speed compression algorithm like BMDiff

2014-07-22 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-5793?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14070752#comment-14070752
 ] 

Allen Wittenauer commented on HADOOP-5793:
--

Or, with Snappy integrated, do we still care to do this work?

> High speed compression algorithm like BMDiff
> 
>
> Key: HADOOP-5793
> URL: https://issues.apache.org/jira/browse/HADOOP-5793
> Project: Hadoop Common
>  Issue Type: New Feature
>Reporter: elhoim gibor
>Assignee: Michele Catasta
>Priority: Minor
>
> Add a high speed compression algorithm like BMDiff.
> It gives speeds ~100MB/s for writes and ~1000MB/s for reads, compressing 
> 2.1 billion web pages from 45.1TB down to 4.2TB
> Reference:
> http://norfolk.cs.washington.edu/htbin-post/unrestricted/colloq/details.cgi?id=437
> 2005 Jeff Dean talk about google architecture - around 46:00.
> http://feedblog.org/2008/10/12/google-bigtable-compression-zippy-and-bmdiff/
> http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=755678
> A reference implementation exists in HyperTable.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-5791) to downgrade commons-cli from 2.0 to 1.2

2014-07-22 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-5791.
--

Resolution: Won't Fix

Long stale.

> to downgrade commons-cli from 2.0 to 1.2 
> -
>
> Key: HADOOP-5791
> URL: https://issues.apache.org/jira/browse/HADOOP-5791
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Giridharan Kesavan
>Assignee: Devaraj Das
>




--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-5787) Allow HADOOP_ROOT_LOGGER to be configured via conf/hadoop-env.sh

2014-07-22 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5787?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-5787:
-

Resolution: Duplicate
Status: Resolved  (was: Patch Available)

> Allow HADOOP_ROOT_LOGGER to be configured via conf/hadoop-env.sh
> 
>
> Key: HADOOP-5787
> URL: https://issues.apache.org/jira/browse/HADOOP-5787
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 0.20.0, 0.23.10
>Reporter: Arun C Murthy
>Assignee: Arun C Murthy
>
> Currently it's set in bin/hadoop-daemon.sh... we should allow it to be 
> specified in conf/hadoop-env.sh



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-5760) Task process hanging on an RPC call

2014-07-22 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-5760.
--

Resolution: Duplicate

> Task process hanging on an RPC call
> ---
>
> Key: HADOOP-5760
> URL: https://issues.apache.org/jira/browse/HADOOP-5760
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: ipc
>Reporter: Devaraj Das
>
> On a random node on a cluster, I found one task process waiting on an RPC 
> call. The process has been in that state for a few days at least.
> "main" prio=10 tid=0x08069400 nid=0x6f52 in Object.wait() 
> [0xf7e6c000..0xf7e6d1f8]
>java.lang.Thread.State: WAITING (on object monitor)
> at java.lang.Object.wait(Native Method)
> - waiting on <0xf1215700> (a org.apache.hadoop.ipc.Client$Call)
> at java.lang.Object.wait(Object.java:485)
> at org.apache.hadoop.ipc.Client.call(Client.java:725)
> - locked <0xf1215700> (a org.apache.hadoop.ipc.Client$Call)
> at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
> at org.apache.hadoop.mapred.$Proxy0.statusUpdate(Unknown Source)
> at org.apache.hadoop.mapred.Task.statusUpdate(Task.java:691)
> at org.apache.hadoop.mapred.Task.taskCleanup(Task.java:795)
> at org.apache.hadoop.mapred.Child.main(Child.java:176)



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-5756) If output directory can not be created, error message on stdout does not provide any clue.

2014-07-22 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5756?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-5756:
-

Labels: newbie  (was: )

> If output directory can not be created, error message on stdout does not 
> provide any clue.
> --
>
> Key: HADOOP-5756
> URL: https://issues.apache.org/jira/browse/HADOOP-5756
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: examples
>Reporter: Suhas Gogate
>  Labels: newbie
>
> In the following wordcount example, the output directory path cannot be 
> created because /temp does not exist and the user does not have privileges 
> to create the output path at "/". 
> hadoop --config ./clustdir/ jar /homes/gogate/wordcount.jar 
> com..wordcount.WordCount /in-path/gogate/myfile /temp/mywc-gogate 
> 09/04/28 23:00:32 WARN mapred.JobClient: Use GenericOptionsParser for parsing 
> the arguments. Applications should implement Tool for the same.
> 09/04/28 23:00:32 INFO mapred.FileInputFormat: Total input paths to process : 
> 1
> 09/04/28 23:00:32 INFO mapred.FileInputFormat: Total input paths to process : 
> 1
> 09/04/28 23:00:33 INFO mapred.JobClient: Running job: job_200904282249_0004
> java.io.IOException: Job failed!
>   at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1113)
>   at com..wordcount.WordCount.main(WordCount.java:55)
>   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>   at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
>   at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>   at java.lang.reflect.Method.invoke(Method.java:597)
>   at org.apache.hadoop.util.RunJar.main(RunJar.java:155)
>   at org.apache.hadoop.mapred.JobShell.run(JobShell.java:54)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
>   at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:79)
>   at org.apache.hadoop.mapred.JobShell.main(JobShell.java:68)
>  



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10791) AuthenticationFilter should support externalizing the secret for signing and provide rotation support

2014-07-22 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10791?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14070724#comment-14070724
 ] 

Hadoop QA commented on HADOOP-10791:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12657156/HADOOP-10791.patch
  against trunk revision .

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 5 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-auth.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4339//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4339//artifact/trunk/patchprocess/newPatchFindbugsWarningshadoop-auth.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4339//console

This message is automatically generated.

> AuthenticationFilter should support externalizing the secret for signing and 
> provide rotation support
> -
>
> Key: HADOOP-10791
> URL: https://issues.apache.org/jira/browse/HADOOP-10791
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.4.1
>Reporter: Alejandro Abdelnur
>Assignee: Robert Kanter
> Attachments: HADOOP-10791.patch
>
>
> It should be possible to externalize the secret used to sign the hadoop-auth 
> cookies.
> In the case of WebHDFS the shared secret used by NN and DNs could be used. In 
> the case of Oozie HA, the secret could be stored in Oozie HA control data in 
> ZooKeeper.
> In addition, it is desirable for the secret to change periodically, this 
> means that the AuthenticationService should remember a previous secret for 
> the max duration of hadoop-auth cookie.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10878) Hadoop servlets need ACLs

2014-07-22 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10878:
--

Labels: newbie  (was: )

> Hadoop servlets need ACLs
> -
>
> Key: HADOOP-10878
> URL: https://issues.apache.org/jira/browse/HADOOP-10878
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics, security
>Reporter: Allen Wittenauer
>  Labels: newbie
>
> As far as I'm aware, once a user gets past the HTTP-level authentication, all 
> servlets available on that port are available to the user.  This is a 
> security hole as there is some information and services that we don't want 
> every user to be able to access or only want them to access from certain 
> locations.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-10878) Hadoop servlets need ACLs

2014-07-22 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10878?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10878:
--

Component/s: metrics

> Hadoop servlets need ACLs
> -
>
> Key: HADOOP-10878
> URL: https://issues.apache.org/jira/browse/HADOOP-10878
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics, security
>Reporter: Allen Wittenauer
>  Labels: newbie
>
> As far as I'm aware, once a user gets past the HTTP-level authentication, all 
> servlets available on that port are available to the user.  This is a 
> security hole as there is some information and services that we don't want 
> every user to be able to access or only want them to access from certain 
> locations.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Commented] (HADOOP-10878) Hadoop servlets need ACLs

2014-07-22 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10878?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14070704#comment-14070704
 ] 

Allen Wittenauer commented on HADOOP-10878:
---

In particular, it would be great to lock down:

- metrics 
- webhdfs
- hftp

By host and/or user. There are likely others.

> Hadoop servlets need ACLs
> -
>
> Key: HADOOP-10878
> URL: https://issues.apache.org/jira/browse/HADOOP-10878
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Allen Wittenauer
>
> As far as I'm aware, once a user gets past the HTTP-level authentication, all 
> servlets available on that port are available to the user.  This is a 
> security hole as there is some information and services that we don't want 
> every user to be able to access or only want them to access from certain 
> locations.





[jira] [Resolved] (HADOOP-5722) HTTP metrics interface enable/disable must be configurable

2014-07-22 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-5722.
--

Resolution: Fixed

I'm going to dupe this to a new bug, as it isn't just metrics that has this 
problem now.

> HTTP metrics interface enable/disable must be configurable
> --
>
> Key: HADOOP-5722
> URL: https://issues.apache.org/jira/browse/HADOOP-5722
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics, security
>Reporter: Marco Nicosia
>
> HADOOP-5469 added a convenient end-run around JMX authentication by revealing 
> the same metrics over HTTP. That's cool, but we need to secure all accesses 
> to our Hadoop cluster, so while this may be enabled by default, we need some 
> configurable way to disable the unauthenticated port.





[jira] [Created] (HADOOP-10878) Hadoop servlets need ACLs

2014-07-22 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-10878:
-

 Summary: Hadoop servlets need ACLs
 Key: HADOOP-10878
 URL: https://issues.apache.org/jira/browse/HADOOP-10878
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Allen Wittenauer


As far as I'm aware, once a user gets past the HTTP-level authentication, all 
servlets available on that port are available to the user.  This is a security 
hole as there is some information and services that we don't want every user to 
be able to access or only want them to access from certain locations.





[jira] [Updated] (HADOOP-5722) HTTP metrics interface enable/disable must be configurable

2014-07-22 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-5722:
-

Component/s: security

> HTTP metrics interface enable/disable must be configurable
> --
>
> Key: HADOOP-5722
> URL: https://issues.apache.org/jira/browse/HADOOP-5722
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics, security
>Reporter: Marco Nicosia
>
> HADOOP-5469 added a convenient end-run around JMX authentication by revealing 
> the same metrics over HTTP. That's cool, but we need to secure all accesses 
> to our Hadoop cluster, so while this may be enabled by default, we need some 
> configurable way to disable the unauthenticated port.





[jira] [Created] (HADOOP-10877) native client: implement hdfsMove and hdfsCopy

2014-07-22 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-10877:
-

 Summary: native client: implement hdfsMove and hdfsCopy
 Key: HADOOP-10877
 URL: https://issues.apache.org/jira/browse/HADOOP-10877
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: native
Affects Versions: HADOOP-10388
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


In the pure native client, we need to implement {{hdfsMove}} and {{hdfsCopy}}.  
These are basically recursive copy functions (in the Java code, move is copy 
with a delete at the end).
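The semantics described above ("move is copy with a delete at the end") can be sketched with a local-filesystem stand-in. This is a Python illustration of the idea only; {{hdfs_copy}} and {{hdfs_move}} are hypothetical names, not the libhdfs API.

```python
import os
import shutil

def hdfs_copy(src, dst):
    """Recursive copy: directories are recreated, files copied byte-for-byte."""
    if os.path.isdir(src):
        os.makedirs(dst, exist_ok=True)
        for name in os.listdir(src):
            hdfs_copy(os.path.join(src, name), os.path.join(dst, name))
    else:
        shutil.copyfile(src, dst)

def hdfs_move(src, dst):
    """Move = recursive copy followed by deleting the source tree."""
    hdfs_copy(src, dst)
    if os.path.isdir(src):
        shutil.rmtree(src)
    else:
        os.remove(src)
```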





[jira] [Resolved] (HADOOP-5682) ant clean does not clean the generated api docs

2014-07-22 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5682?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-5682.
--

Resolution: Fixed

This is almost certainly stale by now.

> ant clean does not clean the generated api docs
> ---
>
> Key: HADOOP-5682
> URL: https://issues.apache.org/jira/browse/HADOOP-5682
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Reporter: Giridharan Kesavan
>Assignee: Giridharan Kesavan
>  Labels: newbie
> Attachments: hadoop-5682.patch
>
>






[jira] [Updated] (HADOOP-10725) Implement listStatus and getFileInfo in the native client

2014-07-22 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-10725:
--

Attachment: HADOOP-10725-pnative.004.patch

This version should be a lot easier to read, since I split the URL, conf, and 
other changes into separate JIRAs.

> Implement listStatus and getFileInfo in the native client
> -
>
> Key: HADOOP-10725
> URL: https://issues.apache.org/jira/browse/HADOOP-10725
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: native
>Affects Versions: HADOOP-10388
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Attachments: HADOOP-10725-pnative.001.patch, 
> HADOOP-10725-pnative.002.patch, HADOOP-10725-pnative.003.patch, 
> HADOOP-10725-pnative.004.patch
>
>
> Implement listStatus and getFileInfo in the native client.





[jira] [Updated] (HADOOP-10791) AuthenticationFilter should support externalizing the secret for signing and provide rotation support

2014-07-22 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated HADOOP-10791:
---

Status: Patch Available  (was: Open)

> AuthenticationFilter should support externalizing the secret for signing and 
> provide rotation support
> -
>
> Key: HADOOP-10791
> URL: https://issues.apache.org/jira/browse/HADOOP-10791
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.4.1
>Reporter: Alejandro Abdelnur
>Assignee: Robert Kanter
> Attachments: HADOOP-10791.patch
>
>
> It should be possible to externalize the secret used to sign the hadoop-auth 
> cookies.
> In the case of WebHDFS the shared secret used by NN and DNs could be used. In 
> the case of Oozie HA, the secret could be stored in Oozie HA control data in 
> ZooKeeper.
> In addition, it is desirable for the secret to change periodically; this 
> means that the AuthenticationService should remember a previous secret for 
> the max duration of the hadoop-auth cookie.





[jira] [Updated] (HADOOP-10791) AuthenticationFilter should support externalizing the secret for signing and provide rotation support

2014-07-22 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10791?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated HADOOP-10791:
---

Attachment: HADOOP-10791.patch

The patch adds the {{SignerSecretProvider}} class, which can be subclassed for 
different providers.  There’s also a {{StringSignerSecretProvider}}, which just 
provides a configured string, and a {{RandomSignerSecretProvider}}, which 
provides a random number that rolls over.  These are equivalent to the current 
behavior (minus that the random secret rolls over now) and are enabled the same 
way as before.  In addition, an arbitrary subclass of {{SignerSecretProvider}} 
can be provided programmatically by any subclasses of {{AuthenticationFilter}}. 
 There’s also a {{RolloverSignerSecretProvider}} (which 
{{RandomSignerSecretProvider}} and HADOOP-10868 use); it supports rolling 
secrets and handles a bunch of stuff for its subclasses.  It rolls over at the 
same interval as the token expiration.
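The rollover scheme described above — keep the current secret plus the previous one so cookies signed just before a roll still verify — can be sketched as follows. This is an illustrative Python stand-in, not the actual {{SignerSecretProvider}} API; all names here are hypothetical.

```python
import os
import time

class RolloverSecretProvider:
    """Keeps a current and a previous secret; rolls at a fixed interval
    (the token-expiration interval, per the description above)."""

    def __init__(self, interval_sec):
        self.interval = interval_sec
        self.current = os.urandom(16)   # random secret, as in the random provider
        self.previous = None
        self.last_roll = time.monotonic()

    def _maybe_roll(self):
        # Retire the current secret to "previous" once the interval elapses.
        if time.monotonic() - self.last_roll >= self.interval:
            self.previous = self.current
            self.current = os.urandom(16)
            self.last_roll = time.monotonic()

    def current_secret(self):
        """Secret to sign new cookies with."""
        self._maybe_roll()
        return self.current

    def all_secrets(self):
        """Secrets acceptable for verification: current first, then previous."""
        self._maybe_roll()
        return [s for s in (self.current, self.previous) if s is not None]
```

A verifier would try each entry of {{all_secrets()}} in order, which is what lets signatures made just before a roll keep working for one more interval.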

> AuthenticationFilter should support externalizing the secret for signing and 
> provide rotation support
> -
>
> Key: HADOOP-10791
> URL: https://issues.apache.org/jira/browse/HADOOP-10791
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.4.1
>Reporter: Alejandro Abdelnur
>Assignee: Robert Kanter
> Attachments: HADOOP-10791.patch
>
>
> It should be possible to externalize the secret used to sign the hadoop-auth 
> cookies.
> In the case of WebHDFS the shared secret used by NN and DNs could be used. In 
> the case of Oozie HA, the secret could be stored in Oozie HA control data in 
> ZooKeeper.
> In addition, it is desirable for the secret to change periodically; this 
> means that the AuthenticationService should remember a previous secret for 
> the max duration of the hadoop-auth cookie.





[jira] [Commented] (HADOOP-5628) Create target for 10 minute patch test build

2014-07-22 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-5628?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14070644#comment-14070644
 ] 

Allen Wittenauer commented on HADOOP-5628:
--

ping!

I'm tempted to close this one for a variety of reasons:
- we no longer use ant
- it's possible to just do unit tests on a sub project
- it blocks a closed jira...

On the flip side, doing a full test still takes forever.  But that might be a 
different jira than this one.

> Create target for 10 minute patch test build
> 
>
> Key: HADOOP-5628
> URL: https://issues.apache.org/jira/browse/HADOOP-5628
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Owen O'Malley
>
> I think we should create an ant target that performs a smoke test on the 
> patched system to enable developers to have faster turn around time on 
> developing patches than the 3 hour unit tests that we currently have.





[jira] [Resolved] (HADOOP-5617) make chukwa log4j configuration more transparent from hadoop

2014-07-22 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-5617?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-5617.
--

Resolution: Duplicate

HADOOP-9902 fixes the env var issues.

> make chukwa log4j configuration more transparent from hadoop
> 
>
> Key: HADOOP-5617
> URL: https://issues.apache.org/jira/browse/HADOOP-5617
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
> Environment: Redhat 5.1, Java 6
>Reporter: Eric Yang
>Priority: Minor
>
> The current log4j appender retrofitting to hadoop is less than ideal. In 
> theory, the log4j appender configuration should be changeable by the 
> environment scripts.  This ticket is to track any changes required to make 
> hadoop log4j configuration more portable.





[jira] [Commented] (HADOOP-10607) Create an API to Separate Credentials/Password Storage from Applications

2014-07-22 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14070613#comment-14070613
 ] 

Larry McCay commented on HADOOP-10607:
--

Well, I am waiting on the HADOOP-10791 patch as I said, and am hoping to add 
concrete usage there as well as in other places. In the meantime, having it 
available on the classpath for others on branch-2 is really helpful. I am not 
sure that I understand all the discussion on those logistics, which is why I 
have been sticking to the technical and use-case answers.  :)

Concrete usage is certainly not far behind.


> Create an API to Separate Credentials/Password Storage from Applications
> 
>
> Key: HADOOP-10607
> URL: https://issues.apache.org/jira/browse/HADOOP-10607
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
> Fix For: 3.0.0, 2.6.0
>
> Attachments: 10607-10.patch, 10607-11.patch, 10607-12.patch, 
> 10607-2.patch, 10607-3.patch, 10607-4.patch, 10607-5.patch, 10607-6.patch, 
> 10607-7.patch, 10607-8.patch, 10607-9.patch, 10607-branch-2.patch, 10607.patch
>
>
> As with the filesystem API, we need to provide a generic mechanism to support 
> multiple credential storage mechanisms that are potentially from third 
> parties. 
> We need the ability to eliminate the storage of passwords and secrets in 
> clear text within configuration files or within code.
> Toward that end, I propose an API that is configured using a list of URLs of 
> CredentialProviders. The implementation will look for implementations using 
> the ServiceLoader interface and thus support third party libraries.
> Two providers will be included in this patch. One using the credentials cache 
> in MapReduce jobs and the other using Java KeyStores from either HDFS or 
> local file system. 
> A CredShell CLI will also be included in this patch which provides the 
> ability to manage the credentials within the stores.
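The URL-list configuration this proposal describes can be sketched roughly as below. This is a Python stand-in for illustration: a simple registry takes the place of Java's ServiceLoader, and the class names, scheme, and returned values are hypothetical.

```python
class CredentialProvider:
    """Base provider interface: resolve an alias to a secret."""
    scheme = None
    def get_password(self, alias):
        raise NotImplementedError

class JceksProvider(CredentialProvider):
    """Stand-in for a Java-KeyStore-backed provider."""
    scheme = "jceks"
    def __init__(self, url):
        self.url = url
    def get_password(self, alias):
        # Placeholder lookup; a real provider would read the keystore at self.url.
        return "secret-for-" + alias

# Registry standing in for ServiceLoader discovery of third-party providers.
_REGISTRY = {"jceks": JceksProvider}

def providers_from_config(urls):
    """Turn the configured comma-separated list of provider URLs into
    provider instances, dispatching on the URL scheme."""
    out = []
    for url in urls.split(","):
        scheme = url.split("://", 1)[0]
        out.append(_REGISTRY[scheme](url))
    return out
```

Callers would then ask each provider in order for an alias, falling back to the next provider on a miss, which is what lets clear-text passwords disappear from configuration files.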





[jira] [Commented] (HADOOP-10607) Create an API to Separate Credentials/Password Storage from Applications

2014-07-22 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14070606#comment-14070606
 ] 

Alejandro Abdelnur commented on HADOOP-10607:
-

Larry,

That makes sense, thanks.

Now, regarding the concrete usage in Hadoop, I still don't see it at the moment 
and that is why I say it should stay in trunk. 

Do you want this in Hadoop so it is available to components down the stack via 
the Hadoop classpath, and nothing else?

> Create an API to Separate Credentials/Password Storage from Applications
> 
>
> Key: HADOOP-10607
> URL: https://issues.apache.org/jira/browse/HADOOP-10607
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
> Fix For: 3.0.0, 2.6.0
>
> Attachments: 10607-10.patch, 10607-11.patch, 10607-12.patch, 
> 10607-2.patch, 10607-3.patch, 10607-4.patch, 10607-5.patch, 10607-6.patch, 
> 10607-7.patch, 10607-8.patch, 10607-9.patch, 10607-branch-2.patch, 10607.patch
>
>
> As with the filesystem API, we need to provide a generic mechanism to support 
> multiple credential storage mechanisms that are potentially from third 
> parties. 
> We need the ability to eliminate the storage of passwords and secrets in 
> clear text within configuration files or within code.
> Toward that end, I propose an API that is configured using a list of URLs of 
> CredentialProviders. The implementation will look for implementations using 
> the ServiceLoader interface and thus support third party libraries.
> Two providers will be included in this patch. One using the credentials cache 
> in MapReduce jobs and the other using Java KeyStores from either HDFS or 
> local file system. 
> A CredShell CLI will also be included in this patch which provides the 
> ability to manage the credentials within the stores.





[jira] [Commented] (HADOOP-10876) The constructor of Path should not take an empty URL as a parameter

2014-07-22 Thread Koji Noguchi (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14070605#comment-14070605
 ] 

Koji Noguchi commented on HADOOP-10876:
---

My comment and suggestion can be found at 
https://issues.apache.org/jira/browse/HADOOP-10820?focusedCommentId=14068776&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14068776
and
https://issues.apache.org/jira/browse/HADOOP-10820?focusedCommentId=14070441&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14070441

> The constructor of Path should not take an empty URL as a parameter
> ---
>
> Key: HADOOP-10876
> URL: https://issues.apache.org/jira/browse/HADOOP-10876
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: zhihai xu
>
> The constructor of Path should not take an empty URL as a parameter. As 
> discussed in HADOOP-10820, this JIRA is to change the Path constructor 
> public Path(URI aUri) to check for an empty URI and throw 
> IllegalArgumentException.





[jira] [Commented] (HADOOP-10811) Allow classes to be reloaded at runtime

2014-07-22 Thread Chris Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10811?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14070602#comment-14070602
 ] 

Chris Li commented on HADOOP-10811:
---

One thing worth discussing is whether this is still a useful feature now that 
HA allows for rolling restarts. Not everyone is running HA today, but it may 
be encouraged in the future for this ability.

> Allow classes to be reloaded at runtime
> ---
>
> Key: HADOOP-10811
> URL: https://issues.apache.org/jira/browse/HADOOP-10811
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: conf
>Affects Versions: 3.0.0
>Reporter: Chris Li
>Assignee: Chris Li
>Priority: Minor
>
> Currently hadoop loads its classes and caches them in the Configuration 
> class. Even if the user swaps a class's jar at runtime, hadoop will continue 
> to use the cached classes when using reflection to instantiate objects. This 
> limits the usefulness of things like HADOOP-10285, because the admin would 
> need to restart each time they wanted to change their queue class.
> This patch is to add a way to refresh the class cache, by creating a new 
> refresh handler to do so (using HADOOP-10376)
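The mechanism described — a cached class lookup plus a refresh hook that clears the cache so the next lookup re-resolves a possibly swapped implementation — can be sketched like this. A Python stand-in for illustration only; Hadoop's actual cache lives in {{Configuration}} and the names here are hypothetical.

```python
import importlib

class ClassCache:
    """Caches classes looked up by fully qualified name, with a refresh hook."""

    def __init__(self):
        self._cache = {}

    def get_class(self, name):
        # 'pkg.mod.Cls' -> cached class object; resolved once, then served
        # from the cache on every later call.
        if name not in self._cache:
            mod_name, cls_name = name.rsplit(".", 1)
            self._cache[name] = getattr(importlib.import_module(mod_name), cls_name)
        return self._cache[name]

    def refresh(self):
        """What a refresh handler would trigger: drop cached entries so the
        next lookup re-resolves them (picking up a replaced jar/class)."""
        self._cache.clear()
```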





[jira] [Commented] (HADOOP-10876) The constructor of Path should not take an empty URL as a parameter

2014-07-22 Thread zhihai xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14070597#comment-14070597
 ] 

zhihai xu commented on HADOOP-10876:


This JIRA is similar to HADOOP-1386: The constructor of Path should not take 
an empty string as a parameter.
Since the string-based constructor checks for an empty string, for consistency 
it would make sense that the URI-based constructor do the same thing.
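For consistency with the string constructor, the proposed guard might look like the following minimal sketch, where Python's ValueError stands in for Java's IllegalArgumentException and the function name is hypothetical.

```python
def make_path(uri_str):
    """Reject an empty URI up front, mirroring the empty-string check the
    string-based Path constructor already performs."""
    if uri_str is None or len(uri_str) == 0:
        raise ValueError("Can not create a Path from an empty URI")
    return uri_str  # real code would go on to build the Path object
```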

> The constructor of Path should not take an empty URL as a parameter
> ---
>
> Key: HADOOP-10876
> URL: https://issues.apache.org/jira/browse/HADOOP-10876
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: zhihai xu
>
> The constructor of Path should not take an empty URL as a parameter. As 
> discussed in HADOOP-10820, this JIRA is to change the Path constructor 
> public Path(URI aUri) to check for an empty URI and throw 
> IllegalArgumentException.





[jira] [Commented] (HADOOP-10607) Create an API to Separate Credentials/Password Storage from Applications

2014-07-22 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14070596#comment-14070596
 ] 

Larry McCay commented on HADOOP-10607:
--

The keystore provider is useful even without a central authenticating server 
for many usecases.
Ideally and eventually, we will have a kerberos authenticating server to serve 
such credentials but in the meantime the keystore is a way to persist the 
password without being in clear text. Coupled with file permissions this is 
stronger protection than file permissions alone. Later migration to a central 
credential server will be easily accomplished through the use of the API. We 
are taking babysteps to get where we need to be while satisfying user 
requirements in a reasonable manner along the way.

> Create an API to Separate Credentials/Password Storage from Applications
> 
>
> Key: HADOOP-10607
> URL: https://issues.apache.org/jira/browse/HADOOP-10607
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
> Fix For: 3.0.0, 2.6.0
>
> Attachments: 10607-10.patch, 10607-11.patch, 10607-12.patch, 
> 10607-2.patch, 10607-3.patch, 10607-4.patch, 10607-5.patch, 10607-6.patch, 
> 10607-7.patch, 10607-8.patch, 10607-9.patch, 10607-branch-2.patch, 10607.patch
>
>
> As with the filesystem API, we need to provide a generic mechanism to support 
> multiple credential storage mechanisms that are potentially from third 
> parties. 
> We need the ability to eliminate the storage of passwords and secrets in 
> clear text within configuration files or within code.
> Toward that end, I propose an API that is configured using a list of URLs of 
> CredentialProviders. The implementation will look for implementations using 
> the ServiceLoader interface and thus support third party libraries.
> Two providers will be included in this patch. One using the credentials cache 
> in MapReduce jobs and the other using Java KeyStores from either HDFS or 
> local file system. 
> A CredShell CLI will also be included in this patch which provides the 
> ability to manage the credentials within the stores.





[jira] [Commented] (HADOOP-10876) The constructor of Path should not take an empty URL as a parameter

2014-07-22 Thread zhihai xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10876?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14070590#comment-14070590
 ] 

zhihai xu commented on HADOOP-10876:


I will submit a patch for this issue.

> The constructor of Path should not take an empty URL as a parameter
> ---
>
> Key: HADOOP-10876
> URL: https://issues.apache.org/jira/browse/HADOOP-10876
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: zhihai xu
>
> The constructor of Path should not take an empty URL as a parameter. As 
> discussed in HADOOP-10820, this JIRA is to change the Path constructor 
> public Path(URI aUri) to check for an empty URI and throw 
> IllegalArgumentException.





[jira] [Resolved] (HADOOP-10818) native client: refactor URI code to be clearer

2014-07-22 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10818?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe resolved HADOOP-10818.
---

  Resolution: Fixed
   Fix Version/s: HADOOP-10388
Target Version/s: HADOOP-10388

> native client: refactor URI code to be clearer
> --
>
> Key: HADOOP-10818
> URL: https://issues.apache.org/jira/browse/HADOOP-10818
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: native
>Affects Versions: HADOOP-10388
>Reporter: Colin Patrick McCabe
>Assignee: Colin Patrick McCabe
> Fix For: HADOOP-10388
>
> Attachments: HADOOP-10818-pnative.001.patch, 
> HADOOP-10818-pnative.002.patch
>
>
> Refactor the {{common/uri.c}} code to be a bit clearer.  We should just be 
> able to refer to user_info, auth, port, path, etc. fields in the structure, 
> rather than calling accessors.  {{hdfsBuilder}} should just have a connection 
> URI rather than separate fields for all these things.





[jira] [Commented] (HADOOP-10820) Empty entry in libjars results in working directory being recursively localized

2014-07-22 Thread zhihai xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14070580#comment-14070580
 ] 

zhihai xu commented on HADOOP-10820:


I just created JIRA HADOOP-10876 to address the empty-URI issue in the Path 
constructor. I will submit a patch for HADOOP-10876.

> Empty entry in libjars results in working directory being recursively 
> localized
> ---
>
> Key: HADOOP-10820
> URL: https://issues.apache.org/jira/browse/HADOOP-10820
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Alex Holmes
>Priority: Minor
> Attachments: HADOOP-10820-1.patch, HADOOP-10820.patch
>
>
> An empty token (e.g. "a.jar,,b.jar") in the -libjars option causes the 
> current working directory to be recursively localized.
> Here's an example of this in action (using Hadoop 2.2.0):
> {code}
> # create a temp directory and touch three JAR files
> mkdir -p tmp/path && cd tmp && touch a.jar b.jar c.jar path/d.jar
> # Run an example job only specifying two of the JARs.
> # Include an empty entry in libjars.
> hadoop jar 
> /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar 
> pi -libjars a.jar,,c.jar 2 10
> # As the job is running examine the localized directory in HDFS.
> # Notice that not only are the two JAR's specified in libjars copied,
> # but in addition the contents of the working directory are also recursively 
> copied.
> $ hadoop fs -lsr 
> /tmp/hadoop-yarn/staging/aholmes/.staging/job_1404752711144_0018/libjars
> /tmp/hadoop-yarn/staging/aholmes/.staging/job_1404752711144_0018/libjars/a.jar
> /tmp/hadoop-yarn/staging/aholmes/.staging/job_1404752711144_0018/libjars/c.jar
> /tmp/hadoop-yarn/staging/aholmes/.staging/job_1404752711144_0018/libjars/tmp
> /tmp/hadoop-yarn/staging/aholmes/.staging/job_1404752711144_0018/libjars/tmp/a.jar
> /tmp/hadoop-yarn/staging/aholmes/.staging/job_1404752711144_0018/libjars/tmp/b.jar
> /tmp/hadoop-yarn/staging/aholmes/.staging/job_1404752711144_0018/libjars/tmp/c.jar
> /tmp/hadoop-yarn/staging/aholmes/.staging/job_1404752711144_0018/libjars/tmp/path
> /tmp/hadoop-yarn/staging/aholmes/.staging/job_1404752711144_0018/libjars/tmp/path/d.jar
> {code}





[jira] [Commented] (HADOOP-10607) Create an API to Separate Credentials/Password Storage from Applications

2014-07-22 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10607?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14070577#comment-14070577
 ] 

Alejandro Abdelnur commented on HADOOP-10607:
-

Owen, 

Apologies, I didn’t mean to puzzle you with my puzzling (smile).

hadoop-auth started outside of Hadoop as Alfredo. The initial use cases were 
for Hadoop itself and Oozie, and because of that we brought it in.

I see the value in the CredentialProvider, but I just don’t see a concrete use 
in Hadoop at the moment other than that we could use it for this or that; we 
are not using it for anything.

Until we have a concrete usecase, I think we should keep it in trunk.

Larry,

In its current form, the CredentialProvider implementation is not really 
useful, as it is not a service and cannot be used by an app running in the 
cluster, right? Or am I missing something?

That was the case with the KeyProvider and that is why I took on the KMS work 
and now we are using it for HDFS encryption.


> Create an API to Separate Credentials/Password Storage from Applications
> 
>
> Key: HADOOP-10607
> URL: https://issues.apache.org/jira/browse/HADOOP-10607
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
> Fix For: 3.0.0, 2.6.0
>
> Attachments: 10607-10.patch, 10607-11.patch, 10607-12.patch, 
> 10607-2.patch, 10607-3.patch, 10607-4.patch, 10607-5.patch, 10607-6.patch, 
> 10607-7.patch, 10607-8.patch, 10607-9.patch, 10607-branch-2.patch, 10607.patch
>
>
> As with the filesystem API, we need to provide a generic mechanism to support 
> multiple credential storage mechanisms that are potentially from third 
> parties. 
> We need the ability to eliminate the storage of passwords and secrets in 
> clear text within configuration files or within code.
> Toward that end, I propose an API that is configured using a list of URLs of 
> CredentialProviders. The implementation will look for implementations using 
> the ServiceLoader interface and thus support third party libraries.
> Two providers will be included in this patch. One using the credentials cache 
> in MapReduce jobs and the other using Java KeyStores from either HDFS or 
> local file system. 
> A CredShell CLI will also be included in this patch which provides the 
> ability to manage the credentials within the stores.





[jira] [Created] (HADOOP-10876) The constructor of Path should not take an empty URL as a parameter

2014-07-22 Thread zhihai xu (JIRA)
zhihai xu created HADOOP-10876:
--

 Summary: The constructor of Path should not take an empty URL as a 
parameter
 Key: HADOOP-10876
 URL: https://issues.apache.org/jira/browse/HADOOP-10876
 Project: Hadoop Common
  Issue Type: Bug
Reporter: zhihai xu


The constructor of Path should not take an empty URL as a parameter. As 
discussed in HADOOP-10820, this JIRA is to change the Path constructor 
public Path(URI aUri) to check for an empty URI and throw 
IllegalArgumentException.





[jira] [Commented] (HADOOP-10820) Empty entry in libjars results in working directory being recursively localized

2014-07-22 Thread zhihai xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10820?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14070571#comment-14070571
 ] 

zhihai xu commented on HADOOP-10820:


[~knoguchi] I will create a JIRA to fix the empty-URI Path issue.
Alex's solution looks OK to me.
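The fix under discussion amounts to dropping empty tokens when the -libjars value is split, so an empty string never reaches the Path constructor (where it resolves to the working directory). A sketch, with a hypothetical helper name:

```python
def split_libjars(value):
    """Split a comma-separated -libjars value, dropping empty tokens so that
    "a.jar,,c.jar" cannot yield "" (which would localize the whole CWD)."""
    return [t.strip() for t in value.split(",") if t.strip()]
```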

> Empty entry in libjars results in working directory being recursively 
> localized
> ---
>
> Key: HADOOP-10820
> URL: https://issues.apache.org/jira/browse/HADOOP-10820
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.2.0
>Reporter: Alex Holmes
>Priority: Minor
> Attachments: HADOOP-10820-1.patch, HADOOP-10820.patch
>
>
> An empty token (e.g. "a.jar,,b.jar") in the -libjars option causes the 
> current working directory to be recursively localized.
> Here's an example of this in action (using Hadoop 2.2.0):
> {code}
> # create a temp directory and touch three JAR files
> mkdir -p tmp/path && cd tmp && touch a.jar b.jar c.jar path/d.jar
> # Run an example job only specifying two of the JARs.
> # Include an empty entry in libjars.
> hadoop jar 
> /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar 
> pi -libjars a.jar,,c.jar 2 10
> # As the job is running examine the localized directory in HDFS.
> # Notice that not only are the two JAR's specified in libjars copied,
> # but in addition the contents of the working directory are also recursively 
> copied.
> $ hadoop fs -lsr 
> /tmp/hadoop-yarn/staging/aholmes/.staging/job_1404752711144_0018/libjars
> /tmp/hadoop-yarn/staging/aholmes/.staging/job_1404752711144_0018/libjars/a.jar
> /tmp/hadoop-yarn/staging/aholmes/.staging/job_1404752711144_0018/libjars/c.jar
> /tmp/hadoop-yarn/staging/aholmes/.staging/job_1404752711144_0018/libjars/tmp
> /tmp/hadoop-yarn/staging/aholmes/.staging/job_1404752711144_0018/libjars/tmp/a.jar
> /tmp/hadoop-yarn/staging/aholmes/.staging/job_1404752711144_0018/libjars/tmp/b.jar
> /tmp/hadoop-yarn/staging/aholmes/.staging/job_1404752711144_0018/libjars/tmp/c.jar
> /tmp/hadoop-yarn/staging/aholmes/.staging/job_1404752711144_0018/libjars/tmp/path
> /tmp/hadoop-yarn/staging/aholmes/.staging/job_1404752711144_0018/libjars/tmp/path/d.jar
> {code}





[jira] [Created] (HADOOP-10875) Sqoop2: Attach a debugger to miniclusters

2014-07-22 Thread Abraham Elmahrek (JIRA)
Abraham Elmahrek created HADOOP-10875:
-

 Summary: Sqoop2: Attach a debugger to miniclusters
 Key: HADOOP-10875
 URL: https://issues.apache.org/jira/browse/HADOOP-10875
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Abraham Elmahrek


It would be nice to have a way to attach a debugger to the miniclusters: 
http://cargo.codehaus.org/Starting+and+stopping+a+container.

For Tomcat, I needed to add the following to TomcatSqoopMiniCluster:
{code}
configuration.setProperty(GeneralPropertySet.JVMARGS, 
"\"-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005\"");
{code}

There should also be a way to attach a debugger to the Yarn container.
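
For the YARN container case, the usual approach (an assumption here, not something specified in this issue) is to pass the same JDWP agent string through the container's JVM options, e.g. `mapreduce.map.java.opts`; the agent argument itself is just a formatted string:

```java
public class JdwpArgs {
    // Builds the standard JDWP agent argument. suspend=y makes the JVM
    // block until a debugger attaches; use suspend=n for non-blocking.
    static String jdwp(int port, boolean suspend) {
        return "-agentlib:jdwp=transport=dt_socket,server=y,suspend="
                + (suspend ? "y" : "n") + ",address=" + port;
    }

    public static void main(String[] args) {
        System.out.println(jdwp(5005, true));
        // -agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005
    }
}
```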



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Resolved] (HADOOP-10875) Sqoop2: Attach a debugger to miniclusters

2014-07-22 Thread Abraham Elmahrek (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10875?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Abraham Elmahrek resolved HADOOP-10875.
---

Resolution: Won't Fix

Wrong project!

> Sqoop2: Attach a debugger to miniclusters
> -
>
> Key: HADOOP-10875
> URL: https://issues.apache.org/jira/browse/HADOOP-10875
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Abraham Elmahrek
>
> It would be nice to have a way to attach a debugger to the miniclusters: 
> http://cargo.codehaus.org/Starting+and+stopping+a+container.
> For tomcat, I needed to add the following to TomcatSqoopMiniCluster:
> {code}
> configuration.setProperty(GeneralPropertySet.JVMARGS, 
> "\"-agentlib:jdwp=transport=dt_socket,server=y,suspend=y,address=5005\"");
> {code}
> There should also be a way to attach a debugger to the Yarn container.



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Created] (HADOOP-10874) hdfs dfs -getmerge can have an additional parameter for sort order

2014-07-22 Thread Sambit Tripathy (JIRA)
Sambit Tripathy created HADOOP-10874:


 Summary: hdfs dfs -getmerge can have an additional parameter for 
sort order
 Key: HADOOP-10874
 URL: https://issues.apache.org/jira/browse/HADOOP-10874
 Project: Hadoop Common
  Issue Type: Wish
  Components: fs
Reporter: Sambit Tripathy
Priority: Minor


The default implementation sorts the array of files in ascending order; a 
parameter could be added to the current implementation so that it can sort in 
descending order as well.

Current impl:
{code}
public static boolean copyMerge(FileSystem srcFS, Path srcDir, 
FileSystem dstFS, Path dstFile, boolean deleteSource, Configuration conf, 
String addString) throws IOException {
{code}

Proposed:
{code}
public static boolean copyMerge(FileSystem srcFS, Path srcDir, 
FileSystem dstFS, Path dstFile, boolean deleteSource, Configuration conf, 
String addString, boolean sort) throws IOException {
{code}
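
A minimal sketch of what the proposed sort flag might control — ordering the source file names ascending or descending before they are merged. The class and method names below are illustrative, not Hadoop's actual implementation:

```java
import java.util.Arrays;
import java.util.Comparator;

public class SortOrderDemo {
    // Stand-in for ordering the files inside copyMerge: ascending by name
    // when sortAscending is true, descending otherwise.
    static String[] order(String[] names, boolean sortAscending) {
        String[] out = names.clone();
        Comparator<String> cmp = sortAscending
                ? Comparator.<String>naturalOrder()
                : Comparator.<String>reverseOrder();
        Arrays.sort(out, cmp);
        return out;
    }

    public static void main(String[] args) {
        String[] parts = {"part-00002", "part-00000", "part-00001"};
        System.out.println(Arrays.toString(order(parts, true)));
        // [part-00000, part-00001, part-00002]
        System.out.println(Arrays.toString(order(parts, false)));
        // [part-00002, part-00001, part-00000]
    }
}
```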



--
This message was sent by Atlassian JIRA
(v6.2#6252)


[jira] [Updated] (HADOOP-9902) Shell script rewrite

2014-07-22 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9902?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-9902:
-

Attachment: HADOOP-9902-7.patch

This version:

* Fixes the commit conflict with HADOOP-9921
* Documents a few missing env vars (that have been missing since those features 
were committed!)
* Moves some defaults out of hadoop-env.sh into hdfs-config.sh so that 
hadoop-env.sh can run empty

> Shell script rewrite
> 
>
> Key: HADOOP-9902
> URL: https://issues.apache.org/jira/browse/HADOOP-9902
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>  Labels: releasenotes
> Attachments: HADOOP-9902-2.patch, HADOOP-9902-3.patch, 
> HADOOP-9902-4.patch, HADOOP-9902-5.patch, HADOOP-9902-6.patch, 
> HADOOP-9902-7.patch, HADOOP-9902.patch, HADOOP-9902.txt, hadoop-9902-1.patch, 
> more-info.txt
>
>
> Umbrella JIRA for shell script rewrite.  See more-info.txt for more details.



--
This message was sent by Atlassian JIRA
(v6.2#6252)

