[jira] [Commented] (HADOOP-11078) Create a release note for 2.5.1 release

2014-09-10 Thread Akira AJISAKA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11078?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14128126#comment-14128126
 ] 

Akira AJISAKA commented on HADOOP-11078:


I understand. I'll close this ticket and leave out the fix. Thanks [~kkambatl] 
for the comment.

 Create a release note for 2.5.1 release
 ---

 Key: HADOOP-11078
 URL: https://issues.apache.org/jira/browse/HADOOP-11078
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.5.1
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Blocker
 Attachments: HADOOP-11078.patch


 There's a sentence "Apache Hadoop 2.5.1 is a minor release in the 2.x.y 
 release line, building upon the previous stable release 2.4.1." in the 
 document of 2.5.1-RC0.
 Apparently, 2.5.1 is not a minor release. We should create a new release note.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-11078) Create a release note for 2.5.1 release

2014-09-10 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11078?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA resolved HADOOP-11078.

  Resolution: Not a Problem
Target Version/s:   (was: 2.5.1)

 Create a release note for 2.5.1 release
 ---

 Key: HADOOP-11078
 URL: https://issues.apache.org/jira/browse/HADOOP-11078
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.5.1
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Blocker
 Attachments: HADOOP-11078.patch


 There's a sentence "Apache Hadoop 2.5.1 is a minor release in the 2.x.y 
 release line, building upon the previous stable release 2.4.1." in the 
 document of 2.5.1-RC0.
 Apparently, 2.5.1 is not a minor release. We should create a new release note.





[jira] [Commented] (HADOOP-11074) Move s3-related FS connector code to hadoop-aws

2014-09-10 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14128196#comment-14128196
 ] 

Steve Loughran commented on HADOOP-11074:
-

the test params for the FSContract test should go into 'auth-keys.xml' a la 
openstack ... that setup is designed to stop anyone accidentally committing 
their secret keys.

it could (and the openstack & azure modules could) be reworked to use 
{{contract-test-options.xml}}. All we need is one file to enable the test run.
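For anyone following along, a rough sketch of what such an auth-keys.xml might 
contain; the s3n property names below are the classic Hadoop s3n credential 
keys, though whether the hadoop-aws contract tests read exactly these keys is 
an assumption:

```xml
<!-- src/test/resources/auth-keys.xml: kept out of version control so
     secret keys are never committed. -->
<configuration>
  <property>
    <name>fs.s3n.awsAccessKeyId</name>
    <value>YOUR_ACCESS_KEY_ID</value>
  </property>
  <property>
    <name>fs.s3n.awsSecretAccessKey</name>
    <value>YOUR_SECRET_KEY</value>
  </property>
</configuration>
```

The presence or absence of this one file is then what enables or disables the 
whole test run.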

If you think about *why* we need that option, it is that the classic 
{{FileSystemContractBaseTest}} is a JUnit 3 lib, and you can't use 
{{Assume.assumeTrue()}} to skip tests there. The ideal fix would be to clone 
that into a JUnit 4 test case, then {{@Deprecated}} the original so that other 
FS impls can still use it for now. Not for this patch though.
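The JUnit 3 limitation above is the crux: with no assumption mechanism, a test 
whose preconditions are unmet can only pass vacuously or fail. A plain-Java 
sketch of the JUnit 4 behaviour being described; {{AssumeSketch}} and 
{{SkippedTest}} are hand-rolled stand-ins for {{org.junit.Assume}} and its 
assumption-violated exception, not the real JUnit classes:

```java
// Stand-in for the JUnit 4 skip mechanism: assumeTrue() aborts a test as
// "skipped" (not failed) when a precondition -- e.g. credentials file
// present -- is unmet.
public class AssumeSketch {
    /** Marks a test as skipped rather than failed. */
    static class SkippedTest extends RuntimeException {
        SkippedTest(String why) { super(why); }
    }

    static void assumeTrue(String why, boolean condition) {
        if (!condition) throw new SkippedTest(why);
    }

    /** Returns "ran" if the body executed, "skipped" if the assumption failed. */
    static String runTest(boolean credentialsPresent) {
        try {
            assumeTrue("no auth-keys.xml on the classpath", credentialsPresent);
            // ...real contract-test body would run here...
            return "ran";
        } catch (SkippedTest s) {
            return "skipped";
        }
    }
}
```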

h2. findbugs

Presumably all of those bugs were hidden before. I'd recommend copying over the 
hadoop-common excludes to make the warnings go away, so that changes to the 
source can go into a later patch. This one can just move the code and fix the 
POMs.

 Move s3-related FS connector code to hadoop-aws
 ---

 Key: HADOOP-11074
 URL: https://issues.apache.org/jira/browse/HADOOP-11074
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.0.0
Reporter: David S. Wang
Assignee: David S. Wang
 Fix For: 3.0.0

 Attachments: HADOOP-11074.patch, HADOOP-11074.patch.2


 Now that hadoop-aws has been created, we should actually move the relevant 
 code into that module, similar to what was done with hadoop-openstack, etc.





[jira] [Commented] (HADOOP-11064) UnsatisifedLinkError with hadoop 2.4 JARs on hadoop-2.6 due NativeCRC32 method changes

2014-09-10 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14128212#comment-14128212
 ] 

Steve Loughran commented on HADOOP-11064:
-

# I have nothing against versioning, but IMO that can be a different JIRA: add 
versioning to libhadoop. This JIRA is covering linkage errors which we can 
fix.
# w.r.t. patch 3, the log of exceptions should include the stack trace, perhaps.

bq. There are a few months until 2.6 will be released -- do we really need to 
hack this? 

Yes. Because those of us who have switched our code to only work against 
branch-2 are the ones finding bugs sooner rather than later. If we weren't, this 
issue wouldn't have surfaced until hadoop-2.6 shipped, at which point the fixes 
become an even bigger piece of firefighting in a sprint to get 2.6.1 out the 
door the following week.

If branch-2 isn't in a state usable by anyone downstream, it doesn't get used, 
regressions don't get picked up.

Right now, for us, it isn't usable, because we're the only team that's tried to 
deploy HBase 0.98 on a Hadoop 2.6 codebase cluster. 

Furthermore, I don't think it is a hack; it retains the entry points which 
hadoop 2.4 code expects of the native library.
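The compatibility approach can be sketched in plain Java; the real fix is about 
keeping old native symbols in libhadoop, and the names and toy checksum below 
are illustrative, not the actual {{NativeCrc32}} signatures:

```java
// The old entry point survives as a thin forwarder to the renamed/extended
// implementation, so callers compiled against the old name still link.
public class Crc32Shim {
    /** New implementation with the changed signature. */
    static int verifyChunked(byte[] data, int off, int len) {
        int sum = 0;                          // toy checksum, not a real CRC
        for (int i = off; i < off + len; i++) {
            sum = 31 * sum + data[i];
        }
        return sum;
    }

    /** Retained old entry point: same behaviour, delegates to the new one. */
    @Deprecated
    static int verify(byte[] data) {
        return verifyChunked(data, 0, data.length);
    }
}
```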

 UnsatisifedLinkError with hadoop 2.4 JARs on hadoop-2.6 due NativeCRC32 
 method changes
 --

 Key: HADOOP-11064
 URL: https://issues.apache.org/jira/browse/HADOOP-11064
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.6.0
 Environment: Hadoop 2.6 cluster, trying to run code containing hadoop 
 2.4 JARs
Reporter: Steve Loughran
Assignee: Colin Patrick McCabe
Priority: Blocker
 Attachments: HADOOP-11064.001.patch, HADOOP-11064.002.patch, 
 HADOOP-11064.003.patch


 The private native method names and signatures in {{NativeCrc32}} were 
 changed in HDFS-6561 ... as a result hadoop-common-2.4 JARs get unsatisfied 
 link errors when they try to perform checksums. 
 This essentially stops Hadoop 2.4 applications running on Hadoop 2.6 unless 
 rebuilt and repackaged with the hadoop-2.6 JARs





[jira] [Commented] (HADOOP-11080) Convert Windows native build in hadoop-common to use CMake.

2014-09-10 Thread Remus Rusanu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14128324#comment-14128324
 ] 

Remus Rusanu commented on HADOOP-11080:
---

YARN-2198, YARN-1972 and YARN-2458 patches come with some big changes to the 
winutils/libwinutils build, including a resource compile and a midl step for IDL 
compile. My changes also cleaned up the msbuild, e.g. using the maven target as 
the intermediate build location and so on. As we already have a CMake dependency 
for hdfs, I have no objections to switching to it, but we'll have to validate 
the more complex build steps that are coming with the Windows secure nodemanager 
work.

 Convert Windows native build in hadoop-common to use CMake.
 ---

 Key: HADOOP-11080
 URL: https://issues.apache.org/jira/browse/HADOOP-11080
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build, native
Reporter: Chris Nauroth

 The Windows native build in hadoop-common currently relies on a set of 
 checked-in MSBuild project files.  The logic of these project files is 
 largely redundant with the logic of CMakeLists.txt.  It causes some dual 
 maintenance overhead, because certain build changes require separate changes 
 for both CMake and MSBuild.  This issue proposes to convert the Windows build 
 process to use CMake instead of checked-in project files.
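A hedged sketch of the direction being proposed; the target and file names here 
are illustrative, not the actual hadoop-common build content:

```cmake
# One CMakeLists drives the Windows native build instead of parallel,
# checked-in MSBuild .vcxproj files.
cmake_minimum_required(VERSION 3.1)
project(winutils C)

if(WIN32)
    # Library plus executable built from the same definitions that the
    # non-Windows native build already uses.
    add_library(libwinutils STATIC libwinutils.c)
    add_executable(winutils winutils_main.c)
    target_link_libraries(winutils libwinutils)
endif()
```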





[jira] [Commented] (HADOOP-11077) NPE if hosts not specified in ProxyUsers

2014-09-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14128353#comment-14128353
 ] 

Hudson commented on HADOOP-11077:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #676 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/676/])
HADOOP-11077. NPE if hosts not specified in ProxyUsers. (gchanan via tucu) 
(tucu: rev 9ee891aa90333bf18cba412400daa5834f15c41d)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/DefaultImpersonationProvider.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/authorize/TestProxyUsers.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 NPE if hosts not specified in ProxyUsers
 

 Key: HADOOP-11077
 URL: https://issues.apache.org/jira/browse/HADOOP-11077
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Fix For: 2.6.0

 Attachments: HADOOP-11077.patch


 When using the TokenDelegationAuthenticationFilter, I noticed if I don't 
 specify the hosts for a user/groups proxy user and then try to authenticate, 
 I get an NPE rather than an AuthorizationException.
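The bug pattern can be sketched with hypothetical names (this is not the actual 
{{DefaultImpersonationProvider}} code):

```java
import java.util.HashMap;
import java.util.Map;

// When no hosts were configured for a proxy user, the per-user lookup
// returns null; dereferencing that directly is the NPE. Treating null as
// "not authorized" yields the expected AuthorizationException instead.
public class ProxyHostsCheck {
    static class AuthorizationException extends Exception {
        AuthorizationException(String m) { super(m); }
    }

    final Map<String, String> proxyHosts = new HashMap<>();

    void authorize(String user, String host) throws AuthorizationException {
        String allowed = proxyHosts.get(user); // null if hosts never specified
        if (allowed == null || !allowed.contains(host)) {
            throw new AuthorizationException(
                "User " + user + " may not impersonate from " + host);
        }
    }
}
```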





[jira] [Commented] (HADOOP-10925) Compilation fails in native link0 function on Windows.

2014-09-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14128359#comment-14128359
 ] 

Hudson commented on HADOOP-10925:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #676 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/676/])
HADOOP-10925. Change attribution in CHANGES.txt from trunk to 2.6.0. (cnauroth: 
rev 3e8f353c8e36b1467af4a8a421097afa512b324c)
* hadoop-common-project/hadoop-common/CHANGES.txt


 Compilation fails in native link0 function on Windows.
 --

 Key: HADOOP-10925
 URL: https://issues.apache.org/jira/browse/HADOOP-10925
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Blocker
 Fix For: 2.6.0

 Attachments: HADOOP-10925.1.patch


 HDFS-6482 introduced a new native code function for creating hard links.  The 
 Windows implementation of this function does not compile due to an incorrect 
 call to {{CreateHardLink}}.





[jira] [Commented] (HADOOP-11057) checknative command to probe for winutils.exe on windows

2014-09-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14128355#comment-14128355
 ] 

Hudson commented on HADOOP-11057:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #676 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/676/])
HADOOP-11057. checknative command to probe for winutils.exe on windows. 
Contributed by Xiaoyu Yao. (cnauroth: rev 
6dae4b430c342f9ad44ad8659c372e519f3931c9)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestNativeLibraryChecker.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeLibraryChecker.java


 checknative command to probe for winutils.exe on windows
 

 Key: HADOOP-11057
 URL: https://issues.apache.org/jira/browse/HADOOP-11057
 Project: Hadoop Common
  Issue Type: Improvement
  Components: native
Affects Versions: 2.5.0
 Environment: windows
Reporter: Steve Loughran
Assignee: Xiaoyu Yao
Priority: Minor
 Fix For: 2.6.0

 Attachments: HADOOP-11057.0.patch, HADOOP-11057.1.patch, 
 HADOOP-11057.2.patch, HADOOP-11057.3.patch


 hadoop's {{checknative}} command looks for native binaries and returns an 
 error code if one is missing.
 But it doesn't check for {{winutils.exe}} on windows, which turns out to be 
 essential for some operations. 
 Adding this check to the -a (or default) operation would allow the check to 
 be used as a health check on windows installations
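One plausible shape for such a probe, as a hedged sketch; the real 
{{NativeLibraryChecker}} change may look quite different:

```java
import java.io.File;

// Checks for %HADOOP_HOME%\bin\winutils.exe under a given Hadoop home;
// a health check on a Windows installation could fail fast when missing.
public class WinutilsProbe {
    static boolean winutilsPresent(String hadoopHome) {
        if (hadoopHome == null) {
            return false;
        }
        return new File(new File(hadoopHome, "bin"), "winutils.exe").isFile();
    }
}
```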





[jira] [Commented] (HADOOP-9989) Bug introduced in HADOOP-9374, which parses the -tokenCacheFile as binary file but set it to the configuration as JSON file.

2014-09-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14128360#comment-14128360
 ] 

Hudson commented on HADOOP-9989:


FAILURE: Integrated in Hadoop-Yarn-trunk #676 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/676/])
HADOOP-9989. Bug introduced in HADOOP-9374, which parses the -tokenCacheFile as 
binary file but set it to the configuration as JSON file. (zxu via tucu) (tucu: 
rev b100949404843ed245ef4e118291f55b3fdc81b8)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/GenericOptionsParser.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestGenericOptionsParser.java


 Bug introduced in HADOOP-9374, which parses the -tokenCacheFile as binary 
 file but set it to the configuration as JSON file.
 

 Key: HADOOP-9989
 URL: https://issues.apache.org/jira/browse/HADOOP-9989
 Project: Hadoop Common
  Issue Type: Bug
  Components: security, util
Affects Versions: 2.1.0-beta
 Environment: Red Hat Enterprise 6 with Sun Java 1.7 and IBM Java 1.6
Reporter: Jinghui Wang
Assignee: zhihai xu
 Fix For: 2.6.0

 Attachments: HADOOP-9989.001.patch, HADOOP-9989.patch

   Original Estimate: 0h
  Remaining Estimate: 0h

 The code in JIRA HADOOP-9374's patch introduced a bug: the value of the 
 tokenCacheFile parameter is parsed as a binary file but set into the 
 mapreduce.job.credentials.json parameter in GenericOptionsParser, which 
 cannot be parsed by JobSubmitter when it gets the value.





[jira] [Commented] (HADOOP-9374) Add tokens from -tokenCacheFile into UGI

2014-09-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14128361#comment-14128361
 ] 

Hudson commented on HADOOP-9374:


FAILURE: Integrated in Hadoop-Yarn-trunk #676 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/676/])
HADOOP-9989. Bug introduced in HADOOP-9374, which parses the -tokenCacheFile as 
binary file but set it to the configuration as JSON file. (zxu via tucu) (tucu: 
rev b100949404843ed245ef4e118291f55b3fdc81b8)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/GenericOptionsParser.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestGenericOptionsParser.java


 Add tokens from -tokenCacheFile into UGI
 

 Key: HADOOP-9374
 URL: https://issues.apache.org/jira/browse/HADOOP-9374
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.0.0-alpha, 3.0.0, 0.23.7
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 0.23.7, 2.0.4-alpha

 Attachments: HADOOP-9374.patch


 {{GenericOptionsParser}} accepts a {{-tokenCacheFile}} option.  However, it 
 only sets the {{mapreduce.job.credentials.json}} conf value instead of also 
 adding the tokens to the UGI so they are usable by the command being executed.
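The gap can be sketched with toy stand-ins; the Map and List below stand in for 
the real Configuration and UGI, and only the conf key comes from the 
description above:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Before the change, only the conf key was set; the executed command could
// not actually use the tokens. The improvement also loads them into the
// caller's UGI (modelled here as a plain list).
public class TokenCacheSketch {
    static final Map<String, String> conf = new HashMap<>();
    static final List<String> ugiTokens = new ArrayList<>();

    static void applyTokenCacheFile(String path, List<String> tokensInFile) {
        conf.put("mapreduce.job.credentials.json", path); // old behaviour
        ugiTokens.addAll(tokensInFile);                   // the added step
    }
}
```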





[jira] [Created] (HADOOP-11081) Document hadoop properties (-D's) expected to be set the shell code

2014-09-10 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-11081:
-

 Summary: Document hadoop properties (-D's) expected to be set the 
shell code
 Key: HADOOP-11081
 URL: https://issues.apache.org/jira/browse/HADOOP-11081
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation, scripts
Reporter: Allen Wittenauer


There are quite a few Java properties that are expected to be set by the shell 
code. These are currently undocumented.  





[jira] [Commented] (HADOOP-9989) Bug introduced in HADOOP-9374, which parses the -tokenCacheFile as binary file but set it to the configuration as JSON file.

2014-09-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14128497#comment-14128497
 ] 

Hudson commented on HADOOP-9989:


SUCCESS: Integrated in Hadoop-Hdfs-trunk #1867 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1867/])
HADOOP-9989. Bug introduced in HADOOP-9374, which parses the -tokenCacheFile as 
binary file but set it to the configuration as JSON file. (zxu via tucu) (tucu: 
rev b100949404843ed245ef4e118291f55b3fdc81b8)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/GenericOptionsParser.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestGenericOptionsParser.java


 Bug introduced in HADOOP-9374, which parses the -tokenCacheFile as binary 
 file but set it to the configuration as JSON file.
 

 Key: HADOOP-9989
 URL: https://issues.apache.org/jira/browse/HADOOP-9989
 Project: Hadoop Common
  Issue Type: Bug
  Components: security, util
Affects Versions: 2.1.0-beta
 Environment: Red Hat Enterprise 6 with Sun Java 1.7 and IBM Java 1.6
Reporter: Jinghui Wang
Assignee: zhihai xu
 Fix For: 2.6.0

 Attachments: HADOOP-9989.001.patch, HADOOP-9989.patch

   Original Estimate: 0h
  Remaining Estimate: 0h

 The code in JIRA HADOOP-9374's patch introduced a bug: the value of the 
 tokenCacheFile parameter is parsed as a binary file but set into the 
 mapreduce.job.credentials.json parameter in GenericOptionsParser, which 
 cannot be parsed by JobSubmitter when it gets the value.





[jira] [Commented] (HADOOP-11081) Document hadoop properties (-D's) expected to be set the shell code

2014-09-10 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11081?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14128500#comment-14128500
 ] 

Allen Wittenauer commented on HADOOP-11081:
---

Post-HADOOP-9902, this is much easier to do, so we should take advantage of it! 
 HADOOP and HDFS ones should be documented in hadoop-env.sh, YARN ones in 
yarn-env.sh.  
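A sketch of how such documentation might look in hadoop-env.sh; hadoop.log.dir 
and hadoop.id.str are real examples of properties the launcher scripts set, but 
this is not an audited list of them:

```shell
# Sketch of a documented-property section for hadoop-env.sh.
#
# hadoop.log.dir : directory where daemon logs are written
# hadoop.id.str  : identity string used in log file names
HADOOP_OPTS="${HADOOP_OPTS} -Dhadoop.log.dir=${HADOOP_LOG_DIR:-/tmp/hadoop-logs}"
HADOOP_OPTS="${HADOOP_OPTS} -Dhadoop.id.str=${USER:-unknown}"
```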

 Document hadoop properties (-D's) expected to be set the shell code
 ---

 Key: HADOOP-11081
 URL: https://issues.apache.org/jira/browse/HADOOP-11081
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation, scripts
Reporter: Allen Wittenauer
  Labels: newbie

 There are quite a few Java properties that are expected to be set by the 
 shell code. These are currently undocumented.  





[jira] [Commented] (HADOOP-11057) checknative command to probe for winutils.exe on windows

2014-09-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14128492#comment-14128492
 ] 

Hudson commented on HADOOP-11057:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1867 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1867/])
HADOOP-11057. checknative command to probe for winutils.exe on windows. 
Contributed by Xiaoyu Yao. (cnauroth: rev 
6dae4b430c342f9ad44ad8659c372e519f3931c9)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestNativeLibraryChecker.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeLibraryChecker.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 checknative command to probe for winutils.exe on windows
 

 Key: HADOOP-11057
 URL: https://issues.apache.org/jira/browse/HADOOP-11057
 Project: Hadoop Common
  Issue Type: Improvement
  Components: native
Affects Versions: 2.5.0
 Environment: windows
Reporter: Steve Loughran
Assignee: Xiaoyu Yao
Priority: Minor
 Fix For: 2.6.0

 Attachments: HADOOP-11057.0.patch, HADOOP-11057.1.patch, 
 HADOOP-11057.2.patch, HADOOP-11057.3.patch


 hadoop's {{checknative}} command looks for native binaries and returns an 
 error code if one is missing.
 But it doesn't check for {{winutils.exe}} on windows, which turns out to be 
 essential for some operations. 
 Adding this check to the -a (or default) operation would allow the check to 
 be used as a health check on windows installations





[jira] [Commented] (HADOOP-9374) Add tokens from -tokenCacheFile into UGI

2014-09-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14128498#comment-14128498
 ] 

Hudson commented on HADOOP-9374:


SUCCESS: Integrated in Hadoop-Hdfs-trunk #1867 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1867/])
HADOOP-9989. Bug introduced in HADOOP-9374, which parses the -tokenCacheFile as 
binary file but set it to the configuration as JSON file. (zxu via tucu) (tucu: 
rev b100949404843ed245ef4e118291f55b3fdc81b8)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/GenericOptionsParser.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestGenericOptionsParser.java


 Add tokens from -tokenCacheFile into UGI
 

 Key: HADOOP-9374
 URL: https://issues.apache.org/jira/browse/HADOOP-9374
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.0.0-alpha, 3.0.0, 0.23.7
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 0.23.7, 2.0.4-alpha

 Attachments: HADOOP-9374.patch


 {{GenericOptionsParser}} accepts a {{-tokenCacheFile}} option.  However, it 
 only sets the {{mapreduce.job.credentials.json}} conf value instead of also 
 adding the tokens to the UGI so they are usable by the command being executed.





[jira] [Commented] (HADOOP-10925) Compilation fails in native link0 function on Windows.

2014-09-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14128496#comment-14128496
 ] 

Hudson commented on HADOOP-10925:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1867 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1867/])
HADOOP-10925. Change attribution in CHANGES.txt from trunk to 2.6.0. (cnauroth: 
rev 3e8f353c8e36b1467af4a8a421097afa512b324c)
* hadoop-common-project/hadoop-common/CHANGES.txt


 Compilation fails in native link0 function on Windows.
 --

 Key: HADOOP-10925
 URL: https://issues.apache.org/jira/browse/HADOOP-10925
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Blocker
 Fix For: 2.6.0

 Attachments: HADOOP-10925.1.patch


 HDFS-6482 introduced a new native code function for creating hard links.  The 
 Windows implementation of this function does not compile due to an incorrect 
 call to {{CreateHardLink}}.





[jira] [Commented] (HADOOP-11077) NPE if hosts not specified in ProxyUsers

2014-09-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14128490#comment-14128490
 ] 

Hudson commented on HADOOP-11077:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #1867 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/1867/])
HADOOP-11077. NPE if hosts not specified in ProxyUsers. (gchanan via tucu) 
(tucu: rev 9ee891aa90333bf18cba412400daa5834f15c41d)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/DefaultImpersonationProvider.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/authorize/TestProxyUsers.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 NPE if hosts not specified in ProxyUsers
 

 Key: HADOOP-11077
 URL: https://issues.apache.org/jira/browse/HADOOP-11077
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Fix For: 2.6.0

 Attachments: HADOOP-11077.patch


 When using the TokenDelegationAuthenticationFilter, I noticed if I don't 
 specify the hosts for a user/groups proxy user and then try to authenticate, 
 I get an NPE rather than an AuthorizationException.





[jira] [Updated] (HADOOP-11081) Document hadoop properties expected to be set the shell code

2014-09-10 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11081:
--
Summary: Document hadoop properties expected to be set the shell code  
(was: Document hadoop properties (-D's) expected to be set the shell code)

 Document hadoop properties expected to be set the shell code
 

 Key: HADOOP-11081
 URL: https://issues.apache.org/jira/browse/HADOOP-11081
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation, scripts
Reporter: Allen Wittenauer
  Labels: newbie

 There are quite a few Java properties that are expected to be set by the 
 shell code. These are currently undocumented.  





[jira] [Updated] (HADOOP-11081) Document hadoop properties expected to be set by the shell code

2014-09-10 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11081?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11081:
--
Summary: Document hadoop properties expected to be set by the shell code  
(was: Document hadoop properties expected to be set the shell code)

 Document hadoop properties expected to be set by the shell code
 ---

 Key: HADOOP-11081
 URL: https://issues.apache.org/jira/browse/HADOOP-11081
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation, scripts
Reporter: Allen Wittenauer
  Labels: newbie

 There are quite a few Java properties that are expected to be set by the 
 shell code. These are currently undocumented.  





[jira] [Updated] (HADOOP-11074) Move s3-related FS connector code to hadoop-aws

2014-09-10 Thread David S. Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David S. Wang updated HADOOP-11074:
---
Status: Open  (was: Patch Available)

 Move s3-related FS connector code to hadoop-aws
 ---

 Key: HADOOP-11074
 URL: https://issues.apache.org/jira/browse/HADOOP-11074
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.0.0
Reporter: David S. Wang
Assignee: David S. Wang
 Fix For: 3.0.0

 Attachments: HADOOP-11074.patch, HADOOP-11074.patch.2


 Now that hadoop-aws has been created, we should actually move the relevant 
 code into that module, similar to what was done with hadoop-openstack, etc.





[jira] [Updated] (HADOOP-11074) Move s3-related FS connector code to hadoop-aws

2014-09-10 Thread David S. Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David S. Wang updated HADOOP-11074:
---
Status: Patch Available  (was: Open)

 Move s3-related FS connector code to hadoop-aws
 ---

 Key: HADOOP-11074
 URL: https://issues.apache.org/jira/browse/HADOOP-11074
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.0.0
Reporter: David S. Wang
Assignee: David S. Wang
 Fix For: 3.0.0

 Attachments: HADOOP-11074.patch, HADOOP-11074.patch.2, 
 HADOOP-11074.patch.3


 Now that hadoop-aws has been created, we should actually move the relevant 
 code into that module, similar to what was done with hadoop-openstack, etc.





[jira] [Updated] (HADOOP-11074) Move s3-related FS connector code to hadoop-aws

2014-09-10 Thread David S. Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

David S. Wang updated HADOOP-11074:
---
Attachment: HADOOP-11074.patch.3

This patch copies the findbugs exclude file from hadoop-common into hadoop-aws, 
as per Steve's suggestion.  I also added back the pom content for 
auth-keys.xml.  Thanks for the suggestions.

I ran mvn install from top-level as well as mvn test from hadoop-aws (with and 
without auth-keys.xml) and they all pass as expected.

 Move s3-related FS connector code to hadoop-aws
 ---

 Key: HADOOP-11074
 URL: https://issues.apache.org/jira/browse/HADOOP-11074
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.0.0
Reporter: David S. Wang
Assignee: David S. Wang
 Fix For: 3.0.0

 Attachments: HADOOP-11074.patch, HADOOP-11074.patch.2, 
 HADOOP-11074.patch.3


 Now that hadoop-aws has been created, we should actually move the relevant 
 code into that module, similar to what was done with hadoop-openstack, etc.





[jira] [Created] (HADOOP-11082) Resolve findbugs warnings in hadoop-aws module

2014-09-10 Thread David S. Wang (JIRA)
David S. Wang created HADOOP-11082:
--

 Summary: Resolve findbugs warnings in hadoop-aws module
 Key: HADOOP-11082
 URL: https://issues.apache.org/jira/browse/HADOOP-11082
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 3.0.0
Reporter: David S. Wang
Priority: Minor


Currently the hadoop-aws module has the findbugs exclude file from 
hadoop-common.  It would be nice to address the underlying findbugs warnings 
eventually.





[jira] [Commented] (HADOOP-11074) Move s3-related FS connector code to hadoop-aws

2014-09-10 Thread David S. Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14128616#comment-14128616
 ] 

David S. Wang commented on HADOOP-11074:


Forgot to also mention that I changed the top-level pom.xml to add a jackson 
exclusion to the aws-java-sdk dependency, which depends on an earlier version 
of jackson.  Moving to aws-java-sdk 1.8.9 didn't do the trick, and while it 
would be great to harmonize the jackson dependency between aws-java-sdk and 
azure, I don't see that happening anytime soon.
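As a hedged illustration of that kind of change, a pom fragment along these 
lines (the jackson coordinates and artifact names are assumptions, not the 
exact content of the patch):

```xml
<!-- Top-level dependencyManagement sketch: stop aws-java-sdk from pulling
     in its own (older) jackson. -->
<dependency>
  <groupId>com.amazonaws</groupId>
  <artifactId>aws-java-sdk</artifactId>
  <exclusions>
    <exclusion>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-core</artifactId>
    </exclusion>
    <exclusion>
      <groupId>com.fasterxml.jackson.core</groupId>
      <artifactId>jackson-databind</artifactId>
    </exclusion>
  </exclusions>
</dependency>
```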

Colin, I filed a findbugs JIRA as HADOOP-11082 and linked it to this JIRA.

 Move s3-related FS connector code to hadoop-aws
 ---

 Key: HADOOP-11074
 URL: https://issues.apache.org/jira/browse/HADOOP-11074
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.0.0
Reporter: David S. Wang
Assignee: David S. Wang
 Fix For: 3.0.0

 Attachments: HADOOP-11074.patch, HADOOP-11074.patch.2, 
 HADOOP-11074.patch.3


 Now that hadoop-aws has been created, we should actually move the relevant 
 code into that module, similar to what was done with hadoop-openstack, etc.





[jira] [Commented] (HADOOP-11074) Move s3-related FS connector code to hadoop-aws

2014-09-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14128684#comment-14128684
 ] 

Hadoop QA commented on HADOOP-11074:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12667765/HADOOP-11074.patch.3
  against trunk revision 3072c83.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 25 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
100 warning messages.
See 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4690//artifact/trunk/patchprocess/diffJavadocWarnings.txt
 for details.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
hadoop-tools/hadoop-tools-dist:

  org.apache.hadoop.ha.TestZKFailoverControllerStress

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4690//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4690//console

This message is automatically generated.

 Move s3-related FS connector code to hadoop-aws
 ---

 Key: HADOOP-11074
 URL: https://issues.apache.org/jira/browse/HADOOP-11074
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.0.0
Reporter: David S. Wang
Assignee: David S. Wang
 Fix For: 3.0.0

 Attachments: HADOOP-11074.patch, HADOOP-11074.patch.2, 
 HADOOP-11074.patch.3


 Now that hadoop-aws has been created, we should actually move the relevant 
 code into that module, similar to what was done with hadoop-openstack, etc.





[jira] [Commented] (HADOOP-11080) Convert Windows native build in hadoop-common to use CMake.

2014-09-10 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11080?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14128689#comment-14128689
 ] 

Chris Nauroth commented on HADOOP-11080:


Thanks for the notice, Remus.  I've linked all of those jiras to indicate that 
we'll wait for those to complete first.  Otherwise, we'd just set ourselves up 
for merge conflicts.

 Convert Windows native build in hadoop-common to use CMake.
 ---

 Key: HADOOP-11080
 URL: https://issues.apache.org/jira/browse/HADOOP-11080
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build, native
Reporter: Chris Nauroth

 The Windows native build in hadoop-common currently relies on a set of 
 checked-in MSBuild project files.  The logic of these project files is 
 largely redundant with the logic of CMakeLists.txt.  It causes some dual 
 maintenance overhead, because certain build changes require separate changes 
 for both CMake and MSBuild.  This issue proposes to convert the Windows build 
 process to use CMake instead of checked-in project files.





[jira] [Commented] (HADOOP-11074) Move s3-related FS connector code to hadoop-aws

2014-09-10 Thread David S. Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14128711#comment-14128711
 ] 

David S. Wang commented on HADOOP-11074:


I don't think the failing test is related to these changes.

 Move s3-related FS connector code to hadoop-aws
 ---

 Key: HADOOP-11074
 URL: https://issues.apache.org/jira/browse/HADOOP-11074
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.0.0
Reporter: David S. Wang
Assignee: David S. Wang
 Fix For: 3.0.0

 Attachments: HADOOP-11074.patch, HADOOP-11074.patch.2, 
 HADOOP-11074.patch.3


 Now that hadoop-aws has been created, we should actually move the relevant 
 code into that module, similar to what was done with hadoop-openstack, etc.





[jira] [Updated] (HADOOP-10758) KMS: add ACLs on per key basis.

2014-09-10 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated HADOOP-10758:
-
Attachment: HADOOP-10758.9.patch

* Rebasing with trunk
* Added default key acls to {{src/main/conf/key-acls.xml}}
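As a rough illustration of what per-key and default key ACL entries look like (the key name, principals, and property names below are assumptions drawn from the patch description, not the committed configuration):

```xml
<!-- Hypothetical key-acls fragment: "mykey" and the user/group principals
     are illustrative. Values use Hadoop's "users groups" ACL format. -->
<property>
  <name>key.acl.mykey.MANAGEMENT</name>
  <value>keyadmin keyadmingroup</value>
</property>
<property>
  <name>default.key.acl.DECRYPT_EEK</name>
  <value>hdfs supergroup</value>
</property>
```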

 KMS: add ACLs on per key basis.
 ---

 Key: HADOOP-10758
 URL: https://issues.apache.org/jira/browse/HADOOP-10758
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 3.0.0
Reporter: Alejandro Abdelnur
Assignee: Arun Suresh
 Attachments: HADOOP-10758.1.patch, HADOOP-10758.2.patch, 
 HADOOP-10758.3.patch, HADOOP-10758.4.patch, HADOOP-10758.5.patch, 
 HADOOP-10758.6.patch, HADOOP-10758.7.patch, HADOOP-10758.8.patch, 
 HADOOP-10758.9.patch


 The KMS server should enforce ACLs on per key basis.





[jira] [Commented] (HADOOP-9450) HADOOP_USER_CLASSPATH_FIRST is not honored; CLASSPATH is PREpended instead of APpended

2014-09-10 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9450?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14128753#comment-14128753
 ] 

Sangjin Lee commented on HADOOP-9450:
-

This was borne out of an investigation of a hadoop issue at our company. It 
appears that prior to this the hadoop configuration directory (HADOOP_CONF_DIR) 
was always the first entry in the classpath, regardless of 
HADOOP_USER_CLASSPATH_FIRST or HADOOP_CLASSPATH. But after HADOOP-9450, if 
HADOOP_USER_CLASSPATH_FIRST is set and the user provides his/her version of 
*-site.xml through HADOOP_CLASSPATH, the user would end up trumping the hadoop 
configuration. And I believe it is still the case after Allen's changes 
(HADOOP-9902).

Is this an intended behavior? What I'm not sure of is whether we expect the 
client to be able to override the site.xml files the hadoop configuration 
provides. If that is true, then it is working as desired. If not, we'd need to 
fix this behavior.

Thoughts?
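The ordering question can be sketched in shell; the jar and conf paths below are hypothetical, and the two assignments mirror the append/prepend behavior described above:

```shell
# Hypothetical paths; demonstrates the ordering effect behind HADOOP-9450.
CLASSPATH="/etc/hadoop/conf:/opt/hadoop/lib/framework.jar"   # built-up framework classpath
HADOOP_CLASSPATH="/home/alice/override.jar"                  # user-supplied entries

# Default behavior: user entries are appended, so HADOOP_CONF_DIR stays first.
APPENDED="${CLASSPATH}:${HADOOP_CLASSPATH}"

# With HADOOP_USER_CLASSPATH_FIRST set: user entries are prepended, so a
# *-site.xml inside override.jar would shadow the one in /etc/hadoop/conf.
PREPENDED="${HADOOP_CLASSPATH}:${CLASSPATH}"

echo "appended:  $APPENDED"
echo "prepended: $PREPENDED"
```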


 HADOOP_USER_CLASSPATH_FIRST is not honored; CLASSPATH is PREpended instead of 
 APpended
 --

 Key: HADOOP-9450
 URL: https://issues.apache.org/jira/browse/HADOOP-9450
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Reporter: Mitch Wyle
Assignee: Harsh J
 Fix For: 1-win, 2.1.0-beta, 1.3.0

 Attachments: HADOOP-9450-branch-1-win.patch, 
 HADOOP-9450-branch-1.patch, HADOOP-9450-branch-2.patch, HADOOP-9450.patch, 
 HADOOP-9450.patch


 On line 133 of the hadoop shell wrapper, CLASSPATH is set as:
 CLASSPATH=${CLASSPATH}:${HADOOP_CLASSPATH}
 Notice that the built-up CLASSPATH, along with all the libs and unwanted JARS 
 are pre-pended BEFORE the user's HADOOP_CLASSPATH.  Therefore there is no way 
 to put your own JARs in front of those that the hadoop wrapper script sets.
 We propose a patch that reverses this order.  Failing that, we would like to 
 add a command line option to override this behavior and enable a user's JARs 
 to be found before the wrong ones in the Hadoop library paths.
 We always welcome your opinions.





[jira] [Commented] (HADOOP-10758) KMS: add ACLs on per key basis.

2014-09-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14128779#comment-14128779
 ] 

Hadoop QA commented on HADOOP-10758:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12667786/HADOOP-10758.9.patch
  against trunk revision 3072c83.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-kms.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4691//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4691//console

This message is automatically generated.

 KMS: add ACLs on per key basis.
 ---

 Key: HADOOP-10758
 URL: https://issues.apache.org/jira/browse/HADOOP-10758
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 3.0.0
Reporter: Alejandro Abdelnur
Assignee: Arun Suresh
 Attachments: HADOOP-10758.1.patch, HADOOP-10758.2.patch, 
 HADOOP-10758.3.patch, HADOOP-10758.4.patch, HADOOP-10758.5.patch, 
 HADOOP-10758.6.patch, HADOOP-10758.7.patch, HADOOP-10758.8.patch, 
 HADOOP-10758.9.patch


 The KMS server should enforce ACLs on per key basis.





[jira] [Commented] (HADOOP-10925) Compilation fails in native link0 function on Windows.

2014-09-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10925?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14128917#comment-14128917
 ] 

Hudson commented on HADOOP-10925:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1892 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1892/])
HADOOP-10925. Change attribution in CHANGES.txt from trunk to 2.6.0. (cnauroth: 
rev 3e8f353c8e36b1467af4a8a421097afa512b324c)
* hadoop-common-project/hadoop-common/CHANGES.txt


 Compilation fails in native link0 function on Windows.
 --

 Key: HADOOP-10925
 URL: https://issues.apache.org/jira/browse/HADOOP-10925
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 3.0.0
Reporter: Chris Nauroth
Assignee: Chris Nauroth
Priority: Blocker
 Fix For: 2.6.0

 Attachments: HADOOP-10925.1.patch


 HDFS-6482 introduced a new native code function for creating hard links.  The 
 Windows implementation of this function does not compile due to an incorrect 
 call to {{CreateHardLink}}.





[jira] [Commented] (HADOOP-9374) Add tokens from -tokenCacheFile into UGI

2014-09-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9374?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14128919#comment-14128919
 ] 

Hudson commented on HADOOP-9374:


FAILURE: Integrated in Hadoop-Mapreduce-trunk #1892 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1892/])
HADOOP-9989. Bug introduced in HADOOP-9374, which parses the -tokenCacheFile as 
binary file but set it to the configuration as JSON file. (zxu via tucu) (tucu: 
rev b100949404843ed245ef4e118291f55b3fdc81b8)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/GenericOptionsParser.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestGenericOptionsParser.java


 Add tokens from -tokenCacheFile into UGI
 

 Key: HADOOP-9374
 URL: https://issues.apache.org/jira/browse/HADOOP-9374
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.0.0-alpha, 3.0.0, 0.23.7
Reporter: Daryn Sharp
Assignee: Daryn Sharp
 Fix For: 0.23.7, 2.0.4-alpha

 Attachments: HADOOP-9374.patch


 {{GenericOptionsParser}} accepts a {{-tokenCacheFile}} option.  However, it 
 only sets the {{mapreduce.job.credentials.json}} conf value instead of also 
 adding the tokens to the UGI so they are usable by the command being executed.





[jira] [Commented] (HADOOP-9989) Bug introduced in HADOOP-9374, which parses the -tokenCacheFile as binary file but set it to the configuration as JSON file.

2014-09-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14128918#comment-14128918
 ] 

Hudson commented on HADOOP-9989:


FAILURE: Integrated in Hadoop-Mapreduce-trunk #1892 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1892/])
HADOOP-9989. Bug introduced in HADOOP-9374, which parses the -tokenCacheFile as 
binary file but set it to the configuration as JSON file. (zxu via tucu) (tucu: 
rev b100949404843ed245ef4e118291f55b3fdc81b8)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/GenericOptionsParser.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestGenericOptionsParser.java


 Bug introduced in HADOOP-9374, which parses the -tokenCacheFile as binary 
 file but set it to the configuration as JSON file.
 

 Key: HADOOP-9989
 URL: https://issues.apache.org/jira/browse/HADOOP-9989
 Project: Hadoop Common
  Issue Type: Bug
  Components: security, util
Affects Versions: 2.1.0-beta
 Environment: Red Hat Enterprise 6 with Sun Java 1.7 and IBM Java 1.6
Reporter: Jinghui Wang
Assignee: zhihai xu
 Fix For: 2.6.0

 Attachments: HADOOP-9989.001.patch, HADOOP-9989.patch

   Original Estimate: 0h
  Remaining Estimate: 0h

 The code in JIRA HADOOP-9374's patch introduced a bug, where the value of the 
 tokenCacheFile parameter is parsed as a binary file but set to the
 mapreduce.job.credentials.json parameter in GenericOptionsParser, which 
 cannot be parsed by JobSubmitter when it gets the value.





[jira] [Commented] (HADOOP-11077) NPE if hosts not specified in ProxyUsers

2014-09-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14128910#comment-14128910
 ] 

Hudson commented on HADOOP-11077:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1892 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1892/])
HADOOP-11077. NPE if hosts not specified in ProxyUsers. (gchanan via tucu) 
(tucu: rev 9ee891aa90333bf18cba412400daa5834f15c41d)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/authorize/DefaultImpersonationProvider.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/authorize/TestProxyUsers.java


 NPE if hosts not specified in ProxyUsers
 

 Key: HADOOP-11077
 URL: https://issues.apache.org/jira/browse/HADOOP-11077
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Gregory Chanan
Assignee: Gregory Chanan
 Fix For: 2.6.0

 Attachments: HADOOP-11077.patch


 When using the TokenDelegationAuthenticationFilter, I noticed if I don't 
 specify the hosts for a user/groups proxy user and then try to authenticate, 
 I get an NPE rather than an AuthorizationException.
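For context, proxy-user authorization is normally configured with paired properties, as in the core-site.xml fragment below (the user name and values are illustrative); the NPE reported here arises when the .hosts property is left out:

```xml
<!-- Hypothetical core-site.xml fragment; "oozie" and the values are examples.
     Both .groups and .hosts should be set for each proxy user. -->
<property>
  <name>hadoop.proxyuser.oozie.groups</name>
  <value>etl-users</value>
</property>
<property>
  <name>hadoop.proxyuser.oozie.hosts</name>
  <value>gateway.example.com</value>
</property>
```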





[jira] [Commented] (HADOOP-11057) checknative command to probe for winutils.exe on windows

2014-09-10 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11057?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14128912#comment-14128912
 ] 

Hudson commented on HADOOP-11057:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #1892 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/1892/])
HADOOP-11057. checknative command to probe for winutils.exe on windows. 
Contributed by Xiaoyu Yao. (cnauroth: rev 
6dae4b430c342f9ad44ad8659c372e519f3931c9)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/NativeLibraryChecker.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestNativeLibraryChecker.java


 checknative command to probe for winutils.exe on windows
 

 Key: HADOOP-11057
 URL: https://issues.apache.org/jira/browse/HADOOP-11057
 Project: Hadoop Common
  Issue Type: Improvement
  Components: native
Affects Versions: 2.5.0
 Environment: windows
Reporter: Steve Loughran
Assignee: Xiaoyu Yao
Priority: Minor
 Fix For: 2.6.0

 Attachments: HADOOP-11057.0.patch, HADOOP-11057.1.patch, 
 HADOOP-11057.2.patch, HADOOP-11057.3.patch


 hadoop's {{checknative}} command looks for native binaries and returns an 
 error code if one is missing.
 But it doesn't check for {{winutils.exe}} on windows, which turns out to be 
 essential for some operations. 
 Adding this check to the -a (or default) operation would allow the check to 
 be used as a health check on Windows installations.





[jira] [Updated] (HADOOP-10995) HBase cannot run correctly with Hadoop trunk

2014-09-10 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10995?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10995:
--
Target Version/s:   (was: 2.6.0)

 HBase cannot run correctly with Hadoop trunk
 

 Key: HADOOP-10995
 URL: https://issues.apache.org/jira/browse/HADOOP-10995
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Zhijie Shen
Assignee: Zhijie Shen
Priority: Critical
 Attachments: HADOOP-10995.1.patch, YARN-2032.dependency.patch


 Several incompatible changes that happened on trunk but not on branch-2 have 
 broken the compatibility for HBbase:
 HADOOP-10348
 HADOOP-8124
 HADOOP-10255
 In general, HttpServer and Syncable.sync have been missed.
 It blocks YARN-2032, which makes timeline sever support HBase store.





[jira] [Updated] (HADOOP-8386) hadoop script doesn't work if 'cd' prints to stdout (default behavior in Ubuntu)

2014-09-10 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-8386:
-
Fix Version/s: (was: 3.0.0)

 hadoop script doesn't work if 'cd' prints to stdout (default behavior in 
 Ubuntu)
 

 Key: HADOOP-8386
 URL: https://issues.apache.org/jira/browse/HADOOP-8386
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 1.0.2
 Environment: Ubuntu
Reporter: Christopher Berner
Assignee: Christopher Berner
 Fix For: 1.2.0, 0.23.5

 Attachments: hadoop-8386-1.diff, hadoop-8386-1.diff, 
 hadoop-8386.diff, hadoop.diff


 if the 'hadoop' script is run as 'bin/hadoop' on a distro where the 'cd' 
 command prints to stdout, the script will fail due to this line: 'bin=`cd 
 $bin; pwd`'
 Workaround: execute from the bin/ directory as './hadoop'
 Fix: change that line to 'bin=`cd $bin > /dev/null; pwd`'
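The failure mode and the fix can be reproduced in any POSIX shell by making cd print its destination (here via CDPATH; the directory name is arbitrary):

```shell
# CDPATH resolution makes 'cd' write the resolved path to stdout, which
# pollutes command substitution exactly as described in this issue.
export CDPATH=/tmp
mkdir -p /tmp/hadoop8386_demo

broken=`cd hadoop8386_demo; pwd`             # captures two lines: cd's echo plus pwd
fixed=`cd hadoop8386_demo > /dev/null; pwd`  # redirecting cd's stdout keeps only pwd's line

echo "$fixed"
```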





[jira] [Updated] (HADOOP-8419) GzipCodec NPE upon reset with IBM JDK

2014-09-10 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8419?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-8419:
-
Fix Version/s: (was: 3.0.0)

 GzipCodec NPE upon reset with IBM JDK
 -

 Key: HADOOP-8419
 URL: https://issues.apache.org/jira/browse/HADOOP-8419
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Affects Versions: 1.0.3
Reporter: Luke Lu
Assignee: Yu Li
  Labels: gzip, ibm-jdk
 Fix For: 1.1.2, 2.0.5-alpha

 Attachments: HADOOP-8419-branch-1.patch, 
 HADOOP-8419-branch1-v2.patch, HADOOP-8419-trunk-v2.patch, 
 HADOOP-8419-trunk.patch


 The GzipCodec will NPE upon reset after finish when the native zlib codec is 
 not loaded. When the native zlib is loaded the codec creates a 
 CompressorOutputStream that doesn't have the problem, otherwise, the 
 GZipCodec uses GZIPOutputStream which is extended to provide the resetState 
 method. Since IBM JDK 6 SR9 FP2 including the current JDK 6 SR10, 
 GZIPOutputStream#finish will release the underlying deflater, which causes 
 NPE upon reset. This seems to be an IBM JDK quirk as Sun JDK and OpenJDK 
 doesn't have this issue.





[jira] [Commented] (HADOOP-11074) Move s3-related FS connector code to hadoop-aws

2014-09-10 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14129034#comment-14129034
 ] 

Colin Patrick McCabe commented on HADOOP-11074:
---

bq. This patch copies the findbugs exclude file from hadoop-common into 
hadoop-aws, as per Steve's suggestion. I also added back in the pom content for 
auth-keys.xml as well. Thanks for the suggestions.

Sounds good.

bq. Colin, I filed a findbugs JIRA as HADOOP-11082 and linked it to this JIRA.

Thanks.

bq. I don't think the failing test is related to these changes.

Yeah, I agree.  This patch doesn't alter MiniZKFCCluster.

+1, will commit in an hour or two if there are no more comments.

 Move s3-related FS connector code to hadoop-aws
 ---

 Key: HADOOP-11074
 URL: https://issues.apache.org/jira/browse/HADOOP-11074
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.0.0
Reporter: David S. Wang
Assignee: David S. Wang
 Fix For: 3.0.0

 Attachments: HADOOP-11074.patch, HADOOP-11074.patch.2, 
 HADOOP-11074.patch.3


 Now that hadoop-aws has been created, we should actually move the relevant 
 code into that module, similar to what was done with hadoop-openstack, etc.





[jira] [Commented] (HADOOP-8719) Workaround for kerberos-related log errors upon running any hadoop command on OSX

2014-09-10 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14129039#comment-14129039
 ] 

Allen Wittenauer commented on HADOOP-8719:
--

Howdy. Verifying that what is said to be fixed in 3.x is actually fixed in 3.x 
and not in 2.x.  In that process, I read this comment:

bq. Looks like this patch makes it impossible to debug a secured cluster in Mac 
OS X.

This is incorrect. *-env.sh is meant to be edited by the user.  Therefore, a 
user running security on OS X is expected to remove those bits from the 
*-env.sh files.  Since the time that this patch was committed, HADOOP-9902 has 
reworked this code:

a) Fixed support for Mavericks
b) Made the change in one place and not two
c) Put a comment above the code that specifically says that users running 
security should remove it.
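For readers landing here later, the widely circulated *-env.sh workaround looks like the line below (a sketch, not the exact committed text; per the comment described above, users actually running Kerberos security must remove it):

```shell
# Hedged sketch of the HADOOP-8719 workaround as it would appear in hadoop-env.sh:
# empty krb5 realm/KDC system properties stop the JVM from consulting
# SCDynamicStore on OS X. Do NOT set these on a secured cluster.
HADOOP_OPTS="${HADOOP_OPTS:-} -Djava.security.krb5.realm= -Djava.security.krb5.kdc="
echo "$HADOOP_OPTS"
```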

 Workaround for kerberos-related log errors upon running any hadoop command on 
 OSX
 -

 Key: HADOOP-8719
 URL: https://issues.apache.org/jira/browse/HADOOP-8719
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
 Environment: Mac OS X 10.7, Java 1.6.0_26
Reporter: Jianbin Wei
Priority: Trivial
 Fix For: 3.0.0

 Attachments: HADOOP-8719.patch, HADOOP-8719.patch, HADOOP-8719.patch, 
 HADOOP-8719.patch


 When starting Hadoop on OS X 10.7 (Lion) using start-all.sh, Hadoop logs 
 the following errors:
 2011-07-28 11:45:31.469 java[77427:1a03] Unable to load realm info from 
 SCDynamicStore
 Hadoop does seem to function properly despite this.
 The workaround takes only 10 minutes.
 There are numerous discussions about this:
 googling "Unable to load realm mapping info from SCDynamicStore" returns 1770 
 hits.  Each one has many discussions.  
 Assuming each discussion takes only 5 minutes, a 10-minute fix can save ~150 
 hours.  This does not count the time spent searching for this issue and its 
 solution/workaround, which can easily hit (waste) thousands of hours!





[jira] [Updated] (HADOOP-8386) hadoop script doesn't work if 'cd' prints to stdout (default behavior in Ubuntu)

2014-09-10 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8386?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-8386:
-
Fix Version/s: 3.0.0

 hadoop script doesn't work if 'cd' prints to stdout (default behavior in 
 Ubuntu)
 

 Key: HADOOP-8386
 URL: https://issues.apache.org/jira/browse/HADOOP-8386
 Project: Hadoop Common
  Issue Type: Bug
  Components: scripts
Affects Versions: 1.0.2
 Environment: Ubuntu
Reporter: Christopher Berner
Assignee: Christopher Berner
 Fix For: 1.2.0, 3.0.0, 0.23.5

 Attachments: hadoop-8386-1.diff, hadoop-8386-1.diff, 
 hadoop-8386.diff, hadoop.diff


 if the 'hadoop' script is run as 'bin/hadoop' on a distro where the 'cd' 
 command prints to stdout, the script will fail due to this line: 'bin=`cd 
 $bin; pwd`'
 Workaround: execute from the bin/ directory as './hadoop'
 Fix: change that line to 'bin=`cd $bin > /dev/null; pwd`'





[jira] [Updated] (HADOOP-8813) RPC Server and Client classes need InterfaceAudience and InterfaceStability annotations

2014-09-10 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8813?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-8813:
-
Summary: RPC Server and Client classes need InterfaceAudience and 
InterfaceStability annotations  (was: RPC Serever and Client classes need 
InterfaceAudience and InterfaceStability annotations)

 RPC Server and Client classes need InterfaceAudience and InterfaceStability 
 annotations
 ---

 Key: HADOOP-8813
 URL: https://issues.apache.org/jira/browse/HADOOP-8813
 Project: Hadoop Common
  Issue Type: Improvement
  Components: ipc
Affects Versions: 3.0.0
Reporter: Brandon Li
Assignee: Brandon Li
Priority: Trivial
 Fix For: 3.0.0

 Attachments: HADOOP-8813.patch, HADOOP-8813.patch


 RPC Serever and Client classes need InterfaceAudience and InterfaceStability 
 annotations





[jira] [Resolved] (HADOOP-11020) TestRefreshUserMappings fails

2014-09-10 Thread Chen He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11020?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen He resolved HADOOP-11020.
--
Resolution: Duplicate

 TestRefreshUserMappings fails
 -

 Key: HADOOP-11020
 URL: https://issues.apache.org/jira/browse/HADOOP-11020
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Chen He

 Error Message
 /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build%402/trunk/hadoop-hdfs-project/hadoop-hdfs/target/test-classes/testGroupMappingRefresh_rsrc.xml
  (No such file or directory)
 Stacktrace
 java.io.FileNotFoundException: 
 /home/jenkins/jenkins-slave/workspace/PreCommit-HDFS-Build%402/trunk/hadoop-hdfs-project/hadoop-hdfs/target/test-classes/testGroupMappingRefresh_rsrc.xml
  (No such file or directory)
   at java.io.FileOutputStream.open(Native Method)
   at java.io.FileOutputStream.<init>(FileOutputStream.java:194)
   at java.io.FileOutputStream.<init>(FileOutputStream.java:84)
   at 
 org.apache.hadoop.security.TestRefreshUserMappings.addNewConfigResource(TestRefreshUserMappings.java:242)
   at 
 org.apache.hadoop.security.TestRefreshUserMappings.testRefreshSuperUserGroupsConfiguration(TestRefreshUserMappings.java:203)





[jira] [Commented] (HADOOP-10758) KMS: add ACLs on per key basis.

2014-09-10 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14129139#comment-14129139
 ] 

Alejandro Abdelnur commented on HADOOP-10758:
-

+1

 KMS: add ACLs on per key basis.
 ---

 Key: HADOOP-10758
 URL: https://issues.apache.org/jira/browse/HADOOP-10758
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 3.0.0
Reporter: Alejandro Abdelnur
Assignee: Arun Suresh
 Attachments: HADOOP-10758.1.patch, HADOOP-10758.2.patch, 
 HADOOP-10758.3.patch, HADOOP-10758.4.patch, HADOOP-10758.5.patch, 
 HADOOP-10758.6.patch, HADOOP-10758.7.patch, HADOOP-10758.8.patch, 
 HADOOP-10758.9.patch


 The KMS server should enforce ACLs on per key basis.





[jira] [Comment Edited] (HADOOP-10758) KMS: add ACLs on per key basis.

2014-09-10 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14129139#comment-14129139
 ] 

Alejandro Abdelnur edited comment on HADOOP-10758 at 9/10/14 9:26 PM:
--

+1. Made a minor correction in the docs: "NOTE: The default ACL does not 
support ALL operation qualifier."


was (Author: tucu00):
+1

 KMS: add ACLs on per key basis.
 ---

 Key: HADOOP-10758
 URL: https://issues.apache.org/jira/browse/HADOOP-10758
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 3.0.0
Reporter: Alejandro Abdelnur
Assignee: Arun Suresh
 Attachments: HADOOP-10758.1.patch, HADOOP-10758.2.patch, 
 HADOOP-10758.3.patch, HADOOP-10758.4.patch, HADOOP-10758.5.patch, 
 HADOOP-10758.6.patch, HADOOP-10758.7.patch, HADOOP-10758.8.patch, 
 HADOOP-10758.9.patch


 The KMS server should enforce ACLs on per key basis.





[jira] [Updated] (HADOOP-10758) KMS: add ACLs on per key basis.

2014-09-10 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-10758:

   Resolution: Fixed
Fix Version/s: 2.6.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks Arun. Committed to trunk and branch-2.

 KMS: add ACLs on per key basis.
 ---

 Key: HADOOP-10758
 URL: https://issues.apache.org/jira/browse/HADOOP-10758
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 3.0.0
Reporter: Alejandro Abdelnur
Assignee: Arun Suresh
 Fix For: 2.6.0

 Attachments: HADOOP-10758.1.patch, HADOOP-10758.2.patch, 
 HADOOP-10758.3.patch, HADOOP-10758.4.patch, HADOOP-10758.5.patch, 
 HADOOP-10758.6.patch, HADOOP-10758.7.patch, HADOOP-10758.8.patch, 
 HADOOP-10758.9.patch


 The KMS server should enforce ACLs on per key basis.





[jira] [Commented] (HADOOP-11074) Move s3-related FS connector code to hadoop-aws

2014-09-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14129249#comment-14129249
 ] 

Hadoop QA commented on HADOOP-11074:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12667765/HADOOP-11074.patch.3
  against trunk revision b02a4b4.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 25 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 
100 warning messages.
See 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4692//artifact/trunk/patchprocess/diffJavadocWarnings.txt
 for details.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common hadoop-tools/hadoop-aws 
hadoop-tools/hadoop-tools-dist.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4692//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4692//console

This message is automatically generated.

 Move s3-related FS connector code to hadoop-aws
 ---

 Key: HADOOP-11074
 URL: https://issues.apache.org/jira/browse/HADOOP-11074
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.0.0
Reporter: David S. Wang
Assignee: David S. Wang
 Fix For: 3.0.0

 Attachments: HADOOP-11074.patch, HADOOP-11074.patch.2, 
 HADOOP-11074.patch.3


 Now that hadoop-aws has been created, we should actually move the relevant 
 code into that module, similar to what was done with hadoop-openstack, etc.





[jira] [Created] (HADOOP-11083) After refactoring of HTTP proxyuser to common, doAs param is case sensitive

2014-09-10 Thread Alejandro Abdelnur (JIRA)
Alejandro Abdelnur created HADOOP-11083:
---

 Summary: After refactoring of HTTP proxyuser to common, doAs param 
is case sensitive
 Key: HADOOP-11083
 URL: https://issues.apache.org/jira/browse/HADOOP-11083
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur


In HADOOP-10835 I overlooked that the {{doAs}} parameter was being handled as case-insensitive.
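The fix amounts to matching the parameter name case-insensitively when scanning the query parameters. A stand-alone sketch of that lookup (the helper name is hypothetical, not the actual Hadoop code):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class DoAsLookup {
    // Find a query parameter ignoring case, so "doAs", "doas" and "DOAS"
    // are all treated as the same parameter.
    static String getParamIgnoreCase(Map<String, String> params, String name) {
        for (Map.Entry<String, String> e : params.entrySet()) {
            if (e.getKey().equalsIgnoreCase(name)) {
                return e.getValue();
            }
        }
        return null;
    }

    public static void main(String[] args) {
        Map<String, String> query = new LinkedHashMap<>();
        query.put("DOAS", "alice");
        System.out.println(getParamIgnoreCase(query, "doAs")); // alice
    }
}
```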





[jira] [Assigned] (HADOOP-11083) After refactoring of HTTP proxyuser to common, doAs param is case sensitive

2014-09-10 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur reassigned HADOOP-11083:
---

Assignee: Alejandro Abdelnur

 After refactoring of HTTP proxyuser to common, doAs param is case sensitive
 ---

 Key: HADOOP-11083
 URL: https://issues.apache.org/jira/browse/HADOOP-11083
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur

 In HADOOP-10835 I overlooked that the {{doAs}} parameter was being handled as case-insensitive.





[jira] [Updated] (HADOOP-11083) After refactoring of HTTP proxyuser to common, doAs param is case sensitive

2014-09-10 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-11083:

Status: Patch Available  (was: Open)

 After refactoring of HTTP proxyuser to common, doAs param is case sensitive
 ---

 Key: HADOOP-11083
 URL: https://issues.apache.org/jira/browse/HADOOP-11083
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-11083.patch


 In HADOOP-10835 I overlooked that the {{doAs}} parameter was being handled as case-insensitive.





[jira] [Updated] (HADOOP-11083) After refactoring of HTTP proxyuser to common, doAs param is case sensitive

2014-09-10 Thread Alejandro Abdelnur (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11083?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alejandro Abdelnur updated HADOOP-11083:

Attachment: HADOOP-11083.patch

Making the doAs parameter lookup case-insensitive.

 After refactoring of HTTP proxyuser to common, doAs param is case sensitive
 ---

 Key: HADOOP-11083
 URL: https://issues.apache.org/jira/browse/HADOOP-11083
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-11083.patch


 In HADOOP-10835 I overlooked that the {{doAs}} parameter was being handled as case-insensitive.





[jira] [Commented] (HADOOP-11074) Move s3-related FS connector code to hadoop-aws

2014-09-10 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14129328#comment-14129328
 ] 

Colin Patrick McCabe commented on HADOOP-11074:
---

Let's address the javadoc warnings in HADOOP-11082.  This is not new code, so 
it seems likely to be a build XML / configuration issue.  Since 
{{test-patch.sh}} compares the old and new javadoc count, the fact that this 
adds a few more warnings won't make anyone else's build red.

 Move s3-related FS connector code to hadoop-aws
 ---

 Key: HADOOP-11074
 URL: https://issues.apache.org/jira/browse/HADOOP-11074
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.0.0
Reporter: David S. Wang
Assignee: David S. Wang
 Fix For: 3.0.0

 Attachments: HADOOP-11074.patch, HADOOP-11074.patch.2, 
 HADOOP-11074.patch.3


 Now that hadoop-aws has been created, we should actually move the relevant 
 code into that module, similar to what was done with hadoop-openstack, etc.





[jira] [Created] (HADOOP-11084) jenkins patchprocess links are broken

2014-09-10 Thread Colin Patrick McCabe (JIRA)
Colin Patrick McCabe created HADOOP-11084:
-

 Summary: jenkins patchprocess links are broken
 Key: HADOOP-11084
 URL: https://issues.apache.org/jira/browse/HADOOP-11084
 Project: Hadoop Common
  Issue Type: New Feature
  Components: scripts
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe


jenkins patchprocess links of the form 
{{https://builds.apache.org/job/PreCommit-HADOOP-Build/build_id//artifact/trunk/patchprocess/diffJavadocWarnings.txt}}
 and so forth are dead links.  We should fix them to reflect the new source 
layout after git.





[jira] [Updated] (HADOOP-11084) jenkins patchprocess links are broken

2014-09-10 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-11084:
--
Attachment: HADOOP-11084.001.patch

 jenkins patchprocess links are broken
 -

 Key: HADOOP-11084
 URL: https://issues.apache.org/jira/browse/HADOOP-11084
 Project: Hadoop Common
  Issue Type: New Feature
  Components: scripts
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-11084.001.patch


 jenkins patchprocess links of the form 
 {{https://builds.apache.org/job/PreCommit-HADOOP-Build/build_id//artifact/trunk/patchprocess/diffJavadocWarnings.txt}}
  and so forth are dead links.  We should fix them to reflect the new source 
 layout after git.





[jira] [Updated] (HADOOP-11084) jenkins patchprocess links are broken

2014-09-10 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11084?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-11084:
--
Status: Patch Available  (was: Open)

 jenkins patchprocess links are broken
 -

 Key: HADOOP-11084
 URL: https://issues.apache.org/jira/browse/HADOOP-11084
 Project: Hadoop Common
  Issue Type: New Feature
  Components: scripts
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-11084.001.patch


 jenkins patchprocess links of the form 
 {{https://builds.apache.org/job/PreCommit-HADOOP-Build/build_id//artifact/trunk/patchprocess/diffJavadocWarnings.txt}}
  and so forth are dead links.  We should fix them to reflect the new source 
 layout after git.





[jira] [Updated] (HADOOP-10635) Add a method to CryptoCodec to generate SRNs for IV

2014-09-10 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10635?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10635:
--
Fix Version/s: (was: 3.0.0)
   2.6.0

 Add a method to CryptoCodec to generate SRNs for IV
 ---

 Key: HADOOP-10635
 URL: https://issues.apache.org/jira/browse/HADOOP-10635
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Alejandro Abdelnur
Assignee: Yi Liu
 Fix For: 2.6.0

 Attachments: HADOOP-10635.1.patch, HADOOP-10635.patch


 SRN generators are provided by crypto libraries. Since the CryptoCodec gives 
 access to a crypto library, it makes sense to expose the SRN generator on the 
 CryptoCodec API.
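In other words, the codec would expose the library's generator next to its cipher operations, so IVs are drawn from the same provider used for encryption. A minimal sketch using the JDK's SecureRandom as a stand-in for a crypto library (the interface shape is illustrative, not the actual CryptoCodec API):

```java
import java.security.SecureRandom;

public class CodecRandomSketch {
    // Hypothetical codec interface: alongside cipher operations it exposes
    // the underlying library's secure random generator.
    interface Codec {
        SecureRandom secureRandom();

        // IV generation reuses the codec's own generator.
        default byte[] generateIV(int len) {
            byte[] iv = new byte[len];
            secureRandom().nextBytes(iv);
            return iv;
        }
    }

    public static void main(String[] args) {
        // The JDK provider plays the role of the crypto library here.
        Codec codec = () -> new SecureRandom();
        byte[] iv = codec.generateIV(16);
        System.out.println(iv.length); // 16
    }
}
```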





[jira] [Updated] (HADOOP-10632) Minor improvements to Crypto input and output streams

2014-09-10 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10632?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10632:
--
Fix Version/s: (was: 3.0.0)
   2.6.0

 Minor improvements to Crypto input and output streams
 -

 Key: HADOOP-10632
 URL: https://issues.apache.org/jira/browse/HADOOP-10632
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: fs-encryption (HADOOP-10150 and HDFS-6134)
Reporter: Alejandro Abdelnur
Assignee: Yi Liu
 Fix For: 2.6.0

 Attachments: HADOOP-10632.1.patch, HADOOP-10632.2.patch, 
 HADOOP-10632.3.patch, HADOOP-10632.4.patch, HADOOP-10632.patch


 Minor follow-up feedback on the crypto streams





[jira] [Updated] (HADOOP-11074) Move s3-related FS connector code to hadoop-aws

2014-09-10 Thread Colin Patrick McCabe (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11074?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Colin Patrick McCabe updated HADOOP-11074:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

 Move s3-related FS connector code to hadoop-aws
 ---

 Key: HADOOP-11074
 URL: https://issues.apache.org/jira/browse/HADOOP-11074
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.0.0
Reporter: David S. Wang
Assignee: David S. Wang
 Fix For: 3.0.0

 Attachments: HADOOP-11074.patch, HADOOP-11074.patch.2, 
 HADOOP-11074.patch.3


 Now that hadoop-aws has been created, we should actually move the relevant 
 code into that module, similar to what was done with hadoop-openstack, etc.





[jira] [Commented] (HADOOP-11009) Add Timestamp Preservation to DistCp

2014-09-10 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14129362#comment-14129362
 ] 

Allen Wittenauer commented on HADOOP-11009:
---

Pssst: HADOOP-5620 .

 Add Timestamp Preservation to DistCp
 

 Key: HADOOP-11009
 URL: https://issues.apache.org/jira/browse/HADOOP-11009
 Project: Hadoop Common
  Issue Type: Improvement
  Components: tools/distcp
Affects Versions: 2.4.0
Reporter: Gary Steelman
 Attachments: HADOOP-11009.1.patch, HADOOP-11009.2.patch, 
 HADOOP-11009.3.patch


 Currently, access and modification times are not preserved on files copied 
 using DistCp. This patch adds an option to DistCp for timestamp preservation. 
 The patch is ready, but I understand there is a Contributor form I need to 
 sign before I can upload it. Can someone point me in the right direction for 
 this form? Thanks!
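Timestamp preservation boils down to reading the source file's times and re-applying them to the destination after the copy. The real patch would use Hadoop's FileStatus and FileSystem.setTimes; the idea can be sketched with plain java.nio:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;
import java.nio.file.attribute.BasicFileAttributeView;
import java.nio.file.attribute.BasicFileAttributes;

public class PreserveTimes {
    // Copy src to dst, then re-apply src's modification and access times.
    static void copyPreservingTimes(Path src, Path dst) throws IOException {
        // Read the source times before copying.
        BasicFileAttributes attrs = Files.readAttributes(src, BasicFileAttributes.class);
        Files.copy(src, dst, StandardCopyOption.REPLACE_EXISTING);
        // Re-apply mtime and atime on the destination (leave ctime alone).
        Files.getFileAttributeView(dst, BasicFileAttributeView.class)
             .setTimes(attrs.lastModifiedTime(), attrs.lastAccessTime(), null);
    }

    public static void main(String[] args) throws IOException {
        Path src = Files.createTempFile("distcp-src", ".txt");
        Path dst = Files.createTempFile("distcp-dst", ".txt");
        copyPreservingTimes(src, dst);
        System.out.println(Files.getLastModifiedTime(src).toMillis()
            == Files.getLastModifiedTime(dst).toMillis());
    }
}
```

On filesystems that store millisecond-or-finer timestamps the comparison above holds after the copy.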





[jira] [Commented] (HADOOP-11082) Resolve findbugs warnings in hadoop-aws module

2014-09-10 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11082?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14129363#comment-14129363
 ] 

Colin Patrick McCabe commented on HADOOP-11082:
---

We should also resolve some errors showing up in the javadoc build that look 
like this:

{code}
+[WARNING] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/hadoop-common-project/hadoop-common/target/test-classes/org/apache/hadoop/fs/TestLocalDirAllocator.class:
 warning: Cannot find annotation method 'value()' in type 
'org.junit.runner.RunWith': class file for org.junit.runner.RunWith not found
+[WARNING] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/hadoop-common-project/hadoop-common/target/test-classes/org/apache/hadoop/fs/TestLocalDirAllocator.class:
 warning: Cannot find annotation method 'timeout()' in type 'org.junit.Test'
+[WARNING] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/hadoop-common-project/hadoop-common/target/test-classes/org/apache/hadoop/fs/TestLocalDirAllocator.class:
 warning: Cannot find annotation method 'timeout()' in type 'org.junit.Test'
+[WARNING] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/hadoop-common-project/hadoop-common/target/test-classes/org/apache/hadoop/fs/TestLocalDirAllocator.class:
 warning: Cannot find annotation method 'timeout()' in type 'org.junit.Test'
+[WARNING] 
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/hadoop-common-project/hadoop-common/target/test-classes/org/apache/hadoop/fs/TestLocalDirAllocator.class:
 warning: Cannot find annotation method 'timeout()' in type 'org.junit.Test'
{code}

I think upgrading all the tests from junit 3 to junit 4 should resolve this.

 Resolve findbugs warnings in hadoop-aws module
 --

 Key: HADOOP-11082
 URL: https://issues.apache.org/jira/browse/HADOOP-11082
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 3.0.0
Reporter: David S. Wang
Priority: Minor

 Currently hadoop-aws module has the findbugs exclude file from hadoop-common. 
  It would be nice to address the findbugs bugs eventually.





[jira] [Commented] (HADOOP-11074) Move s3-related FS connector code to hadoop-aws

2014-09-10 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11074?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14129365#comment-14129365
 ] 

Colin Patrick McCabe commented on HADOOP-11074:
---

Can you post a patch backporting this to branch-2?  There are a few conflicts 
and it would be worthwhile to review the backport.

 Move s3-related FS connector code to hadoop-aws
 ---

 Key: HADOOP-11074
 URL: https://issues.apache.org/jira/browse/HADOOP-11074
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 3.0.0
Reporter: David S. Wang
Assignee: David S. Wang
 Fix For: 3.0.0

 Attachments: HADOOP-11074.patch, HADOOP-11074.patch.2, 
 HADOOP-11074.patch.3


 Now that hadoop-aws has been created, we should actually move the relevant 
 code into that module, similar to what was done with hadoop-openstack, etc.





[jira] [Commented] (HADOOP-11009) Add Timestamp Preservation to DistCp

2014-09-10 Thread Gary Steelman (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14129376#comment-14129376
 ] 

Gary Steelman commented on HADOOP-11009:


Thanks [~aw]. I had seen that previously but after looking through the code in 
trunk and branch-2, I didn't see where that feature was available. I think it 
was somehow lost? Maybe I wasn't looking in the correct place.

 Add Timestamp Preservation to DistCp
 

 Key: HADOOP-11009
 URL: https://issues.apache.org/jira/browse/HADOOP-11009
 Project: Hadoop Common
  Issue Type: Improvement
  Components: tools/distcp
Affects Versions: 2.4.0
Reporter: Gary Steelman
 Attachments: HADOOP-11009.1.patch, HADOOP-11009.2.patch, 
 HADOOP-11009.3.patch


 Currently, access and modification times are not preserved on files copied 
 using DistCp. This patch adds an option to DistCp for timestamp preservation. 
 The patch is ready, but I understand there is a Contributor form I need to 
 sign before I can upload it. Can someone point me in the right direction for 
 this form? Thanks!





[jira] [Updated] (HADOOP-10868) Create a ZooKeeper-backed secret provider

2014-09-10 Thread Robert Kanter (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10868?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Robert Kanter updated HADOOP-10868:
---
Attachment: HADOOP-10868.patch
HADOOP-10868_branch-2.patch

The new patch does all of the changes except for {{ReflectionUtils}} because 
it's in hadoop-common.

 Create a ZooKeeper-backed secret provider
 -

 Key: HADOOP-10868
 URL: https://issues.apache.org/jira/browse/HADOOP-10868
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: 2.4.1
Reporter: Robert Kanter
Assignee: Robert Kanter
 Attachments: HADOOP-10868.patch, HADOOP-10868.patch, 
 HADOOP-10868.patch, HADOOP-10868.patch, HADOOP-10868_branch-2.patch, 
 HADOOP-10868_branch-2.patch, HADOOP-10868_branch-2.patch, 
 HADOOP-10868_branch-2.patch


 Create a secret provider (see HADOOP-10791) that is backed by ZooKeeper and 
 can synchronize amongst different servers.





[jira] [Commented] (HADOOP-11084) jenkins patchprocess links are broken

2014-09-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14129398#comment-14129398
 ] 

Hadoop QA commented on HADOOP-11084:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12667894/HADOOP-11084.001.patch
  against trunk revision 5ec7fcd.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:red}-1 javadoc{color}.  The javadoc tool appears to have generated 2 
warning messages.
See 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4694//artifact/trunk/patchprocess/diffJavadocWarnings.txt
 for details.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-nfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4694//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4694//console

This message is automatically generated.

 jenkins patchprocess links are broken
 -

 Key: HADOOP-11084
 URL: https://issues.apache.org/jira/browse/HADOOP-11084
 Project: Hadoop Common
  Issue Type: New Feature
  Components: scripts
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-11084.001.patch


 jenkins patchprocess links of the form 
 {{https://builds.apache.org/job/PreCommit-HADOOP-Build/build_id//artifact/trunk/patchprocess/diffJavadocWarnings.txt}}
  and so forth are dead links.  We should fix them to reflect the new source 
 layout after git.





[jira] [Commented] (HADOOP-11009) Add Timestamp Preservation to DistCp

2014-09-10 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11009?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14129397#comment-14129397
 ] 

Allen Wittenauer commented on HADOOP-11009:
---

The bigger thing is that this should have feature and flag parity.

 Add Timestamp Preservation to DistCp
 

 Key: HADOOP-11009
 URL: https://issues.apache.org/jira/browse/HADOOP-11009
 Project: Hadoop Common
  Issue Type: Improvement
  Components: tools/distcp
Affects Versions: 2.4.0
Reporter: Gary Steelman
 Attachments: HADOOP-11009.1.patch, HADOOP-11009.2.patch, 
 HADOOP-11009.3.patch


 Currently, access and modification times are not preserved on files copied 
 using DistCp. This patch adds an option to DistCp for timestamp preservation. 
 The patch is ready, but I understand there is a Contributor form I need to 
 sign before I can upload it. Can someone point me in the right direction for 
 this form? Thanks!





[jira] [Updated] (HADOOP-8523) test-patch.sh doesn't validate patches before building

2014-09-10 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-8523:
-
Assignee: Jack Dintruff

 test-patch.sh doesn't validate patches before building
 --

 Key: HADOOP-8523
 URL: https://issues.apache.org/jira/browse/HADOOP-8523
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 0.23.3, 2.0.0-alpha
Reporter: Jack Dintruff
Assignee: Jack Dintruff
Priority: Minor
  Labels: newbie
 Fix For: 3.0.0

 Attachments: HADOOP-8523.patch, HADOOP-8523.patch, Hadoop-8523.patch, 
 Hadoop-8523.patch


 When running test-patch.sh with an invalid patch (not formatted properly) or 
 one that doesn't compile, the script spends a lot of time building Hadoop 
 before checking to see if the patch is invalid.  It would help devs if it 
 checked first just in case we run test-patch.sh with a bad patch file. 
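A cheap pre-flight check along these lines only needs to scan the patch for unified-diff headers before any build work starts. A stand-alone sketch of such a heuristic (illustrative only, not the actual test-patch.sh logic):

```java
import java.util.List;

public class PatchSanity {
    // Heuristic: a usable patch should contain at least one unified-diff
    // file header before we spend minutes building the tree.
    static boolean looksLikePatch(List<String> lines) {
        for (String line : lines) {
            if (line.startsWith("diff ") || line.startsWith("--- ")
                || line.startsWith("Index: ")) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(looksLikePatch(List.of("hello", "world")));       // false
        System.out.println(looksLikePatch(List.of("--- a/Foo.java", "+x"))); // true
    }
}
```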





[jira] [Updated] (HADOOP-8719) Workaround for kerberos-related log errors upon running any hadoop command on OSX

2014-09-10 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8719?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-8719:
-
Assignee: Jianbin Wei

 Workaround for kerberos-related log errors upon running any hadoop command on 
 OSX
 -

 Key: HADOOP-8719
 URL: https://issues.apache.org/jira/browse/HADOOP-8719
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.0.0-alpha
 Environment: Mac OS X 10.7, Java 1.6.0_26
Reporter: Jianbin Wei
Assignee: Jianbin Wei
Priority: Trivial
 Fix For: 3.0.0

 Attachments: HADOOP-8719.patch, HADOOP-8719.patch, HADOOP-8719.patch, 
 HADOOP-8719.patch


 When starting Hadoop on OS X 10.7 (Lion) using start-all.sh, Hadoop logs 
 the following errors:
 2011-07-28 11:45:31.469 java[77427:1a03] Unable to load realm info from 
 SCDynamicStore
 Hadoop does seem to function properly despite this.
 The workaround takes only 10 minutes.
 There are numerous discussions about this: googling Unable to load realm 
 mapping info from SCDynamicStore returns 1770 hits, each with many 
 discussions. Assuming each discussion takes only 5 minutes, a 10-minute fix 
 can save ~150 hours. This does not count the time spent searching for this 
 issue and its solution/workaround, which can easily waste thousands of hours!





[jira] [Updated] (HADOOP-8017) Configure hadoop-main pom to get rid of M2E plugin execution not covered

2014-09-10 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8017?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-8017:
-
Assignee: Eric Charles

 Configure hadoop-main pom to get rid of M2E plugin execution not covered
 

 Key: HADOOP-8017
 URL: https://issues.apache.org/jira/browse/HADOOP-8017
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 0.24.0
Reporter: Eric Charles
Assignee: Eric Charles
 Fix For: 3.0.0

 Attachments: HADOOP-8017.patch


 The latest M2Eclipse plugin (the Maven plugin for Eclipse) shows nasty errors 
 when importing the Hadoop Maven modules (read more at 
 http://wiki.eclipse.org/M2E_plugin_execution_not_covered).
 The solution is to configure the build section of the pom with an 
 org.eclipse.m2e:lifecycle-mapping plugin.
 This configuration has no influence on the Maven build itself.





[jira] [Updated] (HADOOP-7256) Resource leak during failure scenario of closing of resources.

2014-09-10 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-7256?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-7256:
-
Assignee: ramkrishna.s.vasudevan

 Resource leak during failure scenario of closing of resources. 
 ---

 Key: HADOOP-7256
 URL: https://issues.apache.org/jira/browse/HADOOP-7256
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 0.20.2
Reporter: ramkrishna.s.vasudevan
Assignee: ramkrishna.s.vasudevan
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-7256-patch-1.patch, HADOOP-7256-patch-2.patch, 
 HADOOP-7256.patch

   Original Estimate: 8h
  Remaining Estimate: 8h

 Problem Statement:
 ===
 There is a chance of a resource leak when streams do not get closed.
 Take the case where, after copying data, we try to close the input and output 
 streams followed by the socket.
 Suppose an exception occurs while closing the input stream (due to a runtime 
 exception); then the subsequent operations of closing the output stream and 
 the socket may not happen, leaving a resource leak.
 Scenario:
 ===
 During long runs of MapReduce jobs, the copyFromLocalFile() API is called.
 Here we found some exceptions happening; as a result, the lsof count kept 
 rising, indicating a resource leak.
 Solution:
 ===
 When closing any resource, catch RuntimeException as well rather than 
 catching IOException alone.
 Additionally, there are places where we try to close a resource in the catch 
 block; if that close fails, we just throw and leave the current flow.
 To avoid this, we can carry out the close operation in the finally block.
 Probable reasons for getting RuntimeExceptions:
 =
 We may get a runtime exception from customised Hadoop streams such as 
 FSDataOutputStream.close(), so it is better to handle RuntimeExceptions too.
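The described solution can be sketched as a close helper that swallows RuntimeException as well as IOException, so one failing close() cannot prevent the remaining streams or the socket from being closed (a sketch of the technique, not the actual patch):

```java
import java.io.Closeable;
import java.io.IOException;
import java.util.concurrent.atomic.AtomicBoolean;

public class SafeClose {
    // Close each resource in turn, catching RuntimeException as well as
    // IOException, so a failure on one resource does not leak the rest.
    static void closeQuietly(Closeable... resources) {
        for (Closeable c : resources) {
            if (c == null) continue;
            try {
                c.close();
            } catch (IOException | RuntimeException e) {
                // Log and keep going so later resources still get closed.
                System.err.println("close failed, continuing: " + e);
            }
        }
    }

    public static void main(String[] args) {
        AtomicBoolean secondClosed = new AtomicBoolean(false);
        Closeable bad = () -> { throw new RuntimeException("boom"); };
        Closeable good = () -> secondClosed.set(true);
        closeQuietly(bad, good);                // naive code would leak 'good'
        System.out.println(secondClosed.get()); // true
    }
}
```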
  





[jira] [Updated] (HADOOP-6871) When the value of a configuration key is set to its unresolved form, it causes the IllegalStateException in Configuration.get() stating that substitution depth is too la

2014-09-10 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-6871?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-6871:
-
Assignee: Arvind Prabhakar

 When the value of a configuration key is set to its unresolved form, it 
 causes the IllegalStateException in Configuration.get() stating that 
 substitution depth is too large.
 -

 Key: HADOOP-6871
 URL: https://issues.apache.org/jira/browse/HADOOP-6871
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 3.0.0
Reporter: Arvind Prabhakar
Assignee: Arvind Prabhakar
 Fix For: 3.0.0

 Attachments: HADOOP-6871-1.patch, HADOOP-6871-2.patch, 
 HADOOP-6871-3.patch, HADOOP-6871.patch


 When a configuration value is set to its unresolved expression string, it 
 leads to recursive substitution attempts in the 
 {{Configuration.substituteVars(String)}} method until the max substitution 
 check kicks in and raises an IllegalStateException indicating that the 
 substitution depth is too large. For example, the configuration key 
 {{foobar}} with a value set to {{$\{foobar\}}} will cause this behavior. 
 While this is not a usual use case, it can happen in build environments where 
 a property value is not specified and yet is passed into the test mechanism, 
 leading to failures due to this limitation.
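The failure mode can be reproduced with a simplified model of the substitution loop: expand ${name} references until a fixed point, giving up after a bounded number of rounds (the constants and names here are illustrative, not Hadoop's actual implementation):

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class VarSubstitution {
    static final int MAX_SUBST = 20;
    static final Pattern VAR = Pattern.compile("\\$\\{([^}]+)\\}");

    // Repeatedly expand ${name} references; a self-referential value like
    // foobar=${foobar} never converges and trips the depth limit.
    static String substitute(String value, Map<String, String> props) {
        for (int i = 0; i < MAX_SUBST; i++) {
            Matcher m = VAR.matcher(value);
            if (!m.find()) return value;          // fully resolved
            String repl = props.get(m.group(1));
            if (repl == null) return value;       // leave unknown vars untouched
            value = value.substring(0, m.start()) + repl + value.substring(m.end());
        }
        throw new IllegalStateException("Variable substitution depth too large: "
            + MAX_SUBST + " " + value);
    }

    public static void main(String[] args) {
        System.out.println(substitute("${a}/data", Map.of("a", "/tmp"))); // /tmp/data
        try {
            substitute("${foobar}", Map.of("foobar", "${foobar}"));
        } catch (IllegalStateException e) {
            System.out.println("caught: " + e.getMessage());
        }
    }
}
```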





[jira] [Commented] (HADOOP-11083) After refactoring of HTTP proxyuser to common, doAs param is case sensitive

2014-09-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14129439#comment-14129439
 ] 

Hadoop QA commented on HADOOP-11083:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12667878/HADOOP-11083.patch
  against trunk revision 7f80e14.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ipc.TestFairCallQueue
  org.apache.hadoop.crypto.key.TestValueQueue
  org.apache.hadoop.ipc.TestDecayRpcScheduler
  org.apache.hadoop.crypto.random.TestOsSecureRandom

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4693//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4693//console

This message is automatically generated.

 After refactoring of HTTP proxyuser to common, doAs param is case sensitive
 ---

 Key: HADOOP-11083
 URL: https://issues.apache.org/jira/browse/HADOOP-11083
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-11083.patch


 In HADOOP-10835 I overlooked that the {{doAs}} parameter was being handled as case-insensitive.





[jira] [Commented] (HADOOP-10868) Create a ZooKeeper-backed secret provider

2014-09-10 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10868?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14129442#comment-14129442
 ] 

Hadoop QA commented on HADOOP-10868:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12667901/HADOOP-10868.patch
  against trunk revision 5ec7fcd.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 8 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-auth hadoop-hdfs-project/hadoop-hdfs-httpfs.

{color:green}+1 contrib tests{color}.  The patch passed contrib unit tests.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4695//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4695//console

This message is automatically generated.

 Create a ZooKeeper-backed secret provider
 -

 Key: HADOOP-10868
 URL: https://issues.apache.org/jira/browse/HADOOP-10868
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Affects Versions: 2.4.1
Reporter: Robert Kanter
Assignee: Robert Kanter
 Attachments: HADOOP-10868.patch, HADOOP-10868.patch, 
 HADOOP-10868.patch, HADOOP-10868.patch, HADOOP-10868_branch-2.patch, 
 HADOOP-10868_branch-2.patch, HADOOP-10868_branch-2.patch, 
 HADOOP-10868_branch-2.patch


 Create a secret provider (see HADOOP-10791) that is backed by ZooKeeper and 
 can synchronize amongst different servers.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11084) jenkins patchprocess links are broken

2014-09-10 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11084?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14129444#comment-14129444
 ] 

Karthik Kambatla commented on HADOOP-11084:
---

We should fix the other builds too - HDFS, YARN, and MapReduce.

 jenkins patchprocess links are broken
 -

 Key: HADOOP-11084
 URL: https://issues.apache.org/jira/browse/HADOOP-11084
 Project: Hadoop Common
  Issue Type: New Feature
  Components: scripts
Reporter: Colin Patrick McCabe
Assignee: Colin Patrick McCabe
 Attachments: HADOOP-11084.001.patch


 Jenkins patchprocess links of the form 
 {{https://builds.apache.org/job/PreCommit-HADOOP-Build/build_id//artifact/trunk/patchprocess/diffJavadocWarnings.txt}}
  and so forth are dead links.  We should fix them to reflect the new source 
 layout after the move to git.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-9989) Bug introduced in HADOOP-9374, which parses the -tokenCacheFile as binary file but set it to the configuration as JSON file.

2014-09-10 Thread zhihai xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9989?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14129474#comment-14129474
 ] 

zhihai xu commented on HADOOP-9989:
---

Thanks a lot, [~tucu00].
Many thanks to [~daryn] for the review and comments.

 Bug introduced in HADOOP-9374, which parses the -tokenCacheFile as binary 
 file but set it to the configuration as JSON file.
 

 Key: HADOOP-9989
 URL: https://issues.apache.org/jira/browse/HADOOP-9989
 Project: Hadoop Common
  Issue Type: Bug
  Components: security, util
Affects Versions: 2.1.0-beta
 Environment: Red Hat Enterprise 6 with Sun Java 1.7 and IBM Java 1.6
Reporter: Jinghui Wang
Assignee: zhihai xu
 Fix For: 2.6.0

 Attachments: HADOOP-9989.001.patch, HADOOP-9989.patch

   Original Estimate: 0h
  Remaining Estimate: 0h

 The patch for HADOOP-9374 introduced a bug: the value of the -tokenCacheFile 
 parameter is parsed as a binary file but set on the 
 mapreduce.job.credentials.json property in GenericOptionsParser, so 
 JobSubmitter cannot parse the value when it reads it.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-10759) Remove hardcoded JAVA_HEAP_MAX in hadoop-config.sh

2014-09-10 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10759:
--
Fix Version/s: (was: 2.6.0)

 Remove hardcoded JAVA_HEAP_MAX in hadoop-config.sh
 --

 Key: HADOOP-10759
 URL: https://issues.apache.org/jira/browse/HADOOP-10759
 Project: Hadoop Common
  Issue Type: Bug
  Components: bin
Affects Versions: 2.4.0
 Environment: Linux64
Reporter: sam liu
Priority: Minor
 Attachments: HADOOP-10759.patch, HADOOP-10759.patch


 In hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh, the 
 Java heap setting is hard-coded: 'JAVA_HEAP_MAX=-Xmx1000m'. It should be 
 removed.
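The intent of the fix, sketched below in Java for illustration (the real change is in the shell script, and the variable names here are assumptions): treat -Xmx1000m only as a fallback, letting an explicitly configured heap size win.

```java
public class HeapDefault {
    // Build the -Xmx option, using the configured size in MB when
    // present and falling back to 1000m only when it is unset.
    public static String javaHeapMax(String configuredSizeMb) {
        String mb = (configuredSizeMb == null || configuredSizeMb.isEmpty())
                ? "1000" : configuredSizeMb;
        return "-Xmx" + mb + "m";
    }

    public static void main(String[] args) {
        System.out.println(javaHeapMax(null));   // prints "-Xmx1000m"
        System.out.println(javaHeapMax("2048")); // prints "-Xmx2048m"
    }
}
```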



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10759) Remove hardcoded JAVA_HEAP_MAX in hadoop-config.sh

2014-09-10 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14129494#comment-14129494
 ] 

Allen Wittenauer commented on HADOOP-10759:
---

Removed the fix version since this was reverted.

Additionally, the trunk (post-HADOOP-9902) version of this is part of 
HADOOP-10950. It likely needs to be rebased, though.

 Remove hardcoded JAVA_HEAP_MAX in hadoop-config.sh
 --

 Key: HADOOP-10759
 URL: https://issues.apache.org/jira/browse/HADOOP-10759
 Project: Hadoop Common
  Issue Type: Bug
  Components: bin
Affects Versions: 2.4.0
 Environment: Linux64
Reporter: sam liu
Priority: Minor
 Attachments: HADOOP-10759.patch, HADOOP-10759.patch


 In hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh, the 
 Java heap setting is hard-coded: 'JAVA_HEAP_MAX=-Xmx1000m'. It should be 
 removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HADOOP-10759) Remove hardcoded JAVA_HEAP_MAX in hadoop-config.sh

2014-09-10 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14129494#comment-14129494
 ] 

Allen Wittenauer edited comment on HADOOP-10759 at 9/11/14 1:43 AM:


Removed the fix version since this was reverted.

Additionally, the trunk (post-HADOOP-9902) version of this is part of 
HADOOP-10950. It likely needs to be rebased, though.


was (Author: aw):
Removed the fix version this.

Additionally, the trunk, post-9902 version of this is part of HADOOP-10950 .  
It likely needs to get rebased though.

 Remove hardcoded JAVA_HEAP_MAX in hadoop-config.sh
 --

 Key: HADOOP-10759
 URL: https://issues.apache.org/jira/browse/HADOOP-10759
 Project: Hadoop Common
  Issue Type: Bug
  Components: bin
Affects Versions: 2.4.0
 Environment: Linux64
Reporter: sam liu
Priority: Minor
 Attachments: HADOOP-10759.patch, HADOOP-10759.patch


 In hadoop-common-project/hadoop-common/src/main/bin/hadoop-config.sh, the 
 Java heap setting is hard-coded: 'JAVA_HEAP_MAX=-Xmx1000m'. It should be 
 removed.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11083) After refactoring of HTTP proxyuser to common, doAs param is case sensitive

2014-09-10 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11083?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14129556#comment-14129556
 ] 

Alejandro Abdelnur commented on HADOOP-11083:
-

Test failures are unrelated.

 After refactoring of HTTP proxyuser to common, doAs param is case sensitive
 ---

 Key: HADOOP-11083
 URL: https://issues.apache.org/jira/browse/HADOOP-11083
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Alejandro Abdelnur
 Attachments: HADOOP-11083.patch


 In HADOOP-10835 I overlooked that the {{doAs}} parameter was being handled 
 as case-insensitive.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11062) CryptoCodec testcases requiring OpenSSL should be run only if -Pnative is used

2014-09-10 Thread Alejandro Abdelnur (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14129560#comment-14129560
 ] 

Alejandro Abdelnur commented on HADOOP-11062:
-

Indentation in the pom changes is wrong; are you using tabs?

 CryptoCodec testcases requiring OpenSSL should be run only if -Pnative is used
 --

 Key: HADOOP-11062
 URL: https://issues.apache.org/jira/browse/HADOOP-11062
 Project: Hadoop Common
  Issue Type: Bug
  Components: security, test
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Arun Suresh
 Attachments: HADOOP-11062.1.patch, HADOOP-11062.1.patch, 
 HADOOP-11062.2.patch, HADOOP-11062.3.patch, HADOOP-11062.4.patch


 There are a few CryptoCodec-related test cases that require Hadoop native 
 code and OpenSSL.
 These tests should be skipped if -Pnative is not used when running the tests.
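A minimal sketch of the skip-unless-native pattern (a hypothetical guard; the property name "test.native" is an assumption, and the real build wires this through Maven profiles and Hadoop's native-code checks):

```java
public class NativeGuard {
    // Returns true only when native/OpenSSL-backed tests should run,
    // e.g. when a -Pnative style build sets -Dtest.native=true.
    public static boolean shouldRunNativeTests() {
        return Boolean.parseBoolean(System.getProperty("test.native", "false"));
    }

    public static void main(String[] args) {
        if (!shouldRunNativeTests()) {
            System.out.println("skipping native crypto tests");
            return;
        }
        System.out.println("running native crypto tests");
    }
}
```

In a JUnit test this check would typically feed an assumption (so the test reports as skipped rather than failed) instead of a plain early return.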



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11062) CryptoCodec testcases requiring OpenSSL should be run only if -Pnative is used

2014-09-10 Thread Arun Suresh (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11062?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arun Suresh updated HADOOP-11062:
-
Attachment: HADOOP-11062.5.patch

Corrected indentation in the pom files.

 CryptoCodec testcases requiring OpenSSL should be run only if -Pnative is used
 --

 Key: HADOOP-11062
 URL: https://issues.apache.org/jira/browse/HADOOP-11062
 Project: Hadoop Common
  Issue Type: Bug
  Components: security, test
Affects Versions: 2.6.0
Reporter: Alejandro Abdelnur
Assignee: Arun Suresh
 Attachments: HADOOP-11062.1.patch, HADOOP-11062.1.patch, 
 HADOOP-11062.2.patch, HADOOP-11062.3.patch, HADOOP-11062.4.patch, 
 HADOOP-11062.5.patch


 There are a few CryptoCodec-related test cases that require Hadoop native 
 code and OpenSSL.
 These tests should be skipped if -Pnative is not used when running the tests.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)