[jira] [Commented] (HADOOP-11750) distcp fails if we copy data from swift to secure HDFS

2015-03-26 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11750?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14381655#comment-14381655
 ] 

Steve Loughran commented on HADOOP-11750:
-

DistCp is for hdfs: to hdfs: copies only; you will have to use {{dfs -cp}} 
instead. Closing as invalid.

 distcp fails if we copy data from swift to secure HDFS
 --

 Key: HADOOP-11750
 URL: https://issues.apache.org/jira/browse/HADOOP-11750
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/swift
Affects Versions: 3.0.0, 2.3.0
Reporter: Chen He
Assignee: Chen He

 ERROR tools.DistCp: Exception encountered
 java.lang.IllegalArgumentException: java.net.UnknownHostException: 
 babynames.main
 at 
 org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:373)
 at 
 org.apache.hadoop.security.SecurityUtil.buildDTServiceName(SecurityUtil.java:258)
 at 
 org.apache.hadoop.fs.FileSystem.getCanonicalServiceName(FileSystem.java:301)
 at 
 org.apache.hadoop.fs.FileSystem.collectDelegationTokens(FileSystem.java:523)
 at org.apache.hadoop.fs.FileSystem.addDelegationTokens(FileSystem.java:507)
 at 
 org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:121)
 at 
 org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
 at 
 org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
 at 
 org.apache.hadoop.tools.SimpleCopyListing.validatePaths(SimpleCopyListing.java:133)
 at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:83)
 at 
 org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:90)
 at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:84)
 at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:353)
 at org.apache.hadoop.tools.DistCp.execute(DistCp.java:160)
 at org.apache.hadoop.tools.DistCp.run(DistCp.java:121)
 at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
 at org.apache.hadoop.tools.DistCp.main(DistCp.java:401)
 Caused by: java.net.UnknownHostException: babynames.main
 ... 17 more



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11747) Why not re-use the security model offered by SELINUX?

2015-03-26 Thread Madhan Sundararajan Devaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14381666#comment-14381666
 ] 

Madhan Sundararajan Devaki commented on HADOOP-11747:
-

Thanks again.

 Why not re-use the security model offered by SELINUX?
 -

 Key: HADOOP-11747
 URL: https://issues.apache.org/jira/browse/HADOOP-11747
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Madhan Sundararajan Devaki
Priority: Critical

 SELinux was introduced to bring robust security management to the Linux OS.
 In all distributions of Hadoop (Cloudera/Hortonworks/...), one of the 
 pre-installation checklist items is to disable SELinux on all the nodes of 
 the cluster.
 Why not re-use the security model offered by SELinux instead of 
 re-inventing it from scratch through Sentry/Knox/etc.?





[jira] [Commented] (HADOOP-11747) Why not re-use the security model offered by SELINUX?

2015-03-26 Thread Madhan Sundararajan Devaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14381559#comment-14381559
 ] 

Madhan Sundararajan Devaki commented on HADOOP-11747:
-

Thanks. :)






[jira] [Commented] (HADOOP-11747) Why not re-use the security model offered by SELINUX?

2015-03-26 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14381561#comment-14381561
 ] 

Chris Douglas commented on HADOOP-11747:


http://hadoop.apache.org/mailing_lists.html






[jira] [Commented] (HADOOP-9565) Add a Blobstore interface to add to blobstore FileSystems

2015-03-26 Thread Thomas Demoor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14381617#comment-14381617
 ] 

Thomas Demoor commented on HADOOP-9565:
---

Any ideas on how we can test the changes in other packages 
(CommandWithDestination, FileOutputCommitter) against an AWS bucket with Maven? 
We're currently moving offices, so I can't access my test cluster; no 
extensive tests for now. Guess I'll have to wait it out.

Moving to FileContext is probably fine, but I'm slightly concerned about 
external projects still using the old API. How many are there? 

FYI, the FileOutputCommitter has seen some patches recently (MAPREDUCE-4815, 
MAPREDUCE-6275).


 Add a Blobstore interface to add to blobstore FileSystems
 -

 Key: HADOOP-9565
 URL: https://issues.apache.org/jira/browse/HADOOP-9565
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs, fs/s3, fs/swift
Affects Versions: 2.6.0
Reporter: Steve Loughran
Assignee: Steve Loughran
 Attachments: HADOOP-9565-001.patch, HADOOP-9565-002.patch, 
 HADOOP-9565-003.patch


 We can make explicit the fact that some {{FileSystem}} implementations are 
 really blobstores, with different atomicity and consistency guarantees, by 
 adding a {{Blobstore}} interface to them. 
 This could also be a place to add a {{copy(Path, Path)}} method, assuming that 
 all blobstores implement a server-side copy operation as a substitute for 
 rename.
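 A hedged sketch of what such a marker interface might look like, in plain
 Java (the names and the toy in-memory store are hypothetical; the actual
 patch may define it differently):
 {code}
// Hypothetical sketch of the proposed Blobstore marker interface.
// A FileSystem implementing it advertises blobstore semantics and may
// offer a server-side copy as a substitute for rename.
public class BlobstoreSketch {
    interface Blobstore {
        /** Server-side copy; returns true on success. */
        boolean copy(String src, String dst);
    }

    // Toy in-memory "store" to show how a client might use the interface.
    static class ToyStore implements Blobstore {
        public final java.util.Map<String, byte[]> objects =
                new java.util.HashMap<>();

        public boolean copy(String src, String dst) {
            byte[] data = objects.get(src);
            if (data == null) {
                return false; // nothing to copy
            }
            objects.put(dst, data.clone()); // server-side: no client round-trip
            return true;
        }
    }

    public static void main(String[] args) {
        ToyStore store = new ToyStore();
        store.objects.put("/a", new byte[]{1});
        System.out.println(store.copy("/a", "/b"));       // true
        System.out.println(store.copy("/missing", "/c")); // false
    }
}
 {code}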





[jira] [Commented] (HADOOP-11747) Why not re-use the security model offered by SELINUX?

2015-03-26 Thread Chris Douglas (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14381619#comment-14381619
 ] 

Chris Douglas commented on HADOOP-11747:


common-dev@ is probably most appropriate






[jira] [Resolved] (HADOOP-11750) distcp fails if we copy data from swift to secure HDFS

2015-03-26 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11750?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran resolved HADOOP-11750.
-
Resolution: Invalid






[jira] [Commented] (HADOOP-11747) Why not re-use the security model offered by SELINUX?

2015-03-26 Thread Madhan Sundararajan Devaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14381557#comment-14381557
 ] 

Madhan Sundararajan Devaki commented on HADOOP-11747:
-

Please let me know the mailing list.






[jira] [Commented] (HADOOP-11747) Why not re-use the security model offered by SELINUX?

2015-03-26 Thread Madhan Sundararajan Devaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14381589#comment-14381589
 ] 

Madhan Sundararajan Devaki commented on HADOOP-11747:
-

Thanks.
There seems to be a lot of them. :)
To which mailing list should I forward this question please?






[jira] [Commented] (HADOOP-11753) TestS3AContractOpen#testOpenReadZeroByteFile fails due to negative range header

2015-03-26 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14381681#comment-14381681
 ] 

Steve Loughran commented on HADOOP-11753:
-

This is interesting. That line hasn't changed since S3A shipped, and I'm not 
seeing errors against US East.

Which AWS endpoint are you testing against? Or is it something in-house 
supporting the S3 protocol?
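For reference, the failing request can be reproduced arithmetically: reading 
from position 0 of a zero-byte object yields an HTTP Range of {{bytes=0--1}}. 
A minimal plain-Java sketch of the arithmetic (method name hypothetical, 
mirroring what S3AInputStream#reopen effectively asks for):
{code}
// Sketch of how a negative range header arises: S3A-style code asks for
// bytes [pos, contentLength - 1]; for a zero-byte object that is [0, -1],
// serialized as "bytes=0--1", which S3 rejects with 416 InvalidRange.
public class RangeHeaderSketch {
    public static String rangeHeader(long pos, long contentLength) {
        return "bytes=" + pos + "-" + (contentLength - 1);
    }

    public static void main(String[] args) {
        System.out.println(rangeHeader(0, 0));  // bytes=0--1  (invalid)
        System.out.println(rangeHeader(0, 10)); // bytes=0-9   (valid)
    }
}
{code}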

 TestS3AContractOpen#testOpenReadZeroByteFile fails due to negative range 
 header
 ---

 Key: HADOOP-11753
 URL: https://issues.apache.org/jira/browse/HADOOP-11753
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 3.0.0, 2.7.0
Reporter: Takenori Sato
Assignee: Takenori Sato
 Attachments: HADOOP-11753-branch-2.7.001.patch


 _TestS3AContractOpen#testOpenReadZeroByteFile_ fails as follows.
 {code}
 testOpenReadZeroByteFile(org.apache.hadoop.fs.contract.s3a.TestS3AContractOpen)
   Time elapsed: 3.312 sec   ERROR!
 com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 416, AWS 
 Service: Amazon S3, AWS Request ID: A58A95E0D36811E4, AWS Error Code: 
 InvalidRange, AWS Error Message: The requested range cannot be satisfied.
   at 
 com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
   at 
 com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
   at 
 com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
   at 
 com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
   at 
 com.amazonaws.services.s3.AmazonS3Client.getObject(AmazonS3Client.java:)
   at 
 org.apache.hadoop.fs.s3a.S3AInputStream.reopen(S3AInputStream.java:91)
   at 
 org.apache.hadoop.fs.s3a.S3AInputStream.openIfNeeded(S3AInputStream.java:62)
   at org.apache.hadoop.fs.s3a.S3AInputStream.read(S3AInputStream.java:127)
   at java.io.FilterInputStream.read(FilterInputStream.java:83)
   at 
 org.apache.hadoop.fs.contract.AbstractContractOpenTest.testOpenReadZeroByteFile(AbstractContractOpenTest.java:66)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
   at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
   at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
   at 
 org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
 {code}
 This is because the header is wrong when calling _S3AInputStream#read_ after 
 _S3AInputStream#open_.
 {code}
 Range: bytes=0--1
 * from 0 to -1
 {code}
 Tested on the latest branch-2.7.
 {quote}
 $ git log
 commit d286673c602524af08935ea132c8afd181b6e2e4
 Author: Jitendra Pandey Jitendra@Jitendra-Pandeys-MacBook-Pro-4.local
 Date:   Tue Mar 24 16:17:06 2015 -0700
 {quote}





[jira] [Updated] (HADOOP-11755) Update avro version to have PowerPC supported Snappy-java

2015-03-26 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11755:

             Priority: Minor  (was: Major)
    Target Version/s:   (was: 2.7.0)
   Affects Version/s: 3.0.0  (was: 2.6.0)
           Issue Type: Improvement  (was: Task)

 Update avro version to have PowerPC supported Snappy-java 
 --

 Key: HADOOP-11755
 URL: https://issues.apache.org/jira/browse/HADOOP-11755
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Affects Versions: 3.0.0
 Environment: PowerPC64, PowerPC64LE
Reporter: Ayappan
Priority: Minor

 Hadoop pulls in snappy-java 1.0.4.1 (which doesn't have PowerPC native 
 libraries) through its Avro 1.7.4 dependency. 
 The current Avro development version (1.8.0-SNAPSHOT) has updated the 
 snappy-java version to 1.1.1.3, which has ppc64 & ppc64le native libraries. So 
 Hadoop needs to update its Avro version to the upcoming release (probably 
 1.7.8) to get PowerPC-supported snappy-java in its lib.





[jira] [Commented] (HADOOP-11755) Update avro version to have PowerPC supported Snappy-java

2015-03-26 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11755?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14381693#comment-14381693
 ] 

Steve Loughran commented on HADOOP-11755:
-

Linking to HADOOP-9991, the general update JARs JIRA.

Upgrading dependencies is one of the key backwards compatibility troublespots: 
we don't want to ship old binaries, but we often have to for fear of breaking 
things downstream. 

There isn't going to be a rush to do this for Avro; it's not going to happen 
until after a release has been made and everyone is happy that the new Avro and 
snappy releases will not break things such as existing MR workloads.

You are, of course, free to install later versions of Avro (even unreleased 
ones) in your own installations.
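
For anyone doing that, a hedged sketch of pinning the versions in a downstream 
Maven build (the coordinates are the usual org.apache.avro:avro and 
org.xerial.snappy:snappy-java; the versions are the ones named in this issue, 
one of them speculative at the time, so verify before use):
{code}
<!-- Sketch: force newer avro/snappy-java in your own build.
     Versions are those named in this issue; verify before use. -->
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>org.apache.avro</groupId>
      <artifactId>avro</artifactId>
      <version>1.7.8</version>
    </dependency>
    <dependency>
      <groupId>org.xerial.snappy</groupId>
      <artifactId>snappy-java</artifactId>
      <version>1.1.1.3</version>
    </dependency>
  </dependencies>
</dependencyManagement>
{code}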






[jira] [Commented] (HADOOP-11719) [Fsshell] Remove bin/hadoop reference from GenericOptionsParser default help text

2015-03-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14381744#comment-14381744
 ] 

Hudson commented on HADOOP-11719:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #878 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/878/])
HADOOP-11719.[Fsshell] Remove bin/hadoop reference from GenericOptionsParser 
default help text. Contributed by Brahma Reddy Battula. (harsh: rev 
b4b4fe90569a116c67bfc94fbfbab95b1a0b712a)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/GenericOptionsParser.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 [Fsshell] Remove bin/hadoop reference from GenericOptionsParser default help 
 text
 -

 Key: HADOOP-11719
 URL: https://issues.apache.org/jira/browse/HADOOP-11719
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 2.6.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
Priority: Minor
  Labels: newbie
 Fix For: 2.8.0

 Attachments: HADOOP-11719-001.patch, HDFS-3387.patch, 
 HDFS-3387_updated.patch


 Scenario:
 --
 Execute any FsShell command with invalid options,
 e.g. ./hdfs haadmin -transitionToActive...
 It currently logs:
 bin/hadoop command [genericOptions] [commandOptions]...
 Expected: the help message is misleading, since it tells the user to run 
 bin/hadoop, which is not the command the user actually ran.
 It would be better to log bin/hdfs; in any case, the hadoop command is deprecated.
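 The fix is essentially to parameterize the executable name in the usage 
 string rather than hard-coding it; a minimal plain-Java sketch of the idea 
 (method and class names hypothetical, not the actual GenericOptionsParser code):
 {code}
// Sketch: build the generic-options usage line from the actual command
// the user ran, instead of hard-coding "bin/hadoop".
public class UsageSketch {
    public static String usage(String executable) {
        return executable + " command [genericOptions] [commandOptions]";
    }

    public static void main(String[] args) {
        System.out.println(usage("bin/hdfs"));
        System.out.println(usage("bin/hadoop"));
    }
}
 {code}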





[jira] [Commented] (HADOOP-10670) Allow AuthenticationFilters to load secret from signature secret files

2015-03-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14381747#comment-14381747
 ] 

Hudson commented on HADOOP-10670:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #878 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/878/])
HADOOP-10670. Allow AuthenticationFilters to load secret from signature secret 
files. Contributed by Kai Zheng. (wheat9: rev 
e4b8d9e72d54d4725bf2a902452459b6b243b2e9)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestAuthenticationFilter.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/security/TimelineAuthenticationFilterInitializer.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/AuthenticationFilterInitializer.java
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestAuthenticationFilter.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/security/http/RMAuthenticationFilterInitializer.java
Addendum for HADOOP-10670. (wheat9: rev 
3807884263f859f0aaf6a7cbf0009ffc6543c157)
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestFileSignerSecretProvider.java
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/FileSignerSecretProvider.java


 Allow AuthenticationFilters to load secret from signature secret files
 --

 Key: HADOOP-10670
 URL: https://issues.apache.org/jira/browse/HADOOP-10670
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Kai Zheng
Assignee: Kai Zheng
Priority: Minor
 Fix For: 2.7.0

 Attachments: HADOOP-10670-v4.patch, HADOOP-10670-v5.patch, 
 HADOOP-10670-v6.patch, hadoop-10670-v2.patch, hadoop-10670-v3.patch, 
 hadoop-10670.patch


 In the Hadoop web console, using AuthenticationFilterInitializer makes it 
 possible to configure AuthenticationFilter with the required signature secret 
 by specifying the signature.secret.file property. This improvement would also 
 allow that when AuthenticationFilterInitializer isn't used, in situations like 
 WebHDFS.
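 A hedged sketch of the file-based secret loading this enables, in plain Java 
 (simplified; the real FileSignerSecretProvider reads the file named by the 
 configured property, and the class/method names here are hypothetical):
 {code}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Sketch: load the signing secret from the file named by the
// signature.secret.file property (simplified illustration).
public class SecretFileSketch {
    public static byte[] loadSecret(Path secretFile) throws IOException {
        byte[] secret = Files.readAllBytes(secretFile);
        if (secret.length == 0) {
            throw new IOException("Secret file is empty: " + secretFile);
        }
        return secret;
    }

    public static void main(String[] args) throws IOException {
        Path f = Files.createTempFile("secret", ".txt");
        Files.write(f, "s3cr3t".getBytes());
        System.out.println(new String(loadSecret(f))); // s3cr3t
        Files.deleteIfExists(f);
    }
}
 {code}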





[jira] [Commented] (HADOOP-11524) hadoop_do_classpath_subcommand throws a shellcheck warning

2015-03-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14381745#comment-14381745
 ] 

Hudson commented on HADOOP-11524:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #878 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/878/])
HADOOP-11524. hadoop_do_classpath_subcommand throws a shellcheck warning. 
Contributed by Chris Nauroth. (cnauroth: rev 
4528eb9fb2e1a99c985926eacb3450b806ea6b4f)
* hadoop-common-project/hadoop-common/src/main/bin/hadoop
* hadoop-mapreduce-project/bin/mapred
* hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
* hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
* hadoop-yarn-project/hadoop-yarn/bin/yarn
* hadoop-common-project/hadoop-common/CHANGES.txt


 hadoop_do_classpath_subcommand throws a shellcheck warning
 --

 Key: HADOOP-11524
 URL: https://issues.apache.org/jira/browse/HADOOP-11524
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Chris Nauroth
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-11524.001.patch


 {code}
 CLASS=org.apache.hadoop.util.Classpath
 ^-- SC2034: CLASS appears unused. Verify it or export it.
 {code}
 We should probably use a local var here and return it or something, even 
 though CLASS is technically a global.





[jira] [Commented] (HADOOP-11724) DistCp throws NPE when the target directory is root.

2015-03-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14381746#comment-14381746
 ] 

Hudson commented on HADOOP-11724:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #878 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/878/])
HADOOP-11724. DistCp throws NPE when the target directory is root. (Lei Eddy Xu 
via Yongjun Zhang) (yzhang: rev 44809b80814d5520a73d5609d0f73a13eb2360ac)
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java
* hadoop-common-project/hadoop-common/CHANGES.txt
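
The underlying problem and the shape of the fix can be sketched in plain Java 
(names hypothetical; in Hadoop, Path.getParent() returns null for the root, 
which is what CopyCommitter#cleanupTempFiles has to guard against):
{code}
// Sketch: the parent of "/" is null, so cleanup code must check for it
// before deleting (mirrors the idea of the CopyCommitter fix).
public class RootParentSketch {
    public static String parent(String path) {
        if (path.equals("/")) {
            return null; // root has no parent
        }
        int i = path.lastIndexOf('/');
        return i == 0 ? "/" : path.substring(0, i);
    }

    public static boolean cleanupTempFiles(String targetPath) {
        String parentDir = parent(targetPath);
        if (parentDir == null) {
            return false; // target is the root: nothing to clean up
        }
        // delete(parentDir + "/.distcp.tmp") would go here
        return true;
    }

    public static void main(String[] args) {
        System.out.println(parent("/"));           // null
        System.out.println(cleanupTempFiles("/")); // false
        System.out.println(parent("/a/b"));        // /a
    }
}
{code}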


 DistCp throws NPE when the target directory is root.
 

 Key: HADOOP-11724
 URL: https://issues.apache.org/jira/browse/HADOOP-11724
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
 Fix For: 2.8.0

 Attachments: HADOOP-11724.000.patch, HADOOP-11724.001.patch, 
 HADOOP-11724.002.patch


 DistCp throws an NPE when the target directory is the root, because 
 {{CopyCommitter#cleanupTempFiles}} attempts to delete the parent directory of 
 the root, which is {{null}}:
 {code}
 $ hadoop distcp pom.xml hdfs://localhost/
 15/03/17 11:17:44 WARN util.NativeCodeLoader: Unable to load native-hadoop 
 library for your platform... using builtin-java classes where applicable
 15/03/17 11:17:45 INFO tools.DistCp: Input Options: 
 DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, 
 ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', 
 copyStrategy='uniformsize', sourceFileListing=null, sourcePaths=[pom.xml], 
 targetPath=hdfs://localhost/, targetPathExists=true, preserveRawXattrs=false}
 15/03/17 11:17:45 INFO Configuration.deprecation: session.id is deprecated. 
 Instead, use dfs.metrics.session-id
 15/03/17 11:17:45 INFO jvm.JvmMetrics: Initializing JVM Metrics with 
 processName=JobTracker, sessionId=
 15/03/17 11:17:45 INFO Configuration.deprecation: io.sort.mb is deprecated. 
 Instead, use mapreduce.task.io.sort.mb
 15/03/17 11:17:45 INFO Configuration.deprecation: io.sort.factor is 
 deprecated. Instead, use mapreduce.task.io.sort.factor
 15/03/17 11:17:45 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with 
 processName=JobTracker, sessionId= - already initialized
 15/03/17 11:17:45 INFO mapreduce.JobSubmitter: number of splits:1
 15/03/17 11:17:45 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
 job_local992233322_0001
 15/03/17 11:17:46 INFO mapreduce.Job: The url to track the job: 
 http://localhost:8080/
 15/03/17 11:17:46 INFO tools.DistCp: DistCp job-id: job_local992233322_0001
 15/03/17 11:17:46 INFO mapreduce.Job: Running job: job_local992233322_0001
 15/03/17 11:17:46 INFO mapred.LocalJobRunner: OutputCommitter set in config 
 null
 15/03/17 11:17:46 INFO output.FileOutputCommitter: File Output Committer 
 Algorithm version is 1
 15/03/17 11:17:46 INFO mapred.LocalJobRunner: OutputCommitter is 
 org.apache.hadoop.tools.mapred.CopyCommitter
 15/03/17 11:17:46 INFO mapred.LocalJobRunner: Waiting for map tasks
 15/03/17 11:17:46 INFO mapred.LocalJobRunner: Starting task: 
 attempt_local992233322_0001_m_00_0
 15/03/17 11:17:46 INFO output.FileOutputCommitter: File Output Committer 
 Algorithm version is 1
 15/03/17 11:17:46 INFO util.ProcfsBasedProcessTree: ProcfsBasedProcessTree 
 currently is supported only on Linux.
 15/03/17 11:17:46 INFO mapred.Task:  Using ResourceCalculatorProcessTree : 
 null
 15/03/17 11:17:46 INFO mapred.MapTask: Processing split: 
 file:/tmp/hadoop/mapred/staging/lei2046334351/.staging/_distcp-1889397390/fileList.seq:0+220
 15/03/17 11:17:46 INFO output.FileOutputCommitter: File Output Committer 
 Algorithm version is 1
 15/03/17 11:17:46 INFO mapred.CopyMapper: Copying 
 file:/Users/lei/work/cloudera/s3a_cp_target/pom.xml to 
 hdfs://localhost/pom.xml
 15/03/17 11:17:46 INFO mapred.CopyMapper: Skipping copy of 
 file:/Users/lei/work/cloudera/s3a_cp_target/pom.xml to 
 hdfs://localhost/pom.xml
 15/03/17 11:17:46 INFO mapred.LocalJobRunner:
 15/03/17 11:17:46 INFO mapred.Task: 
 Task:attempt_local992233322_0001_m_00_0 is done. And is in the process of 
 committing
 15/03/17 11:17:46 INFO mapred.LocalJobRunner:
 15/03/17 11:17:46 INFO mapred.Task: Task 
 attempt_local992233322_0001_m_00_0 is allowed to commit now
 15/03/17 11:17:46 INFO output.FileOutputCommitter: Saved output of task 
 'attempt_local992233322_0001_m_00_0' to 
 file:/tmp/hadoop/mapred/staging/lei2046334351/.staging/_distcp-1889397390/_logs/_temporary/0/task_local992233322_0001_m_00
 15/03/17 11:17:46 INFO mapred.LocalJobRunner: Copying 
 file:/Users/lei/work/cloudera/s3a_cp_target/pom.xml to 
 hdfs://localhost/pom.xml
 15/03/17 11:17:46 INFO mapred.Task: Task 
 

[jira] [Updated] (HADOOP-11753) TestS3AContractOpen#testOpenReadZeroByteFile fails due to negative range header

2015-03-26 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11753?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11753:

Affects Version/s: 2.7.0, 3.0.0

 TestS3AContractOpen#testOpenReadZeroByteFile fails due to negative range 
 header
 ---

 Key: HADOOP-11753
 URL: https://issues.apache.org/jira/browse/HADOOP-11753
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 3.0.0, 2.7.0
Reporter: Takenori Sato
Assignee: Takenori Sato
 Attachments: HADOOP-11753-branch-2.7.001.patch


 _TestS3AContractOpen#testOpenReadZeroByteFile_ fails as follows.
 {code}
 testOpenReadZeroByteFile(org.apache.hadoop.fs.contract.s3a.TestS3AContractOpen)
   Time elapsed: 3.312 sec   ERROR!
 com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 416, AWS 
 Service: Amazon S3, AWS Request ID: A58A95E0D36811E4, AWS Error Code: 
 InvalidRange, AWS Error Message: The requested range cannot be satisfied.
   at 
 com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
   at 
 com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
   at 
 com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
   at 
 com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
   at 
 com.amazonaws.services.s3.AmazonS3Client.getObject(AmazonS3Client.java:)
   at 
 org.apache.hadoop.fs.s3a.S3AInputStream.reopen(S3AInputStream.java:91)
   at 
 org.apache.hadoop.fs.s3a.S3AInputStream.openIfNeeded(S3AInputStream.java:62)
   at org.apache.hadoop.fs.s3a.S3AInputStream.read(S3AInputStream.java:127)
   at java.io.FilterInputStream.read(FilterInputStream.java:83)
   at 
 org.apache.hadoop.fs.contract.AbstractContractOpenTest.testOpenReadZeroByteFile(AbstractContractOpenTest.java:66)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
   at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
   at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
   at 
 org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
 {code}
 This is because the Range header is invalid when calling _S3AInputStream#read_ 
 after _S3AInputStream#open_ on a zero-byte file.
 {code}
 Range: bytes=0--1
 * from 0 to -1
 {code}
 Tested on the latest branch-2.7.
 {quote}
 $ git log
 commit d286673c602524af08935ea132c8afd181b6e2e4
 Author: Jitendra Pandey Jitendra@Jitendra-Pandeys-MacBook-Pro-4.local
 Date:   Tue Mar 24 16:17:06 2015 -0700
 {quote}
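A minimal Python sketch (the actual code is Java, in {{S3AInputStream}}) of how an inclusive byte-range ending at content-length minus one produces the invalid {{bytes=0--1}} header for a zero-byte object, and the kind of guard that avoids issuing the ranged GET at all. The function names here are illustrative, not the real method names:

```python
def range_header(pos, content_length):
    """Build an HTTP Range header for bytes [pos, content_length).

    S3 byte ranges are inclusive on both ends, so the last byte is
    content_length - 1. For a zero-byte object that end position is -1,
    yielding the malformed "bytes=0--1" seen in the 416 failure above.
    """
    return "bytes=%d-%d" % (pos, content_length - 1)

def open_for_read(pos, content_length):
    # Guard: a zero-byte object (or a position at/after EOF) has nothing
    # to fetch, so skip the ranged GET instead of sending a bad header.
    if content_length == 0 or pos >= content_length:
        return None  # caller treats this as immediate EOF
    return range_header(pos, content_length)

print(range_header(0, 0))   # the malformed header: bytes=0--1
print(open_for_read(0, 0))  # guarded: None (EOF, no request issued)
```

The guard mirrors the shape of the eventual fix: treat a read past the end of a zero-byte file as EOF rather than letting S3 reject the request.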



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-10670) Allow AuthenticationFilters to load secret from signature secret files

2015-03-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14381736#comment-14381736
 ] 

Hudson commented on HADOOP-10670:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #144 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/144/])
HADOOP-10670. Allow AuthenticationFilters to load secret from signature secret 
files. Contributed by Kai Zheng. (wheat9: rev 
e4b8d9e72d54d4725bf2a902452459b6b243b2e9)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/security/http/RMAuthenticationFilterInitializer.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/security/TimelineAuthenticationFilterInitializer.java
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestAuthenticationFilter.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/AuthenticationFilterInitializer.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestAuthenticationFilter.java
Addendum for HADOOP-10670. (wheat9: rev 
3807884263f859f0aaf6a7cbf0009ffc6543c157)
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/FileSignerSecretProvider.java
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestFileSignerSecretProvider.java


 Allow AuthenticationFilters to load secret from signature secret files
 --

 Key: HADOOP-10670
 URL: https://issues.apache.org/jira/browse/HADOOP-10670
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Kai Zheng
Assignee: Kai Zheng
Priority: Minor
 Fix For: 2.7.0

 Attachments: HADOOP-10670-v4.patch, HADOOP-10670-v5.patch, 
 HADOOP-10670-v6.patch, hadoop-10670-v2.patch, hadoop-10670-v3.patch, 
 hadoop-10670.patch


 In the Hadoop web console, AuthenticationFilterInitializer allows configuring 
 AuthenticationFilter with the required signature secret by specifying the 
 signature.secret.file property. This improvement would also allow loading the 
 secret from a file when AuthenticationFilterInitializer isn't used, for 
 example in webhdfs.
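The committed change adds a {{FileSignerSecretProvider}} (Java, listed in the addendum above). A hedged Python sketch of the same idea, reading the signing secret from the file named by {{signature.secret.file}}; the constructor and method names here are illustrative, not the Hadoop API:

```python
import os

class FileSignerSecretProviderSketch:
    """Illustrative sketch (not the actual Hadoop Java class) of loading
    the signing secret for an AuthenticationFilter from a secret file,
    as configured via the signature.secret.file property."""

    def __init__(self, config):
        path = config.get("signature.secret.file")
        if not path or not os.path.isfile(path):
            raise ValueError("signature.secret.file is missing or unreadable")
        with open(path, "rb") as f:
            self._secret = f.read().strip()
        if not self._secret:
            raise ValueError("signature secret file is empty")

    def current_secret(self):
        return self._secret
```

Failing fast on a missing or empty file matters here: silently falling back to a random secret would break signature validation across servers sharing the file.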





[jira] [Commented] (HADOOP-11724) DistCp throws NPE when the target directory is root.

2015-03-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14381735#comment-14381735
 ] 

Hudson commented on HADOOP-11724:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #144 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/144/])
HADOOP-11724. DistCp throws NPE when the target directory is root. (Lei Eddy Xu 
via Yongjun Zhang) (yzhang: rev 44809b80814d5520a73d5609d0f73a13eb2360ac)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java


 DistCp throws NPE when the target directory is root.
 

 Key: HADOOP-11724
 URL: https://issues.apache.org/jira/browse/HADOOP-11724
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
 Fix For: 2.8.0

 Attachments: HADOOP-11724.000.patch, HADOOP-11724.001.patch, 
 HADOOP-11724.002.patch


 DistCp throws an NPE when the target directory is root, because 
 {{CopyCommitter#cleanupTempFiles}} attempts to delete the parent directory of 
 root, which is {{null}}:
 {code}
 $ hadoop distcp pom.xml hdfs://localhost/
 15/03/17 11:17:44 WARN util.NativeCodeLoader: Unable to load native-hadoop 
 library for your platform... using builtin-java classes where applicable
 15/03/17 11:17:45 INFO tools.DistCp: Input Options: 
 DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, 
 ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', 
 copyStrategy='uniformsize', sourceFileListing=null, sourcePaths=[pom.xml], 
 targetPath=hdfs://localhost/, targetPathExists=true, preserveRawXattrs=false}
 15/03/17 11:17:45 INFO Configuration.deprecation: session.id is deprecated. 
 Instead, use dfs.metrics.session-id
 15/03/17 11:17:45 INFO jvm.JvmMetrics: Initializing JVM Metrics with 
 processName=JobTracker, sessionId=
 15/03/17 11:17:45 INFO Configuration.deprecation: io.sort.mb is deprecated. 
 Instead, use mapreduce.task.io.sort.mb
 15/03/17 11:17:45 INFO Configuration.deprecation: io.sort.factor is 
 deprecated. Instead, use mapreduce.task.io.sort.factor
 15/03/17 11:17:45 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with 
 processName=JobTracker, sessionId= - already initialized
 15/03/17 11:17:45 INFO mapreduce.JobSubmitter: number of splits:1
 15/03/17 11:17:45 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
 job_local992233322_0001
 15/03/17 11:17:46 INFO mapreduce.Job: The url to track the job: 
 http://localhost:8080/
 15/03/17 11:17:46 INFO tools.DistCp: DistCp job-id: job_local992233322_0001
 15/03/17 11:17:46 INFO mapreduce.Job: Running job: job_local992233322_0001
 15/03/17 11:17:46 INFO mapred.LocalJobRunner: OutputCommitter set in config 
 null
 15/03/17 11:17:46 INFO output.FileOutputCommitter: File Output Committer 
 Algorithm version is 1
 15/03/17 11:17:46 INFO mapred.LocalJobRunner: OutputCommitter is 
 org.apache.hadoop.tools.mapred.CopyCommitter
 15/03/17 11:17:46 INFO mapred.LocalJobRunner: Waiting for map tasks
 15/03/17 11:17:46 INFO mapred.LocalJobRunner: Starting task: 
 attempt_local992233322_0001_m_00_0
 15/03/17 11:17:46 INFO output.FileOutputCommitter: File Output Committer 
 Algorithm version is 1
 15/03/17 11:17:46 INFO util.ProcfsBasedProcessTree: ProcfsBasedProcessTree 
 currently is supported only on Linux.
 15/03/17 11:17:46 INFO mapred.Task:  Using ResourceCalculatorProcessTree : 
 null
 15/03/17 11:17:46 INFO mapred.MapTask: Processing split: 
 file:/tmp/hadoop/mapred/staging/lei2046334351/.staging/_distcp-1889397390/fileList.seq:0+220
 15/03/17 11:17:46 INFO output.FileOutputCommitter: File Output Committer 
 Algorithm version is 1
 15/03/17 11:17:46 INFO mapred.CopyMapper: Copying 
 file:/Users/lei/work/cloudera/s3a_cp_target/pom.xml to 
 hdfs://localhost/pom.xml
 15/03/17 11:17:46 INFO mapred.CopyMapper: Skipping copy of 
 file:/Users/lei/work/cloudera/s3a_cp_target/pom.xml to 
 hdfs://localhost/pom.xml
 15/03/17 11:17:46 INFO mapred.LocalJobRunner:
 15/03/17 11:17:46 INFO mapred.Task: 
 Task:attempt_local992233322_0001_m_00_0 is done. And is in the process of 
 committing
 15/03/17 11:17:46 INFO mapred.LocalJobRunner:
 15/03/17 11:17:46 INFO mapred.Task: Task 
 attempt_local992233322_0001_m_00_0 is allowed to commit now
 15/03/17 11:17:46 INFO output.FileOutputCommitter: Saved output of task 
 'attempt_local992233322_0001_m_00_0' to 
 file:/tmp/hadoop/mapred/staging/lei2046334351/.staging/_distcp-1889397390/_logs/_temporary/0/task_local992233322_0001_m_00
 15/03/17 11:17:46 INFO mapred.LocalJobRunner: Copying 
 file:/Users/lei/work/cloudera/s3a_cp_target/pom.xml to 
 hdfs://localhost/pom.xml
 15/03/17 11:17:46 INFO mapred.Task: Task 
 

[jira] [Commented] (HADOOP-11524) hadoop_do_classpath_subcommand throws a shellcheck warning

2015-03-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14381734#comment-14381734
 ] 

Hudson commented on HADOOP-11524:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #144 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/144/])
HADOOP-11524. hadoop_do_classpath_subcommand throws a shellcheck warning. 
Contributed by Chris Nauroth. (cnauroth: rev 
4528eb9fb2e1a99c985926eacb3450b806ea6b4f)
* hadoop-mapreduce-project/bin/mapred
* hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
* hadoop-common-project/hadoop-common/src/main/bin/hadoop
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
* hadoop-yarn-project/hadoop-yarn/bin/yarn


 hadoop_do_classpath_subcommand throws a shellcheck warning
 --

 Key: HADOOP-11524
 URL: https://issues.apache.org/jira/browse/HADOOP-11524
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Chris Nauroth
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-11524.001.patch


 {code}
 CLASS=org.apache.hadoop.util.Classpath
 ^-- SC2034: CLASS appears unused. Verify it or export it.
 {code}
 We should probably use a local variable here and return it, even though CLASS 
 is technically a global.





[jira] [Commented] (HADOOP-11719) [Fsshell] Remove bin/hadoop reference from GenericOptionsParser default help text

2015-03-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14381733#comment-14381733
 ] 

Hudson commented on HADOOP-11719:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #144 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/144/])
HADOOP-11719.[Fsshell] Remove bin/hadoop reference from GenericOptionsParser 
default help text. Contributed by Brahma Reddy Battula. (harsh: rev 
b4b4fe90569a116c67bfc94fbfbab95b1a0b712a)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/GenericOptionsParser.java


 [Fsshell] Remove bin/hadoop reference from GenericOptionsParser default help 
 text
 -

 Key: HADOOP-11719
 URL: https://issues.apache.org/jira/browse/HADOOP-11719
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 2.6.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
Priority: Minor
  Labels: newbie
 Fix For: 2.8.0

 Attachments: HADOOP-11719-001.patch, HDFS-3387.patch, 
 HDFS-3387_updated.patch


 Scenario:
 --
 Execute any fsshell command with invalid options, e.g. ./hdfs haadmin 
 -transitionToActive...
 The usage message currently printed is:
 bin/hadoop command [genericOptions] [commandOptions]...
 Expected: the help message is misleading, since bin/hadoop is not the command 
 the user actually ran; it would be better to print bin/hdfs, as the hadoop 
 command is deprecated for these subcommands anyway.





[jira] [Commented] (HADOOP-11524) hadoop_do_classpath_subcommand throws a shellcheck warning

2015-03-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14381855#comment-14381855
 ] 

Hudson commented on HADOOP-11524:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #144 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/144/])
HADOOP-11524. hadoop_do_classpath_subcommand throws a shellcheck warning. 
Contributed by Chris Nauroth. (cnauroth: rev 
4528eb9fb2e1a99c985926eacb3450b806ea6b4f)
* hadoop-mapreduce-project/bin/mapred
* hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-yarn-project/hadoop-yarn/bin/yarn
* hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
* hadoop-common-project/hadoop-common/src/main/bin/hadoop


 hadoop_do_classpath_subcommand throws a shellcheck warning
 --

 Key: HADOOP-11524
 URL: https://issues.apache.org/jira/browse/HADOOP-11524
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Chris Nauroth
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-11524.001.patch


 {code}
 CLASS=org.apache.hadoop.util.Classpath
 ^-- SC2034: CLASS appears unused. Verify it or export it.
 {code}
 We should probably use a local variable here and return it, even though CLASS 
 is technically a global.





[jira] [Commented] (HADOOP-10670) Allow AuthenticationFilters to load secret from signature secret files

2015-03-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14381857#comment-14381857
 ] 

Hudson commented on HADOOP-10670:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #144 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/144/])
HADOOP-10670. Allow AuthenticationFilters to load secret from signature secret 
files. Contributed by Kai Zheng. (wheat9: rev 
e4b8d9e72d54d4725bf2a902452459b6b243b2e9)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/security/http/RMAuthenticationFilterInitializer.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/AuthenticationFilterInitializer.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestAuthenticationFilter.java
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestAuthenticationFilter.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/security/TimelineAuthenticationFilterInitializer.java
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java
Addendum for HADOOP-10670. (wheat9: rev 
3807884263f859f0aaf6a7cbf0009ffc6543c157)
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/FileSignerSecretProvider.java
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestFileSignerSecretProvider.java


 Allow AuthenticationFilters to load secret from signature secret files
 --

 Key: HADOOP-10670
 URL: https://issues.apache.org/jira/browse/HADOOP-10670
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Kai Zheng
Assignee: Kai Zheng
Priority: Minor
 Fix For: 2.7.0

 Attachments: HADOOP-10670-v4.patch, HADOOP-10670-v5.patch, 
 HADOOP-10670-v6.patch, hadoop-10670-v2.patch, hadoop-10670-v3.patch, 
 hadoop-10670.patch


 In the Hadoop web console, AuthenticationFilterInitializer allows configuring 
 AuthenticationFilter with the required signature secret by specifying the 
 signature.secret.file property. This improvement would also allow loading the 
 secret from a file when AuthenticationFilterInitializer isn't used, for 
 example in webhdfs.





[jira] [Commented] (HADOOP-11719) [Fsshell] Remove bin/hadoop reference from GenericOptionsParser default help text

2015-03-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14381854#comment-14381854
 ] 

Hudson commented on HADOOP-11719:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #144 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/144/])
HADOOP-11719.[Fsshell] Remove bin/hadoop reference from GenericOptionsParser 
default help text. Contributed by Brahma Reddy Battula. (harsh: rev 
b4b4fe90569a116c67bfc94fbfbab95b1a0b712a)
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/GenericOptionsParser.java


 [Fsshell] Remove bin/hadoop reference from GenericOptionsParser default help 
 text
 -

 Key: HADOOP-11719
 URL: https://issues.apache.org/jira/browse/HADOOP-11719
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 2.6.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
Priority: Minor
  Labels: newbie
 Fix For: 2.8.0

 Attachments: HADOOP-11719-001.patch, HDFS-3387.patch, 
 HDFS-3387_updated.patch


 Scenario:
 --
 Execute any fsshell command with invalid options, e.g. ./hdfs haadmin 
 -transitionToActive...
 The usage message currently printed is:
 bin/hadoop command [genericOptions] [commandOptions]...
 Expected: the help message is misleading, since bin/hadoop is not the command 
 the user actually ran; it would be better to print bin/hdfs, as the hadoop 
 command is deprecated for these subcommands anyway.





[jira] [Commented] (HADOOP-11724) DistCp throws NPE when the target directory is root.

2015-03-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14381856#comment-14381856
 ] 

Hudson commented on HADOOP-11724:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #144 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/144/])
HADOOP-11724. DistCp throws NPE when the target directory is root. (Lei Eddy Xu 
via Yongjun Zhang) (yzhang: rev 44809b80814d5520a73d5609d0f73a13eb2360ac)
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 DistCp throws NPE when the target directory is root.
 

 Key: HADOOP-11724
 URL: https://issues.apache.org/jira/browse/HADOOP-11724
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
 Fix For: 2.8.0

 Attachments: HADOOP-11724.000.patch, HADOOP-11724.001.patch, 
 HADOOP-11724.002.patch


 DistCp throws an NPE when the target directory is root, because 
 {{CopyCommitter#cleanupTempFiles}} attempts to delete the parent directory of 
 root, which is {{null}}:
 {code}
 $ hadoop distcp pom.xml hdfs://localhost/
 15/03/17 11:17:44 WARN util.NativeCodeLoader: Unable to load native-hadoop 
 library for your platform... using builtin-java classes where applicable
 15/03/17 11:17:45 INFO tools.DistCp: Input Options: 
 DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, 
 ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', 
 copyStrategy='uniformsize', sourceFileListing=null, sourcePaths=[pom.xml], 
 targetPath=hdfs://localhost/, targetPathExists=true, preserveRawXattrs=false}
 15/03/17 11:17:45 INFO Configuration.deprecation: session.id is deprecated. 
 Instead, use dfs.metrics.session-id
 15/03/17 11:17:45 INFO jvm.JvmMetrics: Initializing JVM Metrics with 
 processName=JobTracker, sessionId=
 15/03/17 11:17:45 INFO Configuration.deprecation: io.sort.mb is deprecated. 
 Instead, use mapreduce.task.io.sort.mb
 15/03/17 11:17:45 INFO Configuration.deprecation: io.sort.factor is 
 deprecated. Instead, use mapreduce.task.io.sort.factor
 15/03/17 11:17:45 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with 
 processName=JobTracker, sessionId= - already initialized
 15/03/17 11:17:45 INFO mapreduce.JobSubmitter: number of splits:1
 15/03/17 11:17:45 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
 job_local992233322_0001
 15/03/17 11:17:46 INFO mapreduce.Job: The url to track the job: 
 http://localhost:8080/
 15/03/17 11:17:46 INFO tools.DistCp: DistCp job-id: job_local992233322_0001
 15/03/17 11:17:46 INFO mapreduce.Job: Running job: job_local992233322_0001
 15/03/17 11:17:46 INFO mapred.LocalJobRunner: OutputCommitter set in config 
 null
 15/03/17 11:17:46 INFO output.FileOutputCommitter: File Output Committer 
 Algorithm version is 1
 15/03/17 11:17:46 INFO mapred.LocalJobRunner: OutputCommitter is 
 org.apache.hadoop.tools.mapred.CopyCommitter
 15/03/17 11:17:46 INFO mapred.LocalJobRunner: Waiting for map tasks
 15/03/17 11:17:46 INFO mapred.LocalJobRunner: Starting task: 
 attempt_local992233322_0001_m_00_0
 15/03/17 11:17:46 INFO output.FileOutputCommitter: File Output Committer 
 Algorithm version is 1
 15/03/17 11:17:46 INFO util.ProcfsBasedProcessTree: ProcfsBasedProcessTree 
 currently is supported only on Linux.
 15/03/17 11:17:46 INFO mapred.Task:  Using ResourceCalculatorProcessTree : 
 null
 15/03/17 11:17:46 INFO mapred.MapTask: Processing split: 
 file:/tmp/hadoop/mapred/staging/lei2046334351/.staging/_distcp-1889397390/fileList.seq:0+220
 15/03/17 11:17:46 INFO output.FileOutputCommitter: File Output Committer 
 Algorithm version is 1
 15/03/17 11:17:46 INFO mapred.CopyMapper: Copying 
 file:/Users/lei/work/cloudera/s3a_cp_target/pom.xml to 
 hdfs://localhost/pom.xml
 15/03/17 11:17:46 INFO mapred.CopyMapper: Skipping copy of 
 file:/Users/lei/work/cloudera/s3a_cp_target/pom.xml to 
 hdfs://localhost/pom.xml
 15/03/17 11:17:46 INFO mapred.LocalJobRunner:
 15/03/17 11:17:46 INFO mapred.Task: 
 Task:attempt_local992233322_0001_m_00_0 is done. And is in the process of 
 committing
 15/03/17 11:17:46 INFO mapred.LocalJobRunner:
 15/03/17 11:17:46 INFO mapred.Task: Task 
 attempt_local992233322_0001_m_00_0 is allowed to commit now
 15/03/17 11:17:46 INFO output.FileOutputCommitter: Saved output of task 
 'attempt_local992233322_0001_m_00_0' to 
 file:/tmp/hadoop/mapred/staging/lei2046334351/.staging/_distcp-1889397390/_logs/_temporary/0/task_local992233322_0001_m_00
 15/03/17 11:17:46 INFO mapred.LocalJobRunner: Copying 
 file:/Users/lei/work/cloudera/s3a_cp_target/pom.xml to 
 hdfs://localhost/pom.xml
 15/03/17 11:17:46 INFO mapred.Task: 

[jira] [Commented] (HADOOP-11524) hadoop_do_classpath_subcommand throws a shellcheck warning

2015-03-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14381876#comment-14381876
 ] 

Hudson commented on HADOOP-11524:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2094 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2094/])
HADOOP-11524. hadoop_do_classpath_subcommand throws a shellcheck warning. 
Contributed by Chris Nauroth. (cnauroth: rev 
4528eb9fb2e1a99c985926eacb3450b806ea6b4f)
* hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
* hadoop-common-project/hadoop-common/src/main/bin/hadoop
* hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
* hadoop-yarn-project/hadoop-yarn/bin/yarn
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-mapreduce-project/bin/mapred


 hadoop_do_classpath_subcommand throws a shellcheck warning
 --

 Key: HADOOP-11524
 URL: https://issues.apache.org/jira/browse/HADOOP-11524
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Chris Nauroth
Priority: Minor
 Fix For: 3.0.0

 Attachments: HADOOP-11524.001.patch


 {code}
 CLASS=org.apache.hadoop.util.Classpath
 ^-- SC2034: CLASS appears unused. Verify it or export it.
 {code}
 We should probably use a local variable here and return it, even though CLASS 
 is technically a global.





[jira] [Commented] (HADOOP-11724) DistCp throws NPE when the target directory is root.

2015-03-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14381877#comment-14381877
 ] 

Hudson commented on HADOOP-11724:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2094 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2094/])
HADOOP-11724. DistCp throws NPE when the target directory is root. (Lei Eddy Xu 
via Yongjun Zhang) (yzhang: rev 44809b80814d5520a73d5609d0f73a13eb2360ac)
* 
hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/mapred/CopyCommitter.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 DistCp throws NPE when the target directory is root.
 

 Key: HADOOP-11724
 URL: https://issues.apache.org/jira/browse/HADOOP-11724
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.0
Reporter: Lei (Eddy) Xu
Assignee: Lei (Eddy) Xu
Priority: Minor
 Fix For: 2.8.0

 Attachments: HADOOP-11724.000.patch, HADOOP-11724.001.patch, 
 HADOOP-11724.002.patch


 DistCp throws an NPE when the target directory is root, because 
 {{CopyCommitter#cleanupTempFiles}} attempts to delete the parent directory of 
 root, which is {{null}}:
 {code}
 $ hadoop distcp pom.xml hdfs://localhost/
 15/03/17 11:17:44 WARN util.NativeCodeLoader: Unable to load native-hadoop 
 library for your platform... using builtin-java classes where applicable
 15/03/17 11:17:45 INFO tools.DistCp: Input Options: 
 DistCpOptions{atomicCommit=false, syncFolder=false, deleteMissing=false, 
 ignoreFailures=false, maxMaps=20, sslConfigurationFile='null', 
 copyStrategy='uniformsize', sourceFileListing=null, sourcePaths=[pom.xml], 
 targetPath=hdfs://localhost/, targetPathExists=true, preserveRawXattrs=false}
 15/03/17 11:17:45 INFO Configuration.deprecation: session.id is deprecated. 
 Instead, use dfs.metrics.session-id
 15/03/17 11:17:45 INFO jvm.JvmMetrics: Initializing JVM Metrics with 
 processName=JobTracker, sessionId=
 15/03/17 11:17:45 INFO Configuration.deprecation: io.sort.mb is deprecated. 
 Instead, use mapreduce.task.io.sort.mb
 15/03/17 11:17:45 INFO Configuration.deprecation: io.sort.factor is 
 deprecated. Instead, use mapreduce.task.io.sort.factor
 15/03/17 11:17:45 INFO jvm.JvmMetrics: Cannot initialize JVM Metrics with 
 processName=JobTracker, sessionId= - already initialized
 15/03/17 11:17:45 INFO mapreduce.JobSubmitter: number of splits:1
 15/03/17 11:17:45 INFO mapreduce.JobSubmitter: Submitting tokens for job: 
 job_local992233322_0001
 15/03/17 11:17:46 INFO mapreduce.Job: The url to track the job: 
 http://localhost:8080/
 15/03/17 11:17:46 INFO tools.DistCp: DistCp job-id: job_local992233322_0001
 15/03/17 11:17:46 INFO mapreduce.Job: Running job: job_local992233322_0001
 15/03/17 11:17:46 INFO mapred.LocalJobRunner: OutputCommitter set in config 
 null
 15/03/17 11:17:46 INFO output.FileOutputCommitter: File Output Committer 
 Algorithm version is 1
 15/03/17 11:17:46 INFO mapred.LocalJobRunner: OutputCommitter is 
 org.apache.hadoop.tools.mapred.CopyCommitter
 15/03/17 11:17:46 INFO mapred.LocalJobRunner: Waiting for map tasks
 15/03/17 11:17:46 INFO mapred.LocalJobRunner: Starting task: 
 attempt_local992233322_0001_m_00_0
 15/03/17 11:17:46 INFO output.FileOutputCommitter: File Output Committer 
 Algorithm version is 1
 15/03/17 11:17:46 INFO util.ProcfsBasedProcessTree: ProcfsBasedProcessTree 
 currently is supported only on Linux.
 15/03/17 11:17:46 INFO mapred.Task:  Using ResourceCalculatorProcessTree : 
 null
 15/03/17 11:17:46 INFO mapred.MapTask: Processing split: 
 file:/tmp/hadoop/mapred/staging/lei2046334351/.staging/_distcp-1889397390/fileList.seq:0+220
 15/03/17 11:17:46 INFO output.FileOutputCommitter: File Output Committer 
 Algorithm version is 1
 15/03/17 11:17:46 INFO mapred.CopyMapper: Copying 
 file:/Users/lei/work/cloudera/s3a_cp_target/pom.xml to 
 hdfs://localhost/pom.xml
 15/03/17 11:17:46 INFO mapred.CopyMapper: Skipping copy of 
 file:/Users/lei/work/cloudera/s3a_cp_target/pom.xml to 
 hdfs://localhost/pom.xml
 15/03/17 11:17:46 INFO mapred.LocalJobRunner:
 15/03/17 11:17:46 INFO mapred.Task: 
 Task:attempt_local992233322_0001_m_00_0 is done. And is in the process of 
 committing
 15/03/17 11:17:46 INFO mapred.LocalJobRunner:
 15/03/17 11:17:46 INFO mapred.Task: Task 
 attempt_local992233322_0001_m_00_0 is allowed to commit now
 15/03/17 11:17:46 INFO output.FileOutputCommitter: Saved output of task 
 'attempt_local992233322_0001_m_00_0' to 
 file:/tmp/hadoop/mapred/staging/lei2046334351/.staging/_distcp-1889397390/_logs/_temporary/0/task_local992233322_0001_m_00
 15/03/17 11:17:46 INFO mapred.LocalJobRunner: Copying 
 file:/Users/lei/work/cloudera/s3a_cp_target/pom.xml to 
 hdfs://localhost/pom.xml
 15/03/17 11:17:46 INFO mapred.Task: Task 
 

[jira] [Commented] (HADOOP-10670) Allow AuthenticationFilters to load secret from signature secret files

2015-03-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14381878#comment-14381878
 ] 

Hudson commented on HADOOP-10670:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2094 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2094/])
HADOOP-10670. Allow AuthenticationFilters to load secret from signature secret 
files. Contributed by Kai Zheng. (wheat9: rev 
e4b8d9e72d54d4725bf2a902452459b6b243b2e9)
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-applicationhistoryservice/src/main/java/org/apache/hadoop/yarn/server/timeline/security/TimelineAuthenticationFilterInitializer.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/AuthenticationFilterInitializer.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/TestAuthenticationFilter.java
* 
hadoop-yarn-project/hadoop-yarn/hadoop-yarn-server/hadoop-yarn-server-common/src/main/java/org/apache/hadoop/yarn/server/security/http/RMAuthenticationFilterInitializer.java
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestAuthenticationFilter.java
Addendum for HADOOP-10670. (wheat9: rev 
3807884263f859f0aaf6a7cbf0009ffc6543c157)
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/TestFileSignerSecretProvider.java
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/FileSignerSecretProvider.java


 Allow AuthenticationFilters to load secret from signature secret files
 --

 Key: HADOOP-10670
 URL: https://issues.apache.org/jira/browse/HADOOP-10670
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Kai Zheng
Assignee: Kai Zheng
Priority: Minor
 Fix For: 2.7.0

 Attachments: HADOOP-10670-v4.patch, HADOOP-10670-v5.patch, 
 HADOOP-10670-v6.patch, hadoop-10670-v2.patch, hadoop-10670-v3.patch, 
 hadoop-10670.patch


 In the Hadoop web console, when AuthenticationFilterInitializer is used, it is 
 possible to configure the AuthenticationFilter with the required signature secret 
 by specifying the signature.secret.file property. This improvement would also 
 allow this when AuthenticationFilterInitializer isn't used, in situations like 
 webhdfs.
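
Concretely, the secret-file mechanism amounts to reading the whole file into memory at filter init time. A minimal standalone sketch of that step (hypothetical class, not the actual FileSignerSecretProvider code):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class SecretFileReader {

    // Reads the whole secret file into a byte[] for use as a signing secret,
    // approximating what a file-backed signer secret provider must do.
    static byte[] readSecret(String secretFile) throws IOException {
        Path p = Paths.get(secretFile);
        byte[] secret = Files.readAllBytes(p);
        if (secret.length == 0) {
            throw new IOException("Secret file is empty: " + secretFile);
        }
        return secret;
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("signature-secret", ".txt");
        Files.write(tmp, "my-secret".getBytes(StandardCharsets.UTF_8));
        System.out.println(readSecret(tmp.toString()).length); // prints 9
    }
}
```

A real provider also has to fail cleanly when the file is missing or unreadable rather than bring the whole HTTP server down.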



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-11257) Update hadoop jar documentation to warn against using it for launching yarn jars

2015-03-26 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth resolved HADOOP-11257.

  Resolution: Fixed
Hadoop Flags: Reviewed

I committed the addendum patch to branch-2 and branch-2.7.  [~iwasakims], thank 
you for acting so quickly to provide the patch.

 Update hadoop jar documentation to warn against using it for launching yarn 
 jars
 --

 Key: HADOOP-11257
 URL: https://issues.apache.org/jira/browse/HADOOP-11257
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.1.1-beta
Reporter: Allen Wittenauer
Assignee: Masatake Iwasaki
Priority: Blocker
 Fix For: 2.7.0

 Attachments: HADOOP-11257-branch-2.addendum.001.patch, 
 HADOOP-11257.1.patch, HADOOP-11257.1.patch, HADOOP-11257.2.patch, 
 HADOOP-11257.3.patch, HADOOP-11257.4.patch, HADOOP-11257.4.patch


 We should update the hadoop jar documentation to warn against using it for 
 launching yarn jars.





[jira] [Updated] (HADOOP-10392) Use FileSystem#makeQualified(Path) instead of Path#makeQualified(FileSystem)

2015-03-26 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-10392:
---
Target Version/s: 2.8.0  (was: 2.7.0)

 Use FileSystem#makeQualified(Path) instead of Path#makeQualified(FileSystem)
 

 Key: HADOOP-10392
 URL: https://issues.apache.org/jira/browse/HADOOP-10392
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Affects Versions: 2.3.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Attachments: HADOOP-10392.2.patch, HADOOP-10392.3.patch, 
 HADOOP-10392.4.patch, HADOOP-10392.4.patch, HADOOP-10392.5.patch, 
 HADOOP-10392.6.patch, HADOOP-10392.7.patch, HADOOP-10392.7.patch, 
 HADOOP-10392.8.patch, HADOOP-10392.patch


 There are some methods that call the deprecated Path#makeQualified(FileSystem), 
 which causes javac warnings.
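
The fix is mechanical: replace the deprecated path.makeQualified(fs) with fs.makeQualified(path). For readers unfamiliar with what qualification does, here is a toy model using only java.net.URI (illustrative, not the Hadoop Path/FileSystem API):

```java
import java.net.URI;

public class QualifyDemo {

    // Toy model of FileSystem#makeQualified(Path): resolve a possibly
    // relative path against the filesystem's working directory, then
    // attach the filesystem's scheme and authority.
    static URI makeQualified(URI fsUri, String workingDir, String path) {
        String resolved = path.startsWith("/") ? path : workingDir + "/" + path;
        return fsUri.resolve(resolved);
    }

    public static void main(String[] args) {
        URI fs = URI.create("hdfs://namenode:8020/");
        System.out.println(makeQualified(fs, "/user/alice", "data/input"));
        // prints hdfs://namenode:8020/user/alice/data/input
    }
}
```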





[jira] [Updated] (HADOOP-10392) Use FileSystem#makeQualified(Path) instead of Path#makeQualified(FileSystem)

2015-03-26 Thread Akira AJISAKA (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira AJISAKA updated HADOOP-10392:
---
Attachment: HADOOP-10392.8.patch

Rebased for the latest trunk.

 Use FileSystem#makeQualified(Path) instead of Path#makeQualified(FileSystem)
 

 Key: HADOOP-10392
 URL: https://issues.apache.org/jira/browse/HADOOP-10392
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs
Affects Versions: 2.3.0
Reporter: Akira AJISAKA
Assignee: Akira AJISAKA
Priority: Minor
  Labels: newbie
 Attachments: HADOOP-10392.2.patch, HADOOP-10392.3.patch, 
 HADOOP-10392.4.patch, HADOOP-10392.4.patch, HADOOP-10392.5.patch, 
 HADOOP-10392.6.patch, HADOOP-10392.7.patch, HADOOP-10392.7.patch, 
 HADOOP-10392.8.patch, HADOOP-10392.patch


 There are some methods that call the deprecated Path#makeQualified(FileSystem), 
 which causes javac warnings.





[jira] [Issue Comment Deleted] (HADOOP-11746) rewrite test-patch.sh

2015-03-26 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11746:
--
Comment: was deleted

(was: (and, of course, JIRA's formatting screwed it up. lol.  but it should 
look good in email.  The JIRA formatting is different, so shouldn't suffer like 
that.))

 rewrite test-patch.sh
 -

 Key: HADOOP-11746
 URL: https://issues.apache.org/jira/browse/HADOOP-11746
 Project: Hadoop Common
  Issue Type: Test
  Components: build, test
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Attachments: HADOOP-11746-00.patch, HADOOP-11746-01.patch, 
 HADOOP-11746-02.patch


 This code is bad and you should feel bad.





[jira] [Comment Edited] (HADOOP-11746) rewrite test-patch.sh

2015-03-26 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14383372#comment-14383372
 ] 

Allen Wittenauer edited comment on HADOOP-11746 at 3/27/15 5:56 AM:


-02: 
* still very little new functionality, except that --dirty-workspace is now 
working for me.
* console output cleanup
* fix the /tmp default for the patch scratch space to be 
/tmp/(projname)-test-patch/pid to allow for simultaneous test patch runs
* add in a timer to show how long various steps take (as requested by 
[~raviprak] )
* Fix more "we should have a var rather than hard code this binary" issues
* just skip the findbugs test if findbugs isn't installed
* send the mvn install run to a log file rather than dumping it to the screen
* fix some of the backslash indentation problems generated by the autoformatter


Current console output now looks like:

{noformat}
w$ dev-support/test-patch.sh --dirty-workspace /tmp/H1
Running in developer mode
/tmp/Hadoop-test-patch/54448 has been created


===
===
Testing patch for H1.
===
===

-1 overall

| Vote |   Subsystem | Comment
|  +1  |@author  |  00m 00s  | The patch does not contain any 
 | @author tags.
|  -1  | tests included  |  00m 00s  | The patch doesn't appear to include 
 | any new or modified tests. Please
 | justify why no new tests are needed
 | for this patch. Also please list what
 | manual steps were performed to verify
 | this patch.
|  +1  |  javac  |  04m 30s  | There were no new javac warning 
 | messages.
|  +1  |javadoc  |  06m 15s  | There were no new javadoc warning 
 | messages.
|  +1  |eclipse:eclipse  |  00m 24s  | The patch built with eclipse:eclipse.
|  -1  |  release audit  |  00m 05s  | The applied patch generated 1 
release 
 | audit warnings.


===
===
   Finished build.
===
===
{noformat}

Note the always centered text and the column wrap on the output. :D


was (Author: aw):
-02: 
* still very little new functionality, except that --dirty-workspace is now 
working for me.
* console output cleanup
* fix the /tmp default for the patch scratch space to be 
/tmp/(projname)-test-patch/pid to allow for simultaneous test patch runs
* add in a timer to show how long various steps take (as requested by 
[~raviprak] )
* Fix more "we should have a var rather than hard code this binary" issues
* just skip the findbugs test if findbugs isn't installed
* send the mvn install run to a log file rather than dumping it to the screen
* fix some of the backslash indentation problems generated by the autoformatter


Current console output now looks like:

{code}
w$ dev-support/test-patch.sh --dirty-workspace /tmp/H1
Running in developer mode
/tmp/Hadoop-test-patch/54448 has been created


===
===
Testing patch for H1.
===
===

-1 overall

| Vote |   Subsystem | Comment
|  +1  |@author  |  00m 00s  | The patch does not contain any 
 | @author tags.
|  -1  | tests included  |  00m 00s  | The patch doesn't appear to include 
 | any new or modified tests. Please
 | justify why no new tests are needed
 | for this patch. Also please list what
 | manual steps were performed to verify
 | this patch.
|  +1  |  javac  |  04m 30s  | There were no new javac warning 
 | messages.
|  +1  |javadoc  |  06m 15s  | There were no new javadoc 

[jira] [Updated] (HADOOP-11691) X86 build of libwinutils is broken

2015-03-26 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-11691:
---
   Resolution: Fixed
Fix Version/s: 2.7.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

+1 for the patch.  I committed this to trunk, branch-2 and branch-2.7.  Kiran, 
thank you for the patch.  Remus and Chuan, thank you for helping with code 
review and testing.

 X86 build of libwinutils is broken
 --

 Key: HADOOP-11691
 URL: https://issues.apache.org/jira/browse/HADOOP-11691
 Project: Hadoop Common
  Issue Type: Bug
  Components: build, native
Affects Versions: 2.7.0
Reporter: Remus Rusanu
Assignee: Kiran Kumar M R
Priority: Critical
 Fix For: 2.7.0

 Attachments: HADOOP-11691-001.patch, HADOOP-11691-002.patch, 
 HADOOP-11691-003.patch


 Hadoop-9922 recently fixed x86 build. After YARN-2190 compiling x86 results 
 in error:
 {code}
 (Link target) -
   
 E:\HW\project\hadoop-common\hadoop-common-project\hadoop-common\target/winutils/hadoopwinutilsvc_s.obj
  : fatal error LNK1112: module machine type 'x64' conflicts with target 
 machine type 'X86' 
 [E:\HW\project\hadoop-common\hadoop-common-project\hadoop-common\src\main\winutils\winutils.vcxproj]
 {code}





[jira] [Updated] (HADOOP-11746) rewrite test-patch.sh

2015-03-26 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11746?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11746:
--
Attachment: HADOOP-11746-02.patch

-02: 
* still very little new functionality, except that --dirty-workspace is now 
working for me.
* console output cleanup
* fix the /tmp default for the patch scratch space to be 
/tmp/(projname)-test-patch/pid to allow for simultaneous test patch runs
* add in a timer to show how long various steps take (as requested by 
[~raviprak] )
* Fix more "we should have a var rather than hard code this binary" issues
* just skip the findbugs test if findbugs isn't installed
* send the mvn install run to a log file rather than dumping it to the screen
* fix some of the backslash indentation problems generated by the autoformatter


Current console output now looks like:

{code}
w$ dev-support/test-patch.sh --dirty-workspace /tmp/H1
Running in developer mode
/tmp/Hadoop-test-patch/54448 has been created


===
===
Testing patch for H1.
===
===

-1 overall

| Vote |   Subsystem | Comment
|  +1  |@author  |  00m 00s  | The patch does not contain any 
 | @author tags.
|  -1  | tests included  |  00m 00s  | The patch doesn't appear to include 
 | any new or modified tests. Please
 | justify why no new tests are needed
 | for this patch. Also please list what
 | manual steps were performed to verify
 | this patch.
|  +1  |  javac  |  04m 30s  | There were no new javac warning 
 | messages.
|  +1  |javadoc  |  06m 15s  | There were no new javadoc warning 
 | messages.
|  +1  |eclipse:eclipse  |  00m 24s  | The patch built with eclipse:eclipse.
|  -1  |  release audit  |  00m 05s  | The applied patch generated 1 
release 
 | audit warnings.


===
===
   Finished build.
===
===
{code}

Note the always centered text and the column wrap on the output. :D

 rewrite test-patch.sh
 -

 Key: HADOOP-11746
 URL: https://issues.apache.org/jira/browse/HADOOP-11746
 Project: Hadoop Common
  Issue Type: Test
  Components: build, test
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Attachments: HADOOP-11746-00.patch, HADOOP-11746-01.patch, 
 HADOOP-11746-02.patch


 This code is bad and you should feel bad.





[jira] [Commented] (HADOOP-11762) Enable swift distcp to secure HDFS

2015-03-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11762?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14383256#comment-14383256
 ] 

Hadoop QA commented on HADOOP-11762:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12707672/HADOOP-11762.000.patch
  against trunk revision 47782cb.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-tools/hadoop-openstack.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6009//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6009//console

This message is automatically generated.

 Enable swift distcp to secure HDFS
 --

 Key: HADOOP-11762
 URL: https://issues.apache.org/jira/browse/HADOOP-11762
 Project: Hadoop Common
  Issue Type: Improvement
  Components: fs/swift
Affects Versions: 2.3.0, 2.4.0, 2.5.0, 2.4.1, 2.6.0, 2.5.1
Reporter: Chen He
Assignee: Chen He
 Attachments: HADOOP-11762.000.patch


 Even though we can use dfs -put or dfs -cp to move data between swift and 
 secured HDFS, it is impractical for moving huge amounts of data like 10TB 
 or larger.
 Current Hadoop code will result in: java.lang.IllegalArgumentException: 
 java.net.UnknownHostException: container.swiftdomain 
 Since SwiftNativeFileSystem does not support the token feature right now, 
 it would be reasonable to override the getCanonicalServiceName method 
 like other filesystem extensions do (S3FileSystem, S3AFileSystem)
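
To make the proposed mechanism concrete, here is a minimal sketch with stand-in classes (not the real FileSystem/TokenCache API): a filesystem whose getCanonicalServiceName returns null simply drops out of delegation-token collection, which is how the S3 filesystems avoid the UnknownHostException above.

```java
import java.util.ArrayList;
import java.util.List;

public class CanonicalServiceDemo {

    // Stand-in for FileSystem; NOT the real Hadoop API.
    static class Fs {
        // The real method derives a service name from the filesystem URI;
        // token-less filesystems override it to return null.
        String getCanonicalServiceName() { return "namenode.example.com:8020"; }
    }

    static class TokenlessFs extends Fs {
        @Override
        String getCanonicalServiceName() { return null; }  // opt out of delegation tokens
    }

    // Mimics the token-collection loop: only filesystems with a non-null
    // service name take part in delegation-token acquisition.
    static List<String> collectServices(List<Fs> filesystems) {
        List<String> services = new ArrayList<>();
        for (Fs fs : filesystems) {
            String service = fs.getCanonicalServiceName();
            if (service != null) {
                services.add(service);
            }
        }
        return services;
    }

    public static void main(String[] args) {
        List<Fs> fss = new ArrayList<>();
        fss.add(new Fs());
        fss.add(new TokenlessFs());
        System.out.println(collectServices(fss)); // prints [namenode.example.com:8020]
    }
}
```

With the override in place, the token-less filesystem never reaches the code path that tries to resolve its hostname for a token service.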





[jira] [Commented] (HADOOP-11691) X86 build of libwinutils is broken

2015-03-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14383347#comment-14383347
 ] 

Hudson commented on HADOOP-11691:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #7445 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7445/])
HADOOP-11691. X86 build of libwinutils is broken. Contributed by Kiran Kumar M 
R. (cnauroth: rev af618f23a70508111f490a24d74fc90161cfc079)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/winutils/win8sdk.props


 X86 build of libwinutils is broken
 --

 Key: HADOOP-11691
 URL: https://issues.apache.org/jira/browse/HADOOP-11691
 Project: Hadoop Common
  Issue Type: Bug
  Components: build, native
Affects Versions: 2.7.0
Reporter: Remus Rusanu
Assignee: Kiran Kumar M R
Priority: Critical
 Fix For: 2.7.0

 Attachments: HADOOP-11691-001.patch, HADOOP-11691-002.patch, 
 HADOOP-11691-003.patch


 Hadoop-9922 recently fixed x86 build. After YARN-2190 compiling x86 results 
 in error:
 {code}
 (Link target) -
   
 E:\HW\project\hadoop-common\hadoop-common-project\hadoop-common\target/winutils/hadoopwinutilsvc_s.obj
  : fatal error LNK1112: module machine type 'x64' conflicts with target 
 machine type 'X86' 
 [E:\HW\project\hadoop-common\hadoop-common-project\hadoop-common\src\main\winutils\winutils.vcxproj]
 {code}





[jira] [Commented] (HADOOP-11746) rewrite test-patch.sh

2015-03-26 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11746?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14383374#comment-14383374
 ] 

Allen Wittenauer commented on HADOOP-11746:
---

(and, of course, JIRA's formatting screwed it up. lol.  but it should look good 
in email.  The JIRA formatting is different, so shouldn't suffer like that.)

 rewrite test-patch.sh
 -

 Key: HADOOP-11746
 URL: https://issues.apache.org/jira/browse/HADOOP-11746
 Project: Hadoop Common
  Issue Type: Test
  Components: build, test
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
 Attachments: HADOOP-11746-00.patch, HADOOP-11746-01.patch, 
 HADOOP-11746-02.patch


 This code is bad and you should feel bad.





[jira] [Updated] (HADOOP-11691) X86 build of libwinutils is broken

2015-03-26 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11691?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-11691:
---
 Priority: Critical  (was: Major)
 Target Version/s: 2.7.0  (was: 3.0.0)
Affects Version/s: (was: 3.0.0)
   2.7.0

[~chuanliu], could you please try to verify Kiran's newest patch to see if this 
has resolved the earlier problem that you saw?  From my side, I was able to 
build successfully for both 64-bit and 32-bit using Windows SDK 7.1.

I'm retargeting this to 2.7.0, which was the original goal for HADOOP-9922.  
I'm bumping priority to critical, because we expect to start cutting 2.7.0 
release candidates in a few days.

 X86 build of libwinutils is broken
 --

 Key: HADOOP-11691
 URL: https://issues.apache.org/jira/browse/HADOOP-11691
 Project: Hadoop Common
  Issue Type: Bug
  Components: build, native
Affects Versions: 2.7.0
Reporter: Remus Rusanu
Assignee: Kiran Kumar M R
Priority: Critical
 Attachments: HADOOP-11691-001.patch, HADOOP-11691-002.patch, 
 HADOOP-11691-003.patch


 Hadoop-9922 recently fixed x86 build. After YARN-2190 compiling x86 results 
 in error:
 {code}
 (Link target) -
   
 E:\HW\project\hadoop-common\hadoop-common-project\hadoop-common\target/winutils/hadoopwinutilsvc_s.obj
  : fatal error LNK1112: module machine type 'x64' conflicts with target 
 machine type 'X86' 
 [E:\HW\project\hadoop-common\hadoop-common-project\hadoop-common\src\main\winutils\winutils.vcxproj]
 {code}





[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-26 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14382282#comment-14382282
 ] 

Haohui Mai commented on HADOOP-11754:
-

bq. How about reverting this change for now (at least for branch-2)? This is a 
blocker for the 2.7 release. Is there a strong reason that HADOOP-10670 must be 
part of the 2.7 release? If not, it may not be a bad idea to revert this for 
now and revise the patch for a later release. Thoughts?

I pulled this into 2.7 because it is a building block for HDFS-5796, which is a 
blocker for 2.7 as well. :-( I'll take care of this today.

 RM fails to start in non-secure mode due to authentication filter failure
 -

 Key: HADOOP-11754
 URL: https://issues.apache.org/jira/browse/HADOOP-11754
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Sangjin Lee
Assignee: Haohui Mai
Priority: Blocker
 Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch


 RM fails to start in the non-secure mode with the following exception:
 {noformat}
 2015-03-25 22:02:42,526 WARN org.mortbay.log: failed RMAuthenticationFilter: 
 javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
 signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
 2015-03-25 22:02:42,526 WARN org.mortbay.log: Failed startup of context 
 org.mortbay.jetty.webapp.WebAppContext@6de50b08{/,jar:file:/Users/sjlee/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-common-3.0.0-SNAPSHOT.jar!/webapps/cluster}
 javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
 signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
   at 
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
   at 
 org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
   at 
 org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
   at 
 org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
   at 
 org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
   at 
 org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
   at org.mortbay.jetty.Server.doStart(Server.java:224)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
   at 
 org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
 Caused by: java.lang.RuntimeException: Could not read signature secret file: 
 /Users/sjlee/hadoop-http-auth-signature-secret
   at 
 org.apache.hadoop.security.authentication.util.FileSignerSecretProvider.init(FileSignerSecretProvider.java:59)
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:264)
   ... 23 more
 ...
 2015-03-25 22:02:42,538 FATAL 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting 
 ResourceManager
 org.apache.hadoop.yarn.webapp.WebAppException: Error starting http server
   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:279)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
   at 
 

[jira] [Assigned] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-26 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai reassigned HADOOP-11754:
---

Assignee: Haohui Mai

 RM fails to start in non-secure mode due to authentication filter failure
 -

 Key: HADOOP-11754
 URL: https://issues.apache.org/jira/browse/HADOOP-11754
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Sangjin Lee
Assignee: Haohui Mai
Priority: Blocker
 Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch


 RM fails to start in the non-secure mode with the following exception:
 {noformat}
 2015-03-25 22:02:42,526 WARN org.mortbay.log: failed RMAuthenticationFilter: 
 javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
 signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
 2015-03-25 22:02:42,526 WARN org.mortbay.log: Failed startup of context 
 org.mortbay.jetty.webapp.WebAppContext@6de50b08{/,jar:file:/Users/sjlee/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-common-3.0.0-SNAPSHOT.jar!/webapps/cluster}
 javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
 signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
   at 
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
   at 
 org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
   at 
 org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
   at 
 org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
   at 
 org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
   at 
 org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
   at org.mortbay.jetty.Server.doStart(Server.java:224)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
   at 
 org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
 Caused by: java.lang.RuntimeException: Could not read signature secret file: 
 /Users/sjlee/hadoop-http-auth-signature-secret
   at 
 org.apache.hadoop.security.authentication.util.FileSignerSecretProvider.init(FileSignerSecretProvider.java:59)
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:264)
   ... 23 more
 ...
 2015-03-25 22:02:42,538 FATAL 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting 
 ResourceManager
 org.apache.hadoop.yarn.webapp.WebAppException: Error starting http server
   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:279)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
   at 
 org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
 Caused by: java.io.IOException: Problem in starting http server. Server 
 handlers failed
   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:785)

[jira] [Updated] (HADOOP-11553) Formalize the shell API

2015-03-26 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11553:
--
Attachment: HADOOP-11553-06.patch

-06:
* Fixed those spelling errors

Thanks for the reviews, btw. :)

 Formalize the shell API
 ---

 Key: HADOOP-11553
 URL: https://issues.apache.org/jira/browse/HADOOP-11553
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation, scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
Priority: Blocker
 Attachments: HADOOP-11553-00.patch, HADOOP-11553-01.patch, 
 HADOOP-11553-02.patch, HADOOP-11553-03.patch, HADOOP-11553-04.patch, 
 HADOOP-11553-05.patch, HADOOP-11553-06.patch


 After HADOOP-11485, we need to formally document functions and environment 
 variables that 3rd parties can expect to be able to exist/use.





[jira] [Reopened] (HADOOP-11257) Update hadoop jar documentation to warn against using it for launching yarn jars

2015-03-26 Thread Gunther Hagleitner (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Gunther Hagleitner reopened HADOOP-11257:
-

This is causing issues in Hive. Can we at least have the warning go to stderr 
instead of stdout? In Hive, anything printed to stdout is considered part of the 
query result, and now that includes this warning message.
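
The requested behavior is just the usual stream discipline: diagnostics on stderr, results on stdout. A generic sketch (hypothetical class, not the actual hadoop shell code):

```java
public class WarnToStderr {
    public static void main(String[] args) {
        // Diagnostics go to stderr so callers that parse stdout
        // (Hive CLI, silent mode, scripts) see only real output.
        System.err.println("WARNING: Use \"yarn jar\" to launch YARN applications.");
        System.out.println("query-result-row-1");
    }
}
```

Run with stderr redirected (java WarnToStderr 2>/dev/null) and only the result line survives, which is the behavior Hive needs.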

 Update hadoop jar documentation to warn against using it for launching yarn 
 jars
 --

 Key: HADOOP-11257
 URL: https://issues.apache.org/jira/browse/HADOOP-11257
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.1.1-beta
Reporter: Allen Wittenauer
Assignee: Masatake Iwasaki
 Fix For: 2.7.0

 Attachments: HADOOP-11257.1.patch, HADOOP-11257.1.patch, 
 HADOOP-11257.2.patch, HADOOP-11257.3.patch, HADOOP-11257.4.patch, 
 HADOOP-11257.4.patch


 We should update the hadoop jar documentation to warn against using it for 
 launching yarn jars.





[jira] [Created] (HADOOP-11756) Warning yarn jar instead of hadoop jar in hadoop 2.7.0

2015-03-26 Thread Gunther Hagleitner (JIRA)
Gunther Hagleitner created HADOOP-11756:
---

 Summary: Warning yarn jar instead of hadoop jar in hadoop 2.7.0
 Key: HADOOP-11756
 URL: https://issues.apache.org/jira/browse/HADOOP-11756
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Gunther Hagleitner


HADOOP-11257 adds a warning to stdout

{noformat}
WARNING: Use yarn jar to launch YARN applications.
{noformat}

which will cause issues, if left untreated, for folks that programmatically parse 
stdout for query results (i.e.: CLI, silent mode, etc).





[jira] [Commented] (HADOOP-11257) Update hadoop jar documentation to warn against using it for launching yarn jars

2015-03-26 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14383005#comment-14383005
 ] 

Chris Nauroth commented on HADOOP-11257:


On further thought, I'm also +1 for the addendum patch that sends the warning 
message to stderr.  This is exactly how it works on trunk via the 
{{hadoop_error}} function.

 Update hadoop jar documentation to warn against using it for launching yarn 
 jars
 --

 Key: HADOOP-11257
 URL: https://issues.apache.org/jira/browse/HADOOP-11257
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.1.1-beta
Reporter: Allen Wittenauer
Assignee: Masatake Iwasaki
Priority: Blocker
 Fix For: 2.7.0

 Attachments: HADOOP-11257-branch-2.addendum.001.patch, 
 HADOOP-11257.1.patch, HADOOP-11257.1.patch, HADOOP-11257.2.patch, 
 HADOOP-11257.3.patch, HADOOP-11257.4.patch, HADOOP-11257.4.patch


 We should update the hadoop jar documentation to warn against using it for 
 launching yarn jars.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-26 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HADOOP-11754:

Attachment: HADOOP-11754.000.patch

 RM fails to start in non-secure mode due to authentication filter failure
 -

 Key: HADOOP-11754
 URL: https://issues.apache.org/jira/browse/HADOOP-11754
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Sangjin Lee
Assignee: Haohui Mai
Priority: Blocker
 Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch, 
 HADOOP-11754.000.patch


 RM fails to start in the non-secure mode with the following exception:
 {noformat}
 2015-03-25 22:02:42,526 WARN org.mortbay.log: failed RMAuthenticationFilter: 
 javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
 signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
 2015-03-25 22:02:42,526 WARN org.mortbay.log: Failed startup of context 
 org.mortbay.jetty.webapp.WebAppContext@6de50b08{/,jar:file:/Users/sjlee/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-common-3.0.0-SNAPSHOT.jar!/webapps/cluster}
 javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
 signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
   at 
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
   at 
 org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
   at 
 org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
   at 
 org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
   at 
 org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
   at 
 org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
   at org.mortbay.jetty.Server.doStart(Server.java:224)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
   at 
 org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
 Caused by: java.lang.RuntimeException: Could not read signature secret file: 
 /Users/sjlee/hadoop-http-auth-signature-secret
   at 
 org.apache.hadoop.security.authentication.util.FileSignerSecretProvider.init(FileSignerSecretProvider.java:59)
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:264)
   ... 23 more
 ...
 2015-03-25 22:02:42,538 FATAL 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting 
 ResourceManager
 org.apache.hadoop.yarn.webapp.WebAppException: Error starting http server
   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:279)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
   at 
 org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
 Caused by: java.io.IOException: Problem in starting http server. Server 
 handlers failed
   at 

[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-26 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14383016#comment-14383016
 ] 

Haohui Mai commented on HADOOP-11754:
-

Uploaded a patch to implement the second approach. In insecure mode, the 
{{AuthenticationFilterInitializer}} will fall back to 
{{RandomSignerSecretProvider}} when the secret file is unavailable. Note that 
the patch is based on HADOOP-11748.
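The fallback described above can be sketched in plain Java (class and method names are hypothetical; this is not the actual patch): prefer the file-based secret when present, fail fast in secure mode, and fall back to a random per-process secret in insecure mode.

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.security.SecureRandom;

// Hypothetical sketch of the fallback behaviour, not the real Hadoop
// classes: file-based secret if readable, random secret otherwise.
public class SecretFallbackSketch {
    static byte[] chooseSecret(File secretFile, boolean secureMode) throws IOException {
        if (secretFile.isFile()) {
            return Files.readAllBytes(secretFile.toPath());
        }
        if (secureMode) {
            // a secure cluster must not silently lose the shared secret
            throw new IOException("cannot read signature secret file: " + secretFile);
        }
        byte[] secret = new byte[32];       // per-process random secret
        new SecureRandom().nextBytes(secret);
        return secret;
    }

    public static void main(String[] args) throws IOException {
        byte[] s = chooseSecret(new File("/no/such/secret"), false);
        System.out.println(s.length);   // 32
    }
}
```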

 RM fails to start in non-secure mode due to authentication filter failure
 -

 Key: HADOOP-11754
 URL: https://issues.apache.org/jira/browse/HADOOP-11754
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Sangjin Lee
Assignee: Haohui Mai
Priority: Blocker
 Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch, 
 HADOOP-11754.000.patch



[jira] [Updated] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-26 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HADOOP-11754:

Target Version/s: 2.7.0

 RM fails to start in non-secure mode due to authentication filter failure
 -

 Key: HADOOP-11754
 URL: https://issues.apache.org/jira/browse/HADOOP-11754
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Sangjin Lee
Assignee: Haohui Mai
Priority: Blocker
 Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch, 
 HADOOP-11754.000.patch



[jira] [Created] (HADOOP-11759) TockenCache doc has minor problem

2015-03-26 Thread Chen He (JIRA)
Chen He created HADOOP-11759:


 Summary: TockenCache doc has minor problem
 Key: HADOOP-11759
 URL: https://issues.apache.org/jira/browse/HADOOP-11759
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.0, 3.0.0
Reporter: Chen He
Priority: Trivial


{code}
/**
   * get delegation token for a specific FS
   * @param fs
   * @param credentials
   * @param p
   * @param conf
   * @throws IOException
   */
  static void obtainTokensForNamenodesInternal(FileSystem fs, 
  Credentials credentials, Configuration conf) throws IOException {
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11664) Loading predefined EC schemas from configuration

2015-03-26 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14383027#comment-14383027
 ] 

Zhe Zhang commented on HADOOP-11664:


Thanks Kai for the patch! The main logic looks good. Just 1 minor comment:

Is it necessary to configure the name of the xml file? I suggest we just 
hard-code the file name to simplify the code.
{code}
+  public static final String IO_ERASURECODE_SCHEMA_FILE_KEY =
+      "hadoop.io.erasurecode.";
+  public static final String IO_ERASURECODE_SCHEMA_FILE_DEFAULT =
+      "ecschema-def.xml";
{code}

 Loading predefined EC schemas from configuration
 

 Key: HADOOP-11664
 URL: https://issues.apache.org/jira/browse/HADOOP-11664
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HADOOP-11664-v2.patch, HADOOP-11664-v3.patch, 
 HDFS-7371_v1.patch


 System administrators can configure multiple EC codecs in the hdfs-site.xml 
 file, and codec instances or schemas in a new configuration file named 
 ec-schema.xml in the conf folder. A codec can be referenced by its instance 
 or schema using the codec name, and a schema can be utilized and specified by 
 the schema name for a folder or EC zone to enforce EC. Once a schema is used 
 to define an EC zone, its associated parameter values will be stored as 
 xattributes and respected thereafter.
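A hypothetical schema file along these lines (element and attribute names are illustrative assumptions, not the committed format) might look like:

```xml
<?xml version="1.0"?>
<!-- Illustrative only: schema names and parameters are assumptions -->
<schemas>
  <schema name="RS-6-3">
    <codec>rs</codec>
    <k>6</k>   <!-- data units -->
    <m>3</m>   <!-- parity units -->
  </schema>
</schemas>
```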



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11760) Typo in DistCp.java

2015-03-26 Thread Chen He (JIRA)
Chen He created HADOOP-11760:


 Summary: Typo in DistCp.java
 Key: HADOOP-11760
 URL: https://issues.apache.org/jira/browse/HADOOP-11760
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Chen He
Priority: Trivial


{code}
/**
   * Create a default working folder for the job, under the
   * job staging directory
   *
   * @return Returns the working folder information
   * @throws Exception - EXception if any
   */
  private Path createMetaFolderPath() throws Exception {
{code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11748) Secrets for auth cookies can be specified in clear text

2015-03-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14383033#comment-14383033
 ] 

Hadoop QA commented on HADOOP-11748:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12707639/HADOOP-11748.001.patch
  against trunk revision 5695c7a.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 4 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-auth hadoop-hdfs-project/hadoop-hdfs-httpfs.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6006//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6006//artifact/patchprocess/newPatchFindbugsWarningshadoop-auth.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6006//console

This message is automatically generated.

 Secrets for auth cookies can be specified in clear text
 ---

 Key: HADOOP-11748
 URL: https://issues.apache.org/jira/browse/HADOOP-11748
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Li Lu
Priority: Critical
 Attachments: HADOOP-11748-032615-poc.patch, HADOOP-11748.001.patch


 Based on the discussion on HADOOP-10670, this jira proposes to remove 
 {{StringSecretProvider}} as it opens up possibilities for misconfiguration 
 and security vulnerabilities.
 {quote}
 My understanding is that the use case of inlining the secret is never 
 supported. The property is used to pass the secret internally. The way it 
 works before HADOOP-10868 is the following:
 * Users specify the initializer of the authentication filter in the 
 configuration.
 * AuthenticationFilterInitializer reads the secret file. The server will not 
 start if the secret file does not exist. The initializer will set the 
 property if it reads the file correctly.
 * There is no way to specify the secret in the configuration out-of-the-box – 
 the secret is always overwritten by AuthenticationFilterInitializer.
 {quote}
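The quoted behaviour can be illustrated with a plain-Java sketch (a toy config map and property name, not the real {{AuthenticationFilterInitializer}}): the initializer refuses to start without the secret file, and the secret it reads always overwrites any inline configuration value.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;

// Toy illustration, not Hadoop code: fail fast when the secret file is
// missing; otherwise overwrite the in-memory property unconditionally,
// so an inline clear-text secret never survives initialization.
public class InitializerSketch {
    static void initialize(Map<String, String> conf, Path secretFile) throws IOException {
        if (!Files.isReadable(secretFile)) {
            throw new IOException("signature secret file not found: " + secretFile);
        }
        String secret = new String(Files.readAllBytes(secretFile)).trim();
        conf.put("signature.secret", secret);  // inline values never survive
    }

    public static void main(String[] args) throws IOException {
        Path f = Files.createTempFile("secret", null);
        Files.write(f, "s3cr3t".getBytes());
        Map<String, String> conf = new HashMap<>();
        conf.put("signature.secret", "inline-clear-text");
        initialize(conf, f);
        System.out.println(conf.get("signature.secret"));  // s3cr3t
        Files.delete(f);
    }
}
```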



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11748) Secrets for auth cookies can be specified in clear text

2015-03-26 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14383047#comment-14383047
 ] 

Haohui Mai commented on HADOOP-11748:
-

The findbugs warning seems to originate from HADOOP-10670; I'll file 
another jira to fix it.

 Secrets for auth cookies can be specified in clear text
 ---

 Key: HADOOP-11748
 URL: https://issues.apache.org/jira/browse/HADOOP-11748
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Li Lu
Priority: Critical
 Attachments: HADOOP-11748-032615-poc.patch, HADOOP-11748.001.patch





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11748) Secrets for auth cookies can be specified in clear text

2015-03-26 Thread Jing Zhao (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14383056#comment-14383056
 ] 

Jing Zhao commented on HADOOP-11748:


+1

 Secrets for auth cookies can be specified in clear text
 ---

 Key: HADOOP-11748
 URL: https://issues.apache.org/jira/browse/HADOOP-11748
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Li Lu
Priority: Critical
 Attachments: HADOOP-11748-032615-poc.patch, HADOOP-11748.001.patch





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11761) Fix findbugs warnings in org.apache.hadoop.security.authentication

2015-03-26 Thread Li Lu (JIRA)
Li Lu created HADOOP-11761:
--

 Summary: Fix findbugs warnings in 
org.apache.hadoop.security.authentication
 Key: HADOOP-11761
 URL: https://issues.apache.org/jira/browse/HADOOP-11761
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Li Lu
Assignee: Li Lu


As discovered in HADOOP-11748, we need to fix the findbugs warnings in 
org.apache.hadoop.security.authentication. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11748) The secrets of auth cookies should not be specified in configuration in clear text

2015-03-26 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HADOOP-11748:

Summary: The secrets of auth cookies should not be specified in 
configuration in clear text  (was: Secrets for auth cookies can be specified in 
clear text)

 The secrets of auth cookies should not be specified in configuration in clear 
 text
 --

 Key: HADOOP-11748
 URL: https://issues.apache.org/jira/browse/HADOOP-11748
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Li Lu
Priority: Critical
 Attachments: HADOOP-11748-032615-poc.patch, HADOOP-11748.001.patch





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11748) The secrets of auth cookies should not be specified in configuration in clear text

2015-03-26 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HADOOP-11748:

   Resolution: Fixed
Fix Version/s: 2.7.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

I've committed the patch to trunk, branch-2 and branch-2.7. Thanks 
[~gtCarrera9] for the contribution.

 The secrets of auth cookies should not be specified in configuration in clear 
 text
 --

 Key: HADOOP-11748
 URL: https://issues.apache.org/jira/browse/HADOOP-11748
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Li Lu
Priority: Critical
 Fix For: 2.7.0

 Attachments: HADOOP-11748-032615-poc.patch, HADOOP-11748.001.patch





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11761) Fix findbugs warnings in org.apache.hadoop.security.authentication

2015-03-26 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11761:
---
Priority: Minor  (was: Major)

 Fix findbugs warnings in org.apache.hadoop.security.authentication
 --

 Key: HADOOP-11761
 URL: https://issues.apache.org/jira/browse/HADOOP-11761
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Li Lu
Assignee: Li Lu
Priority: Minor

 As discovered in HADOOP-11748, we need to fix the findbugs warnings in 
 org.apache.hadoop.security.authentication. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11748) The secrets of auth cookies should not be specified in configuration in clear text

2015-03-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14383084#comment-14383084
 ] 

Hudson commented on HADOOP-11748:
-

FAILURE: Integrated in Hadoop-trunk-Commit #7444 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7444/])
HADOOP-11748. The secrets of auth cookies should not be specified in 
configuration in clear text. Contributed by Li Lu and Haohui Mai. (wheat9: rev 
47782cbf4a66d49064fd3dd6d1d1a19cc42157fc)
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/StringSignerSecretProviderCreator.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/StringSignerSecretProvider.java
* hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/StringSignerSecretProvider.java
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestAuthenticationFilter.java


 The secrets of auth cookies should not be specified in configuration in clear 
 text
 --

 Key: HADOOP-11748
 URL: https://issues.apache.org/jira/browse/HADOOP-11748
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Li Lu
Priority: Critical
 Fix For: 2.7.0

 Attachments: HADOOP-11748-032615-poc.patch, HADOOP-11748.001.patch





--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-26 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HADOOP-11754:

Attachment: HADOOP-11754.001.patch

 RM fails to start in non-secure mode due to authentication filter failure
 -

 Key: HADOOP-11754
 URL: https://issues.apache.org/jira/browse/HADOOP-11754
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Sangjin Lee
Assignee: Haohui Mai
Priority: Blocker
 Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch, 
 HADOOP-11754.000.patch, HADOOP-11754.001.patch


   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
   at 
 org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
 Caused by: java.lang.RuntimeException: Could not read signature secret file: 
 /Users/sjlee/hadoop-http-auth-signature-secret
   at 
 org.apache.hadoop.security.authentication.util.FileSignerSecretProvider.init(FileSignerSecretProvider.java:59)
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:264)
   ... 23 more
 ...
 2015-03-25 22:02:42,538 FATAL 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting 
 ResourceManager
 org.apache.hadoop.yarn.webapp.WebAppException: Error starting http server
   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:279)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
   at 
 org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
 Caused by: java.io.IOException: Problem in starting http server. Server 
 handlers failed
   at 

[jira] [Updated] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-26 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HADOOP-11754:

Status: Patch Available  (was: Open)

 RM fails to start in non-secure mode due to authentication filter failure
 -

 Key: HADOOP-11754
 URL: https://issues.apache.org/jira/browse/HADOOP-11754
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Sangjin Lee
Assignee: Haohui Mai
Priority: Blocker
 Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch, 
 HADOOP-11754.000.patch, HADOOP-11754.001.patch



[jira] [Updated] (HADOOP-11761) Fix findbugs warnings in org.apache.hadoop.security.authentication

2015-03-26 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11761:
---
Status: Patch Available  (was: Open)

 Fix findbugs warnings in org.apache.hadoop.security.authentication
 --

 Key: HADOOP-11761
 URL: https://issues.apache.org/jira/browse/HADOOP-11761
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Li Lu
Assignee: Li Lu
Priority: Minor
 Attachments: HADOOP-11761-032615.patch


 As discovered in HADOOP-11748, we need to fix the findbugs warnings in 
 org.apache.hadoop.security.authentication. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11761) Fix findbugs warnings in org.apache.hadoop.security.authentication

2015-03-26 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11761:
---
Labels: findbugs  (was: )

 Fix findbugs warnings in org.apache.hadoop.security.authentication
 --

 Key: HADOOP-11761
 URL: https://issues.apache.org/jira/browse/HADOOP-11761
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Li Lu
Assignee: Li Lu
Priority: Minor
  Labels: findbugs
 Attachments: HADOOP-11761-032615.patch


 As discovered in HADOOP-11748, we need to fix the findbugs warnings in 
 org.apache.hadoop.security.authentication. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11761) Fix findbugs warnings in org.apache.hadoop.security.authentication

2015-03-26 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11761?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11761:
---
Attachment: HADOOP-11761-032615.patch

This issue looks really weird, as I cleaned up all findbugs warnings in 
HADOOP-11379. After looking into it, it seems the fix in HADOOP-10670 
introduced the warning at the current location. In the findbugs log of 
HADOOP-10670, I noticed the following lines:
{code}
==
==
Determining number of patched Findbugs warnings.
==
==


  Running findbugs in hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core
/home/jenkins/tools/maven/latest/bin/mvn clean test findbugs:findbugs -DskipTests -DHadoopPatchProcess < /dev/null > /home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/../patchprocess/patchFindBugsOutputhadoop-mapreduce-client-core.txt 2>&1
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build
  Running findbugs in hadoop-tools/hadoop-archives
/home/jenkins/tools/maven/latest/bin/mvn clean test findbugs:findbugs -DskipTests -DHadoopPatchProcess < /dev/null > /home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/../patchprocess/patchFindBugsOutputhadoop-archives.txt 2>&1
/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build
Found 0 Findbugs warnings 
(/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/hadoop-tools/hadoop-archives/target/findbugsXml.xml)
Found 0 Findbugs warnings 
(/home/jenkins/jenkins-slave/workspace/PreCommit-HADOOP-Build/hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-core/target/findbugsXml.xml)
{code}
So apparently Jenkins ran findbugs against the wrong modules for HADOOP-10670. I 
reran findbugs locally against hadoop-auth, and after this quick fix the 
warning is gone.

 Fix findbugs warnings in org.apache.hadoop.security.authentication
 --

 Key: HADOOP-11761
 URL: https://issues.apache.org/jira/browse/HADOOP-11761
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Li Lu
Assignee: Li Lu
Priority: Minor
  Labels: findbugs
 Attachments: HADOOP-11761-032615.patch


 As discovered in HADOOP-11748, we need to fix the findbugs warnings in 
 org.apache.hadoop.security.authentication. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11719) [Fsshell] Remove bin/hadoop reference from GenericOptionsParser default help text

2015-03-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11719?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14381875#comment-14381875
 ] 

Hudson commented on HADOOP-11719:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2094 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2094/])
HADOOP-11719.[Fsshell] Remove bin/hadoop reference from GenericOptionsParser 
default help text. Contributed by Brahma Reddy Battula. (harsh: rev 
b4b4fe90569a116c67bfc94fbfbab95b1a0b712a)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/GenericOptionsParser.java
* hadoop-common-project/hadoop-common/CHANGES.txt


 [Fsshell] Remove bin/hadoop reference from GenericOptionsParser default help 
 text
 -

 Key: HADOOP-11719
 URL: https://issues.apache.org/jira/browse/HADOOP-11719
 Project: Hadoop Common
  Issue Type: Improvement
  Components: scripts
Affects Versions: 2.6.0
Reporter: Brahma Reddy Battula
Assignee: Brahma Reddy Battula
Priority: Minor
  Labels: newbie
 Fix For: 2.8.0

 Attachments: HADOOP-11719-001.patch, HDFS-3387.patch, 
 HDFS-3387_updated.patch


 Scenario:
 --
 Execute any fsshell command with invalid options,
 e.g. ./hdfs haadmin -transitionToActive...
 The usage message logged is:
 bin/hadoop command [genericOptions] [commandOptions]...
 Expected: this help message is misleading, since bin/hadoop is not what the 
 user actually ran;
 it would be better to print bin/hdfs. bin/hadoop is deprecated for these commands anyway.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11553) Formalize the shell API

2015-03-26 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-11553:
---
Hadoop Flags: Incompatible change,Reviewed  (was: Incompatible change)

+1 for patch v06.  Thank you for the documentation, Allen.

 Formalize the shell API
 ---

 Key: HADOOP-11553
 URL: https://issues.apache.org/jira/browse/HADOOP-11553
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation, scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
Priority: Blocker
 Attachments: HADOOP-11553-00.patch, HADOOP-11553-01.patch, 
 HADOOP-11553-02.patch, HADOOP-11553-03.patch, HADOOP-11553-04.patch, 
 HADOOP-11553-05.patch, HADOOP-11553-06.patch


 After HADOOP-11485, we need to formally document functions and environment 
 variables that 3rd parties can expect to be able to exist/use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-26 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14382748#comment-14382748
 ] 

Vinod Kumar Vavilapalli commented on HADOOP-11754:
--

Okay, then how about this?
 - In secure mode, we always had the signature file configured by default in 
the default config files, and if that file didn't exist, we failed the daemons 
(in HDFS as well as YARN). We should keep this behavior here.
 - In non-secure mode, before HADOOP-10670, the RM didn't fail the daemon if the 
default signature file didn't exist, but it starts failing after HADOOP-10670. 
We should fix this so the RM does not fail. There are two ways to do this:
-- Don't use the filter at all for the ResourceManager in non-secure mode; 
other daemons already do this. No cookies are then sent to RM clients, which 
should be okay in non-secure mode.
-- Use the filter in the RM in non-secure mode as well, but fall back to the 
RandomSigner-signed cookie as it is today. This can be done by keeping the 
signer-choice code in each of the individual filter-initializers.
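The second option described above can be sketched like this. It is an illustrative sketch only, not Hadoop's actual filter-initializer; the method and property names are hypothetical stand-ins.

```java
import java.security.SecureRandom;
import java.util.HashMap;
import java.util.Map;

public class SignerChoiceSketch {
    static final String SECRET_FILE_KEY = "signature.secret.file";

    // Pick the signer the way option 2 describes: file-backed when the
    // property is set (secure mode), random otherwise (non-secure mode).
    static String chooseSigner(Map<String, String> conf) {
        return conf.containsKey(SECRET_FILE_KEY) ? "file" : "random";
    }

    // A per-process random secret; cookies signed with it are only
    // verifiable by this process, which is acceptable in non-secure mode.
    static byte[] randomSecret() {
        byte[] secret = new byte[32];
        new SecureRandom().nextBytes(secret);
        return secret;
    }

    public static void main(String[] args) {
        Map<String, String> conf = new HashMap<>();
        System.out.println(chooseSigner(conf)); // prints "random"
        conf.put(SECRET_FILE_KEY, "/etc/hadoop/http-secret");
        System.out.println(chooseSigner(conf)); // prints "file"
    }
}
```

The point is that the daemon never fails at startup in non-secure mode: the absence of the file property selects the random signer instead of raising an error.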

 RM fails to start in non-secure mode due to authentication filter failure
 -

 Key: HADOOP-11754
 URL: https://issues.apache.org/jira/browse/HADOOP-11754
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Sangjin Lee
Assignee: Haohui Mai
Priority: Blocker
 Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch



[jira] [Updated] (HADOOP-11257) Update hadoop jar documentation to warn against using it for launching yarn jars

2015-03-26 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-11257:
--
Attachment: HADOOP-11257-branch-2.addendum.001.patch

I attached a patch to echo the warning message to stderr.

 Update hadoop jar documentation to warn against using it for launching yarn 
 jars
 --

 Key: HADOOP-11257
 URL: https://issues.apache.org/jira/browse/HADOOP-11257
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.1.1-beta
Reporter: Allen Wittenauer
Assignee: Masatake Iwasaki
 Fix For: 2.7.0

 Attachments: HADOOP-11257-branch-2.addendum.001.patch, 
 HADOOP-11257.1.patch, HADOOP-11257.1.patch, HADOOP-11257.2.patch, 
 HADOOP-11257.3.patch, HADOOP-11257.4.patch, HADOOP-11257.4.patch


 We should update the hadoop jar documentation to warn against using it for 
 launching yarn jars.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11553) Formalize the shell API

2015-03-26 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11553:
--
   Resolution: Fixed
Fix Version/s: 3.0.0
   Status: Resolved  (was: Patch Available)

This has been committed to trunk.

Thanks for the review!

 Formalize the shell API
 ---

 Key: HADOOP-11553
 URL: https://issues.apache.org/jira/browse/HADOOP-11553
 Project: Hadoop Common
  Issue Type: New Feature
  Components: documentation, scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
Priority: Blocker
 Fix For: 3.0.0

 Attachments: HADOOP-11553-00.patch, HADOOP-11553-01.patch, 
 HADOOP-11553-02.patch, HADOOP-11553-03.patch, HADOOP-11553-04.patch, 
 HADOOP-11553-05.patch, HADOOP-11553-06.patch


 After HADOOP-11485, we need to formally document functions and environment 
 variables that 3rd parties can expect to be able to exist/use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11758) Add options to filter out too much granular tracing spans

2015-03-26 Thread Masatake Iwasaki (JIRA)
Masatake Iwasaki created HADOOP-11758:
-

 Summary: Add options to filter out too much granular tracing spans
 Key: HADOOP-11758
 URL: https://issues.apache.org/jira/browse/HADOOP-11758
 Project: Hadoop Common
  Issue Type: Improvement
  Components: tracing
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor


in order to avoid the queue in the span receiver spilling over



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11758) Add options to filter out too much granular tracing spans

2015-03-26 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11758?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-11758:
--
Attachment: testWriteTraceHooks.html

e.g. DFSOutputStream#writeChunk in testWriteTraceHooks.html.

 Add options to filter out too much granular tracing spans
 -

 Key: HADOOP-11758
 URL: https://issues.apache.org/jira/browse/HADOOP-11758
 Project: Hadoop Common
  Issue Type: Improvement
  Components: tracing
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
 Attachments: testWriteTraceHooks.html


 in order to avoid the queue in the span receiver spilling over



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11757) NFS gateway should shutdown when it can't start UDP or TCP server

2015-03-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14382743#comment-14382743
 ] 

Hadoop QA commented on HADOOP-11757:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12707589/HDFS-7989.002.patch
  against trunk revision 61df1b2.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-nfs.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6005//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6005//console

This message is automatically generated.

 NFS gateway should shutdown when it can't start UDP or TCP server
 -

 Key: HADOOP-11757
 URL: https://issues.apache.org/jira/browse/HADOOP-11757
 Project: Hadoop Common
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.2.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-7989.001.patch, HDFS-7989.002.patch


 Unlike the Portmap class, the Nfs3 class does not shut down when the service can't start.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-26 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14382779#comment-14382779
 ] 

Kai Zheng commented on HADOOP-11754:


It sounds complete! Just a note on the 2nd way: to allow the RM to fall back to 
RandomSigner, we can unset or remove the file property. The current 
AuthenticationFilter will perform the fallback when it does not see the file 
property, so we don't have to bring back the original RM-specific code.
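The property-unsetting idea in this comment can be illustrated as follows. The property name follows Hadoop's real naming convention, but the plain map here is a stand-in for Hadoop's Configuration object and the method name is hypothetical.

```java
import java.util.HashMap;
import java.util.Map;

public class UnsetSecretSketch {
    // Hadoop-style property name; the plain map below stands in for
    // Hadoop's Configuration object.
    static final String SECRET_FILE_KEY =
        "hadoop.http.authentication.signature.secret.file";

    // Build the filter config for non-secure mode with the secret-file
    // property removed; a filter that falls back to a random signer
    // when the property is absent then needs no RM-specific code.
    static Map<String, String> nonSecureFilterConfig(Map<String, String> conf) {
        Map<String, String> filtered = new HashMap<>(conf);
        filtered.remove(SECRET_FILE_KEY);
        return filtered;
    }
}
```

Dropping the property (rather than special-casing the RM) keeps the fallback decision inside the shared filter code, which is the simplification this comment argues for.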

 RM fails to start in non-secure mode due to authentication filter failure
 -

 Key: HADOOP-11754
 URL: https://issues.apache.org/jira/browse/HADOOP-11754
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Sangjin Lee
Assignee: Haohui Mai
Priority: Blocker
 Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch


 

[jira] [Updated] (HADOOP-11553) Formalize the shell API

2015-03-26 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11553?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11553:
--
Issue Type: New Feature  (was: Improvement)

 Formalize the shell API
 ---

 Key: HADOOP-11553
 URL: https://issues.apache.org/jira/browse/HADOOP-11553
 Project: Hadoop Common
  Issue Type: New Feature
  Components: documentation, scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
Priority: Blocker
 Attachments: HADOOP-11553-00.patch, HADOOP-11553-01.patch, 
 HADOOP-11553-02.patch, HADOOP-11553-03.patch, HADOOP-11553-04.patch, 
 HADOOP-11553-05.patch, HADOOP-11553-06.patch


 After HADOOP-11485, we need to formally document functions and environment 
 variables that 3rd parties can expect to be able to exist/use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-26 Thread Zhijie Shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14382640#comment-14382640
 ] 

Zhijie Shen commented on HADOOP-11754:
--

bq. That's a definite change in behavior. If a secret wasn't configured, the 
2.6 and previous filters generated a random one since it was assumed that the 
serving system was a single host.

Agreed. This change is incompatible; it will break the current secure timeline 
server deployment.

 RM fails to start in non-secure mode due to authentication filter failure
 -

 Key: HADOOP-11754
 URL: https://issues.apache.org/jira/browse/HADOOP-11754
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Sangjin Lee
Assignee: Haohui Mai
Priority: Blocker
 Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch


 RM fails to start in the non-secure mode with the following exception:
 {noformat}
 2015-03-25 22:02:42,526 WARN org.mortbay.log: failed RMAuthenticationFilter: 
 javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
 signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
 2015-03-25 22:02:42,526 WARN org.mortbay.log: Failed startup of context 
 org.mortbay.jetty.webapp.WebAppContext@6de50b08{/,jar:file:/Users/sjlee/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-common-3.0.0-SNAPSHOT.jar!/webapps/cluster}
 javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
 signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
   at 
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
   at 
 org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
   at 
 org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
   at 
 org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
   at 
 org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
   at 
 org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
   at org.mortbay.jetty.Server.doStart(Server.java:224)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
   at 
 org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
 Caused by: java.lang.RuntimeException: Could not read signature secret file: 
 /Users/sjlee/hadoop-http-auth-signature-secret
   at 
 org.apache.hadoop.security.authentication.util.FileSignerSecretProvider.init(FileSignerSecretProvider.java:59)
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:264)
   ... 23 more
 ...
 2015-03-25 22:02:42,538 FATAL 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting 
 ResourceManager
 org.apache.hadoop.yarn.webapp.WebAppException: Error starting http server
   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:279)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
   at 
 

[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-26 Thread Vinod Kumar Vavilapalli (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14382648#comment-14382648
 ] 

Vinod Kumar Vavilapalli commented on HADOOP-11754:
--

[~drankye] / [~wheat9],

I may still not have the full picture, but how about this: if all we want in 
HADOOP-10670 is for the webHDFS auth filter to be able to use a file-based 
signer, why don't we implement that functionality there, similarly to 
RMAuthenticationFilterInitializer, instead of changing AuthenticationFilter? 
That should get what you want while avoiding this breakage, even if it isn't ideal.


[jira] [Commented] (HADOOP-11731) Rework the changelog and releasenotes

2015-03-26 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14382660#comment-14382660
 ] 

Colin Patrick McCabe commented on HADOOP-11731:
---

Thank you for tackling this, Allen.  It looks good.

{code}
1   #!/usr/bin/python
{code}

Should be {{#!/usr/bin/env python}}?

{code}
2   #   Licensed under the Apache License, Version 2.0 (the "License");
3   #   you may not use this file except in compliance with the License.
4   #   You may obtain a copy of the License at
5   #
6   #   http://www.apache.org/licenses/LICENSE-2.0
7   #
8   #   Unless required by applicable law or agreed to in writing, software
9   #   distributed under the License is distributed on an "AS IS" BASIS,
10  #   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
11  #   See the License for the specific language governing permissions and
12  #   limitations under the License.
{code}
I realize that you are just copying this from the previous {{relnotes.py}}, but 
we should fix this to match our other license headers.  If you look at 
{{determine-flaky-tests-hadoop.py}}, you can see its header is:

{code}
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
{code}
The text about the {{NOTICE}} file is missing from {{releasedocmaker.py}}.

{code}
296 def main():
297   parser = OptionParser(usage="usage: %prog --version VERSION [--version VERSION2 ...]")
298   parser.add_option("-v", "--version", dest="versions",
299                     action="append", type="string",
300                     help="versions in JIRA to include in releasenotes", metavar="VERSION")
301   parser.add_option("-m", "--master", dest="master", action="store_true",
302                     help="only create the master files")
303   parser.add_option("-i", "--index", dest="index", action="store_true",
304                     help="build an index file")
{code}
Can you add a note to the usage message about which files are generated by this 
script, what their names will be, and where they will be generated?
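One hedged way to do this with {{optparse}} is an epilog on the parser; the 
output file names below are illustrative guesses, not taken from the patch:

```python
from optparse import OptionParser

# Sketch only: the epilog text and file names are assumptions about what
# releasedocmaker.py produces, not confirmed against the patch.
parser = OptionParser(
    usage="usage: %prog --version VERSION [--version VERSION2 ...]",
    epilog="For each --version, writes RELEASENOTES.md and CHANGES.md "
           "into the current working directory.")
parser.add_option("-v", "--version", dest="versions", action="append",
                  type="string", metavar="VERSION",
                  help="versions in JIRA to include in releasenotes")
```

optparse prints the epilog after the option list in {{--help}} output, so the 
note about generated files travels with the usage message.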

{code}
80  found = re.match('^((\d+)(\.\d+)*).*$', data)
81  if (found):
82self.parts = [ int(p) for p in found.group(1).split('.') ]
83  else:
84self.parts = []
{code}
Should we throw an exception if we can't parse the version?
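A minimal sketch of that suggestion, reusing the same regex as the snippet 
above (the class name {{Version}} and the {{parts}} attribute are assumed from 
context, not taken from the patch):

```python
import re

class Version:
    """Parses a dotted version string; raises instead of silently yielding []."""
    def __init__(self, data):
        found = re.match(r'^((\d+)(\.\d+)*).*$', data)
        if not found:
            # Fail loudly on input like "trunk" rather than sorting it as empty.
            raise ValueError("cannot parse version string: %r" % data)
        self.parts = [int(p) for p in found.group(1).split('.')]
```

{{Version("2.7.0").parts}} is {{[2, 7, 0]}}, while {{Version("trunk")}} raises 
ValueError instead of producing an empty, unsortable version.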

{code}
28  def clean(str):
29    return clean(re.sub(namePattern, "", str))
30  
31  def formatComponents(str):
32    str = re.sub(namePattern, '', str).replace("'", "")
33    if str != "":
34      ret = str
35    else:
36      # some markdown parsers don't like empty tables
37      ret = "."
38    return clean(ret)
39  
40  def lessclean(str):
41    str=str.encode('utf-8')
42    str=str.replace("_","\_")
43    str=str.replace("\r","")
44    str=str.rstrip()
45    return str
46  
47  def clean(str):
48    str=lessclean(str)
49    str=str.replace("|","\|")
50    str=str.rstrip()
{code}

I find this a bit confusing.  Can we call the first function something other 
than clean, to avoid having two different functions named clean that do 
different things?  When would I use {{lessclean}} rather than {{clean}}?  It 
seems like only the release notes get the lessclean treatment.  It would be 
helpful to have a comment before the lessclean function explaining when it is 
useful.
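One hedged shape for such a rename — the names {{tableclean}} and {{stripname}} 
are invented here, the {{namePattern}} regex is a stand-in (the real pattern 
lives elsewhere in the script), and the py2-era encode call is dropped so the 
sketch runs on Python 3:

```python
import re

# Stand-in for the script's real namePattern, which strips "(username)" suffixes.
namePattern = re.compile(r' \([a-z0-9]+\)')

def lessclean(s):
    """Minimal markdown escaping for release-notes text: underscores and CRs."""
    s = s.replace("_", "\\_")
    s = s.replace("\r", "")
    return s.rstrip()

def tableclean(s):
    """lessclean plus pipe-escaping, for text destined for markdown table cells."""
    s = lessclean(s)
    s = s.replace("|", "\\|")
    return s.rstrip()

def stripname(s):
    """Drop contributor-name suffixes, then fully clean for table output."""
    return tableclean(re.sub(namePattern, "", s))
```

With distinct names it becomes clear that only release-notes bodies get the 
lighter {{lessclean}} treatment, while table cells go through {{tableclean}}.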

 Rework the changelog and releasenotes
 -

 Key: HADOOP-11731
 URL: https://issues.apache.org/jira/browse/HADOOP-11731
 Project: Hadoop Common
  Issue Type: New Feature
  Components: documentation
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-11731-00.patch, HADOOP-11731-01.patch, 
 HADOOP-11731-03.patch, HADOOP-11731-04.patch


 The current way we generate these build artifacts is awful.  Plus they are 
 ugly and, in the case of release notes, very hard to pick out what is 
 important.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-26 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14382673#comment-14382673
 ] 

Sangjin Lee commented on HADOOP-11754:
--

The current state of things is that SIGNATURE_SECRET_FILE is set by default in 
core-default.xml, so it is always set unless the user explicitly unsets it.


[jira] [Commented] (HADOOP-11553) Formalize the shell API

2015-03-26 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14382635#comment-14382635
 ] 

Hadoop QA commented on HADOOP-11553:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12707572/HADOOP-11553-06.patch
  against trunk revision 87130bf.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6004//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6004//console

This message is automatically generated.

 Formalize the shell API
 ---

 Key: HADOOP-11553
 URL: https://issues.apache.org/jira/browse/HADOOP-11553
 Project: Hadoop Common
  Issue Type: Improvement
  Components: documentation, scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
Priority: Blocker
 Attachments: HADOOP-11553-00.patch, HADOOP-11553-01.patch, 
 HADOOP-11553-02.patch, HADOOP-11553-03.patch, HADOOP-11553-04.patch, 
 HADOOP-11553-05.patch, HADOOP-11553-06.patch


 After HADOOP-11485, we need to formally document functions and environment 
 variables that 3rd parties can expect to be able to exist/use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11691) X86 build of libwinutils is broken

2015-03-26 Thread Chuan Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14382665#comment-14382665
 ] 

Chuan Liu commented on HADOOP-11691:


+1 I have verified the CPU feature is present when building with the latest 
patch. Thanks for fixing the build! 

 X86 build of libwinutils is broken
 --

 Key: HADOOP-11691
 URL: https://issues.apache.org/jira/browse/HADOOP-11691
 Project: Hadoop Common
  Issue Type: Bug
  Components: build, native
Affects Versions: 2.7.0
Reporter: Remus Rusanu
Assignee: Kiran Kumar M R
Priority: Critical
 Attachments: HADOOP-11691-001.patch, HADOOP-11691-002.patch, 
 HADOOP-11691-003.patch


 Hadoop-9922 recently fixed x86 build. After YARN-2190 compiling x86 results 
 in error:
 {code}
 (Link target) -
   
 E:\HW\project\hadoop-common\hadoop-common-project\hadoop-common\target/winutils/hadoopwinutilsvc_s.obj
  : fatal error LNK1112: module machine type 'x64' conflicts with target 
 machine type 'X86' 
 [E:\HW\project\hadoop-common\hadoop-common-project\hadoop-common\src\main\winutils\winutils.vcxproj]
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-26 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14382676#comment-14382676
 ] 

Allen Wittenauer commented on HADOOP-11754:
---

bq.  If all we want in HADOOP-10670 is for the webHDFS auth filter to be able 
to use file-based signer, why don't we implement that functionality there 
similar to RMAuthenticationFilterInitializer instead of changing 
AuthenticationFilter. That should get what you want but avoid this breakage, 
even if it isn't ideal?

... which is pretty much what [~rsasson]'s patch in HDFS-5796 does. :)




[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-26 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14382692#comment-14382692
 ] 

Kai Zheng commented on HADOOP-11754:


In the long term, ideally, as HADOOP-10670 intended, the signature secret file 
handling should be taken care of in the {{AuthenticationFilter}}, so that all 
the Hadoop web UIs (HDFS, YARN) can share the same common configuration and 
logic, which would also make an advanced SSO effect achievable. The messy 
details could then all be handled in one common place instead of being repeated 
everywhere.

For now, to keep the original behavior for this release, as I said before, we 
can add some logic to the RM along these lines: if we are not in secure mode 
and the signature file property is set, remove the property. To be more 
careful, we could even check whether the specified file is the original default 
file. Since such a change would only touch the RM and the Timeline server, it 
would not affect other places.



[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-26 Thread Zhijie Shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14382659#comment-14382659
 ] 

Zhijie Shen commented on HADOOP-11754:
--

BTW, SIGNATURE_SECRET_FILE being unset and SIGNATURE_SECRET_FILE pointing to a 
non-existent file mean different things. The former indicates using the 
default random secret, while the latter is regarded as a wrong configuration.
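That distinction can be sketched as a hypothetical helper (illustrative names only, not the actual Hadoop code):

```java
import java.io.File;

// Hypothetical helper illustrating the two cases Zhijie distinguishes:
// an unset property vs. a property pointing at a missing file.
public class SecretFileCheck {
    /** Returns "random" when the property is unset, throws when it points
     *  at a missing file, and returns the path otherwise. */
    static String resolveSecretSource(String signatureSecretFile) {
        if (signatureSecretFile == null || signatureSecretFile.isEmpty()) {
            return "random";               // unset: fall back to a random secret
        }
        if (!new File(signatureSecretFile).exists()) {
            // set but absent: treated as a misconfiguration
            throw new IllegalStateException(
                "Could not read signature secret file: " + signatureSecretFile);
        }
        return signatureSecretFile;        // set and present: use the file
    }
}
```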

 RM fails to start in non-secure mode due to authentication filter failure
 -

 Key: HADOOP-11754
 URL: https://issues.apache.org/jira/browse/HADOOP-11754
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Sangjin Lee
Assignee: Haohui Mai
Priority: Blocker
 Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch


 RM fails to start in the non-secure mode with the following exception:
 {noformat}
 2015-03-25 22:02:42,526 WARN org.mortbay.log: failed RMAuthenticationFilter: 
 javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
 signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
 2015-03-25 22:02:42,526 WARN org.mortbay.log: Failed startup of context 
 org.mortbay.jetty.webapp.WebAppContext@6de50b08{/,jar:file:/Users/sjlee/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-common-3.0.0-SNAPSHOT.jar!/webapps/cluster}
 javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
 signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
   at 
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
   at 
 org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
   at 
 org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
   at 
 org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
   at 
 org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
   at 
 org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
   at org.mortbay.jetty.Server.doStart(Server.java:224)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
   at 
 org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
 Caused by: java.lang.RuntimeException: Could not read signature secret file: 
 /Users/sjlee/hadoop-http-auth-signature-secret
   at 
 org.apache.hadoop.security.authentication.util.FileSignerSecretProvider.init(FileSignerSecretProvider.java:59)
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:264)
   ... 23 more
 ...
 2015-03-25 22:02:42,538 FATAL 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting 
 ResourceManager
 org.apache.hadoop.yarn.webapp.WebAppException: Error starting http server
   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:279)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
   at 
 

[jira] [Commented] (HADOOP-11757) NFS gateway should shutdown when it can't start UDP or TCP server

2015-03-26 Thread Brandon Li (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11757?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14382678#comment-14382678
 ] 

Brandon Li commented on HADOOP-11757:
-

Moved this JIRA from HDFS to COMMON since we only changed the common code.

 NFS gateway should shutdown when it can't start UDP or TCP server
 -

 Key: HADOOP-11757
 URL: https://issues.apache.org/jira/browse/HADOOP-11757
 Project: Hadoop Common
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.2.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-7989.001.patch, HDFS-7989.002.patch


 Unlike the Portmap, Nfs3 class does shutdown when the service can't start.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Moved] (HADOOP-11757) NFS gateway should shutdown when it can't start UDP or TCP server

2015-03-26 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11757?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li moved HDFS-7989 to HADOOP-11757:
---

  Component/s: (was: nfs)
   nfs
Affects Version/s: (was: 2.2.0)
   2.2.0
  Key: HADOOP-11757  (was: HDFS-7989)
  Project: Hadoop Common  (was: Hadoop HDFS)

 NFS gateway should shutdown when it can't start UDP or TCP server
 -

 Key: HADOOP-11757
 URL: https://issues.apache.org/jira/browse/HADOOP-11757
 Project: Hadoop Common
  Issue Type: Bug
  Components: nfs
Affects Versions: 2.2.0
Reporter: Brandon Li
Assignee: Brandon Li
 Attachments: HDFS-7989.001.patch, HDFS-7989.002.patch


 Unlike the Portmap, Nfs3 class does shutdown when the service can't start.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11257) Update hadoop jar documentation to warn against using it for launching yarn jars

2015-03-26 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14382693#comment-14382693
 ] 

Colin Patrick McCabe commented on HADOOP-11257:
---

I have no objection to using stderr rather than stdout, but I also think Hive 
should be using yarn jar to launch yarn jars.  If you post a patch to send 
this to stderr, I will review it.

 Update hadoop jar documentation to warn against using it for launching yarn 
 jars
 --

 Key: HADOOP-11257
 URL: https://issues.apache.org/jira/browse/HADOOP-11257
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.1.1-beta
Reporter: Allen Wittenauer
Assignee: Masatake Iwasaki
 Fix For: 2.7.0

 Attachments: HADOOP-11257.1.patch, HADOOP-11257.1.patch, 
 HADOOP-11257.2.patch, HADOOP-11257.3.patch, HADOOP-11257.4.patch, 
 HADOOP-11257.4.patch


 We should update the hadoop jar documentation to warn against using it for 
 launching yarn jars.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-26 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14382700#comment-14382700
 ] 

Kai Zheng commented on HADOOP-11754:


Sorry, let me correct myself.
bq. we can have some logic in RM like this:...
I mean, the fix logic could be: if 1) it's not in secure mode, 2) **the 
signature file property is set but the file is absent**, and optionally 3) 
it's the default property value (not set explicitly by the user), then we may 
remove the property.
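A minimal sketch of that proposed guard, with a hypothetical property key and default value (this is not the actual patch):

```java
import java.io.File;
import java.util.Map;

// Sketch of Kai's proposed fix logic: in non-secure mode, if the
// signature-file property is set but the file is absent, and the value is
// still the default, drop the property so the filter can fall back to a
// random secret. Key and default below are illustrative only.
public class SecretPropertyGuard {
    static final String KEY = "signature.secret.file";
    static final String DEFAULT =
        "${user.home}/hadoop-http-auth-signature-secret";

    static void maybeRemove(Map<String, String> conf, boolean secureMode) {
        String value = conf.get(KEY);
        if (!secureMode                                    // 1) not secure mode
                && value != null
                && !new File(value).exists()               // 2) set but absent
                && value.equals(DEFAULT)) {                // 3) default value
            conf.remove(KEY);          // fall back to the random secret path
        }
    }
}
```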

 RM fails to start in non-secure mode due to authentication filter failure
 -

 Key: HADOOP-11754
 URL: https://issues.apache.org/jira/browse/HADOOP-11754
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Sangjin Lee
Assignee: Haohui Mai
Priority: Blocker
 Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch


 RM fails to start in the non-secure mode with the following exception:
 {noformat}
 2015-03-25 22:02:42,526 WARN org.mortbay.log: failed RMAuthenticationFilter: 
 javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
 signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
 2015-03-25 22:02:42,526 WARN org.mortbay.log: Failed startup of context 
 org.mortbay.jetty.webapp.WebAppContext@6de50b08{/,jar:file:/Users/sjlee/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-common-3.0.0-SNAPSHOT.jar!/webapps/cluster}
 javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
 signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
   at 
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
   at 
 org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
   at 
 org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
   at 
 org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
   at 
 org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
   at 
 org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
   at org.mortbay.jetty.Server.doStart(Server.java:224)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
   at 
 org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
 Caused by: java.lang.RuntimeException: Could not read signature secret file: 
 /Users/sjlee/hadoop-http-auth-signature-secret
   at 
 org.apache.hadoop.security.authentication.util.FileSignerSecretProvider.init(FileSignerSecretProvider.java:59)
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:264)
   ... 23 more
 ...
 2015-03-25 22:02:42,538 FATAL 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting 
 ResourceManager
 org.apache.hadoop.yarn.webapp.WebAppException: Error starting http server
   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:279)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
   at 
 

[jira] [Commented] (HADOOP-11660) Add support for hardware crc on ARM aarch64 architecture

2015-03-26 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11660?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14382721#comment-14382721
 ] 

Colin Patrick McCabe commented on HADOOP-11660:
---

Thank you for your patience, [~enevill].  I think this is almost ready to 
commit.

{code}
168 ELSEIF (CMAKE_SYSTEM_PROCESSOR STREQUAL aarch64)
169   set(BULK_CRC_ARCH_SOURCE_FIlE ${D}/util/bulk_crc32_aarch64.c)
170 ENDIF()
{code}
Can you put in a {{MESSAGE}} here that explains that the architecture is 
unsupported in the ELSE case?  We certainly don't want to be losing hardware 
acceleration without being aware of it.

{{bulk_crc32_aarch64.c}}: you should include {{stdint.h}} here for 
{{uint8_t}}, etc., even though some other header is probably pulling it in now 
by accident.

{{bulk_crc32_x86.c}}: I would really prefer not to wrap this in a giant {{#if 
defined(__GNUC__) && !defined(__FreeBSD__)}}, especially since we're not 
wrapping the ARM version like that.  If people want this to be compiler- and 
OS-specific, it would be better to do it at the CMake level.  I would say just 
take that out and let people fix it if it becomes a problem for them.

Can you post before / after performance numbers for x86_64?  Maybe you can 
instrument test_bulk_crc32.c to produce those numbers.

It looks like when this was done previously, the test code was not checked in.  
See:
https://issues.apache.org/jira/browse/HADOOP-7446?focusedCommentId=13084519&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-13084519

I'm sorry to dump another task on you, but my co-workers will kill me if I 
regress checksum performance.

Thanks again for working on this.  As soon as we verify that we haven't 
regressed perf, and made those minor changes, we should be good to go.
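A rough Java analogue of the kind of measurement being asked for — checksumming a 1 MB buffer in 512-byte chunks for 1000 iterations — using java.util.zip.CRC32 purely to illustrate the methodology (the real benchmark would instrument the native test_bulk_crc32.c):

```java
import java.util.zip.CRC32;

// Times bulk CRC over a 1 MB buffer, 512 bytes per checksum, 1000 iterations,
// mirroring the benchmark described in the issue. Absolute numbers here are
// for the JDK's CRC32, not Hadoop's native bulk CRC.
public class CrcBench {
    public static double run() {
        byte[] data = new byte[1024 * 1024];
        long start = System.nanoTime();
        for (int iter = 0; iter < 1000; iter++) {
            for (int off = 0; off < data.length; off += 512) {
                CRC32 crc = new CRC32();
                crc.update(data, off, 512);   // one 512-byte checksum
            }
        }
        return (System.nanoTime() - start) / 1e9;   // elapsed seconds
    }

    public static void main(String[] args) {
        System.out.printf(
            "CRC %d bytes @ 512 bytes per checksum X 1000 iterations = %.2f%n",
            1024 * 1024, run());
    }
}
```

Running it once before and once after a patch on the same machine gives comparable before/after numbers in the same format as the issue description.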

 Add support for hardware crc on ARM aarch64 architecture
 

 Key: HADOOP-11660
 URL: https://issues.apache.org/jira/browse/HADOOP-11660
 Project: Hadoop Common
  Issue Type: Improvement
  Components: native
Affects Versions: 3.0.0
 Environment: ARM aarch64 development platform
Reporter: Edward Nevill
Assignee: Edward Nevill
Priority: Minor
  Labels: performance
 Attachments: jira-11660.patch

   Original Estimate: 48h
  Remaining Estimate: 48h

 This patch adds support for hardware CRC for ARM's new 64-bit architecture.
 The patch is completely conditionalized on __aarch64__.
 I have only added support for the non-pipelined version, as I benchmarked the 
 pipelined version on aarch64 and it showed no performance improvement.
 The aarch64 version supports both Castagnoli and Zlib CRCs, as both of these 
 are supported on ARM aarch64 hardware.
 To benchmark this I modified the test_bulk_crc32 test to print out the time 
 taken to CRC a 1MB dataset 1000 times.
 Before:
 CRC 1048576 bytes @ 512 bytes per checksum X 1000 iterations = 2.55
 CRC 1048576 bytes @ 512 bytes per checksum X 1000 iterations = 2.55
 After:
 CRC 1048576 bytes @ 512 bytes per checksum X 1000 iterations = 0.57
 CRC 1048576 bytes @ 512 bytes per checksum X 1000 iterations = 0.57
 So this represents a 5X performance improvement on raw CRC calculation.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11257) Update hadoop jar documentation to warn against using it for launching yarn jars

2015-03-26 Thread Chris Nauroth (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14382718#comment-14382718
 ] 

Chris Nauroth commented on HADOOP-11257:


Shall we just revert the script changes from branch-2 and branch-2.7 since this 
has proven to be a backwards-incompatible change?  We can still make the script 
changes in trunk, and the documentation part of the change is still good for 
all branches.

 Update hadoop jar documentation to warn against using it for launching yarn 
 jars
 --

 Key: HADOOP-11257
 URL: https://issues.apache.org/jira/browse/HADOOP-11257
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.1.1-beta
Reporter: Allen Wittenauer
Assignee: Masatake Iwasaki
 Fix For: 2.7.0

 Attachments: HADOOP-11257.1.patch, HADOOP-11257.1.patch, 
 HADOOP-11257.2.patch, HADOOP-11257.3.patch, HADOOP-11257.4.patch, 
 HADOOP-11257.4.patch


 We should update the hadoop jar documentation to warn against using it for 
 launching yarn jars.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11748) Secrets for auth cookies can be specified in clear text

2015-03-26 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HADOOP-11748:

Attachment: HADOOP-11748.001.patch

 Secrets for auth cookies can be specified in clear text
 ---

 Key: HADOOP-11748
 URL: https://issues.apache.org/jira/browse/HADOOP-11748
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Li Lu
Priority: Critical
 Attachments: HADOOP-11748-032615-poc.patch, HADOOP-11748.001.patch


 Based on the discussion on HADOOP-10670, this jira proposes to remove 
 {{StringSecretProvider}} as it opens up possibilities for misconfiguration 
 and security vulnerabilities.
 {quote}
 My understanding is that the use case of inlining the secret is never 
 supported. The property is used to pass the secret internally. The way it 
 works before HADOOP-10868 is the following:
 * Users specify the initializer of the authentication filter in the 
 configuration.
 * AuthenticationFilterInitializer reads the secret file. The server will not 
 start if the secret file does not exist. The initializer will set the 
 property if it reads the file correctly.
 * There is no way to specify the secret in the configuration out-of-the-box – 
 the secret is always overwritten by AuthenticationFilterInitializer.
 {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11748) Secrets for auth cookies can be specified in clear text

2015-03-26 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HADOOP-11748:

Status: Patch Available  (was: Open)

 Secrets for auth cookies can be specified in clear text
 ---

 Key: HADOOP-11748
 URL: https://issues.apache.org/jira/browse/HADOOP-11748
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Li Lu
Priority: Critical
 Attachments: HADOOP-11748-032615-poc.patch, HADOOP-11748.001.patch


 Based on the discussion on HADOOP-10670, this jira proposes to remove 
 {{StringSecretProvider}} as it opens up possibilities for misconfiguration 
 and security vulnerabilities.
 {quote}
 My understanding is that the use case of inlining the secret is never 
 supported. The property is used to pass the secret internally. The way it 
 works before HADOOP-10868 is the following:
 * Users specify the initializer of the authentication filter in the 
 configuration.
 * AuthenticationFilterInitializer reads the secret file. The server will not 
 start if the secret file does not exist. The initializer will set the 
 property if it reads the file correctly.
 * There is no way to specify the secret in the configuration out-of-the-box – 
 the secret is always overwritten by AuthenticationFilterInitializer.
 {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11748) Secrets for auth cookies can be specified in clear text

2015-03-26 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14382936#comment-14382936
 ] 

Haohui Mai commented on HADOOP-11748:
-

Continuing [~gtCarrera]'s work to fix the unit tests.

 Secrets for auth cookies can be specified in clear text
 ---

 Key: HADOOP-11748
 URL: https://issues.apache.org/jira/browse/HADOOP-11748
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Li Lu
Priority: Critical
 Attachments: HADOOP-11748-032615-poc.patch, HADOOP-11748.001.patch


 Based on the discussion on HADOOP-10670, this jira proposes to remove 
 {{StringSecretProvider}} as it opens up possibilities for misconfiguration 
 and security vulnerabilities.
 {quote}
 My understanding is that the use case of inlining the secret is never 
 supported. The property is used to pass the secret internally. The way it 
 works before HADOOP-10868 is the following:
 * Users specify the initializer of the authentication filter in the 
 configuration.
 * AuthenticationFilterInitializer reads the secret file. The server will not 
 start if the secret file does not exist. The initializer will set the 
 property if it reads the file correctly.
 * There is no way to specify the secret in the configuration out-of-the-box – 
 the secret is always overwritten by AuthenticationFilterInitializer.
 {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11748) Secrets for auth cookies can be specified in clear text

2015-03-26 Thread Li Lu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14382953#comment-14382953
 ] 

Li Lu commented on HADOOP-11748:


Thanks [~wheat9] for continuing on this. The fix on TestAuthenticationFilter 
looks good to me. 

 Secrets for auth cookies can be specified in clear text
 ---

 Key: HADOOP-11748
 URL: https://issues.apache.org/jira/browse/HADOOP-11748
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Li Lu
Priority: Critical
 Attachments: HADOOP-11748-032615-poc.patch, HADOOP-11748.001.patch


 Based on the discussion on HADOOP-10670, this jira proposes to remove 
 {{StringSecretProvider}} as it opens up possibilities for misconfiguration 
 and security vulnerabilities.
 {quote}
 My understanding is that the use case of inlining the secret is never 
 supported. The property is used to pass the secret internally. The way it 
 works before HADOOP-10868 is the following:
 * Users specify the initializer of the authentication filter in the 
 configuration.
 * AuthenticationFilterInitializer reads the secret file. The server will not 
 start if the secret file does not exist. The initializer will set the 
 property if it reads the file correctly.
 * There is no way to specify the secret in the configuration out-of-the-box – 
 the secret is always overwritten by AuthenticationFilterInitializer.
 {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11740) Combine erasure encoder and decoder interfaces

2015-03-26 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HADOOP-11740:
---
Attachment: HADOOP-11740-000.patch

This initial patch simply removes {{ErasureEncoder}} and {{ErasureDecoder}}. I 
think the following further simplifications are possible:
# We can get rid of {{ErasureCoder}} since it has a single subclass now 
({{AbstractErasureCoder}}).
# Similarly, maybe we can get rid of {{ErasureCodingStep}} since 
{{AbstractErasureCodingStep}} provides enough abstraction anyway.
# If {{ECBlockGroup}} can provide erased indices, we can further combine the 
encoding and decoding classes.

 Combine erasure encoder and decoder interfaces
 --

 Key: HADOOP-11740
 URL: https://issues.apache.org/jira/browse/HADOOP-11740
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: io
Reporter: Zhe Zhang
Assignee: Zhe Zhang
 Attachments: HADOOP-11740-000.patch


 Rationale [discussed | 
 https://issues.apache.org/jira/browse/HDFS-7337?focusedCommentId=14376540&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14376540]
  under HDFS-7337.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11553) Formalize the shell API

2015-03-26 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14382815#comment-14382815
 ] 

Hudson commented on HADOOP-11553:
-

FAILURE: Integrated in Hadoop-trunk-Commit #7443 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7443/])
HADOOP-11553. Foramlize the shell API (aw) (aw: rev 
b30ca8ce0e0d435327e179f0877bd58fa3896793)
* hadoop-common-project/hadoop-common/src/site/markdown/UnixShellGuide.md
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
* hadoop-project/src/site/site.xml
* dev-support/shelldocs.py
* hadoop-common-project/hadoop-common/pom.xml
HADOOP-11553 addendum fix the typo in the changes file (aw: rev 
5695c7a541c1a3092040523446f1ba689fb495e3)
* hadoop-common-project/hadoop-common/CHANGES.txt


 Formalize the shell API
 ---

 Key: HADOOP-11553
 URL: https://issues.apache.org/jira/browse/HADOOP-11553
 Project: Hadoop Common
  Issue Type: New Feature
  Components: documentation, scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
Priority: Blocker
 Fix For: 3.0.0

 Attachments: HADOOP-11553-00.patch, HADOOP-11553-01.patch, 
 HADOOP-11553-02.patch, HADOOP-11553-03.patch, HADOOP-11553-04.patch, 
 HADOOP-11553-05.patch, HADOOP-11553-06.patch


 After HADOOP-11485, we need to formally document functions and environment 
 variables that 3rd parties can expect to be able to exist/use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-26 Thread Zhijie Shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14382879#comment-14382879
 ] 

Zhijie Shen commented on HADOOP-11754:
--

bq. There are two ways to do this

Prefer the second way. We still want to load the auth filter with the pseudo 
auth handler to accept user.name=blah blah. Moreover, before HADOOP-10670, the 
semantics were to fall back to a random secret if no customized secret was 
given, whether it came from the config directly or was read from a configured 
secret file. After that jira, the semantics changed to also fail when an error 
happens while reading the secret file. So previously an empty secret file 
would work. Now, even though no read failure happens, I'm afraid an empty 
secret file will still bring down the auth filter with a null secret object.
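The empty-file hazard can be illustrated with a hypothetical reader (this is not the actual FileSignerSecretProvider code):

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical reader illustrating the concern: an empty secret file reads
// "successfully" (no IOException), yet yields no usable secret.
public class EmptySecretRead {
    static byte[] readSecret(Path file) {
        final byte[] secret;
        try {
            secret = Files.readAllBytes(file);   // empty file: no read failure
        } catch (IOException e) {
            throw new UncheckedIOException(
                "Could not read signature secret file: " + file, e);
        }
        // With no random-secret fallback at this point, a zero-length file
        // leaves the filter holding a null secret.
        return secret.length == 0 ? null : secret;
    }
}
```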



 RM fails to start in non-secure mode due to authentication filter failure
 -

 Key: HADOOP-11754
 URL: https://issues.apache.org/jira/browse/HADOOP-11754
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Sangjin Lee
Assignee: Haohui Mai
Priority: Blocker
 Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch


 RM fails to start in the non-secure mode with the following exception:
 {noformat}
 2015-03-25 22:02:42,526 WARN org.mortbay.log: failed RMAuthenticationFilter: 
 javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
 signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
 2015-03-25 22:02:42,526 WARN org.mortbay.log: Failed startup of context 
 org.mortbay.jetty.webapp.WebAppContext@6de50b08{/,jar:file:/Users/sjlee/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-common-3.0.0-SNAPSHOT.jar!/webapps/cluster}
 javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
 signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
   at 
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
   at 
 org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
   at 
 org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
   at 
 org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
   at 
 org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
   at 
 org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
   at org.mortbay.jetty.Server.doStart(Server.java:224)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
   at 
 org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
 Caused by: java.lang.RuntimeException: Could not read signature secret file: 
 /Users/sjlee/hadoop-http-auth-signature-secret
   at 
 org.apache.hadoop.security.authentication.util.FileSignerSecretProvider.init(FileSignerSecretProvider.java:59)
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:264)
   ... 23 more
 ...
 2015-03-25 22:02:42,538 FATAL 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting 
 ResourceManager
 org.apache.hadoop.yarn.webapp.WebAppException: Error starting http 

[jira] [Updated] (HADOOP-11257) Update hadoop jar documentation to warn against using it for launching yarn jars

2015-03-26 Thread Vinod Kumar Vavilapalli (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11257?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinod Kumar Vavilapalli updated HADOOP-11257:
-
Priority: Blocker  (was: Major)

Marked as a blocker for 2.7. I think we should get in the patch that prints it 
to stderr.

 Update hadoop jar documentation to warn against using it for launching yarn 
 jars
 --

 Key: HADOOP-11257
 URL: https://issues.apache.org/jira/browse/HADOOP-11257
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.1.1-beta
Reporter: Allen Wittenauer
Assignee: Masatake Iwasaki
Priority: Blocker
 Fix For: 2.7.0

 Attachments: HADOOP-11257-branch-2.addendum.001.patch, 
 HADOOP-11257.1.patch, HADOOP-11257.1.patch, HADOOP-11257.2.patch, 
 HADOOP-11257.3.patch, HADOOP-11257.4.patch, HADOOP-11257.4.patch


 We should update the hadoop jar documentation to warn against using it for 
 launching yarn jars.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11748) Secrets for auth cookies can be specified in clear text

2015-03-26 Thread Li Lu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11748?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Li Lu updated HADOOP-11748:
---
Attachment: HADOOP-11748-032615-poc.patch

Did some work to change the {{StringSecretProvider}} class to be test-only. 
Most of the work is done, but TestAuthenticationFilter is failing because we're 
changing the default filters. For a comprehensive fix we need to change the 
mockito settings in TestAuthenticationFilter so that {{StringSecretProvider}}s 
are created in the {{config}} objects. 

 Secrets for auth cookies can be specified in clear text
 ---

 Key: HADOOP-11748
 URL: https://issues.apache.org/jira/browse/HADOOP-11748
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Li Lu
Priority: Critical
 Attachments: HADOOP-11748-032615-poc.patch


 Based on the discussion on HADOOP-10670, this jira proposes to remove 
 {{StringSecretProvider}} as it opens up possibilities for misconfiguration 
 and security vulnerabilities.
 {quote}
 My understanding is that the use case of inlining the secret is never 
 supported. The property is used to pass the secret internally. The way it 
 works before HADOOP-10868 is the following:
 * Users specify the initializer of the authentication filter in the 
 configuration.
 * AuthenticationFilterInitializer reads the secret file. The server will not 
 start if the secret file does not exist. The initializer will set the 
 property only if it reads the file correctly.
 * There is no way to specify the secret in the configuration out-of-the-box – 
 the secret is always overwritten by AuthenticationFilterInitializer.
 {quote}
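The quoted flow can be condensed into a minimal sketch. This is illustrative only: the property key and method names ({{SIGNATURE_SECRET}}, {{loadSecret}}) are hypothetical, not the actual Hadoop identifiers. The point it shows is that the initializer fails fast when the secret file is absent and unconditionally overwrites any value the user inlined in the configuration.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

public class SecretInitSketch {
    // Illustrative property name, not the real Hadoop config key.
    static final String SIGNATURE_SECRET = "signature.secret";

    // Mirrors the quoted flow: refuse to start if the file is missing,
    // otherwise overwrite whatever value was inlined in the configuration.
    static void loadSecret(Path secretFile, Properties config) throws IOException {
        if (!Files.exists(secretFile)) {
            throw new IOException("Could not read signature secret file: " + secretFile);
        }
        String secret = new String(Files.readAllBytes(secretFile), "UTF-8").trim();
        config.setProperty(SIGNATURE_SECRET, secret); // inlined value always replaced
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("signature-secret", ".txt");
        Files.write(tmp, "s3cret".getBytes("UTF-8"));
        Properties conf = new Properties();
        conf.setProperty(SIGNATURE_SECRET, "inlined-value"); // user attempt, ignored
        loadSecret(tmp, conf);
        System.out.println(conf.getProperty(SIGNATURE_SECRET));
    }
}
```

Under this flow an inlined secret can never survive, which is why removing {{StringSecretProvider}} closes the misconfiguration hole rather than breaking a supported use case.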



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-26 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14382895#comment-14382895
 ] 

Kai Zheng commented on HADOOP-11754:


You're right. To be safer, we may also need to check whether the file is empty 
when deciding to unset the property. Kind of dirty.

Maybe we can use the 2nd way as a workaround for this release, to keep the 
original behavior, and the 1st way in the next release to clean things up for 
good?
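The empty-file check discussed above could look like the following sketch (class and method names are hypothetical, not from the actual patch): unset the property only when the configured secret file cannot actually supply a secret, i.e. it is missing or empty.

```java
import java.io.File;
import java.nio.file.Files;

public class SecretFileCheck {
    // Unset the secret-file property only when the file can't supply a secret.
    static boolean shouldUnset(File secretFile) {
        return !secretFile.exists() || secretFile.length() == 0;
    }

    public static void main(String[] args) throws Exception {
        File f = File.createTempFile("secret", ".txt");
        System.out.println(shouldUnset(f));                 // empty file -> unset
        Files.write(f.toPath(), "abc".getBytes("UTF-8"));
        System.out.println(shouldUnset(f));                 // has content -> keep
        f.delete();
        System.out.println(shouldUnset(f));                 // missing -> unset
    }
}
```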

 RM fails to start in non-secure mode due to authentication filter failure
 -

 Key: HADOOP-11754
 URL: https://issues.apache.org/jira/browse/HADOOP-11754
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Sangjin Lee
Assignee: Haohui Mai
Priority: Blocker
 Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch


 RM fails to start in the non-secure mode with the following exception:
 {noformat}
 2015-03-25 22:02:42,526 WARN org.mortbay.log: failed RMAuthenticationFilter: 
 javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
 signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
 2015-03-25 22:02:42,526 WARN org.mortbay.log: Failed startup of context 
 org.mortbay.jetty.webapp.WebAppContext@6de50b08{/,jar:file:/Users/sjlee/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-common-3.0.0-SNAPSHOT.jar!/webapps/cluster}
 javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
 signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
   at 
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
   at 
 org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
   at 
 org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
   at 
 org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
   at 
 org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
   at 
 org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
   at org.mortbay.jetty.Server.doStart(Server.java:224)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
   at 
 org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
 Caused by: java.lang.RuntimeException: Could not read signature secret file: 
 /Users/sjlee/hadoop-http-auth-signature-secret
   at 
 org.apache.hadoop.security.authentication.util.FileSignerSecretProvider.init(FileSignerSecretProvider.java:59)
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:264)
   ... 23 more
 ...
 2015-03-25 22:02:42,538 FATAL 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting 
 ResourceManager
 org.apache.hadoop.yarn.webapp.WebAppException: Error starting http server
   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:279)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
   at 
 

[jira] [Work started] (HADOOP-11740) Combine erasure encoder and decoder interfaces

2015-03-26 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11740?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Work on HADOOP-11740 started by Zhe Zhang.
--
 Combine erasure encoder and decoder interfaces
 --

 Key: HADOOP-11740
 URL: https://issues.apache.org/jira/browse/HADOOP-11740
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: io
Reporter: Zhe Zhang
Assignee: Zhe Zhang

 Rationale [discussed | 
 https://issues.apache.org/jira/browse/HDFS-7337?focusedCommentId=14376540&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14376540]
  under HDFS-7337.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

