[jira] [Commented] (HADOOP-10392) Use FileSystem#makeQualified(Path) instead of Path#makeQualified(FileSystem)

2015-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10392?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383435#comment-14383435
 ] 

Hadoop QA commented on HADOOP-10392:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12707690/HADOOP-10392.8.patch
  against trunk revision af618f2.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 24 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common 
hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient
 hadoop-tools/hadoop-archives hadoop-tools/hadoop-aws 
hadoop-tools/hadoop-gridmix hadoop-tools/hadoop-openstack 
hadoop-tools/hadoop-rumen hadoop-tools/hadoop-streaming:

  org.apache.hadoop.mapred.TestMRIntermediateDataEncryption
  org.apache.hadoop.mapred.TestReduceFetch
  org.apache.hadoop.mapred.TestMiniMRClasspath
  org.apache.hadoop.mapreduce.lib.output.TestJobOutputCommitter
  org.apache.hadoop.mapreduce.v2.TestSpeculativeExecution
  org.apache.hadoop.mapreduce.v2.TestMROldApiJobs
  org.apache.hadoop.mapreduce.v2.TestMRJobsWithProfiler
  org.apache.hadoop.mapreduce.TestMapReduceLazyOutput
  org.apache.hadoop.mapreduce.security.TestMRCredentials
  org.apache.hadoop.ipc.TestMRCJCSocketFactory
  org.apache.hadoop.mapred.TestMiniMRBringup
  org.apache.hadoop.conf.TestNoDefaultsJobConf
  org.apache.hadoop.mapreduce.TestChild
  org.apache.hadoop.mapred.TestMRTimelineEventHandling
  org.apache.hadoop.mapred.TestLazyOutput
  org.apache.hadoop.mapred.TestJobCleanup
  org.apache.hadoop.mapred.TestJobCounters
  org.apache.hadoop.mapreduce.v2.TestMiniMRProxyUser
  org.apache.hadoop.mapred.TestJobName
  org.apache.hadoop.mapreduce.v2.TestUberAM
  org.apache.hadoop.mapreduce.v2.TestMRJobsWithHistoryService
  org.apache.hadoop.mapreduce.v2.TestMRAppWithCombiner
  org.apache.hadoop.mapred.TestMiniMRWithDFSWithDistinctUsers
  org.apache.hadoop.mapreduce.v2.TestRMNMInfo
  org.apache.hadoop.mapred.TestMerge
  org.apache.hadoop.mapreduce.v2.TestNonExistentJob
  org.apache.hadoop.mapred.TestReduceFetchFromPartialMem
  org.apache.hadoop.mapred.TestClusterMapReduceTestCase
  org.apache.hadoop.mapreduce.security.TestBinaryTokenFile
  org.apache.hadoop.mapred.TestJobSysDirWithDFS
  org.apache.hadoop.mapred.TestMiniMRClientCluster
  org.apache.hadoop.mapreduce.TestLargeSort
  org.apache.hadoop.mapreduce.TestMRJobClient
  org.apache.hadoop.mapreduce.v2.TestMRJobs
  
org.apache.hadoop.mapreduce.v2.TestMRAMWithNonNormalizedCapabilities
  org.apache.hadoop.mapred.TestClusterMRNotification
  org.apache.hadoop.mapred.TestNetworkedJob
  org.apache.hadoop.mapreduce.security.ssl.TestEncryptedShuffle
  org.apache.hadoop.mapred.TestSpecialCharactersInOutputPath
  org.apache.hadoop.mapred.TestMiniMRChildTask
  org.apache.hadoop.mapred.gridmix.TestDistCacheEmulation
  org.apache.hadoop.mapred.gridmix.TestLoadJob
  org.apache.hadoop.mapred.gridmix.TestGridmixSubmission
  org.apache.hadoop.mapred.gridmix.TestSleepJob
  org.apache.hadoop.streaming.TestFileArgs
  org.apache.hadoop.streaming.TestMultipleCachefiles
  org.apache.hadoop.streaming.TestMultipleArchiveFiles
  org.apache.hadoop.streaming.TestSymLink
  org.apache.hadoop.streaming.TestStreamingBadRecords

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6010//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6010//console

This message is automatically generated.
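
For reference, a minimal sketch of the substitution named in the issue title 
(class and variable names are illustrative only):
{code}
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class MakeQualifiedSketch {
  @SuppressWarnings("deprecation")
  static Path before(FileSystem fs, Path path) {
    // old form: Path#makeQualified(FileSystem)
    return path.makeQualified(fs);
  }

  static Path after(FileSystem fs, Path path) {
    // preferred form: FileSystem#makeQualified(Path)
    return fs.makeQualified(path);
  }
}
{code}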

[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-27 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383394#comment-14383394
 ] 

Kai Zheng commented on HADOOP-11754:


The logic looks good. It's a smart way to tell whether security/Kerberos is 
enabled or not. I'm not sure why we changed the tests, which caused the failures. 
Do we need an updated patch, or just a re-trigger now that the dependency 
HADOOP-11748 is already in?
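
For illustration, a minimal sketch of the kind of check being discussed, using 
the existing UserGroupInformation API; this is an assumption about the approach, 
not the actual patch:
{code}
import org.apache.hadoop.security.UserGroupInformation;

public class AuthFilterGuardSketch {
  // Only insist on the signature secret file when Kerberos security is enabled;
  // otherwise the filter can fall back to a random secret so the RM web app
  // starts in non-secure mode.
  static boolean requireSecretFile() {
    // returns true when hadoop.security.authentication is set to "kerberos"
    return UserGroupInformation.isSecurityEnabled();
  }
}
{code}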

 RM fails to start in non-secure mode due to authentication filter failure
 -

 Key: HADOOP-11754
 URL: https://issues.apache.org/jira/browse/HADOOP-11754
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Sangjin Lee
Assignee: Haohui Mai
Priority: Blocker
 Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch, 
 HADOOP-11754.000.patch, HADOOP-11754.001.patch


 RM fails to start in the non-secure mode with the following exception:
 {noformat}
 2015-03-25 22:02:42,526 WARN org.mortbay.log: failed RMAuthenticationFilter: 
 javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
 signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
 2015-03-25 22:02:42,526 WARN org.mortbay.log: Failed startup of context 
 org.mortbay.jetty.webapp.WebAppContext@6de50b08{/,jar:file:/Users/sjlee/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-common-3.0.0-SNAPSHOT.jar!/webapps/cluster}
 javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
 signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
   at 
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
   at 
 org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
   at 
 org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
   at 
 org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
   at 
 org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
   at 
 org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
   at org.mortbay.jetty.Server.doStart(Server.java:224)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
   at 
 org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
 Caused by: java.lang.RuntimeException: Could not read signature secret file: 
 /Users/sjlee/hadoop-http-auth-signature-secret
   at 
 org.apache.hadoop.security.authentication.util.FileSignerSecretProvider.init(FileSignerSecretProvider.java:59)
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:264)
   ... 23 more
 ...
 2015-03-25 22:02:42,538 FATAL 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting 
 ResourceManager
 org.apache.hadoop.yarn.webapp.WebAppException: Error starting http server
   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:279)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
   at 
 

[jira] [Commented] (HADOOP-11639) Clean up Windows native code compilation warnings related to Windows Secure Container Executor.

2015-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383639#comment-14383639
 ] 

Hadoop QA commented on HADOOP-11639:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12707725/HADOOP-11639.03.patch
  against trunk revision af618f2.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6011//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6011//console

This message is automatically generated.

 Clean up Windows native code compilation warnings related to Windows Secure 
 Container Executor.
 ---

 Key: HADOOP-11639
 URL: https://issues.apache.org/jira/browse/HADOOP-11639
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.6.0
Reporter: Chris Nauroth
Assignee: Remus Rusanu
 Attachments: HADOOP-11639.00.patch, HADOOP-11639.01.patch, 
 HADOOP-11639.02.patch, HADOOP-11639.03.patch


 YARN-2198 introduced additional code in Hadoop Common to support the 
 NodeManager {{WindowsSecureContainerExecutor}}.  The patch introduced new 
 compilation warnings that we need to investigate and resolve.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11760) Typo in DistCp.java

2015-03-27 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-11760:
--
Status: Patch Available  (was: Open)

 Typo in DistCp.java
 ---

 Key: HADOOP-11760
 URL: https://issues.apache.org/jira/browse/HADOOP-11760
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Chen He
Assignee: Brahma Reddy Battula
Priority: Trivial
  Labels: newbie++
 Attachments: HADOOP-11760.patch


 /**
* Create a default working folder for the job, under the
* job staging directory
*
* @return Returns the working folder information
* @throws Exception - EXception if any
*/
   private Path createMetaFolderPath() throws Exception {
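
Presumably the fix is just the stray capital in the @throws description; a 
sketch of the corrected javadoc (an assumption, not the committed patch):
{code}
  /**
   * Create a default working folder for the job, under the
   * job staging directory
   *
   * @return Returns the working folder information
   * @throws Exception - Exception if any
   */
  private Path createMetaFolderPath() throws Exception {
{code}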



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11753) TestS3AContractOpen#testOpenReadZeroByteFile fails due to negative range header

2015-03-27 Thread Thomas Demoor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383659#comment-14383659
 ] 

Thomas Demoor commented on HADOOP-11753:


Probably against Cloudian backend.

Please see the HTTP [spec| https://tools.ietf.org/html/rfc7233#section-2.1]
{quote} An origin server MUST ignore a Range header field that contains a range 
unit it does not understand. {quote}

If you still use the [old spec | 
https://tools.ietf.org/html/rfc2616#section-14.35.1]
{quote}The recipient of a byte-range-set that includes one or more 
syntactically invalid byte-range-spec values MUST ignore the header field that 
includes that byte-range-set.{quote}

Investigated against AWS: the implementation there is correct; the request is 
served as if it were a non-ranged GET (for instance, (0,-1) on a 0-byte object 
returns 0 bytes, (0,-1000) on a 4-byte object returns 4 bytes, and so on).



 TestS3AContractOpen#testOpenReadZeroByteFile fails due to negative range 
 header
 ---

 Key: HADOOP-11753
 URL: https://issues.apache.org/jira/browse/HADOOP-11753
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 3.0.0, 2.7.0
Reporter: Takenori Sato
Assignee: Takenori Sato
 Attachments: HADOOP-11753-branch-2.7.001.patch


 _TestS3AContractOpen#testOpenReadZeroByteFile_ fails as follows.
 {code}
 testOpenReadZeroByteFile(org.apache.hadoop.fs.contract.s3a.TestS3AContractOpen)
   Time elapsed: 3.312 sec   ERROR!
 com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 416, AWS 
 Service: Amazon S3, AWS Request ID: A58A95E0D36811E4, AWS Error Code: 
 InvalidRange, AWS Error Message: The requested range cannot be satisfied.
   at 
 com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
   at 
 com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
   at 
 com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
   at 
 com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
   at 
 com.amazonaws.services.s3.AmazonS3Client.getObject(AmazonS3Client.java:)
   at 
 org.apache.hadoop.fs.s3a.S3AInputStream.reopen(S3AInputStream.java:91)
   at 
 org.apache.hadoop.fs.s3a.S3AInputStream.openIfNeeded(S3AInputStream.java:62)
   at org.apache.hadoop.fs.s3a.S3AInputStream.read(S3AInputStream.java:127)
   at java.io.FilterInputStream.read(FilterInputStream.java:83)
   at 
 org.apache.hadoop.fs.contract.AbstractContractOpenTest.testOpenReadZeroByteFile(AbstractContractOpenTest.java:66)
   at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
   at 
 sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
   at 
 sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
   at java.lang.reflect.Method.invoke(Method.java:606)
   at 
 org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
   at 
 org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
   at 
 org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
   at 
 org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
   at 
 org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
   at 
 org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
   at 
 org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
 {code}
 This is because the header is wrong when calling _S3AInputStream#read_ after 
 _S3AInputStream#open_.
 {code}
 Range: bytes=0--1
 * from 0 to -1
 {code}
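 A minimal sketch of one way to avoid sending a negative range, assuming the AWS 
 SDK's GetObjectRequest#setRange; this is an illustration, not the attached patch:
 {code}
 import com.amazonaws.services.s3.AmazonS3;
 import com.amazonaws.services.s3.model.GetObjectRequest;
 import com.amazonaws.services.s3.model.S3Object;

 public class RangedGetSketch {
   // Only set a Range header for non-empty objects, so a zero-byte file never
   // produces "Range: bytes=0--1".
   static S3Object open(AmazonS3 client, String bucket, String key,
       long pos, long contentLength) {
     GetObjectRequest request = new GetObjectRequest(bucket, key);
     if (contentLength > 0) {
       // inclusive byte range [pos, contentLength - 1], never a negative end
       request.setRange(pos, contentLength - 1);
     }
     return client.getObject(request);
   }
 }
 {code}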
 Tested on the latest branch-2.7.
 {quote}
 $ git log
 commit d286673c602524af08935ea132c8afd181b6e2e4
 Author: Jitendra Pandey Jitendra@Jitendra-Pandeys-MacBook-Pro-4.local
 Date:   Tue Mar 24 16:17:06 2015 -0700
 {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11553) Formalize the shell API

2015-03-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383697#comment-14383697
 ] 

Hudson commented on HADOOP-11553:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #879 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/879/])
HADOOP-11553. Foramlize the shell API (aw) (aw: rev 
b30ca8ce0e0d435327e179f0877bd58fa3896793)
* hadoop-project/src/site/site.xml
* hadoop-common-project/hadoop-common/pom.xml
* hadoop-common-project/hadoop-common/CHANGES.txt
* dev-support/shelldocs.py
* hadoop-common-project/hadoop-common/src/site/markdown/UnixShellGuide.md
* hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
HADOOP-11553 addendum fix the typo in the changes file (aw: rev 
5695c7a541c1a3092040523446f1ba689fb495e3)
* hadoop-common-project/hadoop-common/CHANGES.txt


 Formalize the shell API
 ---

 Key: HADOOP-11553
 URL: https://issues.apache.org/jira/browse/HADOOP-11553
 Project: Hadoop Common
  Issue Type: New Feature
  Components: documentation, scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
Priority: Blocker
 Fix For: 3.0.0

 Attachments: HADOOP-11553-00.patch, HADOOP-11553-01.patch, 
 HADOOP-11553-02.patch, HADOOP-11553-03.patch, HADOOP-11553-04.patch, 
 HADOOP-11553-05.patch, HADOOP-11553-06.patch


 After HADOOP-11485, we need to formally document functions and environment 
 variables that 3rd parties can expect to be able to exist/use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11691) X86 build of libwinutils is broken

2015-03-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383695#comment-14383695
 ] 

Hudson commented on HADOOP-11691:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #879 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/879/])
HADOOP-11691. X86 build of libwinutils is broken. Contributed by Kiran Kumar M 
R. (cnauroth: rev af618f23a70508111f490a24d74fc90161cfc079)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/winutils/win8sdk.props


 X86 build of libwinutils is broken
 --

 Key: HADOOP-11691
 URL: https://issues.apache.org/jira/browse/HADOOP-11691
 Project: Hadoop Common
  Issue Type: Bug
  Components: build, native
Affects Versions: 2.7.0
Reporter: Remus Rusanu
Assignee: Kiran Kumar M R
Priority: Critical
 Fix For: 2.7.0

 Attachments: HADOOP-11691-001.patch, HADOOP-11691-002.patch, 
 HADOOP-11691-003.patch


 Hadoop-9922 recently fixed x86 build. After YARN-2190 compiling x86 results 
 in error:
 {code}
 (Link target) -
   
 E:\HW\project\hadoop-common\hadoop-common-project\hadoop-common\target/winutils/hadoopwinutilsvc_s.obj
  : fatal error LNK1112: module machine type 'x64' conflicts with target 
 machine type 'X86' 
 [E:\HW\project\hadoop-common\hadoop-common-project\hadoop-common\src\main\winutils\winutils.vcxproj]
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11748) The secrets of auth cookies should not be specified in configuration in clear text

2015-03-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383696#comment-14383696
 ] 

Hudson commented on HADOOP-11748:
-

FAILURE: Integrated in Hadoop-Yarn-trunk #879 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk/879/])
HADOOP-11748. The secrets of auth cookies should not be specified in 
configuration in clear text. Contributed by Li Lu and Haohui Mai. (wheat9: rev 
47782cbf4a66d49064fd3dd6d1d1a19cc42157fc)
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/StringSignerSecretProvider.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/StringSignerSecretProvider.java
* hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestAuthenticationFilter.java
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/StringSignerSecretProviderCreator.java


 The secrets of auth cookies should not be specified in configuration in clear 
 text
 --

 Key: HADOOP-11748
 URL: https://issues.apache.org/jira/browse/HADOOP-11748
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Li Lu
Priority: Critical
 Fix For: 2.7.0

 Attachments: HADOOP-11748-032615-poc.patch, HADOOP-11748.001.patch


 Based on the discussion on HADOOP-10670, this jira proposes to remove 
 {{StringSecretProvider}} as it opens up possibilities for misconfiguration 
 and security vulnerabilities.
 {quote}
 My understanding is that the use case of inlining the secret is never 
 supported. The property is used to pass the secret internally. The way it 
 works before HADOOP-10868 is the following:
 * Users specify the initializer of the authentication filter in the 
 configuration.
 * AuthenticationFilterInitializer reads the secret file. The server will not 
 start if the secret file does not exist. The initializer will set the 
 property if it reads the file correctly.
 * There is no way to specify the secret in the configuration out-of-the-box – 
 the secret is always overwritten by AuthenticationFilterInitializer.
 {quote}
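
 A minimal sketch, under assumptions, of the flow the quote describes; the 
 property name and helper shown are illustrative, not the actual initializer code:
 {code}
 import java.io.IOException;
 import java.nio.charset.StandardCharsets;
 import java.nio.file.Files;
 import java.nio.file.Paths;
 import java.util.Map;

 public class SecretFileSketch {
   // Read the secret from a file at startup and fail fast if it is missing,
   // then pass it to the filter internally; any secret inlined in the
   // configuration is overwritten here.
   static void injectSecret(Map<String, String> filterConfig, String secretFile)
       throws IOException {
     String secret = new String(Files.readAllBytes(Paths.get(secretFile)),
         StandardCharsets.UTF_8).trim();
     filterConfig.put("signature.secret", secret);
   }
 }
 {code}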



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11759) TockenCache doc has minor problem

2015-03-27 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-11759:
--
Status: Patch Available  (was: Open)

 TockenCache doc has minor problem
 -

 Key: HADOOP-11759
 URL: https://issues.apache.org/jira/browse/HADOOP-11759
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.6.0, 3.0.0
Reporter: Chen He
Assignee: Brahma Reddy Battula
Priority: Trivial
  Labels: newbie++
 Attachments: HADOOP-11759.patch


 /**
* get delegation token for a specific FS
* @param fs
* @param credentials
* @param p
* @param conf
* @throws IOException
*/
   static void obtainTokensForNamenodesInternal(FileSystem fs, 
   Credentials credentials, Configuration conf) throws IOException {
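
Since the method takes no "p" parameter, the extra @param p tag is presumably 
what the patch removes; a sketch of the corrected javadoc (assumption only):
{code}
  /**
   * get delegation token for a specific FS
   * @param fs
   * @param credentials
   * @param conf
   * @throws IOException
   */
  static void obtainTokensForNamenodesInternal(FileSystem fs,
      Credentials credentials, Configuration conf) throws IOException {
{code}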



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11759) TockenCache doc has minor problem

2015-03-27 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-11759:
--
Attachment: HADOOP-11759.patch

 TockenCache doc has minor problem
 -

 Key: HADOOP-11759
 URL: https://issues.apache.org/jira/browse/HADOOP-11759
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.6.0
Reporter: Chen He
Assignee: Brahma Reddy Battula
Priority: Trivial
  Labels: newbie++
 Attachments: HADOOP-11759.patch


 /**
* get delegation token for a specific FS
* @param fs
* @param credentials
* @param p
* @param conf
* @throws IOException
*/
   static void obtainTokensForNamenodesInternal(FileSystem fs, 
   Credentials credentials, Configuration conf) throws IOException {



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11760) Typo in DistCp.java

2015-03-27 Thread Brahma Reddy Battula (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brahma Reddy Battula updated HADOOP-11760:
--
Attachment: HADOOP-11760.patch

 Typo in DistCp.java
 ---

 Key: HADOOP-11760
 URL: https://issues.apache.org/jira/browse/HADOOP-11760
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Chen He
Assignee: Brahma Reddy Battula
Priority: Trivial
  Labels: newbie++
 Attachments: HADOOP-11760.patch


 /**
* Create a default working folder for the job, under the
* job staging directory
*
* @return Returns the working folder information
* @throws Exception - EXception if any
*/
   private Path createMetaFolderPath() throws Exception {



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11748) The secrets of auth cookies should not be specified in configuration in clear text

2015-03-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383683#comment-14383683
 ] 

Hudson commented on HADOOP-11748:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #145 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/145/])
HADOOP-11748. The secrets of auth cookies should not be specified in 
configuration in clear text. Contributed by Li Lu and Haohui Mai. (wheat9: rev 
47782cbf4a66d49064fd3dd6d1d1a19cc42157fc)
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/StringSignerSecretProvider.java
* hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/StringSignerSecretProviderCreator.java
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/StringSignerSecretProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestAuthenticationFilter.java


 The secrets of auth cookies should not be specified in configuration in clear 
 text
 --

 Key: HADOOP-11748
 URL: https://issues.apache.org/jira/browse/HADOOP-11748
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Li Lu
Priority: Critical
 Fix For: 2.7.0

 Attachments: HADOOP-11748-032615-poc.patch, HADOOP-11748.001.patch


 Based on the discussion on HADOOP-10670, this jira proposes to remove 
 {{StringSecretProvider}} as it opens up possibilities for misconfiguration 
 and security vulnerabilities.
 {quote}
 My understanding is that the use case of inlining the secret is never 
 supported. The property is used to pass the secret internally. The way it 
 works before HADOOP-10868 is the following:
 * Users specify the initializer of the authentication filter in the 
 configuration.
 * AuthenticationFilterInitializer reads the secret file. The server will not 
 start if the secret file does not exist. The initializer will set the 
 property if it reads the file correctly.
 * There is no way to specify the secret in the configuration out-of-the-box – 
 the secret is always overwritten by AuthenticationFilterInitializer.
 {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11553) Formalize the shell API

2015-03-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383684#comment-14383684
 ] 

Hudson commented on HADOOP-11553:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #145 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/145/])
HADOOP-11553. Foramlize the shell API (aw) (aw: rev 
b30ca8ce0e0d435327e179f0877bd58fa3896793)
* hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
* hadoop-common-project/hadoop-common/pom.xml
* dev-support/shelldocs.py
* hadoop-project/src/site/site.xml
* hadoop-common-project/hadoop-common/src/site/markdown/UnixShellGuide.md
* hadoop-common-project/hadoop-common/CHANGES.txt
HADOOP-11553 addendum fix the typo in the changes file (aw: rev 
5695c7a541c1a3092040523446f1ba689fb495e3)
* hadoop-common-project/hadoop-common/CHANGES.txt


 Formalize the shell API
 ---

 Key: HADOOP-11553
 URL: https://issues.apache.org/jira/browse/HADOOP-11553
 Project: Hadoop Common
  Issue Type: New Feature
  Components: documentation, scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
Priority: Blocker
 Fix For: 3.0.0

 Attachments: HADOOP-11553-00.patch, HADOOP-11553-01.patch, 
 HADOOP-11553-02.patch, HADOOP-11553-03.patch, HADOOP-11553-04.patch, 
 HADOOP-11553-05.patch, HADOOP-11553-06.patch


 After HADOOP-11485, we need to formally document functions and environment 
 variables that 3rd parties can expect to be able to exist/use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11691) X86 build of libwinutils is broken

2015-03-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383682#comment-14383682
 ] 

Hudson commented on HADOOP-11691:
-

FAILURE: Integrated in Hadoop-Yarn-trunk-Java8 #145 (See 
[https://builds.apache.org/job/Hadoop-Yarn-trunk-Java8/145/])
HADOOP-11691. X86 build of libwinutils is broken. Contributed by Kiran Kumar M 
R. (cnauroth: rev af618f23a70508111f490a24d74fc90161cfc079)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/winutils/win8sdk.props


 X86 build of libwinutils is broken
 --

 Key: HADOOP-11691
 URL: https://issues.apache.org/jira/browse/HADOOP-11691
 Project: Hadoop Common
  Issue Type: Bug
  Components: build, native
Affects Versions: 2.7.0
Reporter: Remus Rusanu
Assignee: Kiran Kumar M R
Priority: Critical
 Fix For: 2.7.0

 Attachments: HADOOP-11691-001.patch, HADOOP-11691-002.patch, 
 HADOOP-11691-003.patch


 Hadoop-9922 recently fixed x86 build. After YARN-2190 compiling x86 results 
 in error:
 {code}
 (Link target) -
   
 E:\HW\project\hadoop-common\hadoop-common-project\hadoop-common\target/winutils/hadoopwinutilsvc_s.obj
  : fatal error LNK1112: module machine type 'x64' conflicts with target 
 machine type 'X86' 
 [E:\HW\project\hadoop-common\hadoop-common-project\hadoop-common\src\main\winutils\winutils.vcxproj]
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11257) Update hadoop jar documentation to warn against using it for launching yarn jars

2015-03-27 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383499#comment-14383499
 ] 

Masatake Iwasaki commented on HADOOP-11257:
---

Thanks, [~cnauroth].

 Update hadoop jar documentation to warn against using it for launching yarn 
 jars
 --

 Key: HADOOP-11257
 URL: https://issues.apache.org/jira/browse/HADOOP-11257
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.1.1-beta
Reporter: Allen Wittenauer
Assignee: Masatake Iwasaki
Priority: Blocker
 Fix For: 2.7.0

 Attachments: HADOOP-11257-branch-2.addendum.001.patch, 
 HADOOP-11257.1.patch, HADOOP-11257.1.patch, HADOOP-11257.2.patch, 
 HADOOP-11257.3.patch, HADOOP-11257.4.patch, HADOOP-11257.4.patch


 We should update the hadoop jar documentation to warn against using it for 
 launching yarn jars.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11627) Remove io.native.lib.available from trunk

2015-03-27 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383551#comment-14383551
 ] 

Vinayakumar B commented on HADOOP-11627:


bq. 1. Would you remove the following code from TestTFileSeqFileComparison.java?
I wonder how compilation passed for this in QA? 

bq. I'm thinking we can fix these failures by adding a setter method for 
ZlibFactory.nativeZlibLoaded and setting the variable to false instead of just 
removing conf.setBoolean(CommonConfigurationKeys.IO_NATIVE_LIB_AVAILABLE_KEY, 
false). If we use the setter method, we should add @After method in the test to 
reset the variable.
That's a good idea. When resetting after the test, it needs to be reset to its 
original value (not just true). How about reloading, the same as in the static 
block? Make the changes below in ZlibFactory and call {{ZlibFactory.loadNativeZLib}} 
in the {{@After}} method.
{code}   static {
+loadNativeZLib();
+  }
+
+  @VisibleForTesting
+  public static void loadNativeZLib() {
 if (NativeCodeLoader.isNativeCodeLoaded()) {
   nativeZlibLoaded = ZlibCompressor.isNativeZlibLoaded() &&
 ZlibDecompressor.isNativeZlibLoaded();{code}
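
A sketch of the test-side usage suggested above; it assumes the proposed 
{{ZlibFactory.loadNativeZLib()}} helper is added as shown:
{code}
import org.apache.hadoop.io.compress.zlib.ZlibFactory;
import org.junit.After;

public class ZlibResetSketch {
  @After
  public void restoreNativeZlib() {
    // reload the native zlib state so later tests see the original value,
    // instead of whatever the previous test forced it to
    ZlibFactory.loadNativeZLib();
  }
}
{code}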


Regarding the patch, some nits:
1. DeprecatedProperties.md can have the description below, because the property 
did not prevent loading the native libs; it only prevented using them for the 
compression codecs.
{code}+| io.native.lib.available | NONE - By default native libs will be used 
for bz2 and zlib compression codecs if available. |{code}

2. {{TestConcatenatedCompressedInput.java}} also needs similar treatment to 
{{TestCodec}} to avoid failure when {{-Pnative}} is specified.

 Remove io.native.lib.available from trunk
 -

 Key: HADOOP-11627
 URL: https://issues.apache.org/jira/browse/HADOOP-11627
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 3.0.0
Reporter: Akira AJISAKA
Assignee: Brahma Reddy Battula
 Attachments: HADOOP-11627-002.patch, HADOOP-11627-003.patch, 
 HADOOP-11627-004.patch, HADOOP-11627-005.patch, HADOOP-11627.patch


 According to the discussion in HADOOP-8642, we should remove 
 {{io.native.lib.available}} from trunk, and always use native libraries if 
 they exist.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11639) Clean up Windows native code compilation warnings related to Windows Secure Container Executor.

2015-03-27 Thread Remus Rusanu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Remus Rusanu updated HADOOP-11639:
--
Attachment: HADOOP-11639.03.patch

The 03 patch is rebased to current trunk and also fixes the new PSAPI_VERSION 
macro redefinition warning.

 Clean up Windows native code compilation warnings related to Windows Secure 
 Container Executor.
 ---

 Key: HADOOP-11639
 URL: https://issues.apache.org/jira/browse/HADOOP-11639
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.6.0
Reporter: Chris Nauroth
Assignee: Remus Rusanu
 Attachments: HADOOP-11639.00.patch, HADOOP-11639.01.patch, 
 HADOOP-11639.02.patch, HADOOP-11639.03.patch


 YARN-2198 introduced additional code in Hadoop Common to support the 
 NodeManager {{WindowsSecureContainerExecutor}}.  The patch introduced new 
 compilation warnings that we need to investigate and resolve.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11742) mkdir by file system shell fails on an empty bucket

2015-03-27 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383779#comment-14383779
 ] 

Steve Loughran commented on HADOOP-11742:
-

[~tsato]: whose endpoints are you playing with there? Are these AWS or 
something of your own? Because if it's the latter, what's showing up here is a 
difference between your impl & AWS. While we'll try and do our best to help, 
you do have to consider any variation in behaviour a variation between the S3 API 
and your impl, which is really a bug on your end.

 mkdir by file system shell fails on an empty bucket
 ---

 Key: HADOOP-11742
 URL: https://issues.apache.org/jira/browse/HADOOP-11742
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
 Environment: CentOS 7
Reporter: Takenori Sato
 Attachments: HADOOP-11742-branch-2.7.001.patch, 
 HADOOP-11742-branch-2.7.002.patch


 I have built the latest 2.7, and tried S3AFileSystem.
 Then found that _mkdir_ fails on an empty bucket, named *s3a* here, as 
 follows:
 {code}
 # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -mkdir s3a://s3a/foo
 15/03/24 03:49:35 DEBUG s3a.S3AFileSystem: Getting path status for 
 s3a://s3a/foo (foo)
 15/03/24 03:49:36 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3a/foo
 15/03/24 03:49:36 DEBUG s3a.S3AFileSystem: Getting path status for s3a://s3a/ 
 ()
 15/03/24 03:49:36 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3a/
 mkdir: `s3a://s3a/foo': No such file or directory
 {code}
 So does _ls_.
 {code}
 # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -ls s3a://s3a/
 15/03/24 03:47:48 DEBUG s3a.S3AFileSystem: Getting path status for s3a://s3a/ 
 ()
 15/03/24 03:47:48 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3a/
 ls: `s3a://s3a/': No such file or directory
 {code}
 This is how it works via s3n.
 {code}
 # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -ls s3n://s3n/
 # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -mkdir s3n://s3n/foo
 # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -ls s3n://s3n/
 Found 1 items
 drwxrwxrwx   -  0 1970-01-01 00:00 s3n://s3n/foo
 {code}
 The snapshot is the following:
 {quote}
 \# git branch
 \* branch-2.7
   trunk
 \# git log
 commit 929b04ce3a4fe419dece49ed68d4f6228be214c1
 Author: Harsh J ha...@cloudera.com
 Date:   Sun Mar 22 10:18:32 2015 +0530
 {quote}
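
 A hypothetical sketch of the special case under discussion (not the attached 
 patch): an S3A-like getFileStatus could treat the bucket root as an existing, 
 empty directory even when the bucket holds no keys, so that mkdir/ls on the 
 root succeed instead of reporting "No such file or directory".
 {code}
 import org.apache.hadoop.fs.FileStatus;
 import org.apache.hadoop.fs.Path;

 public class RootStatusSketch {
   static FileStatus rootStatus(Path qualifiedRoot, String key) {
     if (key.isEmpty()) {
       // length 0, isDirectory true, replication 1, blockSize 0, mtime 0
       return new FileStatus(0, true, 1, 0, 0, qualifiedRoot);
     }
     return null; // fall through to the normal HEAD/LIST based lookup
   }
 }
 {code}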



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11742) mkdir by file system shell fails on an empty bucket

2015-03-27 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11742?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11742:

 Priority: Minor  (was: Major)
Affects Version/s: 2.7.0

 mkdir by file system shell fails on an empty bucket
 ---

 Key: HADOOP-11742
 URL: https://issues.apache.org/jira/browse/HADOOP-11742
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Affects Versions: 2.7.0
 Environment: CentOS 7
Reporter: Takenori Sato
Priority: Minor
 Attachments: HADOOP-11742-branch-2.7.001.patch, 
 HADOOP-11742-branch-2.7.002.patch


 I have built the latest 2.7, and tried S3AFileSystem.
 Then found that _mkdir_ fails on an empty bucket, named *s3a* here, as 
 follows:
 {code}
 # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -mkdir s3a://s3a/foo
 15/03/24 03:49:35 DEBUG s3a.S3AFileSystem: Getting path status for 
 s3a://s3a/foo (foo)
 15/03/24 03:49:36 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3a/foo
 15/03/24 03:49:36 DEBUG s3a.S3AFileSystem: Getting path status for s3a://s3a/ 
 ()
 15/03/24 03:49:36 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3a/
 mkdir: `s3a://s3a/foo': No such file or directory
 {code}
 So does _ls_.
 {code}
 # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -ls s3a://s3a/
 15/03/24 03:47:48 DEBUG s3a.S3AFileSystem: Getting path status for s3a://s3a/ 
 ()
 15/03/24 03:47:48 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3a/
 ls: `s3a://s3a/': No such file or directory
 {code}
 This is how it works via s3n.
 {code}
 # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -ls s3n://s3n/
 # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -mkdir s3n://s3n/foo
 # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -ls s3n://s3n/
 Found 1 items
 drwxrwxrwx   -  0 1970-01-01 00:00 s3n://s3n/foo
 {code}
 The snapshot is the following:
 {quote}
 \# git branch
 \* branch-2.7
   trunk
 \# git log
 commit 929b04ce3a4fe419dece49ed68d4f6228be214c1
 Author: Harsh J ha...@cloudera.com
 Date:   Sun Mar 22 10:18:32 2015 +0530
 {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11760) Typo in DistCp.java

2015-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383714#comment-14383714
 ] 

Hadoop QA commented on HADOOP-11760:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12707739/HADOOP-11760.patch
  against trunk revision af618f2.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-tools/hadoop-distcp.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6012//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6012//console

This message is automatically generated.

 Typo in DistCp.java
 ---

 Key: HADOOP-11760
 URL: https://issues.apache.org/jira/browse/HADOOP-11760
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Chen He
Assignee: Brahma Reddy Battula
Priority: Trivial
  Labels: newbie++
 Attachments: HADOOP-11760.patch


 /**
* Create a default working folder for the job, under the
* job staging directory
*
* @return Returns the working folder information
* @throws Exception - EXception if any
*/
   private Path createMetaFolderPath() throws Exception {



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11742) mkdir by file system shell fails on an empty bucket

2015-03-27 Thread Thomas Demoor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11742?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383747#comment-14383747
 ] 

Thomas Demoor commented on HADOOP-11742:


{quote} Then, without this fix, TestS3AContractRootDir failed as follows.{quote}
At my end, without the fix, the test passes against AWS.

With the fix the test passes as well, so what are you fixing? Can you elaborate? 
I will have a closer look then.

 mkdir by file system shell fails on an empty bucket
 ---

 Key: HADOOP-11742
 URL: https://issues.apache.org/jira/browse/HADOOP-11742
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
 Environment: CentOS 7
Reporter: Takenori Sato
 Attachments: HADOOP-11742-branch-2.7.001.patch, 
 HADOOP-11742-branch-2.7.002.patch


 I have built the latest 2.7, and tried S3AFileSystem.
 Then found that _mkdir_ fails on an empty bucket, named *s3a* here, as 
 follows:
 {code}
 # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -mkdir s3a://s3a/foo
 15/03/24 03:49:35 DEBUG s3a.S3AFileSystem: Getting path status for 
 s3a://s3a/foo (foo)
 15/03/24 03:49:36 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3a/foo
 15/03/24 03:49:36 DEBUG s3a.S3AFileSystem: Getting path status for s3a://s3a/ 
 ()
 15/03/24 03:49:36 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3a/
 mkdir: `s3a://s3a/foo': No such file or directory
 {code}
 So does _ls_.
 {code}
 # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -ls s3a://s3a/
 15/03/24 03:47:48 DEBUG s3a.S3AFileSystem: Getting path status for s3a://s3a/ 
 ()
 15/03/24 03:47:48 DEBUG s3a.S3AFileSystem: Not Found: s3a://s3a/
 ls: `s3a://s3a/': No such file or directory
 {code}
 This is how it works via s3n.
 {code}
 # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -ls s3n://s3n/
 # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -mkdir s3n://s3n/foo
 # hadoop-2.7.0-SNAPSHOT/bin/hdfs dfs -ls s3n://s3n/
 Found 1 items
 drwxrwxrwx   -  0 1970-01-01 00:00 s3n://s3n/foo
 {code}
 The snapshot is the following:
 {quote}
 \# git branch
 \* branch-2.7
   trunk
 \# git log
 commit 929b04ce3a4fe419dece49ed68d4f6228be214c1
 Author: Harsh J ha...@cloudera.com
 Date:   Sun Mar 22 10:18:32 2015 +0530
 {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11691) X86 build of libwinutils is broken

2015-03-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383837#comment-14383837
 ] 

Hudson commented on HADOOP-11691:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2095 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2095/])
HADOOP-11691. X86 build of libwinutils is broken. Contributed by Kiran Kumar M 
R. (cnauroth: rev af618f23a70508111f490a24d74fc90161cfc079)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/winutils/win8sdk.props


 X86 build of libwinutils is broken
 --

 Key: HADOOP-11691
 URL: https://issues.apache.org/jira/browse/HADOOP-11691
 Project: Hadoop Common
  Issue Type: Bug
  Components: build, native
Affects Versions: 2.7.0
Reporter: Remus Rusanu
Assignee: Kiran Kumar M R
Priority: Critical
 Fix For: 2.7.0

 Attachments: HADOOP-11691-001.patch, HADOOP-11691-002.patch, 
 HADOOP-11691-003.patch


 Hadoop-9922 recently fixed x86 build. After YARN-2190 compiling x86 results 
 in error:
 {code}
 (Link target) -
   
 E:\HW\project\hadoop-common\hadoop-common-project\hadoop-common\target/winutils/hadoopwinutilsvc_s.obj
  : fatal error LNK1112: module machine type 'x64' conflicts with target 
 machine type 'X86' 
 [E:\HW\project\hadoop-common\hadoop-common-project\hadoop-common\src\main\winutils\winutils.vcxproj]
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11748) The secrets of auth cookies should not be specified in configuration in clear text

2015-03-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383838#comment-14383838
 ] 

Hudson commented on HADOOP-11748:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2095 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2095/])
HADOOP-11748. The secrets of auth cookies should not be specified in 
configuration in clear text. Contributed by Li Lu and Haohui Mai. (wheat9: rev 
47782cbf4a66d49064fd3dd6d1d1a19cc42157fc)
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/StringSignerSecretProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java
* hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/StringSignerSecretProvider.java
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/StringSignerSecretProviderCreator.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestAuthenticationFilter.java


 The secrets of auth cookies should not be specified in configuration in clear 
 text
 --

 Key: HADOOP-11748
 URL: https://issues.apache.org/jira/browse/HADOOP-11748
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Li Lu
Priority: Critical
 Fix For: 2.7.0

 Attachments: HADOOP-11748-032615-poc.patch, HADOOP-11748.001.patch


 Based on the discussion on HADOOP-10670, this jira proposes to remove 
 {{StringSecretProvider}} as it opens up possibilities for misconfiguration 
 and security vulnerabilities.
 {quote}
 My understanding is that the use case of inlining the secret is never 
 supported. The property is used to pass the secret internally. The way it 
 works before HADOOP-10868 is the following:
 * Users specify the initializer of the authentication filter in the 
 configuration.
 * AuthenticationFilterInitializer reads the secret file. The server will not 
 start if the secret file does not exist. The initializer will set the 
 property if it reads the file correctly.
 * There is no way to specify the secret in the configuration out-of-the-box – 
 the secret is always overwritten by AuthenticationFilterInitializer.
 {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11748) The secrets of auth cookies should not be specified in configuration in clear text

2015-03-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383808#comment-14383808
 ] 

Hudson commented on HADOOP-11748:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #145 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/145/])
HADOOP-11748. The secrets of auth cookies should not be specified in 
configuration in clear text. Contributed by Li Lu and Haohui Mai. (wheat9: rev 
47782cbf4a66d49064fd3dd6d1d1a19cc42157fc)
* hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/StringSignerSecretProviderCreator.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/StringSignerSecretProvider.java
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/StringSignerSecretProvider.java
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestAuthenticationFilter.java


 The secrets of auth cookies should not be specified in configuration in clear 
 text
 --

 Key: HADOOP-11748
 URL: https://issues.apache.org/jira/browse/HADOOP-11748
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Li Lu
Priority: Critical
 Fix For: 2.7.0

 Attachments: HADOOP-11748-032615-poc.patch, HADOOP-11748.001.patch


 Based on the discussion on HADOOP-10670, this jira proposes to remove 
 {{StringSecretProvider}} as it opens up possibilities for misconfiguration 
 and security vulnerabilities.
 {quote}
 My understanding is that the use case of inlining the secret is never 
 supported. The property is used to pass the secret internally. The way it 
 works before HADOOP-10868 is the following:
 * Users specify the initializer of the authentication filter in the 
 configuration.
 * AuthenticationFilterInitializer reads the secret file. The server will not 
 start if the secret file does not exist. The initializer will set the 
 property if it reads the file correctly.
 * There is no way to specify the secret in the configuration out-of-the-box – 
 the secret is always overwritten by AuthenticationFilterInitializer.
 {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11553) Formalize the shell API

2015-03-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383809#comment-14383809
 ] 

Hudson commented on HADOOP-11553:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #145 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/145/])
HADOOP-11553. Foramlize the shell API (aw) (aw: rev 
b30ca8ce0e0d435327e179f0877bd58fa3896793)
* hadoop-common-project/hadoop-common/src/site/markdown/UnixShellGuide.md
* hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
* hadoop-common-project/hadoop-common/CHANGES.txt
* dev-support/shelldocs.py
* hadoop-common-project/hadoop-common/pom.xml
* hadoop-project/src/site/site.xml
HADOOP-11553 addendum fix the typo in the changes file (aw: rev 
5695c7a541c1a3092040523446f1ba689fb495e3)
* hadoop-common-project/hadoop-common/CHANGES.txt


 Formalize the shell API
 ---

 Key: HADOOP-11553
 URL: https://issues.apache.org/jira/browse/HADOOP-11553
 Project: Hadoop Common
  Issue Type: New Feature
  Components: documentation, scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
Priority: Blocker
 Fix For: 3.0.0

 Attachments: HADOOP-11553-00.patch, HADOOP-11553-01.patch, 
 HADOOP-11553-02.patch, HADOOP-11553-03.patch, HADOOP-11553-04.patch, 
 HADOOP-11553-05.patch, HADOOP-11553-06.patch


 After HADOOP-11485, we need to formally document functions and environment 
 variables that 3rd parties can expect to be able to exist/use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11691) X86 build of libwinutils is broken

2015-03-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383807#comment-14383807
 ] 

Hudson commented on HADOOP-11691:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk-Java8 #145 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk-Java8/145/])
HADOOP-11691. X86 build of libwinutils is broken. Contributed by Kiran Kumar M 
R. (cnauroth: rev af618f23a70508111f490a24d74fc90161cfc079)
* hadoop-common-project/hadoop-common/src/main/winutils/win8sdk.props
* hadoop-common-project/hadoop-common/CHANGES.txt


 X86 build of libwinutils is broken
 --

 Key: HADOOP-11691
 URL: https://issues.apache.org/jira/browse/HADOOP-11691
 Project: Hadoop Common
  Issue Type: Bug
  Components: build, native
Affects Versions: 2.7.0
Reporter: Remus Rusanu
Assignee: Kiran Kumar M R
Priority: Critical
 Fix For: 2.7.0

 Attachments: HADOOP-11691-001.patch, HADOOP-11691-002.patch, 
 HADOOP-11691-003.patch


 Hadoop-9922 recently fixed x86 build. After YARN-2190 compiling x86 results 
 in error:
 {code}
 (Link target) -
   
 E:\HW\project\hadoop-common\hadoop-common-project\hadoop-common\target/winutils/hadoopwinutilsvc_s.obj
  : fatal error LNK1112: module machine type 'x64' conflicts with target 
 machine type 'X86' 
 [E:\HW\project\hadoop-common\hadoop-common-project\hadoop-common\src\main\winutils\winutils.vcxproj]
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11553) Formalize the shell API

2015-03-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14383839#comment-14383839
 ] 

Hudson commented on HADOOP-11553:
-

FAILURE: Integrated in Hadoop-Mapreduce-trunk #2095 (See 
[https://builds.apache.org/job/Hadoop-Mapreduce-trunk/2095/])
HADOOP-11553. Foramlize the shell API (aw) (aw: rev 
b30ca8ce0e0d435327e179f0877bd58fa3896793)
* hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
* hadoop-project/src/site/site.xml
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/pom.xml
* hadoop-common-project/hadoop-common/src/site/markdown/UnixShellGuide.md
* dev-support/shelldocs.py
HADOOP-11553 addendum fix the typo in the changes file (aw: rev 
5695c7a541c1a3092040523446f1ba689fb495e3)
* hadoop-common-project/hadoop-common/CHANGES.txt


 Formalize the shell API
 ---

 Key: HADOOP-11553
 URL: https://issues.apache.org/jira/browse/HADOOP-11553
 Project: Hadoop Common
  Issue Type: New Feature
  Components: documentation, scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
Priority: Blocker
 Fix For: 3.0.0

 Attachments: HADOOP-11553-00.patch, HADOOP-11553-01.patch, 
 HADOOP-11553-02.patch, HADOOP-11553-03.patch, HADOOP-11553-04.patch, 
 HADOOP-11553-05.patch, HADOOP-11553-06.patch


 After HADOOP-11485, we need to formally document functions and environment 
 variables that 3rd parties can expect to be able to exist/use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11759) Javadoc of TockenCache has an extra parameter

2015-03-27 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-11759:

Summary: Javadoc of TockenCache has an extra parameter  (was: TockenCache 
doc has minor problem)

 Javadoc of TockenCache has an extra parameter
 -

 Key: HADOOP-11759
 URL: https://issues.apache.org/jira/browse/HADOOP-11759
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.6.0
Reporter: Chen He
Assignee: Brahma Reddy Battula
Priority: Trivial
  Labels: newbie++
 Attachments: HADOOP-11759.patch


 /**
* get delegation token for a specific FS
* @param fs
* @param credentials
* @param p
* @param conf
* @throws IOException
*/
   static void obtainTokensForNamenodesInternal(FileSystem fs, 
   Credentials credentials, Configuration conf) throws IOException {
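
 For reference, a hedged sketch of how the doc comment might read once the stray @param p is dropped; the parameter descriptions are illustrative, not the wording of the attached patch:
 {code}
   /**
    * Get the delegation token for a specific FileSystem.
    *
    * @param fs the FileSystem to obtain tokens from
    * @param credentials the Credentials object to store the tokens in
    * @param conf the current Configuration
    * @throws IOException if obtaining the tokens fails
    */
   static void obtainTokensForNamenodesInternal(FileSystem fs,
       Credentials credentials, Configuration conf) throws IOException {
     // method body unchanged; only the doc comment above needed the fix
   }
 {code}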



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11760) Fix typo of javadoc in DistCp

2015-03-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14383901#comment-14383901
 ] 

Hudson commented on HADOOP-11760:
-

FAILURE: Integrated in Hadoop-trunk-Commit #7447 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7447/])
HADOOP-11760. Fix typo of javadoc in DistCp. Contributed by Brahma Reddy 
Battula. (ozawa: rev e074952bd6bedf58d993bbea690bad08c9a0e6aa)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java


 Fix typo of javadoc in DistCp
 -

 Key: HADOOP-11760
 URL: https://issues.apache.org/jira/browse/HADOOP-11760
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Chen He
Assignee: Brahma Reddy Battula
Priority: Trivial
  Labels: newbie++
 Fix For: 2.8.0

 Attachments: HADOOP-11760.patch


 /**
* Create a default working folder for the job, under the
* job staging directory
*
* @return Returns the working folder information
* @throws Exception - EXception if any
*/
   private Path createMetaFolderPath() throws Exception {
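
 For reference, a hedged sketch of how the corrected comment might read; presumably only the "EXception" capitalization changes and the method body stays as-is:
 {code}
   /**
    * Create a default working folder for the job, under the
    * job staging directory.
    *
    * @return Returns the working folder information
    * @throws Exception - Exception if any
    */
   private Path createMetaFolderPath() throws Exception {
     // method body unchanged; only the doc comment above needed the fix
   }
 {code}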



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11748) The secrets of auth cookies should not be specified in configuration in clear text

2015-03-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14383941#comment-14383941
 ] 

Hudson commented on HADOOP-11748:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2077 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2077/])
HADOOP-11748. The secrets of auth cookies should not be specified in 
configuration in clear text. Contributed by Li Lu and Haohui Mai. (wheat9: rev 
47782cbf4a66d49064fd3dd6d1d1a19cc42157fc)
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/StringSignerSecretProvider.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/StringSignerSecretProviderCreator.java
* hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/StringSignerSecretProvider.java
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestAuthenticationFilter.java


 The secrets of auth cookies should not be specified in configuration in clear 
 text
 --

 Key: HADOOP-11748
 URL: https://issues.apache.org/jira/browse/HADOOP-11748
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Li Lu
Priority: Critical
 Fix For: 2.7.0

 Attachments: HADOOP-11748-032615-poc.patch, HADOOP-11748.001.patch


 Based on the discussion on HADOOP-10670, this jira proposes to remove 
 {{StringSecretProvider}} as it opens up possibilities for misconfiguration 
 and security vulnerabilities.
 {quote}
 My understanding is that the use case of inlining the secret is never 
 supported. The property is used to pass the secret internally. The way it 
 works before HADOOP-10868 is the following:
 * Users specify the initializer of the authentication filter in the 
 configuration.
 * AuthenticationFilterInitializer reads the secret file. The server will not 
 start if the secret file does not exist. The initializer sets the 
 property only if it reads the file correctly.
 * There is no way to specify the secret in the configuration out-of-the-box – 
 the secret is always overwritten by AuthenticationFilterInitializer.
 {quote}
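 To make the quoted flow concrete, here is a minimal, hedged sketch of an initializer reading the secret file and injecting the secret into the filter configuration. The config key, property name, and class name below are illustrative assumptions, not the actual AuthenticationFilterInitializer source:
 {code}
 // Hedged sketch of the flow quoted above, not the actual Hadoop source:
 // the initializer reads the configured secret file and injects the secret
 // into the filter config, so the inline property is never user-supplied.
 import java.io.BufferedReader;
 import java.io.FileReader;
 import java.io.IOException;
 import java.util.Map;
 import java.util.Properties;

 public class SecretFileFlowSketch {
   public static Properties buildFilterConfig(Map<String, String> conf)
       throws IOException {
     Properties filterConfig = new Properties();
     // Key name is an assumption for illustration.
     String secretFile = conf.get("hadoop.http.authentication.signature.secret.file");
     if (secretFile == null) {
       // Server startup is expected to fail when no secret file is configured.
       throw new IOException("signature secret file is not configured");
     }
     try (BufferedReader reader = new BufferedReader(new FileReader(secretFile))) {
       // Startup also fails here if the file cannot be read.
       filterConfig.setProperty("signature.secret", reader.readLine());
     }
     return filterConfig;
   }
 }
 {code}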



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11553) Formalize the shell API

2015-03-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14383942#comment-14383942
 ] 

Hudson commented on HADOOP-11553:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2077 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2077/])
HADOOP-11553. Foramlize the shell API (aw) (aw: rev 
b30ca8ce0e0d435327e179f0877bd58fa3896793)
* hadoop-project/src/site/site.xml
* dev-support/shelldocs.py
* hadoop-common-project/hadoop-common/src/site/markdown/UnixShellGuide.md
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
* hadoop-common-project/hadoop-common/pom.xml
HADOOP-11553 addendum fix the typo in the changes file (aw: rev 
5695c7a541c1a3092040523446f1ba689fb495e3)
* hadoop-common-project/hadoop-common/CHANGES.txt


 Formalize the shell API
 ---

 Key: HADOOP-11553
 URL: https://issues.apache.org/jira/browse/HADOOP-11553
 Project: Hadoop Common
  Issue Type: New Feature
  Components: documentation, scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
Priority: Blocker
 Fix For: 3.0.0

 Attachments: HADOOP-11553-00.patch, HADOOP-11553-01.patch, 
 HADOOP-11553-02.patch, HADOOP-11553-03.patch, HADOOP-11553-04.patch, 
 HADOOP-11553-05.patch, HADOOP-11553-06.patch


 After HADOOP-11485, we need to formally document functions and environment 
 variables that 3rd parties can expect to be able to exist/use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11760) Fix typo of javadoc in DistCp

2015-03-27 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-11760:

Summary: Fix typo of javadoc in DistCp  (was: Typo in DistCp.java)

 Fix typo of javadoc in DistCp
 -

 Key: HADOOP-11760
 URL: https://issues.apache.org/jira/browse/HADOOP-11760
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Chen He
Assignee: Brahma Reddy Battula
Priority: Trivial
  Labels: newbie++
 Attachments: HADOOP-11760.patch


 /**
* Create a default working folder for the job, under the
* job staging directory
*
* @return Returns the working folder information
* @throws Exception - EXception if any
*/
   private Path createMetaFolderPath() throws Exception {



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11759) Javadoc of TockenCache has an extra parameter

2015-03-27 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14383875#comment-14383875
 ] 

Tsuyoshi Ozawa commented on HADOOP-11759:
-

+1, pending Jenkins.

 Javadoc of TockenCache has an extra parameter
 -

 Key: HADOOP-11759
 URL: https://issues.apache.org/jira/browse/HADOOP-11759
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.6.0
Reporter: Chen He
Assignee: Brahma Reddy Battula
Priority: Trivial
  Labels: newbie++
 Attachments: HADOOP-11759.patch


 /**
* get delegation token for a specific FS
* @param fs
* @param credentials
* @param p
* @param conf
* @throws IOException
*/
   static void obtainTokensForNamenodesInternal(FileSystem fs, 
   Credentials credentials, Configuration conf) throws IOException {



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11760) Fix typo of javadoc in DistCp

2015-03-27 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14383884#comment-14383884
 ] 

Tsuyoshi Ozawa commented on HADOOP-11760:
-

+1, committing this shortly.

 Fix typo of javadoc in DistCp
 -

 Key: HADOOP-11760
 URL: https://issues.apache.org/jira/browse/HADOOP-11760
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Chen He
Assignee: Brahma Reddy Battula
Priority: Trivial
  Labels: newbie++
 Attachments: HADOOP-11760.patch


 /**
* Create a default working folder for the job, under the
* job staging directory
*
* @return Returns the working folder information
* @throws Exception - EXception if any
*/
   private Path createMetaFolderPath() throws Exception {



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11760) Fix typo of javadoc in DistCp

2015-03-27 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11760?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-11760:

   Resolution: Fixed
Fix Version/s: 2.8.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Committed this to trunk and branch-2. Thanks [~brahmareddy] for your 
contribution and thanks [~airbots] for your report!

 Fix typo of javadoc in DistCp
 -

 Key: HADOOP-11760
 URL: https://issues.apache.org/jira/browse/HADOOP-11760
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Chen He
Assignee: Brahma Reddy Battula
Priority: Trivial
  Labels: newbie++
 Fix For: 2.8.0

 Attachments: HADOOP-11760.patch


 /**
* Create a default working folder for the job, under the
* job staging directory
*
* @return Returns the working folder information
* @throws Exception - EXception if any
*/
   private Path createMetaFolderPath() throws Exception {



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11691) X86 build of libwinutils is broken

2015-03-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14383940#comment-14383940
 ] 

Hudson commented on HADOOP-11691:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk #2077 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk/2077/])
HADOOP-11691. X86 build of libwinutils is broken. Contributed by Kiran Kumar M 
R. (cnauroth: rev af618f23a70508111f490a24d74fc90161cfc079)
* hadoop-common-project/hadoop-common/src/main/winutils/win8sdk.props
* hadoop-common-project/hadoop-common/CHANGES.txt


 X86 build of libwinutils is broken
 --

 Key: HADOOP-11691
 URL: https://issues.apache.org/jira/browse/HADOOP-11691
 Project: Hadoop Common
  Issue Type: Bug
  Components: build, native
Affects Versions: 2.7.0
Reporter: Remus Rusanu
Assignee: Kiran Kumar M R
Priority: Critical
 Fix For: 2.7.0

 Attachments: HADOOP-11691-001.patch, HADOOP-11691-002.patch, 
 HADOOP-11691-003.patch


 Hadoop-9922 recently fixed x86 build. After YARN-2190 compiling x86 results 
 in error:
 {code}
 (Link target) -
   
 E:\HW\project\hadoop-common\hadoop-common-project\hadoop-common\target/winutils/hadoopwinutilsvc_s.obj
  : fatal error LNK1112: module machine type 'x64' conflicts with target 
 machine type 'X86' 
 [E:\HW\project\hadoop-common\hadoop-common-project\hadoop-common\src\main\winutils\winutils.vcxproj]
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11759) Javadoc of TockenCache has an extra parameter

2015-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14383955#comment-14383955
 ] 

Hadoop QA commented on HADOOP-11759:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12707740/HADOOP-11759.patch
  against trunk revision af618f2.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The following test timeouts occurred in 
hadoop-hdfs-project/hadoop-hdfs:

org.apache.hadoop.hdfs.server.blockmanagement.TestDatanodeManager
org.apache.hadoop.hdfs.TestDatanodeDeath

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6013//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6013//console

This message is automatically generated.

 Javadoc of TockenCache has an extra parameter
 -

 Key: HADOOP-11759
 URL: https://issues.apache.org/jira/browse/HADOOP-11759
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.6.0
Reporter: Chen He
Assignee: Brahma Reddy Battula
Priority: Trivial
  Labels: newbie++
 Attachments: HADOOP-11759.patch


 /**
* get delegation token for a specific FS
* @param fs
* @param credentials
* @param p
* @param conf
* @throws IOException
*/
   static void obtainTokensForNamenodesInternal(FileSystem fs, 
   Credentials credentials, Configuration conf) throws IOException {



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11759) Javadoc of TockenCache has an extra parameter

2015-03-27 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11759?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14383958#comment-14383958
 ] 

Tsuyoshi Ozawa commented on HADOOP-11759:
-

The test failure is unrelated, since the patch only fixes javadoc. For the same 
reason, no new test is needed. Committing this shortly.

 Javadoc of TockenCache has an extra parameter
 -

 Key: HADOOP-11759
 URL: https://issues.apache.org/jira/browse/HADOOP-11759
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.6.0
Reporter: Chen He
Assignee: Brahma Reddy Battula
Priority: Trivial
  Labels: newbie++
 Attachments: HADOOP-11759.patch


 /**
* get delegation token for a specific FS
* @param fs
* @param credentials
* @param p
* @param conf
* @throws IOException
*/
   static void obtainTokensForNamenodesInternal(FileSystem fs, 
   Credentials credentials, Configuration conf) throws IOException {



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11759) Remove an extra parameter described in Javadoc of TockenCache

2015-03-27 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11759?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-11759:

Summary: Remove an extra parameter described in Javadoc of TockenCache  
(was: Javadoc of TockenCache has an extra parameter)

 Remove an extra parameter described in Javadoc of TockenCache
 -

 Key: HADOOP-11759
 URL: https://issues.apache.org/jira/browse/HADOOP-11759
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0, 2.6.0
Reporter: Chen He
Assignee: Brahma Reddy Battula
Priority: Trivial
  Labels: newbie++
 Attachments: HADOOP-11759.patch


 /**
* get delegation token for a specific FS
* @param fs
* @param credentials
* @param p
* @param conf
* @throws IOException
*/
   static void obtainTokensForNamenodesInternal(FileSystem fs, 
   Credentials credentials, Configuration conf) throws IOException {



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11717) Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth

2015-03-27 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-11717:
-
Status: Patch Available  (was: Open)

 Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth
 -

 Key: HADOOP-11717
 URL: https://issues.apache.org/jira/browse/HADOOP-11717
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Larry McCay
Assignee: Larry McCay
 Attachments: HADOOP-11717-1.patch, HADOOP-11717-2.patch, 
 HADOOP-11717-3.patch, HADOOP-11717-4.patch, HADOOP-11717-5.patch, 
 HADOOP-11717-6.patch


 Extend AltKerberosAuthenticationHandler to provide WebSSO flow for UIs.
 The actual authentication is done by some external service that the handler 
 will redirect to when there is no hadoop.auth cookie and no JWT token found 
 in the incoming request.
 Using JWT provides a number of benefits:
 * It is not tied to any specific authentication mechanism - so buys us many 
 SSO integrations
 * It is cryptographically verifiable for determining whether it can be trusted
 * Checking for expiration allows for a limited lifetime and window for 
 compromised use
 This will introduce the use of nimbus-jose-jwt library for processing, 
 validating and parsing JWT tokens.
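
 As a rough illustration of the verification and expiration checks that nimbus-jose-jwt enables (a hedged sketch, not the code in the attached patches; the helper class name and the assumption of an RSA-signed token are illustrative):
 {code}
 import java.security.interfaces.RSAPublicKey;
 import java.util.Date;

 import com.nimbusds.jose.JWSVerifier;
 import com.nimbusds.jose.crypto.RSASSAVerifier;
 import com.nimbusds.jwt.SignedJWT;

 public class JwtTrustSketch {
   /**
    * Returns true when the token parses, its signature verifies against the
    * SSO service's public key, and its expiration time has not passed.
    */
   public static boolean isTrusted(String serializedJwt, RSAPublicKey ssoPublicKey)
       throws Exception {
     SignedJWT jwt = SignedJWT.parse(serializedJwt);          // parse the compact serialization
     JWSVerifier verifier = new RSASSAVerifier(ssoPublicKey); // verify the RSA signature
     if (!jwt.verify(verifier)) {
       return false;                                          // cannot be trusted
     }
     Date expires = jwt.getJWTClaimsSet().getExpirationTime();
     return expires != null && expires.after(new Date());     // enforce the limited lifetime
   }
 }
 {code}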



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11717) Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth

2015-03-27 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-11717:
-
Attachment: HADOOP-11717-6.patch

Attaching new patch - it addresses:

* some minor changes to tests as requested by [~drankye]
* separation of certificate PEM parsing into CertificateUtil class
* more appropriate handling of token validation errors to reauthenticate rather 
than return a 403

I think any additional refactoring can be done later, as the need to leverage 
common code arises.



 Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth
 -

 Key: HADOOP-11717
 URL: https://issues.apache.org/jira/browse/HADOOP-11717
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Larry McCay
Assignee: Larry McCay
 Attachments: HADOOP-11717-1.patch, HADOOP-11717-2.patch, 
 HADOOP-11717-3.patch, HADOOP-11717-4.patch, HADOOP-11717-5.patch, 
 HADOOP-11717-6.patch


 Extend AltKerberosAuthenticationHandler to provide WebSSO flow for UIs.
 The actual authentication is done by some external service that the handler 
 will redirect to when there is no hadoop.auth cookie and no JWT token found 
 in the incoming request.
 Using JWT provides a number of benefits:
 * It is not tied to any specific authentication mechanism - so buys us many 
 SSO integrations
 * It is cryptographically verifiable for determining whether it can be trusted
 * Checking for expiration allows for a limited lifetime and window for 
 compromised use
 This will introduce the use of nimbus-jose-jwt library for processing, 
 validating and parsing JWT tokens.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11717) Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth

2015-03-27 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-11717:
-
Status: Open  (was: Patch Available)

 Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth
 -

 Key: HADOOP-11717
 URL: https://issues.apache.org/jira/browse/HADOOP-11717
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Larry McCay
Assignee: Larry McCay
 Attachments: HADOOP-11717-1.patch, HADOOP-11717-2.patch, 
 HADOOP-11717-3.patch, HADOOP-11717-4.patch, HADOOP-11717-5.patch, 
 HADOOP-11717-6.patch


 Extend AltKerberosAuthenticationHandler to provide WebSSO flow for UIs.
 The actual authentication is done by some external service that the handler 
 will redirect to when there is no hadoop.auth cookie and no JWT token found 
 in the incoming request.
 Using JWT provides a number of benefits:
 * It is not tied to any specific authentication mechanism - so buys us many 
 SSO integrations
 * It is cryptographically verifiable for determining whether it can be trusted
 * Checking for expiration allows for a limited lifetime and window for 
 compromised use
 This will introduce the use of nimbus-jose-jwt library for processing, 
 validating and parsing JWT tokens.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11717) Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth

2015-03-27 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-11717:
-
Status: Patch Available  (was: Open)

Resubmitting the patch - it shouldn't have caused the build to fail...

 Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth
 -

 Key: HADOOP-11717
 URL: https://issues.apache.org/jira/browse/HADOOP-11717
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Larry McCay
Assignee: Larry McCay
 Attachments: HADOOP-11717-1.patch, HADOOP-11717-2.patch, 
 HADOOP-11717-3.patch, HADOOP-11717-4.patch, HADOOP-11717-5.patch, 
 HADOOP-11717-6.patch


 Extend AltKerberosAuthenticationHandler to provide WebSSO flow for UIs.
 The actual authentication is done by some external service that the handler 
 will redirect to when there is no hadoop.auth cookie and no JWT token found 
 in the incoming request.
 Using JWT provides a number of benefits:
 * It is not tied to any specific authentication mechanism - so buys us many 
 SSO integrations
 * It is cryptographically verifiable for determining whether it can be trusted
 * Checking for expiration allows for a limited lifetime and window for 
 compromised use
 This will introduce the use of nimbus-jose-jwt library for processing, 
 validating and parsing JWT tokens.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-27 Thread Zhijie Shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384151#comment-14384151
 ] 

Zhijie Shen commented on HADOOP-11754:
--

I'm not sure why we want to prevent using the random secret in secure mode. 
As mentioned above, it's an incompatible semantics change that will break secure 
deployments of the RM web interface and the timeline server. I don't think we 
have conveyed this secure-setup requirement of a secret file to users (e.g., 
Ambari). [~vinodkv], any idea?
{code}
// Fallback to RandomSignerSecretProvider if the secret file is
// unspecified in insecure mode
if (!isSecurityEnabled && config.getProperty(SIGNATURE_SECRET_FILE) == null) {
  name = "random";
}
{code}

{code}
if (!isSecurityEnabled) {
  LOG.info("The signature secret of the authentication filter is " +
      "unspecified, falling back to use random secrets.");
  provider = new RandomSignerSecretProvider();
  provider.init(config, servletContext, validity);
} else {
  throw e;
}
{code}

 RM fails to start in non-secure mode due to authentication filter failure
 -

 Key: HADOOP-11754
 URL: https://issues.apache.org/jira/browse/HADOOP-11754
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Sangjin Lee
Assignee: Haohui Mai
Priority: Blocker
 Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch, 
 HADOOP-11754.000.patch, HADOOP-11754.001.patch


 RM fails to start in the non-secure mode with the following exception:
 {noformat}
 2015-03-25 22:02:42,526 WARN org.mortbay.log: failed RMAuthenticationFilter: 
 javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
 signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
 2015-03-25 22:02:42,526 WARN org.mortbay.log: Failed startup of context 
 org.mortbay.jetty.webapp.WebAppContext@6de50b08{/,jar:file:/Users/sjlee/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-common-3.0.0-SNAPSHOT.jar!/webapps/cluster}
 javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
 signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
   at 
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
   at 
 org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
   at 
 org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
   at 
 org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
   at 
 org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
   at 
 org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
   at org.mortbay.jetty.Server.doStart(Server.java:224)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
   at 
 org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
 Caused by: java.lang.RuntimeException: Could not read signature secret file: 
 /Users/sjlee/hadoop-http-auth-signature-secret
   at 
 

[jira] [Commented] (HADOOP-11717) Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth

2015-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384199#comment-14384199
 ] 

Hadoop QA commented on HADOOP-11717:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12707807/HADOOP-11717-6.patch
  against trunk revision 05499b1.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:red}-1 javac{color}.  The patch appears to cause the build to 
fail.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6015//console

This message is automatically generated.

 Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth
 -

 Key: HADOOP-11717
 URL: https://issues.apache.org/jira/browse/HADOOP-11717
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Larry McCay
Assignee: Larry McCay
 Attachments: HADOOP-11717-1.patch, HADOOP-11717-2.patch, 
 HADOOP-11717-3.patch, HADOOP-11717-4.patch, HADOOP-11717-5.patch, 
 HADOOP-11717-6.patch


 Extend AltKerberosAuthenticationHandler to provide WebSSO flow for UIs.
 The actual authentication is done by some external service that the handler 
 will redirect to when there is no hadoop.auth cookie and no JWT token found 
 in the incoming request.
 Using JWT provides a number of benefits:
 * It is not tied to any specific authentication mechanism - so buys us many 
 SSO integrations
 * It is cryptographically verifiable for determining whether it can be trusted
 * Checking for expiration allows for a limited lifetime and window for 
 compromised use
 This will introduce the use of nimbus-jose-jwt library for processing, 
 validating and parsing JWT tokens.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11731) Rework the changelog and releasenotes

2015-03-27 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384180#comment-14384180
 ] 

Allen Wittenauer commented on HADOOP-11731:
---

bq. Should we throw an exception if we can't parse the version?

Nope, otherwise {{releasedocmaker.py --version trunk-win}} would fail.
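
To illustrate the lenient behavior being defended here, a hedged Java sketch of version parsing that never throws on a label like trunk-win (releasedocmaker.py itself is Python; the names below are illustrative only):
{code}
import java.util.Optional;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class VersionParseSketch {
  private static final Pattern NUMERIC = Pattern.compile("(\\d+)\\.(\\d+)\\.(\\d+)");

  /** Returns the numeric components when present; otherwise empty, never an exception. */
  public static Optional<int[]> tryParse(String version) {
    Matcher m = NUMERIC.matcher(version);
    if (!m.matches()) {
      return Optional.empty();   // e.g. "trunk-win" is kept as an opaque label
    }
    return Optional.of(new int[] {
        Integer.parseInt(m.group(1)),
        Integer.parseInt(m.group(2)),
        Integer.parseInt(m.group(3))});
  }
}
{code}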

 Rework the changelog and releasenotes
 -

 Key: HADOOP-11731
 URL: https://issues.apache.org/jira/browse/HADOOP-11731
 Project: Hadoop Common
  Issue Type: New Feature
  Components: documentation
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-11731-00.patch, HADOOP-11731-01.patch, 
 HADOOP-11731-03.patch, HADOOP-11731-04.patch


 The current way we generate these build artifacts is awful.  Plus they are 
 ugly and, in the case of release notes, it is very hard to pick out what is 
 important.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11763) RM in insecure model get start failure after HADOOP-10670.

2015-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11763?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384110#comment-14384110
 ] 

Hadoop QA commented on HADOOP-11763:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12707784/HADOOP-11763.patch
  against trunk revision 05499b1.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:red}-1 tests included{color}.  The patch doesn't appear to include 
any new or modified tests.
Please justify why no new tests are needed for this 
patch.
Also please list what manual steps were performed to 
verify this patch.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-common:

  org.apache.hadoop.ipc.TestRPCWaitForProxy

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6014//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6014//console

This message is automatically generated.

 RM in insecure model get start failure after HADOOP-10670.
 --

 Key: HADOOP-11763
 URL: https://issues.apache.org/jira/browse/HADOOP-11763
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Junping Du
Priority: Blocker
 Attachments: HADOOP-11763.patch


 TestDistributedShell failed due to RM start failure.
 The log exception:
 {code}
 2015-03-27 14:43:17,190 WARN  [RM-0] mortbay.log (Slf4jLog.java:warn(89)) - 
 Failed startup of context 
 org.mortbay.jetty.webapp.WebAppContext@2d2d0132{/,file:/Users/jdu/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/classes/webapps/cluster}
 javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
 signature secret file: /Users/jdu/hadoop-http-auth-signature-secret
 at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
 at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
 at 
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
 at 
 org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
 at 
 org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
 at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
 at 
 org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
 at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
 at 
 org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
 at 
 org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
 at 
 org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
 at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
 at 
 org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
 at 
 org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
 at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
 at 
 org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
 at org.mortbay.jetty.Server.doStart(Server.java:224)
 at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
 at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
 at 
 org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:989)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1089)
 at 
 org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
 at 
 org.apache.hadoop.yarn.server.MiniYARNCluster$2.run(MiniYARNCluster.java:312)
 Caused by: 

[jira] [Moved] (HADOOP-11763) TestDistributedShell get failed due to RM start failure.

2015-03-27 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du moved YARN-3408 to HADOOP-11763:
---

 Component/s: (was: resourcemanager)
  security
Target Version/s: 2.7.0  (was: 2.8.0)
 Key: HADOOP-11763  (was: YARN-3408)
 Project: Hadoop Common  (was: Hadoop YARN)

 TestDistributedShell get failed due to RM start failure.
 

 Key: HADOOP-11763
 URL: https://issues.apache.org/jira/browse/HADOOP-11763
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Junping Du
Assignee: Junping Du

 The log exception:
 {code}
 2015-03-27 14:43:17,190 WARN  [RM-0] mortbay.log (Slf4jLog.java:warn(89)) - 
 Failed startup of context 
 org.mortbay.jetty.webapp.WebAppContext@2d2d0132{/,file:/Users/jdu/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/classes/webapps/cluster}
 javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
 signature secret file: /Users/jdu/hadoop-http-auth-signature-secret
 at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
 at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
 at 
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
 at 
 org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
 at 
 org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
 at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
 at 
 org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
 at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
 at 
 org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
 at 
 org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
 at 
 org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
 at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
 at 
 org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
 at 
 org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
 at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
 at 
 org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
 at org.mortbay.jetty.Server.doStart(Server.java:224)
 at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
 at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
 at 
 org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:989)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1089)
 at 
 org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
 at 
 org.apache.hadoop.yarn.server.MiniYARNCluster$2.run(MiniYARNCluster.java:312)
 Caused by: java.lang.RuntimeException: Could not read signature secret file: 
 /Users/jdu/hadoop-http-auth-signature-secret
 at 
 org.apache.hadoop.security.authentication.util.FileSignerSecretProvider.init(FileSignerSecretProvider.java:59)
 at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:264)
 ... 23 more
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11691) X86 build of libwinutils is broken

2015-03-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11691?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14383985#comment-14383985
 ] 

Hudson commented on HADOOP-11691:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #136 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/136/])
HADOOP-11691. X86 build of libwinutils is broken. Contributed by Kiran Kumar M 
R. (cnauroth: rev af618f23a70508111f490a24d74fc90161cfc079)
* hadoop-common-project/hadoop-common/src/main/winutils/win8sdk.props
* hadoop-common-project/hadoop-common/CHANGES.txt


 X86 build of libwinutils is broken
 --

 Key: HADOOP-11691
 URL: https://issues.apache.org/jira/browse/HADOOP-11691
 Project: Hadoop Common
  Issue Type: Bug
  Components: build, native
Affects Versions: 2.7.0
Reporter: Remus Rusanu
Assignee: Kiran Kumar M R
Priority: Critical
 Fix For: 2.7.0

 Attachments: HADOOP-11691-001.patch, HADOOP-11691-002.patch, 
 HADOOP-11691-003.patch


 Hadoop-9922 recently fixed x86 build. After YARN-2190 compiling x86 results 
 in error:
 {code}
 (Link target) -
   
 E:\HW\project\hadoop-common\hadoop-common-project\hadoop-common\target/winutils/hadoopwinutilsvc_s.obj
  : fatal error LNK1112: module machine type 'x64' conflicts with target 
 machine type 'X86' 
 [E:\HW\project\hadoop-common\hadoop-common-project\hadoop-common\src\main\winutils\winutils.vcxproj]
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11553) Formalize the shell API

2015-03-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11553?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14383987#comment-14383987
 ] 

Hudson commented on HADOOP-11553:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #136 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/136/])
HADOOP-11553. Foramlize the shell API (aw) (aw: rev 
b30ca8ce0e0d435327e179f0877bd58fa3896793)
* hadoop-common-project/hadoop-common/CHANGES.txt
* hadoop-project/src/site/site.xml
* hadoop-common-project/hadoop-common/pom.xml
* hadoop-common-project/hadoop-common/src/site/markdown/UnixShellGuide.md
* hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
* dev-support/shelldocs.py
HADOOP-11553 addendum fix the typo in the changes file (aw: rev 
5695c7a541c1a3092040523446f1ba689fb495e3)
* hadoop-common-project/hadoop-common/CHANGES.txt


 Formalize the shell API
 ---

 Key: HADOOP-11553
 URL: https://issues.apache.org/jira/browse/HADOOP-11553
 Project: Hadoop Common
  Issue Type: New Feature
  Components: documentation, scripts
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer
Priority: Blocker
 Fix For: 3.0.0

 Attachments: HADOOP-11553-00.patch, HADOOP-11553-01.patch, 
 HADOOP-11553-02.patch, HADOOP-11553-03.patch, HADOOP-11553-04.patch, 
 HADOOP-11553-05.patch, HADOOP-11553-06.patch


 After HADOOP-11485, we need to formally document functions and environment 
 variables that 3rd parties can expect to be able to exist/use.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11763) RM in insecure model get start failure after HADOOP-10670.

2015-03-27 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-11763:

Priority: Blocker  (was: Major)

 RM in insecure model get start failure after HADOOP-10670.
 --

 Key: HADOOP-11763
 URL: https://issues.apache.org/jira/browse/HADOOP-11763
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Junping Du
Assignee: Junping Du
Priority: Blocker
 Attachments: HADOOP-11763.patch


 TestDistributedShell failed due to RM start failure.
 The log exception:
 {code}
 2015-03-27 14:43:17,190 WARN  [RM-0] mortbay.log (Slf4jLog.java:warn(89)) - 
 Failed startup of context 
 org.mortbay.jetty.webapp.WebAppContext@2d2d0132{/,file:/Users/jdu/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/classes/webapps/cluster}
 javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
 signature secret file: /Users/jdu/hadoop-http-auth-signature-secret
 at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
 at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
 at 
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
 at 
 org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
 at 
 org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
 at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
 at 
 org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
 at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
 at 
 org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
 at 
 org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
 at 
 org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
 at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
 at 
 org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
 at 
 org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
 at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
 at 
 org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
 at org.mortbay.jetty.Server.doStart(Server.java:224)
 at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
 at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
 at 
 org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:989)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1089)
 at 
 org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
 at 
 org.apache.hadoop.yarn.server.MiniYARNCluster$2.run(MiniYARNCluster.java:312)
 Caused by: java.lang.RuntimeException: Could not read signature secret file: 
 /Users/jdu/hadoop-http-auth-signature-secret
 at 
 org.apache.hadoop.security.authentication.util.FileSignerSecretProvider.init(FileSignerSecretProvider.java:59)
 at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:264)
 ... 23 more
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11763) RM in insecure model get start failure after HADOOP-10670.

2015-03-27 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-11763:

Status: Patch Available  (was: Open)

 RM in insecure model get start failure after HADOOP-10670.
 --

 Key: HADOOP-11763
 URL: https://issues.apache.org/jira/browse/HADOOP-11763
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Junping Du
Assignee: Junping Du
 Attachments: HADOOP-11763.patch


 TestDistributedShell failed due to RM start failure.
 The log exception:
 {code}
 2015-03-27 14:43:17,190 WARN  [RM-0] mortbay.log (Slf4jLog.java:warn(89)) - 
 Failed startup of context 
 org.mortbay.jetty.webapp.WebAppContext@2d2d0132{/,file:/Users/jdu/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/classes/webapps/cluster}
 javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
 signature secret file: /Users/jdu/hadoop-http-auth-signature-secret
 at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
 at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
 at 
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
 at 
 org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
 at 
 org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
 at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
 at 
 org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
 at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
 at 
 org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
 at 
 org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
 at 
 org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
 at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
 at 
 org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
 at 
 org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
 at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
 at 
 org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
 at org.mortbay.jetty.Server.doStart(Server.java:224)
 at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
 at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
 at 
 org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:989)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1089)
 at 
 org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
 at 
 org.apache.hadoop.yarn.server.MiniYARNCluster$2.run(MiniYARNCluster.java:312)
 Caused by: java.lang.RuntimeException: Could not read signature secret file: 
 /Users/jdu/hadoop-http-auth-signature-secret
 at 
 org.apache.hadoop.security.authentication.util.FileSignerSecretProvider.init(FileSignerSecretProvider.java:59)
 at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:264)
 ... 23 more
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11763) RM in insecure model get start failure after HADOOP-10670.

2015-03-27 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-11763:

Attachment: HADOOP-11763.patch

 RM in insecure model get start failure after HADOOP-10670.
 --

 Key: HADOOP-11763
 URL: https://issues.apache.org/jira/browse/HADOOP-11763
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Junping Du
Assignee: Junping Du
 Attachments: HADOOP-11763.patch


 TestDistributedShell failed due to RM start failure.
 The log exception:
 {code}
 2015-03-27 14:43:17,190 WARN  [RM-0] mortbay.log (Slf4jLog.java:warn(89)) - 
 Failed startup of context 
 org.mortbay.jetty.webapp.WebAppContext@2d2d0132{/,file:/Users/jdu/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/classes/webapps/cluster}
 javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
 signature secret file: /Users/jdu/hadoop-http-auth-signature-secret
 at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
 at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
 at 
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
 at 
 org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
 at 
 org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
 at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
 at 
 org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
 at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
 at 
 org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
 at 
 org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
 at 
 org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
 at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
 at 
 org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
 at 
 org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
 at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
 at 
 org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
 at org.mortbay.jetty.Server.doStart(Server.java:224)
 at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
 at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
 at 
 org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:989)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1089)
 at 
 org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
 at 
 org.apache.hadoop.yarn.server.MiniYARNCluster$2.run(MiniYARNCluster.java:312)
 Caused by: java.lang.RuntimeException: Could not read signature secret file: 
 /Users/jdu/hadoop-http-auth-signature-secret
 at 
 org.apache.hadoop.security.authentication.util.FileSignerSecretProvider.init(FileSignerSecretProvider.java:59)
 at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:264)
 ... 23 more
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11760) Fix typo of javadoc in DistCp

2015-03-27 Thread Brahma Reddy Battula (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384012#comment-14384012
 ] 

Brahma Reddy Battula commented on HADOOP-11760:
---

Thanks a lot [~ozawa]!!!

 Fix typo of javadoc in DistCp
 -

 Key: HADOOP-11760
 URL: https://issues.apache.org/jira/browse/HADOOP-11760
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Chen He
Assignee: Brahma Reddy Battula
Priority: Trivial
  Labels: newbie++
 Fix For: 2.8.0

 Attachments: HADOOP-11760.patch


 /**
* Create a default working folder for the job, under the
* job staging directory
*
* @return Returns the working folder information
* @throws Exception - EXception if any
*/
   private Path createMetaFolderPath() throws Exception {



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11748) The secrets of auth cookies should not be specified in configuration in clear text

2015-03-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11748?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14383986#comment-14383986
 ] 

Hudson commented on HADOOP-11748:
-

SUCCESS: Integrated in Hadoop-Hdfs-trunk-Java8 #136 (See 
[https://builds.apache.org/job/Hadoop-Hdfs-trunk-Java8/136/])
HADOOP-11748. The secrets of auth cookies should not be specified in 
configuration in clear text. Contributed by Li Lu and Haohui Mai. (wheat9: rev 
47782cbf4a66d49064fd3dd6d1d1a19cc42157fc)
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/StringSignerSecretProviderCreator.java
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java
* hadoop-common-project/hadoop-common/CHANGES.txt
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/util/StringSignerSecretProvider.java
* 
hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/util/StringSignerSecretProvider.java
* 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/test/java/org/apache/hadoop/fs/http/server/TestHttpFSServer.java
* hadoop-hdfs-project/hadoop-hdfs-httpfs/pom.xml
* 
hadoop-common-project/hadoop-auth/src/test/java/org/apache/hadoop/security/authentication/server/TestAuthenticationFilter.java


 The secrets of auth cookies should not be specified in configuration in clear 
 text
 --

 Key: HADOOP-11748
 URL: https://issues.apache.org/jira/browse/HADOOP-11748
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Haohui Mai
Assignee: Li Lu
Priority: Critical
 Fix For: 2.7.0

 Attachments: HADOOP-11748-032615-poc.patch, HADOOP-11748.001.patch


 Based on the discussion on HADOOP-10670, this jira proposes to remove 
 {{StringSecretProvider}} as it opens up possibilities for misconfiguration 
 and security vulnerabilities.
 {quote}
 My understanding is that the use case of inlining the secret is never 
 supported. The property is used to pass the secret internally. The way it 
 works before HADOOP-10868 is the following:
 * Users specify the initializer of the authentication filter in the 
 configuration.
 * AuthenticationFilterInitializer reads the secret file. The server will not 
 start if the secret file does not exist. The initializer sets the 
 property only if it reads the file correctly.
 * There is no way to specify the secret in the configuration out-of-the-box – 
 the secret is always overwritten by AuthenticationFilterInitializer.
 {quote}
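 A minimal sketch of the pre-HADOOP-10868 flow outlined in the quoted 
 description (illustrative only; the helper name and the signature.secret / 
 signature.secret.file keys are assumptions here, not the actual 
 AuthenticationFilterInitializer code):
 {code}
 // Illustrative sketch of the described flow -- not the real initializer.
 // 1) read the configured secret file, 2) refuse to start if it is missing,
 // 3) overwrite the in-memory secret property with the file contents.
 import java.io.IOException;
 import java.nio.charset.StandardCharsets;
 import java.nio.file.Files;
 import java.nio.file.Path;
 import java.nio.file.Paths;
 import java.util.Map;
 
 final class SecretFileLoaderSketch {
   static void loadSecretIntoConf(Map<String, String> conf) throws IOException {
     String file = conf.get("signature.secret.file");      // assumed key name
     Path p = Paths.get(file);
     if (!Files.exists(p)) {
       // the server refuses to start when the secret file is absent
       throw new IOException("Secret file does not exist: " + file);
     }
     String secret =
         new String(Files.readAllBytes(p), StandardCharsets.UTF_8).trim();
     conf.put("signature.secret", secret);  // always overwrites any inlined value
   }
 }
 {code}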



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11763) RM in insecure model get start failure after HADOOP-10670.

2015-03-27 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-11763:

Summary: RM in insecure model get start failure after HADOOP-10670.  (was: 
TestDistributedShell get failed due to RM start failure.)

 RM in insecure model get start failure after HADOOP-10670.
 --

 Key: HADOOP-11763
 URL: https://issues.apache.org/jira/browse/HADOOP-11763
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Junping Du
Assignee: Junping Du

 TestDistributedShell get failed due to RM start failure.
 The log exception:
 {code}
 2015-03-27 14:43:17,190 WARN  [RM-0] mortbay.log (Slf4jLog.java:warn(89)) - 
 Failed startup of context 
 org.mortbay.jetty.webapp.WebAppContext@2d2d0132{/,file:/Users/jdu/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-comm
 on/target/classes/webapps/cluster}
 javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
 signature secret file: /Users/jdu/hadoop-http-auth-signature-secret
 at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
 at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
 at 
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
 at 
 org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
 at 
 org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
 at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
 at 
 org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
 at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
 at 
 org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
 at 
 org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
 at 
 org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
 at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
 at 
 org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
 at 
 org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
 at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
 at 
 org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
 at org.mortbay.jetty.Server.doStart(Server.java:224)
 at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
 at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
 at 
 org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:989)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1089)
 at 
 org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
 at 
 org.apache.hadoop.yarn.server.MiniYARNCluster$2.run(MiniYARNCluster.java:312)
 Caused by: java.lang.RuntimeException: Could not read signature secret file: 
 /Users/jdu/hadoop-http-auth-signature-secret
 at 
 org.apache.hadoop.security.authentication.util.FileSignerSecretProvider.init(FileSignerSecretProvider.java:59)
 at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:264)
 ... 23 more
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11763) RM in insecure model get start failure after HADOOP-10670.

2015-03-27 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-11763:

Resolution: Duplicate
  Assignee: (was: Junping Du)
Status: Resolved  (was: Patch Available)

 RM in insecure model get start failure after HADOOP-10670.
 --

 Key: HADOOP-11763
 URL: https://issues.apache.org/jira/browse/HADOOP-11763
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Junping Du
Priority: Blocker
 Attachments: HADOOP-11763.patch


 TestDistributedShell get failed due to RM start failure.
 The log exception:
 {code}
 2015-03-27 14:43:17,190 WARN  [RM-0] mortbay.log (Slf4jLog.java:warn(89)) - 
 Failed startup of context 
 org.mortbay.jetty.webapp.WebAppContext@2d2d0132{/,file:/Users/jdu/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-comm
 on/target/classes/webapps/cluster}
 javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
 signature secret file: /Users/jdu/hadoop-http-auth-signature-secret
 at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
 at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
 at 
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
 at 
 org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
 at 
 org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
 at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
 at 
 org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
 at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
 at 
 org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
 at 
 org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
 at 
 org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
 at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
 at 
 org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
 at 
 org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
 at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
 at 
 org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
 at org.mortbay.jetty.Server.doStart(Server.java:224)
 at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
 at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
 at 
 org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:989)
 at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1089)
 at 
 org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
 at 
 org.apache.hadoop.yarn.server.MiniYARNCluster$2.run(MiniYARNCluster.java:312)
 Caused by: java.lang.RuntimeException: Could not read signature secret file: 
 /Users/jdu/hadoop-http-auth-signature-secret
 at 
 org.apache.hadoop.security.authentication.util.FileSignerSecretProvider.init(FileSignerSecretProvider.java:59)
 at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:264)
 ... 23 more
 {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11763) TestDistributedShell get failed due to RM start failure.

2015-03-27 Thread Junping Du (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11763?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Junping Du updated HADOOP-11763:

Description: 
TestDistributedShell get failed due to RM start failure.
The log exception:
{code}
2015-03-27 14:43:17,190 WARN  [RM-0] mortbay.log (Slf4jLog.java:warn(89)) - 
Failed startup of context 
org.mortbay.jetty.webapp.WebAppContext@2d2d0132{/,file:/Users/jdu/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-comm
on/target/classes/webapps/cluster}
javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
signature secret file: /Users/jdu/hadoop-http-auth-signature-secret
at 
org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
at 
org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
at 
org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
at 
org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at 
org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
at 
org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
at 
org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
at 
org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
at 
org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at 
org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
at 
org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
at 
org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at 
org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
at org.mortbay.jetty.Server.doStart(Server.java:224)
at 
org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:989)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1089)
at 
org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at 
org.apache.hadoop.yarn.server.MiniYARNCluster$2.run(MiniYARNCluster.java:312)
Caused by: java.lang.RuntimeException: Could not read signature secret file: 
/Users/jdu/hadoop-http-auth-signature-secret
at 
org.apache.hadoop.security.authentication.util.FileSignerSecretProvider.init(FileSignerSecretProvider.java:59)
at 
org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:264)
... 23 more
{code}

  was:
The log exception:
{code}
2015-03-27 14:43:17,190 WARN  [RM-0] mortbay.log (Slf4jLog.java:warn(89)) - 
Failed startup of context 
org.mortbay.jetty.webapp.WebAppContext@2d2d0132{/,file:/Users/jdu/hadoop/hadoop-yarn-project/hadoop-yarn/hadoop-yarn-common/target/classes/webapps/cluster}
javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
signature secret file: /Users/jdu/hadoop-http-auth-signature-secret
at 
org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
at 
org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
at 
org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
at 
org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at 
org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
at 
org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
at 
org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
at 
org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
at 

[jira] [Commented] (HADOOP-10670) Allow AuthenticationFilters to load secret from signature secret files

2015-03-27 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384015#comment-14384015
 ] 

Junping Du commented on HADOOP-10670:
-

Gentlemen, while tracing the YARN test failures (TestDistributedShell) I 
found that this patch breaks RM startup in insecure mode, which is very 
risky for 2.7. I just filed HADOOP-11763 and delivered a quick patch to fix it 
(comment out the default value of 
hadoop.http.authentication.signature.secret.file). 
I am not sure if we can quickly find a better way (like the comments above - 
modify the RM to avoid binding the filter when it is not in secure mode). If 
not, let's go with the easy way like HADOOP-11763, or we should revert the 
change here for the 2.7 release.
CC to [~vinodkv].
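
For illustration, the quick fix described above amounts to commenting out an 
entry along these lines in core-default.xml (the exact text of the real entry 
may differ slightly):

{code}
<!-- Illustrative core-default.xml fragment; shown here only to make the
     proposed quick fix concrete. -->
<!--
<property>
  <name>hadoop.http.authentication.signature.secret.file</name>
  <value>${user.home}/hadoop-http-auth-signature-secret</value>
</property>
-->
{code}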

 Allow AuthenticationFilters to load secret from signature secret files
 --

 Key: HADOOP-10670
 URL: https://issues.apache.org/jira/browse/HADOOP-10670
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Kai Zheng
Assignee: Kai Zheng
Priority: Minor
 Fix For: 2.7.0

 Attachments: HADOOP-10670-v4.patch, HADOOP-10670-v5.patch, 
 HADOOP-10670-v6.patch, hadoop-10670-v2.patch, hadoop-10670-v3.patch, 
 hadoop-10670.patch


 In the Hadoop web console, when AuthenticationFilterInitializer is used, it is 
 possible to configure AuthenticationFilter with the required signature secret 
 by specifying the signature.secret.file property. This improvement would also 
 allow this when AuthenticationFilterInitializer isn't used, in situations like 
 webhdfs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-27 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384074#comment-14384074
 ] 

Kai Zheng commented on HADOOP-11754:


I checked the failures; they're caused by the patch. The cause is in 
{{TestKerberosAuthenticator}}: it runs in secure mode, but no signature file 
property is set. In the patch, when in secure mode, the {{file}} type is used 
without checking whether the signature file property is set, so 
{{FileSignerSecretProvider}} is used anyway. Inside it, if no signature file 
property is set, no file read is attempted and thus no exception is thrown. 
Therefore its {{getCurrentSecret()}} will return null even though its 
{{init()}} is successful.
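
For readers following the analysis, here is a minimal, purely illustrative 
sketch of the failure mode described above (class and field names are made up; 
this is not the actual {{FileSignerSecretProvider}} source):

{code}
// Illustrative only. Models the described behavior: when no signature file
// is configured, init() skips the file read (so no exception is thrown),
// and getCurrentSecret() then returns null.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

public class FileSecretSketch {
  private byte[] secret;   // stays null if init() never reads a file

  public void init(String signatureSecretFile) throws IOException {
    if (signatureSecretFile == null) {
      return;              // no file property set: silently "succeed"
    }
    secret = Files.readAllBytes(Paths.get(signatureSecretFile));
  }

  public byte[] getCurrentSecret() {
    return secret;         // null when the read was skipped
  }

  public static void main(String[] args) throws IOException {
    FileSecretSketch provider = new FileSecretSketch();
    provider.init(null);   // secure mode selected, but no signature file set
    System.out.println(provider.getCurrentSecret());   // prints "null"
  }
}
{code}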

 RM fails to start in non-secure mode due to authentication filter failure
 -

 Key: HADOOP-11754
 URL: https://issues.apache.org/jira/browse/HADOOP-11754
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Sangjin Lee
Assignee: Haohui Mai
Priority: Blocker
 Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch, 
 HADOOP-11754.000.patch, HADOOP-11754.001.patch


 RM fails to start in the non-secure mode with the following exception:
 {noformat}
 2015-03-25 22:02:42,526 WARN org.mortbay.log: failed RMAuthenticationFilter: 
 javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
 signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
 2015-03-25 22:02:42,526 WARN org.mortbay.log: Failed startup of context 
 org.mortbay.jetty.webapp.WebAppContext@6de50b08{/,jar:file:/Users/sjlee/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-common-3.0.0-SNAPSHOT.jar!/webapps/cluster}
 javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
 signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
   at 
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
   at 
 org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
   at 
 org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
   at 
 org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
   at 
 org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
   at 
 org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
   at org.mortbay.jetty.Server.doStart(Server.java:224)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
   at 
 org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
 Caused by: java.lang.RuntimeException: Could not read signature secret file: 
 /Users/sjlee/hadoop-http-auth-signature-secret
   at 
 org.apache.hadoop.security.authentication.util.FileSignerSecretProvider.init(FileSignerSecretProvider.java:59)
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:264)
   ... 23 more
 ...
 2015-03-25 22:02:42,538 FATAL 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting 
 ResourceManager
 org.apache.hadoop.yarn.webapp.WebAppException: Error starting http server
   at 

[jira] [Commented] (HADOOP-10670) Allow AuthenticationFilters to load secret from signature secret files

2015-03-27 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10670?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384072#comment-14384072
 ] 

Junping Du commented on HADOOP-10670:
-

Just found that HADOOP-11754 is already there. Marking HADOOP-11763 as a 
duplicate.

 Allow AuthenticationFilters to load secret from signature secret files
 --

 Key: HADOOP-10670
 URL: https://issues.apache.org/jira/browse/HADOOP-10670
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Kai Zheng
Assignee: Kai Zheng
Priority: Minor
 Fix For: 2.7.0

 Attachments: HADOOP-10670-v4.patch, HADOOP-10670-v5.patch, 
 HADOOP-10670-v6.patch, hadoop-10670-v2.patch, hadoop-10670-v3.patch, 
 hadoop-10670.patch


 In the Hadoop web console, when AuthenticationFilterInitializer is used, it is 
 possible to configure AuthenticationFilter with the required signature secret 
 by specifying the signature.secret.file property. This improvement would also 
 allow this when AuthenticationFilterInitializer isn't used, in situations like 
 webhdfs.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-27 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384455#comment-14384455
 ] 

Haohui Mai commented on HADOOP-11754:
-

bq. I'm not sure why we want to prevent using the random secret in the secure 
mode. 

This is for fallback only. The behavior is consistent with the previous 
behavior. The authentication filter bails out when the secret is not found. 
This is true for both RM and other users of the authentication filters.

bq. As is mentioned above, it's an incompatible semantics change, which will 
break RM web interface and timeline server secure deployment. 

Can you be more specific? What are the behaviors before and after the changes?

bq. To be specific, the timeline server never had a default secret file before. 
This patch forces it to have one.

I'm confused. What does the timeline server have to do with 
{{RMFilterInitializer}}? I think it is a separate issue and we can look at it 
in a separate jira.

 RM fails to start in non-secure mode due to authentication filter failure
 -

 Key: HADOOP-11754
 URL: https://issues.apache.org/jira/browse/HADOOP-11754
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Sangjin Lee
Assignee: Haohui Mai
Priority: Blocker
 Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch, 
 HADOOP-11754.000.patch, HADOOP-11754.001.patch


 RM fails to start in the non-secure mode with the following exception:
 {noformat}
 2015-03-25 22:02:42,526 WARN org.mortbay.log: failed RMAuthenticationFilter: 
 javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
 signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
 2015-03-25 22:02:42,526 WARN org.mortbay.log: Failed startup of context 
 org.mortbay.jetty.webapp.WebAppContext@6de50b08{/,jar:file:/Users/sjlee/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-common-3.0.0-SNAPSHOT.jar!/webapps/cluster}
 javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
 signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
   at 
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
   at 
 org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
   at 
 org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
   at 
 org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
   at 
 org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
   at 
 org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
   at org.mortbay.jetty.Server.doStart(Server.java:224)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
   at 
 org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
 Caused by: java.lang.RuntimeException: Could not read signature secret file: 
 /Users/sjlee/hadoop-http-auth-signature-secret
   at 
 org.apache.hadoop.security.authentication.util.FileSignerSecretProvider.init(FileSignerSecretProvider.java:59)
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:264)
   ... 

[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-27 Thread Zhijie Shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384423#comment-14384423
 ] 

Zhijie Shen commented on HADOOP-11754:
--

To be specific, the timeline server never had a default secret file before. 
This patch forces it to have one.

 RM fails to start in non-secure mode due to authentication filter failure
 -

 Key: HADOOP-11754
 URL: https://issues.apache.org/jira/browse/HADOOP-11754
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Sangjin Lee
Assignee: Haohui Mai
Priority: Blocker
 Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch, 
 HADOOP-11754.000.patch, HADOOP-11754.001.patch


 RM fails to start in the non-secure mode with the following exception:
 {noformat}
 2015-03-25 22:02:42,526 WARN org.mortbay.log: failed RMAuthenticationFilter: 
 javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
 signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
 2015-03-25 22:02:42,526 WARN org.mortbay.log: Failed startup of context 
 org.mortbay.jetty.webapp.WebAppContext@6de50b08{/,jar:file:/Users/sjlee/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-common-3.0.0-SNAPSHOT.jar!/webapps/cluster}
 javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
 signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
   at 
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
   at 
 org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
   at 
 org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
   at 
 org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
   at 
 org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
   at 
 org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
   at org.mortbay.jetty.Server.doStart(Server.java:224)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
   at 
 org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
 Caused by: java.lang.RuntimeException: Could not read signature secret file: 
 /Users/sjlee/hadoop-http-auth-signature-secret
   at 
 org.apache.hadoop.security.authentication.util.FileSignerSecretProvider.init(FileSignerSecretProvider.java:59)
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:264)
   ... 23 more
 ...
 2015-03-25 22:02:42,538 FATAL 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting 
 ResourceManager
 org.apache.hadoop.yarn.webapp.WebAppException: Error starting http server
   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:279)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
   at 
 org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
   at 
 

[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-27 Thread Zhijie Shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384532#comment-14384532
 ] 

Zhijie Shen commented on HADOOP-11754:
--

Before 2.7:

* {{AuthenticationFilterInitializer}}, {{RMAuthenticationFilterInitializer}} 
and {{TimelineAuthenticationFilterInitializer}} all read the secret file, but 
behave slightly differently. {{FileSignerSecretProvider}} seems to adopt the 
behavior of {{RMAuthenticationFilterInitializer}}. However, unlike 
{{RMAuthenticationFilterInitializer}}, {{AuthenticationFilterInitializer}} 
doesn't allow a null secret file path, while 
{{TimelineAuthenticationFilterInitializer}} DOESN'T have a default secret file 
path.

* {{AuthenticationFilter}} checks whether a customized secret exists (whether 
it comes from the secret file or is put directly in the configuration) to 
decide whether to fall back to a random secret, regardless of whether 
{{AuthenticationFilter}} is used in secure mode (Kerberos handler) or insecure 
mode (Pseudo handler).

After these changes in 2.7:

* {{RMAuthenticationFilterInitializer}}'s behavior is chosen as the standard.

* {{AuthenticationFilter}} no longer accepts a secret that is put inside the 
configuration file. It may not be the best practice, but it was a valid 
scenario before. {{AuthenticationFilter}} also forces the user to have the 
secret file in secure mode, and it cannot fall back to a random secret.

Talking about the timeline server specifically, when the timeline server is 
started in secure mode with the default secret config, the following happens:

1. It tries to read the secret file, but the file doesn't exist.
2. It checks and finds it is in secure mode, throws the exception, and 
consequently the timeline server fails to start.

bq. I think it is a separate issue and we can look at it in a separate jira.

I'm afraid it's not a separate issue. This change is going to break the 
timeline server secure deployment.
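
To make the contrast concrete, here is a schematic sketch of the two behaviors 
described above (illustrative only, with made-up method names; it is not the 
actual {{AuthenticationFilter}} logic):

{code}
// Illustrative comparison of the described behaviors; not real Hadoop code.
final class SecretSelectionSketch {

  // Pre-2.7, as described: any customized secret (from the secret file or
  // inlined in the configuration) is used; otherwise fall back to a random
  // secret, in both secure and insecure mode.
  static byte[] pre27(byte[] customizedSecret) {
    return customizedSecret != null ? customizedSecret : randomSecret();
  }

  // 2.7, as described: an inlined secret is no longer accepted, and in
  // secure mode a readable secret file is mandatory; there is no random
  // fallback, so a missing file aborts startup.
  static byte[] in27(byte[] secretFromFile, boolean secureMode) {
    if (secretFromFile != null) {
      return secretFromFile;
    }
    if (secureMode) {
      throw new IllegalStateException("secret file required in secure mode");
    }
    return randomSecret();
  }

  private static byte[] randomSecret() {
    byte[] bytes = new byte[32];
    new java.util.Random().nextBytes(bytes);
    return bytes;
  }
}
{code}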

 RM fails to start in non-secure mode due to authentication filter failure
 -

 Key: HADOOP-11754
 URL: https://issues.apache.org/jira/browse/HADOOP-11754
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Sangjin Lee
Assignee: Haohui Mai
Priority: Blocker
 Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch, 
 HADOOP-11754.000.patch, HADOOP-11754.001.patch


 RM fails to start in the non-secure mode with the following exception:
 {noformat}
 2015-03-25 22:02:42,526 WARN org.mortbay.log: failed RMAuthenticationFilter: 
 javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
 signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
 2015-03-25 22:02:42,526 WARN org.mortbay.log: Failed startup of context 
 org.mortbay.jetty.webapp.WebAppContext@6de50b08{/,jar:file:/Users/sjlee/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-common-3.0.0-SNAPSHOT.jar!/webapps/cluster}
 javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
 signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
   at 
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
   at 
 org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
   at 
 org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
   at 
 org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
   at 
 org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
   at 
 org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
   at org.mortbay.jetty.Server.doStart(Server.java:224)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 

[jira] [Updated] (HADOOP-11717) Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth

2015-03-27 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-11717:
-
Status: Patch Available  (was: Open)

 Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth
 -

 Key: HADOOP-11717
 URL: https://issues.apache.org/jira/browse/HADOOP-11717
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Larry McCay
Assignee: Larry McCay
 Attachments: HADOOP-11717-1.patch, HADOOP-11717-2.patch, 
 HADOOP-11717-3.patch, HADOOP-11717-4.patch, HADOOP-11717-5.patch, 
 HADOOP-11717-6.patch, HADOOP-11717-7.patch


 Extend AltKerberosAuthenticationHandler to provide WebSSO flow for UIs.
 The actual authentication is done by some external service that the handler 
 will redirect to when there is no hadoop.auth cookie and no JWT token found 
 in the incoming request.
 Using JWT provides a number of benefits:
 * It is not tied to any specific authentication mechanism - so buys us many 
 SSO integrations
 * It is cryptographically verifiable for determining whether it can be trusted
 * Checking for expiration allows for a limited lifetime and window for 
 compromised use
 This will introduce the use of nimbus-jose-jwt library for processing, 
 validating and parsing JWT tokens.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11717) Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth

2015-03-27 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-11717:
-
Attachment: HADOOP-11717-7.patch

Removed extraneous imports in the test which were causing a build failure.

 Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth
 -

 Key: HADOOP-11717
 URL: https://issues.apache.org/jira/browse/HADOOP-11717
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Larry McCay
Assignee: Larry McCay
 Attachments: HADOOP-11717-1.patch, HADOOP-11717-2.patch, 
 HADOOP-11717-3.patch, HADOOP-11717-4.patch, HADOOP-11717-5.patch, 
 HADOOP-11717-6.patch, HADOOP-11717-7.patch


 Extend AltKerberosAuthenticationHandler to provide WebSSO flow for UIs.
 The actual authentication is done by some external service that the handler 
 will redirect to when there is no hadoop.auth cookie and no JWT token found 
 in the incoming request.
 Using JWT provides a number of benefits:
 * It is not tied to any specific authentication mechanism - so buys us many 
 SSO integrations
 * It is cryptographically verifiable for determining whether it can be trusted
 * Checking for expiration allows for a limited lifetime and window for 
 compromised use
 This will introduce the use of nimbus-jose-jwt library for processing, 
 validating and parsing JWT tokens.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11257) Update hadoop jar documentation to warn against using it for launching yarn jars

2015-03-27 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11257?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384318#comment-14384318
 ] 

Colin Patrick McCabe commented on HADOOP-11257:
---

Thanks for your quick action on this, [~cnauroth] and [~iwasakims].  +1

 Update hadoop jar documentation to warn against using it for launching yarn 
 jars
 --

 Key: HADOOP-11257
 URL: https://issues.apache.org/jira/browse/HADOOP-11257
 Project: Hadoop Common
  Issue Type: Improvement
Affects Versions: 2.1.1-beta
Reporter: Allen Wittenauer
Assignee: Masatake Iwasaki
Priority: Blocker
 Fix For: 2.7.0

 Attachments: HADOOP-11257-branch-2.addendum.001.patch, 
 HADOOP-11257.1.patch, HADOOP-11257.1.patch, HADOOP-11257.2.patch, 
 HADOOP-11257.3.patch, HADOOP-11257.4.patch, HADOOP-11257.4.patch


 We should update the hadoop jar documentation to warn against using it for 
 launching yarn jars.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11717) Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth

2015-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384413#comment-14384413
 ] 

Hadoop QA commented on HADOOP-11717:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12707839/HADOOP-11717-7.patch
  against trunk revision 05499b1.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 2 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-auth.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6016//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6016//artifact/patchprocess/newPatchFindbugsWarningshadoop-auth.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6016//console

This message is automatically generated.

 Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth
 -

 Key: HADOOP-11717
 URL: https://issues.apache.org/jira/browse/HADOOP-11717
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Larry McCay
Assignee: Larry McCay
 Attachments: HADOOP-11717-1.patch, HADOOP-11717-2.patch, 
 HADOOP-11717-3.patch, HADOOP-11717-4.patch, HADOOP-11717-5.patch, 
 HADOOP-11717-6.patch, HADOOP-11717-7.patch


 Extend AltKerberosAuthenticationHandler to provide WebSSO flow for UIs.
 The actual authentication is done by some external service that the handler 
 will redirect to when there is no hadoop.auth cookie and no JWT token found 
 in the incoming request.
 Using JWT provides a number of benefits:
 * It is not tied to any specific authentication mechanism - so buys us many 
 SSO integrations
 * It is cryptographically verifiable for determining whether it can be trusted
 * Checking for expiration allows for a limited lifetime and window for 
 compromised use
 This will introduce the use of nimbus-jose-jwt library for processing, 
 validating and parsing JWT tokens.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11760) Fix typo of javadoc in DistCp

2015-03-27 Thread Harsh J (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11760?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384266#comment-14384266
 ] 

Harsh J commented on HADOOP-11760:
--

[~airbots] - Thanks for these fixes. In the future, please feel free to combine 
all the typo issues you find into a single JIRA and patch, to save both ends 
the extra overhead work that otherwise goes into committing trivial fixes.

 Fix typo of javadoc in DistCp
 -

 Key: HADOOP-11760
 URL: https://issues.apache.org/jira/browse/HADOOP-11760
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 3.0.0
Reporter: Chen He
Assignee: Brahma Reddy Battula
Priority: Trivial
  Labels: newbie++
 Fix For: 2.8.0

 Attachments: HADOOP-11760.patch


 /**
* Create a default working folder for the job, under the
* job staging directory
*
* @return Returns the working folder information
* @throws Exception - EXception if any
*/
   private Path createMetaFolderPath() throws Exception {



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11717) Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth

2015-03-27 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay updated HADOOP-11717:
-
Status: Open  (was: Patch Available)

 Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth
 -

 Key: HADOOP-11717
 URL: https://issues.apache.org/jira/browse/HADOOP-11717
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Larry McCay
Assignee: Larry McCay
 Attachments: HADOOP-11717-1.patch, HADOOP-11717-2.patch, 
 HADOOP-11717-3.patch, HADOOP-11717-4.patch, HADOOP-11717-5.patch, 
 HADOOP-11717-6.patch


 Extend AltKerberosAuthenticationHandler to provide WebSSO flow for UIs.
 The actual authentication is done by some external service that the handler 
 will redirect to when there is no hadoop.auth cookie and no JWT token found 
 in the incoming request.
 Using JWT provides a number of benefits:
 * It is not tied to any specific authentication mechanism - so buys us many 
 SSO integrations
 * It is cryptographically verifiable for determining whether it can be trusted
 * Checking for expiration allows for a limited lifetime and window for 
 compromised use
 This will introduce the use of nimbus-jose-jwt library for processing, 
 validating and parsing JWT tokens.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11758) Add options to filter out too much granular tracing spans

2015-03-27 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384334#comment-14384334
 ] 

Colin Patrick McCabe commented on HADOOP-11758:
---

Hmm.  The idea behind HTrace is not to trace every operation.  We should be 
tracing less than 1% of all operations.  At that point, we wouldn't really have 
a problem with too many trace spans.

The only time you would turn on tracing for every operation is when doing 
debugging.  In that case it's like turning log4j up to TRACE level -- you know 
you're going to get swamped.

So basically I would argue that we already have an option to filter out too 
many trace spans: setting the trace sampler to ProbabilitySampler.
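
To make the sampling suggestion concrete, here is a toy sketch of 
probability-based sampling (this is not the HTrace {{ProbabilitySampler}} API, 
just the idea of keeping roughly 1% of operations):

{code}
// Toy illustration of probabilistic sampling; not the HTrace API.
import java.util.concurrent.ThreadLocalRandom;

final class SamplingSketch {
  private final double fraction;   // e.g. 0.01 to trace ~1% of operations

  SamplingSketch(double fraction) {
    this.fraction = fraction;
  }

  boolean shouldTrace() {
    return ThreadLocalRandom.current().nextDouble() < fraction;
  }

  public static void main(String[] args) {
    SamplingSketch sampler = new SamplingSketch(0.01);
    int traced = 0;
    for (int i = 0; i < 1_000_000; i++) {
      if (sampler.shouldTrace()) {
        traced++;                  // only these operations would emit spans
      }
    }
    System.out.println("traced " + traced + " of 1000000 operations");
  }
}
{code}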

 Add options to filter out too much granular tracing spans
 -

 Key: HADOOP-11758
 URL: https://issues.apache.org/jira/browse/HADOOP-11758
 Project: Hadoop Common
  Issue Type: Improvement
  Components: tracing
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
 Attachments: testWriteTraceHooks.html


 in order to avoid queue in span receiver spills



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11758) Add options to filter out too much granular tracing spans

2015-03-27 Thread Colin Patrick McCabe (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11758?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384371#comment-14384371
 ] 

Colin Patrick McCabe commented on HADOOP-11758:
---

I also wonder if we can do tracing at a level slightly above writeChunk.  
writeChunk operates at the level of 512-byte chunks, but writes are often 
larger than that.

If you look here:
{code}
  private void writeChecksumChunks(byte b[], int off, int len)
  throws IOException {
    // checksum the whole buffer, then hand it to writeChunk in
    // bytesPerChecksum-sized (512-byte) pieces
    sum.calculateChunkedSums(b, off, len, checksum, 0);
    for (int i = 0; i < len; i += sum.getBytesPerChecksum()) {
      int chunkLen = Math.min(sum.getBytesPerChecksum(), len - i);
      int ckOffset = i / sum.getBytesPerChecksum() * getChecksumSize();
      writeChunk(b, off + i, chunkLen, checksum, ckOffset, getChecksumSize());
    }
  }
{code}
you can see that if we do a 4 kilobyte write, writeChunk will get called 8 
times.  But really it would be better just to have one span representing the 
entire 4k write.
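
As a hypothetical sketch of that suggestion (reusing the fields from the 
snippet above; {{traceSpan}} and {{Span}} are made-up stand-ins, not an HTrace 
API), the whole buffer could be wrapped in a single span:

{code}
// Hypothetical sketch: open one span around the whole buffer instead of
// tracing each 512-byte writeChunk call. Span and traceSpan() are made-up
// stand-ins for whatever tracing primitives end up being used.
private void writeChecksumChunks(byte b[], int off, int len)
    throws IOException {
  try (Span span = traceSpan("writeChecksumChunks, len=" + len)) {
    sum.calculateChunkedSums(b, off, len, checksum, 0);
    for (int i = 0; i < len; i += sum.getBytesPerChecksum()) {
      int chunkLen = Math.min(sum.getBytesPerChecksum(), len - i);
      int ckOffset = i / sum.getBytesPerChecksum() * getChecksumSize();
      // no per-chunk span here; all 8 calls for a 4k write share one span
      writeChunk(b, off + i, chunkLen, checksum, ckOffset, getChecksumSize());
    }
  }
}
{code}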

 Add options to filter out too much granular tracing spans
 -

 Key: HADOOP-11758
 URL: https://issues.apache.org/jira/browse/HADOOP-11758
 Project: Hadoop Common
  Issue Type: Improvement
  Components: tracing
Reporter: Masatake Iwasaki
Assignee: Masatake Iwasaki
Priority: Minor
 Attachments: testWriteTraceHooks.html


 in order to avoid queue in span receiver spills



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11717) Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth

2015-03-27 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11717?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384418#comment-14384418
 ] 

Larry McCay commented on HADOOP-11717:
--

Those findbug warnings are unrelated to this patch.

[~owen.omalley] - can you give this a review when you get a chance?

 Add Redirecting WebSSO behavior with JWT Token in Hadoop Auth
 -

 Key: HADOOP-11717
 URL: https://issues.apache.org/jira/browse/HADOOP-11717
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Reporter: Larry McCay
Assignee: Larry McCay
 Attachments: HADOOP-11717-1.patch, HADOOP-11717-2.patch, 
 HADOOP-11717-3.patch, HADOOP-11717-4.patch, HADOOP-11717-5.patch, 
 HADOOP-11717-6.patch, HADOOP-11717-7.patch


 Extend AltKerberosAuthenticationHandler to provide WebSSO flow for UIs.
 The actual authentication is done by some external service that the handler 
 will redirect to when there is no hadoop.auth cookie and no JWT token found 
 in the incoming request.
 Using JWT provides a number of benefits:
 * It is not tied to any specific authentication mechanism - so buys us many 
 SSO integrations
 * It is cryptographically verifiable for determining whether it can be trusted
 * Checking for expiration allows for a limited lifetime and window for 
 compromised use
 This will introduce the use of nimbus-jose-jwt library for processing, 
 validating and parsing JWT tokens.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-27 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384585#comment-14384585
 ] 

Haohui Mai commented on HADOOP-11754:
-

bq. AuthenticationFilter check it customized secret exists (no matter it comes 
from secret file or directly put in the configuration) or not to decide 
failback to random secret no matter AuthenticationFilter is used in secure mode 
(Kerberos handler) or in insecure mode (Pseudo handler).

bq. AuthenticationFilter no longer accepts secret that is put inside the 
configuration file. It may not be the best practice, but it's a valid scenario 
before. AuthenticationFilter also forces the user to have the secret file in 
secure mode, and it's not able to failback to random secret.

We never supported this use case. It is a misunderstanding of the code. See 
https://issues.apache.org/jira/browse/HADOOP-10670?focusedCommentId=14380372page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-14380372

For the Timeline / RM server, it looks like we have a lot of customized use 
cases here. It looks like the right fix is to move some of the code in 
HttpServer2 and to allow customization.

 RM fails to start in non-secure mode due to authentication filter failure
 -

 Key: HADOOP-11754
 URL: https://issues.apache.org/jira/browse/HADOOP-11754
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Sangjin Lee
Assignee: Haohui Mai
Priority: Blocker
 Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch, 
 HADOOP-11754.000.patch, HADOOP-11754.001.patch


 RM fails to start in the non-secure mode with the following exception:
 {noformat}
 2015-03-25 22:02:42,526 WARN org.mortbay.log: failed RMAuthenticationFilter: 
 javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
 signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
 2015-03-25 22:02:42,526 WARN org.mortbay.log: Failed startup of context 
 org.mortbay.jetty.webapp.WebAppContext@6de50b08{/,jar:file:/Users/sjlee/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-common-3.0.0-SNAPSHOT.jar!/webapps/cluster}
 javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
 signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
   at 
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
   at 
 org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
   at 
 org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
   at 
 org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
   at 
 org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
   at 
 org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
   at org.mortbay.jetty.Server.doStart(Server.java:224)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
   at 
 org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
 Caused by: java.lang.RuntimeException: Could not read signature secret file: 
 /Users/sjlee/hadoop-http-auth-signature-secret
   at 
 

[jira] [Moved] (HADOOP-11764) Hadoop should have the option to use directory other than tmp for extracting and loading leveldbjni

2015-03-27 Thread Zhijie Shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11764?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijie Shen moved YARN-3331 to HADOOP-11764:


Target Version/s: 2.8.0  (was: 2.8.0)
 Key: HADOOP-11764  (was: YARN-3331)
 Project: Hadoop Common  (was: Hadoop YARN)

 Hadoop should have the option to use directory other than tmp for extracting 
 and loading leveldbjni
 ---

 Key: HADOOP-11764
 URL: https://issues.apache.org/jira/browse/HADOOP-11764
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Anubhav Dhoot
Assignee: Anubhav Dhoot
 Attachments: YARN-3331.001.patch, YARN-3331.002.patch


 /tmp can be required to be noexec in many environments. This causes a 
 problem when the nodemanager tries to load the leveldbjni library, which can 
 get unpacked and executed from /tmp.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11764) Hadoop should have the option to use directory other than tmp for extracting and loading leveldbjni

2015-03-27 Thread Anubhav Dhoot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384735#comment-14384735
 ] 

Anubhav Dhoot commented on HADOOP-11764:


bq. If the temporary native lib is redirected to another dir, we also need to 
add that dir to JAVA_LIBRARY_PATH. Otherwise, we may still end up with the 
native lib not found.
Hi Zhijie, I am guessing that this is not something that needs to be done in 
this jira, which tries to address the /tmp noexec problem, right?

 Hadoop should have the option to use directory other than tmp for extracting 
 and loading leveldbjni
 ---

 Key: HADOOP-11764
 URL: https://issues.apache.org/jira/browse/HADOOP-11764
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Anubhav Dhoot
Assignee: Anubhav Dhoot
 Attachments: YARN-3331.001.patch, YARN-3331.002.patch


 /tmp can be required to be noexec in many environments. This causes a 
 problem when the nodemanager tries to load the leveldbjni library, which can 
 get unpacked and executed from /tmp.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-11664) Loading predefined EC schemas from configuration

2015-03-27 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang resolved HADOOP-11664.

Resolution: Fixed

 Loading predefined EC schemas from configuration
 

 Key: HADOOP-11664
 URL: https://issues.apache.org/jira/browse/HADOOP-11664
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Attachments: HADOOP-11664-v2.patch, HADOOP-11664-v3.patch, 
 HDFS-7371_v1.patch


 A system administrator can configure multiple EC codecs in the hdfs-site.xml 
 file, and codec instances or schemas in a new configuration file named 
 ec-schema.xml in the conf folder. A codec can be referenced by its instance 
 or schema using the codec name, and a schema can be selected by name for a 
 folder or EC zone to enforce EC. Once a schema is used to define an EC zone, 
 its associated parameter values will be stored as xattributes and respected 
 thereafter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11664) Loading predefined EC schemas from configuration

2015-03-27 Thread Zhe Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11664?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhe Zhang updated HADOOP-11664:
---
Fix Version/s: HDFS-7285

 Loading predefined EC schemas from configuration
 

 Key: HADOOP-11664
 URL: https://issues.apache.org/jira/browse/HADOOP-11664
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Fix For: HDFS-7285

 Attachments: HADOOP-11664-v2.patch, HADOOP-11664-v3.patch, 
 HDFS-7371_v1.patch


 A system administrator can configure multiple EC codecs in the hdfs-site.xml 
 file, and codec instances or schemas in a new configuration file named 
 ec-schema.xml in the conf folder. A codec can be referenced by its instance 
 or schema using the codec name, and a schema can be selected by name for a 
 folder or EC zone to enforce EC. Once a schema is used to define an EC zone, 
 its associated parameter values will be stored as xattributes and respected 
 thereafter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11639) Clean up Windows native code compilation warnings related to Windows Secure Container Executor.

2015-03-27 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11639?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-11639:
---
   Resolution: Fixed
Fix Version/s: 2.7.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

+1 for the patch.  I committed this to trunk, branch-2 and branch-2.7.  Remus, 
thank you for the contribution.  Kiran, thank you for helping with code review 
and testing.

 Clean up Windows native code compilation warnings related to Windows Secure 
 Container Executor.
 ---

 Key: HADOOP-11639
 URL: https://issues.apache.org/jira/browse/HADOOP-11639
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.6.0
Reporter: Chris Nauroth
Assignee: Remus Rusanu
 Fix For: 2.7.0

 Attachments: HADOOP-11639.00.patch, HADOOP-11639.01.patch, 
 HADOOP-11639.02.patch, HADOOP-11639.03.patch


 YARN-2198 introduced additional code in Hadoop Common to support the 
 NodeManager {{WindowsSecureContainerExecutor}}.  The patch introduced new 
 compilation warnings that we need to investigate and resolve.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11664) Loading predefined EC schemas from configuration

2015-03-27 Thread Zhe Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11664?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384755#comment-14384755
 ] 

Zhe Zhang commented on HADOOP-11664:


I agree. +1 on the patch; I committed it.

 Loading predefined EC schemas from configuration
 

 Key: HADOOP-11664
 URL: https://issues.apache.org/jira/browse/HADOOP-11664
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng
 Fix For: HDFS-7285

 Attachments: HADOOP-11664-v2.patch, HADOOP-11664-v3.patch, 
 HDFS-7371_v1.patch


 A system administrator can configure multiple EC codecs in the hdfs-site.xml 
 file, and codec instances or schemas in a new configuration file named 
 ec-schema.xml in the conf folder. A codec can be referenced by its instance 
 or schema using the codec name, and a schema can be selected by name for a 
 folder or EC zone to enforce EC. Once a schema is used to define an EC zone, 
 its associated parameter values will be stored as xattributes and respected 
 thereafter.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11639) Clean up Windows native code compilation warnings related to Windows Secure Container Executor.

2015-03-27 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11639?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384803#comment-14384803
 ] 

Hudson commented on HADOOP-11639:
-

FAILURE: Integrated in Hadoop-trunk-Commit #7449 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/7449/])
HADOOP-11639. Clean up Windows native code compilation warnings related to 
Windows Secure Container Executor. Contributed by Remus Rusanu. (cnauroth: rev 
3836ad6c0b3331cf60286d134157c13985908230)
* hadoop-common-project/hadoop-common/src/main/winutils/client.c
* hadoop-common-project/hadoop-common/src/main/winutils/task.c
* hadoop-common-project/hadoop-common/src/main/winutils/systeminfo.c
* hadoop-common-project/hadoop-common/src/main/winutils/config.cpp
* 
hadoop-common-project/hadoop-common/src/main/native/src/org/apache/hadoop/yarn/server/nodemanager/windows_secure_container_executor.c
* hadoop-common-project/hadoop-common/src/main/winutils/libwinutils.c
* hadoop-common-project/hadoop-common/src/main/winutils/service.c
* hadoop-common-project/hadoop-common/src/main/winutils/include/winutils.h
* hadoop-common-project/hadoop-common/CHANGES.txt


 Clean up Windows native code compilation warnings related to Windows Secure 
 Container Executor.
 ---

 Key: HADOOP-11639
 URL: https://issues.apache.org/jira/browse/HADOOP-11639
 Project: Hadoop Common
  Issue Type: Bug
  Components: native
Affects Versions: 2.6.0
Reporter: Chris Nauroth
Assignee: Remus Rusanu
 Fix For: 2.7.0

 Attachments: HADOOP-11639.00.patch, HADOOP-11639.01.patch, 
 HADOOP-11639.02.patch, HADOOP-11639.03.patch


 YARN-2198 introduced additional code in Hadoop Common to support the 
 NodeManager {{WindowsSecureContainerExecutor}}.  The patch introduced new 
 compilation warnings that we need to investigate and resolve.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11764) Hadoop should have the option to use directory other than tmp for extracting and loading leveldbjni

2015-03-27 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384816#comment-14384816
 ] 

Allen Wittenauer commented on HADOOP-11764:
---

bq. I'm afraid we can't set it in config file, because config file is read by 
the daemon, but we need to start the daemon with this opt.

It just needs to be set as a system property prior to invoking the class.  
That's all putting it on the command line does, so why can't we do this in 
Configuration?

bq.  If the temporal native lib is redirected to another dir, we also needs to 
add that dir to JAVA_LIBRARY_PATH.

This is sounding more and more like a complete mess, with no real thought as to 
how admins are supposed to deal with it.
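
For illustration only, a minimal sketch of the mechanism being discussed: set a 
system property before the class that extracts the native library is loaded. 
The property name and directory below are assumptions for the example, not the 
option leveldbjni actually reads.

```java
// Illustrative sketch: the property must be set before the library class is
// loaded, so its static initializer sees it. "library.tmpdir" and the path
// are hypothetical placeholders, not the option leveldbjni actually consults.
public class NativeLibBootstrap {
  public static void main(String[] args) throws Exception {
    System.setProperty("library.tmpdir", "/var/lib/hadoop/native-tmp");
    // Load the factory class only after the property is set; loading it
    // earlier would still extract the native library under /tmp.
    Class.forName("org.fusesource.leveldbjni.JniDBFactory");
  }
}
```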

 Hadoop should have the option to use directory other than tmp for extracting 
 and loading leveldbjni
 ---

 Key: HADOOP-11764
 URL: https://issues.apache.org/jira/browse/HADOOP-11764
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Anubhav Dhoot
Assignee: Anubhav Dhoot
 Attachments: YARN-3331.001.patch, YARN-3331.002.patch


 /tmp can be required to be noexec in many environments. This causes a 
 problem when the nodemanager tries to load the leveldbjni library, which can 
 get unpacked and executed from /tmp.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Comment Edited] (HADOOP-11764) Hadoop should have the option to use directory other than tmp for extracting and loading leveldbjni

2015-03-27 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11764?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384816#comment-14384816
 ] 

Allen Wittenauer edited comment on HADOOP-11764 at 3/27/15 10:34 PM:
-

bq. I'm afraid we can't set it in config file, because config file is read by 
the daemon, but we need to start the daemon with this opt.

It just needs to be set as a system property prior to invoking the class.  
That's all putting it on the command line does, so why can't we do this in 
Configuration?

bq.  If the temporal native lib is redirected to another dir, we also needs to 
add that dir to JAVA_LIBRARY_PATH.

This is sounding more and more like a complete mess, with no real thought as to 
how admins are supposed to deal with it.


was (Author: aw):
bq. I'm afraid we can't set it in config file, because config file is read by 
the daemon, but we need to start the daemon with this opt.

It just need to be set as a system property prior to invoking the class.  
That's all putting it on the command line does, so why can't we do this in 
Configuration?

bq.  If the temporal native lib is redirected to another dir, we also needs to 
add that dir to JAVA_LIBRARY_PATH.

This is sounding more and more like a complete mess, with no real thought as to 
how admins are supposed to deal with it.

 Hadoop should have the option to use directory other than tmp for extracting 
 and loading leveldbjni
 ---

 Key: HADOOP-11764
 URL: https://issues.apache.org/jira/browse/HADOOP-11764
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Anubhav Dhoot
Assignee: Anubhav Dhoot
 Attachments: YARN-3331.001.patch, YARN-3331.002.patch


 /tmp can be required to be noexec in many environments. This causes a 
 problem when the nodemanager tries to load the leveldbjni library, which can 
 get unpacked and executed from /tmp.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-27 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai updated HADOOP-11754:

Attachment: HADOOP-11754.002.patch

 RM fails to start in non-secure mode due to authentication filter failure
 -

 Key: HADOOP-11754
 URL: https://issues.apache.org/jira/browse/HADOOP-11754
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Sangjin Lee
Assignee: Haohui Mai
Priority: Blocker
 Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch, 
 HADOOP-11754.000.patch, HADOOP-11754.001.patch, HADOOP-11754.002.patch


 RM fails to start in the non-secure mode with the following exception:
 {noformat}
 2015-03-25 22:02:42,526 WARN org.mortbay.log: failed RMAuthenticationFilter: 
 javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
 signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
 2015-03-25 22:02:42,526 WARN org.mortbay.log: Failed startup of context 
 org.mortbay.jetty.webapp.WebAppContext@6de50b08{/,jar:file:/Users/sjlee/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-common-3.0.0-SNAPSHOT.jar!/webapps/cluster}
 javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
 signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
   at 
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
   at 
 org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
   at 
 org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
   at 
 org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
   at 
 org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
   at 
 org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
   at org.mortbay.jetty.Server.doStart(Server.java:224)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
   at 
 org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
 Caused by: java.lang.RuntimeException: Could not read signature secret file: 
 /Users/sjlee/hadoop-http-auth-signature-secret
   at 
 org.apache.hadoop.security.authentication.util.FileSignerSecretProvider.init(FileSignerSecretProvider.java:59)
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:264)
   ... 23 more
 ...
 2015-03-25 22:02:42,538 FATAL 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting 
 ResourceManager
 org.apache.hadoop.yarn.webapp.WebAppException: Error starting http server
   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:279)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
   at 
 org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
 Caused by: java.io.IOException: Problem in starting http server. Server 
 

[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-27 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384864#comment-14384864
 ] 

Haohui Mai commented on HADOOP-11754:
-

The v2 patch moves the instance of {{SignerSecretProvider}} to {{HttpServer2}}, 
which allows HDFS / RM / AM / the Timeline server to customize it to their 
needs. The default is to allow falling back to a random secret if the provider 
fails to read the file. All HDFS daemons will not fall back to random 
secret provider in secure mode, which is consistent with the existing behavior.
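
As a rough sketch of the fallback pattern described above (this is not the 
actual HADOOP-11754 patch; the class and method names below are illustrative):

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.SecureRandom;

// Hypothetical sketch of the described behavior: prefer the file-based
// signature secret, and fall back to a random secret only when the file
// cannot be read and the server is not running in secure mode.
public class SecretProviderSelector {
  static byte[] chooseSecret(String secretFile, boolean secureMode) throws Exception {
    try {
      return Files.readAllBytes(Paths.get(secretFile));
    } catch (Exception e) {
      if (secureMode) {
        // In secure mode an unreadable secret file stays fatal, matching the
        // existing behavior of the HDFS daemons.
        throw e;
      }
      byte[] random = new byte[32];            // non-secure mode: random secret
      new SecureRandom().nextBytes(random);
      return random;
    }
  }
}
```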

 RM fails to start in non-secure mode due to authentication filter failure
 -

 Key: HADOOP-11754
 URL: https://issues.apache.org/jira/browse/HADOOP-11754
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Sangjin Lee
Assignee: Haohui Mai
Priority: Blocker
 Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch, 
 HADOOP-11754.000.patch, HADOOP-11754.001.patch, HADOOP-11754.002.patch


 RM fails to start in the non-secure mode with the following exception:
 {noformat}
 2015-03-25 22:02:42,526 WARN org.mortbay.log: failed RMAuthenticationFilter: 
 javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
 signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
 2015-03-25 22:02:42,526 WARN org.mortbay.log: Failed startup of context 
 org.mortbay.jetty.webapp.WebAppContext@6de50b08{/,jar:file:/Users/sjlee/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-common-3.0.0-SNAPSHOT.jar!/webapps/cluster}
 javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
 signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
   at 
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
   at 
 org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
   at 
 org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
   at 
 org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
   at 
 org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
   at 
 org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
   at org.mortbay.jetty.Server.doStart(Server.java:224)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
   at 
 org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
 Caused by: java.lang.RuntimeException: Could not read signature secret file: 
 /Users/sjlee/hadoop-http-auth-signature-secret
   at 
 org.apache.hadoop.security.authentication.util.FileSignerSecretProvider.init(FileSignerSecretProvider.java:59)
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:264)
   ... 23 more
 ...
 2015-03-25 22:02:42,538 FATAL 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting 
 ResourceManager
 org.apache.hadoop.yarn.webapp.WebAppException: Error starting http server
   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:279)
   at 
 

[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-27 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384867#comment-14384867
 ] 

Allen Wittenauer commented on HADOOP-11754:
---

bq.  All HDFS daemons will not fall back to random secret provider in secure 
mode, which is consistent with the existing behavior.

I don't think that's consistent with pre-2.7 behavior though.

 RM fails to start in non-secure mode due to authentication filter failure
 -

 Key: HADOOP-11754
 URL: https://issues.apache.org/jira/browse/HADOOP-11754
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Sangjin Lee
Assignee: Haohui Mai
Priority: Blocker
 Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch, 
 HADOOP-11754.000.patch, HADOOP-11754.001.patch, HADOOP-11754.002.patch


 RM fails to start in the non-secure mode with the following exception:
 {noformat}
 2015-03-25 22:02:42,526 WARN org.mortbay.log: failed RMAuthenticationFilter: 
 javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
 signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
 2015-03-25 22:02:42,526 WARN org.mortbay.log: Failed startup of context 
 org.mortbay.jetty.webapp.WebAppContext@6de50b08{/,jar:file:/Users/sjlee/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-common-3.0.0-SNAPSHOT.jar!/webapps/cluster}
 javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
 signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
   at 
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
   at 
 org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
   at 
 org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
   at 
 org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
   at 
 org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
   at 
 org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
   at org.mortbay.jetty.Server.doStart(Server.java:224)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
   at 
 org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
 Caused by: java.lang.RuntimeException: Could not read signature secret file: 
 /Users/sjlee/hadoop-http-auth-signature-secret
   at 
 org.apache.hadoop.security.authentication.util.FileSignerSecretProvider.init(FileSignerSecretProvider.java:59)
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:264)
   ... 23 more
 ...
 2015-03-25 22:02:42,538 FATAL 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting 
 ResourceManager
 org.apache.hadoop.yarn.webapp.WebAppException: Error starting http server
   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:279)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
   at 
 

[jira] [Updated] (HADOOP-8545) Filesystem Implementation for OpenStack Swift

2015-03-27 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8545?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-8545:
-
Release Note: 
Added a file system implementation for OpenStack Swift.
There are two implementations: block and native (similar to the Amazon S3 
integration).
The data locality issue is solved by a patch in Swift; the commit procedure to 
OpenStack is in progress.

To use the implementation, add the following to core-site.xml:

```xml
<property>
  <name>fs.swift.impl</name>
  <value>com.mirantis.fs.SwiftFileSystem</value>
</property>
<property>
  <name>fs.swift.block.impl</name>
  <value>com.mirantis.fs.block.SwiftBlockFileSystem</value>
</property>
```

In a MapReduce job, specify the following configs for OpenStack Keystone 
authentication:
```java
conf.set("swift.auth.url", "http://172.18.66.117:5000/v2.0/tokens");
conf.set("swift.tenant", "superuser");
conf.set("swift.username", "admin1");
conf.set("swift.password", "password");
conf.setInt("swift.http.port", 8080);
conf.setInt("swift.https.port", 443);
```

Additional information is available on GitHub: 
https://github.com/DmitryMezhensky/Hadoop-and-Swift-integration
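
For illustration, a minimal client sketch assuming the fs.swift.impl binding 
above is in core-site.xml and the connector jar is on the classpath; the 
container name below is a made-up example, not part of the release note:

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Minimal sketch: list the root of a Swift-backed filesystem. The
// "demo-container" name is illustrative only.
public class SwiftFsExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();      // picks up core-site.xml
    FileSystem fs = FileSystem.get(URI.create("swift://demo-container/"), conf);
    for (FileStatus status : fs.listStatus(new Path("/"))) {
      System.out.println(status.getPath());
    }
  }
}
```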

  was:
Added file system implementation for OpenStack Swift.
There are two implementation: block and native (similar to Amazon S3 
integration).
Data locality issue solved by patch in Swift, commit procedure to OpenStack is 
in progress.

To use implementation add to core-site.xml following:
...
<property>
<name>fs.swift.impl</name>
<value>com.mirantis.fs.SwiftFileSystem</value>
</property>
<property>
<name>fs.swift.block.impl</name>
<value>com.mirantis.fs.block.SwiftBlockFileSystem</value>
</property>
...

In MapReduce job specify following configs for OpenStack Keystone 
authentication:
conf.set("swift.auth.url", "http://172.18.66.117:5000/v2.0/tokens");
conf.set("swift.tenant", "superuser");
conf.set("swift.username", "admin1");
conf.set("swift.password", "password");
conf.setInt("swift.http.port", 8080);
conf.setInt("swift.https.port", 443);

Additional information specified on github: 
https://github.com/DmitryMezhensky/Hadoop-and-Swift-integration


 Filesystem Implementation for OpenStack Swift
 -

 Key: HADOOP-8545
 URL: https://issues.apache.org/jira/browse/HADOOP-8545
 Project: Hadoop Common
  Issue Type: New Feature
  Components: fs
Affects Versions: 1.2.0, 2.0.3-alpha
Reporter: Tim Miller
Assignee: Dmitry Mezhensky
  Labels: hadoop, patch
 Fix For: 2.3.0

 Attachments: HADOOP-8545-026.patch, HADOOP-8545-027.patch, 
 HADOOP-8545-028.patch, HADOOP-8545-029.patch, HADOOP-8545-030.patch, 
 HADOOP-8545-031.patch, HADOOP-8545-032.patch, HADOOP-8545-033.patch, 
 HADOOP-8545-034.patch, HADOOP-8545-035.patch, HADOOP-8545-035.patch, 
 HADOOP-8545-036.patch, HADOOP-8545-037.patch, HADOOP-8545-1.patch, 
 HADOOP-8545-10.patch, HADOOP-8545-11.patch, HADOOP-8545-12.patch, 
 HADOOP-8545-13.patch, HADOOP-8545-14.patch, HADOOP-8545-15.patch, 
 HADOOP-8545-16.patch, HADOOP-8545-17.patch, HADOOP-8545-18.patch, 
 HADOOP-8545-19.patch, HADOOP-8545-2.patch, HADOOP-8545-20.patch, 
 HADOOP-8545-21.patch, HADOOP-8545-22.patch, HADOOP-8545-23.patch, 
 HADOOP-8545-24.patch, HADOOP-8545-25.patch, HADOOP-8545-3.patch, 
 HADOOP-8545-4.patch, HADOOP-8545-5.patch, HADOOP-8545-6.patch, 
 HADOOP-8545-7.patch, HADOOP-8545-8.patch, HADOOP-8545-9.patch, 
 HADOOP-8545-javaclouds-2.patch, HADOOP-8545.patch, HADOOP-8545.patch, 
 HADOOP-8545.suresh.patch


 Add a filesystem implementation for the OpenStack Swift object store, similar 
 to the one which exists today for S3.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11761) Fix findbugs warnings in org.apache.hadoop.security.authentication

2015-03-27 Thread Haohui Mai (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11761?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384930#comment-14384930
 ] 

Haohui Mai commented on HADOOP-11761:
-

Cloning the bytes every time might be too expensive as {{getSecret()}} is 
called on every authentication request.  We can either disable the warnings or 
change the API to return a read-only {{ByteBuffer}}.
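
For instance, a minimal sketch of the read-only {{ByteBuffer}} option; the 
class and field names below are placeholders rather than the actual hadoop-auth 
API:

```java
import java.nio.ByteBuffer;

// Sketch: copy the secret once at construction, then hand out a read-only
// view on every call instead of cloning the array per request.
public class SecretHolder {
  private final byte[] secret;

  public SecretHolder(byte[] secret) {
    this.secret = secret.clone();                // single defensive copy
  }

  public ByteBuffer getSecret() {
    return ByteBuffer.wrap(secret).asReadOnlyBuffer();  // no per-call copy
  }
}
```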

 Fix findbugs warnings in org.apache.hadoop.security.authentication
 --

 Key: HADOOP-11761
 URL: https://issues.apache.org/jira/browse/HADOOP-11761
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Li Lu
Assignee: Li Lu
Priority: Minor
  Labels: findbugs
 Attachments: HADOOP-11761-032615.patch


 As discovered in HADOOP-11748, we need to fix the findbugs warnings in 
 org.apache.hadoop.security.authentication. 



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-8698) Do not call unneceseary setConf(null) in Configured constructor

2015-03-27 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-8698:
-
Fix Version/s: (was: 3.0.0)
   (was: 0.24.0)

 Do not call unneceseary setConf(null) in Configured constructor
 ---

 Key: HADOOP-8698
 URL: https://issues.apache.org/jira/browse/HADOOP-8698
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 0.23.3, 3.0.0
Reporter: Radim Kolar
Assignee: Radim Kolar
Priority: Minor
 Attachments: setconf-null.txt, setconf-null2.txt, setconf-null3.txt, 
 setconf-null4.txt


 The no-arg constructor of org.apache.hadoop.conf.Configured calls 
 setConf(null). This is unnecessary, and it increases the complexity of 
 setConf() implementations because they have to check for a null reference 
 before using it. Under normal conditions setConf() is never called with a 
 null reference, so the null check is unnecessary.
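
To make the extra complexity concrete, a small sketch of the pattern the 
description refers to (the subclass and the config key are illustrative):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.conf.Configured;

// Because the no-arg Configured() constructor invokes setConf(null), any
// subclass that overrides setConf() must guard against null before reading
// values. "example.retries" is an illustrative key.
public class ExampleTool extends Configured {
  private int retries;

  @Override
  public void setConf(Configuration conf) {
    super.setConf(conf);
    if (conf != null) {                  // null check forced by setConf(null)
      retries = conf.getInt("example.retries", 3);
    }
  }
}
```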



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14385075#comment-14385075
 ] 

Hadoop QA commented on HADOOP-11754:


{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12707912/HADOOP-11754.002.patch
  against trunk revision 3836ad6.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+1 tests included{color}.  The patch appears to include 1 new 
or modified test files.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:red}-1 findbugs{color}.  The patch appears to introduce 2 new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:red}-1 core tests{color}.  The patch failed these unit tests in 
hadoop-common-project/hadoop-auth hadoop-common-project/hadoop-common 
hadoop-hdfs-project/hadoop-hdfs:

  org.apache.hadoop.hdfs.qjournal.TestSecureNNWithQJM
  
org.apache.hadoop.hdfs.protocol.datatransfer.sasl.TestSaslDataTransfer
  
org.apache.hadoop.hdfs.server.balancer.TestBalancerWithSaslDataTransfer

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6017//testReport/
Findbugs warnings: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6017//artifact/patchprocess/newPatchFindbugsWarningshadoop-auth.html
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6017//console

This message is automatically generated.

 RM fails to start in non-secure mode due to authentication filter failure
 -

 Key: HADOOP-11754
 URL: https://issues.apache.org/jira/browse/HADOOP-11754
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Sangjin Lee
Assignee: Haohui Mai
Priority: Blocker
 Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch, 
 HADOOP-11754.000.patch, HADOOP-11754.001.patch, HADOOP-11754.002.patch


 RM fails to start in the non-secure mode with the following exception:
 {noformat}
 2015-03-25 22:02:42,526 WARN org.mortbay.log: failed RMAuthenticationFilter: 
 javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
 signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
 2015-03-25 22:02:42,526 WARN org.mortbay.log: Failed startup of context 
 org.mortbay.jetty.webapp.WebAppContext@6de50b08{/,jar:file:/Users/sjlee/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-common-3.0.0-SNAPSHOT.jar!/webapps/cluster}
 javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
 signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
   at 
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
   at 
 org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
   at 
 org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
   at 
 org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
   at 
 org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
   at 
 org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
   at org.mortbay.jetty.Server.doStart(Server.java:224)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 

[jira] [Commented] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-27 Thread Zhijie Shen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14384996#comment-14384996
 ] 

Zhijie Shen commented on HADOOP-11754:
--

Haohui, thanks for the latest patch. It looks good to me. I applied the patch 
and tried the RM in insecure mode; it won't crash again. I tried the timeline 
server in secure mode; it fell back to using the random secret. [~vinodkv], do 
you want to take a second look?

 RM fails to start in non-secure mode due to authentication filter failure
 -

 Key: HADOOP-11754
 URL: https://issues.apache.org/jira/browse/HADOOP-11754
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Sangjin Lee
Assignee: Haohui Mai
Priority: Blocker
 Attachments: HADOOP-11754-v1.patch, HADOOP-11754-v2.patch, 
 HADOOP-11754.000.patch, HADOOP-11754.001.patch, HADOOP-11754.002.patch


 RM fails to start in the non-secure mode with the following exception:
 {noformat}
 2015-03-25 22:02:42,526 WARN org.mortbay.log: failed RMAuthenticationFilter: 
 javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
 signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
 2015-03-25 22:02:42,526 WARN org.mortbay.log: Failed startup of context 
 org.mortbay.jetty.webapp.WebAppContext@6de50b08{/,jar:file:/Users/sjlee/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-common-3.0.0-SNAPSHOT.jar!/webapps/cluster}
 javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
 signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
   at 
 org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
   at 
 org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
   at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
   at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
   at 
 org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
   at 
 org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
   at 
 org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
   at 
 org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at 
 org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
   at org.mortbay.jetty.Server.doStart(Server.java:224)
   at 
 org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
   at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
   at 
 org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
 Caused by: java.lang.RuntimeException: Could not read signature secret file: 
 /Users/sjlee/hadoop-http-auth-signature-secret
   at 
 org.apache.hadoop.security.authentication.util.FileSignerSecretProvider.init(FileSignerSecretProvider.java:59)
   at 
 org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:264)
   ... 23 more
 ...
 2015-03-25 22:02:42,538 FATAL 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting 
 ResourceManager
 org.apache.hadoop.yarn.webapp.WebAppException: Error starting http server
   at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:279)
   at 
 org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
   at 
 

[jira] [Comment Edited] (HADOOP-11731) Rework the changelog and releasenotes

2015-03-27 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14385047#comment-14385047
 ] 

Allen Wittenauer edited comment on HADOOP-11731 at 3/28/15 1:46 AM:


-05:
* fix Colin's issues
* reverse sort the index so newer on top
* fix a few issues where some chars weren't properly escaped, which caused 
doxia to blow up on some of the older releases of Hadoop
* change clean -Preleasedocs to remove the entire directory not just the files
* rebase after HADOOP-11553 got committed


was (Author: aw):
-05:
* fix Colin's issues
* reverse sort the index so newer on top
* fix a few issues where some char weren't properly escaped which caused doxia 
to blow up on some of the older releases of Hadoop
* change clean -Preleasedocs to remove the entire directory not just the files
 

 Rework the changelog and releasenotes
 -

 Key: HADOOP-11731
 URL: https://issues.apache.org/jira/browse/HADOOP-11731
 Project: Hadoop Common
  Issue Type: New Feature
  Components: documentation
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-11731-00.patch, HADOOP-11731-01.patch, 
 HADOOP-11731-03.patch, HADOOP-11731-04.patch, HADOOP-11731-05.patch


 The current way we generate these build artifacts is awful.  Plus they are 
 ugly and, in the case of release notes, very hard to pick out what is 
 important.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11731) Rework the changelog and releasenotes

2015-03-27 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11731:
--
Attachment: HADOOP-11731-05.patch

-05:
* fix Colin's issues
* reverse sort the index so newer on top
* fix a few issues where some chars weren't properly escaped, which caused 
doxia to blow up on some of the older releases of Hadoop
* change clean -Preleasedocs to remove the entire directory not just the files
 

 Rework the changelog and releasenotes
 -

 Key: HADOOP-11731
 URL: https://issues.apache.org/jira/browse/HADOOP-11731
 Project: Hadoop Common
  Issue Type: New Feature
  Components: documentation
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-11731-00.patch, HADOOP-11731-01.patch, 
 HADOOP-11731-03.patch, HADOOP-11731-04.patch, HADOOP-11731-05.patch


 The current way we generate these build artifacts is awful.  Plus they are 
 ugly and, in the case of release notes, very hard to pick out what is 
 important.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11731) Rework the changelog and releasenotes

2015-03-27 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11731:
--
Status: Open  (was: Patch Available)

 Rework the changelog and releasenotes
 -

 Key: HADOOP-11731
 URL: https://issues.apache.org/jira/browse/HADOOP-11731
 Project: Hadoop Common
  Issue Type: New Feature
  Components: documentation
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-11731-00.patch, HADOOP-11731-01.patch, 
 HADOOP-11731-03.patch, HADOOP-11731-04.patch, HADOOP-11731-05.patch


 The current way we generate these build artifacts is awful.  Plus they are 
 ugly and, in the case of release notes, very hard to pick out what is 
 important.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11731) Rework the changelog and releasenotes

2015-03-27 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11731:
--
Status: Patch Available  (was: Open)

 Rework the changelog and releasenotes
 -

 Key: HADOOP-11731
 URL: https://issues.apache.org/jira/browse/HADOOP-11731
 Project: Hadoop Common
  Issue Type: New Feature
  Components: documentation
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-11731-00.patch, HADOOP-11731-01.patch, 
 HADOOP-11731-03.patch, HADOOP-11731-04.patch, HADOOP-11731-05.patch


 The current way we generate these build artifacts is awful.  Plus they are 
 ugly and, in the case of release notes, very hard to pick out what is 
 important.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11743) maven doesn't clean all the site files

2015-03-27 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-11743:
--
Description: After building the site files, performing a mvn clean 
-Preleasedocs doesn't actually clean everything up as git complains about 
untracked files.  (was: After building the site files, performing a mvn clean 
doesn't actually clean everything up as git complains about untracked files.)

 maven doesn't clean all the site files
 --

 Key: HADOOP-11743
 URL: https://issues.apache.org/jira/browse/HADOOP-11743
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Priority: Minor

 After building the site files, performing a mvn clean -Preleasedocs doesn't 
 actually clean everything up as git complains about untracked files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11743) maven doesn't clean all the site files

2015-03-27 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11743?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14385058#comment-14385058
 ] 

Allen Wittenauer commented on HADOOP-11743:
---

I cleaned up some of this as part of HADOOP-11553.

 maven doesn't clean all the site files
 --

 Key: HADOOP-11743
 URL: https://issues.apache.org/jira/browse/HADOOP-11743
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
Priority: Minor

 After building the site files, performing a mvn clean -Preleasedocs doesn't 
 actually clean everything up as git complains about untracked files.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (HADOOP-11731) Rework the changelog and releasenotes

2015-03-27 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11731?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14385071#comment-14385071
 ] 

Hadoop QA commented on HADOOP-11731:


{color:green}+1 overall{color}.  Here are the results of testing the latest 
attachment 
  http://issues.apache.org/jira/secure/attachment/12707952/HADOOP-11731-05.patch
  against trunk revision 3836ad6.

{color:green}+1 @author{color}.  The patch does not contain any @author 
tags.

{color:green}+0 tests included{color}.  The patch appears to be a 
documentation patch that doesn't require tests.

{color:green}+1 javac{color}.  The applied patch does not increase the 
total number of javac compiler warnings.

{color:green}+1 javadoc{color}.  There were no new javadoc warning messages.

{color:green}+1 eclipse:eclipse{color}.  The patch built with 
eclipse:eclipse.

{color:green}+1 findbugs{color}.  The patch does not introduce any new 
Findbugs (version 2.0.3) warnings.

{color:green}+1 release audit{color}.  The applied patch does not increase 
the total number of release audit warnings.

{color:green}+1 core tests{color}.  The patch passed unit tests in 
hadoop-common-project/hadoop-common.

Test results: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6019//testReport/
Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/6019//console

This message is automatically generated.

 Rework the changelog and releasenotes
 -

 Key: HADOOP-11731
 URL: https://issues.apache.org/jira/browse/HADOOP-11731
 Project: Hadoop Common
  Issue Type: New Feature
  Components: documentation
Affects Versions: 3.0.0
Reporter: Allen Wittenauer
 Attachments: HADOOP-11731-00.patch, HADOOP-11731-01.patch, 
 HADOOP-11731-03.patch, HADOOP-11731-04.patch, HADOOP-11731-05.patch


 The current way we generate these build artifacts is awful.  Plus they are 
 ugly and, in the case of release notes, very hard to pick out what is 
 important.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (HADOOP-11767) Generic token API and representation

2015-03-27 Thread Kai Zheng (JIRA)
Kai Zheng created HADOOP-11767:
--

 Summary: Generic token API and representation
 Key: HADOOP-11767
 URL: https://issues.apache.org/jira/browse/HADOOP-11767
 Project: Hadoop Common
  Issue Type: Sub-task
Reporter: Kai Zheng
Assignee: Kai Zheng


This will abstract common token aspects and define a generic token interface 
and representation. A JWT implementation of this API will be provided 
separately in another issue.
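
As a rough sketch of what such a generic token abstraction could look like (a 
hypothetical interface; none of these names are defined by this jira):

```java
import java.util.Date;
import java.util.Map;

// Hypothetical sketch only; methods and attribute names are illustrative,
// not the API proposed here.
public interface IdentityToken {
  String getSubject();                  // principal the token speaks for
  String getIssuer();                   // authority that issued the token
  Date getExpiryTime();                 // when the token stops being valid
  Map<String, Object> getAttributes();  // free-form claims/attributes
  byte[] encode();                      // wire representation, e.g. a JWT
}
```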



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11769) Pluggable token encoder, decoder and validator

2015-03-27 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11769?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-11769:
---
Description: This is to define common token encoder, decoder and validator 
interfaces, considering token serialization and deserialization, encryption 
and decryption, signing and verifying, expiration and audience checking, etc. 
With such APIs, pluggable and configurable token encoders, decoders and 
validators will be implemented in other issues.  (was: This is to 
define a common token encoder and decoder interface, considering token 
serialization and deserialization, encryption and decryption, signing and 
verifying, expiration and audience checking, and etc. By such API pluggable and 
configurable token encoder and decoder will be implemented in other issue.)

 Pluggable token encoder, decoder and validator
 --

 Key: HADOOP-11769
 URL: https://issues.apache.org/jira/browse/HADOOP-11769
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: security
Reporter: Kai Zheng
Assignee: Kai Zheng

 This is to define common token encoder, decoder and validator interfaces, 
 considering token serialization and deserialization, encryption and 
 decryption, signing and verifying, expiration and audience checking, etc. 
 With such APIs, pluggable and configurable token encoders, decoders and 
 validators will be implemented in other issues.
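
A minimal sketch of what pluggable encoder/decoder/validator interfaces could 
look like, assuming a generic token type along the lines of HADOOP-11767 (all 
names illustrative, not the committed API):

```java
// Hypothetical sketch only; the interfaces and the type parameter are
// illustrative, not the API defined in this jira.
public interface TokenCodec<T> {
  byte[] encode(T token) throws Exception;     // serialize, sign and/or encrypt
  T decode(byte[] material) throws Exception;  // decrypt/verify and deserialize
}

interface TokenValidator<T> {
  // Expiration, audience and signature checks would live behind this call.
  void validate(T token) throws Exception;
}
```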



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

