[jira] [Created] (HADOOP-11747) Why not re-use the security model offered by SELINUX?

2015-03-25 Thread Madhan Sundararajan Devaki (JIRA)
Madhan Sundararajan Devaki created HADOOP-11747:
---

 Summary: Why not re-use the security model offered by SELINUX?
 Key: HADOOP-11747
 URL: https://issues.apache.org/jira/browse/HADOOP-11747
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Madhan Sundararajan Devaki
Priority: Critical


SELinux was introduced to bring robust security management to the Linux OS.
In all distributions of Hadoop (Cloudera/Hortonworks/...), one of the 
pre-installation checklist items is to disable SELinux on all nodes of the 
cluster.
Why not re-use the security model offered by SELinux instead of 
re-inventing it from scratch through Sentry/Knox/etc.?



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Resolved] (HADOOP-11747) Why not re-use the security model offered by SELINUX?

2015-03-25 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas resolved HADOOP-11747.

Resolution: Invalid

Please direct questions like this to the mailing list. JIRA is the bug tracker.

 Why not re-use the security model offered by SELINUX?
 -

 Key: HADOOP-11747
 URL: https://issues.apache.org/jira/browse/HADOOP-11747
 Project: Hadoop Common
  Issue Type: Improvement
Reporter: Madhan Sundararajan Devaki
Priority: Critical






[jira] [Created] (HADOOP-11748) Secrets for auth cookies can be specified in clear text

2015-03-25 Thread Haohui Mai (JIRA)
Haohui Mai created HADOOP-11748:
---

 Summary: Secrets for auth cookies can be specified in clear text
 Key: HADOOP-11748
 URL: https://issues.apache.org/jira/browse/HADOOP-11748
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Haohui Mai
Priority: Critical


Based on the discussion on HADOOP-10670, this jira proposes to remove 
{{StringSecretProvider}} as it opens up possibilities for misconfiguration and 
security vulnerabilities.

{quote}

My understanding is that the use case of inlining the secret was never 
supported. The property is used to pass the secret internally. The way it 
worked before HADOOP-10868 is the following:

* Users specify the initializer of the authentication filter in the 
configuration.
* AuthenticationFilterInitializer reads the secret file. The server will not 
start if the secret file does not exist. The initializer sets the property 
only if it reads the file correctly.
* There is no way to specify the secret in the configuration out-of-the-box – 
the secret is always overwritten by AuthenticationFilterInitializer.

{quote}
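To illustrate the file-based behavior described in the quote, here is a minimal standalone sketch. It is a hypothetical helper, not the actual FileSignerSecretProvider or AuthenticationFilterInitializer API: the secret is read from a file at startup, and initialization fails fast when the file cannot be read, so an inline secret in the configuration never takes effect.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class SecretFileReader {
    // Reads the signature secret from a file; throws if the file is
    // unreadable, mirroring the "server will not start" behavior.
    public static String readSecret(Path secretFile) {
        try {
            byte[] bytes = Files.readAllBytes(secretFile);
            return new String(bytes, StandardCharsets.UTF_8).trim();
        } catch (IOException e) {
            // Fail fast: no secret file, no server startup.
            throw new RuntimeException(
                "Could not read signature secret file: " + secretFile, e);
        }
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Files.createTempFile("signature-secret", ".txt");
        Files.write(tmp, "s3cr3t\n".getBytes(StandardCharsets.UTF_8));
        System.out.println(readSecret(tmp)); // prints "s3cr3t"
    }
}
```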







[jira] [Created] (HADOOP-11749) HttpServer2 thread pool is set to daemon

2015-03-25 Thread Zhijie Shen (JIRA)
Zhijie Shen created HADOOP-11749:


 Summary: HttpServer2 thread pool is set to daemon
 Key: HADOOP-11749
 URL: https://issues.apache.org/jira/browse/HADOOP-11749
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Zhijie Shen


In many cases this is not a problem, because the RPC protocol will block the 
process from exiting. However, if the process only runs a web server, then 
because the thread pool is set to daemon, the process will exit immediately 
after starting instead of staying up and listening for incoming requests.

It's possible to work around this by keeping the main thread blocked, but 
I'm wondering if we can resolve it within HttpServer2.
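For context, a JVM exits as soon as only daemon threads remain, which is why a daemon-only web server process terminates right after startup. The sketch below is illustrative only (not the HttpServer2 code): it shows the distinction, and one possible remedy in the form of a ThreadFactory wrapper that clears the daemon flag.

```java
import java.util.concurrent.ThreadFactory;

public class DaemonPoolDemo {
    // Hypothetical remedy: wrap a factory so produced threads are
    // non-daemon and therefore keep the JVM alive.
    static ThreadFactory nonDaemon(final ThreadFactory base) {
        return r -> {
            Thread t = base.newThread(r);
            t.setDaemon(false); // non-daemon threads block JVM exit
            return t;
        };
    }

    public static void main(String[] args) {
        // Mimics a pool whose threads are marked daemon, as described above.
        ThreadFactory daemonFactory = r -> {
            Thread t = new Thread(r);
            t.setDaemon(true); // the JVM will not wait for this thread
            return t;
        };
        System.out.println(daemonFactory.newThread(() -> {}).isDaemon());            // true
        System.out.println(nonDaemon(daemonFactory).newThread(() -> {}).isDaemon()); // false
    }
}
```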





trunk issue?

2015-03-25 Thread Sangjin Lee
I just pulled the latest trunk, and ran some unit tests that involve
starting up the mini YARN cluster. The RM fails to come up with the
following error:

2015-03-25 17:52:55,573 WARN  [RM-0] mortbay.log (Slf4jLog.java:warn(89)) -
Failed startup of context org.mortbay.jetty.webapp.WebAppContext@2122b536
{/,jar:file:/Users/sjlee/.m2/repository/org/apache/hadoop/hadoop-yarn-common/3.0.0-SNAPSHOT/hadoop-yarn-common-3.0.0-SNAPSHOT.jar!/webapps/cluster}
javax.servlet.ServletException: java.lang.RuntimeException: Could not read
signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
at
org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
at
org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
at
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
at
org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at
org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
at
org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at
org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
at
org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
at org.mortbay.jetty.Server.doStart(Server.java:224)
at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
at
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
at
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at
org.apache.hadoop.yarn.server.MiniYARNCluster$2.run(MiniYARNCluster.java:312)
Caused by: java.lang.RuntimeException: Could not read signature secret
file: /Users/sjlee/hadoop-http-auth-signature-secret
at
org.apache.hadoop.security.authentication.util.FileSignerSecretProvider.init(FileSignerSecretProvider.java:59)
at
org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:264)
... 23 more

Is this a known issue? Could this be related to HADOOP-10670
https://issues.apache.org/jira/browse/HADOOP-10670?

Thanks,
Sangjin


[jira] [Created] (HADOOP-11751) Parquet scan fails when directory contains _SUCCESS or _logs

2015-03-25 Thread Steven Phillips (JIRA)
Steven Phillips created HADOOP-11751:


 Summary: Parquet scan fails when directory contains _SUCCESS or 
_logs
 Key: HADOOP-11751
 URL: https://issues.apache.org/jira/browse/HADOOP-11751
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Steven Phillips
Assignee: Steven Phillips


_SUCCESS and _logs files are often created by MapReduce jobs and are 
typically ignored. This is commonly done using the OutputFilesFilter in 
Hadoop.

The new FooterGatherer class, which is used to read Parquet footers in 
parallel, does not use this filter, so it attempts to read these as Parquet 
files and fails.

The fix is to use the DrillPathFilter, which extends OutputFilesFilter, in 
the FooterGatherer code.
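The filtering rule amounts to skipping names that begin with '_' or '.'. Below is a minimal standalone sketch of that rule; it is a hypothetical stand-in, since the real OutputFilesFilter and DrillPathFilter operate on Hadoop Path objects rather than plain strings.

```java
public class UnderscoreFilter {
    // Accepts real job output; rejects MapReduce bookkeeping files such as
    // _SUCCESS and _logs, and hidden files such as .crc checksums.
    public static boolean accept(String fileName) {
        return !fileName.startsWith("_") && !fileName.startsWith(".");
    }

    public static void main(String[] args) {
        System.out.println(accept("_SUCCESS"));   // false
        System.out.println(accept("part-00000")); // true
    }
}
```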





[jira] [Resolved] (HADOOP-11751) Parquet scan fails when directory contains _SUCCESS or _logs

2015-03-25 Thread Steven Phillips (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11751?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steven Phillips resolved HADOOP-11751.
--
Resolution: Invalid

 Parquet scan fails when directory contains _SUCCESS or _logs
 

 Key: HADOOP-11751
 URL: https://issues.apache.org/jira/browse/HADOOP-11751
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Steven Phillips
Assignee: Steven Phillips






[jira] [Created] (HADOOP-11752) Failed to execute goal org.apache.hadoop:hadoop-maven-plugins:2.4.0:protoc (compile-protoc) on project hadoop-common: org.apache.maven.plugin.MojoExecutionException: pr

2015-03-25 Thread Venkata Sravan Kumar Talasila (JIRA)
Venkata Sravan Kumar Talasila created HADOOP-11752:
--

 Summary: Failed to execute goal 
org.apache.hadoop:hadoop-maven-plugins:2.4.0:protoc (compile-protoc) on project 
hadoop-common: org.apache.maven.plugin.MojoExecutionException: protoc failure 
- [Help 1]
 Key: HADOOP-11752
 URL: https://issues.apache.org/jira/browse/HADOOP-11752
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.6.0, 2.4.0
 Environment: Operating System: Windows 8.1 64Bit
Cygwin 64Bit
protobuf-2.5.0
protoc 2.5.0
hadoop-2.4.0-src
apache-maven-3.3.1
Reporter: Venkata Sravan Kumar Talasila


While building Hadoop, I am facing the following error:

Failed to execute goal org.apache.hadoop:hadoop-maven-plugins:2.4.0:protoc 
(compile-protoc) on project hadoop-common: 
org.apache.maven.plugin.MojoExecutionException: protoc failure - [Help 1]

[INFO] 
[INFO] Building Apache Hadoop Common 2.4.0
[INFO] 
[INFO]
[INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-common ---
[INFO] Executing tasks

main:
[INFO] Executed tasks
[INFO]
[INFO] --- maven-enforcer-plugin:1.3.1:enforce (enforce-os) @ hadoop-common ---
[INFO]
[INFO] --- hadoop-maven-plugins:2.4.0:protoc (compile-protoc) @ hadoop-common 
---
[WARNING] [C:\cygwin64\usr\local\bin\protoc.exe, 
--java_out=C:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\target\generated-sources\java,
-IC:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\src\main\proto,
 
C:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\src\main\proto\GetUserMappingsProtocol.proto,
 
C:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\src\main\proto\HAServiceProtocol.proto,
 
C:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\src\main\proto\IpcConnectionContext.proto,
 
C:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\src\main\proto\ProtobufRpcEngine.proto,
 
C:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\src\main\proto\ProtocolInfo.proto,
 
C:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\src\main\proto\RefreshAuthorizationPolicyProtocol.proto,
 
C:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\src\main\proto\RefreshCallQueueProtocol.proto,
 
C:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\src\main\proto\RefreshUserMappingsProtocol.proto,
 
C:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\src\main\proto\RpcHeader.proto,
 
C:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\src\main\proto\Security.proto,
 
C:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\src\main\proto\ZKFCProtocol.proto]
 failed with error code 1
[ERROR] protoc compiler error
[INFO] 
[INFO] Reactor Summary:
[INFO]
[INFO] Apache Hadoop Main . SUCCESS [  6.080 s]
[INFO] Apache Hadoop Project POM .. SUCCESS [  2.140 s]
[INFO] Apache Hadoop Annotations .. SUCCESS [  2.691 s]
[INFO] Apache Hadoop Project Dist POM . SUCCESS [  1.250 s]
[INFO] Apache Hadoop Assemblies ... SUCCESS [  0.453 s]
[INFO] Apache Hadoop Maven Plugins  SUCCESS [  6.932 s]
[INFO] Apache Hadoop MiniKDC .. SUCCESS [01:59 min]
[INFO] Apache Hadoop Auth . SUCCESS [11:02 min]
[INFO] Apache Hadoop Auth Examples  SUCCESS [  3.697 s]
[INFO] Apache Hadoop Common ... FAILURE [  4.067 s]
[INFO] Apache Hadoop NFS .. SKIPPED
[INFO] Apache Hadoop Common Project ... SKIPPED
[INFO] Apache Hadoop HDFS . SKIPPED
[INFO] Apache Hadoop HttpFS ... SKIPPED
[INFO] Apache Hadoop HDFS BookKeeper Journal .. SKIPPED
[INFO] Apache Hadoop HDFS-NFS . SKIPPED
[INFO] Apache Hadoop HDFS Project . SKIPPED
[INFO] hadoop-yarn  SKIPPED
[INFO] hadoop-yarn-api  SKIPPED
[INFO] hadoop-yarn-common . SKIPPED
[INFO] hadoop-yarn-server . SKIPPED
[INFO] hadoop-yarn-server-common .. SKIPPED
[INFO] hadoop-yarn-server-nodemanager . SKIPPED
[INFO] hadoop-yarn-server-web-proxy ... SKIPPED
[INFO] 

[jira] [Created] (HADOOP-11750) distcp fails if we copy data from swift to secure HDFS

2015-03-25 Thread Chen He (JIRA)
Chen He created HADOOP-11750:


 Summary: distcp fails if we copy data from swift to secure HDFS
 Key: HADOOP-11750
 URL: https://issues.apache.org/jira/browse/HADOOP-11750
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/swift
Affects Versions: 2.3.0
Reporter: Chen He
Assignee: Chen He


ERROR tools.DistCp: Exception encountered
java.lang.IllegalArgumentException: java.net.UnknownHostException: 
babynames.main
at 
org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:373)
at 
org.apache.hadoop.security.SecurityUtil.buildDTServiceName(SecurityUtil.java:258)
at org.apache.hadoop.fs.FileSystem.getCanonicalServiceName(FileSystem.java:301)
at org.apache.hadoop.fs.FileSystem.collectDelegationTokens(FileSystem.java:523)
at org.apache.hadoop.fs.FileSystem.addDelegationTokens(FileSystem.java:507)
at 
org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:121)
at 
org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodesInternal(TokenCache.java:100)
at 
org.apache.hadoop.mapreduce.security.TokenCache.obtainTokensForNamenodes(TokenCache.java:80)
at 
org.apache.hadoop.tools.SimpleCopyListing.validatePaths(SimpleCopyListing.java:133)
at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:83)
at 
org.apache.hadoop.tools.GlobbedCopyListing.doBuildListing(GlobbedCopyListing.java:90)
at org.apache.hadoop.tools.CopyListing.buildListing(CopyListing.java:84)
at org.apache.hadoop.tools.DistCp.createInputFileListing(DistCp.java:353)
at org.apache.hadoop.tools.DistCp.execute(DistCp.java:160)
at org.apache.hadoop.tools.DistCp.run(DistCp.java:121)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.hadoop.tools.DistCp.main(DistCp.java:401)
Caused by: java.net.UnknownHostException: babynames.main
... 17 more





[jira] [Created] (HADOOP-11754) RM fails to start in non-secure mode due to authentication filter failure

2015-03-25 Thread Sangjin Lee (JIRA)
Sangjin Lee created HADOOP-11754:


 Summary: RM fails to start in non-secure mode due to 
authentication filter failure
 Key: HADOOP-11754
 URL: https://issues.apache.org/jira/browse/HADOOP-11754
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.0
Reporter: Sangjin Lee
Priority: Blocker


RM fails to start in non-secure mode with the following exception:

{noformat}
2015-03-25 22:02:42,526 WARN org.mortbay.log: failed RMAuthenticationFilter: 
javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
2015-03-25 22:02:42,526 WARN org.mortbay.log: Failed startup of context 
org.mortbay.jetty.webapp.WebAppContext@6de50b08{/,jar:file:/Users/sjlee/hadoop-3.0.0-SNAPSHOT/share/hadoop/yarn/hadoop-yarn-common-3.0.0-SNAPSHOT.jar!/webapps/cluster}
javax.servlet.ServletException: java.lang.RuntimeException: Could not read 
signature secret file: /Users/sjlee/hadoop-http-auth-signature-secret
at 
org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:266)
at 
org.apache.hadoop.security.authentication.server.AuthenticationFilter.init(AuthenticationFilter.java:225)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationFilter.init(DelegationTokenAuthenticationFilter.java:161)
at 
org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.init(RMAuthenticationFilter.java:53)
at org.mortbay.jetty.servlet.FilterHolder.doStart(FilterHolder.java:97)
at 
org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at 
org.mortbay.jetty.servlet.ServletHandler.initialize(ServletHandler.java:713)
at org.mortbay.jetty.servlet.Context.startContext(Context.java:140)
at 
org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1282)
at 
org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:518)
at 
org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:499)
at 
org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at 
org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
at 
org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
at 
org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at 
org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
at org.mortbay.jetty.Server.doStart(Server.java:224)
at 
org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:50)
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:773)
at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
at 
org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
Caused by: java.lang.RuntimeException: Could not read signature secret file: 
/Users/sjlee/hadoop-http-auth-signature-secret
at 
org.apache.hadoop.security.authentication.util.FileSignerSecretProvider.init(FileSignerSecretProvider.java:59)
at 
org.apache.hadoop.security.authentication.server.AuthenticationFilter.initializeSecretProvider(AuthenticationFilter.java:264)
... 23 more
...
2015-03-25 22:02:42,538 FATAL 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager: Error starting 
ResourceManager
org.apache.hadoop.yarn.webapp.WebAppException: Error starting http server
at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:279)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:974)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1074)
at 
org.apache.hadoop.service.AbstractService.start(AbstractService.java:193)
at 
org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1208)
Caused by: java.io.IOException: Problem in starting http server. Server 
handlers failed
at org.apache.hadoop.http.HttpServer2.start(HttpServer2.java:785)
at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:274)
... 4 more
{noformat}

This is likely a regression introduced by HADOOP-10670.





Re: trunk issue?

2015-03-25 Thread Sangjin Lee
Filed https://issues.apache.org/jira/browse/HADOOP-11754.

On Wed, Mar 25, 2015 at 9:04 PM, Sangjin Lee sjl...@gmail.com wrote:

 It's clearly in insecure mode. All I did was pull trunk and run
 a unit test (thus the mini YARN cluster). For example,

 mvn clean install -DskipTests
 cd
 hadoop-mapreduce-project/hadoop-mapreduce-client/hadoop-mapreduce-client-jobclient/
 mvn test -Dtest=TestUberAM#testConfVerificationWithClassloader

 Sangjin

 On Wed, Mar 25, 2015 at 8:38 PM, Mai Haohui ricet...@gmail.com wrote:

 It looks like it is related to HADOOP-10670. Are you running in secure
 mode? It's a bug if you see this error in insecure mode.

 ~Haohui






[jira] [Resolved] (HADOOP-9461) JobTracker and NameNode both grant delegation tokens to non-secure clients

2015-03-25 Thread Harsh J (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9461?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Harsh J resolved HADOOP-9461.
-
Resolution: Won't Fix

Not an issue on trunk/branch-2.

 JobTracker and NameNode both grant delegation tokens to non-secure clients
 --

 Key: HADOOP-9461
 URL: https://issues.apache.org/jira/browse/HADOOP-9461
 Project: Hadoop Common
  Issue Type: Bug
  Components: security
Reporter: Harsh J
Assignee: Harsh J
Priority: Minor

 If one looks at the MAPREDUCE-1516 added logic in JobTracker.java's 
 isAllowedDelegationTokenOp() method, and apply non-secure states of 
 UGI.isSecurityEnabled == false and authMethod == SIMPLE, the return result is 
 true when the intention is false (due to the shorted conditionals).
 This is allowing non-secure JobClients to easily request and use 
 DelegationTokens and cause unwanted errors to be printed in the JobTracker 
 when the renewer attempts to run. Ideally such clients ought to get an error 
 if they request a DT in non-secure mode.
 HDFS in trunk and branch-1 both too have the same problem. Trunk MR 
 (HistoryServer) and YARN are however, unaffected due to a simpler, inlined 
 logic instead of reuse of this faulty method.
 Note that fixing this will break Oozie today, due to the merged logic of 
 OOZIE-734. Oozie will require a fix as well if this is to be fixed in 
 branch-1. As a result, I'm going to mark this as an Incompatible Change.





Re: trunk issue?

2015-03-25 Thread Mai Haohui
It looks like it is related to HADOOP-10670. Are you running in secure
mode? It's a bug if you see this error in insecure mode.

~Haohui



[jira] [Resolved] (HADOOP-11752) Failed to execute goal org.apache.hadoop:hadoop-maven-plugins:2.4.0:protoc (compile-protoc) on project hadoop-common: org.apache.maven.plugin.MojoExecutionException: p

2015-03-25 Thread Haohui Mai (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11752?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Haohui Mai resolved HADOOP-11752.
-
  Resolution: Invalid
Target Version/s: 2.6.0, 2.4.0  (was: 2.4.0, 2.6.0)

 Failed to execute goal org.apache.hadoop:hadoop-maven-plugins:2.4.0:protoc 
 (compile-protoc) on project hadoop-common: 
 org.apache.maven.plugin.MojoExecutionException: protoc failure - [Help 1]
 

 Key: HADOOP-11752
 URL: https://issues.apache.org/jira/browse/HADOOP-11752
 Project: Hadoop Common
  Issue Type: Bug
  Components: build
Affects Versions: 2.4.0, 2.6.0
 Environment: Operating System: Windows 8.1 64Bit
 Cygwin 64Bit
 protobuf-2.5.0
 protoc 2.5.0
 hadoop-2.4.0-src
 apache-maven-3.3.1
Reporter: Venkata Sravan Kumar Talasila
  Labels: build, maven

 while build of Hadoop, I am facing the below error
 Failed to execute goal org.apache.hadoop:hadoop-maven-plugins:2.4.0:protoc 
 (compile-protoc) on project hadoop-common: 
 org.apache.maven.plugin.MojoExecutionException: protoc failure - [Help 1]
 [INFO] 
 
 [INFO] Building Apache Hadoop Common 2.4.0
 [INFO] 
 
 [INFO]
 [INFO] --- maven-antrun-plugin:1.7:run (create-testdirs) @ hadoop-common ---
 [INFO] Executing tasks
 main:
 [INFO] Executed tasks
 [INFO]
 [INFO] --- maven-enforcer-plugin:1.3.1:enforce (enforce-os) @ hadoop-common 
 ---
 [INFO]
 [INFO] --- hadoop-maven-plugins:2.4.0:protoc (compile-protoc) @ hadoop-common 
 ---
 [WARNING] [C:\cygwin64\usr\local\bin\protoc.exe,
 --java_out=C:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\target\generated-sources\java,
 -IC:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\src\main\proto,
 C:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\src\main\proto\GetUserMappingsProtocol.proto,
 C:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\src\main\proto\HAServiceProtocol.proto,
 C:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\src\main\proto\IpcConnectionContext.proto,
 C:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\src\main\proto\ProtobufRpcEngine.proto,
 C:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\src\main\proto\ProtocolInfo.proto,
 C:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\src\main\proto\RefreshAuthorizationPolicyProtocol.proto,
 C:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\src\main\proto\RefreshCallQueueProtocol.proto,
 C:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\src\main\proto\RefreshUserMappingsProtocol.proto,
 C:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\src\main\proto\RpcHeader.proto,
 C:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\src\main\proto\Security.proto,
 C:\cygwin64\usr\local\hadoop-2.4.0-src\hadoop-common-project\hadoop-common\src\main\proto\ZKFCProtocol.proto]
 failed with error code 1
 [ERROR] protoc compiler error
 [INFO] ------------------------------------------------------------------------
 [INFO] Reactor Summary:
 [INFO]
 [INFO] Apache Hadoop Main . SUCCESS [  6.080 s]
 [INFO] Apache Hadoop Project POM .. SUCCESS [  2.140 s]
 [INFO] Apache Hadoop Annotations .. SUCCESS [  2.691 s]
 [INFO] Apache Hadoop Project Dist POM . SUCCESS [  1.250 s]
 [INFO] Apache Hadoop Assemblies ... SUCCESS [  0.453 s]
 [INFO] Apache Hadoop Maven Plugins  SUCCESS [  6.932 s]
 [INFO] Apache Hadoop MiniKDC .. SUCCESS [01:59 min]
 [INFO] Apache Hadoop Auth . SUCCESS [11:02 min]
 [INFO] Apache Hadoop Auth Examples  SUCCESS [  3.697 s]
 [INFO] Apache Hadoop Common ... FAILURE [  4.067 s]
 [INFO] Apache Hadoop NFS .. SKIPPED
 [INFO] Apache Hadoop Common Project ... SKIPPED
 [INFO] Apache Hadoop HDFS . SKIPPED
 [INFO] Apache Hadoop HttpFS ... SKIPPED
 [INFO] Apache Hadoop HDFS BookKeeper Journal .. SKIPPED
 [INFO] Apache Hadoop HDFS-NFS . SKIPPED
 [INFO] Apache Hadoop HDFS Project . SKIPPED
 
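The generic "protoc failure" with exit code 1 usually means the `protoc` binary on the PATH either does not match the version the Hadoop 2.4.0 POM expects (2.5.0) or cannot handle the arguments it is given (a common pitfall when a Cygwin-built protoc receives Windows-style paths). A minimal sketch of the kind of version comparison the build performs; the class and method names here are illustrative, not the actual hadoop-maven-plugins code:

```java
public class ProtocVersionCheck {

    // Parse the "libprotoc X.Y.Z" line printed by `protoc --version` and
    // compare it against the version the build expects (2.5.0 for Hadoop 2.4.0).
    public static boolean versionMatches(String protocOutput, String expected) {
        String[] parts = protocOutput.trim().split("\\s+");
        return parts.length == 2 && parts[1].equals(expected);
    }

    public static void main(String[] args) {
        // In a real check the first argument would come from executing protoc.
        System.out.println(versionMatches("libprotoc 2.5.0", "2.5.0")); // true
        System.out.println(versionMatches("libprotoc 2.4.1", "2.5.0")); // false
    }
}
```

If the version matches, the next things worth checking on Windows/Cygwin are whether `protoc.exe` runs from a plain `cmd` shell and whether an older protoc earlier on the PATH shadows the intended one.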

[jira] [Created] (HADOOP-11753) TestS3AContractOpen#testOpenReadZeroByteFile fails due to negative range header

2015-03-25 Thread Takenori Sato (JIRA)
Takenori Sato created HADOOP-11753:
--

 Summary: TestS3AContractOpen#testOpenReadZeroByteFile fails due to 
negative range header
 Key: HADOOP-11753
 URL: https://issues.apache.org/jira/browse/HADOOP-11753
 Project: Hadoop Common
  Issue Type: Bug
  Components: fs/s3
Reporter: Takenori Sato


_TestS3AContractOpen#testOpenReadZeroByteFile_ fails as follows.

{code}
testOpenReadZeroByteFile(org.apache.hadoop.fs.contract.s3a.TestS3AContractOpen)  Time elapsed: 3.312 sec  <<< ERROR!
com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 416, AWS Service: Amazon S3, AWS Request ID: A58A95E0D36811E4, AWS Error Code: InvalidRange, AWS Error Message: The requested range cannot be satisfied.
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
at com.amazonaws.services.s3.AmazonS3Client.getObject(AmazonS3Client.java:)
at org.apache.hadoop.fs.s3a.S3AInputStream.reopen(S3AInputStream.java:91)
at org.apache.hadoop.fs.s3a.S3AInputStream.openIfNeeded(S3AInputStream.java:62)
at org.apache.hadoop.fs.s3a.S3AInputStream.read(S3AInputStream.java:127)
at java.io.FilterInputStream.read(FilterInputStream.java:83)
at org.apache.hadoop.fs.contract.AbstractContractOpenTest.testOpenReadZeroByteFile(AbstractContractOpenTest.java:66)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:47)
at org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
at org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:44)
at org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
at org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
at org.junit.internal.runners.statements.RunAfters.evaluate(RunAfters.java:27)
at org.junit.internal.runners.statements.FailOnTimeout$StatementThread.run(FailOnTimeout.java:74)
{code}

This is because the Range header is wrong when calling _S3AInputStream#read_ after _S3AInputStream#open_.

{code}
Range: bytes=0--1
* from 0 to -1
{code}
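The invalid header can be reproduced without S3: if the end of the range is computed as `contentLength - 1`, a zero-byte object yields an end offset of -1 and the header above. A minimal sketch of that computation; the class and method names are hypothetical, not the actual _S3AInputStream_ code:

```java
public class RangeHeaderDemo {

    // Build the HTTP Range header for reading from pos to the end of an
    // object of the given length (hypothetical helper for illustration).
    public static String rangeHeader(long pos, long contentLength) {
        // Naive computation: the last byte is contentLength - 1. For a
        // zero-byte object this produces "bytes=0--1", which S3 rejects
        // with 416 InvalidRange.
        return "bytes=" + pos + "-" + (contentLength - 1);
    }

    public static void main(String[] args) {
        System.out.println(rangeHeader(0, 0));  // bytes=0--1 (invalid)
        System.out.println(rangeHeader(0, 10)); // bytes=0-9  (valid)
    }
}
```

A plausible fix would be to skip the ranged GET entirely (or return EOF immediately) when the object length is zero, instead of sending a negative end offset.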

Tested on the latest branch-2.7.

{quote}
$ git log
commit d286673c602524af08935ea132c8afd181b6e2e4
Author: Jitendra Pandey Jitendra@Jitendra-Pandeys-MacBook-Pro-4.local
Date:   Tue Mar 24 16:17:06 2015 -0700
{quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)