[jira] [Updated] (HADOOP-13135) Encounter response code 500 when accessing /metrics endpoint

2016-05-29 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated HADOOP-13135:

Description: 
When accessing the /metrics endpoint on an HBase master through Hadoop 2.7.1, I got:
{code}
HTTP ERROR 500

Problem accessing /metrics. Reason:

INTERNAL_SERVER_ERROR
Caused by:

java.lang.NullPointerException
at 
org.apache.hadoop.http.HttpServer2.isInstrumentationAccessAllowed(HttpServer2.java:1029)
at 
org.apache.hadoop.metrics.MetricsServlet.doGet(MetricsServlet.java:109)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at 
org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
at 
org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:113)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
{code}
[~ajisakaa] suggested that code 500 should be 404 (NOT FOUND).
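
As a rough illustration of that suggestion (a sketch only, assuming the guard would live in {{MetricsServlet#doGet}}; the actual fix may look different), the servlet could answer 404 when the Hadoop configuration attribute is missing from the servlet context instead of letting the NPE surface as a 500:
{code}
// Sketch only: if the hosting HTTP server never registered the Hadoop
// configuration attribute (which appears to be the case when the servlet is
// hosted by HBase's own HTTP server), report NOT_FOUND rather than
// dereferencing a null Configuration.
@Override
public void doGet(HttpServletRequest request, HttpServletResponse response)
    throws ServletException, IOException {
  ServletContext ctx = getServletContext();
  if (ctx.getAttribute(HttpServer2.CONF_CONTEXT_ATTRIBUTE) == null) {
    response.sendError(HttpServletResponse.SC_NOT_FOUND,
        "/metrics is not supported by this server");
    return;
  }
  if (!HttpServer2.isInstrumentationAccessAllowed(ctx, request, response)) {
    return;
  }
  // ... existing metrics rendering ...
}
{code}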

  was:
When accessing the /metrics endpoint on an HBase master through Hadoop 2.7.1, I got:
{code}
HTTP ERROR 500

Problem accessing /metrics. Reason:

INTERNAL_SERVER_ERROR
Caused by:

java.lang.NullPointerException
at 
org.apache.hadoop.http.HttpServer2.isInstrumentationAccessAllowed(HttpServer2.java:1029)
at 
org.apache.hadoop.metrics.MetricsServlet.doGet(MetricsServlet.java:109)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
at 
org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
at 
org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:113)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
{code}

[~ajisakaa] suggested that code 500 should be 404 (NOT FOUND).


> Encounter response code 500 when accessing /metrics endpoint
> 
>
> Key: HADOOP-13135
> URL: https://issues.apache.org/jira/browse/HADOOP-13135
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.1
>Reporter: Ted Yu
>
> When accessing the /metrics endpoint on an HBase master through Hadoop 2.7.1, I got:
> {code}
> HTTP ERROR 500
> Problem accessing /metrics. Reason:
> INTERNAL_SERVER_ERROR
> Caused by:
> java.lang.NullPointerException
>   at 
> org.apache.hadoop.http.HttpServer2.isInstrumentationAccessAllowed(HttpServer2.java:1029)
>   at 
> org.apache.hadoop.metrics.MetricsServlet.doGet(MetricsServlet.java:109)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:707)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:820)
>   at 
> org.mortbay.jetty.servlet.ServletHolder.handle(ServletHolder.java:511)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1221)
>   at 
> org.apache.hadoop.hbase.http.lib.StaticUserWebFilter$StaticUserFilter.doFilter(StaticUserWebFilter.java:113)
>   at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> {code}
> [~ajisakaa] suggested that code 500 should be 404 (NOT FOUND).






[jira] [Commented] (HADOOP-12579) Deprecate and remove WriteableRPCEngine

2016-05-29 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15306191#comment-15306191
 ] 

Kai Zheng commented on HADOOP-12579:


Yes I will fix it too. Thanks [~bibinchundatt]!

> Deprecate and remove WriteableRPCEngine
> ---
>
> Key: HADOOP-12579
> URL: https://issues.apache.org/jira/browse/HADOOP-12579
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Kai Zheng
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-12579-v1.patch, HADOOP-12579-v10.patch, 
> HADOOP-12579-v11.patch, HADOOP-12579-v3.patch, HADOOP-12579-v4.patch, 
> HADOOP-12579-v5.patch, HADOOP-12579-v6.patch, HADOOP-12579-v7.patch, 
> HADOOP-12579-v8.patch, HADOOP-12579-v9.patch
>
>
> The {{WriteableRPCEngine}} depends on Java's serialization mechanisms for RPC 
> requests. Without proper checks, it has been shown that it can lead to security 
> vulnerabilities such as remote code execution (e.g., COLLECTIONS-580, 
> HADOOP-12577).
> The implementation has now migrated from {{WriteableRPCEngine}} to 
> {{ProtobufRPCEngine}}. This jira proposes to deprecate {{WriteableRPCEngine}} 
> in branch-2 and to remove it in trunk.






[jira] [Commented] (HADOOP-12579) Deprecate and remove WriteableRPCEngine

2016-05-29 Thread Bibin A Chundatt (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15306190#comment-15306190
 ] 

Bibin A Chundatt commented on HADOOP-12579:
---

Same reason  for  MAPREDUCE-6705 failure too

> Deprecate and remove WriteableRPCEngine
> ---
>
> Key: HADOOP-12579
> URL: https://issues.apache.org/jira/browse/HADOOP-12579
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Kai Zheng
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-12579-v1.patch, HADOOP-12579-v10.patch, 
> HADOOP-12579-v11.patch, HADOOP-12579-v3.patch, HADOOP-12579-v4.patch, 
> HADOOP-12579-v5.patch, HADOOP-12579-v6.patch, HADOOP-12579-v7.patch, 
> HADOOP-12579-v8.patch, HADOOP-12579-v9.patch
>
>
> The {{WriteableRPCEngine}} depends on Java's serialization mechanisms for RPC 
> requests. Without proper checks, it has been shown that it can lead to security 
> vulnerabilities such as remote code execution (e.g., COLLECTIONS-580, 
> HADOOP-12577).
> The implementation has now migrated from {{WriteableRPCEngine}} to 
> {{ProtobufRPCEngine}}. This jira proposes to deprecate {{WriteableRPCEngine}} 
> in branch-2 and to remove it in trunk.






[jira] [Commented] (HADOOP-12579) Deprecate and remove WriteableRPCEngine

2016-05-29 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15306188#comment-15306188
 ] 

Kai Zheng commented on HADOOP-12579:


Thanks [~asuresh] for raising these!! Yes I will look at and fix them ASAP.

> Deprecate and remove WriteableRPCEngine
> ---
>
> Key: HADOOP-12579
> URL: https://issues.apache.org/jira/browse/HADOOP-12579
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Kai Zheng
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-12579-v1.patch, HADOOP-12579-v10.patch, 
> HADOOP-12579-v11.patch, HADOOP-12579-v3.patch, HADOOP-12579-v4.patch, 
> HADOOP-12579-v5.patch, HADOOP-12579-v6.patch, HADOOP-12579-v7.patch, 
> HADOOP-12579-v8.patch, HADOOP-12579-v9.patch
>
>
> The {{WriteableRPCEngine}} depends on Java's serialization mechanisms for RPC 
> requests. Without proper checks, it has been shown that it can lead to security 
> vulnerabilities such as remote code execution (e.g., COLLECTIONS-580, 
> HADOOP-12577).
> The implementation has now migrated from {{WriteableRPCEngine}} to 
> {{ProtobufRPCEngine}}. This jira proposes to deprecate {{WriteableRPCEngine}} 
> in branch-2 and to remove it in trunk.






[jira] [Resolved] (HADOOP-13215) INTERNAL_SERVER_ERROR due to NPE from PseudoAuthenticationHandler

2016-05-29 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki resolved HADOOP-13215.
-
Resolution: Duplicate

> INTERNAL_SERVER_ERROR due to NPE from PseudoAuthenticationHandler
> -
>
> Key: HADOOP-13215
> URL: https://issues.apache.org/jira/browse/HADOOP-13215
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.2, 3.0.0-alpha1
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>
> After upgrading httpc* in HADOOP-12767, {{ServletRequest}} can return a null 
> value from {{getQueryString}}. 
> When {{PseudoAuthenticationHandler}} is used, this results in an NPE.
> We encountered this exception while backporting to Hadoop 2.7.x, but it looks 
> like it can also occur in later Hadoop versions.
> {code}
> HTTP ERROR 500
> Problem accessing /cluster. Reason:
> INTERNAL_SERVER_ERROR
> Caused by:
> java.lang.NullPointerException
> at 
> org.apache.http.client.utils.URLEncodedUtils.parse(URLEncodedUtils.java:235)
> at 
> org.apache.hadoop.security.authentication.server.PseudoAuthenticationHandler.getUserName(PseudoAuthenticationHandler.java:145)
> at 
> org.apache.hadoop.security.authentication.server.PseudoAuthenticationHandler.authenticate(PseudoAuthenticationHandler.java:181)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.authenticate(DelegationTokenAuthenticationHandler.java:348)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:519)
> at 
> org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.doFilter(RMAuthenticationFilter.java:82)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1243)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
> at 
> org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
> at 
> org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
> at 
> org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
> at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
> at 
> org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
> at 
> org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
> at org.mortbay.jetty.Server.handle(Server.java:326)
> at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
> at 
> org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
> at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
> at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
> at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
> at 
> org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
> at 
> org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
> {code}






[jira] [Commented] (HADOOP-13215) INTERNAL_SERVER_ERROR due to NPE from PseudoAuthenticationHandler

2016-05-29 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13215?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15306184#comment-15306184
 ] 

Kai Sasaki commented on HADOOP-13215:
-

Sorry, it seems this is already resolved by HADOOP-11859. We can close this one as a duplicate.
https://issues.apache.org/jira/browse/HADOOP-11859

> INTERNAL_SERVER_ERROR due to NPE from PseudoAuthenticationHandler
> -
>
> Key: HADOOP-13215
> URL: https://issues.apache.org/jira/browse/HADOOP-13215
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.2, 3.0.0-alpha1
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>
> After upgrading httpc* in HADOOP-12767, {{ServletRequest}} can return a null 
> value from {{getQueryString}}. 
> When {{PseudoAuthenticationHandler}} is used, this results in an NPE.
> We encountered this exception while backporting to Hadoop 2.7.x, but it looks 
> like it can also occur in later Hadoop versions.
> {code}
> HTTP ERROR 500
> Problem accessing /cluster. Reason:
> INTERNAL_SERVER_ERROR
> Caused by:
> java.lang.NullPointerException
> at 
> org.apache.http.client.utils.URLEncodedUtils.parse(URLEncodedUtils.java:235)
> at 
> org.apache.hadoop.security.authentication.server.PseudoAuthenticationHandler.getUserName(PseudoAuthenticationHandler.java:145)
> at 
> org.apache.hadoop.security.authentication.server.PseudoAuthenticationHandler.authenticate(PseudoAuthenticationHandler.java:181)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.authenticate(DelegationTokenAuthenticationHandler.java:348)
> at 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:519)
> at 
> org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.doFilter(RMAuthenticationFilter.java:82)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1243)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
> at 
> org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
> at 
> org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
> at 
> org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
> at 
> org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
> at 
> org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
> at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
> at 
> org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
> at 
> org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
> at org.mortbay.jetty.Server.handle(Server.java:326)
> at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
> at 
> org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
> at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
> at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
> at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
> at 
> org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
> at 
> org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
> {code}






[jira] [Updated] (HADOOP-13215) INTERNAL_SERVER_ERROR due to NPE from PseudoAuthenticationHandler

2016-05-29 Thread Kai Sasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Sasaki updated HADOOP-13215:

Description: 
After upgrading httpc* in HADOOP-12767, {{ServletRequest}} can return a null value 
from {{getQueryString}}. 
When {{PseudoAuthenticationHandler}} is used, this results in an NPE.
We encountered this exception while backporting to Hadoop 2.7.x, but it looks like 
it can also occur in later Hadoop versions.

{code}
HTTP ERROR 500

Problem accessing /cluster. Reason:

INTERNAL_SERVER_ERROR
Caused by:

java.lang.NullPointerException
at 
org.apache.http.client.utils.URLEncodedUtils.parse(URLEncodedUtils.java:235)
at 
org.apache.hadoop.security.authentication.server.PseudoAuthenticationHandler.getUserName(PseudoAuthenticationHandler.java:145)
at 
org.apache.hadoop.security.authentication.server.PseudoAuthenticationHandler.authenticate(PseudoAuthenticationHandler.java:181)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.authenticate(DelegationTokenAuthenticationHandler.java:348)
at 
org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:519)
at 
org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.doFilter(RMAuthenticationFilter.java:82)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at 
org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1243)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
at 
org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
at 
org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
at org.mortbay.jetty.Server.handle(Server.java:326)
at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
at 
org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
at 
org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
at 
org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)

{code}
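
The NPE comes from handing a null query string to {{URLEncodedUtils.parse}}. A minimal sketch of the kind of guard that avoids it (assuming it lands in {{PseudoAuthenticationHandler#getUserName}} and reusing the handler's existing {{UTF8_CHARSET}} constant; the change actually committed under HADOOP-11859 may differ in detail):
{code}
// Sketch only: treat a missing query string as "no user name supplied"
// instead of passing null to URLEncodedUtils.parse().
private String getUserName(HttpServletRequest request) {
  String queryString = request.getQueryString();
  if (queryString == null || queryString.isEmpty()) {
    return null;
  }
  List<NameValuePair> list = URLEncodedUtils.parse(queryString, UTF8_CHARSET);
  if (list != null) {
    for (NameValuePair nv : list) {
      if (PseudoAuthenticator.USER_NAME.equals(nv.getName())) {
        return nv.getValue();
      }
    }
  }
  return null;
}
{code}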

  was:
After upgrading httpc* in HADOOP-12767, {{ServletRequest}} can return a null value 
from {{getQueryString}}. 
When {{PseudoAuthenticationHandler}} is used, this results in an NPE.
We encountered this exception while backporting to Hadoop 2.7.x, but it looks like 
it can also occur in later Hadoop versions.


> INTERNAL_SERVER_ERROR due to NPE from PseudoAuthenticationHandler
> -
>
> Key: HADOOP-13215
> URL: https://issues.apache.org/jira/browse/HADOOP-13215
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.2, 3.0.0-alpha1
>Reporter: Kai Sasaki
>Assignee: Kai Sasaki
>
> After upgrading httpc* in HADOOP-12767, {{ServletRequest}} can return a null 
> value from {{getQueryString}}. 
> When {{PseudoAuthenticationHandler}} is used, this results in an NPE.
> We encountered this exception while backporting to Hadoop 2.7.x, but it looks 
> like it can also occur in later Hadoop versions.
> {code}
> HTTP ERROR 500
> Problem accessing /cluster. Reason:
> INTERNAL_SERVER_ERROR
> Caused by:
> java.lang.NullPointerException
> at 
> org.apache.http.client.utils.URLEncodedUtils.parse(URLEncodedUtils.java:235)
> at 
> org.apache.hadoop.security.authentication.server.PseudoAuthenticationHandler.getUserName(PseudoAuthenticationHandler.java:145)
> at 
> org.apache.hadoop.security.authentication.server.PseudoAuthenticationHandler.authenticate(PseudoAuthenticationHandler.java:181)
> at 
> org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.authenticate(DelegationTokenAuthenticationHandler.java:348)
> at 
> 

[jira] [Created] (HADOOP-13215) INTERNAL_SERVER_ERROR due to NPE from PseudoAuthenticationHandler

2016-05-29 Thread Kai Sasaki (JIRA)
Kai Sasaki created HADOOP-13215:
---

 Summary: INTERNAL_SERVER_ERROR due to NPE from 
PseudoAuthenticationHandler
 Key: HADOOP-13215
 URL: https://issues.apache.org/jira/browse/HADOOP-13215
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.7.2, 3.0.0-alpha1
Reporter: Kai Sasaki
Assignee: Kai Sasaki


After upgrading httpc* in HADOOP-12767, {{ServletRequest}} can return a null value 
from {{getQueryString}}. 
When {{PseudoAuthenticationHandler}} is used, this results in an NPE.
We encountered this exception while backporting to Hadoop 2.7.x, but it looks like 
it can also occur in later Hadoop versions.






[jira] [Commented] (HADOOP-12579) Deprecate and remove WriteableRPCEngine

2016-05-29 Thread Arun Suresh (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12579?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15306174#comment-15306174
 ] 

Arun Suresh commented on HADOOP-12579:
--

[~drankye], [~wheat9], I am seeing a bunch of test failures in trunk builds since 
this was committed. For YARN, I have raised YARN-5163 to track the specific 
failures. I notice that once I revert HADOOP-12579 locally, 
{{TestClientToAMTokens}} does indeed pass.

Furthermore, a lot of MR tests that use MiniYarnCluster seem to fail with the 
following error message in their Map tasks:
{noformat}
2016-05-29 19:35:08,358 WARN [main] org.apache.hadoop.mapred.YarnChild: 
Exception running child : java.lang.reflect.UndeclaredThrowableException
  at com.sun.proxy.$Proxy10.getTask(Unknown Source)
  at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:136)
Caused by: com.google.protobuf.ServiceException: Too many or few parameters for 
request. Method: [getTask], Expected: 2, Actual: 1
  at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
  ... 2 more

2016-05-29 19:35:08,359 INFO [main] 
org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping MapTask metrics 
system...
{noformat}
Again, after reverting this locally, they pass.

Kindly do take a look.
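
For context on the migration this change enforces, protocols are bound to an RPC engine roughly as below (an illustrative sketch only; {{TestProtocolPB}} is a placeholder name, and whether any failing test is simply missing such a registration is an open question, not a diagnosis):
{code}
// Illustrative only: explicitly selecting ProtobufRpcEngine for a protocol.
// With WritableRPCEngine deprecated/removed, protocols need an explicit
// Protobuf binding like this (assumption, not a confirmed cause of the
// failures above).
Configuration conf = new Configuration();
RPC.setProtocolEngine(conf, TestProtocolPB.class, ProtobufRpcEngine.class);
{code}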

> Deprecate and remove WriteableRPCEngine
> ---
>
> Key: HADOOP-12579
> URL: https://issues.apache.org/jira/browse/HADOOP-12579
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Haohui Mai
>Assignee: Kai Zheng
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-12579-v1.patch, HADOOP-12579-v10.patch, 
> HADOOP-12579-v11.patch, HADOOP-12579-v3.patch, HADOOP-12579-v4.patch, 
> HADOOP-12579-v5.patch, HADOOP-12579-v6.patch, HADOOP-12579-v7.patch, 
> HADOOP-12579-v8.patch, HADOOP-12579-v9.patch
>
>
> The {{WriteableRPCEngine}} depends on Java's serialization mechanisms for RPC 
> requests. Without proper checks, it has been shown that it can lead to security 
> vulnerabilities such as remote code execution (e.g., COLLECTIONS-580, 
> HADOOP-12577).
> The implementation has now migrated from {{WriteableRPCEngine}} to 
> {{ProtobufRPCEngine}}. This jira proposes to deprecate {{WriteableRPCEngine}} 
> in branch-2 and to remove it in trunk.






[jira] [Commented] (HADOOP-13213) Small Documentation bug with AuthenticatedURL in hadoop-auth

2016-05-29 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15306168#comment-15306168
 ] 

Wei-Chiu Chuang commented on HADOOP-13213:
--

Thanks for the contribution, [~tellisnz]!
The patch looks good to me, and I can verify the usage is correct. I had used 
AuthenticatedURL before and was confused by its Javadocs. Please assign 
yourself as the assignee.

> Small Documentation bug with AuthenticatedURL in hadoop-auth
> 
>
> Key: HADOOP-13213
> URL: https://issues.apache.org/jira/browse/HADOOP-13213
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.2
>Reporter: Tom Ellis
>Priority: Trivial
>  Labels: documentation, patch
>
> Small documentation error in hadoop-auth.
> AuthenticatedURL doesn't have a constructor that takes URL and Token; these 
> params are passed to openConnection(url, token) instead.






[jira] [Commented] (HADOOP-12767) update apache httpclient version to 4.5.2; httpcore to 4.4.4

2016-05-29 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12767?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15306159#comment-15306159
 ] 

Kai Sasaki commented on HADOOP-12767:
-

I tried applying the patch to Hadoop 2.7.2 and found that the ResourceManager did 
not work correctly. Do you think this patch does not work with Hadoop 2.7.x?
{code}
Problem accessing /cluster. Reason:

INTERNAL_SERVER_ERROR
Caused by:

java.lang.NullPointerException
at 
org.apache.http.client.utils.URLEncodedUtils.parse(URLEncodedUtils.java:235)
at 
org.apache.hadoop.security.authentication.server.PseudoAuthenticationHandler.getUserName(PseudoAuthenticationHandler.java:145)
at 
org.apache.hadoop.security.authentication.server.PseudoAuthenticationHandler.authenticate(PseudoAuthenticationHandler.java:181)
at 
org.apache.hadoop.security.token.delegation.web.DelegationTokenAuthenticationHandler.authenticate(DelegationTokenAuthenticationHandler.java:348)
at 
org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:519)
at 
org.apache.hadoop.yarn.server.security.http.RMAuthenticationFilter.doFilter(RMAuthenticationFilter.java:82)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at 
org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1243)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
at 
org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1212)
at 
org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:399)
at 
org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
at 
org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:182)
at 
org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:766)
at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:450)
at 
org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:230)
at 
org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)
at org.mortbay.jetty.Server.handle(Server.java:326)
at 
org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)
at 
org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)
at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)
at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)
at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)
at 
org.mortbay.io.nio.SelectChannelEndPoint.run(SelectChannelEndPoint.java:410)
at 
org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
{code}

> update apache httpclient version to 4.5.2; httpcore to 4.4.4
> 
>
> Key: HADOOP-12767
> URL: https://issues.apache.org/jira/browse/HADOOP-12767
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.7.2
>Reporter: Artem Aliev
>Assignee: Artem Aliev
> Fix For: 2.8.0
>
> Attachments: HADOOP-12767-branch-2-005.patch, 
> HADOOP-12767-branch-2.004.patch, HADOOP-12767-branch-2.005.patch, 
> HADOOP-12767.001.patch, HADOOP-12767.002.patch, HADOOP-12767.003.patch, 
> HADOOP-12767.004.patch
>
>
> Various SSL security fixes are needed.  See:  CVE-2012-6153, CVE-2011-4461, 
> CVE-2014-3577, CVE-2015-5262.






[jira] [Commented] (HADOOP-12910) Add new FileSystem API to support asynchronous method calls

2016-05-29 Thread stack (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15306068#comment-15306068
 ] 

stack commented on HADOOP-12910:


OpenTSDB is LGPL but Deferred is not; it looks to be BSD. A jar that contains 
Deferred only is available on mvnrepository here: 
http://mvnrepository.com/artifact/com.stumbleupon/async/1.4.1 so we don't have to 
copy it. If interested, we can ping the author to get further clarity on the license.

> Add new FileSystem API to support asynchronous method calls
> ---
>
> Key: HADOOP-12910
> URL: https://issues.apache.org/jira/browse/HADOOP-12910
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12910-HDFS-9924.000.patch, 
> HADOOP-12910-HDFS-9924.001.patch, HADOOP-12910-HDFS-9924.002.patch
>
>
> Add a new API, namely FutureFileSystem (or AsynchronousFileSystem, if it is a 
> better name).  All the APIs in FutureFileSystem are the same as FileSystem 
> except that the return type is wrapped by Future, e.g.
> {code}
>   //FileSystem
>   public boolean rename(Path src, Path dst) throws IOException;
>   //FutureFileSystem
>   public Future<Boolean> rename(Path src, Path dst) throws IOException;
> {code}
> Note that FutureFileSystem does not extend FileSystem.






[jira] [Commented] (HADOOP-12910) Add new FileSystem API to support asynchronous method calls

2016-05-29 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15306059#comment-15306059
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-12910:
--

It seems that people really want Future with callback.  I will think about how 
to do it.  Thanks.
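
To make "Future with callback" concrete, a hedged sketch of one possible shape built on {{java.util.concurrent.CompletableFuture}} ({{AsyncFileSystem}} and {{asyncFs}} are placeholder names, not a proposed API):
{code}
// One possible shape (not a decided API): CompletableFuture supports both a
// blocking get() and non-blocking callbacks. AsyncFileSystem is a placeholder.
public interface AsyncFileSystem {
  CompletableFuture<Boolean> rename(Path src, Path dst) throws IOException;
}

// Caller side: attach a callback instead of blocking on get().
asyncFs.rename(src, dst).whenComplete((renamed, error) -> {
  if (error != null) {
    System.err.println("rename failed: " + error);
  } else {
    System.out.println("rename returned " + renamed);
  }
});
{code}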

> Add new FileSystem API to support asynchronous method calls
> ---
>
> Key: HADOOP-12910
> URL: https://issues.apache.org/jira/browse/HADOOP-12910
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12910-HDFS-9924.000.patch, 
> HADOOP-12910-HDFS-9924.001.patch, HADOOP-12910-HDFS-9924.002.patch
>
>
> Add a new API, namely FutureFileSystem (or AsynchronousFileSystem, if it is a 
> better name).  All the APIs in FutureFileSystem are the same as FileSystem 
> except that the return type is wrapped by Future, e.g.
> {code}
>   //FileSystem
>   public boolean rename(Path src, Path dst) throws IOException;
>   //FutureFileSystem
>   public Future<Boolean> rename(Path src, Path dst) throws IOException;
> {code}
> Note that FutureFileSystem does not extend FileSystem.






[jira] [Commented] (HADOOP-12910) Add new FileSystem API to support asynchronous method calls

2016-05-29 Thread Tsz Wo Nicholas Sze (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15306058#comment-15306058
 ] 

Tsz Wo Nicholas Sze commented on HADOOP-12910:
--

It would be less work if we could copy/paste Deferred.  However, the text below, 
quoted from http://opentsdb.net/faq.html, says that the ASF does not allow it.
{quote}
Why does OpenTSDB use the LGPL?
...
- The LGPL is perfectly compatible with the ASF2 license. Many people are 
misled to believe that there is an incompatibility because the Apache Software 
Foundation (ASF) decided to not allow inclusion of LGPL'ed code in its own 
projects. This choice only applies to the projects managed by the ASF itself 
and doesn't stem from any license incompability.
{quote}


> Add new FileSystem API to support asynchronous method calls
> ---
>
> Key: HADOOP-12910
> URL: https://issues.apache.org/jira/browse/HADOOP-12910
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12910-HDFS-9924.000.patch, 
> HADOOP-12910-HDFS-9924.001.patch, HADOOP-12910-HDFS-9924.002.patch
>
>
> Add a new API, namely FutureFileSystem (or AsynchronousFileSystem, if it is a 
> better name).  All the APIs in FutureFileSystem are the same as FileSystem 
> except that the return type is wrapped by Future, e.g.
> {code}
>   //FileSystem
>   public boolean rename(Path src, Path dst) throws IOException;
>   //FutureFileSystem
>   public Future<Boolean> rename(Path src, Path dst) throws IOException;
> {code}
> Note that FutureFileSystem does not extend FileSystem.






[jira] [Commented] (HADOOP-13214) sync(0); next(); yields wrong key-values on block-compressed files

2016-05-29 Thread Illes S (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15306016#comment-15306016
 ] 

Illes S commented on HADOOP-13214:
--

A workaround until the issue is fixed:
{code:java}
reader.sync(0);
reader.seek(reader.getPosition()); // drops buffered key-values without doing I/O
reader.next(...); // yields key-values from the beginning of the file
{code}

> sync(0); next(); yields wrong key-values on block-compressed files
> --
>
> Key: HADOOP-13214
> URL: https://issues.apache.org/jira/browse/HADOOP-13214
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Illes S
>  Labels: easyfix, newbie
> Attachments: HADOOP-13214.patch
>
>
> Calling {{sync(0); next(...);}} on a block-compressed {{SequenceFile.Reader}} 
> that has already been used may not yield the key-values at the beginning of the 
> file, but those following the previous position. The issue is caused by 
> {{sync(0)}} not releasing previously buffered keys and values. The issue was 
> introduced by HADOOP-6196.






[jira] [Updated] (HADOOP-13214) sync(0); next(); yields wrong key-values on block-compressed files

2016-05-29 Thread Illes S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Illes S updated HADOOP-13214:
-
Attachment: HADOOP-13214.patch

> sync(0); next(); yields wrong key-values on block-compressed files
> --
>
> Key: HADOOP-13214
> URL: https://issues.apache.org/jira/browse/HADOOP-13214
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Reporter: Illes S
>  Labels: easyfix, newbie
> Attachments: HADOOP-13214.patch
>
>
> Calling {{sync(0); next(...);}} on a block-compressed {{SequenceFile.Reader}} 
> that has already been used may not yield the key-values at the beginning of the 
> file, but those following the previous position. The issue is caused by 
> {{sync(0)}} not releasing previously buffered keys and values. The issue was 
> introduced by HADOOP-6196.






[jira] [Created] (HADOOP-13214) sync(0); next(); yields wrong key-values on block-compressed files

2016-05-29 Thread Illes S (JIRA)
Illes S created HADOOP-13214:


 Summary: sync(0); next(); yields wrong key-values on 
block-compressed files
 Key: HADOOP-13214
 URL: https://issues.apache.org/jira/browse/HADOOP-13214
 Project: Hadoop Common
  Issue Type: Bug
  Components: io
Reporter: Illes S


Calling {{sync(0); next(...);}} on a block-compressed {{SequenceFile.Reader}} 
that has already been used may not yield the key-values at the beginning of the 
file, but those following the previous position. The issue is caused by 
{{sync(0)}} not releasing previously buffered keys and values. The issue was 
introduced by HADOOP-6196.
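
A minimal reproduction sketch (assumptions: a scratch path {{/tmp/seq-repro}}, IntWritable/Text records, and enough records that block compression actually buffers; exactly which key shows up after {{sync(0)}} depends on where the block boundaries fall):
{code:java}
// Repro sketch: write a block-compressed SequenceFile, advance the reader,
// then sync(0) and check whether next() restarts from the first key.
Configuration conf = new Configuration();
Path path = new Path("/tmp/seq-repro");   // hypothetical scratch location

try (SequenceFile.Writer writer = SequenceFile.createWriter(conf,
    SequenceFile.Writer.file(path),
    SequenceFile.Writer.keyClass(IntWritable.class),
    SequenceFile.Writer.valueClass(Text.class),
    SequenceFile.Writer.compression(SequenceFile.CompressionType.BLOCK))) {
  for (int i = 0; i < 10000; i++) {
    writer.append(new IntWritable(i), new Text("value-" + i));
  }
}

IntWritable key = new IntWritable();
Text value = new Text();
try (SequenceFile.Reader reader = new SequenceFile.Reader(conf,
    SequenceFile.Reader.file(path))) {
  for (int i = 0; i < 5000; i++) {   // consume part of the file
    reader.next(key, value);
  }
  reader.sync(0);                    // intended: rewind to the first sync point
  reader.next(key, value);
  // Expected 0; with the buffering bug a later key may be reported instead.
  System.out.println("first key after sync(0): " + key);
}
{code}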






[jira] [Commented] (HADOOP-13213) Small Documentation bug with AuthenticatedURL in hadoop-auth

2016-05-29 Thread Tom Ellis (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305920#comment-15305920
 ] 

Tom Ellis commented on HADOOP-13213:


No tests added, as this is simply a documentation change.

> Small Documentation bug with AuthenticatedURL in hadoop-auth
> 
>
> Key: HADOOP-13213
> URL: https://issues.apache.org/jira/browse/HADOOP-13213
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.2
>Reporter: Tom Ellis
>Priority: Trivial
>  Labels: documentation, patch
>
> Small documentation error in hadoop-auth.
> AuthenticatedURL doesn't have a constructor that takes URL and Token; these 
> params are passed to openConnection(url, token) instead.






[jira] [Commented] (HADOOP-13213) Small Documentation bug with AuthenticatedURL in hadoop-auth

2016-05-29 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305918#comment-15305918
 ] 

Hadoop QA commented on HADOOP-13213:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 0m 14s 
{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green} 0m 0s 
{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red} 0m 0s 
{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 6m 
35s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 49s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
13s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 16s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
19s {color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 11s 
{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 0m 
12s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 6m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 6m 42s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green} 0m 
11s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 0m 16s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green} 0m 
9s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green} 0m 
0s {color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 0m 
29s {color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green} 0m 15s 
{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 3m 15s 
{color} | {color:green} hadoop-auth in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green} 0m 
19s {color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 27m 20s {color} 
| {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:2c91fd8 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12806886/HADOOP-13213.001.patch
 |
| JIRA Issue | HADOOP-13213 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3c79dbb923c6 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f5ff05c |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9613/testReport/ |
| modules | C: hadoop-common-project/hadoop-auth U: 
hadoop-common-project/hadoop-auth |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9613/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Small Documentation bug with AuthenticatedURL in hadoop-auth
> 
>
> Key: HADOOP-13213
> URL: https://issues.apache.org/jira/browse/HADOOP-13213
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.2
>Reporter: Tom Ellis
>Priority: Trivial
>  Labels: documentation, patch
>
> Small documentation 

[jira] [Commented] (HADOOP-13213) Small Documentation bug with AuthenticatedURL in hadoop-auth

2016-05-29 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13213?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305915#comment-15305915
 ] 

ASF GitHub Bot commented on HADOOP-13213:
-

GitHub user tellisnz opened a pull request:

https://github.com/apache/hadoop/pull/97

HADOOP-13213 - Fix documentation for hadoop-auth client.



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/tellisnz/hadoop HADOOP-13213

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/97.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #97


commit f8a928a67116e1dbf146acfbffc59be000a70905
Author: Tom Ellis 
Date:   2016-05-29T13:57:31Z

HADOOP-13213 - Fix documentation for hadoop-auth client.




> Small Documentation bug with AuthenticatedURL in hadoop-auth
> 
>
> Key: HADOOP-13213
> URL: https://issues.apache.org/jira/browse/HADOOP-13213
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.2
>Reporter: Tom Ellis
>Priority: Trivial
>  Labels: documentation, patch
>
> Small documentation error in hadoop-auth.
> AuthenticatedURL doesn't have a constructor that takes URL and Token; these 
> params are passed to openConnection(url, token) instead.






[jira] [Updated] (HADOOP-13213) Small Documentation bug with AuthenticatedURL in hadoop-auth

2016-05-29 Thread Tom Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Ellis updated HADOOP-13213:
---
Attachment: (was: HADOOP-13213.001.patch)

> Small Documentation bug with AuthenticatedURL in hadoop-auth
> 
>
> Key: HADOOP-13213
> URL: https://issues.apache.org/jira/browse/HADOOP-13213
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.2
>Reporter: Tom Ellis
>Priority: Trivial
>  Labels: documentation, patch
>
> Small documentation error in hadoop-auth.
> AuthenticatedURL doesn't have a constructor that takes URL and Token; these 
> params are passed to openConnection(url, token) instead.






[jira] [Commented] (HADOOP-12910) Add new FileSystem API to support asynchronous method calls

2016-05-29 Thread Duo Zhang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12910?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15305911#comment-15305911
 ] 

Duo Zhang commented on HADOOP-12910:


I plan to pick up HDFS-916. Most of the work will be done there, but we need to 
reach an agreement on the AsyncFileSystem API here. Future alone is not enough; at 
the very least we need callback support.

I will write a proposal soon. I just came back to China; I had just adapted to the 
Bay Area time zone, and now I need to adapt to the Beijing time zone again...

Thanks.

> Add new FileSystem API to support asynchronous method calls
> ---
>
> Key: HADOOP-12910
> URL: https://issues.apache.org/jira/browse/HADOOP-12910
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Tsz Wo Nicholas Sze
>Assignee: Xiaobing Zhou
> Attachments: HADOOP-12910-HDFS-9924.000.patch, 
> HADOOP-12910-HDFS-9924.001.patch, HADOOP-12910-HDFS-9924.002.patch
>
>
> Add a new API, namely FutureFileSystem (or AsynchronousFileSystem, if it is a 
> better name).  All the APIs in FutureFileSystem are the same as FileSystem 
> except that the return type is wrapped by Future, e.g.
> {code}
>   //FileSystem
>   public boolean rename(Path src, Path dst) throws IOException;
>   //FutureFileSystem
>   public Future<Boolean> rename(Path src, Path dst) throws IOException;
> {code}
> Note that FutureFileSystem does not extend FileSystem.






[jira] [Updated] (HADOOP-13213) Small Documentation bug with AuthenticatedURL in hadoop-auth

2016-05-29 Thread Tom Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Ellis updated HADOOP-13213:
---
Attachment: HADOOP-13213.001.patch

> Small Documentation bug with AuthenticatedURL in hadoop-auth
> 
>
> Key: HADOOP-13213
> URL: https://issues.apache.org/jira/browse/HADOOP-13213
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.2
>Reporter: Tom Ellis
>Priority: Trivial
>  Labels: documentation, patch
> Attachments: HADOOP-13213.001.patch
>
>
> Small documentation error in hadoop-auth.
> AuthenticatedURL doesn't have a constructor that takes URL and Token; these 
> params are passed to openConnection(url, token) instead.






[jira] [Updated] (HADOOP-13213) Small Documentation bug with AuthenticatedURL in hadoop-auth

2016-05-29 Thread Tom Ellis (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom Ellis updated HADOOP-13213:
---
Status: Patch Available  (was: Open)

> Small Documentation bug with AuthenticatedURL in hadoop-auth
> 
>
> Key: HADOOP-13213
> URL: https://issues.apache.org/jira/browse/HADOOP-13213
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Affects Versions: 2.7.2
>Reporter: Tom Ellis
>Priority: Trivial
>  Labels: documentation, patch
> Attachments: HADOOP-13213.001.patch
>
>
> Small documentation error in hadoop-auth.
> AuthenticatedURL doesn't have a constructor that takes URL and Token; these 
> params are passed to openConnection(url, token) instead.






[jira] [Created] (HADOOP-13213) Small Documentation bug with AuthenticatedURL in hadoop-auth

2016-05-29 Thread Tom Ellis (JIRA)
Tom Ellis created HADOOP-13213:
--

 Summary: Small Documentation bug with AuthenticatedURL in 
hadoop-auth
 Key: HADOOP-13213
 URL: https://issues.apache.org/jira/browse/HADOOP-13213
 Project: Hadoop Common
  Issue Type: Bug
  Components: documentation
Affects Versions: 2.7.2
Reporter: Tom Ellis
Priority: Trivial


Small documentation error in hadoop-auth.

AuthenticatedURL doesn't have a constructor that takes URL and Token; these 
params are passed to openConnection(url, token) instead.
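
For reference, a hedged sketch of the usage the corrected docs should describe (the endpoint URL below is made up; exception handling omitted):
{code}
// Usage sketch: the Token is created separately and passed to openConnection(),
// not to the AuthenticatedURL constructor. The URL below is a placeholder.
URL url = new URL("http://namenode.example.com:50070/webhdfs/v1/?op=LISTSTATUS");
AuthenticatedURL.Token token = new AuthenticatedURL.Token();
HttpURLConnection conn = new AuthenticatedURL().openConnection(url, token);
System.out.println("HTTP " + conn.getResponseCode());
{code}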








[jira] [Updated] (HADOOP-12782) Faster LDAP group name resolution with ActiveDirectory

2016-05-29 Thread Kai Zheng (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kai Zheng updated HADOOP-12782:
---
Fix Version/s: (was: 3.0.0-alpha1)
   2.8.0

> Faster LDAP group name resolution with ActiveDirectory
> --
>
> Key: HADOOP-12782
> URL: https://issues.apache.org/jira/browse/HADOOP-12782
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Fix For: 2.8.0
>
> Attachments: HADOOP-12782.001.patch, HADOOP-12782.002.patch, 
> HADOOP-12782.003.patch, HADOOP-12782.004.patch, HADOOP-12782.005.patch, 
> HADOOP-12782.006.patch, HADOOP-12782.007.patch, HADOOP-12782.008.patch, 
> HADOOP-12782.009.patch, HADOOP-12782.branch-2.010.patch
>
>
> LDAP group name resolution works well under typical scenarios. However, we have 
> seen cases where a user is mapped to many groups (in an extreme case, a user is 
> mapped to more than 100 groups). The way it is implemented now makes this case 
> very slow when resolving groups from ActiveDirectory.
> The current LDAP group resolution implementation sends two queries to an 
> ActiveDirectory server. The first query returns a user object, which contains 
> the DN (distinguished name). The second query looks for groups where the user DN 
> is a member. If a user is mapped to many groups, the second query returns all 
> group objects associated with the user, and is thus very slow.
> After studying a user object in ActiveDirectory, I found that a user object 
> actually contains a "memberOf" field, which lists the DNs of all group objects 
> the user belongs to. Assuming that an organization has no recursive group 
> relations (that is, a case where user A is a member of group G1, and group G1 is 
> a member of group G2), we can use this property to avoid the second query, which 
> can potentially run very slowly.
> I propose that we add a configuration option to enable this feature only for 
> users who want to reduce group resolution time and who do not have recursive 
> groups, so that existing behavior will not be broken.
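
To illustrate the single-query idea (a hedged JNDI sketch, not the patch itself; the URL, search base, filter, and group-name extraction are assumptions, and the real implementation would live in {{LdapGroupsMapping}} with its existing configuration):
{code}
// Sketch: one lookup that returns the user entry together with its memberOf
// attribute, so no second "which groups contain this DN" query is needed.
// The provider URL, base DN and filter below are placeholders.
Hashtable<String, String> env = new Hashtable<>();
env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
env.put(Context.PROVIDER_URL, "ldap://ad.example.com:389");
DirContext ctx = new InitialDirContext(env);

SearchControls controls = new SearchControls();
controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
controls.setReturningAttributes(new String[] {"memberOf"});

NamingEnumeration<SearchResult> results = ctx.search(
    "dc=example,dc=com",
    "(&(objectClass=user)(sAMAccountName={0}))",
    new Object[] {"alice"},
    controls);

List<String> groups = new ArrayList<>();
if (results.hasMore()) {
  Attribute memberOf = results.next().getAttributes().get("memberOf");
  if (memberOf != null) {
    for (int i = 0; i < memberOf.size(); i++) {
      // Each value is a group DN such as "CN=hadoop-admins,OU=Groups,DC=example,DC=com";
      // take the first RDN value as the group name.
      String dn = memberOf.get(i).toString();
      groups.add(dn.substring(dn.indexOf('=') + 1, dn.indexOf(',')));
    }
  }
}
{code}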


