[jira] [Created] (HADOOP-11215) DT management ops in DelegationTokenAuthenticatedURL assume the authenticator is KerberosDelegationTokenAuthenticator

2014-10-20 Thread Zhijie Shen (JIRA)
Zhijie Shen created HADOOP-11215:


 Summary: DT management ops in DelegationTokenAuthenticatedURL 
assume the authenticator is KerberosDelegationTokenAuthenticator
 Key: HADOOP-11215
 URL: https://issues.apache.org/jira/browse/HADOOP-11215
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Zhijie Shen


Here's the code in get/renew/cancel DT:
{code}
  return ((KerberosDelegationTokenAuthenticator) getAuthenticator()).
  renewDelegationToken(url, token, token.delegationToken, doAsUser);
{code}

This doesn't seem right, because PseudoDelegationTokenAuthenticator should work 
here as well. At the least, it is inconsistent in the context of delegation token 
authentication, as DelegationTokenAuthenticationHandler doesn't require the 
authentication to be Kerberos. 
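A defensive alternative can be sketched with hypothetical stand-in types (these 
are not Hadoop's actual classes): dispatch on an interface that both 
authenticators implement, instead of downcasting to the Kerberos subclass.

```java
// Hypothetical stand-ins for the authenticator hierarchy; NOT Hadoop's
// actual classes. The point: dispatch on a shared interface instead of
// downcasting to one concrete subclass.
class Sketch {
    interface Authenticator {}
    interface DelegationTokenOps { String renew(String token); }

    static class KerberosAuth implements Authenticator, DelegationTokenOps {
        public String renew(String token) { return "renewed:" + token; }
    }
    static class PseudoAuth implements Authenticator, DelegationTokenOps {
        public String renew(String token) { return "renewed:" + token; }
    }

    // Brittle: throws ClassCastException whenever the configured
    // authenticator is anything other than the Kerberos one.
    static String renewBrittle(Authenticator a, String token) {
        return ((KerberosAuth) a).renew(token);
    }

    // Robust: any authenticator that supports the DT operations works.
    static String renewRobust(Authenticator a, String token) {
        if (!(a instanceof DelegationTokenOps)) {
            throw new IllegalStateException(
                "authenticator does not support delegation tokens");
        }
        return ((DelegationTokenOps) a).renew(token);
    }
}
```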



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (HADOOP-11215) DT management ops in DelegationTokenAuthenticatedURL assume the authenticator is KerberosDelegationTokenAuthenticator

2014-10-20 Thread Zhijie Shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijie Shen updated HADOOP-11215:
-
Labels: se  (was: )

> DT management ops in DelegationTokenAuthenticatedURL assume the authenticator 
> is KerberosDelegationTokenAuthenticator
> -
>
> Key: HADOOP-11215
> URL: https://issues.apache.org/jira/browse/HADOOP-11215
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Zhijie Shen
>  Labels: se
>
> Here's the code in get/renew/cancel DT:
> {code}
>   return ((KerberosDelegationTokenAuthenticator) getAuthenticator()).
>   renewDelegationToken(url, token, token.delegationToken, doAsUser);
> {code}
> This doesn't seem right, because PseudoDelegationTokenAuthenticator should 
> work here as well. At the least, it is inconsistent in the context of delegation 
> token authentication, as DelegationTokenAuthenticationHandler doesn't require 
> the authentication to be Kerberos. 





[jira] [Updated] (HADOOP-11215) DT management ops in DelegationTokenAuthenticatedURL assume the authenticator is KerberosDelegationTokenAuthenticator

2014-10-20 Thread Zhijie Shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijie Shen updated HADOOP-11215:
-
Labels:   (was: se)

> DT management ops in DelegationTokenAuthenticatedURL assume the authenticator 
> is KerberosDelegationTokenAuthenticator
> -
>
> Key: HADOOP-11215
> URL: https://issues.apache.org/jira/browse/HADOOP-11215
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Zhijie Shen
>
> Here's the code in get/renew/cancel DT:
> {code}
>   return ((KerberosDelegationTokenAuthenticator) getAuthenticator()).
>   renewDelegationToken(url, token, token.delegationToken, doAsUser);
> {code}
> This doesn't seem right, because PseudoDelegationTokenAuthenticator should 
> work here as well. At the least, it is inconsistent in the context of delegation 
> token authentication, as DelegationTokenAuthenticationHandler doesn't require 
> the authentication to be Kerberos. 





[jira] [Updated] (HADOOP-11215) DT management ops in DelegationTokenAuthenticatedURL assume the authenticator is KerberosDelegationTokenAuthenticator

2014-10-20 Thread Zhijie Shen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11215?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhijie Shen updated HADOOP-11215:
-
Component/s: security

> DT management ops in DelegationTokenAuthenticatedURL assume the authenticator 
> is KerberosDelegationTokenAuthenticator
> -
>
> Key: HADOOP-11215
> URL: https://issues.apache.org/jira/browse/HADOOP-11215
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Zhijie Shen
>  Labels: se
>
> Here's the code in get/renew/cancel DT:
> {code}
>   return ((KerberosDelegationTokenAuthenticator) getAuthenticator()).
>   renewDelegationToken(url, token, token.delegationToken, doAsUser);
> {code}
> This doesn't seem right, because PseudoDelegationTokenAuthenticator should 
> work here as well. At the least, it is inconsistent in the context of delegation 
> token authentication, as DelegationTokenAuthenticationHandler doesn't require 
> the authentication to be Kerberos. 





[jira] [Commented] (HADOOP-9640) RPC Congestion Control with FairCallQueue

2014-10-20 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-9640?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14177698#comment-14177698
 ] 

Hadoop QA commented on HADOOP-9640:
---

{color:red}-1 overall{color}.  Here are the results of testing the latest 
attachment 
  
http://issues.apache.org/jira/secure/attachment/12641612/FairCallQueue-PerformanceOnCluster.pdf
  against trunk revision e90718f.

{color:red}-1 patch{color}.  The patch command could not apply the patch.

Console output: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/4932//console

This message is automatically generated.

> RPC Congestion Control with FairCallQueue
> -
>
> Key: HADOOP-9640
> URL: https://issues.apache.org/jira/browse/HADOOP-9640
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Xiaobo Peng
>Assignee: Chris Li
>  Labels: hdfs, qos, rpc
> Attachments: FairCallQueue-PerformanceOnCluster.pdf, 
> MinorityMajorityPerformance.pdf, NN-denial-of-service-updated-plan.pdf, 
> faircallqueue.patch, faircallqueue2.patch, faircallqueue3.patch, 
> faircallqueue4.patch, faircallqueue5.patch, faircallqueue6.patch, 
> faircallqueue7_with_runtime_swapping.patch, 
> rpc-congestion-control-draft-plan.pdf
>
>
> For an easy-to-read summary see: 
> http://www.ebaytechblog.com/2014/08/21/quality-of-service-in-hadoop/
> Several production Hadoop cluster incidents occurred where the Namenode was 
> overloaded and failed to respond. 
> We can improve quality of service for users during namenode peak loads by 
> replacing the FIFO call queue with a [Fair Call 
> Queue|https://issues.apache.org/jira/secure/attachment/12616864/NN-denial-of-service-updated-plan.pdf].
>  (this plan supersedes rpc-congestion-control-draft-plan).
> Excerpted from the communication of one incident, “The map task of a user was 
> creating huge number of small files in the user directory. Due to the heavy 
> load on NN, the JT also was unable to communicate with NN...The cluster 
> became responsive only once the job was killed.”
> Excerpted from the communication of another incident, “Namenode was 
> overloaded by GetBlockLocation requests (Correction: should be getFileInfo 
> requests. the job had a bug that called getFileInfo for a nonexistent file in 
> an endless loop). All other requests to namenode were also affected by this 
> and hence all jobs slowed down. Cluster almost came to a grinding 
> halt…Eventually killed jobtracker to kill all jobs that are running.”
> Excerpted from HDFS-945, “We've seen defective applications cause havoc on 
> the NameNode, for e.g. by doing 100k+ 'listStatus' on very large directories 
> (60k files) etc.”
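As a toy illustration of the idea (a sketch only, not Hadoop's actual 
FairCallQueue implementation), calls can be binned into priority sub-queues by 
each user's recent call volume, so a single abusive user can no longer starve 
everyone behind one FIFO queue:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;
import java.util.Queue;

// Toy sketch: heavier users land in lower-priority sub-queues, and the
// reader services higher-priority sub-queues first. The real design uses
// weighted round-robin so low-priority queues still make progress.
class ToyFairCallQueue {
    private final List<Queue<String>> queues = new ArrayList<>();

    ToyFairCallQueue(int levels) {
        for (int i = 0; i < levels; i++) queues.add(new ArrayDeque<>());
    }

    // Schedule a call: users with more recent calls get a lower priority.
    void put(String call, int userRecentCallCount) {
        int level = Math.min(queues.size() - 1, userRecentCallCount / 100);
        queues.get(level).add(call);
    }

    // Drain strictly by priority in this simplified version.
    String poll() {
        for (Queue<String> q : queues) {
            if (!q.isEmpty()) return q.poll();
        }
        return null;
    }
}
```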





[jira] [Updated] (HADOOP-9640) RPC Congestion Control with FairCallQueue

2014-10-20 Thread Chris Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-9640?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Li updated HADOOP-9640:
-
Description: 
For an easy-to-read summary see: 
http://www.ebaytechblog.com/2014/08/21/quality-of-service-in-hadoop/

Several production Hadoop cluster incidents occurred where the Namenode was 
overloaded and failed to respond. 

We can improve quality of service for users during namenode peak loads by 
replacing the FIFO call queue with a [Fair Call 
Queue|https://issues.apache.org/jira/secure/attachment/12616864/NN-denial-of-service-updated-plan.pdf].
 (this plan supersedes rpc-congestion-control-draft-plan).

Excerpted from the communication of one incident, “The map task of a user was 
creating huge number of small files in the user directory. Due to the heavy 
load on NN, the JT also was unable to communicate with NN...The cluster became 
responsive only once the job was killed.”

Excerpted from the communication of another incident, “Namenode was overloaded 
by GetBlockLocation requests (Correction: should be getFileInfo requests. the 
job had a bug that called getFileInfo for a nonexistent file in an endless 
loop). All other requests to namenode were also affected by this and hence all 
jobs slowed down. Cluster almost came to a grinding halt…Eventually killed 
jobtracker to kill all jobs that are running.”

Excerpted from HDFS-945, “We've seen defective applications cause havoc on the 
NameNode, for e.g. by doing 100k+ 'listStatus' on very large directories (60k 
files) etc.”


  was:
Several production Hadoop cluster incidents occurred where the Namenode was 
overloaded and failed to respond. 

We can improve quality of service for users during namenode peak loads by 
replacing the FIFO call queue with a [Fair Call 
Queue|https://issues.apache.org/jira/secure/attachment/12616864/NN-denial-of-service-updated-plan.pdf].
 (this plan supersedes rpc-congestion-control-draft-plan).

Excerpted from the communication of one incident, “The map task of a user was 
creating huge number of small files in the user directory. Due to the heavy 
load on NN, the JT also was unable to communicate with NN...The cluster became 
responsive only once the job was killed.”

Excerpted from the communication of another incident, “Namenode was overloaded 
by GetBlockLocation requests (Correction: should be getFileInfo requests. the 
job had a bug that called getFileInfo for a nonexistent file in an endless 
loop). All other requests to namenode were also affected by this and hence all 
jobs slowed down. Cluster almost came to a grinding halt…Eventually killed 
jobtracker to kill all jobs that are running.”

Excerpted from HDFS-945, “We've seen defective applications cause havoc on the 
NameNode, for e.g. by doing 100k+ 'listStatus' on very large directories (60k 
files) etc.”



> RPC Congestion Control with FairCallQueue
> -
>
> Key: HADOOP-9640
> URL: https://issues.apache.org/jira/browse/HADOOP-9640
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0, 2.2.0
>Reporter: Xiaobo Peng
>Assignee: Chris Li
>  Labels: hdfs, qos, rpc
> Attachments: FairCallQueue-PerformanceOnCluster.pdf, 
> MinorityMajorityPerformance.pdf, NN-denial-of-service-updated-plan.pdf, 
> faircallqueue.patch, faircallqueue2.patch, faircallqueue3.patch, 
> faircallqueue4.patch, faircallqueue5.patch, faircallqueue6.patch, 
> faircallqueue7_with_runtime_swapping.patch, 
> rpc-congestion-control-draft-plan.pdf
>
>
> For an easy-to-read summary see: 
> http://www.ebaytechblog.com/2014/08/21/quality-of-service-in-hadoop/
> Several production Hadoop cluster incidents occurred where the Namenode was 
> overloaded and failed to respond. 
> We can improve quality of service for users during namenode peak loads by 
> replacing the FIFO call queue with a [Fair Call 
> Queue|https://issues.apache.org/jira/secure/attachment/12616864/NN-denial-of-service-updated-plan.pdf].
>  (this plan supersedes rpc-congestion-control-draft-plan).
> Excerpted from the communication of one incident, “The map task of a user was 
> creating huge number of small files in the user directory. Due to the heavy 
> load on NN, the JT also was unable to communicate with NN...The cluster 
> became responsive only once the job was killed.”
> Excerpted from the communication of another incident, “Namenode was 
> overloaded by GetBlockLocation requests (Correction: should be getFileInfo 
> requests. the job had a bug that called getFileInfo for a nonexistent file in 
> an endless loop). All other requests to namenode were also affected by this 
> and hence all jobs slowed down. Cluster almost came to a grinding 
> halt…Eventually killed jobtracker to kill all jobs that are running.”
> Excerpted from HDFS-945, “We've seen defective applications cause havoc on 
> the NameNode, for e.g. by doing 100k+ 'listStatus' on very large directories 
> (60k files) etc.”

[jira] [Commented] (HADOOP-10904) Provide Alt to Clear Text Passwords through Cred Provider API

2014-10-20 Thread Larry McCay (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14177685#comment-14177685
 ] 

Larry McCay commented on HADOOP-10904:
--

Hi [~jnp] - You are right - I've closed it. Thanks!

> Provide Alt to Clear Text Passwords through Cred Provider API
> -
>
> Key: HADOOP-10904
> URL: https://issues.apache.org/jira/browse/HADOOP-10904
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
>
> This is an umbrella jira to track various child tasks to uptake the 
> credential provider API to enable deployments without storing 
> passwords/credentials in clear text.





[jira] [Resolved] (HADOOP-10904) Provide Alt to Clear Text Passwords through Cred Provider API

2014-10-20 Thread Larry McCay (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10904?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Larry McCay resolved HADOOP-10904.
--
Resolution: Fixed

> Provide Alt to Clear Text Passwords through Cred Provider API
> -
>
> Key: HADOOP-10904
> URL: https://issues.apache.org/jira/browse/HADOOP-10904
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
>
> This is an umbrella jira to track various child tasks to uptake the 
> credential provider API to enable deployments without storing 
> passwords/credentials in clear text.





[jira] [Commented] (HADOOP-10904) Provide Alt to Clear Text Passwords through Cred Provider API

2014-10-20 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10904?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14177683#comment-14177683
 ] 

Jitendra Nath Pandey commented on HADOOP-10904:
---

[~lmccay] Can this be marked as closed now, given that all sub-tasks are 
resolved?

> Provide Alt to Clear Text Passwords through Cred Provider API
> -
>
> Key: HADOOP-10904
> URL: https://issues.apache.org/jira/browse/HADOOP-10904
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Reporter: Larry McCay
>Assignee: Larry McCay
>
> This is an umbrella jira to track various child tasks to uptake the 
> credential provider API to enable deployments without storing 
> passwords/credentials in clear text.





[jira] [Updated] (HADOOP-11214) Add web UI for NFS gateway

2014-10-20 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HADOOP-11214:

Issue Type: Improvement  (was: Bug)

> Add web UI for NFS gateway
> --
>
> Key: HADOOP-11214
> URL: https://issues.apache.org/jira/browse/HADOOP-11214
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: nfs
>Affects Versions: 2.2.0
>Reporter: Brandon Li
>
> This JIRA is to track the effort to add web UI for NFS gateway to show some 
> metrics and configuration related information.





[jira] [Updated] (HADOOP-11214) Add web UI for NFS gateway

2014-10-20 Thread Brandon Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11214?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Brandon Li updated HADOOP-11214:

Affects Version/s: 2.2.0

> Add web UI for NFS gateway
> --
>
> Key: HADOOP-11214
> URL: https://issues.apache.org/jira/browse/HADOOP-11214
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: nfs
>Affects Versions: 2.2.0
>Reporter: Brandon Li
>
> This JIRA is to track the effort to add web UI for NFS gateway to show some 
> metrics and configuration related information.





[jira] [Created] (HADOOP-11214) Add web UI for NFS gateway

2014-10-20 Thread Brandon Li (JIRA)
Brandon Li created HADOOP-11214:
---

 Summary: Add web UI for NFS gateway
 Key: HADOOP-11214
 URL: https://issues.apache.org/jira/browse/HADOOP-11214
 Project: Hadoop Common
  Issue Type: Bug
  Components: nfs
Reporter: Brandon Li


This JIRA is to track the effort to add web UI for NFS gateway to show some 
metrics and configuration related information.





[jira] [Updated] (HADOOP-11213) Fix typos in html pages: SecureMode and EncryptedShuffle

2014-10-20 Thread Wei Yan (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11213?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei Yan updated HADOOP-11213:
-
Attachment: HADOOP-11213-1.patch

> Fix typos in html pages: SecureMode and EncryptedShuffle
> 
>
> Key: HADOOP-11213
> URL: https://issues.apache.org/jira/browse/HADOOP-11213
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Wei Yan
>Assignee: Wei Yan
>Priority: Minor
> Attachments: HADOOP-11213-1.patch
>
>
> In SecureMode.html, 
> {noformat}
> banned.users  |   hfds,yarn,mapred,bin
> {noformat}
> Here hfds should be hdfs.
> In EncryptedShuffle.html,
> {noformat}
> hadoop.ssl.server.conf|  ss-server.xml
> hadoop.ssl.client.conf|  ss-client.xml
> {noformat}
> Here the two xml files should be ssl-*.





[jira] [Created] (HADOOP-11213) Fix typos in html pages: SecureMode and EncryptedShuffle

2014-10-20 Thread Wei Yan (JIRA)
Wei Yan created HADOOP-11213:


 Summary: Fix typos in html pages: SecureMode and EncryptedShuffle
 Key: HADOOP-11213
 URL: https://issues.apache.org/jira/browse/HADOOP-11213
 Project: Hadoop Common
  Issue Type: Bug
Reporter: Wei Yan
Assignee: Wei Yan
Priority: Minor


In SecureMode.html, 
{noformat}
banned.users|   hfds,yarn,mapred,bin
{noformat}
Here hfds should be hdfs.

In EncryptedShuffle.html,
{noformat}
hadoop.ssl.server.conf  |  ss-server.xml
hadoop.ssl.client.conf  |  ss-client.xml
{noformat}
Here the two xml files should be ssl-*.





[jira] [Commented] (HADOOP-11211) mapreduce.job.classloader.system.classes property behaves differently when the exclusion and inclusion order is different

2014-10-20 Thread Gera Shegalov (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14177468#comment-14177468
 ] 

Gera Shegalov commented on HADOOP-11211:


In grep, and seemingly with Ant's fileset as well, exclusion has higher 
precedence than inclusion.

{quote}
 --include
 If specified, only files matching the given filename pattern are 
searched.  Note that --exclude patterns take priority over
 --include patterns.  Patterns are matched to the full path 
specified, not only to the filename component.
{quote}

I suggest we follow this for consistency. Mathematically it best matches the 
intuition:
SET = INCL_SET \ EXCL_SET
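That rule can be sketched as follows (a hypothetical helper, not Hadoop code): 
a class name matches only if some include prefix matches and no exclude prefix 
does, independent of the order the patterns are listed.

```java
import java.util.List;

// Hypothetical helper (not Hadoop code) implementing the proposed rule:
// SET = INCL_SET \ EXCL_SET, i.e. exclusions win regardless of listing order.
class PrecedenceMatcher {
    static boolean matches(String name, List<String> includes,
                           List<String> excludes) {
        boolean included = includes.stream().anyMatch(name::startsWith);
        boolean excluded = excludes.stream().anyMatch(name::startsWith);
        return included && !excluded;
    }
}
```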

> mapreduce.job.classloader.system.classes property behaves differently when 
> the exclusion and inclusion order is different
> -
>
> Key: HADOOP-11211
> URL: https://issues.apache.org/jira/browse/HADOOP-11211
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yitong Zhou
>Assignee: Yitong Zhou
>
> If we want to include package foo.bar.* but exclude all sub packages named 
> foo.bar.tar.* in system classes, configuring 
> "mapreduce.job.classloader.system.classes=foo.bar.,-foo.bar.tar." won't work. 
> foo.bar.tar will still be pulled in. But if we change the order:
> "mapreduce.job.classloader.system.classes=-foo.bar.tar.,foo.bar.", then it 
> will work.
> This bug is due to the implementation of ApplicationClassLoader#isSystemClass 
> in hadoop-common, where we simply return the matching result immediately when 
> the class name hits the first match (either positive or negative).





[jira] [Commented] (HADOOP-11211) mapreduce.job.classloader.system.classes property behaves differently when the exclusion and inclusion order is different

2014-10-20 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14177453#comment-14177453
 ] 

Sangjin Lee commented on HADOOP-11211:
--

Thanks for reporting this [~timyitong]!

Actually this needs a little discussion. I agree that the current behavior is 
not as well documented as it should be. As you stated, it is "first-match-wins" 
behavior.

What you propose also seems reasonable, but needs more clarification. Your 
proposal is to disregard the order, consider all matches, and take their 
logical AND. But there can be cases where that rule is counter-intuitive. In 
your example, if you have "foo.bar.,-foo.bar." it would mean that the class is 
not considered a system class. Does that mean that an exclusion match is 
stronger than an inclusion match?

Here is an even more interesting case: how about "foo.bar.tar.,-foo.bar."? One 
reasonable interpretation of this may be that "exclude everything that belongs 
to foo.bar. but still include foo.bar.tar.". However, using the AND rule, it 
would mean that all classes under "foo.bar" would be excluded and the 
"foo.bar.tar." rule would be effectively ignored.

Also, how about an exact match (e.g. "foo.bar.MyClass")? Would it need to be 
considered over package matches?
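For reference, the first-match-wins behavior under discussion can be sketched 
like this (simplified, not the actual ApplicationClassLoader code), which makes 
the order-sensitivity in the report easy to reproduce:

```java
import java.util.List;

// Simplified model of the current behavior (not the actual
// ApplicationClassLoader code): the first prefix pattern that matches
// decides, so listing order changes the answer.
class FirstMatchWins {
    static boolean isSystemClass(String name, List<String> patterns) {
        for (String p : patterns) {
            boolean negated = p.startsWith("-");
            String prefix = negated ? p.substring(1) : p;
            if (name.startsWith(prefix)) {
                return !negated;  // verdict of the first matching pattern
            }
        }
        return false;  // no pattern matched
    }
}
```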

> mapreduce.job.classloader.system.classes property behaves differently when 
> the exclusion and inclusion order is different
> -
>
> Key: HADOOP-11211
> URL: https://issues.apache.org/jira/browse/HADOOP-11211
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Yitong Zhou
>Assignee: Yitong Zhou
>
> If we want to include package foo.bar.* but exclude all sub packages named 
> foo.bar.tar.* in system classes, configuring 
> "mapreduce.job.classloader.system.classes=foo.bar.,-foo.bar.tar." won't work. 
> foo.bar.tar will still be pulled in. But if we change the order:
> "mapreduce.job.classloader.system.classes=-foo.bar.tar.,foo.bar.", then it 
> will work.
> This bug is due to the implementation of ApplicationClassLoader#isSystemClass 
> in hadoop-common, where we simply return the matching result immediately when 
> the class name hits the first match (either positive or negative).





[jira] [Commented] (HADOOP-11102) Downgrade curator version to 2.4.1

2014-10-20 Thread Robert Kanter (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14177430#comment-14177430
 ] 

Robert Kanter commented on HADOOP-11102:


+1 (non-binding)

The {{curator-x-discovery-server}} is for the Service Discovery Server, which 
creates a RESTful server interface for non-Java programs to use the Service 
Discovery recipe ({{curator-x-discovery}}), and I don't think Hadoop is 
currently using either of those, so it should be fine; at least for now.

> Downgrade curator version to 2.4.1
> --
>
> Key: HADOOP-11102
> URL: https://issues.apache.org/jira/browse/HADOOP-11102
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.6.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-11102-001.patch, HADOOP-11102-002.patch
>
>
> HADOOP-10868 includes Apache Curator 2.6.0.
> This depends on Guava 16.0.1.
> It's not being picked up, as Hadoop is forcing in 11.0.2, but this means there 
> is now a risk that Curator depends on methods and classes that are not in 
> Hadoop's Guava version





[jira] [Commented] (HADOOP-11212) NetUtils.wrapException to handle SocketException explicitly

2014-10-20 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14177364#comment-14177364
 ] 

Steve Loughran commented on HADOOP-11212:
-

Stack when trying to talk to a local-subnet-VM that's down. 
{code}
Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 22.766 sec <<< 
FAILURE! - in org.apache.slider.funtest.lifecycle.AgentClusterLifecycleIT
org.apache.slider.funtest.lifecycle.AgentClusterLifecycleIT  Time elapsed: 
22.765 sec  <<< ERROR!
java.io.IOException: Failed on local exception: java.net.SocketException: Host 
is down; Host Details : local host is: "stevel-763.local/240.0.0.1"; 
destination host is: "nn.example.com":8020; 
at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
at org.apache.hadoop.ipc.Client.call(Client.java:1472)
at org.apache.hadoop.ipc.Client.call(Client.java:1399)
at 
org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
at com.sun.proxy.$Proxy22.getFileInfo(Unknown Source)
at 
org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:752)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
at 
org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
at com.sun.proxy.$Proxy23.getFileInfo(Unknown Source)
at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1957)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1118)
at 
org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114)
at 
org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at 
org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1114)
at org.apache.hadoop.fs.FileSystem.exists(FileSystem.java:1400)
at 
org.apache.slider.funtest.framework.FileUploader.mkHomeDir(FileUploader.groovy:123)
at 
org.apache.slider.funtest.framework.FileUploader$mkHomeDir.call(Unknown Source)
at 
org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:45)
at 
org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:108)
at 
org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:112)
at 
org.apache.slider.funtest.framework.AgentCommandTestBase.setupAgent(AgentCommandTestBase.groovy:105)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
{code}

> NetUtils.wrapException to handle SocketException explicitly
> ---
>
> Key: HADOOP-11212
> URL: https://issues.apache.org/jira/browse/HADOOP-11212
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Affects Versions: 3.0.0
>Reporter: Steve Loughran
>
> the {{NetUtils.wrapException()}} method doesn't handle {{SocketException}} 
> specially, so it is wrapped in a generic IOException; this loses information 
> and stops any extra diagnostics/wiki links from being added





[jira] [Created] (HADOOP-11212) NetUtils.wrapException to handle SocketException explicitly

2014-10-20 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-11212:
---

 Summary: NetUtils.wrapException to handle SocketException 
explicitly
 Key: HADOOP-11212
 URL: https://issues.apache.org/jira/browse/HADOOP-11212
 Project: Hadoop Common
  Issue Type: Improvement
  Components: util
Affects Versions: 3.0.0
Reporter: Steve Loughran


the {{NetUtils.wrapException()}} method doesn't handle {{SocketException}} 
specially, so it is wrapped in a generic IOException; this loses information and 
stops any extra diagnostics/wiki links from being added
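One way the proposed handling might look (a hypothetical sketch, not an actual 
NetUtils patch): rewrap a SocketException as a SocketException so the concrete 
type and the added host details both survive.

```java
import java.io.IOException;
import java.net.SocketException;

// Hypothetical sketch (not the actual NetUtils change): keep the concrete
// SocketException type while enriching the message with host details,
// instead of burying it inside a generic IOException.
class WrapSketch {
    static IOException wrap(String destHost, int destPort, IOException e) {
        if (e instanceof SocketException) {
            SocketException wrapped = new SocketException(
                e.getMessage() + "; destination host is: \""
                + destHost + "\":" + destPort);
            wrapped.initCause(e);  // preserve the original for debugging
            return wrapped;
        }
        // Fallback: the existing generic wrapping.
        return new IOException("Failed on local exception: " + e, e);
    }
}
```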





[jira] [Updated] (HADOOP-11102) Downgrade curator version to 2.4.1

2014-10-20 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11102:

Summary: Downgrade curator version to 2.4.1  (was: Hadoop now has transient 
dependency on Guava 16)

> Downgrade curator version to 2.4.1
> --
>
> Key: HADOOP-11102
> URL: https://issues.apache.org/jira/browse/HADOOP-11102
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.6.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-11102-001.patch, HADOOP-11102-002.patch
>
>
> HADOOP-10868 includes Apache Curator 2.6.0.
> This depends on Guava 16.0.1.
> It's not being picked up, as Hadoop is forcing in 11.0.2, but this means there 
> is now a risk that Curator depends on methods and classes that are not in 
> Hadoop's Guava version





[jira] [Updated] (HADOOP-11102) Hadoop now has transient dependency on Guava 16

2014-10-20 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11102:

Status: Patch Available  (was: Open)

> Hadoop now has transient dependency on Guava 16
> ---
>
> Key: HADOOP-11102
> URL: https://issues.apache.org/jira/browse/HADOOP-11102
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.6.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-11102-001.patch, HADOOP-11102-002.patch
>
>
> HADOOP-10868 includes Apache Curator 2.6.0.
> This depends on Guava 16.0.1.
> It's not being picked up, as Hadoop is forcing in 11.0.2, but this means there 
> is now a risk that Curator depends on methods and classes that are not in 
> Hadoop's Guava version





[jira] [Updated] (HADOOP-11102) Hadoop now has transient dependency on Guava 16

2014-10-20 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11102?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11102:

Status: Open  (was: Patch Available)

> Hadoop now has transient dependency on Guava 16
> ---
>
> Key: HADOOP-11102
> URL: https://issues.apache.org/jira/browse/HADOOP-11102
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 2.6.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-11102-001.patch, HADOOP-11102-002.patch
>
>
> HADOOP-10868 includes Apache Curator 2.6.0.
> This depends on Guava 16.0.1.
> It's not being picked up, as Hadoop is forcing in 11.0.2, but this means there 
> is now a risk that Curator depends on methods and classes that are not in 
> Hadoop's Guava version





[jira] [Commented] (HADOOP-11102) Hadoop now has transient dependency on Guava 16

2014-10-20 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11102?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14177343#comment-14177343
 ] 

Steve Loughran commented on HADOOP-11102:
-

I did two test builds of curator with guava set to 11.0.2. Curator 2.4.1 
builds; Curator 2.6.0 fails in {{curator-x-discovery-server}}:
{code}
[ERROR] 
/Users/stevel/Projects/Hortonworks/Projects/curator/curator-x-discovery-server/src/main/java/org/apache/curator/x/discovery/server/contexts/GenericDiscoveryContext.java:[21,33]
 package com.google.common.reflect does not exist
[ERROR] 
/Users/stevel/Projects/Hortonworks/Projects/curator/curator-x-discovery-server/src/main/java/org/apache/curator/x/discovery/server/contexts/GenericDiscoveryContext.java:[40,19]
 cannot find symbol
[ERROR] symbol:   class TypeToken
[ERROR] location: class 
org.apache.curator.x.discovery.server.contexts.GenericDiscoveryContext
[ERROR] 
/Users/stevel/Projects/Hortonworks/Projects/curator/curator-x-discovery-server/src/main/java/org/apache/curator/x/discovery/server/contexts/GenericDiscoveryContext.java:[47,135]
 cannot find symbol
[ERROR] symbol:   class TypeToken
[ERROR] location: class 
org.apache.curator.x.discovery.server.contexts.GenericDiscoveryContext
[ERROR] 
/Users/stevel/Projects/Hortonworks/Projects/curator/curator-x-discovery-server/src/main/java/org/apache/curator/x/discovery/server/contexts/MapDiscoveryContext.java:[21,33]
 package com.google.common.reflect does not exist
[ERROR] 
/Users/stevel/Projects/Hortonworks/Projects/curator/curator-x-discovery-server/src/main/java/org/apache/curator/x/discovery/server/contexts/GenericDiscoveryContext.java:[44,69]
 cannot find symbol
[ERROR] symbol:   variable TypeToken
[ERROR] location: class 
org.apache.curator.x.discovery.server.contexts.GenericDiscoveryContext
[ERROR] 
/Users/stevel/Projects/Hortonworks/Projects/curator/curator-x-discovery-server/src/main/java/org/apache/curator/x/discovery/server/contexts/GenericDiscoveryContext.java:[42,12]
 recursive constructor invocation
[ERROR] 
/Users/stevel/Projects/Hortonworks/Projects/curator/curator-x-discovery-server/src/main/java/org/apache/curator/x/discovery/server/contexts/MapDiscoveryContext.java:[38,74]
 cannot find symbol
[ERROR] symbol:   class TypeToken
[ERROR] location: class 
org.apache.curator.x.discovery.server.contexts.MapDiscoveryContext
[ERROR] -> [Help 1]
{code}

This implies that part of Curator is not compatible with Guava 11, though it is 
not a core module.
As 2.4.1 builds without problems, I think the patch should still go in.
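
The symbol the build trips over is Guava's {{TypeToken}} in {{com.google.common.reflect}}, a package that (as far as I know) only appeared in Guava 12, so under the 11.0.2 that Hadoop forces in it simply does not exist. A quick classpath probe makes the failure mode easy to check at runtime; {{GuavaProbe}} is a hypothetical helper, not part of Hadoop or Curator:

```java
// Classpath probe: reports whether the Guava class that
// curator-x-discovery-server compiles against is visible. Under Guava
// 11.0.2 the com.google.common.reflect package is absent, which is why
// javac reports "package com.google.common.reflect does not exist".
public class GuavaProbe {

    /** Returns true if the named class can be loaded from the classpath. */
    static boolean isPresent(String className) {
        try {
            // initialize=false: only check visibility, run no static init
            Class.forName(className, false, GuavaProbe.class.getClassLoader());
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // The symbol the Curator 2.6.0 build fails on under Guava 11.0.2:
        String typeToken = "com.google.common.reflect.TypeToken";
        System.out.println(typeToken + " present: " + isPresent(typeToken));
    }
}
```

Running this with Hadoop's forced Guava 11.0.2 on the classpath should report the class as absent, matching the compile errors above.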






[jira] [Commented] (HADOOP-11194) Ignore .keep files

2014-10-20 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14176990#comment-14176990
 ] 

Hudson commented on HADOOP-11194:
-

FAILURE: Integrated in Hadoop-trunk-Commit #6291 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/6291/])
HADOOP-11194. Ignore .keep files (kasha) (kasha: rev 
d5084b9fa30771bffb03f2bad69304141c6e4303)
* .gitignore
* hadoop-common-project/hadoop-common/CHANGES.txt


> Ignore .keep files
> --
>
> Key: HADOOP-11194
> URL: https://issues.apache.org/jira/browse/HADOOP-11194
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.5.1
>Reporter: Karthik Kambatla
>Assignee: Karthik Kambatla
> Fix For: 2.6.0
>
> Attachments: hadoop-11194.patch, hadoop-11194.patch
>
>
> Given we don't need to keep empty directories, I suppose we can get rid of 
> the .keep files. 





[jira] [Updated] (HADOOP-11194) Ignore .keep files

2014-10-20 Thread Karthik Kambatla (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11194?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Karthik Kambatla updated HADOOP-11194:
--
   Resolution: Fixed
Fix Version/s: 2.6.0
 Hadoop Flags: Reviewed
   Status: Resolved  (was: Patch Available)

Thanks for the review, Steve. Just committed this to trunk, branch-2 and 
branch-2.6.






[jira] [Commented] (HADOOP-11194) Ignore .keep files

2014-10-20 Thread Karthik Kambatla (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11194?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14176980#comment-14176980
 ] 

Karthik Kambatla commented on HADOOP-11194:
---

Checking this in... 



