[jira] [Commented] (HADOOP-14728) Configuring AuthenticationFilterInitializer throws IllegalArgumentException: Null user

2017-08-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16112224#comment-16112224
 ] 

Hadoop QA commented on HADOOP-14728:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
1s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
36s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m  1s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 57m 47s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestRaceWhenRelogin |
|   | hadoop.security.TestKDiag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14728 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880165/HADOOP-14728.01.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux fc457fde2ca9 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 79df1e7 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12937/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12937/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12937/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Configuring AuthenticationFilterInitializer throws IllegalArgumentException: 
> Null user
> --
>
> Key: HADOOP-14728
> URL: https://issues.apache.org/jira/browse/HADOOP-14728

[jira] [Updated] (HADOOP-14439) regression: secret stripping from S3x URIs breaks some downstream code

2017-08-02 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HADOOP-14439:
---
Attachment: HADOOP-14439-02.patch

Attached the patch with findbugs, checkstyle, and javadoc fixes.

> regression: secret stripping from S3x URIs breaks some downstream code
> --
>
> Key: HADOOP-14439
> URL: https://issues.apache.org/jira/browse/HADOOP-14439
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: Spark 2.1
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-14439-01.patch, HADOOP-14439-02.patch
>
>
> Surfaced in SPARK-20799
> Spark lists the contents of a path with getFileStatus(path), then uses the 
> returned path values to look up the contents.
> Apparently the lookup fails to find files if you have a secret in the 
> key, {{s3a://key:secret@bucket/path}}. 
> Presumably this is because the stripped values don't match the originals.
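
A minimal sketch of the mismatch being described (hypothetical illustration, 
not code from either project):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SecretStrippingMismatch {
  public static void main(String[] args) throws Exception {
    // A path carrying inline credentials, as in the report.
    Path requested = new Path("s3a://key:secret@bucket/path");
    FileSystem fs = requested.getFileSystem(new Configuration());
    FileStatus status = fs.getFileStatus(requested);
    // If the filesystem strips the secret, status.getPath() may be
    // s3a://bucket/path, so a lookup keyed on the original URI misses it:
    System.out.println(status.getPath().equals(requested));
  }
}
{code}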






[jira] [Commented] (HADOOP-14728) Configuring AuthenticationFilterInitializer throws IllegalArgumentException: Null user

2017-08-02 Thread Rohith Sharma K S (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14728?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16112163#comment-16112163
 ] 

Rohith Sharma K S commented on HADOOP-14728:


HttpServletRequest#getRemoteUser can return null. We need a null check before 
creating the proxy UGI; otherwise UGI creation will throw an exception. 
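
A minimal sketch of the kind of guard suggested here (an assumed shape for 
illustration; the actual change is in the attached patch):
{code:java}
import javax.servlet.http.HttpServletRequest;
import org.apache.hadoop.security.UserGroupInformation;

public final class RemoteUserGuard {
  private RemoteUserGuard() {}

  // Guard against unauthenticated requests before building a proxy UGI.
  public static UserGroupInformation remoteOrProxyUgi(
      HttpServletRequest request, String doAsUser) {
    String remoteUser = request.getRemoteUser();
    if (remoteUser == null) {
      // No authenticated caller, so there is no real user to proxy from.
      return null;
    }
    UserGroupInformation realUgi =
        UserGroupInformation.createRemoteUser(remoteUser);
    if (doAsUser == null || doAsUser.equals(remoteUser)) {
      return realUgi;
    }
    return UserGroupInformation.createProxyUser(doAsUser, realUgi);
  }
}
{code}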

> Configuring AuthenticationFilterInitializer throws IllegalArgumentException: 
> Null user
> --
>
> Key: HADOOP-14728
> URL: https://issues.apache.org/jira/browse/HADOOP-14728
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Krishna Pandey
> Attachments: HADOOP-14728.01.patch
>
>
> Configured AuthenticationFilterInitializer and started a cluster. When 
> accessing the YARN UI using doAs, the following error is encountered. 
> URL : http://localhost:25005/cluster??doAs=guest
> {noformat}
> org.apache.hadoop.security.authentication.util.SignerException: Invalid 
> signature
> 2017-08-01 15:34:22,163 ERROR org.apache.hadoop.yarn.webapp.Dispatcher: error 
> handling URI: /cluster
> java.lang.IllegalArgumentException: Null user
>   at 
> org.apache.hadoop.security.UserGroupInformation.createRemoteUser(UserGroupInformation.java:1499)
>   at 
> org.apache.hadoop.security.UserGroupInformation.createRemoteUser(UserGroupInformation.java:1486)
>   at 
> org.apache.hadoop.security.AuthenticationWithProxyUserFilter$1.getRemoteOrProxyUser(AuthenticationWithProxyUserFilter.java:82)
>   at 
> org.apache.hadoop.security.AuthenticationWithProxyUserFilter$1.getRemoteUser(AuthenticationWithProxyUserFilter.java:92)
>   at 
> javax.servlet.http.HttpServletRequestWrapper.getRemoteUser(HttpServletRequestWrapper.java:207)
>   at 
> javax.servlet.http.HttpServletRequestWrapper.getRemoteUser(HttpServletRequestWrapper.java:207)
>   at 
> org.apache.hadoop.yarn.webapp.view.HeaderBlock.render(HeaderBlock.java:28)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.render(HtmlBlock.java:69)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlBlock.renderPartial(HtmlBlock.java:79)
>   at org.apache.hadoop.yarn.webapp.View.render(View.java:235)
>   at 
> org.apache.hadoop.yarn.webapp.view.HtmlPage$Page.subView(HtmlPage.java:49)
>   at 
> org.apache.hadoop.yarn.webapp.hamlet2.HamletImpl$EImp._v(HamletImpl.java:117)
>   at org.apache.hadoop.yarn.webapp.hamlet2.Hamlet$TD.__(Hamlet.java:848)
>   at 
> org.apache.hadoop.yarn.webapp.view.TwoColumnLayout.render(TwoColumnLayout.java:61)
>   at org.apache.hadoop.yarn.webapp.view.HtmlPage.render(HtmlPage.java:82)
>   at org.apache.hadoop.yarn.webapp.Dispatcher.render(Dispatcher.java:206)
>   at org.apache.hadoop.yarn.webapp.Dispatcher.service(Dispatcher.java:165)
>   at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
>   at 
> com.google.inject.servlet.ServletDefinition.doServiceImpl(ServletDefinition.java:287)
>   at 
> com.google.inject.servlet.ServletDefinition.doService(ServletDefinition.java:277)
> {noformat}






[jira] [Commented] (HADOOP-14723) reinstate URI parameter in AWSCredentialProvider constructors

2017-08-02 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14723?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16112151#comment-16112151
 ] 

Mingliang Liu commented on HADOOP-14723:


I'm +1 on the proposal; at the time I was working on [HADOOP-14135], the two 
use cases above were not clear to me. Thanks Steve for taking care of this.

> reinstate URI parameter in AWSCredentialProvider constructors
> -
>
> Key: HADOOP-14723
> URL: https://issues.apache.org/jira/browse/HADOOP-14723
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> I need to revert HADOOP-14135 "Remove URI parameter in AWSCredentialProvider 
> constructors", as knowing the bucket in use is needed for
> * HADOOP-14507: per-bucket secrets in JCEKS files
> * HADOOP-14556: delegation tokens in S3A
> These providers need the URI to decide which keys to scan for and which 
> token to look up.
> I know we pulled it out to allow us to talk to DDB without needing a FS URI, 
> but for these specific cases it is needed; we just won't be able to use these 
> specific auth providers to talk to AWS except via an S3 bucket. 
> Rather than just revert the patch, I propose waiting for S3Guard phase I to 
> be merged into trunk, then doing it, with the JCEKS auth mechanism being set 
> up to skip looking for a per-bucket secret and key if it doesn't know its 
> bucket name.
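
A sketch of the constructor shape being reinstated (the provider name and 
body are hypothetical; only the (URI, Configuration) signature comes from the 
issue):
{code:java}
import java.net.URI;
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.AWSCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import org.apache.hadoop.conf.Configuration;

public class ExampleBucketAwareProvider implements AWSCredentialsProvider {
  private final URI fsUri;
  private final Configuration conf;

  // The URI names the bucket, letting the provider choose per-bucket
  // secrets (HADOOP-14507) or delegation tokens (HADOOP-14556).
  public ExampleBucketAwareProvider(URI fsUri, Configuration conf) {
    this.fsUri = fsUri;
    this.conf = conf;
  }

  @Override
  public AWSCredentials getCredentials() {
    String bucket = fsUri.getHost();
    // Hypothetical per-bucket keys; a real lookup would consult JCEKS files.
    String key = conf.getTrimmed("fs.s3a.bucket." + bucket + ".access.key", "");
    String secret = conf.getTrimmed("fs.s3a.bucket." + bucket + ".secret.key", "");
    return new BasicAWSCredentials(key, secret);
  }

  @Override
  public void refresh() {
    // No cached state to refresh in this sketch.
  }
}
{code}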






[jira] [Updated] (HADOOP-14728) Configuring AuthenticationFilterInitializer throws IllegalArgumentException: Null user

2017-08-02 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated HADOOP-14728:
---
Status: Patch Available  (was: Open)




[jira] [Updated] (HADOOP-14728) Configuring AuthenticationFilterInitializer throws IllegalArgumentException: Null user

2017-08-02 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S updated HADOOP-14728:
---
Attachment: HADOOP-14728.01.patch




[jira] [Moved] (HADOOP-14728) Configuring AuthenticationFilterInitializer throws IllegalArgumentException: Null user

2017-08-02 Thread Rohith Sharma K S (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14728?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rohith Sharma K S moved YARN-6928 to HADOOP-14728:
--

Key: HADOOP-14728  (was: YARN-6928)
Project: Hadoop Common  (was: Hadoop YARN)




[jira] [Updated] (HADOOP-14627) Support MSI and DeviceCode token provider

2017-08-02 Thread John Zhuge (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14627?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

John Zhuge updated HADOOP-14627:

Summary: Support MSI and DeviceCode token provider  (was: Enable new 
features of ADLS SDK (MSI, Device Code auth))

> Support MSI and DeviceCode token provider
> -
>
> Key: HADOOP-14627
> URL: https://issues.apache.org/jira/browse/HADOOP-14627
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl
> Environment: MSI Change applies only to Hadoop running in an Azure VM
>Reporter: Atul Sikaria
>Assignee: Atul Sikaria
> Attachments: HADOOP-14627-001.patch
>
>
> This change is to upgrade the Hadoop ADLS connector to enable new auth 
> features exposed by the ADLS Java SDK.
> Specifically:
> MSI Tokens: MSI (Managed Service Identity) is a way to provide an identity to 
> an Azure service. In the case of VMs, it can be used to give an identity to 
> a VM deployment. This simplifies managing Service Principals, since the creds 
> don’t have to be managed in core-site files anymore. The way this works is 
> that during VM deployment, the ARM (Azure Resource Manager) template needs to 
> be modified to enable MSI. Once deployed, the MSI extension runs a service on 
> the VM that exposes a token endpoint on http://localhost at a port specified 
> in the template. The SDK has a new TokenProvider to fetch the token from this 
> local endpoint. This change would expose that TokenProvider as an auth option.
> DeviceCode auth: This enables a token to be obtained from an interactive 
> login. The user is given a URL and a code to enter on the login screen, and 
> can log in from any device. Once the login is done, the token obtained is in 
> the name of the user who logged in. Note that because of the interactive 
> login involved, this is not very suitable for job scenarios, but it can work 
> for ad-hoc scenarios like running “hdfs dfs” commands.
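
A minimal sketch of wiring the MSI provider up in configuration. The 
provider-type value and port are assumptions based on the description and the 
{{fs.adl.oauth2.msi.port}} key mentioned later in this thread; they are not 
confirmed by the attached patch:
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class MsiAdlExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Assumed provider-type value; check the committed docs for the real one.
    conf.set("fs.adl.oauth2.access.token.provider.type", "Msi");
    // Hypothetical port; the ARM template determines the real value.
    conf.setInt("fs.adl.oauth2.msi.port", 50342);
    FileSystem fs = new Path("adl://example.azuredatalakestore.net/")
        .getFileSystem(conf);
    System.out.println(fs.getUri());
  }
}
{code}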






[jira] [Comment Edited] (HADOOP-14627) Enable new features of ADLS SDK (MSI, Device Code auth)

2017-08-02 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16082960#comment-16082960
 ] 

John Zhuge edited comment on HADOOP-14627 at 8/3/17 3:50 AM:
-

* Name the patch in the format of HADOOP-14627.00X.patch.
* Add all new properties and their default values to core-default.xml.
* Update SDK version whenever it is GA.
* Add unit tests for the new token providers to TestAzureADTokenProvider


was (Author: jzhuge):
* Add all new properties and their default values to core-default.xml.
* Update SDK version whenever it is GA.
* Add unit tests for the new token providers to TestAzureADTokenProvider




[jira] [Comment Edited] (HADOOP-14627) Enable new features of ADLS SDK (MSI, Device Code auth)

2017-08-02 Thread John Zhuge (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14627?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16082960#comment-16082960
 ] 

John Zhuge edited comment on HADOOP-14627 at 8/3/17 3:47 AM:
-

* Add all new properties and their default values to core-default.xml.
* Update SDK version whenever it is GA.
* Add unit tests for the new token providers to TestAzureADTokenProvider


was (Author: jzhuge):
Add {{fs.adl.oauth2.msi.port}} with default value to core-default.xml.




[jira] [Commented] (HADOOP-14706) Adding a helper method to determine whether a log is Log4j implement

2017-08-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16112101#comment-16112101
 ] 

Hadoop QA commented on HADOOP-14706:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 
47s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
27s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
42s{color} | {color:green} branch-2 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
15s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
25s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
49s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} branch-2 passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
51s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  5m 
41s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  5m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
43s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
59s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
0s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed with JDK v1.8.0_144 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
17s{color} | {color:green} hadoop-common in the patch passed with JDK 
v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
23s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 82m 45s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5e40efe |
| JIRA Issue | HADOOP-14706 |
| GITHUB PR | https://github.com/apache/hadoop/pull/258 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux 338c04934ce2 3.13.0-119-generic #166-Ubuntu SMP Wed May 3 
12:18:55 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / 065a906 |
| Default Java | 1.7.0_131 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_144 
/usr/lib/jvm/java-7-openjdk-amd64:1.7.0_131 |
| findbugs | 

[jira] [Commented] (HADOOP-14565) Azure: Add Authorization support to ADLS

2017-08-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14565?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16112066#comment-16112066
 ] 

Hadoop QA commented on HADOOP-14565:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
19s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 15m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
25s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 10s{color} | {color:orange} hadoop-tools/hadoop-azure-datalake: The patch 
generated 27 new + 0 unchanged - 0 fixed = 27 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  3m 33s{color} 
| {color:red} hadoop-azure-datalake in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
14s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 24m 21s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.fs.adl.TestGetFileStatus |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14565 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880108/HADOOP_14565__Added_authorizer_functionality_to_ADL_driver.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d9d8041a8487 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 79df1e7 |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12935/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-azure-datalake.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12935/artifact/patchprocess/patch-unit-hadoop-tools_hadoop-azure-datalake.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12935/testReport/ |
| modules | C: hadoop-tools/hadoop-azure-datalake U: 
hadoop-tools/hadoop-azure-datalake |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12935/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Azure: Add Authorization support to ADLS
> 
>
> Key: HADOOP-14565
> URL: https://issues.apache.org/jira/browse/HADOOP-14565

[jira] [Commented] (HADOOP-14498) HADOOP_OPTIONAL_TOOLS not parsed correctly

2017-08-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16112062#comment-16112062
 ] 

Hadoop QA commented on HADOOP-14498:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 2s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
23s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
15s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} shellcheck {color} | {color:red}  0m  
4s{color} | {color:red} The patch generated 4 new + 20 unchanged - 0 fixed = 24 
total (was 20) {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m  
9s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
50s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 19m 46s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14498 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880140/HADOOP-14498.002.patch
 |
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  |
| uname | Linux de9b9a42bfa8 4.4.0-43-generic #63-Ubuntu SMP Wed Oct 12 
13:48:03 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 79df1e7 |
| shellcheck | v0.4.6 |
| shellcheck | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12934/artifact/patchprocess/diff-patch-shellcheck.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12934/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12934/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> HADOOP_OPTIONAL_TOOLS not parsed correctly
> --
>
> Key: HADOOP-14498
> URL: https://issues.apache.org/jira/browse/HADOOP-14498
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha1
>Reporter: Mingliang Liu
>Assignee: Sean Mackrory
>Priority: Critical
> Attachments: HADOOP-14498.001.patch, HADOOP-14498.002.patch
>
>
> # This will make hadoop-azure not show up in the hadoop classpath, even 
> though both hadoop-aws and hadoop-azure-datalake are in the 
> classpath.{code:title=hadoop-env.sh}
> export HADOOP_OPTIONAL_TOOLS="hadoop-azure,hadoop-aws,hadoop-azure-datalake"
> {code}
> # And if we put only hadoop-azure and hadoop-aws, both of them are shown in 
> the classpath.
> {code:title=hadoop-env.sh}
> export HADOOP_OPTIONAL_TOOLS="hadoop-azure,hadoop-aws"
> {code}
> This makes me guess that, while parsing {{HADOOP_OPTIONAL_TOOLS}}, we 
> assume that Hadoop tool modules have a single "-" in their names, so 
> _hadoop-azure-datalake_ overrides _hadoop-azure_. Or are there other 
> assumptions about {{${project.artifactId\}}}?
> Ping [~aw].






[jira] [Updated] (HADOOP-14706) Adding a helper method to determine whether a log is Log4j implement

2017-08-02 Thread Wenxin He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wenxin He updated HADOOP-14706:
---
Status: Open  (was: Patch Available)

> Adding a helper method to determine whether a log is Log4j implement
> 
>
> Key: HADOOP-14706
> URL: https://issues.apache.org/jira/browse/HADOOP-14706
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: util
>Reporter: Wenxin He
>Assignee: Wenxin He
>Priority: Minor
> Attachments: HADOOP-14706.001.patch, HADOOP-14706-branch-2.001.patch, 
> HADOOP-14706-branch-2.002.patch
>
>
> Based on the comments in YARN-6873, we'd like to add a helper method to 
> determine whether a log is a Log4j implementation.
> Using this helper method, we don't have to care whether 
> org.apache.commons.logging or org.slf4j.Logger is used in our system.
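
A minimal sketch of such a helper (an assumed shape built on the slf4j 
facade, not necessarily what the attached patches do):
{code:java}
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public final class LogCheck {
  private LogCheck() {}

  // Returns true when the slf4j logger bound for the class is backed by
  // Log4j 1.x via the slf4j-log4j12 adapter.
  public static boolean isLog4jLogger(Class<?> clazz) {
    if (clazz == null) {
      return false;
    }
    Logger log = LoggerFactory.getLogger(clazz);
    return log instanceof org.slf4j.impl.Log4jLoggerAdapter;
  }
}
{code}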






[jira] [Updated] (HADOOP-14706) Adding a helper method to determine whether a log is Log4j implement

2017-08-02 Thread Wenxin He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wenxin He updated HADOOP-14706:
---
Attachment: HADOOP-14706-branch-2.002.patch




[jira] [Updated] (HADOOP-14706) Adding a helper method to determine whether a log is Log4j implement

2017-08-02 Thread Wenxin He (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14706?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wenxin He updated HADOOP-14706:
---
Status: Patch Available  (was: Open)




[jira] [Commented] (HADOOP-14398) Modify documents for the FileSystem Builder API

2017-08-02 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16112037#comment-16112037
 ] 

Andrew Wang commented on HADOOP-14398:
--

One more thing I noticed, there's a typo in the builder doc: "recurisve"

> Modify documents for the FileSystem Builder API
> ---
>
> Key: HADOOP-14398
> URL: https://issues.apache.org/jira/browse/HADOOP-14398
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-alpha3
>Reporter: Lei (Eddy) Xu
>Assignee: Lei (Eddy) Xu
>  Labels: docuentation
> Attachments: HADOOP-14398.00.patch, HADOOP-14398.01.patch
>
>
> After the API is finished, we should update the documentation to describe 
> the interface, capabilities, and contract that the APIs hold. 






[jira] [Commented] (HADOOP-14398) Modify documents for the FileSystem Builder API

2017-08-02 Thread Andrew Wang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14398?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16112035#comment-16112035
 ] 

Andrew Wang commented on HADOOP-14398:
--

Thanks for working on this Eddy, a few review comments:

h3. filesystem.md

bq. be invoked 

should be "is invoked"

{quote}
* Files are overwritten by default, unless specify `builder.overwrite(false)`.
* Missing parent directories are not created by default, unless specify
`builder.recursive()`.
{quote}

I know the second is a behavior change compared to the current create APIs; is 
the first too? If so, we should call both out as differences.

h3. fsdataoutputstreambuilder.md

bq. being invoked.

Should be "is invoked"

* Should we also call out the change in default behavior compared to the 
existing create call?
* The behavior of what {{opt}} and {{must}} do is not specified. What kind of 
exception is thrown?
* Are there provisions for probing FS capabilities without {{must}} ?
* The example copy-pasted from the FSDataOutputStream builder class javadoc 
looks realistic, but I don't think any of these are actually hooked up. I think 
this makes the example confusing. It would be better to use a fake 
"FooFileSystem" or something in the example.
* Since this is a generic document, we might want to move the HDFS-specific 
builder parameters to an HDFS-specific page. I'd normally suggest the class 
javadoc, but for whatever reason they aren't published (they could be). Up to 
you.

Overall I don't want to hold this up over things that can be improved later, 
feel free to leave the harder stuff to follow-on work. Thanks again!
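
For reference, a usage sketch of the builder calls discussed above (the opt 
key is hypothetical; overwrite/recursive/opt/must are the names under review):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BuilderExample {
  public static void main(String[] args) throws Exception {
    Path path = new Path("/tmp/example");
    FileSystem fs = path.getFileSystem(new Configuration());
    try (FSDataOutputStream out = fs.createFile(path)
        .overwrite(false)               // do not overwrite existing files
        .recursive()                    // create missing parent directories
        .opt("fs.example.hint", "fast") // optional attribute; hypothetical key
        .build()) {
      out.writeUTF("hello");
    }
  }
}
{code}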




[jira] [Commented] (HADOOP-14498) HADOOP_OPTIONAL_TOOLS not parsed correctly

2017-08-02 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16112029#comment-16112029
 ] 

Allen Wittenauer commented on HADOOP-14498:
---

bq. I guess that's what the comment means by "different syntaxes". 

Yeah.  In the beginning I was trying really really really hard to avoid arrays, 
for lots of reasons.  One of the big ones that I'm willing to write down was 
that not all of the array functions are available in our target bash 3.2.  
(e.g., associative arrays, mapfile, etc).   Plus backward compatibility with 
the raw string format of HADOOP_OPTS + trying to solve the duplicate parameter 
problem led to add_param.  It's not pretty and I'm not proud of it.  But it 
works.

Sidenote:

HADOOP_OPTS is probably at this point the biggest hindsight-20-20 mistake in 
Hadoop.  I don't think people really understand how much of an impact it's had 
on literally everything in the system.  For example, it's *the* reason that 
spaces in file paths are a complete nightmare.  HADOOP-13365 is my attempt at 
fixing it.  I'm not sure if it makes it worse or better though.

bq.  It'd be nice to replace that inline loop with a join function. 

Yeah, I'd love for someone to take another whack at it. I can't remember what 
all I tried before I ended up just settling on the loop.  I seem to recall I 
had a better way, but it only worked with bash 4.x.  I guess we could always 
put a version check in there.  (There's one or two other places like that 
already.)

bq. Or should I just throw it in hadoop-functions.sh?

I've just been throwing everything into hadoop-functions.sh, as it ends up 
creating one big API doc at mvn site time.  Pretty convenient. 




[jira] [Updated] (HADOOP-14498) HADOOP_OPTIONAL_TOOLS not parsed correctly

2017-08-02 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-14498:
---
Attachment: HADOOP-14498.002.patch

Attaching said patch. It'd be nice to replace that inline loop with a join 
function. I don't immediately see one - do we have a file for 
non-Hadoop-related helper functions like that? Or should I just throw it in 
hadoop-functions.sh? (Logging off for the night - will iterate tomorrow.)




[jira] [Commented] (HADOOP-14498) HADOOP_OPTIONAL_TOOLS not parsed correctly

2017-08-02 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16111994#comment-16111994
 ] 

Sean Mackrory commented on HADOOP-14498:


Aah, that makes sense. I guess that's what the comment means by "different 
syntaxes". Testing a patch now that uses an array for HADOOP_SHELL_PROFILES, 
plus a corresponding test.




[jira] [Commented] (HADOOP-14685) Test jars to exclude from hadoop-client-minicluster jar

2017-08-02 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16111991#comment-16111991
 ] 

Junping Du commented on HADOOP-14685:
-

I just built the hadoop-client-minicluster jar and checked that no classes from 
test-jar/test-shell are pulled in. I think the latest patch (01) should be good 
to go. [~busbey], do you have further comments?
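
For anyone re-verifying, a quick heuristic spot check along these lines (the 
jar path and grep pattern are illustrative, not a prescribed procedure) should 
surface any Test* classes that leaked into the shaded artifact:

{code}
# Illustrative spot check: list the shaded jar's entries and look for
# classes following the Test* naming convention; an empty result
# suggests no test classes were shaded in.
jar tf hadoop-client-minicluster/target/hadoop-client-minicluster-*.jar \
  | grep -E '/Test[A-Z][[:alnum:]]*\.class$' \
  || echo "no Test* classes found in the shaded jar"
{code}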

> Test jars to exclude from hadoop-client-minicluster jar
> ---
>
> Key: HADOOP-14685
> URL: https://issues.apache.org/jira/browse/HADOOP-14685
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Affects Versions: 3.0.0-beta1
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HADOOP-14685.01.patch, HADOOP-14685.patch
>
>
> This jira is to discuss what test jars should be included in / excluded from 
> hadoop-client-minicluster.
> Jars included/excluded when building hadoop-client-minicluster
> [INFO] --- maven-shade-plugin:2.4.3:shade (default) @ 
> hadoop-client-minicluster ---
> [INFO] Excluding org.apache.hadoop:hadoop-client-api:jar:3.0.0-beta1-SNAPSHOT 
> from the shaded jar.
> [INFO] Excluding 
> org.apache.hadoop:hadoop-client-runtime:jar:3.0.0-beta1-SNAPSHOT from the 
> shaded jar.
> [INFO] Excluding org.apache.htrace:htrace-core4:jar:4.1.0-incubating from the 
> shaded jar.
> [INFO] Excluding org.slf4j:slf4j-api:jar:1.7.25 from the shaded jar.
> [INFO] Excluding commons-logging:commons-logging:jar:1.1.3 from the shaded 
> jar.
> [INFO] Excluding junit:junit:jar:4.11 from the shaded jar.
> [INFO] Including org.hamcrest:hamcrest-core:jar:1.3 in the shaded jar.
> [INFO] Excluding 
> org.apache.hadoop:hadoop-annotations:jar:3.0.0-beta1-SNAPSHOT from the shaded 
> jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-minicluster:jar:3.0.0-beta1-SNAPSHOT in the shaded 
> jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-yarn-server-tests:test-jar:tests:3.0.0-beta1-SNAPSHOT
>  in the shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-yarn-server-resourcemanager:jar:3.0.0-beta1-SNAPSHOT 
> in the shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-yarn-server-applicationhistoryservice:jar:3.0.0-beta1-SNAPSHOT
>  in the shaded jar.
> [INFO] Including de.ruedigermoeller:fst:jar:2.50 in the shaded jar.
> [INFO] Including com.cedarsoftware:java-util:jar:1.9.0 in the shaded jar.
> [INFO] Including com.cedarsoftware:json-io:jar:2.5.1 in the shaded jar.
> [INFO] Including org.apache.curator:curator-test:jar:2.12.0 in the shaded jar.
> [INFO] Including org.javassist:javassist:jar:3.18.1-GA in the shaded jar.
> [INFO] Including org.apache.hadoop:hadoop-hdfs:jar:3.0.0-beta1-SNAPSHOT in 
> the shaded jar.
> [INFO] Including org.eclipse.jetty:jetty-util-ajax:jar:9.3.11.v20160721 in 
> the shaded jar.
> [INFO] Including commons-daemon:commons-daemon:jar:1.0.13 in the shaded jar.
> [INFO] Including io.netty:netty-all:jar:4.0.23.Final in the shaded jar.
> [INFO] Including xerces:xercesImpl:jar:2.9.1 in the shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-mapreduce-client-hs:jar:3.0.0-beta1-SNAPSHOT in the 
> shaded jar.
> [INFO] Excluding 
> org.apache.hadoop:hadoop-yarn-server-timelineservice:jar:3.0.0-beta1-SNAPSHOT 
> from the shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-common:test-jar:tests:3.0.0-beta1-SNAPSHOT in the 
> shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-hdfs:test-jar:tests:3.0.0-beta1-SNAPSHOT in the 
> shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-mapreduce-client-jobclient:test-jar:tests:3.0.0-beta1-SNAPSHOT
>  in the shaded jar.
> [INFO] Including com.sun.jersey:jersey-core:jar:1.19 in the shaded jar.
> [INFO] Including com.sun.jersey:jersey-client:jar:1.19 in the shaded jar.
> [INFO] Including com.sun.jersey:jersey-json:jar:1.19 in the shaded jar.
> [INFO] Including org.codehaus.jettison:jettison:jar:1.1 in the shaded jar.
> [INFO] Including com.sun.xml.bind:jaxb-impl:jar:2.2.3-1 in the shaded jar.
> [INFO] Including com.sun.jersey:jersey-server:jar:1.19 in the shaded jar.
> [INFO] Including com.sun.jersey:jersey-servlet:jar:1.19 in the shaded jar.
> [INFO] Including org.eclipse.jdt:core:jar:3.1.1 in the shaded jar.
> [INFO] Including net.sf.kosmosfs:kfs:jar:0.3 in the shaded jar.
> [INFO] Including net.java.dev.jets3t:jets3t:jar:0.9.0 in the shaded jar.
> [INFO] Including com.jamesmurty.utils:java-xmlbuilder:jar:0.4 in the shaded 
> jar.
> [INFO] Including com.jcraft:jsch:jar:0.1.54 in the shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-yarn-server-nodemanager:jar:3.0.0-beta1-SNAPSHOT in 
> the shaded jar.
> [INFO] Including com.codahale.metrics:metrics-core:jar:3.0.1 in the shaded 
> jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-yarn-server-web-proxy:jar:3.0.0-beta1-SNAPSHOT in 
> the shaded jar.
> [INFO] 

[jira] [Commented] (HADOOP-14498) HADOOP_OPTIONAL_TOOLS not parsed correctly

2017-08-02 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16111959#comment-16111959
 ] 

Allen Wittenauer commented on HADOOP-14498:
---

The -00 patch will likely break the world.

e.g.:

{code}
hadoop_add_param HADOOP_OPTS Xmx "-Xmx${HADOOP_HEAPSIZE_MAX}"
{code}

will now add multiple Xmx lines.
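
To spell out the failure mode with a stripped-down model of the function (not 
the real body, and using a made-up heap size): with the duplicate check gone, 
every call blindly appends:

{code}
# Simplified model of hadoop_add_param with the de-dup check removed.
# $2 (the tag used for duplicate detection) is deliberately ignored,
# which is exactly the problem: repeated calls append repeated values.
function add_param_no_dedup
{
  local envvar=$1
  local value=$3
  eval "${envvar}=\"\${${envvar}} ${value}\""
}

add_param_no_dedup HADOOP_OPTS Xmx "-Xmx4g"
add_param_no_dedup HADOOP_OPTS Xmx "-Xmx4g"
echo "${HADOOP_OPTS}"   # " -Xmx4g -Xmx4g" - two Xmx flags on one JVM command line
{code}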

> HADOOP_OPTIONAL_TOOLS not parsed correctly
> --
>
> Key: HADOOP-14498
> URL: https://issues.apache.org/jira/browse/HADOOP-14498
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha1
>Reporter: Mingliang Liu
>Assignee: Sean Mackrory
>Priority: Critical
> Attachments: HADOOP-14498.001.patch
>
>
> # This will make hadoop-azure not show up in the hadoop classpath, though 
> both hadoop-aws and hadoop-azure-datalake are in the 
> classpath.{code:title=hadoop-env.sh}
> export HADOOP_OPTIONAL_TOOLS="hadoop-azure,hadoop-aws,hadoop-azure-datalake"
> {code}
> # And if we put only hadoop-azure and hadoop-aws, both of them are shown in 
> the classpath.
> {code:title=hadoop-env.sh}
> export HADOOP_OPTIONAL_TOOLS="hadoop-azure,hadoop-aws"
> {code}
> This makes me guess that, while parsing the {{HADOOP_OPTIONAL_TOOLS}}, we 
> make some assumptions that hadoop tool modules have a single "-" in names, 
> and the _hadoop-azure-datalake_ overrides the _hadoop-azure_. Or any other 
> assumptions about the {{${project.artifactId\}}}?
> Ping [~aw].



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14498) HADOOP_OPTIONAL_TOOLS not parsed correctly

2017-08-02 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14498?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16111952#comment-16111952
 ] 

Allen Wittenauer commented on HADOOP-14498:
---

The thing is that add_param was really meant for dealing with de-duping 
stuff like HADOOP_OPTS. Arrays are probably a better choice here because we 
know we can do an exact match.
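
To make the contrast concrete, a small sketch (not actual Hadoop code) of why 
a substring-style de-dup misfires where an exact match over array elements 
does not:

{code}
existing="hadoop-azure-datalake"

# Substring-style check, as a flat-string de-dup effectively does:
# "hadoop-azure" is found inside "hadoop-azure-datalake", so adding
# it is wrongly skipped as a "duplicate".
if [[ "${existing}" =~ hadoop-azure ]]; then
  echo "substring match: hadoop-azure skipped (wrong)"
fi

# Exact match over an array: only a whole-element comparison counts,
# so hadoop-azure is correctly recognized as missing.
declare -a tools=("hadoop-azure-datalake")
found=false
for t in "${tools[@]}"; do
  [[ "${t}" == "hadoop-azure" ]] && found=true
done
${found} || echo "exact match: hadoop-azure added (right)"
{code}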

> HADOOP_OPTIONAL_TOOLS not parsed correctly
> --
>
> Key: HADOOP-14498
> URL: https://issues.apache.org/jira/browse/HADOOP-14498
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha1
>Reporter: Mingliang Liu
>Assignee: Sean Mackrory
>Priority: Critical
> Attachments: HADOOP-14498.001.patch
>
>
> # This will make hadoop-azure not show up in the hadoop classpath, though 
> both hadoop-aws and hadoop-azure-datalake are in the 
> classpath.{code:title=hadoop-env.sh}
> export HADOOP_OPTIONAL_TOOLS="hadoop-azure,hadoop-aws,hadoop-azure-datalake"
> {code}
> # And if we put only hadoop-azure and hadoop-aws, both of them are shown in 
> the classpath.
> {code:title=hadoop-env.sh}
> export HADOOP_OPTIONAL_TOOLS="hadoop-azure,hadoop-aws"
> {code}
> This makes me guess that, while parsing the {{HADOOP_OPTIONAL_TOOLS}}, we 
> make some assumptions that hadoop tool modules have a single "-" in names, 
> and the _hadoop-azure-datalake_ overrides the _hadoop-azure_. Or any other 
> assumptions about the {{${project.artifactId\}}}?
> Ping [~aw].



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14498) HADOOP_OPTIONAL_TOOLS not parsed correctly

2017-08-02 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-14498:
---
Status: Patch Available  (was: Open)

> HADOOP_OPTIONAL_TOOLS not parsed correctly
> --
>
> Key: HADOOP-14498
> URL: https://issues.apache.org/jira/browse/HADOOP-14498
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha1
>Reporter: Mingliang Liu
>Assignee: Sean Mackrory
>Priority: Critical
> Attachments: HADOOP-14498.001.patch
>
>
> # This will make hadoop-azure not show up in the hadoop classpath, though 
> both hadoop-aws and hadoop-azure-datalake are in the 
> classpath.{code:title=hadoop-env.sh}
> export HADOOP_OPTIONAL_TOOLS="hadoop-azure,hadoop-aws,hadoop-azure-datalake"
> {code}
> # And if we put only hadoop-azure and hadoop-aws, both of them are shown in 
> the classpath.
> {code:title=hadoop-env.sh}
> export HADOOP_OPTIONAL_TOOLS="hadoop-azure,hadoop-aws"
> {code}
> This makes me guess that, while parsing the {{HADOOP_OPTIONAL_TOOLS}}, we 
> make some assumptions that hadoop tool modules have a single "-" in names, 
> and the _hadoop-azure-datalake_ overrides the _hadoop-azure_. Or any other 
> assumptions about the {{${project.artifactId\}}}?
> Ping [~aw].



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14498) HADOOP_OPTIONAL_TOOLS not parsed correctly

2017-08-02 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-14498:
---
Attachment: HADOOP-14498.001.patch

Now that HADOOP-13595 is in, I had a look at this. It is indeed the regex 
matching logic in hadoop_add_param being overzealous. That function already 
states an assumption that the value is space-delimited, so I used spaces or 
line boundaries to ensure a full-word match instead of just a partial one. 
Rather than have it use arrays or something from HADOOP-13595, let's fix this 
function for all the other places it's used.

Attaching a patch. I got a clean Yetus run locally, but it wasn't running bats 
tests for some reason. I ran the hadoop_add_param tests manually but not the 
others, and some unrelated Java test failures seem to be blocking it (unless 
there's a way to bypass that and easily run all bats tests?)
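
For reference, the shape of the fix described above - anchoring the match on 
spaces or line boundaries so only whole words count - could look roughly like 
this; the real function also handles logging and an empty initial value, which 
this sketch omits:

{code}
# Sketch of hadoop_add_param with full-word matching (illustrative).
# The value is assumed to be space-delimited, per the function's
# stated contract.
function hadoop_add_param
{
  local envvar=$1
  local word=$2
  local value=$3
  # (^|[[:space:]]) and ([[:space:]]|$) anchor the match at word
  # edges, so "hadoop-azure" no longer matches inside
  # "hadoop-azure-datalake".
  if [[ ! "${!envvar}" =~ (^|[[:space:]])${word}([[:space:]]|$) ]]; then
    eval "${envvar}=\"\${${envvar}} ${value}\""
  fi
}
{code}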

> HADOOP_OPTIONAL_TOOLS not parsed correctly
> --
>
> Key: HADOOP-14498
> URL: https://issues.apache.org/jira/browse/HADOOP-14498
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha1
>Reporter: Mingliang Liu
>Assignee: Sean Mackrory
>Priority: Critical
> Attachments: HADOOP-14498.001.patch
>
>
> # This will make hadoop-azure not show up in the hadoop classpath, though 
> both hadoop-aws and hadoop-azure-datalake are in the 
> classpath.{code:title=hadoop-env.sh}
> export HADOOP_OPTIONAL_TOOLS="hadoop-azure,hadoop-aws,hadoop-azure-datalake"
> {code}
> # And if we put only hadoop-azure and hadoop-aws, both of them are shown in 
> the classpath.
> {code:title=hadoop-env.sh}
> export HADOOP_OPTIONAL_TOOLS="hadoop-azure,hadoop-aws"
> {code}
> This makes me guess that, while parsing the {{HADOOP_OPTIONAL_TOOLS}}, we 
> make some assumptions that hadoop tool modules have a single "-" in names, 
> and the _hadoop-azure-datalake_ overrides the _hadoop-azure_. Or any other 
> assumptions about the {{${project.artifactId\}}}?
> Ping [~aw].



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-14498) HADOOP_OPTIONAL_TOOLS not parsed correctly

2017-08-02 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory reassigned HADOOP-14498:
--

Assignee: Sean Mackrory

> HADOOP_OPTIONAL_TOOLS not parsed correctly
> --
>
> Key: HADOOP-14498
> URL: https://issues.apache.org/jira/browse/HADOOP-14498
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build
>Affects Versions: 3.0.0-alpha1
>Reporter: Mingliang Liu
>Assignee: Sean Mackrory
>Priority: Critical
>
> # This will make hadoop-azure not show up in the hadoop classpath, though 
> both hadoop-aws and hadoop-azure-datalake are in the 
> classpath.{code:title=hadoop-env.sh}
> export HADOOP_OPTIONAL_TOOLS="hadoop-azure,hadoop-aws,hadoop-azure-datalake"
> {code}
> # And if we put only hadoop-azure and hadoop-aws, both of them are shown in 
> the classpath.
> {code:title=hadoop-env.sh}
> export HADOOP_OPTIONAL_TOOLS="hadoop-azure,hadoop-aws"
> {code}
> This makes me guess that, while parsing the {{HADOOP_OPTIONAL_TOOLS}}, we 
> make some assumptions that hadoop tool modules have a single "-" in names, 
> and the _hadoop-azure-datalake_ overrides the _hadoop-azure_. Or any other 
> assumptions about the {{${project.artifactId\}}}?
> Ping [~aw].



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14565) Azure: Add Authorization support to ADLS

2017-08-02 Thread Ryan Waters (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Waters updated HADOOP-14565:
-
Status: Patch Available  (was: Open)

> Azure: Add Authorization support to ADLS
> 
>
> Key: HADOOP-14565
> URL: https://issues.apache.org/jira/browse/HADOOP-14565
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl
>Affects Versions: 2.8.0
>Reporter: Ryan Waters
>Assignee: Sivaguru Sankaridurg
> Attachments: 
> HADOOP_14565__Added_authorizer_functionality_to_ADL_driver.patch
>
>
> This task is meant to add an Authorizer interface to be used by the ADLS 
> driver in a similar way to the one used by WASB. The primary difference in 
> functionality is that the implementation of this Authorizer will be 
> provided by an external jar. This class will be specified through 
> configuration using "adl.external.authorization.class". 
> If this configuration is provided, an instance of the provided class will be 
> created and all file system calls will be passed through the authorizer, 
> allowing implementations to determine if the file path and access type 
> (create, open, delete, etc.) being requested is valid. If the requested 
> implementation class is not found or fails to initialize, initialization of 
> the ADL driver will fail. If no configuration is provided, calls to 
> the authorizer will be skipped and the driver will behave as it did 
> previously.  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12077) Provide a multi-URI replication Inode for ViewFs

2017-08-02 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-12077:
---
Attachment: HADOOP-12077.006.patch

Rebased [~jira.shegalov]'s patch.

> Provide a multi-URI replication Inode for ViewFs
> 
>
> Key: HADOOP-12077
> URL: https://issues.apache.org/jira/browse/HADOOP-12077
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs
>Reporter: Gera Shegalov
>Assignee: Gera Shegalov
> Attachments: HADOOP-12077.001.patch, HADOOP-12077.002.patch, 
> HADOOP-12077.003.patch, HADOOP-12077.004.patch, HADOOP-12077.005.patch, 
> HADOOP-12077.006.patch
>
>
> This JIRA is to provide simple "replication" capabilities for applications 
> that maintain logically equivalent paths in multiple locations for caching or 
> failover (e.g., S3 and HDFS). We noticed a simple common HDFS usage pattern 
> in our applications. They host their data on some logical cluster C. There 
> are corresponding HDFS clusters in multiple datacenters. When the application 
> runs in DC1, it prefers to read from C in DC1, and the application prefers 
> to fail over to C in DC2 if the application is migrated to DC2 or when C in 
> DC1 is unavailable. New application data versions are created 
> periodically/relatively infrequently. 
> In order to address many common scenarios in a general fashion, and to avoid 
> unnecessary code duplication, we implement this functionality in ViewFs (our 
> default FileSystem spanning all clusters in all datacenters) in a project 
> code-named Nfly (N as in N datacenters). Currently each ViewFs Inode points 
> to a single URI via ChRootedFileSystem. Consequently, we introduce a new type 
> of links that points to a list of URIs that are each going to be wrapped in 
> ChRootedFileSystem. A typical usage: 
> /nfly/C/user->/DC1/C/user,/DC2/C/user,... This collection of 
> ChRootedFileSystem instances is fronted by the Nfly filesystem object that is 
> actually used for the mount point/Inode. The Nfly filesystem backs a single 
> logical path /nfly/C/user//path by multiple physical paths.
> Nfly filesystem supports setting minReplication. As long as the number of 
> URIs on which an update has succeeded is greater than or equal to 
> minReplication exceptions are only logged but not thrown. Each update 
> operation is currently executed serially (client-bandwidth driven parallelism 
> will be added later). 
> A file create/write: 
> # Creates a temporary invisible _nfly_tmp_file in the intended chrooted 
> filesystem. 
> # Returns a FSDataOutputStream that wraps output streams returned by 1
> # All writes are forwarded to each output stream.
> # On close of stream created by 2, all n streams are closed, and the files 
> are renamed from _nfly_tmp_file to file. All files receive the same mtime 
> corresponding to the client system time as of beginning of this step. 
> # If at least minReplication destinations have gone through steps 1-4 without 
> failures, the transaction is considered logically committed; otherwise a 
> best-effort attempt is made to clean up the temporary files.
> As for reads, we support a notion of locality similar to HDFS /DC/rack/node. 
> We sort Inode URIs using NetworkTopology by their authorities. These are 
> typically host names in simple HDFS URIs. If the authority is missing, as is 
> the case with the local file:///, the local host name 
> (InetAddress.getLocalHost()) is assumed. This makes sure that the local file system is 
> always the closest one to the reader in this approach. For our Hadoop 2 hdfs 
> URIs that are based on nameservice ids instead of hostnames it is very easy 
> to adjust the topology script since our nameservice ids already contain the 
> datacenter. As for rack and node we can simply output any string such as 
> /DC/rack-nsid/node-nsid, since we only care about datacenter-locality for 
> such filesystem clients.
> There are 2 policies/additions to the read call path that make it more 
> expensive, but improve user experience:
> - readMostRecent - when this policy is enabled, Nfly first checks mtime for 
> the path under all URIs, sorts them from most recent to least recent. Nfly 
> then sorts the set of most recent URIs topologically in the same manner as 
> described above.
> - repairOnRead - when readMostRecent is enabled Nfly already has to RPC all 
> underlying destinations. With repairOnRead, Nfly filesystem would 
> additionally attempt to refresh destinations with the path missing or a stale 
> version of the path using the nearest available most recent destination. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org

[jira] [Updated] (HADOOP-14700) NativeAzureFileSystem.open() ignores blob container name

2017-08-02 Thread Cheng Lian (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cheng Lian updated HADOOP-14700:

Description: 
{{NativeAzureFileSystem}} instances are associated with the blob container used 
to initialize the file system. Assuming that a file system instance {{fs}} is 
associated with a container {{A}}, when trying to access a blob inside another 
container {{B}}, {{fs}} still tries to find the blob inside container {{A}}. If 
there happens to be two blobs with the same name inside both containers, the 
user may get a wrong result because {{fs}} reads the contents from the blob 
inside container {{A}} instead of container {{B}}.

You may reproduce it by running the following self-contained Scala script using 
[Ammonite|http://ammonite.io/]:
{code}
#!/usr/bin/env amm --no-remote-logging

import $ivy.`com.jsuereth::scala-arm:2.0`
import $ivy.`com.microsoft.azure:azure-storage:5.2.0`
import $ivy.`org.apache.hadoop:hadoop-azure:3.0.0-alpha4`
import $ivy.`org.apache.hadoop:hadoop-common:3.0.0-alpha4`
import $ivy.`org.scalatest::scalatest:3.0.3`

import java.io.{BufferedReader, InputStreamReader}
import java.net.URI
import java.time.{Duration, Instant}
import java.util.{Date, EnumSet}

import com.microsoft.azure.storage.{CloudStorageAccount, 
StorageCredentialsAccountAndKey}
import com.microsoft.azure.storage.blob.{SharedAccessBlobPermissions, 
SharedAccessBlobPolicy}
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.hadoop.fs.azure.{AzureException, NativeAzureFileSystem}
import org.scalatest.Assertions._
import resource._

// Utility implicit conversion for auto resource management.
implicit def `Closable->Resource`[T <: { def close() }]: Resource[T] = new 
Resource[T] {
  override def close(closable: T): Unit = closable.close()
}

// Credentials information
val ACCOUNT = "** REDACTED **"
val ACCESS_KEY = "** REDACTED **"

// We'll create two different containers; both contain a blob named "test-blob"
// but with different contents.
val CONTAINER_A = "container-a"
val CONTAINER_B = "container-b"
val TEST_BLOB = "test-blob"

val blobClient = {
  val credentials = new StorageCredentialsAccountAndKey(ACCOUNT, ACCESS_KEY)
  val account = new CloudStorageAccount(credentials, /* useHttps */ true)
  account.createCloudBlobClient()
}

// Generates a read-only SAS key restricted within "container-a".
val sasKeyForContainerA = {
  val since = Instant.now() minus Duration.ofMinutes(10)
  val duration = Duration.ofHours(1)
  val policy = new SharedAccessBlobPolicy()

  policy.setSharedAccessStartTime(Date.from(since))
  policy.setSharedAccessExpiryTime(Date.from(since plus duration))
  policy.setPermissions(EnumSet.of(
SharedAccessBlobPermissions.READ,
SharedAccessBlobPermissions.LIST
  ))

  blobClient
.getContainerReference(CONTAINER_A)
.generateSharedAccessSignature(policy, null)
}

// Sets up testing containers and blobs using the Azure storage SDK:
//
//   container-a/test-blob => "foo"
//   container-b/test-blob => "bar"
{
  val containerARef = blobClient.getContainerReference(CONTAINER_A)
  val containerBRef = blobClient.getContainerReference(CONTAINER_B)

  containerARef.createIfNotExists()
  containerARef.getBlockBlobReference(TEST_BLOB).uploadText("foo")

  containerBRef.createIfNotExists()
  containerBRef.getBlockBlobReference(TEST_BLOB).uploadText("bar")
}

val pathA = new 
Path(s"wasbs://$CONTAINER_A@$ACCOUNT.blob.core.windows.net/$TEST_BLOB")
val pathB = new 
Path(s"wasbs://$CONTAINER_B@$ACCOUNT.blob.core.windows.net/$TEST_BLOB")

for {
  // Creates a file system associated with "container-a".
  fs <- managed {
val conf = new Configuration
conf.set("fs.wasbs.impl", classOf[NativeAzureFileSystem].getName)
conf.set(s"fs.azure.sas.$CONTAINER_A.$ACCOUNT.blob.core.windows.net", 
sasKeyForContainerA)
pathA.getFileSystem(conf)
  }

  // Opens a reader pointing to "container-a/test-blob". We expect to get the 
string "foo" written
  // to this blob previously.
  readerA <- managed(new BufferedReader(new InputStreamReader(fs open pathA)))

  // Opens a reader pointing to "container-b/test-blob". We expect to get an 
exception since the SAS
  // key used to create the `FileSystem` instance is restricted to 
"container-a".
  readerB <- managed(new BufferedReader(new InputStreamReader(fs open pathB)))
} {
  // Should get "foo"
  assert(readerA.readLine() == "foo")

  // Should catch an exception ...
  intercept[AzureException] {
    // ... but instead, we get string "foo" here, which indicates that
    // readerB was reading from "container-a" instead of "container-b".
    val contents = readerB.readLine()
    println(s"Should not reach here but we got $contents")
  }
}
{code}

  was:
{{NativeAzureFileSystem}} instances are associated with the blob container used 
to initialize the file system. Assuming that a file system 

[jira] [Updated] (HADOOP-14727) Socket not closed properly when reading Configurations with BlockReaderRemote

2017-08-02 Thread Andrew Wang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andrew Wang updated HADOOP-14727:
-
Target Version/s: 2.9.0, 3.0.0-beta1  (was: 2.9.0, 3.0.0-alpha4)

> Socket not closed properly when reading Configurations with BlockReaderRemote
> -
>
> Key: HADOOP-14727
> URL: https://issues.apache.org/jira/browse/HADOOP-14727
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.9.0, 3.0.0-alpha4
>Reporter: Xiao Chen
>Priority: Blocker
>
> This was caught by Cloudera's internal testing of the alpha3 release.
> We got reports that some hosts ran out of FDs. Triaging that, we found that 
> both the oozie server and the Yarn JobHistoryServer have tons of sockets in 
> {{CLOSE_WAIT}} state.
> [~haibochen] helped narrow down to a consistent reproduction by simply 
> visiting the JHS web UI, and clicking through a job and its logs.
> I then looked at the {{BlockReaderRemote}} and related code, and didn't spot 
> any leaks in the implementation. After adding a debug log whenever a {{Peer}} 
> is created, closed, or moved in/out of the {{PeerCache}}, it looks like all 
> the {{CLOSE_WAIT}} sockets are created from this call stack:
> {noformat}
> 2017-08-02 13:58:59,901 INFO 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory:  associated peer 
> NioInetPeer(Socket[addr=/10.17.196.28,port=20002,localport=42512]) with 
> blockreader org.apache.hadoop.hdfs.client.impl.BlockReaderRemote@717ce109
> java.lang.Exception: test
> at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:745)
> at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:385)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:636)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:566)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:749)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:807)
> at java.io.DataInputStream.read(DataInputStream.java:149)
> at 
> com.ctc.wstx.io.StreamBootstrapper.ensureLoaded(StreamBootstrapper.java:482)
> at 
> com.ctc.wstx.io.StreamBootstrapper.resolveStreamEncoding(StreamBootstrapper.java:306)
> at 
> com.ctc.wstx.io.StreamBootstrapper.bootstrapInput(StreamBootstrapper.java:167)
> at 
> com.ctc.wstx.stax.WstxInputFactory.doCreateSR(WstxInputFactory.java:573)
> at 
> com.ctc.wstx.stax.WstxInputFactory.createSR(WstxInputFactory.java:633)
> at 
> com.ctc.wstx.stax.WstxInputFactory.createSR(WstxInputFactory.java:647)
> at 
> com.ctc.wstx.stax.WstxInputFactory.createXMLStreamReader(WstxInputFactory.java:366)
> at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2649)
> at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2697)
> at 
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2662)
> at 
> org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2545)
> at org.apache.hadoop.conf.Configuration.get(Configuration.java:1076)
> at 
> org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:1126)
> at 
> org.apache.hadoop.conf.Configuration.getInt(Configuration.java:1344)
> at org.apache.hadoop.mapreduce.counters.Limits.init(Limits.java:45)
> at org.apache.hadoop.mapreduce.counters.Limits.reset(Limits.java:130)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CompletedJob.loadFullHistoryData(CompletedJob.java:363)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CompletedJob.<init>(CompletedJob.java:105)
> at 
> org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager$HistoryFileInfo.loadJob(HistoryFileManager.java:473)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.loadJob(CachedHistoryStorage.java:180)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.access$000(CachedHistoryStorage.java:52)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage$1.load(CachedHistoryStorage.java:103)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage$1.load(CachedHistoryStorage.java:100)
> at 
> com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
> at 
> com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350)
> at 
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
> at 
> com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
> at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
>  

[jira] [Updated] (HADOOP-14727) Socket not closed properly when reading Configurations with BlockReaderRemote

2017-08-02 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-14727:
---
Description: 
This was caught by Cloudera's internal testing of the alpha3 release.

We got reports that some hosts ran out of FDs. Triaging that, we found that 
both the oozie server and the Yarn JobHistoryServer have tons of sockets in 
{{CLOSE_WAIT}} state.

[~haibochen] helped narrow down to a consistent reproduction by simply 
visiting the JHS web UI, and clicking through a job and its logs.

I then looked at the {{BlockReaderRemote}} and related code, and didn't spot 
any leaks in the implementation. After adding a debug log whenever a {{Peer}} 
is created, closed, or moved in/out of the {{PeerCache}}, it looks like all 
the {{CLOSE_WAIT}} sockets are created from this call stack:
{noformat}
2017-08-02 13:58:59,901 INFO 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory:  associated peer 
NioInetPeer(Socket[addr=/10.17.196.28,port=20002,localport=42512]) with 
blockreader org.apache.hadoop.hdfs.client.impl.BlockReaderRemote@717ce109
java.lang.Exception: test
at 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:745)
at 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:385)
at 
org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:636)
at 
org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:566)
at 
org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:749)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:807)
at java.io.DataInputStream.read(DataInputStream.java:149)
at 
com.ctc.wstx.io.StreamBootstrapper.ensureLoaded(StreamBootstrapper.java:482)
at 
com.ctc.wstx.io.StreamBootstrapper.resolveStreamEncoding(StreamBootstrapper.java:306)
at 
com.ctc.wstx.io.StreamBootstrapper.bootstrapInput(StreamBootstrapper.java:167)
at 
com.ctc.wstx.stax.WstxInputFactory.doCreateSR(WstxInputFactory.java:573)
at 
com.ctc.wstx.stax.WstxInputFactory.createSR(WstxInputFactory.java:633)
at 
com.ctc.wstx.stax.WstxInputFactory.createSR(WstxInputFactory.java:647)
at 
com.ctc.wstx.stax.WstxInputFactory.createXMLStreamReader(WstxInputFactory.java:366)
at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2649)
at 
org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2697)
at 
org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2662)
at 
org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2545)
at org.apache.hadoop.conf.Configuration.get(Configuration.java:1076)
at 
org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:1126)
at org.apache.hadoop.conf.Configuration.getInt(Configuration.java:1344)
at org.apache.hadoop.mapreduce.counters.Limits.init(Limits.java:45)
at org.apache.hadoop.mapreduce.counters.Limits.reset(Limits.java:130)
at 
org.apache.hadoop.mapreduce.v2.hs.CompletedJob.loadFullHistoryData(CompletedJob.java:363)
at 
org.apache.hadoop.mapreduce.v2.hs.CompletedJob.<init>(CompletedJob.java:105)
at 
org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager$HistoryFileInfo.loadJob(HistoryFileManager.java:473)
at 
org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.loadJob(CachedHistoryStorage.java:180)
at 
org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.access$000(CachedHistoryStorage.java:52)
at 
org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage$1.load(CachedHistoryStorage.java:103)
at 
org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage$1.load(CachedHistoryStorage.java:100)
at 
com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
at 
com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350)
at 
com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3969)
at 
com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4829)
at 
com.google.common.cache.LocalCache$LocalManualCache.getUnchecked(LocalCache.java:4834)
at 
org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.getFullJob(CachedHistoryStorage.java:193)
at 
org.apache.hadoop.mapreduce.v2.hs.JobHistory.getJob(JobHistory.java:220)
at 
org.apache.hadoop.mapreduce.v2.app.webapp.AppController.requireJob(AppController.java:416)
at 
org.apache.hadoop.mapreduce.v2.app.webapp.AppController.attempts(AppController.java:277)
at 

[jira] [Updated] (HADOOP-14727) Socket not closed properly when reading Configurations with BlockReaderRemote

2017-08-02 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14727?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-14727:
---
Description: 
This was caught by Cloudera's internal testing of the alpha3 release.

We got reports that some hosts ran out of FDs. Triaging that, we found that 
both the oozie server and the Yarn JobHistoryServer have tons of sockets in 
{{CLOSE_WAIT}} state.

[~haibochen] then helped narrow down a consistent reproduction by simply 
visiting the JHS web UI, and clicking through a job and its logs.

I then looked at the {{BlockReaderRemote}} and related code. After adding a 
debug log whenever a {{Peer}} is created, closed, or moved in/out of the 
{{PeerCache}}, it looks like all the {{CLOSE_WAIT}} sockets are created from 
this call stack:
{noformat}
2017-08-02 13:58:59,901 INFO 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory:  associated peer 
NioInetPeer(Socket[addr=/10.17.196.28,port=20002,localport=42512]) with 
blockreader org.apache.hadoop.hdfs.client.impl.BlockReaderRemote@717ce109
java.lang.Exception: test
at 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:745)
at 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:385)
at 
org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:636)
at 
org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:566)
at 
org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:749)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:807)
at java.io.DataInputStream.read(DataInputStream.java:149)
at 
com.ctc.wstx.io.StreamBootstrapper.ensureLoaded(StreamBootstrapper.java:482)
at 
com.ctc.wstx.io.StreamBootstrapper.resolveStreamEncoding(StreamBootstrapper.java:306)
at 
com.ctc.wstx.io.StreamBootstrapper.bootstrapInput(StreamBootstrapper.java:167)
at 
com.ctc.wstx.stax.WstxInputFactory.doCreateSR(WstxInputFactory.java:573)
at 
com.ctc.wstx.stax.WstxInputFactory.createSR(WstxInputFactory.java:633)
at 
com.ctc.wstx.stax.WstxInputFactory.createSR(WstxInputFactory.java:647)
at 
com.ctc.wstx.stax.WstxInputFactory.createXMLStreamReader(WstxInputFactory.java:366)
at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2649)
at 
org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2697)
at 
org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2662)
at 
org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2545)
at org.apache.hadoop.conf.Configuration.get(Configuration.java:1076)
at 
org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:1126)
at org.apache.hadoop.conf.Configuration.getInt(Configuration.java:1344)
at org.apache.hadoop.mapreduce.counters.Limits.init(Limits.java:45)
at org.apache.hadoop.mapreduce.counters.Limits.reset(Limits.java:130)
at 
org.apache.hadoop.mapreduce.v2.hs.CompletedJob.loadFullHistoryData(CompletedJob.java:363)
at 
org.apache.hadoop.mapreduce.v2.hs.CompletedJob.<init>(CompletedJob.java:105)
at 
org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager$HistoryFileInfo.loadJob(HistoryFileManager.java:473)
at 
org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.loadJob(CachedHistoryStorage.java:180)
at 
org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.access$000(CachedHistoryStorage.java:52)
at 
org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage$1.load(CachedHistoryStorage.java:103)
at 
org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage$1.load(CachedHistoryStorage.java:100)
at 
com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
at 
com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350)
at 
com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3969)
at 
com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4829)
at 
com.google.common.cache.LocalCache$LocalManualCache.getUnchecked(LocalCache.java:4834)
at 
org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.getFullJob(CachedHistoryStorage.java:193)
at 
org.apache.hadoop.mapreduce.v2.hs.JobHistory.getJob(JobHistory.java:220)
at 
org.apache.hadoop.mapreduce.v2.app.webapp.AppController.requireJob(AppController.java:416)
at 
org.apache.hadoop.mapreduce.v2.app.webapp.AppController.attempts(AppController.java:277)
at 

[jira] [Commented] (HADOOP-14727) Socket not closed properly when reading Configurations with BlockReaderRemote

2017-08-02 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14727?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16111789#comment-16111789
 ] 

Xiao Chen commented on HADOOP-14727:


Hi [~jeagles],
Would you be able to look into this issue, as the author of HADOOP-14216 and 
HADOOP-14501?

> Socket not closed properly when reading Configurations with BlockReaderRemote
> -
>
> Key: HADOOP-14727
> URL: https://issues.apache.org/jira/browse/HADOOP-14727
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: conf
>Affects Versions: 2.9.0, 3.0.0-alpha4
>Reporter: Xiao Chen
>Priority: Blocker
>
> This was caught by Cloudera's internal testing of the alpha3 release.
> We got reports that some hosts ran out of FDs. Triaging that, we found that 
> both the oozie server and the Yarn JobHistoryServer have tons of sockets in 
> {{CLOSE_WAIT}} state.
> [~haibochen] then helped narrow down a consistent reproduction by simply 
> visiting the JHS web UI, and clicking through a job and its logs.
> I then looked at the {{BlockReaderRemote}} and related code. After adding a 
> debug log whenever a {{Peer}} is created, closed, or moved in/out of the 
> {{PeerCache}}, it looks like all the {{CLOSE_WAIT}} sockets are created from 
> this call stack:
> {noformat}
> 2017-08-02 13:58:59,901 INFO 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory:  associated peer 
> NioInetPeer(Socket[addr=/10.17.196.28,port=20002,localport=42512]) with 
> blockreader org.apache.hadoop.hdfs.client.impl.BlockReaderRemote@717ce109
> java.lang.Exception: test
> at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:745)
> at 
> org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:385)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:636)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:566)
> at 
> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:749)
> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:807)
> at java.io.DataInputStream.read(DataInputStream.java:149)
> at 
> com.ctc.wstx.io.StreamBootstrapper.ensureLoaded(StreamBootstrapper.java:482)
> at 
> com.ctc.wstx.io.StreamBootstrapper.resolveStreamEncoding(StreamBootstrapper.java:306)
> at 
> com.ctc.wstx.io.StreamBootstrapper.bootstrapInput(StreamBootstrapper.java:167)
> at 
> com.ctc.wstx.stax.WstxInputFactory.doCreateSR(WstxInputFactory.java:573)
> at 
> com.ctc.wstx.stax.WstxInputFactory.createSR(WstxInputFactory.java:633)
> at 
> com.ctc.wstx.stax.WstxInputFactory.createSR(WstxInputFactory.java:647)
> at 
> com.ctc.wstx.stax.WstxInputFactory.createXMLStreamReader(WstxInputFactory.java:366)
> at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2649)
> at 
> org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2697)
> at 
> org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2662)
> at 
> org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2545)
> at org.apache.hadoop.conf.Configuration.get(Configuration.java:1076)
> at 
> org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:1126)
> at 
> org.apache.hadoop.conf.Configuration.getInt(Configuration.java:1344)
> at org.apache.hadoop.mapreduce.counters.Limits.init(Limits.java:45)
> at org.apache.hadoop.mapreduce.counters.Limits.reset(Limits.java:130)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CompletedJob.loadFullHistoryData(CompletedJob.java:363)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CompletedJob.<init>(CompletedJob.java:105)
> at 
> org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager$HistoryFileInfo.loadJob(HistoryFileManager.java:473)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.loadJob(CachedHistoryStorage.java:180)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.access$000(CachedHistoryStorage.java:52)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage$1.load(CachedHistoryStorage.java:103)
> at 
> org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage$1.load(CachedHistoryStorage.java:100)
> at 
> com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
> at 
> com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350)
> at 
> com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
> at 
> com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
> at 

[jira] [Created] (HADOOP-14727) Socket not closed properly when reading Configurations with BlockReaderRemote

2017-08-02 Thread Xiao Chen (JIRA)
Xiao Chen created HADOOP-14727:
--

 Summary: Socket not closed properly when reading Configurations 
with BlockReaderRemote
 Key: HADOOP-14727
 URL: https://issues.apache.org/jira/browse/HADOOP-14727
 Project: Hadoop Common
  Issue Type: Bug
  Components: conf
Affects Versions: 3.0.0-alpha4, 2.9.0
Reporter: Xiao Chen
Priority: Blocker


This was caught by Cloudera's internal testing of the alpha3 release.

We got reports that some hosts ran out of FDs. Triaging that, we found that 
both the oozie server and the Yarn JobHistoryServer have tons of sockets in 
{{CLOSE_WAIT}} state.

[~haibochen] then helped narrow down a consistent reproduction by simply 
visiting the JHS web UI, and clicking through a job and its logs.

I then looked at the {{BlockReaderRemote}} and related code. After adding a 
debug log whenever a {{Peer}} is created, closed, or moved in/out of the 
{{PeerCache}}, it looks like all the {{CLOSE_WAIT}} sockets are created from 
this call stack:
{noformat}
2017-08-02 13:58:59,901 INFO 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory:  associated peer 
NioInetPeer(Socket[addr=/10.17.196.28,port=20002,localport=42512]) with 
blockreader org.apache.hadoop.hdfs.client.impl.BlockReaderRemote@717ce109
java.lang.Exception: test
at 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.getRemoteBlockReaderFromTcp(BlockReaderFactory.java:745)
at 
org.apache.hadoop.hdfs.client.impl.BlockReaderFactory.build(BlockReaderFactory.java:385)
at 
org.apache.hadoop.hdfs.DFSInputStream.getBlockReader(DFSInputStream.java:636)
at 
org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(DFSInputStream.java:566)
at 
org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:749)
at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:807)
at java.io.DataInputStream.read(DataInputStream.java:149)
at 
com.ctc.wstx.io.StreamBootstrapper.ensureLoaded(StreamBootstrapper.java:482)
at 
com.ctc.wstx.io.StreamBootstrapper.resolveStreamEncoding(StreamBootstrapper.java:306)
at 
com.ctc.wstx.io.StreamBootstrapper.bootstrapInput(StreamBootstrapper.java:167)
at 
com.ctc.wstx.stax.WstxInputFactory.doCreateSR(WstxInputFactory.java:573)
at 
com.ctc.wstx.stax.WstxInputFactory.createSR(WstxInputFactory.java:633)
at 
com.ctc.wstx.stax.WstxInputFactory.createSR(WstxInputFactory.java:647)
at 
com.ctc.wstx.stax.WstxInputFactory.createXMLStreamReader(WstxInputFactory.java:366)
at org.apache.hadoop.conf.Configuration.parse(Configuration.java:2649)
at 
org.apache.hadoop.conf.Configuration.loadResource(Configuration.java:2697)
at 
org.apache.hadoop.conf.Configuration.loadResources(Configuration.java:2662)
at 
org.apache.hadoop.conf.Configuration.getProps(Configuration.java:2545)
at org.apache.hadoop.conf.Configuration.get(Configuration.java:1076)
at 
org.apache.hadoop.conf.Configuration.getTrimmed(Configuration.java:1126)
at org.apache.hadoop.conf.Configuration.getInt(Configuration.java:1344)
at org.apache.hadoop.mapreduce.counters.Limits.init(Limits.java:45)
at org.apache.hadoop.mapreduce.counters.Limits.reset(Limits.java:130)
at 
org.apache.hadoop.mapreduce.v2.hs.CompletedJob.loadFullHistoryData(CompletedJob.java:363)
at 
org.apache.hadoop.mapreduce.v2.hs.CompletedJob.<init>(CompletedJob.java:105)
at 
org.apache.hadoop.mapreduce.v2.hs.HistoryFileManager$HistoryFileInfo.loadJob(HistoryFileManager.java:473)
at 
org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.loadJob(CachedHistoryStorage.java:180)
at 
org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.access$000(CachedHistoryStorage.java:52)
at 
org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage$1.load(CachedHistoryStorage.java:103)
at 
org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage$1.load(CachedHistoryStorage.java:100)
at 
com.google.common.cache.LocalCache$LoadingValueReference.loadFuture(LocalCache.java:3568)
at 
com.google.common.cache.LocalCache$Segment.loadSync(LocalCache.java:2350)
at 
com.google.common.cache.LocalCache$Segment.lockedGetOrLoad(LocalCache.java:2313)
at com.google.common.cache.LocalCache$Segment.get(LocalCache.java:2228)
at com.google.common.cache.LocalCache.get(LocalCache.java:3965)
at com.google.common.cache.LocalCache.getOrLoad(LocalCache.java:3969)
at 
com.google.common.cache.LocalCache$LocalManualCache.get(LocalCache.java:4829)
at 
com.google.common.cache.LocalCache$LocalManualCache.getUnchecked(LocalCache.java:4834)
at 
org.apache.hadoop.mapreduce.v2.hs.CachedHistoryStorage.getFullJob(CachedHistoryStorage.java:193)
at 
org.apache.hadoop.mapreduce.v2.hs.JobHistory.getJob(JobHistory.java:220)
at 

[jira] [Commented] (HADOOP-14685) Test jars to exclude from hadoop-client-minicluster jar

2017-08-02 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16111780#comment-16111780
 ] 

Arpit Agarwal commented on HADOOP-14685:


Sure, I won't commit it. Thanks.

> Test jars to exclude from hadoop-client-minicluster jar
> ---
>
> Key: HADOOP-14685
> URL: https://issues.apache.org/jira/browse/HADOOP-14685
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Affects Versions: 3.0.0-beta1
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HADOOP-14685.01.patch, HADOOP-14685.patch
>
>
> This jira is to discuss what test jars should be included in / excluded from 
> hadoop-client-minicluster.
> Jars included/excluded when building hadoop-client-minicluster
> [INFO] --- maven-shade-plugin:2.4.3:shade (default) @ 
> hadoop-client-minicluster ---
> [INFO] Excluding org.apache.hadoop:hadoop-client-api:jar:3.0.0-beta1-SNAPSHOT 
> from the shaded jar.
> [INFO] Excluding 
> org.apache.hadoop:hadoop-client-runtime:jar:3.0.0-beta1-SNAPSHOT from the 
> shaded jar.
> [INFO] Excluding org.apache.htrace:htrace-core4:jar:4.1.0-incubating from the 
> shaded jar.
> [INFO] Excluding org.slf4j:slf4j-api:jar:1.7.25 from the shaded jar.
> [INFO] Excluding commons-logging:commons-logging:jar:1.1.3 from the shaded 
> jar.
> [INFO] Excluding junit:junit:jar:4.11 from the shaded jar.
> [INFO] Including org.hamcrest:hamcrest-core:jar:1.3 in the shaded jar.
> [INFO] Excluding 
> org.apache.hadoop:hadoop-annotations:jar:3.0.0-beta1-SNAPSHOT from the shaded 
> jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-minicluster:jar:3.0.0-beta1-SNAPSHOT in the shaded 
> jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-yarn-server-tests:test-jar:tests:3.0.0-beta1-SNAPSHOT
>  in the shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-yarn-server-resourcemanager:jar:3.0.0-beta1-SNAPSHOT 
> in the shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-yarn-server-applicationhistoryservice:jar:3.0.0-beta1-SNAPSHOT
>  in the shaded jar.
> [INFO] Including de.ruedigermoeller:fst:jar:2.50 in the shaded jar.
> [INFO] Including com.cedarsoftware:java-util:jar:1.9.0 in the shaded jar.
> [INFO] Including com.cedarsoftware:json-io:jar:2.5.1 in the shaded jar.
> [INFO] Including org.apache.curator:curator-test:jar:2.12.0 in the shaded jar.
> [INFO] Including org.javassist:javassist:jar:3.18.1-GA in the shaded jar.
> [INFO] Including org.apache.hadoop:hadoop-hdfs:jar:3.0.0-beta1-SNAPSHOT in 
> the shaded jar.
> [INFO] Including org.eclipse.jetty:jetty-util-ajax:jar:9.3.11.v20160721 in 
> the shaded jar.
> [INFO] Including commons-daemon:commons-daemon:jar:1.0.13 in the shaded jar.
> [INFO] Including io.netty:netty-all:jar:4.0.23.Final in the shaded jar.
> [INFO] Including xerces:xercesImpl:jar:2.9.1 in the shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-mapreduce-client-hs:jar:3.0.0-beta1-SNAPSHOT in the 
> shaded jar.
> [INFO] Excluding 
> org.apache.hadoop:hadoop-yarn-server-timelineservice:jar:3.0.0-beta1-SNAPSHOT 
> from the shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-common:test-jar:tests:3.0.0-beta1-SNAPSHOT in the 
> shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-hdfs:test-jar:tests:3.0.0-beta1-SNAPSHOT in the 
> shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-mapreduce-client-jobclient:test-jar:tests:3.0.0-beta1-SNAPSHOT
>  in the shaded jar.
> [INFO] Including com.sun.jersey:jersey-core:jar:1.19 in the shaded jar.
> [INFO] Including com.sun.jersey:jersey-client:jar:1.19 in the shaded jar.
> [INFO] Including com.sun.jersey:jersey-json:jar:1.19 in the shaded jar.
> [INFO] Including org.codehaus.jettison:jettison:jar:1.1 in the shaded jar.
> [INFO] Including com.sun.xml.bind:jaxb-impl:jar:2.2.3-1 in the shaded jar.
> [INFO] Including com.sun.jersey:jersey-server:jar:1.19 in the shaded jar.
> [INFO] Including com.sun.jersey:jersey-servlet:jar:1.19 in the shaded jar.
> [INFO] Including org.eclipse.jdt:core:jar:3.1.1 in the shaded jar.
> [INFO] Including net.sf.kosmosfs:kfs:jar:0.3 in the shaded jar.
> [INFO] Including net.java.dev.jets3t:jets3t:jar:0.9.0 in the shaded jar.
> [INFO] Including com.jamesmurty.utils:java-xmlbuilder:jar:0.4 in the shaded 
> jar.
> [INFO] Including com.jcraft:jsch:jar:0.1.54 in the shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-yarn-server-nodemanager:jar:3.0.0-beta1-SNAPSHOT in 
> the shaded jar.
> [INFO] Including com.codahale.metrics:metrics-core:jar:3.0.1 in the shaded 
> jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-yarn-server-web-proxy:jar:3.0.0-beta1-SNAPSHOT in 
> the shaded jar.
> [INFO] Including org.eclipse.jetty:jetty-server:jar:9.3.11.v20160721 in the 
> shaded jar.
> [INFO] Including org.eclipse.jetty:jetty-http:jar:9.3.11.v20160721 in the 

[jira] [Updated] (HADOOP-14565) Azure: Add Authorization support to ADLS

2017-08-02 Thread Ryan Waters (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Waters updated HADOOP-14565:
-
Attachment: HADOOP_14565__Added_authorizer_functionality_to_ADL_driver.patch

> Azure: Add Authorization support to ADLS
> 
>
> Key: HADOOP-14565
> URL: https://issues.apache.org/jira/browse/HADOOP-14565
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl
>Affects Versions: 2.8.0
>Reporter: Ryan Waters
>Assignee: Sivaguru Sankaridurg
> Attachments: 
> HADOOP_14565__Added_authorizer_functionality_to_ADL_driver.patch
>
>
> This task is meant to add an Authorizer interface to be used by the ADLS 
> driver in a similar way to the one used by WASB. The primary difference in 
> functionality is that the implementation of this Authorizer will be 
> provided by an external jar. This class will be specified through 
> configuration using "adl.external.authorization.class". 
> If this configuration is provided, an instance of the provided class will be 
> created and all file system calls will be passed through the authorizer, 
> allowing implementations to determine if the file path and access type 
> (create, open, delete, etc.) being requested is valid. If the requested 
> implementation class is not found or fails to initialize, initialization of 
> the ADL driver will fail. If no configuration is provided, calls to 
> the authorizer will be skipped and the driver will behave as it did 
> previously.  



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14685) Test jars to exclude from hadoop-client-minicluster jar

2017-08-02 Thread Junping Du (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16111777#comment-16111777
 ] 

Junping Du commented on HADOOP-14685:
-

Hi [~arpitagarwal], I may need more time to verify the patch. Let me get back 
to you before the end of today. Ok?

> Test jars to exclude from hadoop-client-minicluster jar
> ---
>
> Key: HADOOP-14685
> URL: https://issues.apache.org/jira/browse/HADOOP-14685
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Affects Versions: 3.0.0-beta1
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HADOOP-14685.01.patch, HADOOP-14685.patch
>
>
> This jira is to discuss which test jars should be included in/excluded from 
> hadoop-client-minicluster.
> Jars included/excluded when building hadoop-client-minicluster:
> [INFO] --- maven-shade-plugin:2.4.3:shade (default) @ 
> hadoop-client-minicluster ---
> [INFO] Excluding org.apache.hadoop:hadoop-client-api:jar:3.0.0-beta1-SNAPSHOT 
> from the shaded jar.
> [INFO] Excluding 
> org.apache.hadoop:hadoop-client-runtime:jar:3.0.0-beta1-SNAPSHOT from the 
> shaded jar.
> [INFO] Excluding org.apache.htrace:htrace-core4:jar:4.1.0-incubating from the 
> shaded jar.
> [INFO] Excluding org.slf4j:slf4j-api:jar:1.7.25 from the shaded jar.
> [INFO] Excluding commons-logging:commons-logging:jar:1.1.3 from the shaded 
> jar.
> [INFO] Excluding junit:junit:jar:4.11 from the shaded jar.
> [INFO] Including org.hamcrest:hamcrest-core:jar:1.3 in the shaded jar.
> [INFO] Excluding 
> org.apache.hadoop:hadoop-annotations:jar:3.0.0-beta1-SNAPSHOT from the shaded 
> jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-minicluster:jar:3.0.0-beta1-SNAPSHOT in the shaded 
> jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-yarn-server-tests:test-jar:tests:3.0.0-beta1-SNAPSHOT
>  in the shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-yarn-server-resourcemanager:jar:3.0.0-beta1-SNAPSHOT 
> in the shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-yarn-server-applicationhistoryservice:jar:3.0.0-beta1-SNAPSHOT
>  in the shaded jar.
> [INFO] Including de.ruedigermoeller:fst:jar:2.50 in the shaded jar.
> [INFO] Including com.cedarsoftware:java-util:jar:1.9.0 in the shaded jar.
> [INFO] Including com.cedarsoftware:json-io:jar:2.5.1 in the shaded jar.
> [INFO] Including org.apache.curator:curator-test:jar:2.12.0 in the shaded jar.
> [INFO] Including org.javassist:javassist:jar:3.18.1-GA in the shaded jar.
> [INFO] Including org.apache.hadoop:hadoop-hdfs:jar:3.0.0-beta1-SNAPSHOT in 
> the shaded jar.
> [INFO] Including org.eclipse.jetty:jetty-util-ajax:jar:9.3.11.v20160721 in 
> the shaded jar.
> [INFO] Including commons-daemon:commons-daemon:jar:1.0.13 in the shaded jar.
> [INFO] Including io.netty:netty-all:jar:4.0.23.Final in the shaded jar.
> [INFO] Including xerces:xercesImpl:jar:2.9.1 in the shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-mapreduce-client-hs:jar:3.0.0-beta1-SNAPSHOT in the 
> shaded jar.
> [INFO] Excluding 
> org.apache.hadoop:hadoop-yarn-server-timelineservice:jar:3.0.0-beta1-SNAPSHOT 
> from the shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-common:test-jar:tests:3.0.0-beta1-SNAPSHOT in the 
> shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-hdfs:test-jar:tests:3.0.0-beta1-SNAPSHOT in the 
> shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-mapreduce-client-jobclient:test-jar:tests:3.0.0-beta1-SNAPSHOT
>  in the shaded jar.
> [INFO] Including com.sun.jersey:jersey-core:jar:1.19 in the shaded jar.
> [INFO] Including com.sun.jersey:jersey-client:jar:1.19 in the shaded jar.
> [INFO] Including com.sun.jersey:jersey-json:jar:1.19 in the shaded jar.
> [INFO] Including org.codehaus.jettison:jettison:jar:1.1 in the shaded jar.
> [INFO] Including com.sun.xml.bind:jaxb-impl:jar:2.2.3-1 in the shaded jar.
> [INFO] Including com.sun.jersey:jersey-server:jar:1.19 in the shaded jar.
> [INFO] Including com.sun.jersey:jersey-servlet:jar:1.19 in the shaded jar.
> [INFO] Including org.eclipse.jdt:core:jar:3.1.1 in the shaded jar.
> [INFO] Including net.sf.kosmosfs:kfs:jar:0.3 in the shaded jar.
> [INFO] Including net.java.dev.jets3t:jets3t:jar:0.9.0 in the shaded jar.
> [INFO] Including com.jamesmurty.utils:java-xmlbuilder:jar:0.4 in the shaded 
> jar.
> [INFO] Including com.jcraft:jsch:jar:0.1.54 in the shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-yarn-server-nodemanager:jar:3.0.0-beta1-SNAPSHOT in 
> the shaded jar.
> [INFO] Including com.codahale.metrics:metrics-core:jar:3.0.1 in the shaded 
> jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-yarn-server-web-proxy:jar:3.0.0-beta1-SNAPSHOT in 
> the shaded jar.
> [INFO] Including org.eclipse.jetty:jetty-server:jar:9.3.11.v20160721 in the 
> shaded jar.
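
For reference, a minimal sketch of how include/exclude decisions like the ones 
in the log above are expressed in the shade plugin configuration (illustrative 
only; the actual hadoop-client-minicluster pom is considerably more elaborate):

{code}
<!-- Illustrative pom fragment, not the real hadoop-client-minicluster pom.
     artifactSet excludes produce the "Excluding ..." lines in the log above;
     everything else gets shaded in. -->
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-shade-plugin</artifactId>
  <configuration>
    <artifactSet>
      <excludes>
        <exclude>junit:junit</exclude>
        <exclude>org.slf4j:slf4j-api</exclude>
        <exclude>commons-logging:commons-logging</exclude>
      </excludes>
    </artifactSet>
  </configuration>
</plugin>
{code}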

[jira] [Updated] (HADOOP-14726) Remove FileStatus#isDir

2017-08-02 Thread Chris Douglas (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Douglas updated HADOOP-14726:
---
Attachment: HADOOP-14726.000.patch

Alternative: mark {{FileStatus#isDir}} final, delegating to 
{{FileStatus#isDirectory}} (and remove calls to {{isDir}} within the project). 
If we can't remove this in 3.0, we can at least pin down its semantics.
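
In code, the proposal amounts to something like this (a sketch of the idea, 
not the attached patch):

{code}
// Inside FileStatus; a sketch, not the actual patch.
/** @deprecated Use {@link #isDirectory()} instead. */
@Deprecated
public final boolean isDir() {   // final: subclasses can no longer override it
  return isDirectory();          // delegate to the replacement API
}
{code}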

> Remove FileStatus#isDir
> ---
>
> Key: HADOOP-14726
> URL: https://issues.apache.org/jira/browse/HADOOP-14726
> Project: Hadoop Common
>  Issue Type: Task
>  Components: fs
>Reporter: Chris Douglas
>Priority: Minor
> Attachments: HADOOP-14726.000.patch
>
>
> FileStatus#isDir was deprecated in 0.21 (HADOOP-6585).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14685) Test jars to exclude from hadoop-client-minicluster jar

2017-08-02 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16111773#comment-16111773
 ] 

Arpit Agarwal commented on HADOOP-14685:


+1 lgtm.

[~djp], [~busbey], any objections to committing this?

> Test jars to exclude from hadoop-client-minicluster jar
> ---
>
> Key: HADOOP-14685
> URL: https://issues.apache.org/jira/browse/HADOOP-14685
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common
>Affects Versions: 3.0.0-beta1
>Reporter: Bharat Viswanadham
>Assignee: Bharat Viswanadham
> Attachments: HADOOP-14685.01.patch, HADOOP-14685.patch
>
>
> This jira is to discuss which test jars should be included in/excluded from 
> hadoop-client-minicluster.
> Jars included/excluded when building hadoop-client-minicluster:
> [INFO] --- maven-shade-plugin:2.4.3:shade (default) @ 
> hadoop-client-minicluster ---
> [INFO] Excluding org.apache.hadoop:hadoop-client-api:jar:3.0.0-beta1-SNAPSHOT 
> from the shaded jar.
> [INFO] Excluding 
> org.apache.hadoop:hadoop-client-runtime:jar:3.0.0-beta1-SNAPSHOT from the 
> shaded jar.
> [INFO] Excluding org.apache.htrace:htrace-core4:jar:4.1.0-incubating from the 
> shaded jar.
> [INFO] Excluding org.slf4j:slf4j-api:jar:1.7.25 from the shaded jar.
> [INFO] Excluding commons-logging:commons-logging:jar:1.1.3 from the shaded 
> jar.
> [INFO] Excluding junit:junit:jar:4.11 from the shaded jar.
> [INFO] Including org.hamcrest:hamcrest-core:jar:1.3 in the shaded jar.
> [INFO] Excluding 
> org.apache.hadoop:hadoop-annotations:jar:3.0.0-beta1-SNAPSHOT from the shaded 
> jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-minicluster:jar:3.0.0-beta1-SNAPSHOT in the shaded 
> jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-yarn-server-tests:test-jar:tests:3.0.0-beta1-SNAPSHOT
>  in the shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-yarn-server-resourcemanager:jar:3.0.0-beta1-SNAPSHOT 
> in the shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-yarn-server-applicationhistoryservice:jar:3.0.0-beta1-SNAPSHOT
>  in the shaded jar.
> [INFO] Including de.ruedigermoeller:fst:jar:2.50 in the shaded jar.
> [INFO] Including com.cedarsoftware:java-util:jar:1.9.0 in the shaded jar.
> [INFO] Including com.cedarsoftware:json-io:jar:2.5.1 in the shaded jar.
> [INFO] Including org.apache.curator:curator-test:jar:2.12.0 in the shaded jar.
> [INFO] Including org.javassist:javassist:jar:3.18.1-GA in the shaded jar.
> [INFO] Including org.apache.hadoop:hadoop-hdfs:jar:3.0.0-beta1-SNAPSHOT in 
> the shaded jar.
> [INFO] Including org.eclipse.jetty:jetty-util-ajax:jar:9.3.11.v20160721 in 
> the shaded jar.
> [INFO] Including commons-daemon:commons-daemon:jar:1.0.13 in the shaded jar.
> [INFO] Including io.netty:netty-all:jar:4.0.23.Final in the shaded jar.
> [INFO] Including xerces:xercesImpl:jar:2.9.1 in the shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-mapreduce-client-hs:jar:3.0.0-beta1-SNAPSHOT in the 
> shaded jar.
> [INFO] Excluding 
> org.apache.hadoop:hadoop-yarn-server-timelineservice:jar:3.0.0-beta1-SNAPSHOT 
> from the shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-common:test-jar:tests:3.0.0-beta1-SNAPSHOT in the 
> shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-hdfs:test-jar:tests:3.0.0-beta1-SNAPSHOT in the 
> shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-mapreduce-client-jobclient:test-jar:tests:3.0.0-beta1-SNAPSHOT
>  in the shaded jar.
> [INFO] Including com.sun.jersey:jersey-core:jar:1.19 in the shaded jar.
> [INFO] Including com.sun.jersey:jersey-client:jar:1.19 in the shaded jar.
> [INFO] Including com.sun.jersey:jersey-json:jar:1.19 in the shaded jar.
> [INFO] Including org.codehaus.jettison:jettison:jar:1.1 in the shaded jar.
> [INFO] Including com.sun.xml.bind:jaxb-impl:jar:2.2.3-1 in the shaded jar.
> [INFO] Including com.sun.jersey:jersey-server:jar:1.19 in the shaded jar.
> [INFO] Including com.sun.jersey:jersey-servlet:jar:1.19 in the shaded jar.
> [INFO] Including org.eclipse.jdt:core:jar:3.1.1 in the shaded jar.
> [INFO] Including net.sf.kosmosfs:kfs:jar:0.3 in the shaded jar.
> [INFO] Including net.java.dev.jets3t:jets3t:jar:0.9.0 in the shaded jar.
> [INFO] Including com.jamesmurty.utils:java-xmlbuilder:jar:0.4 in the shaded 
> jar.
> [INFO] Including com.jcraft:jsch:jar:0.1.54 in the shaded jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-yarn-server-nodemanager:jar:3.0.0-beta1-SNAPSHOT in 
> the shaded jar.
> [INFO] Including com.codahale.metrics:metrics-core:jar:3.0.1 in the shaded 
> jar.
> [INFO] Including 
> org.apache.hadoop:hadoop-yarn-server-web-proxy:jar:3.0.0-beta1-SNAPSHOT in 
> the shaded jar.
> [INFO] Including org.eclipse.jetty:jetty-server:jar:9.3.11.v20160721 in the 
> shaded jar.
> [INFO] Including 

[jira] [Updated] (HADOOP-14565) Azure: Add Authorization support to ADLS

2017-08-02 Thread Ryan Waters (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14565?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan Waters updated HADOOP-14565:
-
Attachment: (was: 
HADOOP_14565__Added_authorizer_functionality_to_ADL_driver__Updated_javadoc_to_correspond_1.patch)

> Azure: Add Authorization support to ADLS
> 
>
> Key: HADOOP-14565
> URL: https://issues.apache.org/jira/browse/HADOOP-14565
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs/adl
>Affects Versions: 2.8.0
>Reporter: Ryan Waters
>Assignee: Sivaguru Sankaridurg
>
> This task is meant to add an Authorizer interface to be used by the ADLS 
> driver in a similar way to the one used by WASB. The primary difference in 
> functionality is that the implementation of this Authorizer will be 
> provided by an external jar. This class will be specified through 
> configuration using "adl.external.authorization.class". 
> If this configuration is provided, an instance of the provided class will be 
> created and all file system calls will be passed through the authorizer, 
> allowing implementations to determine whether the file path and access type 
> (create, open, delete, etc.) being requested are valid. If the requested 
> implementation class is not found or fails to initialize, initialization of 
> the ADL driver will fail. If no configuration is provided, calls to the 
> authorizer will be skipped and the driver will behave as it did previously.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14726) Remove FileStatus#isDir

2017-08-02 Thread Chris Douglas (JIRA)
Chris Douglas created HADOOP-14726:
--

 Summary: Remove FileStatus#isDir
 Key: HADOOP-14726
 URL: https://issues.apache.org/jira/browse/HADOOP-14726
 Project: Hadoop Common
  Issue Type: Task
  Components: fs
Reporter: Chris Douglas
Priority: Minor


FileStatus#isDir was deprecated in 0.21 (HADOOP-6585).



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Resolved] (HADOOP-14725) hadoop-aws parallel tests do not work under Windows

2017-08-02 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14725?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer resolved HADOOP-14725.
---
Resolution: Duplicate

> hadoop-aws parallel tests do not work under Windows
> ---
>
> Key: HADOOP-14725
> URL: https://issues.apache.org/jira/browse/HADOOP-14725
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
>Priority: Minor
>




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14696) common's parallel tests don't work for Windows

2017-08-02 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-14696:
--
Attachment: HADOOP-14696.01.patch

-01:
* globally define and use the "safe" paths
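
As an illustration of the suspected failure mode (an assumption, not the 
actual antrun logic): when a Windows path is interpolated into a context that 
treats backslash as an escape character, the separators are silently consumed, 
which matches the mangled directory name in the error quoted below.

{code}
public class BackslashDemo {
  public static void main(String[] args) {
    // A Windows-style path with backslash separators.
    String path = "F:\\jenkins\\jenkins-slave\\workspace";
    // Used as a regex replacement string, '\' escapes the next character,
    // so every separator disappears, the same kind of mangling seen below.
    System.out.println("X".replaceAll("X", path));
    // prints: F:jenkinsjenkins-slaveworkspace
  }
}
{code}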

> common's parallel tests don't work for Windows
> --
>
> Key: HADOOP-14696
> URL: https://issues.apache.org/jira/browse/HADOOP-14696
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.0.0-beta1
> Environment: Windows
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Minor
> Attachments: HADOOP-14696.00.patch, HADOOP-14696.01.patch
>
>
> If hadoop-common-project/hadoop-common is run with the -Pparallel-tests flag, 
> it fails in create-parallel-tests-dirs from the pom.xml
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run 
> (create-parallel-tests-dirs) on project hadoop-common: An Ant BuildException 
> has occured: Directory 
> F:\jenkins\jenkins-slave\workspace\hadoop-trunk-win\s\hadoop-common-project\hadoop-common\jenkinsjenkins-slaveworkspacehadoop-trunk-winshadoop-common-projecthadoop-common
> arget\test\data\1 creation was not successful for an unknown reason
> [ERROR] around Ant part 

[jira] [Updated] (HADOOP-14696) parallel tests don't work for Windows

2017-08-02 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-14696:
--
Summary: parallel tests don't work for Windows  (was: common's parallel 
tests don't work for Windows)

> parallel tests don't work for Windows
> -
>
> Key: HADOOP-14696
> URL: https://issues.apache.org/jira/browse/HADOOP-14696
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.0.0-beta1
> Environment: Windows
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Minor
> Attachments: HADOOP-14696.00.patch, HADOOP-14696.01.patch
>
>
> If hadoop-common-project/hadoop-common is run with the -Pparallel-tests flag, 
> it fails in create-parallel-tests-dirs from the pom.xml
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run 
> (create-parallel-tests-dirs) on project hadoop-common: An Ant BuildException 
> has occured: Directory 
> F:\jenkins\jenkins-slave\workspace\hadoop-trunk-win\s\hadoop-common-project\hadoop-common\jenkinsjenkins-slaveworkspacehadoop-trunk-winshadoop-common-projecthadoop-common
> arget\test\data\1 creation was not successful for an unknown reason
> [ERROR] around Ant part 

[jira] [Updated] (HADOOP-14700) NativeAzureFileSystem.open() ignores blob container name

2017-08-02 Thread Cheng Lian (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cheng Lian updated HADOOP-14700:

Description: 
{{NativeAzureFileSystem}} instances are associated with the blob container used 
to initialize the file system. Assuming that a file system instance {{fs}} is 
associated with a container {{A}}, when trying to access a blob inside another 
container {{B}}, {{fs}} still tries to find the blob inside container {{A}}. If 
there happens to be two blobs with the same name inside both containers, the 
user may get a wrong result because {{fs}} reads the contents from the blob 
inside container {{A}} instead of container {{B}}.

The following self-contained Scala code snippet illustrates this issue. You may 
reproduce it by running the following Scala script using 
[Ammonite|http://ammonite.io/].
{code}
#!/usr/bin/env amm

import $ivy.`com.jsuereth::scala-arm:2.0`
import $ivy.`com.microsoft.azure:azure-storage:5.2.0`
import $ivy.`org.apache.hadoop:hadoop-azure:3.0.0-alpha4`
import $ivy.`org.apache.hadoop:hadoop-common:3.0.0-alpha4`
import $ivy.`org.scalatest::scalatest:3.0.3`

import java.io.{BufferedReader, InputStreamReader}
import java.net.URI
import java.time.{Duration, Instant}
import java.util.{Date, EnumSet}

import com.microsoft.azure.storage.{CloudStorageAccount, 
StorageCredentialsAccountAndKey}
import com.microsoft.azure.storage.blob.{SharedAccessBlobPermissions, 
SharedAccessBlobPolicy}
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.hadoop.fs.azure.{AzureException, NativeAzureFileSystem}
import org.scalatest.Assertions._
import resource._

// Utility implicit conversion for auto resource management.
implicit def `Closable->Resource`[T <: { def close() }]: Resource[T] = new 
Resource[T] {
  override def close(closable: T): Unit = closable.close()
}

// Credentials information
val ACCOUNT = "** REDACTED **"
val ACCESS_KEY = "** REDACTED **"

// We'll create two different containers, both contain a blob named "test-blob" 
but with different
// contents.
val CONTAINER_A = "container-a"
val CONTAINER_B = "container-b"
val TEST_BLOB = "test-blob"

val blobClient = {
  val credentials = new StorageCredentialsAccountAndKey(ACCOUNT, ACCESS_KEY)
  val account = new CloudStorageAccount(credentials, /* useHttps */ true)
  account.createCloudBlobClient()
}

// Generates a read-only SAS key restricted within "container-a".
val sasKeyForContainerA = {
  val since = Instant.now() minus Duration.ofMinutes(10)
  val duration = Duration.ofHours(1)
  val policy = new SharedAccessBlobPolicy()

  policy.setSharedAccessStartTime(Date.from(since))
  policy.setSharedAccessExpiryTime(Date.from(since plus duration))
  policy.setPermissions(EnumSet.of(
SharedAccessBlobPermissions.READ,
SharedAccessBlobPermissions.LIST
  ))

  blobClient
.getContainerReference(CONTAINER_A)
.generateSharedAccessSignature(policy, null)
}

// Sets up testing containers and blobs using the Azure storage SDK:
//
//   container-a/test-blob => "foo"
//   container-b/test-blob => "bar"
{
  val containerARef = blobClient.getContainerReference(CONTAINER_A)
  val containerBRef = blobClient.getContainerReference(CONTAINER_B)

  containerARef.createIfNotExists()
  containerARef.getBlockBlobReference(TEST_BLOB).uploadText("foo")

  containerBRef.createIfNotExists()
  containerBRef.getBlockBlobReference(TEST_BLOB).uploadText("bar")
}

val pathA = new 
Path(s"wasbs://$CONTAINER_A@$ACCOUNT.blob.core.windows.net/$TEST_BLOB")
val pathB = new 
Path(s"wasbs://$CONTAINER_B@$ACCOUNT.blob.core.windows.net/$TEST_BLOB")

for {
  // Creates a file system associated with "container-a".
  fs <- managed {
val conf = new Configuration
conf.set("fs.wasbs.impl", classOf[NativeAzureFileSystem].getName)
conf.set(s"fs.azure.sas.$CONTAINER_A.$ACCOUNT.blob.core.windows.net", 
sasKeyForContainerA)
pathA.getFileSystem(conf)
  }

  // Opens a reader pointing to "container-a/test-blob". We expect to get the 
string "foo" written
  // to this blob previously.
  readerA <- managed(new BufferedReader(new InputStreamReader(fs open pathA)))

  // Opens a reader pointing to "container-b/test-blob". We expect to get an 
exception since the SAS
  // key used to create the `FileSystem` instance is restricted to 
"container-a".
  readerB <- managed(new BufferedReader(new InputStreamReader(fs open pathB)))
} {
  // Should get "foo"
  assert(readerA.readLine() == "foo")

  // Should catch an exception ...
  intercept[AzureException] {
// ... but instead, we get string "foo" here, which indicates that the 
readerB was reading from
// "container-a" instead of "container-b".
val contents = readerB.readLine()
println(s"Should not reach here but we got $contents")
  }
}
{code}

  was:
{{NativeAzureFileSystem}} instances are associated with the blob container used 
to initialize the 

[jira] [Commented] (HADOOP-14700) NativeAzureFileSystem.open() ignores blob container name

2017-08-02 Thread Cheng Lian (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14700?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16111645#comment-16111645
 ] 

Cheng Lian commented on HADOOP-14700:
-

Oops... Thanks for pointing out the typo, [~ste...@apache.org]! This issue 
still remains after fixing the path, though.

> NativeAzureFileSystem.open() ignores blob container name
> 
>
> Key: HADOOP-14700
> URL: https://issues.apache.org/jira/browse/HADOOP-14700
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 3.0.0-beta1, 3.0.0-alpha4
>Reporter: Cheng Lian
>
> {{NativeAzureFileSystem}} instances are associated with the blob container 
> used to initialize the file system. Assuming that a file system instance 
> {{fs}} is associated with a container {{A}}, when trying to access a blob 
> inside another container {{B}}, {{fs}} still tries to find the blob inside 
> container {{A}}. If there happens to be two blobs with the same name inside 
> both containers, the user may get a wrong result because {{fs}} reads the 
> contents from the blob inside container {{A}} instead of container {{B}}.
> The following self-contained Scala code snippet illustrates this issue. You 
> may reproduce it by running the script inside the [Ammonite 
> REPL|http://ammonite.io/].
> {code}
> #!/usr/bin/env amm
> import $ivy.`com.jsuereth::scala-arm:2.0`
> import $ivy.`com.microsoft.azure:azure-storage:5.2.0`
> import $ivy.`org.apache.hadoop:hadoop-azure:3.0.0-alpha4`
> import $ivy.`org.apache.hadoop:hadoop-common:3.0.0-alpha4`
> import $ivy.`org.scalatest::scalatest:3.0.3`
> import java.io.{BufferedReader, InputStreamReader}
> import java.net.URI
> import java.time.{Duration, Instant}
> import java.util.{Date, EnumSet}
> import com.microsoft.azure.storage.{CloudStorageAccount, 
> StorageCredentialsAccountAndKey}
> import com.microsoft.azure.storage.blob.{SharedAccessBlobPermissions, 
> SharedAccessBlobPolicy}
> import org.apache.hadoop.conf.Configuration
> import org.apache.hadoop.fs.{FileSystem, Path}
> import org.apache.hadoop.fs.azure.{AzureException, NativeAzureFileSystem}
> import org.scalatest.Assertions._
> import resource._
> // Utility implicit conversion for auto resource management.
> implicit def `Closable->Resource`[T <: { def close() }]: Resource[T] = new 
> Resource[T] {
>   override def close(closable: T): Unit = closable.close()
> }
> // Credentials information
> val ACCOUNT = "** REDACTED **"
> val ACCESS_KEY = "** REDACTED **"
> // We'll create two different containers, both contain a blob named 
> "test-blob" but with different
> // contents.
> val CONTAINER_A = "container-a"
> val CONTAINER_B = "container-b"
> val TEST_BLOB = "test-blob"
> val blobClient = {
>   val credentials = new StorageCredentialsAccountAndKey(ACCOUNT, ACCESS_KEY)
>   val account = new CloudStorageAccount(credentials, /* useHttps */ true)
>   account.createCloudBlobClient()
> }
> // Generates a read-only SAS key restricted within "container-a".
> val sasKeyForContainerA = {
>   val since = Instant.now() minus Duration.ofMinutes(10)
>   val duration = Duration.ofHours(1)
>   val policy = new SharedAccessBlobPolicy()
>   policy.setSharedAccessStartTime(Date.from(since))
>   policy.setSharedAccessExpiryTime(Date.from(since plus duration))
>   policy.setPermissions(EnumSet.of(
> SharedAccessBlobPermissions.READ,
> SharedAccessBlobPermissions.LIST
>   ))
>   blobClient
> .getContainerReference(CONTAINER_A)
> .generateSharedAccessSignature(policy, null)
> }
> // Sets up testing containers and blobs using the Azure storage SDK:
> //
> //   container-a/test-blob => "foo"
> //   container-b/test-blob => "bar"
> {
>   val containerARef = blobClient.getContainerReference(CONTAINER_A)
>   val containerBRef = blobClient.getContainerReference(CONTAINER_B)
>   containerARef.createIfNotExists()
>   containerARef.getBlockBlobReference(TEST_BLOB).uploadText("foo")
>   containerBRef.createIfNotExists()
>   containerBRef.getBlockBlobReference(TEST_BLOB).uploadText("bar")
> }
> val pathA = new 
> Path(s"wasbs://$CONTAINER_A@$ACCOUNT.blob.core.windows.net/$TEST_BLOB")
> val pathB = new 
> Path(s"wasbs://$CONTAINER_B@$ACCOUNT.blob.core.windows.net/$TEST_BLOB")
> for {
>   // Creates a file system associated with "container-a".
>   fs <- managed {
> val conf = new Configuration
> conf.set("fs.wasbs.impl", classOf[NativeAzureFileSystem].getName)
> conf.set(s"fs.azure.sas.$CONTAINER_A.$ACCOUNT.blob.core.windows.net", 
> sasKeyForContainerA)
> pathA.getFileSystem(conf)
>   }
>   // Opens a reader pointing to "container-a/test-blob". We expect to get the 
> string "foo" written
>   // to this blob previously.
>   readerA <- managed(new BufferedReader(new InputStreamReader(fs open pathA)))
>   // Opens a reader pointing to 

[jira] [Updated] (HADOOP-14700) NativeAzureFileSystem.open() ignores blob container name

2017-08-02 Thread Cheng Lian (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14700?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Cheng Lian updated HADOOP-14700:

Description: 
{{NativeAzureFileSystem}} instances are associated with the blob container used 
to initialize the file system. Assuming that a file system instance {{fs}} is 
associated with a container {{A}}, when trying to access a blob inside another 
container {{B}}, {{fs}} still tries to find the blob inside container {{A}}. If 
there happens to be two blobs with the same name inside both containers, the 
user may get a wrong result because {{fs}} reads the contents from the blob 
inside container {{A}} instead of container {{B}}.

The following self-contained Scala code snippet illustrates this issue. You may 
reproduce it by running the script inside the [Ammonite 
REPL|http://ammonite.io/].
{code}
#!/usr/bin/env amm

import $ivy.`com.jsuereth::scala-arm:2.0`
import $ivy.`com.microsoft.azure:azure-storage:5.2.0`
import $ivy.`org.apache.hadoop:hadoop-azure:3.0.0-alpha4`
import $ivy.`org.apache.hadoop:hadoop-common:3.0.0-alpha4`
import $ivy.`org.scalatest::scalatest:3.0.3`

import java.io.{BufferedReader, InputStreamReader}
import java.net.URI
import java.time.{Duration, Instant}
import java.util.{Date, EnumSet}

import com.microsoft.azure.storage.{CloudStorageAccount, 
StorageCredentialsAccountAndKey}
import com.microsoft.azure.storage.blob.{SharedAccessBlobPermissions, 
SharedAccessBlobPolicy}
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.hadoop.fs.azure.{AzureException, NativeAzureFileSystem}
import org.scalatest.Assertions._
import resource._

// Utility implicit conversion for auto resource management.
implicit def `Closable->Resource`[T <: { def close() }]: Resource[T] = new 
Resource[T] {
  override def close(closable: T): Unit = closable.close()
}

// Credentials information
val ACCOUNT = "** REDACTED **"
val ACCESS_KEY = "** REDACTED **"

// We'll create two different containers, both contain a blob named "test-blob" 
but with different
// contents.
val CONTAINER_A = "container-a"
val CONTAINER_B = "container-b"
val TEST_BLOB = "test-blob"

val blobClient = {
  val credentials = new StorageCredentialsAccountAndKey(ACCOUNT, ACCESS_KEY)
  val account = new CloudStorageAccount(credentials, /* useHttps */ true)
  account.createCloudBlobClient()
}

// Generates a read-only SAS key restricted within "container-a".
val sasKeyForContainerA = {
  val since = Instant.now() minus Duration.ofMinutes(10)
  val duration = Duration.ofHours(1)
  val policy = new SharedAccessBlobPolicy()

  policy.setSharedAccessStartTime(Date.from(since))
  policy.setSharedAccessExpiryTime(Date.from(since plus duration))
  policy.setPermissions(EnumSet.of(
SharedAccessBlobPermissions.READ,
SharedAccessBlobPermissions.LIST
  ))

  blobClient
.getContainerReference(CONTAINER_A)
.generateSharedAccessSignature(policy, null)
}

// Sets up testing containers and blobs using the Azure storage SDK:
//
//   container-a/test-blob => "foo"
//   container-b/test-blob => "bar"
{
  val containerARef = blobClient.getContainerReference(CONTAINER_A)
  val containerBRef = blobClient.getContainerReference(CONTAINER_B)

  containerARef.createIfNotExists()
  containerARef.getBlockBlobReference(TEST_BLOB).uploadText("foo")

  containerBRef.createIfNotExists()
  containerBRef.getBlockBlobReference(TEST_BLOB).uploadText("bar")
}

val pathA = new 
Path(s"wasbs://$CONTAINER_A@$ACCOUNT.blob.core.windows.net/$TEST_BLOB")
val pathB = new 
Path(s"wasbs://$CONTAINER_B@$ACCOUNT.blob.core.windows.net/$TEST_BLOB")

for {
  // Creates a file system associated with "container-a".
  fs <- managed {
val conf = new Configuration
conf.set("fs.wasbs.impl", classOf[NativeAzureFileSystem].getName)
conf.set(s"fs.azure.sas.$CONTAINER_A.$ACCOUNT.blob.core.windows.net", 
sasKeyForContainerA)
pathA.getFileSystem(conf)
  }

  // Opens a reader pointing to "container-a/test-blob". We expect to get the 
string "foo" written
  // to this blob previously.
  readerA <- managed(new BufferedReader(new InputStreamReader(fs open pathA)))

  // Opens a reader pointing to "container-b/test-blob". We expect to get an 
exception since the SAS
  // key used to create the `FileSystem` instance is restricted to 
"container-a".
  readerB <- managed(new BufferedReader(new InputStreamReader(fs open pathB)))
} {
  // Should get "foo"
  assert(readerA.readLine() == "foo")

  // Should catch an exception ...
  intercept[AzureException] {
// ... but instead, we get string "foo" here, which indicates that the 
readerB was reading from
// "container-a" instead of "container-b".
val contents = readerB.readLine()
println(s"Should not reach here but we got $contents")
  }
}
{code}

  was:
{{NativeAzureFileSystem}} instances are associated with the blob container used 
to initialize the file 

[jira] [Commented] (HADOOP-13595) Rework hadoop_usage to be broken up by clients/daemons/etc.

2017-08-02 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16111557#comment-16111557
 ] 

Allen Wittenauer commented on HADOOP-13595:
---

Thanks!

> Rework hadoop_usage to be broken up by clients/daemons/etc.
> ---
>
> Key: HADOOP-13595
> URL: https://issues.apache.org/jira/browse/HADOOP-13595
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-13595.00.patch, HADOOP-13595.01.patch, 
> HADOOP-13595.02.patch, HADOOP-13595.03.patch, HADOOP-13595.04.patch, 
> HADOOP-13595.05.patch, HADOOP-13595.06.patch
>
>
> Part of the feedback from HADOOP-13341 was that it wasn't obvious what was a 
> client and what was a daemon.  Reworking the hadoop_usage output so that it 
> is obvious helps fix this issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13595) Rework hadoop_usage to be broken up by clients/daemons/etc.

2017-08-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16111519#comment-16111519
 ] 

Hudson commented on HADOOP-13595:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12106 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12106/])
HADOOP-13595. Rework hadoop_usage to be broken up by (mackrorysd: rev 
1a1bf6b7d044929bc9d6a4763780d916b00ccf5a)
* (add) 
hadoop-common-project/hadoop-common/src/test/scripts/hadoop_add_array_param.bats
* (edit) 
hadoop-common-project/hadoop-kms/src/main/libexec/shellprofile.d/hadoop-kms.sh
* (edit) 
hadoop-tools/hadoop-archive-logs/src/main/shellprofile.d/hadoop-archive-logs.sh
* (edit) hadoop-tools/hadoop-gridmix/src/main/shellprofile.d/hadoop-gridmix.sh
* (add) 
hadoop-common-project/hadoop-common/src/test/scripts/hadoop_sort_array.bats
* (edit) hadoop-common-project/hadoop-common/src/site/markdown/UnixShellGuide.md
* (add) 
hadoop-common-project/hadoop-common/src/test/scripts/hadoop_array_contains.bats
* (edit) hadoop-tools/hadoop-distcp/src/main/shellprofile.d/hadoop-distcp.sh
* (edit) 
hadoop-tools/hadoop-streaming/src/main/shellprofile.d/hadoop-streaming.sh
* (edit) hadoop-mapreduce-project/bin/mapred
* (edit) 
hadoop-hdfs-project/hadoop-hdfs-httpfs/src/main/libexec/shellprofile.d/hadoop-httpfs.sh
* (edit) hadoop-tools/hadoop-extras/src/main/shellprofile.d/hadoop-extras.sh
* (edit) hadoop-hdfs-project/hadoop-hdfs/src/main/bin/hdfs
* (edit) hadoop-common-project/hadoop-common/src/main/bin/hadoop-functions.sh
* (edit) hadoop-tools/hadoop-rumen/src/main/shellprofile.d/hadoop-rumen.sh
* (edit) hadoop-yarn-project/hadoop-yarn/bin/yarn
* (edit) hadoop-common-project/hadoop-common/src/main/bin/hadoop
* (edit) hadoop-tools/hadoop-archives/src/main/shellprofile.d/hadoop-archives.sh


> Rework hadoop_usage to be broken up by clients/daemons/etc.
> ---
>
> Key: HADOOP-13595
> URL: https://issues.apache.org/jira/browse/HADOOP-13595
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-13595.00.patch, HADOOP-13595.01.patch, 
> HADOOP-13595.02.patch, HADOOP-13595.03.patch, HADOOP-13595.04.patch, 
> HADOOP-13595.05.patch, HADOOP-13595.06.patch
>
>
> Part of the feedback from HADOOP-13341 was that it wasn't obvious what was a 
> client and what was a daemon.  Reworking the hadoop_usage output so that it 
> is obvious helps fix this issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14725) hadoop-aws parallel tests do not work under Windows

2017-08-02 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-14725:
-

 Summary: hadoop-aws parallel tests do not work under Windows
 Key: HADOOP-14725
 URL: https://issues.apache.org/jira/browse/HADOOP-14725
 Project: Hadoop Common
  Issue Type: Test
  Components: test
Affects Versions: 3.0.0-beta1
Reporter: Allen Wittenauer
Priority: Minor






--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14661) S3A to support Requester Pays Buckets using

2017-08-02 Thread Mandus Momberg (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16111473#comment-16111473
 ] 

Mandus Momberg commented on HADOOP-14661:
-

Cool. I'll do all of this. 
I've been tied up with a bunch of stuff, so I haven't been able to make the 
changes yet. 

Should be able to do this in a week or two. 

> S3A to support Requester Pays Buckets using
> ---
>
> Key: HADOOP-14661
> URL: https://issues.apache.org/jira/browse/HADOOP-14661
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common, util
>Affects Versions: 3.0.0-alpha3
>Reporter: Mandus Momberg
>Assignee: Mandus Momberg
>Priority: Minor
> Attachments: HADOOP-14661.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> Amazon S3 has the ability to charge the requester for the cost of accessing 
> S3. This is called Requester Pays Buckets. 
> In order to access these buckets, each request needs to be signed with a 
> specific header. 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/RequesterPaysBuckets.html
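
For context, this is how Requester Pays is expressed per-request with the AWS 
SDK for Java v1. How S3A should surface it (config key, scope) is not settled 
by this issue, so treat the wiring here as an assumption.

{code}
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.S3Object;

public class RequesterPaysExample {
  public static void main(String[] args) {
    AmazonS3 s3 = AmazonS3ClientBuilder.defaultClient();
    // Placeholder bucket/key. setRequesterPays(true) makes the SDK send the
    // "x-amz-request-payer: requester" header, i.e. the caller accepts the
    // data-transfer charges for this request.
    GetObjectRequest req = new GetObjectRequest("some-bucket", "some/key");
    req.setRequesterPays(true);
    S3Object obj = s3.getObject(req);
    System.out.println("Length: " + obj.getObjectMetadata().getContentLength());
  }
}
{code}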



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13595) Rework hadoop_usage to be broken up by clients/daemons/etc.

2017-08-02 Thread Sean Mackrory (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sean Mackrory updated HADOOP-13595:
---
   Resolution: Fixed
Fix Version/s: 3.0.0-beta1
   Status: Resolved  (was: Patch Available)

Pushed.

> Rework hadoop_usage to be broken up by clients/daemons/etc.
> ---
>
> Key: HADOOP-13595
> URL: https://issues.apache.org/jira/browse/HADOOP-13595
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Fix For: 3.0.0-beta1
>
> Attachments: HADOOP-13595.00.patch, HADOOP-13595.01.patch, 
> HADOOP-13595.02.patch, HADOOP-13595.03.patch, HADOOP-13595.04.patch, 
> HADOOP-13595.05.patch, HADOOP-13595.06.patch
>
>
> Part of the feedback from HADOOP-13341 was that it wasn't obvious what was a 
> client and what was a daemon.  Reworking the hadoop_usage output so that it 
> is obvious helps fix this issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14724) Get a daily QBT run for Windows

2017-08-02 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-14724:
--
Description: We used to have Windows as part of our testing infrastructure. 
 Let's get it back up and running now that the ASF has more boxes (and who 
knows what the status of the hadoop-win-1 box is)  (was: We used to have a 
Windows as part of our testing infrastructure.  Let's get it back up and 
running now that the ASF has some boxes.)

> Get a daily QBT run for Windows
> ---
>
> Key: HADOOP-14724
> URL: https://issues.apache.org/jira/browse/HADOOP-14724
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>
> We used to have Windows as part of our testing infrastructure.  Let's get it 
> back up and running now that the ASF has more boxes (and who knows what the 
> status of the hadoop-win-1 box is)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14569) NativeAzureFileSystem, AzureBlobStorageTestAccount to have useful toString() values

2017-08-02 Thread Thomas Marquardt (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14569?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16111444#comment-16111444
 ] 

Thomas Marquardt commented on HADOOP-14569:
---

Trying to understand this better.  How would this be used?  Is it primarily for 
unit testing?  When I run tests, I know the account name and container name and 
don't need them in the logs. I would, however, find it useful to access 
certain metrics in my test, for example to verify that the number of open files 
equals the number of closed files at the end of my test.  This is already done 
in TestAzureFileSystemInstrumentation.java, but perhaps it could be improved 
and become a more generally usable utility.

> NativeAzureFileSystem, AzureBlobStorageTestAccount to have useful toString() 
> values
> ---
>
> Key: HADOOP-14569
> URL: https://issues.apache.org/jira/browse/HADOOP-14569
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 2.8.1
>Reporter: Steve Loughran
>Priority: Minor
>
> {{NativeAzureFileSystem.toString()}},  and 
> {{AzureBlobStorageTestAccount.toString()}} should return data meaningful in 
> logging & test runs
> * account name
> * container name/status
> * ideally, FS instrumentation statistics
> * + not to NPE if invoked before calling FileSystem.initialize(), or after 
> being closed.
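
A sketch of a toString() that would satisfy the list above. The field names 
and the instrumentation field are assumptions, not the class's actual 
internals.

{code}
// Illustrative sketch only; field names are assumed for the example.
class AzureFsToStringSketch {
  private String accountName;        // set in initialize()
  private String containerName;      // set in initialize()
  private volatile boolean closed;
  private Object instrumentation;    // stand-in for the FS instrumentation

  @Override
  public String toString() {
    // Must never NPE, even before initialize() or after close().
    StringBuilder sb = new StringBuilder("NativeAzureFileSystem{");
    sb.append("account=").append(accountName == null ? "(unset)" : accountName);
    sb.append(", container=")
        .append(containerName == null ? "(unset)" : containerName);
    sb.append(", closed=").append(closed);
    if (instrumentation != null) {
      sb.append(", stats=").append(instrumentation);
    }
    return sb.append('}').toString();
  }
}
{code}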



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14724) Get a daily QBT run for Windows

2017-08-02 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16111442#comment-16111442
 ] 

Allen Wittenauer commented on HADOOP-14724:
---

Here's the link to Jenkins:

https://builds.apache.org/view/H-L/view/Hadoop/job/hadoop-trunk-win


> Get a daily QBT run for Windows
> ---
>
> Key: HADOOP-14724
> URL: https://issues.apache.org/jira/browse/HADOOP-14724
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.0.0-beta1
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>
> We used to have Windows as part of our testing infrastructure.  Let's get 
> it back up and running now that the ASF has some boxes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14724) Get a daily QBT run for Windows

2017-08-02 Thread Allen Wittenauer (JIRA)
Allen Wittenauer created HADOOP-14724:
-

 Summary: Get a daily QBT run for Windows
 Key: HADOOP-14724
 URL: https://issues.apache.org/jira/browse/HADOOP-14724
 Project: Hadoop Common
  Issue Type: Test
  Components: test
Affects Versions: 3.0.0-beta1
Reporter: Allen Wittenauer
Assignee: Allen Wittenauer


We used to have Windows as part of our testing infrastructure.  Let's get it 
back up and running now that the ASF has some boxes.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14716) SwiftNativeFileSystem should not eat the exception when rename

2017-08-02 Thread Chen He (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16111387#comment-16111387
 ] 

Chen He commented on HADOOP-14716:
--

Thank you for the quick reply, [~steve_l]. IMHO, HADOOP-11452 is very helpful, 
at the same time, I will come up with a patch. 

> SwiftNativeFileSystem should not eat the exception when rename
> --
>
> Key: HADOOP-14716
> URL: https://issues.apache.org/jira/browse/HADOOP-14716
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.8.1, 3.0.0-alpha3
>Reporter: Chen He
>Assignee: Chen He
>Priority: Minor
>
> Currently, "rename" will eat exceptions and return "false" in 
> SwiftNativeFileSystem. It is not easy for the user to find the root cause of 
> why rename failed. It should, at least, write out some logs instead of 
> silently eating these exceptions.
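
A minimal sketch of that improvement. The class shape, LOG field, and 
{{innerRename}} helper are hypothetical; the real SwiftNativeFileSystem 
internals differ.

{code}
import java.io.IOException;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.hadoop.fs.Path;

class RenameLoggingSketch {
  private static final Log LOG = LogFactory.getLog(RenameLoggingSketch.class);

  public boolean rename(Path src, Path dst) throws IOException {
    try {
      return innerRename(src, dst);
    } catch (IOException e) {
      // Log before mapping the failure to 'false' so the root cause is
      // visible instead of being silently swallowed.
      LOG.warn("rename(" + src + " to " + dst + ") failed", e);
      return false;
    }
  }

  private boolean innerRename(Path src, Path dst) throws IOException {
    throw new IOException("placeholder for the actual rename logic");
  }
}
{code}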



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13595) Rework hadoop_usage to be broken up by clients/daemons/etc.

2017-08-02 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16111341#comment-16111341
 ] 

Sean Mackrory commented on HADOOP-13595:


+1. Can commit shortly...

> Rework hadoop_usage to be broken up by clients/daemons/etc.
> ---
>
> Key: HADOOP-13595
> URL: https://issues.apache.org/jira/browse/HADOOP-13595
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-13595.00.patch, HADOOP-13595.01.patch, 
> HADOOP-13595.02.patch, HADOOP-13595.03.patch, HADOOP-13595.04.patch, 
> HADOOP-13595.05.patch, HADOOP-13595.06.patch
>
>
> Part of the feedback from HADOOP-13341 was that it wasn't obvious what was a 
> client and what was a daemon.  Reworking the hadoop_usage output so that it 
> is obvious helps fix this issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14715) TestWasbRemoteCallHelper failing

2017-08-02 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14715:

Summary: TestWasbRemoteCallHelper failing  (was: failure of test 
org.apache.hadoop.fs.azure.TestWasbRemoteCallHelper)

> TestWasbRemoteCallHelper failing
> 
>
> Key: HADOOP-14715
> URL: https://issues.apache.org/jira/browse/HADOOP-14715
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>
> {{org.apache.hadoop.fs.azure.TestWasbRemoteCallHelper.testWhenOneInstanceIsDown}}
>  is failing for me on trunk



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14715) failure of test org.apache.hadoop.fs.azure.TestWasbRemoteCallHelper

2017-08-02 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14715?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14715:

Priority: Major  (was: Minor)

> failure of test org.apache.hadoop.fs.azure.TestWasbRemoteCallHelper
> ---
>
> Key: HADOOP-14715
> URL: https://issues.apache.org/jira/browse/HADOOP-14715
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>
> {{org.apache.hadoop.fs.azure.TestWasbRemoteCallHelper.testWhenOneInstanceIsDown}}
>  is failing for me on trunk



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14715) failure of test org.apache.hadoop.fs.azure.TestWasbRemoteCallHelper

2017-08-02 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14715?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16111281#comment-16111281
 ] 

Steve Loughran commented on HADOOP-14715:
-

cause is probably HADOOP-14642

> failure of test org.apache.hadoop.fs.azure.TestWasbRemoteCallHelper
> ---
>
> Key: HADOOP-14715
> URL: https://issues.apache.org/jira/browse/HADOOP-14715
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/azure
>Affects Versions: 3.0.0-beta1
>Reporter: Steve Loughran
>Priority: Minor
>
> {{org.apache.hadoop.fs.azure.TestWasbRemoteCallHelper.testWhenOneInstanceIsDown}}
>  is failing for me on trunk



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13595) Rework hadoop_usage to be broken up by clients/daemons/etc.

2017-08-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16111226#comment-16111226
 ] 

Hadoop QA commented on HADOOP-13595:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
34s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 17m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 14m 
21s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
20s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 14m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} shellcheck {color} | {color:green}  1m 
30s{color} | {color:green} There were no new shellcheck issues. {color} |
| {color:green}+1{color} | {color:green} shelldocs {color} | {color:green}  0m  
9s{color} | {color:green} There were no new shelldocs issues. {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
18s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
30s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
8s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
28s{color} | {color:green} hadoop-hdfs-httpfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green} 11m  
0s{color} | {color:green} hadoop-yarn in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m 
32s{color} | {color:green} hadoop-mapreduce-project in the patch passed. 
{color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} hadoop-streaming in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
35s{color} | {color:green} hadoop-distcp in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
32s{color} | {color:green} hadoop-archives in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
31s{color} | {color:green} hadoop-archive-logs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
34s{color} | {color:green} hadoop-rumen in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
36s{color} | {color:green} hadoop-gridmix in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
31s{color} | {color:green} hadoop-extras in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
40s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 72m 45s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-13595 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880061/HADOOP-13595.05.patch 
|
| Optional Tests |  asflicense  mvnsite  unit  shellcheck  shelldocs  |
| uname | Linux fb8cf5542e16 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5e4434f |
| shellcheck | v0.4.6 |
|  Test Results | 

[jira] [Updated] (HADOOP-13595) Rework hadoop_usage to be broken up by clients/daemons/etc.

2017-08-02 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-13595:
--
Attachment: HADOOP-13595.06.patch

-06:
* restore IFS in sort_array
* add IFS check to the sort test code (which failed before!)

Thanks for catching that!

> Rework hadoop_usage to be broken up by clients/daemons/etc.
> ---
>
> Key: HADOOP-13595
> URL: https://issues.apache.org/jira/browse/HADOOP-13595
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-13595.00.patch, HADOOP-13595.01.patch, 
> HADOOP-13595.02.patch, HADOOP-13595.03.patch, HADOOP-13595.04.patch, 
> HADOOP-13595.05.patch, HADOOP-13595.06.patch
>
>
> Part of the feedback from HADOOP-13341 was that it wasn't obvious what was a 
> client and what was a daemon.  Reworking the hadoop_usage output so that it 
> is obvious helps fix this issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14698) Make copyFromLocal's -t option available for put as well

2017-08-02 Thread Andras Bokor (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Andras Bokor updated HADOOP-14698:
--
Attachment: HADOOP-14698.03.patch

Thanks for your answer [~msingh].
I agree. Option 2 is the easiest for now, so I created a patch where we 
disallow -t for moveFromLocal. We can make moveFromLocal multithreaded later. 
I have some ideas; I will file a separate JIRA to discuss them.

> Make copyFromLocal's -t option available for put as well
> 
>
> Key: HADOOP-14698
> URL: https://issues.apache.org/jira/browse/HADOOP-14698
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
> Attachments: HADOOP-14698.01.patch, HADOOP-14698.02.patch, 
> HADOOP-14698.03.patch
>
>
> After HDFS-11786, copyFromLocal and put are no longer identical.
> I do not see any reason not to add the new feature to put as well.
> Being non-identical makes the command more complicated to understand and use 
> from the user's point of view.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-13595) Rework hadoop_usage to be broken up by clients/daemons/etc.

2017-08-02 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1650#comment-1650
 ] 

Allen Wittenauer edited comment on HADOOP-13595 at 8/2/17 4:00 PM:
---

Argh. Actually, it's not getting reset because there is no fork: a built-in is 
being called (see the column printer code), and the $() call doesn't count. I 
should add that to the unit test.

It kind of sucks that IFS getting set like this is situational. :(
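
One way this situational behaviour shows up, as a hedged illustration (the
variable names are mine, not the script's): a prefix assignment on a command
is scoped to that command, but a statement consisting only of assignments runs
in the current shell, so the {{$()}} doesn't count as a command and the IFS
change sticks:

{code}
#!/usr/bin/env bash
default_ifs="${IFS}"

# prefix assignment on an external command: scoped to it, IFS reverts afterwards
IFS=$'\n' sort /dev/null
[[ "${IFS}" == "${default_ifs}" ]] && echo "reset after a real command"

# assignment-only statement: no command runs in this shell, so the assignment
# persists even though the command substitution itself ran in a subshell
IFS=$'\n' sorted=$(printf 'b\na\n' | sort)
[[ "${IFS}" == "${default_ifs}" ]] || echo "still set after an assignment-only line"
{code}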


was (Author: aw):
Argh. Actually, it's not getting reset because there is no fork: a built-in is 
being called (see the column printer code), and the $() call doesn't count. I 
should add that to the unit test.



> Rework hadoop_usage to be broken up by clients/daemons/etc.
> ---
>
> Key: HADOOP-13595
> URL: https://issues.apache.org/jira/browse/HADOOP-13595
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-13595.00.patch, HADOOP-13595.01.patch, 
> HADOOP-13595.02.patch, HADOOP-13595.03.patch, HADOOP-13595.04.patch, 
> HADOOP-13595.05.patch
>
>
> Part of the feedback from HADOOP-13341 was that it wasn't obvious what was a 
> client and what was a daemon.  Reworking the hadoop_usage output so that it 
> is obvious helps fix this issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13595) Rework hadoop_usage to be broken up by clients/daemons/etc.

2017-08-02 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1650#comment-1650
 ] 

Allen Wittenauer commented on HADOOP-13595:
---

Argh. Actually, it's not getting reset because there is no fork: a built-in is 
being called (see the column printer code), and the $() call doesn't count. I 
should add that to the unit test.



> Rework hadoop_usage to be broken up by clients/daemons/etc.
> ---
>
> Key: HADOOP-13595
> URL: https://issues.apache.org/jira/browse/HADOOP-13595
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-13595.00.patch, HADOOP-13595.01.patch, 
> HADOOP-13595.02.patch, HADOOP-13595.03.patch, HADOOP-13595.04.patch, 
> HADOOP-13595.05.patch
>
>
> Part of the feedback from HADOOP-13341 was that it wasn't obvious what was a 
> client and what was a daemon.  Reworking the hadoop_usage output so that it 
> is obvious helps fix this issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13595) Rework hadoop_usage to be broken up by clients/daemons/etc.

2017-08-02 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1644#comment-1644
 ] 

Sean Mackrory commented on HADOOP-13595:


Ooh, I see that now. Thanks. +1 from me, pending a clean Yetus run.

> Rework hadoop_usage to be broken up by clients/daemons/etc.
> ---
>
> Key: HADOOP-13595
> URL: https://issues.apache.org/jira/browse/HADOOP-13595
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-13595.00.patch, HADOOP-13595.01.patch, 
> HADOOP-13595.02.patch, HADOOP-13595.03.patch, HADOOP-13595.04.patch, 
> HADOOP-13595.05.patch
>
>
> Part of the feedback from HADOOP-13341 was that it wasn't obvious what was a 
> client and what was a daemon.  Reworking the hadoop_usage output so that it 
> is obvious helps fix this issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13595) Rework hadoop_usage to be broken up by clients/daemons/etc.

2017-08-02 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1622#comment-1622
 ] 

Allen Wittenauer commented on HADOOP-13595:
---

It's actually not needed. IFS should get reset at the next line.

> Rework hadoop_usage to be broken up by clients/daemons/etc.
> ---
>
> Key: HADOOP-13595
> URL: https://issues.apache.org/jira/browse/HADOOP-13595
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-13595.00.patch, HADOOP-13595.01.patch, 
> HADOOP-13595.02.patch, HADOOP-13595.03.patch, HADOOP-13595.04.patch, 
> HADOOP-13595.05.patch
>
>
> Part of the feedback from HADOOP-13341 was that it wasn't obvious what was a 
> client and what was a daemon.  Reworking the hadoop_usage output so that it 
> is obvious helps fix this issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13595) Rework hadoop_usage to be broken up by clients/daemons/etc.

2017-08-02 Thread Sean Mackrory (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13595?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=1616#comment-1616
 ] 

Sean Mackrory commented on HADOOP-13595:


Thanks for the iteration. What's up with the oifs stuff missing from the latest patch?

> Rework hadoop_usage to be broken up by clients/daemons/etc.
> ---
>
> Key: HADOOP-13595
> URL: https://issues.apache.org/jira/browse/HADOOP-13595
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-13595.00.patch, HADOOP-13595.01.patch, 
> HADOOP-13595.02.patch, HADOOP-13595.03.patch, HADOOP-13595.04.patch, 
> HADOOP-13595.05.patch
>
>
> Part of the feedback from HADOOP-13341 was that it wasn't obvious what was a 
> client and what was a daemon.  Reworking the hadoop_usage output so that it 
> is obvious helps fix this issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14696) common's parallel tests don't work for Windows

2017-08-02 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16111096#comment-16111096
 ] 

Allen Wittenauer commented on HADOOP-14696:
---

This patch has been (forcibly) added to hadoop-trunk-win build #144:

{code}
[INFO] 
[INFO] --- maven-antrun-plugin:1.7:run (create-parallel-tests-dirs) @ 
hadoop-common ---
[INFO] Executing tasks

main:
[mkdir] Created dir: 
F:\jenkins\jenkins-slave\workspace\hadoop-trunk-win\s\hadoop-common-project\hadoop-common\target\test\data\1
[mkdir] Created dir: 
F:\jenkins\jenkins-slave\workspace\hadoop-trunk-win\s\hadoop-common-project\hadoop-common\target\test\data\2
[mkdir] Created dir: 
F:\jenkins\jenkins-slave\workspace\hadoop-trunk-win\s\hadoop-common-project\hadoop-common\target\test\data\3
[mkdir] Created dir: 
F:\jenkins\jenkins-slave\workspace\hadoop-trunk-win\s\hadoop-common-project\hadoop-common\target\test\data\4
[mkdir] Created dir: 
F:\jenkins\jenkins-slave\workspace\hadoop-trunk-win\s\hadoop-common-project\hadoop-common\target\test-dir\1
[mkdir] Created dir: 
F:\jenkins\jenkins-slave\workspace\hadoop-trunk-win\s\hadoop-common-project\hadoop-common\target\test-dir\2
[mkdir] Created dir: 
F:\jenkins\jenkins-slave\workspace\hadoop-trunk-win\s\hadoop-common-project\hadoop-common\target\test-dir\3
[mkdir] Created dir: 
F:\jenkins\jenkins-slave\workspace\hadoop-trunk-win\s\hadoop-common-project\hadoop-common\target\test-dir\4
[mkdir] Created dir: 
F:\jenkins\jenkins-slave\workspace\hadoop-trunk-win\s\hadoop-common-project\hadoop-common\target\test\1
[mkdir] Created dir: 
F:\jenkins\jenkins-slave\workspace\hadoop-trunk-win\s\hadoop-common-project\hadoop-common\target\test\2
[mkdir] Created dir: 
F:\jenkins\jenkins-slave\workspace\hadoop-trunk-win\s\hadoop-common-project\hadoop-common\target\test\3
[mkdir] Created dir: 
F:\jenkins\jenkins-slave\workspace\hadoop-trunk-win\s\hadoop-common-project\hadoop-common\target\test\4
[INFO] Executed tasks
[INFO] 
{code}


> common's parallel tests don't work for Windows
> --
>
> Key: HADOOP-14696
> URL: https://issues.apache.org/jira/browse/HADOOP-14696
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.0.0-beta1
> Environment: Windows
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Minor
> Attachments: HADOOP-14696.00.patch
>
>
> If hadoop-common-project/hadoop-common is run with the -Pparallel-tests flag, 
> it fails in create-parallel-tests-dirs from the pom.xml
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run 
> (create-parallel-tests-dirs) on project hadoop-common: An Ant BuildException 
> has occured: Directory 
> F:\jenkins\jenkins-slave\workspace\hadoop-trunk-win\s\hadoop-common-project\hadoop-common\jenkinsjenkins-slaveworkspacehadoop-trunk-winshadoop-common-projecthadoop-common
> arget\test\data\1 creation was not successful for an unknown reason
> [ERROR] around Ant part 

[jira] [Commented] (HADOOP-14696) common's parallel tests don't work for Windows

2017-08-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14696?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16111092#comment-16111092
 ] 

Hadoop QA commented on HADOOP-14696:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
40s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m  3s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 53m  6s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.security.TestKDiag |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14696 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880047/HADOOP-14696.00.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux 98de30d794d1 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5e4434f |
| Default Java | 1.8.0_131 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12932/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12932/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12932/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> common's parallel tests don't work for Windows
> --
>
> Key: HADOOP-14696
> URL: https://issues.apache.org/jira/browse/HADOOP-14696
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.0.0-beta1
> Environment: Windows
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Minor
> Attachments: HADOOP-14696.00.patch
>
>
> If hadoop-common-project/hadoop-common is run with the -Pparallel-tests flag, 
> it fails in create-parallel-tests-dirs from the pom.xml
> {code}
> [ERROR] Failed to execute goal 
> 

[jira] [Updated] (HADOOP-13595) Rework hadoop_usage to be broken up by clients/daemons/etc.

2017-08-02 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13595?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-13595:
--
Attachment: HADOOP-13595.05.patch

-05:
* fix shellcheck errors

> Rework hadoop_usage to be broken up by clients/daemons/etc.
> ---
>
> Key: HADOOP-13595
> URL: https://issues.apache.org/jira/browse/HADOOP-13595
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: scripts
>Affects Versions: 3.0.0-alpha2
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Blocker
> Attachments: HADOOP-13595.00.patch, HADOOP-13595.01.patch, 
> HADOOP-13595.02.patch, HADOOP-13595.03.patch, HADOOP-13595.04.patch, 
> HADOOP-13595.05.patch
>
>
> Part of the feedback from HADOOP-13341 was that it wasn't obvious what was a 
> client and what was a daemon.  Reworking the hadoop_usage output so that it 
> is obvious helps fix this issue.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13786) Add S3Guard committer for zero-rename commits to S3 endpoints

2017-08-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16111073#comment-16111073
 ] 

Hadoop QA commented on HADOOP-13786:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 42 new or modified test 
files. {color} |
|| || || || {color:brown} HADOOP-13345 Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  1m 
35s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 16m 
46s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 17m 
19s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 
15s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m  
8s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
21s{color} | {color:green} HADOOP-13345 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
26s{color} | {color:green} HADOOP-13345 passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
19s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 13m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 13m 
10s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
2m 15s{color} | {color:orange} root: The patch generated 41 new + 121 unchanged 
- 24 fixed = 162 total (was 145) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
23s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
0s{color} | {color:red} The patch has 30 line(s) that end in whitespace. Use 
git apply --whitespace=fix <>. Refer 
https://git-scm.com/docs/git-apply {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m  
5s{color} | {color:red} hadoop-tools/hadoop-aws generated 1 new + 0 unchanged - 
0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
4s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
28s{color} | {color:green} hadoop-yarn-project_hadoop-yarn_hadoop-yarn-registry 
generated 0 new + 45 unchanged - 3 fixed = 45 total (was 48) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
39s{color} | {color:green} hadoop-mapreduce-client-core in the patch passed. 
{color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
30s{color} | {color:red} hadoop-aws in the patch failed. {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  8m 59s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
0s{color} | {color:green} hadoop-yarn-registry in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  3m 16s{color} 
| {color:red} hadoop-mapreduce-client-core in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
23s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
45s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}115m 49s{color} | 
{color:black} {color} |
\\
\\
|| Reason 

[jira] [Updated] (HADOOP-14696) common's parallel tests don't work for Windows

2017-08-02 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-14696:
--
Priority: Minor  (was: Major)

> common's parallel tests don't work for Windows
> --
>
> Key: HADOOP-14696
> URL: https://issues.apache.org/jira/browse/HADOOP-14696
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.0.0-beta1
> Environment: Windows
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
>Priority: Minor
> Attachments: HADOOP-14696.00.patch
>
>
> If hadoop-common-project/hadoop-common is run with the -Pparallel-tests flag, 
> it fails in create-parallel-tests-dirs from the pom.xml
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run 
> (create-parallel-tests-dirs) on project hadoop-common: An Ant BuildException 
> has occured: Directory 
> F:\jenkins\jenkins-slave\workspace\hadoop-trunk-win\s\hadoop-common-project\hadoop-common\jenkinsjenkins-slaveworkspacehadoop-trunk-winshadoop-common-projecthadoop-common
> arget\test\data\1 creation was not successful for an unknown reason
> [ERROR] around Ant part 

[jira] [Updated] (HADOOP-14696) common's parallel tests don't work for Windows

2017-08-02 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-14696:
--
Attachment: HADOOP-14696.00.patch

-00:
* add path conversions to Unix form so that the JavaScript doesn't take e.g. 
\h as an escaped h (a sketch of the idea follows below).
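
A hedged sketch of the idea, not the patch's actual Ant/JavaScript wiring:
convert the Windows path to forward slashes before it reaches anything that
treats backslashes as string escapes.

{code}
#!/usr/bin/env bash
# example path only; single quotes keep the backslashes intact in bash
winpath='F:\jenkins\jenkins-slave\workspace\hadoop-trunk-win\s\target\test\data\1'

# a JavaScript string literal would collapse \j, \t, ... into j, <TAB>, ...
# which is exactly the mangled directory name in the error above
unixpath="${winpath//\\//}"   # replace every backslash with a forward slash
echo "${unixpath}"            # F:/jenkins/jenkins-slave/workspace/...
{code}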

> common's parallel tests don't work for Windows
> --
>
> Key: HADOOP-14696
> URL: https://issues.apache.org/jira/browse/HADOOP-14696
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.0.0-beta1
> Environment: Windows
>Reporter: Allen Wittenauer
> Attachments: HADOOP-14696.00.patch
>
>
> If hadoop-common-project/hadoop-common is run with the -Pparallel-tests flag, 
> it fails in create-parallel-tests-dirs from the pom.xml
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run 
> (create-parallel-tests-dirs) on project hadoop-common: An Ant BuildException 
> has occured: Directory 
> F:\jenkins\jenkins-slave\workspace\hadoop-trunk-win\s\hadoop-common-project\hadoop-common\jenkinsjenkins-slaveworkspacehadoop-trunk-winshadoop-common-projecthadoop-common
> arget\test\data\1 creation was not successful for an unknown reason
> [ERROR] around Ant part 

[jira] [Updated] (HADOOP-14696) common's parallel tests don't work for Windows

2017-08-02 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14696?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-14696:
--
Assignee: Allen Wittenauer
  Status: Patch Available  (was: Open)

> common's parallel tests don't work for Windows
> --
>
> Key: HADOOP-14696
> URL: https://issues.apache.org/jira/browse/HADOOP-14696
> Project: Hadoop Common
>  Issue Type: Test
>  Components: test
>Affects Versions: 3.0.0-beta1
> Environment: Windows
>Reporter: Allen Wittenauer
>Assignee: Allen Wittenauer
> Attachments: HADOOP-14696.00.patch
>
>
> If hadoop-common-project/hadoop-common is run with the -Pparallel-tests flag, 
> it fails in create-parallel-tests-dirs from the pom.xml
> {code}
> [ERROR] Failed to execute goal 
> org.apache.maven.plugins:maven-antrun-plugin:1.7:run 
> (create-parallel-tests-dirs) on project hadoop-common: An Ant BuildException 
> has occured: Directory 
> F:\jenkins\jenkins-slave\workspace\hadoop-trunk-win\s\hadoop-common-project\hadoop-common\jenkinsjenkins-slaveworkspacehadoop-trunk-winshadoop-common-projecthadoop-common
> arget\test\data\1 creation was not successful for an unknown reason
> [ERROR] around Ant part 

[jira] [Commented] (HADOOP-13250) jdiff and dependency reports aren't linked in site web pages

2017-08-02 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13250?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110981#comment-16110981
 ] 

Allen Wittenauer commented on HADOOP-13250:
---

I'm not sure what generates the dependency reports...

> jdiff and dependency reports aren't linked in site web pages
> 
>
> Key: HADOOP-13250
> URL: https://issues.apache.org/jira/browse/HADOOP-13250
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: build, documentation
>Affects Versions: 3.0.0-alpha1
>Reporter: Allen Wittenauer
>Assignee: Vinod Kumar Vavilapalli
>Priority: Critical
>
> Even though they are in the site tar ball (after HADOOP-13245), they aren't 
> actually reachable.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14439) regression: secret stripping from S3x URIs breaks some downstream code

2017-08-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110896#comment-16110896
 ] 

Hadoop QA commented on HADOOP-14439:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 13m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
16s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 11s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 1 
new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
33s{color} | {color:red} hadoop-tools/hadoop-aws generated 1 new + 0 unchanged 
- 0 fixed = 1 total (was 0) {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 
11s{color} | {color:red} hadoop-tools_hadoop-aws generated 1 new + 0 unchanged 
- 0 fixed = 1 total (was 0) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
13s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 19m 17s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| FindBugs | module:hadoop-tools/hadoop-aws |
|  |  Possible null pointer dereference of authority in 
org.apache.hadoop.fs.s3native.S3xLoginHelper.buildFSURI(URI)  Dereferenced at 
S3xLoginHelper.java:authority in 
org.apache.hadoop.fs.s3native.S3xLoginHelper.buildFSURI(URI)  Dereferenced at 
S3xLoginHelper.java:[line 77] |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14439 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12880030/HADOOP-14439-01.patch 
|
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 9c0a88ded5ec 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 5e4434f |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12930/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
| findbugs | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12930/artifact/patchprocess/new-findbugs-hadoop-tools_hadoop-aws.html
 |
| javadoc | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12930/artifact/patchprocess/diff-javadoc-javadoc-hadoop-tools_hadoop-aws.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12930/testReport/ |

[jira] [Updated] (HADOOP-13786) Add S3Guard committer for zero-rename commits to S3 endpoints

2017-08-02 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13786:

Attachment: HADOOP-13786-HADOOP-13345-035.patch

Patch 035

* Merges in some changes from Ewan Higgs
* per-scheme factory so that different filesystems/object stores can declare 
their own committer without interfering with the choice of the others.

Example:
{code}
mapreduce.pathoutputcommitter.factory.scheme.s3a=org.apache.hadoop.fs.s3a.commit.DynamicCommitterFactory
{code}

This switches s3a to using a committer factory whose choice of committer is 
then based on the value of {{fs.s3a.committer.name}}. All other filesystems 
get the default committer set in the option 
{{"mapreduce.pathoutputcommitter.factory.class"}}.

> Add S3Guard committer for zero-rename commits to S3 endpoints
> -
>
> Key: HADOOP-13786
> URL: https://issues.apache.org/jira/browse/HADOOP-13786
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13786-HADOOP-13345-001.patch, 
> HADOOP-13786-HADOOP-13345-002.patch, HADOOP-13786-HADOOP-13345-003.patch, 
> HADOOP-13786-HADOOP-13345-004.patch, HADOOP-13786-HADOOP-13345-005.patch, 
> HADOOP-13786-HADOOP-13345-006.patch, HADOOP-13786-HADOOP-13345-006.patch, 
> HADOOP-13786-HADOOP-13345-007.patch, HADOOP-13786-HADOOP-13345-009.patch, 
> HADOOP-13786-HADOOP-13345-010.patch, HADOOP-13786-HADOOP-13345-011.patch, 
> HADOOP-13786-HADOOP-13345-012.patch, HADOOP-13786-HADOOP-13345-013.patch, 
> HADOOP-13786-HADOOP-13345-015.patch, HADOOP-13786-HADOOP-13345-016.patch, 
> HADOOP-13786-HADOOP-13345-017.patch, HADOOP-13786-HADOOP-13345-018.patch, 
> HADOOP-13786-HADOOP-13345-019.patch, HADOOP-13786-HADOOP-13345-020.patch, 
> HADOOP-13786-HADOOP-13345-021.patch, HADOOP-13786-HADOOP-13345-022.patch, 
> HADOOP-13786-HADOOP-13345-023.patch, HADOOP-13786-HADOOP-13345-024.patch, 
> HADOOP-13786-HADOOP-13345-025.patch, HADOOP-13786-HADOOP-13345-026.patch, 
> HADOOP-13786-HADOOP-13345-027.patch, HADOOP-13786-HADOOP-13345-028.patch, 
> HADOOP-13786-HADOOP-13345-028.patch, HADOOP-13786-HADOOP-13345-029.patch, 
> HADOOP-13786-HADOOP-13345-030.patch, HADOOP-13786-HADOOP-13345-031.patch, 
> HADOOP-13786-HADOOP-13345-032.patch, HADOOP-13786-HADOOP-13345-033.patch, 
> HADOOP-13786-HADOOP-13345-035.patch, objectstore.pdf, s3committer-master.zip
>
>
> A goal of this code is "support O(1) commits to S3 repositories in the 
> presence of failures". Implement it, including whatever is needed to 
> demonstrate the correctness of the algorithm. (that is, assuming that s3guard 
> provides a consistent view of the presence/absence of blobs, show that we can 
> commit directly).
> I consider ourselves free to expose the blobstore-ness of the s3 output 
> streams (ie. not visible until the close()), if we need to use that to allow 
> us to abort commit operations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13786) Add S3Guard committer for zero-rename commits to S3 endpoints

2017-08-02 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13786?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13786:

Status: Patch Available  (was: Open)

> Add S3Guard committer for zero-rename commits to S3 endpoints
> -
>
> Key: HADOOP-13786
> URL: https://issues.apache.org/jira/browse/HADOOP-13786
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: fs/s3
>Affects Versions: HADOOP-13345
>Reporter: Steve Loughran
>Assignee: Steve Loughran
> Attachments: HADOOP-13786-HADOOP-13345-001.patch, 
> HADOOP-13786-HADOOP-13345-002.patch, HADOOP-13786-HADOOP-13345-003.patch, 
> HADOOP-13786-HADOOP-13345-004.patch, HADOOP-13786-HADOOP-13345-005.patch, 
> HADOOP-13786-HADOOP-13345-006.patch, HADOOP-13786-HADOOP-13345-006.patch, 
> HADOOP-13786-HADOOP-13345-007.patch, HADOOP-13786-HADOOP-13345-009.patch, 
> HADOOP-13786-HADOOP-13345-010.patch, HADOOP-13786-HADOOP-13345-011.patch, 
> HADOOP-13786-HADOOP-13345-012.patch, HADOOP-13786-HADOOP-13345-013.patch, 
> HADOOP-13786-HADOOP-13345-015.patch, HADOOP-13786-HADOOP-13345-016.patch, 
> HADOOP-13786-HADOOP-13345-017.patch, HADOOP-13786-HADOOP-13345-018.patch, 
> HADOOP-13786-HADOOP-13345-019.patch, HADOOP-13786-HADOOP-13345-020.patch, 
> HADOOP-13786-HADOOP-13345-021.patch, HADOOP-13786-HADOOP-13345-022.patch, 
> HADOOP-13786-HADOOP-13345-023.patch, HADOOP-13786-HADOOP-13345-024.patch, 
> HADOOP-13786-HADOOP-13345-025.patch, HADOOP-13786-HADOOP-13345-026.patch, 
> HADOOP-13786-HADOOP-13345-027.patch, HADOOP-13786-HADOOP-13345-028.patch, 
> HADOOP-13786-HADOOP-13345-028.patch, HADOOP-13786-HADOOP-13345-029.patch, 
> HADOOP-13786-HADOOP-13345-030.patch, HADOOP-13786-HADOOP-13345-031.patch, 
> HADOOP-13786-HADOOP-13345-032.patch, HADOOP-13786-HADOOP-13345-033.patch, 
> HADOOP-13786-HADOOP-13345-035.patch, objectstore.pdf, s3committer-master.zip
>
>
> A goal of this code is "support O(1) commits to S3 repositories in the 
> presence of failures". Implement it, including whatever is needed to 
> demonstrate the correctness of the algorithm. (that is, assuming that s3guard 
> provides a consistent view of the presence/absence of blobs, show that we can 
> commit directly).
> I consider ourselves free to expose the blobstore-ness of the s3 output 
> streams (ie. not visible until the close()), if we need to use that to allow 
> us to abort commit operations.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14439) regression: secret stripping from S3x URIs breaks some downstream code

2017-08-02 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HADOOP-14439:
---
Status: Patch Available  (was: Open)

> regression: secret stripping from S3x URIs breaks some downstream code
> --
>
> Key: HADOOP-14439
> URL: https://issues.apache.org/jira/browse/HADOOP-14439
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: Spark 2.1
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-14439-01.patch
>
>
> Surfaced in SPARK-20799
> Spark is listing the contents of a path with getFileStatus(path), then 
> using the returned path value to do a lookup of the contents.
> Apparently the lookup is failing to find files if you have a secret in the 
> key, {{s3a://key:secret@bucket/path}}. 
> Presumably this is because the stripped values aren't matching.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-14439) regression: secret stripping from S3x URIs breaks some downstream code

2017-08-02 Thread Vinayakumar B (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110842#comment-16110842
 ] 

Vinayakumar B edited comment on HADOOP-14439 at 8/2/17 1:00 PM:


Re-adding the secret to the S3x URI in the same encoded format the secret was 
supplied in.

The highlights of the changes:

1. Builds the FS URI containing only the scheme and authority parts.
2. Strips the secret part if {{user}} was not provided in the {{userinfo}} 
section.
3. Encodes the secret part, even if the originally passed URI did not contain 
an encoded secret.

So a direct comparison of the FS URI and the provided URI might still fail in 
the above cases.

Please review.

Ran the TestS3xLoginHelper test; didn't run any integration tests, as I don't 
have any accessible S3 environments.


was (Author: vinayrpet):
Re-adding the secret to the S3x URI in the same encoded format the secret was 
supplied in.

The highlights of the changes:

1. Builds the FS URI containing only the scheme and authority parts.
2. Strips the secret part if {{user}} was not provided in the {{userinfo}} 
section.
3. Encodes the secret part, even if the originally passed URI did not contain 
an encoded secret.

So a direct comparison of the FS URI and the provided URI might still fail in 
the above cases.

Please review.

> regression: secret stripping from S3x URIs breaks some downstream code
> --
>
> Key: HADOOP-14439
> URL: https://issues.apache.org/jira/browse/HADOOP-14439
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: Spark 2.1
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-14439-01.patch
>
>
> Surfaced in SPARK-20799
> Spark is listing the contents of a path with getFileStatus(path), then 
> using the returned path value to do a lookup of the contents.
> Apparently the lookup is failing to find files if you have a secret in the 
> key, {{s3a://key:secret@bucket/path}}. 
> Presumably this is because the stripped values aren't matching.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14439) regression: secret stripping from S3x URIs breaks some downstream code

2017-08-02 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HADOOP-14439:
---
Attachment: HADOOP-14439-01.patch

Re-adding the secret to the S3x URI in the same encoded format the secret was 
supplied in.

The highlights of the changes:

1. Builds the FS URI containing only the scheme and authority parts.
2. Strips the secret part if {{user}} was not provided in the {{userinfo}} 
section.
3. Encodes the secret part, even if the originally passed URI did not contain 
an encoded secret.

So a direct comparison of the FS URI and the provided URI might still fail in 
the above cases.

Please review.

> regression: secret stripping from S3x URIs breaks some downstream code
> --
>
> Key: HADOOP-14439
> URL: https://issues.apache.org/jira/browse/HADOOP-14439
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: Spark 2.1
>Reporter: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-14439-01.patch
>
>
> Surfaced in SPARK-20799
> Spark is listing the contents of a path with getFileStatus(path), then 
> looking up the path value doing a lookup of the contents.
> Apparently the lookup is failing to find files if you have a secret in the 
> key, {{s3a://key:secret@bucket/path}}. 
> Presumably this is because the stripped values aren't matching.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14661) S3A to support Requester Pays Buckets using

2017-08-02 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14661?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110804#comment-16110804
 ] 

Steve Loughran commented on HADOOP-14661:
-

+ a mention in the s3a docs

> S3A to support Requester Pays Buckets using
> ---
>
> Key: HADOOP-14661
> URL: https://issues.apache.org/jira/browse/HADOOP-14661
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: common, util
>Affects Versions: 3.0.0-alpha3
>Reporter: Mandus Momberg
>Assignee: Mandus Momberg
>Priority: Minor
> Attachments: HADOOP-14661.patch
>
>   Original Estimate: 2h
>  Remaining Estimate: 2h
>
> Amazon S3 has the ability to charge the requester for the cost of accessing 
> S3. This is called Requester Pays Buckets. 
> In order to access these buckets, each request needs to be signed with a 
> specific header. 
> http://docs.aws.amazon.com/AmazonS3/latest/dev/RequesterPaysBuckets.html



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14723) reinstate URI parameter in AWSCredentialProvider constructors

2017-08-02 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14723?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14723:

Target Version/s: 2.9.0, 3.0.0-beta1

> reinstate URI parameter in AWSCredentialProvider constructors
> -
>
> Key: HADOOP-14723
> URL: https://issues.apache.org/jira/browse/HADOOP-14723
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.9.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>
> I need to revert HADOOP-14135 "Remove URI parameter in AWSCredentialProvider 
> constructors", as knowing the bucket in use is needed for
> * HADOOP-14507: per-bucket secrets in JCEKS files
> * HADOOP-14556: delegation tokens in S3A
> These providers need the URI, as they need it to decide which keys to scan 
> for/which token to look up.
> I know we pulled it out to allow us to talk to DDB without needing an FS URI, 
> but for these specific cases it is needed; we just won't be able to use these 
> specific auth providers to talk to AWS except via an S3 bucket. 
> Rather than just revert the patch, I propose waiting for s3guard phase I to 
> be merged into trunk, then doing it, with the JCEKS auth mechanism set up to 
> skip looking for a per-bucket secret and key if it doesn't know its bucket 
> name.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-14723) reinstate URI parameter in AWSCredentialProvider constructors

2017-08-02 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-14723:
---

 Summary: reinstate URI parameter in AWSCredentialProvider 
constructors
 Key: HADOOP-14723
 URL: https://issues.apache.org/jira/browse/HADOOP-14723
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 2.9.0
Reporter: Steve Loughran
Assignee: Steve Loughran


I need to revert HADOOP-14135 "Remove URI parameter in AWSCredentialProvider 
constructors", as knowing the bucket in use is needed for

* HADOOP-14507: per-bucket secrets in JCEKS files
* HADOOP-14556: delegation tokens in S3A

These providers need the URI, as they need it to decide which keys to scan 
for/which token to look up.

I know we pulled it out to allow us to talk to DDB without needing an FS URI, 
but for these specific cases it is needed; we just won't be able to use these 
specific auth providers to talk to AWS except via an S3 bucket. 

Rather than just revert the patch, I propose waiting for s3guard phase I to be 
merged into trunk, then doing it, with the JCEKS auth mechanism set up to skip 
looking for a per-bucket secret and key if it doesn't know its bucket name.




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14709) Fix checkstyle warnings in ContractTestUtils

2017-08-02 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14709?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110769#comment-16110769
 ] 

Hudson commented on HADOOP-14709:
-

SUCCESS: Integrated in Jenkins build Hadoop-trunk-Commit #12102 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/12102/])
HADOOP-14709. Fix checkstyle warnings in ContractTestUtils. Contributed 
(stevel: rev 5e4434f62890eb60048e8132ebe89e0c2a9580db)
* (edit) 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/contract/ContractTestUtils.java


> Fix checkstyle warnings in ContractTestUtils
> 
>
> Key: HADOOP-14709
> URL: https://issues.apache.org/jira/browse/HADOOP-14709
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.8.1
>Reporter: Steve Loughran
>Assignee: Thomas Marquardt
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-14709-001.patch, HADOOP-14709-branch-2-001.patch
>
>
> {{ContractTestUtils}} is generating a lot of minor checkstyle complaints 
> which make patching against the file noisier. Clean up
> (based on work in HADOOP-14660)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14709) Fix checkstyle warnings in ContractTestUtils

2017-08-02 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14709:

   Resolution: Fixed
Fix Version/s: 3.0.0-beta1
   2.9.0
   Status: Resolved  (was: Patch Available)

+1
committed to branch-2 & trunk. Thanks!

> Fix checkstyle warnings in ContractTestUtils
> 
>
> Key: HADOOP-14709
> URL: https://issues.apache.org/jira/browse/HADOOP-14709
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.8.1
>Reporter: Steve Loughran
>Assignee: Thomas Marquardt
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-beta1
>
> Attachments: HADOOP-14709-001.patch, HADOOP-14709-branch-2-001.patch
>
>
> {{ContractTestUtils}} is generating a lot of minor checkstyle complaints 
> which make patching against the file noisier. Clean up
> (based on work in HADOOP-14660)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14709) Fix checkstyle warnings in ContractTestUtils

2017-08-02 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14709?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-14709:

Summary: Fix checkstyle warnings in ContractTestUtils  (was: fix checkstyle 
warnings in ContractTestUtils)

> Fix checkstyle warnings in ContractTestUtils
> 
>
> Key: HADOOP-14709
> URL: https://issues.apache.org/jira/browse/HADOOP-14709
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: test
>Affects Versions: 2.8.1
>Reporter: Steve Loughran
>Assignee: Thomas Marquardt
>Priority: Minor
> Attachments: HADOOP-14709-001.patch, HADOOP-14709-branch-2-001.patch
>
>
> {{ContractTestUtils}} is generating a lot of minor checkstyle complaints 
> which make patching against the file noisier. Clean up
> (based on work in HADOOP-14660)



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14698) Make copyFromLocal's -t option available for put as well

2017-08-02 Thread Mukul Kumar Singh (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110734#comment-16110734
 ] 

Mukul Kumar Singh commented on HADOOP-14698:


Thanks [~boky01].

In my opinion, option 2 will be the simplest (we can disallow using -t in the 
case of moveFromLocal).

We can also consider option 1, in which we would need to do the following:
1) if the path is a file, postProcessPath can run immediately after the copy 
of the file succeeds;
2) post-processing is skipped for directories, and all the directories can 
then finally be cleaned up by traversing the entire tree again and deleting 
them.


> Make copyFromLocal's -t option available for put as well
> 
>
> Key: HADOOP-14698
> URL: https://issues.apache.org/jira/browse/HADOOP-14698
> Project: Hadoop Common
>  Issue Type: Improvement
>Reporter: Andras Bokor
>Assignee: Andras Bokor
> Attachments: HADOOP-14698.01.patch, HADOOP-14698.02.patch
>
>
> After HDFS-11786 copyFromLocal and put are no longer identical.
> I do not see any reason not to add the new feature to put as well.
> Being non-identical makes the command more complicated to understand and use 
> from the user's point of view.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14389) Exception handling is incorrect in KerberosName.java

2017-08-02 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110609#comment-16110609
 ] 

Steve Loughran commented on HADOOP-14389:
-

Thanks: that makes it something you are free to use :). Guava is something I 
am coming to fear.

> Exception handling is incorrect in KerberosName.java
> 
>
> Key: HADOOP-14389
> URL: https://issues.apache.org/jira/browse/HADOOP-14389
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>  Labels: supportability
> Attachments: HADOOP-14389.01.patch, HADOOP-14389.02.patch
>
>
> I found multiple inconsistencies:
> Rule: {{RULE:\[2:$1/$2\@$3\](.\*)s/.\*/hdfs/}}
> Principal: {{nn/host.dom...@realm.tld}}
> Expected exception: {{BadStringFormat: ...3 is out of range...}}
> Actual exception: {{ArrayIndexOutOfBoundsException: 3}}
> 
> Rule: {{RULE:\[:$1/$2\@$0](.\*)s/.\*/hdfs/}} (Missing num of components)
> Expected: {{IllegalArgumentException}}
> Actual: {{java.lang.NumberFormatException: For input string: ""}}
> 
> Rule: {{RULE:\[2:$-1/$2\@$3\](.\*)s/.\*/hdfs/}}
> Expected {{BadStringFormat: -1 is outside of valid range...}}
> Actual: {{java.lang.NumberFormatException: For input string: ""}}
> 
> Rule: {{RULE:\[2:$one/$2\@$3\](.\*)s/.\*/hdfs/}}
> Expected {{java.lang.NumberFormatException: For input string: "one"}}
> Actual {{java.lang.NumberFormatException: For input string: ""}}
> 
> In addition:
> {code}[^\\]]{code}
> does not really make sense in {{ruleParser}}. Most probably it was needed 
> because we parse the whole rule string and remove the parsed rule from the 
> beginning of the string ({{KerberosName#parseRules}}); without it, the regex 
> engine parsed incorrectly.
> In addition:
> Some corner cases are not covered in the tests.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14716) SwiftNativeFileSystem should not eat the exception when rename

2017-08-02 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110607#comment-16110607
 ] 

Steve Loughran commented on HADOOP-14716:
-

This is actually a fundamental issue with rename() itself: many failures 
(source doesn't exist, destination is a file) are required to be caught and 
downgraded to a "return false", which hides many problems and means that 
application code often goes 

{code}
if (!fs.rename(src, dest)) throw new IOException("rename failed")
{code}

# There's a protected rename operation, {{FileSystem.rename(final Path src, 
final Path dst, final Rename... options)}}, which I've proposed making public 
and adopting broadly for renaming; that means "more spec, tests, update uses". 
See HADOOP-11452 for details. I've not done anything on that since January.
# We ought to split out "exceptions to swallow" from "exceptions to throw up": 
authentication and networking failures should be thrown. If the Swift client 
is catching them too, it shouldn't. Patches welcome.
# And yes, logging too.

Take a look at 
[https://github.com/apache/hadoop/blob/trunk/hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AFileSystem.java#L690]
 to see how rename failures are handled there: the {{innerRename()}} method 
does throw exceptions, including explicit ones when preconditions are not met; 
the outer one catches and downgrades the exceptions.

> SwiftNativeFileSystem should not eat the exception when rename
> --
>
> Key: HADOOP-14716
> URL: https://issues.apache.org/jira/browse/HADOOP-14716
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: tools
>Affects Versions: 2.8.1, 3.0.0-alpha3
>Reporter: Chen He
>Assignee: Chen He
>Priority: Minor
>
> Currently, "rename" will eat exceptions and return "false" in 
> SwiftNativeFileSystem. It is not easy for the user to find the root cause of 
> why a rename failed. It should, at least, write out some logs instead of 
> directly eating these exceptions.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-3733) "s3:" URLs break when Secret Key contains a slash, even if encoded

2017-08-02 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-3733?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110580#comment-16110580
 ] 

Steve Loughran commented on HADOOP-3733:


I think that seems a reasonable strategy. Can you supply a patch on the 
HADOOP-14439 JIRA? We should also look and see if that fixes Spark's issues.
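
For reference, the encoding step the report below describes is just the 
standard JDK call; the secret key is the made-up one from the report.

{code}
import java.io.UnsupportedEncodingException;
import java.net.URLEncoder;

public class EncodeSecret {
  public static void main(String[] args) throws UnsupportedEncodingException {
    // The '/' in the secret becomes %2F; the bug is that even this encoded
    // form was mishandled when embedded in an s3:// URI.
    String secret = "Xqj1/NMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv";
    System.out.println(URLEncoder.encode(secret, "UTF-8"));
    // -> Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
  }
}
{code}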

> "s3:" URLs break when Secret Key contains a slash, even if encoded
> --
>
> Key: HADOOP-3733
> URL: https://issues.apache.org/jira/browse/HADOOP-3733
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 0.17.1, 2.0.2-alpha
>Reporter: Stuart Sierra
>Assignee: Steve Loughran
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha1
>
> Attachments: HADOOP-3733-20130223T011025Z.patch, 
> HADOOP-3733-branch-2-001.patch, HADOOP-3733-branch-2-002.patch, 
> HADOOP-3733-branch-2-003.patch, HADOOP-3733-branch-2-004.patch, 
> HADOOP-3733-branch-2-005.patch, HADOOP-3733-branch-2-006.patch, 
> HADOOP-3733-branch-2-007.patch, hadoop-3733.patch, HADOOP-3733.patch
>
>
> When using URLs of the form s3://ID:SECRET@BUCKET/ at the command line, 
> distcp fails if the SECRET contains a slash, even when the slash is 
> URL-encoded as %2F.
> Say your AWS Access Key ID is RYWX12N9WCY42XVOL8WH
> And your AWS Secret Key is Xqj1/NMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
> And your bucket is called "mybucket"
> You can URL-encode the Secret Key as 
> Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv
> But this doesn't work:
> {noformat}
> $ bin/hadoop distcp file:///source  
> s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
> 08/07/09 15:05:22 INFO util.CopyFiles: srcPaths=[file:///source]
> 08/07/09 15:05:22 INFO util.CopyFiles: 
> destPath=s3://RYWX12N9WCY42XVOL8WH:Xqj1%2FNMvKBhl1jqKlzbYJS66ua0e8z7Kkvptl9bv@mybucket/dest
> 08/07/09 15:05:23 WARN httpclient.RestS3Service: Unable to access bucket: 
> mybucket
> org.jets3t.service.S3ServiceException: S3 HEAD request failed. 
> ResponseCode=403, ResponseMessage=Forbidden
> at 
> org.jets3t.service.impl.rest.httpclient.RestS3Service.performRequest(RestS3Service.java:339)
> ...
> With failures, global counters are inaccurate; consider running with -i
> Copy failed: org.apache.hadoop.fs.s3.S3Exception: 
> org.jets3t.service.S3ServiceException: S3 PUT failed. XML Error Message: 
>  encoding="UTF-8"?>SignatureDoesNotMatchThe 
> request signature we calculated does not match the signature you provided. 
> Check your key and signing method.
> at 
> org.apache.hadoop.fs.s3.Jets3tFileSystemStore.createBucket(Jets3tFileSystemStore.java:141)
> ...
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14389) Exception handling is incorrect in KerberosName.java

2017-08-02 Thread Andras Bokor (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14389?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110567#comment-16110567
 ] 

Andras Bokor commented on HADOOP-14389:
---

[~ste...@apache.org],

{{Splitter.split(final CharSequence sequence)}} has not gone away in later 
versions.
[I checked Guava's master branch and the method is still 
there.|https://github.com/google/guava/blob/master/guava/src/com/google/common/base/Splitter.java#L375]
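
For reference, a minimal use of that instance method (standard Guava API; the 
principal string is just an example):

{code}
import com.google.common.base.Splitter;

public class SplitterDemo {
  public static void main(String[] args) {
    // The static factory configures the separator; the instance method
    // Splitter.split(CharSequence) does the actual splitting.
    Iterable<String> parts = Splitter.on('@').split("nn/host.domain@REALM.TLD");
    parts.forEach(System.out::println); // nn/host.domain, then REALM.TLD
  }
}
{code}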

> Exception handling is incorrect in KerberosName.java
> 
>
> Key: HADOOP-14389
> URL: https://issues.apache.org/jira/browse/HADOOP-14389
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Andras Bokor
>Assignee: Andras Bokor
>  Labels: supportability
> Attachments: HADOOP-14389.01.patch, HADOOP-14389.02.patch
>
>
> I found multiple inconsistencies:
> Rule: {{RULE:\[2:$1/$2\@$3\](.\*)s/.\*/hdfs/}}
> Principal: {{nn/host.dom...@realm.tld}}
> Expected exception: {{BadStringFormat: ...3 is out of range...}}
> Actual exception: {{ArrayIndexOutOfBoundsException: 3}}
> 
> Rule: {{RULE:\[:$1/$2\@$0](.\*)s/.\*/hdfs/}} (missing number of components)
> Expected: {{IllegalArgumentException}}
> Actual: {{java.lang.NumberFormatException: For input string: ""}}
> 
> Rule: {{RULE:\[2:$-1/$2\@$3\](.\*)s/.\*/hdfs/}}
> Expected: {{BadStringFormat: -1 is outside of valid range...}}
> Actual: {{java.lang.NumberFormatException: For input string: ""}}
> 
> Rule: {{RULE:\[2:$one/$2\@$3\](.\*)s/.\*/hdfs/}}
> Expected: {{java.lang.NumberFormatException: For input string: "one"}}
> Actual: {{java.lang.NumberFormatException: For input string: ""}}
> 
> In addition:
> {code}[^\\]]{code}
> does not really make sense in {{ruleParser}}. Most probably it was needed 
> because we parse the whole rule string and remove each parsed rule from the 
> beginning of the string in {{KerberosName#parseRules}}; without it the regex 
> engine matched more than the current rule.
> In addition:
> some corner cases are not covered by the tests.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-14706) Adding a helper method to determine whether a log is Log4j implement

2017-08-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110566#comment-16110566
 ] 

Hadoop QA commented on HADOOP-14706:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue} 17m 
52s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} branch-2 Compile Tests {color} ||
| {color:red}-1{color} | {color:red} mvninstall {color} | {color:red}  4m 
11s{color} | {color:red} root in branch-2 failed. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  2m 
15s{color} | {color:red} root in branch-2 failed with JDK v1.8.0_131. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  2m 
20s{color} | {color:red} root in branch-2 failed with JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
28s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
7s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
55s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} branch-2 passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
55s{color} | {color:green} branch-2 passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  2m 
15s{color} | {color:red} root in the patch failed with JDK v1.8.0_131. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  2m 15s{color} 
| {color:red} root in the patch failed with JDK v1.8.0_131. {color} |
| {color:red}-1{color} | {color:red} compile {color} | {color:red}  2m 
17s{color} | {color:red} root in the patch failed with JDK v1.7.0_131. {color} |
| {color:red}-1{color} | {color:red} javac {color} | {color:red}  2m 17s{color} 
| {color:red} root in the patch failed with JDK v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
27s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
5s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
1s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  2m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
49s{color} | {color:green} the patch passed with JDK v1.8.0_131 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} the patch passed with JDK v1.7.0_131 {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
46s{color} | {color:green} hadoop-common in the patch passed with JDK 
v1.7.0_131. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 64m  5s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:5e40efe |
| JIRA Issue | HADOOP-14706 |
| GITHUB PR | https://github.com/apache/hadoop/pull/258 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  findbugs  checkstyle  |
| uname | Linux 04f5d18729b0 3.13.0-116-generic #163-Ubuntu SMP Fri Mar 31 
14:13:22 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | branch-2 / 6ee0fe7 |
| Default Java | 1.7.0_131 |
| Multi-JDK versions |  /usr/lib/jvm/java-8-oracle:1.8.0_131 

[jira] [Commented] (HADOOP-14475) Metrics of S3A don't print out when enable it in Hadoop metrics property file

2017-08-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110545#comment-16110545
 ] 

Hadoop QA commented on HADOOP-14475:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
18s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 5s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 11s{color} | {color:orange} hadoop-tools/hadoop-aws: The patch generated 6 
new + 23 unchanged - 0 fixed = 29 total (was 23) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
20s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
15s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 19m 34s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14475 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879983/HADOOP-14475.006.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 5cee9acb71d3 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f9139ac |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12929/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-aws.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12929/testReport/ |
| modules | C: hadoop-tools/hadoop-aws U: hadoop-tools/hadoop-aws |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12929/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Metrics of S3A don't print out  when enable it in Hadoop metrics property file
> --
>
> Key: HADOOP-14475
> URL: https://issues.apache.org/jira/browse/HADOOP-14475
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: uname -a
> Linux client01 4.4.0-74-generic #95-Ubuntu 

[jira] [Commented] (HADOOP-14722) Azure: BlockBlobInputStream position incorrect after seek

2017-08-02 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-14722?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16110510#comment-16110510
 ] 

Hadoop QA commented on HADOOP-14722:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
17s{color} | {color:blue} Docker mode activated. {color} |
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
|| || || || {color:brown} trunk Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 14m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
28s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 11s{color} | {color:orange} hadoop-tools/hadoop-azure: The patch generated 1 
new + 27 unchanged - 0 fixed = 28 total (was 27) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  2m  
3s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
16s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 21m 18s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:14b5c93 |
| JIRA Issue | HADOOP-14722 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12879977/HADOOP-14722-002.patch
 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 4d22356774aa 3.13.0-117-generic #164-Ubuntu SMP Fri Apr 7 
11:05:26 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / f9139ac |
| Default Java | 1.8.0_131 |
| findbugs | v3.1.0-RC1 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12928/artifact/patchprocess/diff-checkstyle-hadoop-tools_hadoop-azure.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12928/testReport/ |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/12928/console |
| Powered by | Apache Yetus 0.6.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Azure: BlockBlobInputStream position incorrect after seek
> -
>
> Key: HADOOP-14722
> URL: https://issues.apache.org/jira/browse/HADOOP-14722
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Thomas Marquardt
>Assignee: Thomas Marquardt
> Attachments: HADOOP-14722-001.patch, 

[jira] [Updated] (HADOOP-14475) Metrics of S3A don't print out when enable it in Hadoop metrics property file

2017-08-02 Thread Yonger (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yonger updated HADOOP-14475:

Attachment: HADOOP-14475.006.patch

Update:
1. Add ASF license for the new test class.

> Metrics of S3A don't print out  when enable it in Hadoop metrics property file
> --
>
> Key: HADOOP-14475
> URL: https://issues.apache.org/jira/browse/HADOOP-14475
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.8.0
> Environment: uname -a
> Linux client01 4.4.0-74-generic #95-Ubuntu SMP Wed Apr 12 09:50:34 UTC 2017 
> x86_64 x86_64 x86_64 GNU/Linux
>  cat /etc/issue
> Ubuntu 16.04.2 LTS \n \l
>Reporter: Yonger
>Assignee: Yonger
> Attachments: failsafe-report-s3a-it.html, 
> failsafe-report-s3a-scale.html, failsafe-report-scale.html, 
> failsafe-report-scale.zip, HADOOP-14475.002.patch, HADOOP-14475-003.patch, 
> HADOOP-14475.005.patch, HADOOP-14475.006.patch, s3a-metrics.patch1, stdout.zip
>
>
> *.sink.file.class=org.apache.hadoop.metrics2.sink.FileSink
> #*.sink.file.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> #*.sink.influxdb.url=http:/xx
> #*.sink.influxdb.influxdb_port=8086
> #*.sink.influxdb.database=hadoop
> #*.sink.influxdb.influxdb_username=hadoop
> #*.sink.influxdb.influxdb_password=hadoop
> #*.sink.ingluxdb.cluster=c1
> *.period=10
> #namenode.sink.influxdb.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> #S3AFileSystem.sink.influxdb.class=org.apache.hadoop.metrics2.sink.influxdb.InfluxdbSink
> S3AFileSystem.sink.file.filename=s3afilesystem-metrics.out
> I can't find the output file even when I run an MR job that should use s3.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-14722) Azure: BlockBlobInputStream position incorrect after seek

2017-08-02 Thread Thomas Marquardt (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-14722?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Thomas Marquardt updated HADOOP-14722:
--
Attachment: HADOOP-14722-002.patch

Attaching HADOOP-14722-002.patch.  This addresses the findbugs warning.
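
For anyone skimming, the usual shape of this kind of fix (field names here are 
hypothetical, not the actual BlockBlobInputStream members) is to subtract the 
unread buffered bytes from the position the service has already seen:

{code}
// Hypothetical sketch: the blob service position runs ahead of the logical
// stream position by however many buffered bytes the caller has not read yet.
class BufferedPositionSketch {
  private long downloadedPosition; // bytes fetched from the service so far
  private int bufferSize;          // valid bytes in the internal buffer
  private int bufferOffset;        // bytes of the buffer already consumed

  public synchronized long getPos() {
    int unread = bufferSize - bufferOffset;
    return downloadedPosition - unread;
  }
}
{code}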

> Azure: BlockBlobInputStream position incorrect after seek
> -
>
> Key: HADOOP-14722
> URL: https://issues.apache.org/jira/browse/HADOOP-14722
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/azure
>Reporter: Thomas Marquardt
>Assignee: Thomas Marquardt
> Attachments: HADOOP-14722-001.patch, HADOOP-14722-002.patch
>
>
> The seek, skip, and getPos methods of BlockBlobInputStream do not correctly 
> account for the stream's  internal buffer.  This results in invalid stream 
> positions. 



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org


