[jira] [Commented] (HADOOP-11588) Benchmark framework and test for erasure coders

2016-08-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15413021#comment-15413021
 ] 

Hadoop QA commented on HADOOP-11588:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
59s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
4s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
39s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
48s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
56s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
42s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 41m 54s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12822732/HADOOP-11588.7.patch |
| JIRA Issue | HADOOP-11588 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c2edd650fff2 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8f9b618 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10209/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10209/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Benchmark framework and test for erasure coders
> ---
>
> Key: HADOOP-11588
> URL: https://issues.apache.org/jira/browse/HADOOP-11588
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Rui Li
> Attachments: HADOOP-11588-HDFS-7285.2.patch, HADOOP-11588-v1.patch, 
> HADOOP-11588.3.patch, HADOOP-11588.4.patch, HADOOP-11588.5.patch, 
> HADOOP-11588.6.patch, HADOOP-11588.7.patch
>
>
> Given that more than one erasure coder may be implemented for a code scheme, 
> we need a benchmark and tests to help evaluate which one performs best in a 
> given environment. This is to implement the benchmark framework.

[jira] [Commented] (HADOOP-13474) Add more details in the log when a token is expired

2016-08-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412988#comment-15412988
 ] 

Hadoop QA commented on HADOOP-13474:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
13s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
29s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  3m 
18s{color} | {color:green} hadoop-auth in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 30m  8s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12822729/HADOOP-13474.01.patch 
|
| JIRA Issue | HADOOP-13474 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 051300f93f0f 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 8f9b618 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10208/testReport/ |
| modules | C: hadoop-common-project/hadoop-auth U: 
hadoop-common-project/hadoop-auth |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10208/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |





> Add more details in the log when a token is expired
> ---
>
> Key: HADOOP-13474
> URL: https://issues.apache.org/jira/browse/HADOOP-13474
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13474.01.patch
>
>
> Currently when there's an expired token, we see this from the log:
> {noformat}
> 2016-08-06 07:13:20,807 WARN 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter: 
> AuthenticationToken ignored: AuthenticationToken expired
> 2016-08-06 09:55:48,665 WARN 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter: 
> AuthenticationToken ignored: AuthenticationToken expired
> 2016-08-06 10:01:41,452 WARN 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter: 
> AuthenticationToken ignored: AuthenticationToken expired
> {noformat}
> We should log a better 
> [message|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java#L456] 
> that includes more details (e.g. token type, username, token id) for 
> troubleshooting purposes.
> I don't think the additional information exposed will lead to any security 
> concern, since the token is expired anyway.

[jira] [Updated] (HADOOP-11588) Benchmark framework and test for erasure coders

2016-08-08 Thread Rui Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated HADOOP-11588:

Attachment: HADOOP-11588.7.patch

Thanks Kai. The v6 patch generates a warning (convertToByteBufferState is 
invoked) when testing the ISA-L coder. That happens when we initialize the coder 
and doesn't affect the performance results.
Uploaded the v7 patch to avoid it; it also adds the missing JavaDoc.

> Benchmark framework and test for erasure coders
> ---
>
> Key: HADOOP-11588
> URL: https://issues.apache.org/jira/browse/HADOOP-11588
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Rui Li
> Attachments: HADOOP-11588-HDFS-7285.2.patch, HADOOP-11588-v1.patch, 
> HADOOP-11588.3.patch, HADOOP-11588.4.patch, HADOOP-11588.5.patch, 
> HADOOP-11588.6.patch, HADOOP-11588.7.patch
>
>
> Given that more than one erasure coder may be implemented for a code scheme, 
> we need a benchmark and tests to help evaluate which one performs best in a 
> given environment. This is to implement the benchmark framework.
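The benchmark idea above can be sketched as a simple throughput loop. This is only an illustration under stated assumptions: the {{Coder}} interface and the XOR implementation below are hypothetical stand-ins for Hadoop's real raw erasure coder API, not the code in the patch.

```java
import java.util.Random;

public final class CoderBench {
    // Hypothetical stand-in for Hadoop's raw erasure encoder API.
    interface Coder {
        void encode(byte[][] data, byte[][] parity);
    }

    // Trivial XOR "coder" used only so the loop has something to measure.
    static final Coder XOR = (data, parity) -> {
        for (int i = 0; i < parity[0].length; i++) {
            byte b = 0;
            for (byte[] unit : data) {
                b ^= unit[i];
            }
            parity[0][i] = b;
        }
    };

    // Time repeated encode calls over random chunks and report throughput in MB/s.
    static double mbPerSec(Coder coder, int dataUnits, int chunkSize, int rounds) {
        byte[][] data = new byte[dataUnits][chunkSize];
        byte[][] parity = new byte[1][chunkSize];
        Random rnd = new Random(42);
        for (byte[] unit : data) {
            rnd.nextBytes(unit);
        }
        long start = System.nanoTime();
        for (int r = 0; r < rounds; r++) {
            coder.encode(data, parity);
        }
        double secs = (System.nanoTime() - start) / 1e9;
        return (double) dataUnits * chunkSize * rounds / (1024 * 1024) / secs;
    }

    public static void main(String[] args) {
        System.out.printf("XOR coder: %.1f MB/s%n", mbPerSec(XOR, 6, 1 << 20, 20));
    }
}
```

Comparing two coders in the same environment then reduces to running the same loop against each implementation and comparing the numbers.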



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13190) LoadBalancingKMSClientProvider should be documented

2016-08-08 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412982#comment-15412982
 ] 

Xiao Chen commented on HADOOP-13190:


Thanks [~jojochuang] for revving! Looks pretty good, and I think we're close. 
Nice catch on the macro at the beginning BTW.

Nits:
- There seems to be a typo in the KMS Client Configuration section: 'mustbe'.
- {{...(for example, a NameNode) }} -> {{(for example, HDFS NameNode) }}
- {{The host names}} -> {{hostnames}}
- {{For example, the following configuration in hdfs-site.xml sets up two KMS 
instances}}. Technically they don't 'set up' the two, since they're client-side. 
How about we s/sets up/configures/g?
- Suggest we add one sentence describing how LBKMSCP is used. Something like: 
when more than one key provider is configured in the URI, an LBKMSCP is 
automatically created. We can combine this with the intro about round-robin.
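For readers following the round-robin discussion above, here is a minimal sketch of the selection logic; the class and method names are simplified stand-ins for illustration, not the actual LoadBalancingKMSClientProvider API.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative round-robin selection across configured KMS instances.
public final class RoundRobinSelector {
    private final String[] providers;
    private final AtomicInteger next = new AtomicInteger();

    public RoundRobinSelector(String[] providers) {
        this.providers = providers;
    }

    // Each call returns the next provider in turn, wrapping around.
    public String pick() {
        int i = Math.floorMod(next.getAndIncrement(), providers.length);
        return providers[i];
    }

    public static void main(String[] args) {
        RoundRobinSelector s = new RoundRobinSelector(
            new String[] {"kms01.example.com", "kms02.example.com"});
        System.out.println(s.pick()); // kms01.example.com
        System.out.println(s.pick()); // kms02.example.com
        System.out.println(s.pick()); // kms01.example.com again
    }
}
```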

> LoadBalancingKMSClientProvider should be documented
> ---
>
> Key: HADOOP-13190
> URL: https://issues.apache.org/jira/browse/HADOOP-13190
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, kms
>Affects Versions: 2.7.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: supportability
> Attachments: HADOOP-13190.001.patch, HADOOP-13190.002.patch
>
>
> Currently, there are two ways to achieve KMS HA.
> The first one, and the only documented one, is running multiple KMS instances 
> behind a load balancer. 
> https://hadoop.apache.org/docs/stable/hadoop-kms/index.html
> The other way, is make use of LoadBalancingKMSClientProvider which is added 
> in HADOOP-11620. However the usage is undocumented.
> I think we should update the KMS document to introduce 
> LoadBalancingKMSClientProvider, provide examples, and also update 
> kms-site.xml to explain it.






[jira] [Updated] (HADOOP-13474) Add more details in the log when a token is expired

2016-08-08 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13474:
---
Status: Patch Available  (was: Open)

> Add more details in the log when a token is expired
> ---
>
> Key: HADOOP-13474
> URL: https://issues.apache.org/jira/browse/HADOOP-13474
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13474.01.patch
>
>
> Currently when there's an expired token, we see this from the log:
> {noformat}
> 2016-08-06 07:13:20,807 WARN 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter: 
> AuthenticationToken ignored: AuthenticationToken expired
> 2016-08-06 09:55:48,665 WARN 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter: 
> AuthenticationToken ignored: AuthenticationToken expired
> 2016-08-06 10:01:41,452 WARN 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter: 
> AuthenticationToken ignored: AuthenticationToken expired
> {noformat}
> We should log a better 
> [message|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java#L456] 
> that includes more details (e.g. token type, username, token id) for 
> troubleshooting purposes.
> I don't think the additional information exposed will lead to any security 
> concern, since the token is expired anyway.
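As a sketch of the kind of enriched warning proposed here (the {{Token}} fields and message wording below are illustrative stand-ins, not the actual AuthenticationToken/AuthenticationFilter code):

```java
public final class TokenExpiryLog {
    // Minimal stand-in for the relevant AuthenticationToken fields.
    static final class Token {
        final String userName;
        final String type;
        final long expires;

        Token(String userName, String type, long expires) {
            this.userName = userName;
            this.type = type;
            this.expires = expires;
        }
    }

    // Build a warning that carries the details useful for troubleshooting.
    static String expiredMessage(Token t) {
        return String.format(
            "AuthenticationToken ignored: expired token for user=%s, type=%s, expired at %d",
            t.userName, t.type, t.expires);
    }

    public static void main(String[] args) {
        Token t = new Token("hdfs", "kerberos", 1470468800807L);
        System.out.println(expiredMessage(t));
    }
}
```

With a message of this shape, each WARN line identifies which user and token type hit the expiry, instead of the three identical lines in the log excerpt above.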






[jira] [Updated] (HADOOP-13474) Add more details in the log when a token is expired

2016-08-08 Thread Xiao Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13474?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Xiao Chen updated HADOOP-13474:
---
Attachment: HADOOP-13474.01.patch

Patch 1 logs the details at the server side when such error happens.


> Add more details in the log when a token is expired
> ---
>
> Key: HADOOP-13474
> URL: https://issues.apache.org/jira/browse/HADOOP-13474
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
> Attachments: HADOOP-13474.01.patch
>
>
> Currently when there's an expired token, we see this from the log:
> {noformat}
> 2016-08-06 07:13:20,807 WARN 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter: 
> AuthenticationToken ignored: AuthenticationToken expired
> 2016-08-06 09:55:48,665 WARN 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter: 
> AuthenticationToken ignored: AuthenticationToken expired
> 2016-08-06 10:01:41,452 WARN 
> org.apache.hadoop.security.authentication.server.AuthenticationFilter: 
> AuthenticationToken ignored: AuthenticationToken expired
> {noformat}
> We should log a better 
> [message|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java#L456] 
> that includes more details (e.g. token type, username, token id) for 
> troubleshooting purposes.
> I don't think the additional information exposed will lead to any security 
> concern, since the token is expired anyway.






[jira] [Updated] (HADOOP-13439) Fix race between TestMetricsSystemImpl and TestGangliaMetrics

2016-08-08 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-13439:
--
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   2.8.0
   Status: Resolved  (was: Patch Available)

Committed to branch-2.8, branch-2 and trunk. Thanks, [~vagarychen].

> Fix race between TestMetricsSystemImpl and TestGangliaMetrics
> -
>
> Key: HADOOP-13439
> URL: https://issues.apache.org/jira/browse/HADOOP-13439
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Masatake Iwasaki
>Assignee: Chen Liang
>Priority: Minor
> Fix For: 2.8.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13439.001.patch, HADOOP-13439.001.patch, 
> HADOOP-13439.002.patch
>
>
> TestGangliaMetrics#testGangliaMetrics2 set *.period to 120 but 8 was used.
> {noformat}
> 2016-06-27 15:21:31,480 INFO  impl.MetricsSystemImpl 
> (MetricsSystemImpl.java:startTimer(375)) - Scheduled snapshot period at 8 
> second(s).
> {noformat}






[jira] [Commented] (HADOOP-13439) Fix race between TestMetricsSystemImpl and TestGangliaMetrics

2016-08-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412827#comment-15412827
 ] 

Hudson commented on HADOOP-13439:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #10240 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10240/])
HADOOP-13439. Fix race between TestMetricsSystemImpl and (iwasakims: rev 
8f9b61852bf6600b65e49875fec172bac9e0a85d)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/metrics2/impl/TestGangliaMetrics.java


> Fix race between TestMetricsSystemImpl and TestGangliaMetrics
> -
>
> Key: HADOOP-13439
> URL: https://issues.apache.org/jira/browse/HADOOP-13439
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Masatake Iwasaki
>Assignee: Chen Liang
>Priority: Minor
> Attachments: HADOOP-13439.001.patch, HADOOP-13439.001.patch, 
> HADOOP-13439.002.patch
>
>
> TestGangliaMetrics#testGangliaMetrics2 set *.period to 120 but 8 was used.
> {noformat}
> 2016-06-27 15:21:31,480 INFO  impl.MetricsSystemImpl 
> (MetricsSystemImpl.java:startTimer(375)) - Scheduled snapshot period at 8 
> second(s).
> {noformat}






[jira] [Commented] (HADOOP-13439) Fix race between TestMetricsSystemImpl and TestGangliaMetrics

2016-08-08 Thread Masatake Iwasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412811#comment-15412811
 ] 

Masatake Iwasaki commented on HADOOP-13439:
---

+1 on 002.

> Fix race between TestMetricsSystemImpl and TestGangliaMetrics
> -
>
> Key: HADOOP-13439
> URL: https://issues.apache.org/jira/browse/HADOOP-13439
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Masatake Iwasaki
>Assignee: Chen Liang
>Priority: Minor
> Attachments: HADOOP-13439.001.patch, HADOOP-13439.001.patch, 
> HADOOP-13439.002.patch
>
>
> TestGangliaMetrics#testGangliaMetrics2 set *.period to 120 but 8 was used.
> {noformat}
> 2016-06-27 15:21:31,480 INFO  impl.MetricsSystemImpl 
> (MetricsSystemImpl.java:startTimer(375)) - Scheduled snapshot period at 8 
> second(s).
> {noformat}






[jira] [Commented] (HADOOP-13397) Add dockerfile for Hadoop

2016-08-08 Thread Allen Wittenauer (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412801#comment-15412801
 ] 

Allen Wittenauer commented on HADOOP-13397:
---

A few things:
* need to be able to replace how Hadoop gets installed: tarball vs rpm vs deb 
vs 
* need to be able to replace how Hadoop gets configured: single node vs 
multi-node vs. config management vs ...
* need a raw dockerfile because some users want their pre-existing frameworks 
to manage the installation (e.g., Kubernetes, mentioned above)

That last one is key: there are plenty of other tools already in existence that 
can manage containers (even Jenkins can do it!).  There's no real value in 
re-inventing that for a large segment of users.

HBASE-12721 looks great if one doesn't have anything else to manage or just 
wants a quick up 'n' down for something like testing.  But that's not the 
reality for a large chunk of docker-ized environments.  They want to plug the 
file in and go.

> Add dockerfile for Hadoop
> -
>
> Key: HADOOP-13397
> URL: https://issues.apache.org/jira/browse/HADOOP-13397
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Klaus Ma
>Assignee: Allen Wittenauer
> Attachments: HADOOP-13397.DNC001.patch
>
>
> For now, there's no community-version Dockerfile in Hadoop; most Docker 
> images are provided by vendors, e.g.: 
> 1. Cloudera's image: https://hub.docker.com/r/cloudera/quickstart/
> 2. From HortonWorks' sequenceiq: 
> https://hub.docker.com/r/sequenceiq/hadoop-docker/
> 3. MapR provides the mapr-sandbox-base: 
> https://hub.docker.com/r/maprtech/mapr-sandbox-base/
> The proposal of this JIRA is to provide a community-version Dockerfile in 
> Hadoop, and here are some requirements:
> 1. Separate docker images for master & agents, e.g. resource manager & node 
> manager
> 2. Default configuration to start master & agent instead of configuring them 
> manually
> 3. Start Hadoop processes in non-daemon (foreground) mode
> Here's my dockerfile to start master/agent: 
> https://github.com/k82cn/outrider/tree/master/kubernetes/imgs/yarn
> I'd like to contribute it after polishing :).
> Email Thread : 
> http://mail-archives.apache.org/mod_mbox/hadoop-user/201607.mbox/%3CSG2PR04MB162977CFE150444FA022510FB6370%40SG2PR04MB1629.apcprd04.prod.outlook.com%3E






[jira] [Commented] (HADOOP-13439) Fix race between TestMetricsSystemImpl and TestGangliaMetrics

2016-08-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412789#comment-15412789
 ] 

Hadoop QA commented on HADOOP-13439:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
10s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
23s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
17s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
41s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} hadoop-common-project/hadoop-common: The patch 
generated 0 new + 9 unchanged - 5 fixed = 9 total (was 14) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
43s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12822701/HADOOP-13439.002.patch
 |
| JIRA Issue | HADOOP-13439 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3e4efc68a0e2 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0ad48aa |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10207/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10207/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |





> Fix race between TestMetricsSystemImpl and TestGangliaMetrics
> -
>
> Key: HADOOP-13439
> URL: https://issues.apache.org/jira/browse/HADOOP-13439
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Masatake Iwasaki
>Assignee: Chen Liang
>Priority: Minor
> Attachments: HADOOP-13439.001.patch, HADOOP-13439.001.patch, 
> HADOOP-13439.002.patch

[jira] [Commented] (HADOOP-13397) Add dockerfile for Hadoop

2016-08-08 Thread Dima Spivak (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13397?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412778#comment-15412778
 ] 

Dima Spivak commented on HADOOP-13397:
--

Seems like a lot of work, [~aw], compared to something that plugs into what we 
have over in HBASE-12721. What exactly is the concern with using it? There are 
no weird dependencies; a single user could independently build and deploy a 
cluster on any machine.

> Add dockerfile for Hadoop
> -
>
> Key: HADOOP-13397
> URL: https://issues.apache.org/jira/browse/HADOOP-13397
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Klaus Ma
>Assignee: Allen Wittenauer
> Attachments: HADOOP-13397.DNC001.patch
>
>
> For now, there's no community-version Dockerfile in Hadoop; most Docker 
> images are provided by vendors, e.g.: 
> 1. Cloudera's image: https://hub.docker.com/r/cloudera/quickstart/
> 2. From HortonWorks' sequenceiq: 
> https://hub.docker.com/r/sequenceiq/hadoop-docker/
> 3. MapR provides the mapr-sandbox-base: 
> https://hub.docker.com/r/maprtech/mapr-sandbox-base/
> The proposal of this JIRA is to provide a community-version Dockerfile in 
> Hadoop, and here are some requirements:
> 1. Separate docker images for master & agents, e.g. resource manager & node 
> manager
> 2. Default configuration to start master & agent instead of configuring them 
> manually
> 3. Start Hadoop processes in non-daemon (foreground) mode
> Here's my dockerfile to start master/agent: 
> https://github.com/k82cn/outrider/tree/master/kubernetes/imgs/yarn
> I'd like to contribute it after polishing :).
> Email Thread : 
> http://mail-archives.apache.org/mod_mbox/hadoop-user/201607.mbox/%3CSG2PR04MB162977CFE150444FA022510FB6370%40SG2PR04MB1629.apcprd04.prod.outlook.com%3E






[jira] [Commented] (HADOOP-12747) support wildcard in libjars argument

2016-08-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412761#comment-15412761
 ] 

Hudson commented on HADOOP-12747:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #10239 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10239/])
HADOOP-12747. support wildcard in libjars argument (sjlee) (sjlee: rev 
0ad48aa2c8f41196743305c711ea19cc48f186da)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/TestFileUtil.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/GenericOptionsParser.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/ApplicationClassLoader.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestGenericOptionsParser.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileUtil.java


> support wildcard in libjars argument
> 
>
> Key: HADOOP-12747
> URL: https://issues.apache.org/jira/browse/HADOOP-12747
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: util
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-12747.01.patch, HADOOP-12747.02.patch, 
> HADOOP-12747.03.patch, HADOOP-12747.04.patch, HADOOP-12747.05.patch, 
> HADOOP-12747.06.patch, HADOOP-12747.07.patch
>
>
> There is a problem when a user's job adds many dependency jars on the 
> command line. The HADOOP_CLASSPATH part can be addressed, including by using 
> wildcards (\*), but the same cannot be done with the -libjars argument. Today 
> it accepts only fully specified file paths.
> We may want to consider supporting wildcards as a way to help users in this 
> situation. The idea is to handle it the same way the JVM does: \* expands 
> to the list of jars in that directory. It does not traverse into any child 
> directory.
> Also, it probably would be a good idea to do it only for libjars (i.e. don't 
> do it for -files and -archives).
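The JVM-style expansion described above can be sketched in a few lines. This is a simplified illustration with hypothetical names, not the actual GenericOptionsParser change from the patch: a trailing `*` expands to the `.jar` files directly inside that directory, with no recursion, and anything else passes through unchanged.

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

public class LibjarsWildcard {
    // Expands a single -libjars entry JVM-classpath style: a trailing "*"
    // becomes the list of .jar files directly in that directory (no recursion);
    // a fully specified path is returned as-is.
    static List<String> expand(String entry) {
        List<String> out = new ArrayList<>();
        if (entry.endsWith("*")) {
            File dir = new File(entry.substring(0, entry.length() - 1));
            File[] files = dir.listFiles();
            if (files != null) {
                for (File f : files) {
                    // Only immediate .jar files, matching JVM classpath rules.
                    if (f.isFile() && f.getName().endsWith(".jar")) {
                        out.add(f.getPath());
                    }
                }
            }
        } else {
            out.add(entry);
        }
        return out;
    }

    public static void main(String[] args) throws Exception {
        // Fresh temp directory with one jar and one non-jar file.
        File dir = java.nio.file.Files.createTempDirectory("libs-demo").toFile();
        new File(dir, "a.jar").createNewFile();
        new File(dir, "b.txt").createNewFile();
        List<String> jars = expand(dir.getPath() + File.separator + "*");
        System.out.println(jars.size()); // prints 1: only a.jar matches
    }
}
```

With this behavior, `-libjars 'libs/*'` becomes shorthand for every jar in `libs`, while `-files` and `-archives` are left untouched.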






[jira] [Updated] (HADOOP-12747) support wildcard in libjars argument

2016-08-08 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated HADOOP-12747:
-
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha2
   2.9.0
 Release Note: It is now possible to specify multiple jar files for the 
libjars argument using a wildcard. For example, you can specify "-libjars 
'libs/*'" as a shorthand for all jars in the libs directory.
   Status: Resolved  (was: Patch Available)

Committed. Thanks [~cnauroth], [~jira.shegalov], and [~vicaya] for your reviews 
and comments.

> support wildcard in libjars argument
> 
>
> Key: HADOOP-12747
> URL: https://issues.apache.org/jira/browse/HADOOP-12747
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: util
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-12747.01.patch, HADOOP-12747.02.patch, 
> HADOOP-12747.03.patch, HADOOP-12747.04.patch, HADOOP-12747.05.patch, 
> HADOOP-12747.06.patch, HADOOP-12747.07.patch
>
>
> There is a problem when a user's job adds many dependency jars on the 
> command line. The HADOOP_CLASSPATH part can be addressed, including by using 
> wildcards (\*), but the same cannot be done with the -libjars argument. Today 
> it accepts only fully specified file paths.
> We may want to consider supporting wildcards as a way to help users in this 
> situation. The idea is to handle it the same way the JVM does: \* expands 
> to the list of jars in that directory. It does not traverse into any child 
> directory.
> Also, it probably would be a good idea to do it only for libjars (i.e. don't 
> do it for -files and -archives).






[jira] [Commented] (HADOOP-11588) Benchmark framework and test for erasure coders

2016-08-08 Thread Kai Zheng (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412745#comment-15412745
 ] 

Kai Zheng commented on HADOOP-11588:


Thanks [~lirui] for the update! I will try to take a look at it this week.

> Benchmark framework and test for erasure coders
> ---
>
> Key: HADOOP-11588
> URL: https://issues.apache.org/jira/browse/HADOOP-11588
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Rui Li
> Attachments: HADOOP-11588-HDFS-7285.2.patch, HADOOP-11588-v1.patch, 
> HADOOP-11588.3.patch, HADOOP-11588.4.patch, HADOOP-11588.5.patch, 
> HADOOP-11588.6.patch
>
>
> Given that more than one erasure coder may be implemented for a code scheme, we 
> need a benchmark and tests to help evaluate which one performs best in a given 
> environment. This is to implement the benchmark framework.
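The core of such a benchmark is a timing harness that drives each coder over the same data and reports throughput. The sketch below is a generic illustration with a hypothetical `Coder` interface; the real HADOOP-11588 framework drives Hadoop's raw erasure coder implementations instead.

```java
public class CoderBench {
    // Hypothetical stand-in for an erasure coder's encode path.
    interface Coder { void encode(byte[] data); }

    // Times `rounds` encode calls over the same chunk and returns MB/s.
    static double benchmark(Coder coder, byte[] chunk, int rounds) {
        coder.encode(chunk);                 // warm-up call to trigger JIT
        long start = System.nanoTime();
        for (int i = 0; i < rounds; i++) {
            coder.encode(chunk);
        }
        double seconds = (System.nanoTime() - start) / 1e9;
        return (double) chunk.length * rounds / (1024 * 1024) / seconds;
    }

    public static void main(String[] args) {
        byte[] chunk = new byte[1 << 20]; // 1 MiB per encode call
        // Trivial XOR "coder" just to exercise the harness.
        Coder xor = data -> { byte acc = 0; for (byte b : data) acc ^= b; };
        System.out.printf("xor coder: %.1f MB/s%n", benchmark(xor, chunk, 100));
    }
}
```

Running the same harness against two coders for the same code scheme gives directly comparable MB/s numbers for a given environment.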






[jira] [Comment Edited] (HADOOP-13043) Add LICENSE.txt entries for bundled javascript dependencies

2016-08-08 Thread Chris Trezzo (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412720#comment-15412720
 ] 

Chris Trezzo edited comment on HADOOP-13043 at 8/9/16 12:40 AM:


[~andrew.wang] For what it is worth, I ran into this "missing commit message" 
while grepping "git log" for 2.6.5. It was fairly easy to figure out what was 
going on based on your above comment. That being said, it was the only commit 
that showed up in branch-2.6 and not in branch-2.7 for the time range I 
compared (since 1/26/2016).


was (Author: ctrezzo):
[~andrew.wang] For what it is worth, I ran into this "missing commit" while 
grepping "git log" for 2.6.5. It was fairly easy to figure out what was going 
on based on your above comment. That being said, it was the only commit that 
showed up in branch-2.6 and not in branch-2.7 for the time range I compared 
(since 1/26/2016).

> Add LICENSE.txt entries for bundled javascript dependencies
> ---
>
> Key: HADOOP-13043
> URL: https://issues.apache.org/jira/browse/HADOOP-13043
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.4
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Fix For: 2.8.0, 2.7.3, 2.6.5
>
> Attachments: hadoop-13043.001.patch, hadoop-13043.002.patch
>
>
> None of our bundled javascript dependencies are mentioned in LICENSE.txt. 
> Let's fix that.






[jira] [Commented] (HADOOP-13043) Add LICENSE.txt entries for bundled javascript dependencies

2016-08-08 Thread Chris Trezzo (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412720#comment-15412720
 ] 

Chris Trezzo commented on HADOOP-13043:
---

[~andrew.wang] For what it is worth, I ran into this "missing commit" while 
grepping "git log" for 2.6.5. It was fairly easy to figure out what was going 
on based on your above comment. That being said, it was the only commit that 
showed up in branch-2.6 and not in branch-2.7 for the time range I compared 
(since 1/26/2016).

> Add LICENSE.txt entries for bundled javascript dependencies
> ---
>
> Key: HADOOP-13043
> URL: https://issues.apache.org/jira/browse/HADOOP-13043
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.6.4
>Reporter: Andrew Wang
>Assignee: Andrew Wang
> Fix For: 2.8.0, 2.7.3, 2.6.5
>
> Attachments: hadoop-13043.001.patch, hadoop-13043.002.patch
>
>
> None of our bundled javascript dependencies are mentioned in LICENSE.txt. 
> Let's fix that.






[jira] [Commented] (HADOOP-13190) LoadBalancingKMSClientProvider should be documented

2016-08-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412718#comment-15412718
 ] 

Hadoop QA commented on HADOOP-13190:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
16s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
10s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
17s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 10m 48s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12822700/HADOOP-13190.002.patch
 |
| JIRA Issue | HADOOP-13190 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux 0b0eecd6fd49 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0705489 |
| modules | C: hadoop-common-project/hadoop-kms U: 
hadoop-common-project/hadoop-kms |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10206/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> LoadBalancingKMSClientProvider should be documented
> ---
>
> Key: HADOOP-13190
> URL: https://issues.apache.org/jira/browse/HADOOP-13190
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, kms
>Affects Versions: 2.7.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: supportability
> Attachments: HADOOP-13190.001.patch, HADOOP-13190.002.patch
>
>
> Currently, there are two ways to achieve KMS HA.
> The first one, and the only documented one, is running multiple KMS instances 
> behind a load balancer. 
> https://hadoop.apache.org/docs/stable/hadoop-kms/index.html
> The other way is to make use of the LoadBalancingKMSClientProvider added 
> in HADOOP-11620. However, its usage is undocumented.
> I think we should update the KMS document to introduce 
> LoadBalancingKMSClientProvider, provide examples, and also update 
> kms-site.xml to explain it.






[jira] [Updated] (HADOOP-13475) Adding Append Blob support for WASB

2016-08-08 Thread Raul da Silva Martins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raul da Silva Martins updated HADOOP-13475:
---
Priority: Blocker  (was: Critical)

> Adding Append Blob support for WASB
> ---
>
> Key: HADOOP-13475
> URL: https://issues.apache.org/jira/browse/HADOOP-13475
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: azure
>Affects Versions: 2.7.1
>Reporter: Raul da Silva Martins
>Assignee: Dushyanth
>Priority: Blocker
> Fix For: 2.8.0
>
> Attachments: 0001-Added-Support-for-Azure-AppendBlobs.patch
>
>
> Currently the WASB implementation of the HDFS interface does not support the 
> utilization of Azure AppendBlobs underneath. As owners of a large-scale 
> service who intend to start writing to Append blobs, we need this support in 
> order to keep using our HDI capabilities.
> This JIRA is added to implement Azure AppendBlob support to WASB.






[jira] [Updated] (HADOOP-13475) Adding Append Blob support for WASB

2016-08-08 Thread Raul da Silva Martins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raul da Silva Martins updated HADOOP-13475:
---
Priority: Critical  (was: Blocker)

> Adding Append Blob support for WASB
> ---
>
> Key: HADOOP-13475
> URL: https://issues.apache.org/jira/browse/HADOOP-13475
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: azure
>Affects Versions: 2.7.1
>Reporter: Raul da Silva Martins
>Assignee: Dushyanth
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: 0001-Added-Support-for-Azure-AppendBlobs.patch
>
>
> Currently the WASB implementation of the HDFS interface does not support the 
> utilization of Azure AppendBlobs underneath. As owners of a large-scale 
> service who intend to start writing to Append blobs, we need this support in 
> order to keep using our HDI capabilities.
> This JIRA is added to implement Azure AppendBlob support to WASB.






[jira] [Updated] (HADOOP-13439) Fix race between TestMetricsSystemImpl and TestGangliaMetrics

2016-08-08 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HADOOP-13439:

Status: In Progress  (was: Patch Available)

> Fix race between TestMetricsSystemImpl and TestGangliaMetrics
> -
>
> Key: HADOOP-13439
> URL: https://issues.apache.org/jira/browse/HADOOP-13439
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Masatake Iwasaki
>Assignee: Chen Liang
>Priority: Minor
> Attachments: HADOOP-13439.001.patch, HADOOP-13439.001.patch, 
> HADOOP-13439.002.patch
>
>
> TestGangliaMetrics#testGangliaMetrics2 set *.period to 120 but 8 was used.
> {noformat}
> 2016-06-27 15:21:31,480 INFO  impl.MetricsSystemImpl 
> (MetricsSystemImpl.java:startTimer(375)) - Scheduled snapshot period at 8 
> second(s).
> {noformat}
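The `*.period` setting in question is a prefix-wide key in the metrics2 configuration. The fragment below is an illustrative sketch (the property prefix and sink name are assumptions, not the test's actual config); the race is that a MetricsSystem started by another test with a shorter period can still be alive when this setting is expected to apply.

```properties
# Prefix-wide snapshot period in seconds; "*" applies to all metrics prefixes.
*.period=120
# Illustrative Ganglia sink registration for the metrics2 system.
*.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
```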






[jira] [Updated] (HADOOP-13439) Fix race between TestMetricsSystemImpl and TestGangliaMetrics

2016-08-08 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HADOOP-13439:

Status: Patch Available  (was: In Progress)

> Fix race between TestMetricsSystemImpl and TestGangliaMetrics
> -
>
> Key: HADOOP-13439
> URL: https://issues.apache.org/jira/browse/HADOOP-13439
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Masatake Iwasaki
>Assignee: Chen Liang
>Priority: Minor
> Attachments: HADOOP-13439.001.patch, HADOOP-13439.001.patch, 
> HADOOP-13439.002.patch
>
>
> TestGangliaMetrics#testGangliaMetrics2 set *.period to 120 but 8 was used.
> {noformat}
> 2016-06-27 15:21:31,480 INFO  impl.MetricsSystemImpl 
> (MetricsSystemImpl.java:startTimer(375)) - Scheduled snapshot period at 8 
> second(s).
> {noformat}






[jira] [Updated] (HADOOP-13439) Fix race between TestMetricsSystemImpl and TestGangliaMetrics

2016-08-08 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HADOOP-13439:

Attachment: HADOOP-13439.002.patch

Fixed the checkstyle issues.

> Fix race between TestMetricsSystemImpl and TestGangliaMetrics
> -
>
> Key: HADOOP-13439
> URL: https://issues.apache.org/jira/browse/HADOOP-13439
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Masatake Iwasaki
>Assignee: Chen Liang
>Priority: Minor
> Attachments: HADOOP-13439.001.patch, HADOOP-13439.001.patch, 
> HADOOP-13439.002.patch
>
>
> TestGangliaMetrics#testGangliaMetrics2 set *.period to 120 but 8 was used.
> {noformat}
> 2016-06-27 15:21:31,480 INFO  impl.MetricsSystemImpl 
> (MetricsSystemImpl.java:startTimer(375)) - Scheduled snapshot period at 8 
> second(s).
> {noformat}






[jira] [Updated] (HADOOP-13475) Adding Append Blob support for WASB

2016-08-08 Thread Raul da Silva Martins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raul da Silva Martins updated HADOOP-13475:
---
Description: 
Currently the WASB implementation of the HDFS interface does not support the 
utilization of Azure AppendBlobs underneath. As owners of a large scale service 
who intend to migrate to writing to Append blobs

This JIRA is added to implement Azure AppendBlob support to WASB.

  was:
Currently the WASB implementation of the HDFS interface does not support the 
utilization of Azure AppendBlobs underneath. 

This JIRA is added to implement Azure AppendBlob support to WASB.


> Adding Append Blob support for WASB
> ---
>
> Key: HADOOP-13475
> URL: https://issues.apache.org/jira/browse/HADOOP-13475
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: azure
>Affects Versions: 2.7.1
>Reporter: Raul da Silva Martins
>Assignee: Dushyanth
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: 0001-Added-Support-for-Azure-AppendBlobs.patch
>
>
> Currently the WASB implementation of the HDFS interface does not support the 
> utilization of Azure AppendBlobs underneath. As owners of a large scale 
> service who intend to migrate to writing to Append blobs
> This JIRA is added to implement Azure AppendBlob support to WASB.






[jira] [Updated] (HADOOP-13475) Adding Append Blob support for WASB

2016-08-08 Thread Raul da Silva Martins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raul da Silva Martins updated HADOOP-13475:
---
Description: 
Currently the WASB implementation of the HDFS interface does not support the 
utilization of Azure AppendBlobs underneath. As owners of a large scale service 
who intend to start writing to Append blobs, we need this support in order to 
be able to keep using our HDI capabilities.

This JIRA is added to implement Azure AppendBlob support to WASB.

  was:
Currently the WASB implementation of the HDFS interface does not support the 
utilization of Azure AppendBlobs underneath. As owners of a large scale service 
who intend to migrate to writing to Append blobs

This JIRA is added to implement Azure AppendBlob support to WASB.


> Adding Append Blob support for WASB
> ---
>
> Key: HADOOP-13475
> URL: https://issues.apache.org/jira/browse/HADOOP-13475
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: azure
>Affects Versions: 2.7.1
>Reporter: Raul da Silva Martins
>Assignee: Dushyanth
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: 0001-Added-Support-for-Azure-AppendBlobs.patch
>
>
> Currently the WASB implementation of the HDFS interface does not support the 
> utilization of Azure AppendBlobs underneath. As owners of a large scale 
> service who intend to start writing to Append blobs, we need this support in 
> order to be able to keep using our HDI capabilities.
> This JIRA is added to implement Azure AppendBlob support to WASB.






[jira] [Updated] (HADOOP-13190) LoadBalancingKMSClientProvider should be documented

2016-08-08 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13190?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-13190:
-
Attachment: HADOOP-13190.002.patch

Thanks [~xiaochen] for reviewing the patch.

bq. Maybe we can change the current $H3 level title to Using Multiple Instances 
of KMS, and list the current LB/VIP and the new LBKMSCP under it? The other 
sub-sections (kerberos, secret-sharing) applies to multiple instances in 
general.
Good idea.
bq. In the new LBKMSCP section, please also add the failure-handling behavior. 
If a request to a KMSCP failed, LBKMSCP will retry the next KMSCP. The request 
is returned as failure only if all KMSCPs failed.
Done.
bq. In the sample xml, maybe also list an http example?
Not sure how best to capture this. I found the section _KMS Client 
Configuration_ to be vague, so I put up an example of configuring the NameNode as 
a KMS client here.
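For readers following along, a client-side configuration along these lines selects LoadBalancingKMSClientProvider: listing more than one KMS host (semicolon-separated) in the provider URI is what triggers it. Host names and port here are hypothetical, and the exact property name varies by release.

```xml
<!-- Client-side (e.g. NameNode) core-site.xml entry; kms01/kms02 are
     illustrative host names. Multiple semicolon-separated hosts in the URI
     cause the client to use LoadBalancingKMSClientProvider, which retries
     the next instance on failure. -->
<property>
  <name>hadoop.security.key.provider.path</name>
  <value>kms://https@kms01.example.com;kms02.example.com:16000/kms</value>
</property>
```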

> LoadBalancingKMSClientProvider should be documented
> ---
>
> Key: HADOOP-13190
> URL: https://issues.apache.org/jira/browse/HADOOP-13190
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, kms
>Affects Versions: 2.7.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: supportability
> Attachments: HADOOP-13190.001.patch, HADOOP-13190.002.patch
>
>
> Currently, there are two ways to achieve KMS HA.
> The first one, and the only documented one, is running multiple KMS instances 
> behind a load balancer. 
> https://hadoop.apache.org/docs/stable/hadoop-kms/index.html
> The other way is to make use of the LoadBalancingKMSClientProvider added 
> in HADOOP-11620. However, its usage is undocumented.
> I think we should update the KMS document to introduce 
> LoadBalancingKMSClientProvider, provide examples, and also update 
> kms-site.xml to explain it.






[jira] [Updated] (HADOOP-13475) Adding Append Blob support for WASB

2016-08-08 Thread Raul da Silva Martins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raul da Silva Martins updated HADOOP-13475:
---
Attachment: 0001-Added-Support-for-Azure-AppendBlobs.patch

> Adding Append Blob support for WASB
> ---
>
> Key: HADOOP-13475
> URL: https://issues.apache.org/jira/browse/HADOOP-13475
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: azure
>Affects Versions: 2.7.1
>Reporter: Raul da Silva Martins
>Assignee: Dushyanth
>Priority: Critical
> Fix For: 2.8.0
>
> Attachments: 0001-Added-Support-for-Azure-AppendBlobs.patch
>
>
> Currently the WASB implementation of the HDFS interface does not support the 
> utilization of Azure AppendBlobs underneath. 
> This JIRA is added to implement Azure AppendBlob support to WASB.






[jira] [Updated] (HADOOP-13475) Adding Append Blob support for WASB

2016-08-08 Thread Raul da Silva Martins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raul da Silva Martins updated HADOOP-13475:
---
Release Note:   (was: The Azure Blob Storage file system (WASB) now 
includes optional support for use of the append API by a single writer on a 
path.  Please note that the implementation differs from the semantics of HDFS 
append.  HDFS append internally guarantees that only a single writer may append 
to a path at a given time.  WASB does not enforce this guarantee internally.  
Instead, the application must enforce access by a single writer, such as by 
running single-threaded or relying on some external locking mechanism to 
coordinate concurrent processes.  Refer to the Azure Blob Storage documentation 
page for more details on enabling append in configuration.)

> Adding Append Blob support for WASB
> ---
>
> Key: HADOOP-13475
> URL: https://issues.apache.org/jira/browse/HADOOP-13475
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: azure
>Affects Versions: 2.7.1
>Reporter: Raul da Silva Martins
>Assignee: Dushyanth
>Priority: Critical
> Fix For: 2.8.0
>
>
> Currently the WASB implementation of the HDFS interface does not support the 
> utilization of Azure AppendBlobs underneath. 
> This JIRA is added to implement Azure AppendBlob support to WASB.






[jira] [Updated] (HADOOP-13475) Adding Append Blob support for WASB

2016-08-08 Thread Raul da Silva Martins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raul da Silva Martins updated HADOOP-13475:
---
   Priority: Critical  (was: Major)
Description: 
Currently the WASB implementation of the HDFS interface does not support the 
utilization of Azure AppendBlobs underneath. 

This JIRA is added to implement Azure AppendBlob support to WASB.

  was:Currently the WASB implementation of the HDFS interface does not support 
Append API. This JIRA is added to design and implement the Append API support 
to WASB. The intended support for Append would only support a single writer.  

Summary: Adding Append Blob support for WASB  (was: CLONE - Adding 
Append Blob support for WASB)

> Adding Append Blob support for WASB
> ---
>
> Key: HADOOP-13475
> URL: https://issues.apache.org/jira/browse/HADOOP-13475
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: azure
>Affects Versions: 2.7.1
>Reporter: Raul da Silva Martins
>Assignee: Dushyanth
>Priority: Critical
> Fix For: 2.8.0
>
>
> Currently the WASB implementation of the HDFS interface does not support the 
> utilization of Azure AppendBlobs underneath. 
> This JIRA is added to implement Azure AppendBlob support to WASB.






[jira] [Updated] (HADOOP-13475) Adding Append Blob support for WASB

2016-08-08 Thread Raul da Silva Martins (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13475?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Raul da Silva Martins updated HADOOP-13475:
---
Hadoop Flags:   (was: Reviewed)

> Adding Append Blob support for WASB
> ---
>
> Key: HADOOP-13475
> URL: https://issues.apache.org/jira/browse/HADOOP-13475
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: azure
>Affects Versions: 2.7.1
>Reporter: Raul da Silva Martins
>Assignee: Dushyanth
>Priority: Critical
> Fix For: 2.8.0
>
>
> Currently the WASB implementation of the HDFS interface does not support the 
> utilization of Azure AppendBlobs underneath. 
> This JIRA is added to implement Azure AppendBlob support to WASB.






[jira] [Created] (HADOOP-13475) CLONE - Adding Append Blob support for WASB

2016-08-08 Thread Raul da Silva Martins (JIRA)
Raul da Silva Martins created HADOOP-13475:
--

 Summary: CLONE - Adding Append Blob support for WASB
 Key: HADOOP-13475
 URL: https://issues.apache.org/jira/browse/HADOOP-13475
 Project: Hadoop Common
  Issue Type: New Feature
  Components: azure
Affects Versions: 2.7.1
Reporter: Raul da Silva Martins
Assignee: Dushyanth
 Fix For: 2.8.0


Currently the WASB implementation of the HDFS interface does not support Append 
API. This JIRA is added to design and implement the Append API support to WASB. 
The intended support for Append would only support a single writer.  






[jira] [Commented] (HADOOP-13439) Fix race between TestMetricsSystemImpl and TestGangliaMetrics

2016-08-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13439?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412653#comment-15412653
 ] 

Hadoop QA commented on HADOOP-13439:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
11s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
50s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
33s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
43s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
18s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 9 new + 9 unchanged - 5 fixed = 18 total (was 14) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
46s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
40s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m 49s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12822682/HADOOP-13439.001.patch
 |
| JIRA Issue | HADOOP-13439 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux c1ee284b3bce 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0705489 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10205/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10205/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10205/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Fix race between TestMetricsSystemImpl and TestGangliaMetrics
> -
>
> Key: HADOOP-13439
> URL: https://issues.apache.org/jira/browse/HADOOP-13439
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: M

[jira] [Commented] (HADOOP-8522) ResetableGzipOutputStream creates invalid gzip files when finish() and resetState() are used

2016-08-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-8522?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412637#comment-15412637
 ] 

Hadoop QA commented on HADOOP-8522:
---

| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
 1s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
0s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
12s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 7 new + 10 unchanged - 1 fixed = 17 total (was 11) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
58s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  8m 
17s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
25s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 42m 24s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12822675/HADOOP-8522-4.patch |
| JIRA Issue | HADOOP-8522 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 1b19f92eb870 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0705489 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10204/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10204/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10204/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |





> ResetableGzipOutputStream creates invalid gzip files when finish() and 
> resetState() are used
> 
>
> Key: HADOOP-8522
> URL: https://issues.apache.org/jira/browse/HADOOP-8522
> Project: Hadoop Common
>  Iss

[jira] [Commented] (HADOOP-13395) Enhance TestKMSAudit

2016-08-08 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412625#comment-15412625
 ] 

Xiao Chen commented on HADOOP-13395:


Thanks a lot [~jojochuang] for the review and commit, and Andrew for the 
initial review!

> Enhance TestKMSAudit
> 
>
> Key: HADOOP-13395
> URL: https://issues.apache.org/jira/browse/HADOOP-13395
> Project: Hadoop Common
>  Issue Type: Test
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13395.01.patch, HADOOP-13395.02.patch, 
> HADOOP-13395.03.patch
>
>
> This jira serves the goals:
> - Enhance existing test cases in TestKMSAudit, to rule out flakiness.
> - Add a new test case about formatting for different events.
> This will help us ensure audit log compatibility when we add a new log format 
> to KMS.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11361) Fix a race condition in MetricsSourceAdapter.updateJmxCache

2016-08-08 Thread Chris Trezzo (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11361?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412622#comment-15412622
 ] 

Chris Trezzo commented on HADOOP-11361:
---

[~ozawa] I noticed that the target version for this jira states 2.6.5, but the 
fix version does not list 2.6.5 (in-line with the commit not being in 
branch-2.6). Do we want to backport to branch-2.6 as well, or should we remove 
2.6.5 from target versions? Thanks!

> Fix a race condition in MetricsSourceAdapter.updateJmxCache
> ---
>
> Key: HADOOP-11361
> URL: https://issues.apache.org/jira/browse/HADOOP-11361
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.4.1, 2.5.1, 2.6.0
>Reporter: Brahma Reddy Battula
>Assignee: Brahma Reddy Battula
>  Labels: supportability
> Fix For: 2.8.0, 2.9.0, 2.7.4, 3.0.0-alpha1
>
> Attachments: HADOOP-111361-003.patch, HADOOP-11361-002.patch, 
> HADOOP-11361-004.patch, HADOOP-11361-005.patch, HADOOP-11361-005.patch, 
> HADOOP-11361-006.patch, HADOOP-11361-007.patch, HADOOP-11361-009.patch, 
> HADOOP-11361.008.patch, HADOOP-11361.patch, HDFS-7487.patch
>
>
> {noformat}
> Caused by: java.lang.NullPointerException
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateAttrCache(MetricsSourceAdapter.java:247)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.updateJmxCache(MetricsSourceAdapter.java:177)
>   at 
> org.apache.hadoop.metrics2.impl.MetricsSourceAdapter.getAttribute(MetricsSourceAdapter.java:102)
>   at 
> com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
> {noformat}
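The NPE above comes from getAttribute racing with the lazy JMX cache rebuild. A generic sketch of the fix pattern — guard the cache with one lock for both readers and the rebuild path, so a half-built cache is never observed — looks like the following. This is illustrative only; it is not the MetricsSourceAdapter code, and the names are made up for the example.

```java
import java.util.concurrent.ConcurrentHashMap;

// Toy illustration of the race-fix pattern: readers and the cache
// rebuild synchronize on the same lock, so getAttribute can never see
// a null or partially populated cache.
public class JmxCacheSketch {
  private final Object lock = new Object();
  private ConcurrentHashMap<String, Object> attrCache;
  private long lastUpdate;

  Object getAttribute(String name) {
    synchronized (lock) {
      long now = System.currentTimeMillis();
      if (attrCache == null || now - lastUpdate > 10_000) {
        attrCache = rebuild();   // published only while holding the lock
        lastUpdate = now;
      }
      return attrCache.get(name);
    }
  }

  private ConcurrentHashMap<String, Object> rebuild() {
    ConcurrentHashMap<String, Object> m = new ConcurrentHashMap<>();
    m.put("NumActiveSources", 1);  // stand-in for real metric snapshots
    return m;
  }

  public static void main(String[] args) {
    System.out.println(new JmxCacheSketch().getAttribute("NumActiveSources"));
  }
}
```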






[jira] [Commented] (HADOOP-13380) TestBasicDiskValidator should not write data to /tmp

2016-08-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412611#comment-15412611
 ] 

Hudson commented on HADOOP-13380:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #10238 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10238/])
HADOOP-13380. TestBasicDiskValidator should not write data to /tmp (lei: rev 
6418edd6feeafc0204536e1860942eeb1cb1a9ce)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestDiskChecker.java
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/util/TestBasicDiskValidator.java


> TestBasicDiskValidator should not write data to /tmp
> 
>
> Key: HADOOP-13380
> URL: https://issues.apache.org/jira/browse/HADOOP-13380
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.9.0
>Reporter: Lei (Eddy) Xu
>Assignee: Yufei Gu
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13380.001.patch, HADOOP-13380.002.patch
>
>
> In {{TestBasicDiskValidator}}, the following code is confusing
> {code}
>File localDir = File.createTempFile("test", "tmp");
> try {
>if (isDir) {
>// reuse the file path generated by File#createTempFile to create a dir
>   localDir.delete();
>localDir.mkdir();
> }
> {code}
> Btw, as suggested in https://wiki.apache.org/hadoop/CodeReviewChecklist, unit 
> tests should not write data into {{/tmp}}:
> bq. * unit tests do not write any temporary files to /tmp (instead, the tests 
> should write to the location specified by the test.build.data system property)
> Finally, these file creations / deletions should use {{Files}}, so that any 
> error can be thrown as an {{IOE}}.
>  
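The two suggestions above — write under test.build.data rather than /tmp, and use java.nio.file.Files so failures surface as IOException instead of a silently ignored boolean from File#mkdir — can be sketched as follows. The names here are illustrative, not the actual patch.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

// Sketch: create a per-test temp directory under the location given by
// the test.build.data system property. Files.createDirectories and
// Files.createTempDirectory throw IOException on failure, unlike
// File#mkdir which returns false.
public class TestDirSketch {
  static Path createTestDir() throws IOException {
    Path base = Paths.get(
        System.getProperty("test.build.data", "target/test/data"));
    Files.createDirectories(base);  // throws IOE if the base can't be made
    return Files.createTempDirectory(base, "disk-validator");
  }

  public static void main(String[] args) throws IOException {
    System.out.println(Files.isDirectory(createTestDir()));
  }
}
```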






[jira] [Commented] (HADOOP-13395) Enhance TestKMSAudit

2016-08-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412612#comment-15412612
 ] 

Hudson commented on HADOOP-13395:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #10238 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10238/])
HADOOP-13395. Enhance TestKMSAudit. Contributed by Xiao Chen. (weichiu: rev 
070548943a16370a74277d1b1d10b713e2ca81d0)
* 
hadoop-common-project/hadoop-kms/src/main/java/org/apache/hadoop/crypto/key/kms/server/KMSAudit.java
* 
hadoop-common-project/hadoop-kms/src/test/java/org/apache/hadoop/crypto/key/kms/server/TestKMSAudit.java


> Enhance TestKMSAudit
> 
>
> Key: HADOOP-13395
> URL: https://issues.apache.org/jira/browse/HADOOP-13395
> Project: Hadoop Common
>  Issue Type: Test
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13395.01.patch, HADOOP-13395.02.patch, 
> HADOOP-13395.03.patch
>
>
> This jira serves the goals:
> - Enhance existing test cases in TestKMSAudit, to rule out flakiness.
> - Add a new test case about formatting for different events.
> This will help us ensure audit log compatibility when we add a new log format 
> to KMS.






[jira] [Updated] (HADOOP-13439) Fix race between TestMetricsSystemImpl and TestGangliaMetrics

2016-08-08 Thread Masatake Iwasaki (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13439?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Masatake Iwasaki updated HADOOP-13439:
--
Attachment: HADOOP-13439.001.patch

> Fix race between TestMetricsSystemImpl and TestGangliaMetrics
> -
>
> Key: HADOOP-13439
> URL: https://issues.apache.org/jira/browse/HADOOP-13439
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: test
>Reporter: Masatake Iwasaki
>Assignee: Chen Liang
>Priority: Minor
> Attachments: HADOOP-13439.001.patch, HADOOP-13439.001.patch
>
>
> TestGangliaMetrics#testGangliaMetrics2 set *.period to 120 but 8 was used.
> {noformat}
> 2016-06-27 15:21:31,480 INFO  impl.MetricsSystemImpl 
> (MetricsSystemImpl.java:startTimer(375)) - Scheduled snapshot period at 8 
> second(s).
> {noformat}






[jira] [Commented] (HADOOP-13461) NPE in KeyProvider.rollNewVersion

2016-08-08 Thread Lei (Eddy) Xu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412575#comment-15412575
 ] 

Lei (Eddy) Xu commented on HADOOP-13461:


[~xiaochen] Done.

> NPE in KeyProvider.rollNewVersion
> -
>
> Key: HADOOP-13461
> URL: https://issues.apache.org/jira/browse/HADOOP-13461
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.2
>Reporter: Colm O hEigeartaigh
>Priority: Minor
> Fix For: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>
> Attachments: HADOOP-13461.patch
>
>
> When KeyProvider.rollNewVersion(String name) is called, it first gets the 
> metadata for the given name. The javadoc states that the getMetadata(String 
> name) method can return null if the key doesn't exist. However, rollNewVersion 
> throws an NPE if the returned metadata is null.
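The defensive check this report implies can be sketched as below. This is a toy stand-in, not the actual KeyProvider code: the point is that a null result from the metadata lookup should become a clear IOException rather than an NPE further down.

```java
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

// Toy illustration: fail fast with a descriptive IOException when the
// metadata lookup returns null for an unknown key, instead of
// dereferencing it and triggering an NPE.
public class KeyRollSketch {
  static final Map<String, String> METADATA = new HashMap<>();

  static String rollNewVersion(String name) throws IOException {
    String meta = METADATA.get(name);  // stands in for getMetadata(name)
    if (meta == null) {
      throw new IOException("Can't find Metadata for key " + name);
    }
    return name + "@next";
  }

  public static void main(String[] args) {
    try {
      rollNewVersion("missing-key");
    } catch (IOException e) {
      System.out.println(e.getMessage());
    }
  }
}
```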






[jira] [Updated] (HADOOP-13395) Enhance TestKMSAudit

2016-08-08 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-13395:
-
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha2
   2.9.0
   2.8.0
   Status: Resolved  (was: Patch Available)

Thanks [~xiaochen] for the patch and [~andrew.wang] for the initial review. 
Committed to trunk, branch-2 and branch-2.8.

> Enhance TestKMSAudit
> 
>
> Key: HADOOP-13395
> URL: https://issues.apache.org/jira/browse/HADOOP-13395
> Project: Hadoop Common
>  Issue Type: Test
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Fix For: 2.8.0, 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13395.01.patch, HADOOP-13395.02.patch, 
> HADOOP-13395.03.patch
>
>
> This jira serves the goals:
> - Enhance existing test cases in TestKMSAudit, to rule out flakiness.
> - Add a new test case about formatting for different events.
> This will help us ensure audit log compatibility when we add a new log format 
> to KMS.






[jira] [Updated] (HADOOP-13395) Enhance TestKMSAudit

2016-08-08 Thread Wei-Chiu Chuang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13395?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wei-Chiu Chuang updated HADOOP-13395:
-
Hadoop Flags: Reviewed

> Enhance TestKMSAudit
> 
>
> Key: HADOOP-13395
> URL: https://issues.apache.org/jira/browse/HADOOP-13395
> Project: Hadoop Common
>  Issue Type: Test
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Attachments: HADOOP-13395.01.patch, HADOOP-13395.02.patch, 
> HADOOP-13395.03.patch
>
>
> This jira serves the goals:
> - Enhance existing test cases in TestKMSAudit, to rule out flakiness.
> - Add a new test case about formatting for different events.
> This will help us ensure audit log compatibility when we add a new log format 
> to KMS.






[jira] [Commented] (HADOOP-13380) TestBasicDiskValidator should not write data to /tmp

2016-08-08 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13380?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412566#comment-15412566
 ] 

Yufei Gu commented on HADOOP-13380:
---

Thanks a lot for the review and commit, [~eddyxu]!

> TestBasicDiskValidator should not write data to /tmp
> 
>
> Key: HADOOP-13380
> URL: https://issues.apache.org/jira/browse/HADOOP-13380
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.9.0
>Reporter: Lei (Eddy) Xu
>Assignee: Yufei Gu
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13380.001.patch, HADOOP-13380.002.patch
>
>
> In {{TestBasicDiskValidator}}, the following code is confusing
> {code}
>File localDir = File.createTempFile("test", "tmp");
> try {
>if (isDir) {
>// reuse the file path generated by File#createTempFile to create a dir
>   localDir.delete();
>localDir.mkdir();
> }
> {code}
> Btw, as suggested in https://wiki.apache.org/hadoop/CodeReviewChecklist, unit 
> tests should not write data into {{/tmp}}:
> bq. * unit tests do not write any temporary files to /tmp (instead, the tests 
> should write to the location specified by the test.build.data system property)
> Finally, these file creations / deletions should use {{Files}}, so that any 
> error can be thrown as an {{IOE}}.
>  






[jira] [Updated] (HADOOP-8522) ResetableGzipOutputStream creates invalid gzip files when finish() and resetState() are used

2016-08-08 Thread Mike Percy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Percy updated HADOOP-8522:
---
Attachment: HADOOP-8522-4.patch

> ResetableGzipOutputStream creates invalid gzip files when finish() and 
> resetState() are used
> 
>
> Key: HADOOP-8522
> URL: https://issues.apache.org/jira/browse/HADOOP-8522
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 1.0.3, 2.0.0-alpha
>Reporter: Mike Percy
>Assignee: Mike Percy
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-8522-4.patch
>
>
> ResetableGzipOutputStream creates invalid gzip files when finish() and 
> resetState() are used. The issue is that finish() flushes the compressor 
> buffer and writes the gzip CRC32 + data length trailer. After that, 
> resetState() does not repeat the gzip header, but simply starts writing more 
> deflate-compressed data. The resultant files are not readable by the Linux 
> "gunzip" tool. ResetableGzipOutputStream should write valid multi-member gzip 
> files.
> The gzip format is specified in [RFC 
> 1952|https://tools.ietf.org/html/rfc1952].
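The multi-member behavior the description asks for can be demonstrated with the JDK's own gzip classes: each member carries its own RFC 1952 header and CRC32/length trailer, and a conforming reader concatenates the members. The sketch below is illustrative only — it is not the ResetableGzipOutputStream patch — but it shows why a correct resetState() must start the next member with a fresh header, which constructing a new GZIPOutputStream does here.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// Writes several gzip members back-to-back into one byte stream, then
// reads them back. GZIPInputStream (Java 7+) and gunzip both handle
// multi-member streams by concatenating the decompressed members.
public class MultiMemberGzipDemo {
  static byte[] compressMembers(String... members) throws IOException {
    ByteArrayOutputStream sink = new ByteArrayOutputStream();
    for (String m : members) {
      // a fresh GZIPOutputStream writes a new gzip header for this member
      GZIPOutputStream gz = new GZIPOutputStream(sink);
      gz.write(m.getBytes("UTF-8"));
      gz.finish();  // flushes deflate data and writes CRC32 + length trailer
    }
    return sink.toByteArray();
  }

  static String decompress(byte[] data) throws IOException {
    GZIPInputStream in = new GZIPInputStream(new ByteArrayInputStream(data));
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    byte[] buf = new byte[4096];
    int n;
    while ((n = in.read(buf)) != -1) {
      out.write(buf, 0, n);
    }
    return out.toString("UTF-8");
  }

  public static void main(String[] args) throws IOException {
    System.out.println(decompress(compressMembers("hello ", "world")));
  }
}
```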






[jira] [Updated] (HADOOP-13380) TestBasicDiskValidator should not write data to /tmp

2016-08-08 Thread Lei (Eddy) Xu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13380?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Lei (Eddy) Xu updated HADOOP-13380:
---
   Resolution: Fixed
Fix Version/s: 3.0.0-alpha2
   2.9.0
   Status: Resolved  (was: Patch Available)

Thanks for the work, [~yufeigu]. +1. Committed to trunk and branch-2.

> TestBasicDiskValidator should not write data to /tmp
> 
>
> Key: HADOOP-13380
> URL: https://issues.apache.org/jira/browse/HADOOP-13380
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 2.9.0
>Reporter: Lei (Eddy) Xu
>Assignee: Yufei Gu
>Priority: Minor
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13380.001.patch, HADOOP-13380.002.patch
>
>
> In {{TestBasicDiskValidator}}, the following code is confusing
> {code}
>File localDir = File.createTempFile("test", "tmp");
> try {
>if (isDir) {
>// reuse the file path generated by File#createTempFile to create a dir
>   localDir.delete();
>localDir.mkdir();
> }
> {code}
> Btw, as suggested in https://wiki.apache.org/hadoop/CodeReviewChecklist, unit 
> tests should not write data into {{/tmp}}:
> bq. * unit tests do not write any temporary files to /tmp (instead, the tests 
> should write to the location specified by the test.build.data system property)
> Finally, these file creations / deletions should use {{Files}}, so that any 
> error can be thrown as an {{IOE}}.
>  






[jira] [Updated] (HADOOP-8522) ResetableGzipOutputStream creates invalid gzip files when finish() and resetState() are used

2016-08-08 Thread Mike Percy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Percy updated HADOOP-8522:
---
Attachment: (was: HADOOP-8522-3.patch)

> ResetableGzipOutputStream creates invalid gzip files when finish() and 
> resetState() are used
> 
>
> Key: HADOOP-8522
> URL: https://issues.apache.org/jira/browse/HADOOP-8522
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 1.0.3, 2.0.0-alpha
>Reporter: Mike Percy
>Assignee: Mike Percy
>  Labels: BB2015-05-TBR
>
> ResetableGzipOutputStream creates invalid gzip files when finish() and 
> resetState() are used. The issue is that finish() flushes the compressor 
> buffer and writes the gzip CRC32 + data length trailer. After that, 
> resetState() does not repeat the gzip header, but simply starts writing more 
> deflate-compressed data. The resultant files are not readable by the Linux 
> "gunzip" tool. ResetableGzipOutputStream should write valid multi-member gzip 
> files.
> The gzip format is specified in [RFC 
> 1952|https://tools.ietf.org/html/rfc1952].






[jira] [Updated] (HADOOP-8522) ResetableGzipOutputStream creates invalid gzip files when finish() and resetState() are used

2016-08-08 Thread Mike Percy (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-8522?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mike Percy updated HADOOP-8522:
---
Attachment: (was: HADOOP-8522-2a.patch)

> ResetableGzipOutputStream creates invalid gzip files when finish() and 
> resetState() are used
> 
>
> Key: HADOOP-8522
> URL: https://issues.apache.org/jira/browse/HADOOP-8522
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: io
>Affects Versions: 1.0.3, 2.0.0-alpha
>Reporter: Mike Percy
>Assignee: Mike Percy
>  Labels: BB2015-05-TBR
>
> ResetableGzipOutputStream creates invalid gzip files when finish() and 
> resetState() are used. The issue is that finish() flushes the compressor 
> buffer and writes the gzip CRC32 + data length trailer. After that, 
> resetState() does not repeat the gzip header, but simply starts writing more 
> deflate-compressed data. The resultant files are not readable by the Linux 
> "gunzip" tool. ResetableGzipOutputStream should write valid multi-member gzip 
> files.
> The gzip format is specified in [RFC 
> 1952|https://tools.ietf.org/html/rfc1952].






[jira] [Commented] (HADOOP-13395) Enhance TestKMSAudit

2016-08-08 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412555#comment-15412555
 ] 

Wei-Chiu Chuang commented on HADOOP-13395:
--

+1. The checkstyle warning is due to the long message-matching string; forcing 
it to be shorter would make it harder to read.

> Enhance TestKMSAudit
> 
>
> Key: HADOOP-13395
> URL: https://issues.apache.org/jira/browse/HADOOP-13395
> Project: Hadoop Common
>  Issue Type: Test
>  Components: kms
>Affects Versions: 2.6.0
>Reporter: Xiao Chen
>Assignee: Xiao Chen
>Priority: Minor
> Attachments: HADOOP-13395.01.patch, HADOOP-13395.02.patch, 
> HADOOP-13395.03.patch
>
>
> This jira serves the goals:
> - Enhance existing test cases in TestKMSAudit, to rule out flakiness.
> - Add a new test case about formatting for different events.
> This will help us ensure audit log compatibility when we add a new log format 
> to KMS.






[jira] [Commented] (HADOOP-12747) support wildcard in libjars argument

2016-08-08 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412554#comment-15412554
 ] 

Sangjin Lee commented on HADOOP-12747:
--

Thanks [~vicaya]! Unless there are objections, I'll commit it by EOD today.

> support wildcard in libjars argument
> 
>
> Key: HADOOP-12747
> URL: https://issues.apache.org/jira/browse/HADOOP-12747
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: util
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Attachments: HADOOP-12747.01.patch, HADOOP-12747.02.patch, 
> HADOOP-12747.03.patch, HADOOP-12747.04.patch, HADOOP-12747.05.patch, 
> HADOOP-12747.06.patch, HADOOP-12747.07.patch
>
>
> There is a problem when a user's job adds too many dependency jars on the 
> command line. The HADOOP_CLASSPATH part can be addressed, including by using 
> wildcards (\*), but the same cannot be done with the -libjars argument. Today 
> it takes only fully specified file paths.
> We may want to consider supporting wildcards as a way to help users in this 
> situation. The idea is to handle it the same way the JVM does it: \* expands 
> to the list of jars in that directory. It does not traverse into any child 
> directory.
> Also, it probably would be a good idea to do it only for libjars (i.e. don't 
> do it for -files and -archives).
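The JVM-style expansion described above — "dir/*" yields the jars directly inside dir, with no traversal into child directories — can be sketched as follows. The helper name and shape are hypothetical, for illustration only; this is not the patch.

```java
import java.io.File;
import java.util.ArrayList;
import java.util.List;

// Toy sketch of classpath-style wildcard expansion for a libjars entry:
// "dir/*" expands to the .jar files directly inside dir; other entries
// pass through unchanged.
public class LibJarsWildcard {
  static List<String> expand(String entry) {
    List<String> out = new ArrayList<>();
    if (!entry.endsWith("*")) {
      out.add(entry);  // fully specified path: keep as-is
      return out;
    }
    File dir = new File(entry.substring(0, entry.length() - 1));
    File[] children = dir.listFiles();
    if (children != null) {
      for (File f : children) {
        // jars at the top level only; do not recurse into subdirectories
        if (f.isFile() && f.getName().endsWith(".jar")) {
          out.add(f.getPath());
        }
      }
    }
    return out;
  }

  public static void main(String[] args) {
    System.out.println(expand("lib/*"));
  }
}
```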






[jira] [Updated] (HADOOP-13397) Add dockerfile for Hadoop

2016-08-08 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-13397:
--
Attachment: HADOOP-13397.DNC001.patch

I'm uploading DNC001 ("do not commit" #1) to give folks something to play with, 
get some feedback, etc.

While the patch is intended for trunk, you can still use it to build 
Dockerfiles (theoretically) for any branch-2 release.

Docs are missing, but running:

{code}
mkhdf create --version 2.7.2 --dockerfile /tmp/Dockerfile
{code}

will generate a simple Xenial-based Dockerfile that downloads 2.7.2, does gpg 
verification, etc, with a bootstrap file in /tmp/hadoop-bootstrap.sh to get the 
daemons started.

Lots of this is still untested, but it should be enough for folks to provide 
some feedback if this is useful.  Some portions (such as supplying stubs 
outside of the share dir) aren't quite baked in yet, but that will be coming. 

There are lots of to do's here, too many to name, but some of the big ones are:
* clean up parameter handling to be less finicky
* support for RPMs, DEBs, etc,
* support for non-bundled bits (e.g., supplying your own tar ball)
* actually verify the daemons work. :)

The current focus was to build something that would just be a raw dockerfile 
without any external input.

> Add dockerfile for Hadoop
> -
>
> Key: HADOOP-13397
> URL: https://issues.apache.org/jira/browse/HADOOP-13397
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Klaus Ma
>Assignee: Allen Wittenauer
> Attachments: HADOOP-13397.DNC001.patch
>
>
> For now, there's no community-version Dockerfile in Hadoop; most docker 
> images are provided by vendors, e.g. 
> 1. Cloudera's image: https://hub.docker.com/r/cloudera/quickstart/
> 2.  From HortonWorks sequenceiq: 
> https://hub.docker.com/r/sequenceiq/hadoop-docker/
> 3. MapR provides the mapr-sandbox-base: 
> https://hub.docker.com/r/maprtech/mapr-sandbox-base/
> The proposal of this JIRA is to provide a community version Dockerfile in 
> Hadoop, and here's some requirement:
> 1. Separate docker images for master & agents, e.g. resource manager & node 
> manager
> 2. Default configuration to start master & agent instead of configuring 
> manually
> 3. Start Hadoop process as no-daemon
> Here's my dockerfile to start master/agent: 
> https://github.com/k82cn/outrider/tree/master/kubernetes/imgs/yarn
> I'd like to contribute it after polishing :).
> Email Thread : 
> http://mail-archives.apache.org/mod_mbox/hadoop-user/201607.mbox/%3CSG2PR04MB162977CFE150444FA022510FB6370%40SG2PR04MB1629.apcprd04.prod.outlook.com%3E






[jira] [Created] (HADOOP-13474) Add more details in the log when a token is expired

2016-08-08 Thread Xiao Chen (JIRA)
Xiao Chen created HADOOP-13474:
--

 Summary: Add more details in the log when a token is expired
 Key: HADOOP-13474
 URL: https://issues.apache.org/jira/browse/HADOOP-13474
 Project: Hadoop Common
  Issue Type: Improvement
  Components: security
Affects Versions: 2.6.0
Reporter: Xiao Chen
Assignee: Xiao Chen


Currently when there's an expired token, we see this from the log:
{noformat}
2016-08-06 07:13:20,807 WARN 
org.apache.hadoop.security.authentication.server.AuthenticationFilter: 
AuthenticationToken ignored: AuthenticationToken expired
2016-08-06 09:55:48,665 WARN 
org.apache.hadoop.security.authentication.server.AuthenticationFilter: 
AuthenticationToken ignored: AuthenticationToken expired
2016-08-06 10:01:41,452 WARN 
org.apache.hadoop.security.authentication.server.AuthenticationFilter: 
AuthenticationToken ignored: AuthenticationToken expired
{noformat}

We should log a better 
[message|https://github.com/apache/hadoop/blob/trunk/hadoop-common-project/hadoop-auth/src/main/java/org/apache/hadoop/security/authentication/server/AuthenticationFilter.java#L456],
 to include more details (e.g. token type, username, token id) for 
troubleshooting purposes.
I don't think the additional information exposed will lead to any security 
concern, since the token is expired anyway.
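As a sketch of what such a message could look like, the snippet below formats the extra fields into the warning. `TokenExpiryLogDemo` and `FakeToken` are hypothetical stand-ins (the field names loosely mirror `AuthenticationToken` accessors); this is an illustration of the idea, not the actual patch.

```java
import java.util.Date;

public class TokenExpiryLogDemo {
    // Stand-in for the real AuthenticationToken; the real class exposes
    // similar information through accessor methods.
    static final class FakeToken {
        final String userName;
        final String type;
        final long expires;
        FakeToken(String userName, String type, long expires) {
            this.userName = userName;
            this.type = type;
            this.expires = expires;
        }
    }

    // Build a warning that carries the details needed for troubleshooting,
    // instead of the bare "AuthenticationToken expired" text.
    static String expiredMessage(FakeToken t) {
        return String.format(
            "AuthenticationToken ignored: token expired: userName=%s, type=%s, expires=%s",
            t.userName, t.type, new Date(t.expires));
    }

    public static void main(String[] args) {
        FakeToken t = new FakeToken("alice", "kerberos", 0L);
        System.out.println(expiredMessage(t));
    }
}
```

Since the token is already expired, logging these fields at WARN should carry no extra exposure, matching the reasoning above.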






[jira] [Commented] (HADOOP-13441) Document LdapGroupsMapping keystore password properties

2016-08-08 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13441?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412435#comment-15412435
 ] 

Wei-Chiu Chuang commented on HADOOP-13441:
--

Hi [~yuanbo], thanks again for the updated patch. This is largely good.

A couple of comments:

* instead of skipping the properties in TestCommonConfigurationFields, can you 
define these property constants in {{CommonConfigurationKeysPublic}}, for 
example,
{code:title=CommonConfigurationKeysPublic.java}
public static final String  HADOOP_SECURITY_CREDENTIAL_PASSWORD_FILE_KEY = 
"hadoop.security.credstore.java-keystore-provider.password-file";
{code}
and then in AbstractJavaKeyStoreProvider.java:
{code:title=AbstractJavaKeyStoreProvider.java}
public static final String CREDENTIAL_PASSWORD_FILE_KEY = 
CommonConfigurationKeysPublic.HADOOP_SECURITY_CREDENTIAL_PASSWORD_FILE_KEY;
{code}

* hadoop.security.group.mapping.ldap.bind.password.file
{quote}
+The path to a file containing the password of the bind user. If
+the password is not configured in credential providers and the property
+hadoop.security.group.mapping.ldap.bind.password, LDAPGroupsMapping
+reads password from the file.
{quote}
should be "and the property hadoop.security.group.mapping.ldap.bind.password is 
not set"

Similarly the same change is needed for 
{{hadoop.security.group.mapping.ldap.ssl.keystore.password.file}}.

* GroupsMapping.md
{quote}
+In addition, specify the path to the keystore file for SSL connection in 
`hadoop.security.group.mapping.ldap.ssl.keystore` and keystore password in 
`hadoop.security.group.mapping.ldap.ssl.keystore.password`, at the same time, 
make sure `hadoop.security.credential.clear-text-fallback` is true.
+Alternatively, store the keystore password in a file, and point 
`hadoop.security.group.mapping.ldap.ssl.keystore.password.file` to that file.
+For security purposes, this file should be readable only by the Unix user 
running the daemons, and for preventing recursive dependency, this file should 
be a local file.
{quote}
This is good. Can you also add that "keystore password in 
`hadoop.security.group.mapping.ldap.ssl.keystore.password`" is highly 
discouraged, because it exposes the password in the configuration file. 
Instead, use the credential file and use 
`hadoop.security.group.mapping.ldap.ssl.keystore.password` as the alias in the 
credential file for password, or use 
`hadoop.security.group.mapping.ldap.ssl.keystore.password.file`.
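The lookup order discussed above (credential provider first, then the clear-text property, then the password file) can be sketched as follows. The `Map`-based "provider" and all names here are hypothetical stand-ins for Hadoop's `Configuration.getPassword` machinery, used only to make the precedence concrete.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Map;

public class PasswordLookupDemo {
    // Resolve a password in the order described in the review:
    // 1. credential-provider alias, 2. clear-text config value (discouraged),
    // 3. local password file readable only by the daemon user.
    public static String resolvePassword(Map<String, String> credentialProvider,
                                         String alias,
                                         String clearTextValue,
                                         Path passwordFile) throws IOException {
        String fromProvider = credentialProvider.get(alias);
        if (fromProvider != null) {
            return fromProvider;               // 1. credential provider wins
        }
        if (clearTextValue != null) {
            return clearTextValue;             // 2. clear-text fallback
        }
        if (passwordFile != null && Files.exists(passwordFile)) {
            return Files.readString(passwordFile).trim();  // 3. password file
        }
        return null;                           // nothing configured
    }
}
```

The point of documenting the order is that operators can see why setting the clear-text property silently shadows the password file.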

> Document LdapGroupsMapping keystore password properties
> ---
>
> Key: HADOOP-13441
> URL: https://issues.apache.org/jira/browse/HADOOP-13441
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: security
>Affects Versions: 2.6.0
>Reporter: Wei-Chiu Chuang
>Assignee: Yuanbo Liu
>Priority: Minor
>  Labels: documentation
> Attachments: HADOOP-13441.001.patch, HADOOP-13441.002.patch, 
> HADOOP-13441.003.patch
>
>
> A few properties are not documented.
> {{hadoop.security.group.mapping.ldap.ssl.keystore.password}}
> This property is used as an alias to get the password from credential 
> providers, or to fall back to using the value as the password in clear text. 
> There is also a caveat that the credential provider cannot be an HDFS-based 
> file system, as mentioned in HADOOP-11934, to prevent a cyclic dependency 
> issue.
> This should be documented in core-default.xml and GroupsMapping.md
> {{hadoop.security.credential.clear-text-fallback}}
> This property controls whether or not to fall back to storing credential 
> password as cleartext.
> This should be documented in core-default.xml.
> {{hadoop.security.credential.provider.path}}
> This is mentioned in _CredentialProvider API Guide_, but not in 
> core-default.xml
> The "Supported Features" in _CredentialProvider API Guide_ should link back 
> to GroupsMapping.md#LDAP Groups Mapping 
> {{hadoop.security.credstore.java-keystore-provider.password-file}}
> This is the password file to protect credential files.






[jira] [Moved] (HADOOP-13473) TestTracing#testTracing is failing in trunk

2016-08-08 Thread Kihwal Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13473?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Kihwal Lee moved HDFS-10732 to HADOOP-13473:


Affects Version/s: (was: 3.0.0-alpha2)
   3.0.0-alpha2
   2.9.0
  Key: HADOOP-13473  (was: HDFS-10732)
  Project: Hadoop Common  (was: Hadoop HDFS)

> TestTracing#testTracing is failing in trunk 
> 
>
> Key: HADOOP-13473
> URL: https://issues.apache.org/jira/browse/HADOOP-13473
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Wei-Chiu Chuang
>Assignee: Daryn Sharp
>
> Looks like the test has been failing since HADOOP-13438 was committed.
> https://builds.apache.org/job/PreCommit-HDFS-Build/16338/testReport/org.apache.hadoop.tracing/TestTracing/testTracing/






[jira] [Commented] (HADOOP-13473) TestTracing#testTracing is failing in trunk

2016-08-08 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13473?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412370#comment-15412370
 ] 

Kihwal Lee commented on HADOOP-13473:
-

Moved this to Common.

> TestTracing#testTracing is failing in trunk 
> 
>
> Key: HADOOP-13473
> URL: https://issues.apache.org/jira/browse/HADOOP-13473
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.9.0, 3.0.0-alpha2
>Reporter: Wei-Chiu Chuang
>Assignee: Daryn Sharp
>
> Looks like the test has been failing since HADOOP-13438 was committed.
> https://builds.apache.org/job/PreCommit-HDFS-Build/16338/testReport/org.apache.hadoop.tracing/TestTracing/testTracing/






[jira] [Commented] (HADOOP-13403) AzureNativeFileSystem rename/delete performance improvements

2016-08-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412361#comment-15412361
 ] 

Hudson commented on HADOOP-13403:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #10236 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10236/])
HADOOP-13403. AzureNativeFileSystem rename/delete performance (cnauroth: rev 
2ed58c40e5dcbf5c5303c00e85096085b1055f85)
* 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/NativeAzureFileSystem.java
* 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureFileSystemThreadPoolExecutor.java
* 
hadoop-tools/hadoop-azure/src/test/java/org/apache/hadoop/fs/azure/TestFileSystemOperationsWithThreads.java
* 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureFileSystemThreadTask.java
* hadoop-tools/hadoop-azure/src/site/markdown/index.md
* 
hadoop-tools/hadoop-azure/src/main/java/org/apache/hadoop/fs/azure/AzureNativeFileSystemStore.java
* hadoop-tools/hadoop-azure/src/test/resources/log4j.properties


> AzureNativeFileSystem rename/delete performance improvements
> 
>
> Key: HADOOP-13403
> URL: https://issues.apache.org/jira/browse/HADOOP-13403
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: azure
>Affects Versions: 2.7.2
>Reporter: Subramanyam Pattipaka
>Assignee: Subramanyam Pattipaka
> Fix For: 2.9.0
>
> Attachments: HADOOP-13403-001.patch, HADOOP-13403-002.patch, 
> HADOOP-13403-003.patch, HADOOP-13403-004.patch, HADOOP-13403-005.patch, 
> HADOOP-13403-006.patch
>
>
> WASB Performance Improvements
> Problem
> ---
> Azure Native File System operations such as rename/delete on source 
> directories that contain a large number of directories and/or files are 
> experiencing performance issues. Here are the possible reasons:
> a) We first list all files under the source directory hierarchically. This 
> is a serial operation.
> b) After collecting the entire list of files under a folder, we delete or 
> rename the files one by one serially.
> c) There is no logging information available for these costly operations, 
> even in DEBUG mode, leading to difficulty in understanding WASB performance 
> issues.
> Proposal
> -
> Step 1: Rename and delete operations will generate a list of all files 
> under the source folder. We need to use the Azure flat listing option to get 
> the list with a single request to the Azure store. We have introduced the 
> config fs.azure.flatlist.enable to enable this option. The default value is 
> 'false', which means flat listing is disabled.
> Step 2: Create the thread pool and threads dynamically based on user 
> configuration. These thread pools will be deleted after the operation is 
> over. We are introducing two new configs:
>   a)  fs.azure.rename.threads: Config to set the number of rename 
> threads. The default value is 0, which means no threading.
>   b)  fs.azure.delete.threads: Config to set the number of delete 
> threads. The default value is 0, which means no threading.
>   We have provided debug log information on the number of threads not used 
> for the operation, which can be useful.
>   Failure Scenarios:
>   If we fail to create the thread pool for ANY reason (for example, trying 
> to create it with a very large thread count such as 100), we fall back to 
> the serial operation.
> Step 3: Blob operations can be done in parallel using multiple threads 
> executing the following snippet
>   while ((currentIndex = fileIndex.getAndIncrement()) < files.length) {
>   FileMetadata file = files[currentIndex];
>   Rename/delete(file);
>   }
>   The above strategy depends on the fact that all files are stored in a 
> final array and each thread determines the next index to process via a 
> synchronized counter. The advantage of this strategy is that even if the 
> user configures a large number of unusable threads, we always ensure that 
> work doesn't get serialized due to lagging threads.
>   We are logging the following information, which can be useful for tuning 
> the number of threads:
>   a) Number of unusable threads
>   b) Time taken by each thread
>   c) Number of files processed by each thread
>   d) Total time taken for the operation
>   Failure Scenarios:
>   Failure to queue a thread execution request shouldn't be an issue as 
> long as at least one thread completes execution successfully. If we 
> couldn't schedule even one thread, we should take the serial path. 
> Exceptions raised while executing threads are still considered regular 
> exceptions and returned to the client as an operation failure. Exceptions 
> raised while stopping threads and deleting the thread pool can be ignored 
> if the operation on all files completed without any issue.
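The shared-index snippet quoted in Step 3 can be expanded into a small self-contained sketch. `ParallelDeleteDemo`, `processAll`, and the string "files" are illustrative stand-ins, not WASB code: each worker thread claims the next array index from a shared `AtomicInteger`, so a slow thread can never force the remaining work to be processed serially.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ParallelDeleteDemo {
    // Process every entry of 'files' using the quoted getAndIncrement()
    // pattern: threads race for indices instead of being assigned fixed
    // partitions, so idle or lagging threads cannot serialize the batch.
    public static List<String> processAll(String[] files, int threads)
            throws InterruptedException {
        List<String> processed = Collections.synchronizedList(new ArrayList<>());
        AtomicInteger fileIndex = new AtomicInteger(0);
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int i = 0; i < threads; i++) {
            pool.execute(() -> {
                int current;
                while ((current = fileIndex.getAndIncrement()) < files.length) {
                    // Stand-in for the real rename/delete of one file.
                    processed.add(files[current]);
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return processed;
    }

    public static void main(String[] args) throws InterruptedException {
        String[] files = {"a", "b", "c", "d", "e"};
        System.out.println(processAll(files, 3).size()); // prints 5
    }
}
```

Each index is claimed exactly once because `getAndIncrement()` is atomic, which is why the description above can safely over-provision threads.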

[jira] [Commented] (HADOOP-13457) Remove hardcoded absolute path for shell executable

2016-08-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412362#comment-15412362
 ] 

Hudson commented on HADOOP-13457:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #10236 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10236/])
HADOOP-13457. Remove hardcoded absolute path for shell executable. (Chen (arp: 
rev 58e1523c8ea1363ea8ab115fb718227a90bfab87)
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/util/Shell.java


> Remove hardcoded absolute path for shell executable
> ---
>
> Key: HADOOP-13457
> URL: https://issues.apache.org/jira/browse/HADOOP-13457
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Reporter: Arpit Agarwal
>Assignee: Chen Liang
> Fix For: 2.8.0
>
> Attachments: HADOOP-13457.001.patch
>
>
> Shell.java has a hardcoded path to /bin/bash which is not correct on all 
> platforms. 
> Pointed out by [~aw] while reviewing HADOOP-13434.
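One way to avoid the hardcoded path is to resolve the executable against the `PATH` environment variable and fall back to the bare name so the OS does its own lookup. `ShellLocatorDemo` and `findExecutable` are hypothetical names sketching this idea under stated assumptions; this is not the code in the attached patch.

```java
import java.io.File;

public class ShellLocatorDemo {
    // Search each PATH entry for an executable with the given name;
    // if none is found, return the bare name and let the platform's
    // process launcher resolve it.
    public static String findExecutable(String name, String pathEnv) {
        if (pathEnv != null) {
            for (String dir : pathEnv.split(File.pathSeparator)) {
                File candidate = new File(dir, name);
                if (candidate.isFile() && candidate.canExecute()) {
                    return candidate.getAbsolutePath();
                }
            }
        }
        return name; // fall back: no absolute path baked in
    }

    public static void main(String[] args) {
        System.out.println(findExecutable("bash", System.getenv("PATH")));
    }
}
```

The fallback matters on platforms where bash lives outside /bin (e.g. some BSDs), which is exactly the portability issue the JIRA describes.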






[jira] [Commented] (HADOOP-13438) Optimize IPC server protobuf decoding

2016-08-08 Thread Daryn Sharp (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412340#comment-15412340
 ] 

Daryn Sharp commented on HADOOP-13438:
--

It's the problem.  I'll fix it.

> Optimize IPC server protobuf decoding
> -
>
> Key: HADOOP-13438
> URL: https://issues.apache.org/jira/browse/HADOOP-13438
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13438.patch, HADOOP-13438.patch.1
>
>
> The current use of the protobuf API uses an expensive code path.  The builder 
> uses the parser to instantiate a message, then copies the message into the 
> builder.  The parser is creating multi-layered internally buffering streams 
> that cause excessive byte[] allocations.
> Using the parser directly with a coded input stream backed by the byte[] from 
> the wire will take a fast-path straight to the pb message's ctor.  
> Substantially less garbage is generated.






[jira] [Updated] (HADOOP-13457) Remove hardcoded absolute path for shell executable

2016-08-08 Thread Arpit Agarwal (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Arpit Agarwal updated HADOOP-13457:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
   Fix Version/s: 2.8.0
Target Version/s:   (was: 2.8.0)
  Status: Resolved  (was: Patch Available)

I've committed this for 2.8.0. Thanks for the contribution [~vagarychen].

Good catch on {{/bin/ls}}. If it needs a fix we can do so in a separate Jira.

> Remove hardcoded absolute path for shell executable
> ---
>
> Key: HADOOP-13457
> URL: https://issues.apache.org/jira/browse/HADOOP-13457
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Reporter: Arpit Agarwal
>Assignee: Chen Liang
> Fix For: 2.8.0
>
> Attachments: HADOOP-13457.001.patch
>
>
> Shell.java has a hardcoded path to /bin/bash which is not correct on all 
> platforms. 
> Pointed out by [~aw] while reviewing HADOOP-13434.






[jira] [Updated] (HADOOP-13403) AzureNativeFileSystem rename/delete performance improvements

2016-08-08 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13403:
---
Release Note: WASB has added an optional capability to execute certain 
FileSystem operations in parallel on multiple threads for improved performance. 
 Please refer to the Azure Blob Storage documentation page for more information 
on how to enable and control the feature.

> AzureNativeFileSystem rename/delete performance improvements
> 
>
> Key: HADOOP-13403
> URL: https://issues.apache.org/jira/browse/HADOOP-13403
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: azure
>Affects Versions: 2.7.2
>Reporter: Subramanyam Pattipaka
>Assignee: Subramanyam Pattipaka
> Fix For: 2.9.0
>
> Attachments: HADOOP-13403-001.patch, HADOOP-13403-002.patch, 
> HADOOP-13403-003.patch, HADOOP-13403-004.patch, HADOOP-13403-005.patch, 
> HADOOP-13403-006.patch
>
>
> WASB Performance Improvements
> Problem
> ---
> Azure Native File System operations such as rename/delete on source 
> directories that contain a large number of directories and/or files are 
> experiencing performance issues. Here are the possible reasons:
> a) We first list all files under the source directory hierarchically. This 
> is a serial operation.
> b) After collecting the entire list of files under a folder, we delete or 
> rename the files one by one serially.
> c) There is no logging information available for these costly operations, 
> even in DEBUG mode, leading to difficulty in understanding WASB performance 
> issues.
> Proposal
> -
> Step 1: Rename and delete operations will generate a list of all files 
> under the source folder. We need to use the Azure flat listing option to get 
> the list with a single request to the Azure store. We have introduced the 
> config fs.azure.flatlist.enable to enable this option. The default value is 
> 'false', which means flat listing is disabled.
> Step 2: Create the thread pool and threads dynamically based on user 
> configuration. These thread pools will be deleted after the operation is 
> over. We are introducing two new configs:
>   a)  fs.azure.rename.threads: Config to set the number of rename 
> threads. The default value is 0, which means no threading.
>   b)  fs.azure.delete.threads: Config to set the number of delete 
> threads. The default value is 0, which means no threading.
>   We have provided debug log information on the number of threads not used 
> for the operation, which can be useful.
>   Failure Scenarios:
>   If we fail to create the thread pool for ANY reason (for example, trying 
> to create it with a very large thread count such as 100), we fall back to 
> the serial operation.
> Step 3: Blob operations can be done in parallel using multiple threads 
> executing the following snippet
>   while ((currentIndex = fileIndex.getAndIncrement()) < files.length) {
>   FileMetadata file = files[currentIndex];
>   Rename/delete(file);
>   }
>   The above strategy depends on the fact that all files are stored in a 
> final array and each thread determines the next index to process via a 
> synchronized counter. The advantage of this strategy is that even if the 
> user configures a large number of unusable threads, we always ensure that 
> work doesn't get serialized due to lagging threads.
>   We are logging the following information, which can be useful for tuning 
> the number of threads:
>   a) Number of unusable threads
>   b) Time taken by each thread
>   c) Number of files processed by each thread
>   d) Total time taken for the operation
>   Failure Scenarios:
>   Failure to queue a thread execution request shouldn't be an issue as 
> long as at least one thread completes execution successfully. If we 
> couldn't schedule even one thread, we should take the serial path. 
> Exceptions raised while executing threads are still considered regular 
> exceptions and returned to the client as an operation failure. Exceptions 
> raised while stopping threads and deleting the thread pool can be ignored 
> if the operation on all files completed without any issue.






[jira] [Updated] (HADOOP-13403) AzureNativeFileSystem rename/delete performance improvements

2016-08-08 Thread Chris Nauroth (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris Nauroth updated HADOOP-13403:
---
  Resolution: Fixed
Hadoop Flags: Reviewed
  Status: Resolved  (was: Patch Available)

[~pattipaka], thank you again for revising the patch.  +1 for patch 006.  I 
have committed this to trunk and branch-2.

> AzureNativeFileSystem rename/delete performance improvements
> 
>
> Key: HADOOP-13403
> URL: https://issues.apache.org/jira/browse/HADOOP-13403
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: azure
>Affects Versions: 2.7.2
>Reporter: Subramanyam Pattipaka
>Assignee: Subramanyam Pattipaka
> Fix For: 2.9.0
>
> Attachments: HADOOP-13403-001.patch, HADOOP-13403-002.patch, 
> HADOOP-13403-003.patch, HADOOP-13403-004.patch, HADOOP-13403-005.patch, 
> HADOOP-13403-006.patch
>
>
> WASB Performance Improvements
> Problem
> ---
> Azure Native File System operations such as rename/delete on source 
> directories that contain a large number of directories and/or files are 
> experiencing performance issues. Here are the possible reasons:
> a) We first list all files under the source directory hierarchically. This 
> is a serial operation.
> b) After collecting the entire list of files under a folder, we delete or 
> rename the files one by one serially.
> c) There is no logging information available for these costly operations, 
> even in DEBUG mode, leading to difficulty in understanding WASB performance 
> issues.
> Proposal
> -
> Step 1: Rename and delete operations will generate a list of all files 
> under the source folder. We need to use the Azure flat listing option to get 
> the list with a single request to the Azure store. We have introduced the 
> config fs.azure.flatlist.enable to enable this option. The default value is 
> 'false', which means flat listing is disabled.
> Step 2: Create the thread pool and threads dynamically based on user 
> configuration. These thread pools will be deleted after the operation is 
> over. We are introducing two new configs:
>   a)  fs.azure.rename.threads: Config to set the number of rename 
> threads. The default value is 0, which means no threading.
>   b)  fs.azure.delete.threads: Config to set the number of delete 
> threads. The default value is 0, which means no threading.
>   We have provided debug log information on the number of threads not used 
> for the operation, which can be useful.
>   Failure Scenarios:
>   If we fail to create the thread pool for ANY reason (for example, trying 
> to create it with a very large thread count such as 100), we fall back to 
> the serial operation.
> Step 3: Blob operations can be done in parallel using multiple threads 
> executing the following snippet
>   while ((currentIndex = fileIndex.getAndIncrement()) < files.length) {
>   FileMetadata file = files[currentIndex];
>   Rename/delete(file);
>   }
>   The above strategy depends on the fact that all files are stored in a 
> final array and each thread determines the next index to process via a 
> synchronized counter. The advantage of this strategy is that even if the 
> user configures a large number of unusable threads, we always ensure that 
> work doesn't get serialized due to lagging threads.
>   We are logging the following information, which can be useful for tuning 
> the number of threads:
>   a) Number of unusable threads
>   b) Time taken by each thread
>   c) Number of files processed by each thread
>   d) Total time taken for the operation
>   Failure Scenarios:
>   Failure to queue a thread execution request shouldn't be an issue as 
> long as at least one thread completes execution successfully. If we 
> couldn't schedule even one thread, we should take the serial path. 
> Exceptions raised while executing threads are still considered regular 
> exceptions and returned to the client as an operation failure. Exceptions 
> raised while stopping threads and deleting the thread pool can be ignored 
> if the operation on all files completed without any issue.






[jira] [Commented] (HADOOP-10682) Metrics are not output in trunk

2016-08-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10682?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412313#comment-15412313
 ] 

Hudson commented on HADOOP-10682:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #10235 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10235/])
HADOOP-10682. Replace FsDatasetImpl object lock with a separate lock (arp: rev 
8c0638471f8f1dd47667b2d6727d4d2d54e4b48c)
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/SimulatedFSDataset.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/FsDatasetSpi.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestDirectoryScanner.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsVolumeImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/fsdataset/impl/FsDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/TestBlockRecovery.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DiskBalancer.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/datanode/extdataset/ExternalDatasetImpl.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DirectoryScanner.java
* 
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/DataNode.java


> Metrics are not output in trunk
> ---
>
> Key: HADOOP-10682
> URL: https://issues.apache.org/jira/browse/HADOOP-10682
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: metrics
>Reporter: Akira Ajisaka
>
> Metrics are not output in trunk by the following configuration:
> {code}
> *.sink.file.class=org.apache.Hadoop.metrics2.sink.FileSink
> *.period=10
> namenode.sink.file.filename=namenode-metrics.out
> {code}
> The below change worked well.
> {code}
> - namenode.sink.file.filename=namenode-metrics.out
> + NameNode.sink.file.filename=namenode-metrics.out
> {code}
> It means that an old configuration doesn't work on trunk. We should fix it, 
> or document that "NameNode" must be used.






[jira] [Commented] (HADOOP-13457) Remove hardcoded absolute path for shell executable

2016-08-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412307#comment-15412307
 ] 

Hadoop QA commented on HADOOP-13457:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
15s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
9s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
34s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
30s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
31s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
13s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 44m 19s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12822625/HADOOP-13457.001.patch
 |
| JIRA Issue | HADOOP-13457 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 3a29886024de 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6255859 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10202/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10202/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Remove hardcoded absolute path for shell executable
> ---
>
> Key: HADOOP-13457
> URL: https://issues.apache.org/jira/browse/HADOOP-13457
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Reporter: Arpit Agarwal
>Assignee: Chen Liang
> Attachments: HADOOP-13457.001.patch
>
>
> Shell.java has a hardcoded path to /b

[jira] [Commented] (HADOOP-13461) NPE in KeyProvider.rollNewVersion

2016-08-08 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13461?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412303#comment-15412303
 ] 

Xiao Chen commented on HADOOP-13461:


Thanks [~coheig] for reporting the issue and providing a fix. The fix looks 
good.

Nits in the test:
- We usually assert that an exception is thrown by placing {{Assert.fail}} after 
the line where the exception is expected, and then verify that the caught 
exception is the one we wanted. So in this case, we can change
{code}
try {
  kp.rollNewVersion("unknown");
  assertTrue("should have thrown", false);
} catch (IOException e) {
  assertTrue(true);
}
{code}
to
{code}
try {
  kp.rollNewVersion("unknown");
  fail("should have thrown");
} catch (IOException e) {
  GenericTestUtils.assertExceptionContains("Can't find Metadata for key ", e);
}
{code}

We usually use Affects / Target Versions when filing a jira. Fix Versions are 
used to track where the jira is actually committed, and are set by committers 
at check-in time. Please correct it, and refer to 
https://wiki.apache.org/hadoop/HowToContribute for details.


And I see you cannot assign the jira to yourself now. Sorry about the 
inconvenience; there are jira permissions that need to be set, and I can't do 
that yet, so pinging [~ajisakaa] and [~eddyxu] for help. (Akira / Eddy, could 
you also grant me committer permission so I can do it in the future? Thanks!)

> NPE in KeyProvider.rollNewVersion
> -
>
> Key: HADOOP-13461
> URL: https://issues.apache.org/jira/browse/HADOOP-13461
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.7.2
>Reporter: Colm O hEigeartaigh
>Priority: Minor
> Fix For: 2.8.0, 2.7.3, 2.6.5, 3.0.0-alpha1
>
> Attachments: HADOOP-13461.patch
>
>
> When KeyProvider.rollNewVersion(String name) is called, it first gets the 
> metadata for the given name. The javadoc states that the getMetadata(String 
> name) method can return null if the key doesn't exist. However, rollNewVersion 
> throws an NPE if the returned metadata is null.
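The guard the fix needs can be sketched as a small standalone program. This is NOT the actual Hadoop KeyProvider; the Metadata class and method bodies below are stand-ins so the example is self-contained, but the method names and the "Can't find Metadata for key" message follow the discussion above.

```java
import java.io.IOException;

// Standalone sketch (NOT the actual Hadoop KeyProvider) of the guard the fix
// needs: throw a descriptive IOException instead of an NPE when
// getMetadata(name) returns null for an unknown key, as its javadoc allows.
public class RollNewVersionSketch {

    // Stand-in for KeyProvider.Metadata.
    static class Metadata {
        final String cipher;
        Metadata(String cipher) { this.cipher = cipher; }
    }

    // Stand-in for KeyProvider.getMetadata: may return null for unknown keys.
    static Metadata getMetadata(String name) {
        return "known".equals(name) ? new Metadata("AES/CTR/NoPadding") : null;
    }

    static String rollNewVersion(String name) throws IOException {
        Metadata meta = getMetadata(name);
        if (meta == null) {
            // Without this check, dereferencing meta below throws an NPE.
            throw new IOException("Can't find Metadata for key " + name);
        }
        return name + "@" + meta.cipher;  // pretend this is the new key version
    }

    // Helper so callers (and tests) can check behavior without checked exceptions.
    static boolean throwsForUnknown(String name) {
        try {
            rollNewVersion(name);
            return false;
        } catch (IOException expected) {
            return true;
        }
    }

    public static void main(String[] args) {
        System.out.println(throwsForUnknown("unknown"));  // true
        System.out.println(throwsForUnknown("known"));    // false
    }
}
```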



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13438) Optimize IPC server protobuf decoding

2016-08-08 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412281#comment-15412281
 ] 

Kihwal Lee commented on HADOOP-13438:
-

Will link if HDFS-10732 is indeed caused by this commit.

> Optimize IPC server protobuf decoding
> -
>
> Key: HADOOP-13438
> URL: https://issues.apache.org/jira/browse/HADOOP-13438
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13438.patch, HADOOP-13438.patch.1
>
>
> The current use of the protobuf API uses an expensive code path.  The builder 
> uses the parser to instantiate a message, then copies the message into the 
> builder.  The parser is creating multi-layered internally buffering streams 
> that cause excessive byte[] allocations.
> Using the parser directly with a coded input stream backed by the byte[] from 
> the wire will take a fast-path straight to the pb message's ctor.  
> Substantially less garbage is generated.
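The fast path described above can be sketched roughly as follows. This is a fragment only: {{Msg}} stands for some generated protobuf message class, the fragment will not compile without protobuf-java, and exact signatures may differ by protobuf version.

```java
// Slow path: the builder asks the parser to build a message, then copies it
// into the builder, with multi-layered buffering streams along the way.
//   Msg m = Msg.newBuilder().mergeFrom(bytesFromWire).build();

// Fast path: parse directly from a CodedInputStream backed by the wire bytes,
// going straight to the message constructor with no builder copy.
CodedInputStream cis = CodedInputStream.newInstance(bytesFromWire);
Msg m = Msg.parser().parseFrom(cis);
```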






[jira] [Commented] (HADOOP-13190) LoadBalancingKMSClientProvider should be documented

2016-08-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412275#comment-15412275
 ] 

Hadoop QA commented on HADOOP-13190:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
9s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
43s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
15s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
17s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}  9m  1s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12805701/HADOOP-13190.001.patch
 |
| JIRA Issue | HADOOP-13190 |
| Optional Tests |  asflicense  mvnsite  |
| uname | Linux fbf55e6bf39e 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6255859 |
| modules | C: hadoop-common-project/hadoop-kms U: 
hadoop-common-project/hadoop-kms |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10203/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> LoadBalancingKMSClientProvider should be documented
> ---
>
> Key: HADOOP-13190
> URL: https://issues.apache.org/jira/browse/HADOOP-13190
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, kms
>Affects Versions: 2.7.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: supportability
> Attachments: HADOOP-13190.001.patch
>
>
> Currently, there are two ways to achieve KMS HA.
> The first one, and the only documented one, is running multiple KMS instances 
> behind a load balancer. 
> https://hadoop.apache.org/docs/stable/hadoop-kms/index.html
> The other way, is make use of LoadBalancingKMSClientProvider which is added 
> in HADOOP-11620. However the usage is undocumented.
> I think we should update the KMS document to introduce 
> LoadBalancingKMSClientProvider, provide examples, and also update 
> kms-site.xml to explain it.






[jira] [Commented] (HADOOP-13438) Optimize IPC server protobuf decoding

2016-08-08 Thread Kihwal Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412274#comment-15412274
 ] 

Kihwal Lee commented on HADOOP-13438:
-

[~jojochuang], we will take a look.

> Optimize IPC server protobuf decoding
> -
>
> Key: HADOOP-13438
> URL: https://issues.apache.org/jira/browse/HADOOP-13438
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13438.patch, HADOOP-13438.patch.1
>
>
> The current use of the protobuf API uses an expensive code path.  The builder 
> uses the parser to instantiate a message, then copies the message into the 
> builder.  The parser is creating multi-layered internally buffering streams 
> that cause excessive byte[] allocations.
> Using the parser directly with a coded input stream backed by the byte[] from 
> the wire will take a fast-path straight to the pb message's ctor.  
> Substantially less garbage is generated.






[jira] [Commented] (HADOOP-13403) AzureNativeFileSystem rename/delete performance improvements

2016-08-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412269#comment-15412269
 ] 

Hadoop QA commented on HADOOP-13403:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
12s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  9m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
15s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
21s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 6s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
35s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
14s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  0m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
14s{color} | {color:green} hadoop-tools/hadoop-azure: The patch generated 0 new 
+ 43 unchanged - 1 fixed = 43 total (was 44) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
20s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  0m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
11s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m 
29s{color} | {color:green} hadoop-azure in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
18s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 17m 42s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12822628/HADOOP-13403-006.patch
 |
| JIRA Issue | HADOOP-13403 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 8be9946d46ca 3.13.0-92-generic #139-Ubuntu SMP Tue Jun 28 
20:42:26 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 6255859 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10201/testReport/ |
| modules | C: hadoop-tools/hadoop-azure U: hadoop-tools/hadoop-azure |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10201/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> AzureNativeFileSystem rename/delete performance improvements
> 
>
> Key: HADOOP-13403
> URL: https://issues.apache.org/jira/browse/HADOOP-13403
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: azure
>Affects Versions: 2.7.2
>Reporter: Subramanyam Pattipaka
>Assignee: Subramanyam Pattipaka
> Fix For: 2.9.0
>
> Attachments: HADOOP-13403-001.patch, HADOOP-13403-002.pa

[jira] [Commented] (HADOOP-13190) LoadBalancingKMSClientProvider should be documented

2016-08-08 Thread Xiao Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13190?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412255#comment-15412255
 ] 

Xiao Chen commented on HADOOP-13190:


Thank you [~jojochuang] for creating this jira and posting a patch!

We should definitely document this, and overall looks good. Comments:
- Maybe we can change the current {{$H3}} level title to {{Using Multiple 
Instances of KMS}}, and list the current LB/VIP and the new LBKMSCP under it? 
The other sub-sections (kerberos, secret-sharing) apply to multiple instances 
in general.
- In the new LBKMSCP section, please also add the failure-handling behavior: if 
a request to a KMSCP fails, LBKMSCP retries with the next KMSCP, and the 
request is returned as a failure only if all KMSCPs fail.
- In the sample xml, maybe also list an http example?
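The failure-handling behavior described in the comment above (try each provider in turn; fail only when all fail) can be sketched generically. The class and method names below are illustrative stand-ins, not Hadoop's real LoadBalancingKMSClientProvider API, and unchecked exceptions are used to keep the sketch short where the real code uses IOException.

```java
import java.util.List;
import java.util.function.Function;

// Generic sketch of LoadBalancingKMSClientProvider-style failover; names are
// illustrative stand-ins, not Hadoop's real API.
public class FailoverSketch {

    // Try each provider in turn and return the first successful result.
    // Only when every provider has failed is the last failure propagated.
    static <T> T callWithFailover(List<Function<String, T>> providers, String request) {
        RuntimeException last = new IllegalStateException("no providers configured");
        for (Function<String, T> provider : providers) {
            try {
                return provider.apply(request);
            } catch (RuntimeException e) {
                last = e;  // remember the failure, move on to the next instance
            }
        }
        throw last;
    }

    // Demo: the first provider is down, the second answers; the request
    // still succeeds from the caller's point of view.
    static String demo() {
        Function<String, String> down = r -> { throw new IllegalStateException("kms1 down"); };
        Function<String, String> up = r -> "key-material-for-" + r;
        return callWithFailover(List.of(down, up), "mykey");
    }

    public static void main(String[] args) {
        System.out.println(demo());  // key-material-for-mykey
    }
}
```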

> LoadBalancingKMSClientProvider should be documented
> ---
>
> Key: HADOOP-13190
> URL: https://issues.apache.org/jira/browse/HADOOP-13190
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: documentation, kms
>Affects Versions: 2.7.0
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: supportability
> Attachments: HADOOP-13190.001.patch
>
>
> Currently, there are two ways to achieve KMS HA.
> The first one, and the only documented one, is running multiple KMS instances 
> behind a load balancer. 
> https://hadoop.apache.org/docs/stable/hadoop-kms/index.html
> The other way, is make use of LoadBalancingKMSClientProvider which is added 
> in HADOOP-11620. However the usage is undocumented.
> I think we should update the KMS document to introduce 
> LoadBalancingKMSClientProvider, provide examples, and also update 
> kms-site.xml to explain it.






[jira] [Updated] (HADOOP-13403) AzureNativeFileSystem rename/delete performance improvements

2016-08-08 Thread Subramanyam Pattipaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13403?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Subramanyam Pattipaka updated HADOOP-13403:
---
Attachment: HADOOP-13403-006.patch

Latest patch tested on both trunk and branch-2. Verified all tests are passing.

> AzureNativeFileSystem rename/delete performance improvements
> 
>
> Key: HADOOP-13403
> URL: https://issues.apache.org/jira/browse/HADOOP-13403
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: azure
>Affects Versions: 2.7.2
>Reporter: Subramanyam Pattipaka
>Assignee: Subramanyam Pattipaka
> Fix For: 2.9.0
>
> Attachments: HADOOP-13403-001.patch, HADOOP-13403-002.patch, 
> HADOOP-13403-003.patch, HADOOP-13403-004.patch, HADOOP-13403-005.patch, 
> HADOOP-13403-006.patch
>
>
> WASB Performance Improvements
> Problem
> ---
> Azure Native File System operations like rename/delete on a source directory 
> with a large number of directories and/or files are experiencing performance 
> issues. Here are possible reasons:
> a) We first list all files under the source directory hierarchically. This 
> is a serial operation. 
> b) After collecting the entire list of files under a folder, we delete or 
> rename the files one by one, serially.
> c) There is no logging information available for these costly operations, 
> even in DEBUG mode, making it difficult to understand wasb performance 
> issues.
> Proposal
> -
> Step 1: Rename and delete operations will generate a list of all files under 
> the source folder. We need to use the Azure flat listing option to get the 
> list with a single request to the Azure store. We have introduced the config 
> fs.azure.flatlist.enable to enable this option. The default value is 'false', 
> which means flat listing is disabled.
> Step 2: Create thread pools and threads dynamically based on user 
> configuration. These thread pools will be deleted after the operation is 
> over. We are introducing two new configs:
>   a)  fs.azure.rename.threads: Config to set the number of rename 
> threads. The default value is 0, which means no threading.
>   b)  fs.azure.delete.threads: Config to set the number of delete 
> threads. The default value is 0, which means no threading.
>   We have provided debug log information on the number of threads not used 
> for the operation, which can be useful for tuning.
>   Failure scenarios:
>   If we fail to create the thread pool for ANY reason (for example, trying 
> to create it with a very large thread count such as 100), we fall back to 
> the serial operation. 
> Step 3: Blob operations can be done in parallel using multiple threads, each 
> executing the following snippet:
>   while ((currentIndex = fileIndex.getAndIncrement()) < files.length) {
>   FileMetadata file = files[currentIndex];
>   Rename/delete(file);
>   }
>   This strategy depends on the fact that all files are stored in a 
> final array and each thread atomically determines the next index to work on. 
> Its advantage is that even if the user configures a large number of unusable 
> threads, the work never gets serialized behind lagging threads. 
>   We log the following information, which can be useful for tuning the 
> number of threads:
>   a) Number of unusable threads
>   b) Time taken by each thread
>   c) Number of files processed by each thread
>   d) Total time taken for the operation
>   Failure scenarios:
>   Failure to queue a thread-execution request isn't an issue as long as 
> at least one thread completes execution successfully; if we couldn't 
> schedule even one thread, we take the serial path. Exceptions raised while 
> executing threads are still treated as regular exceptions and returned to 
> the client as operation failures. Exceptions raised while stopping threads 
> and deleting the thread pool can be ignored if the operation on all files 
> completed without issue.
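The shared-index scheme quoted above can be exercised end to end with a small runnable sketch. The file "operations" here are just counter increments standing in for the real rename/delete calls, but the claiming loop has the same shape as the snippet in the proposal.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Runnable sketch of the shared-index scheme: files sit in one final array
// and every worker atomically claims the next index, so each file is
// processed exactly once and lagging or unused threads cannot serialize the
// remaining work.
public class ParallelOpSketch {

    static int processAll(String[] files, int threads) {
        AtomicInteger fileIndex = new AtomicInteger(0);
        AtomicInteger processed = new AtomicInteger(0);
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        Runnable worker = () -> {
            int currentIndex;
            // Same loop shape as the snippet in the proposal.
            while ((currentIndex = fileIndex.getAndIncrement()) < files.length) {
                // Stand-in for the per-file rename/delete on files[currentIndex].
                processed.incrementAndGet();
            }
        };
        for (int i = 0; i < threads; i++) {
            pool.execute(worker);
        }
        pool.shutdown();
        try {
            pool.awaitTermination(30, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return processed.get();
    }

    static int demo(int fileCount, int threads) {
        String[] files = new String[fileCount];
        for (int i = 0; i < fileCount; i++) {
            files[i] = "file-" + i;
        }
        return processAll(files, threads);
    }

    public static void main(String[] args) {
        // Even with far more threads than files, each file is handled once.
        System.out.println(demo(4, 16));  // 4
    }
}
```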






[jira] [Commented] (HADOOP-13457) Remove hardcoded absolute path for shell executable

2016-08-08 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412221#comment-15412221
 ] 

Chen Liang commented on HADOOP-13457:
-

I also noticed there is "/bin/ls" in the same file; should this be changed to 
"ls", just like the bash command? (i.e., is the "ls" path also 
platform-dependent?) [~aw], do you have any comments on this?

Thanks
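Dropping the hardcoded "/bin/bash" works because a bare executable name is resolved against the PATH directories. The helper below is illustrative only (it is not Hadoop's Shell.java) and shows that resolution logic in isolation, with a temp directory standing in for a PATH entry so the demo does not depend on what is installed on the machine.

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Illustrative helper (NOT Hadoop's Shell.java): resolve a bare executable
// name against a PATH-style string, the way the OS does when ProcessBuilder
// is given "bash" instead of "/bin/bash".
public class PathLookupSketch {

    // Return the absolute path of the first executable named 'name' found in
    // the PATH-style string 'pathVar', or null if none is found.
    static String findOnPath(String name, String pathVar) {
        for (String dir : pathVar.split(File.pathSeparator)) {
            File candidate = new File(dir, name);
            if (candidate.isFile() && candidate.canExecute()) {
                return candidate.getAbsolutePath();
            }
        }
        return null;
    }

    // Demo with a temp dir standing in for a PATH entry.
    static boolean demoFound() {
        try {
            Path dir = Files.createTempDirectory("fakebin");
            File exe = new File(dir.toFile(), "bash");
            Files.createFile(exe.toPath());
            exe.setExecutable(true);
            return findOnPath("bash", dir.toString()) != null;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(demoFound());                                // true
        System.out.println(findOnPath("bash", "no-such-dir") == null);  // true
    }
}
```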

> Remove hardcoded absolute path for shell executable
> ---
>
> Key: HADOOP-13457
> URL: https://issues.apache.org/jira/browse/HADOOP-13457
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Reporter: Arpit Agarwal
>Assignee: Chen Liang
> Attachments: HADOOP-13457.001.patch
>
>
> Shell.java has a hardcoded path to /bin/bash which is not correct on all 
> platforms. 
> Pointed out by [~aw] while reviewing HADOOP-13434.






[jira] [Commented] (HADOOP-10823) TestReloadingX509TrustManager is flaky

2016-08-08 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412218#comment-15412218
 ] 

Hudson commented on HADOOP-10823:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #10234 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10234/])
HADOOP-10823. TestReloadingX509TrustManager is flaky. Contributed by (jitendra: 
rev 625585950a15461eb032e5e7ed8fdf4e1113b2bb)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/security/ssl/TestReloadingX509TrustManager.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/security/ssl/ReloadingX509TrustManager.java


> TestReloadingX509TrustManager is flaky
> --
>
> Key: HADOOP-10823
> URL: https://issues.apache.org/jira/browse/HADOOP-10823
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.3.0
> Environment: java version "1.6.0_31"
> Java(TM) SE Runtime Environment (build 1.6.0_31-b04)
> Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01, mixed mode)
> Linux [hostname] 2.6.32-279.14.1.el6.x86_64 #1 SMP Mon Oct 15 13:44:51 EDT 
> 2012 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Ratandeep Ratti
>Assignee: Mingliang Liu
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HADOOP-10823.001.patch, HADOOP-10823.002.patch, 
> HADOOP-10823.003.patch, HADOOP-10823.004.patch, HADOOP-10823.005.patch, 
> HADOOP-10823.patch
>
>
> Pasting the log
> {quote}
> Error Message
> expected:<2> but was:<1>
> Stacktrace
> junit.framework.AssertionFailedError: expected:<2> but was:<1>
>   at junit.framework.Assert.fail(Assert.java:50)
>   at junit.framework.Assert.failNotEquals(Assert.java:287)
>   at junit.framework.Assert.assertEquals(Assert.java:67)
>   at junit.framework.Assert.assertEquals(Assert.java:199)
>   at junit.framework.Assert.assertEquals(Assert.java:205)
>   at 
> org.apache.hadoop.security.ssl.TestReloadingX509TrustManager.testReload(TestReloadingX509TrustManager.java:112)
> Standard Output
> 2014-07-06 06:12:21,170 WARN  ssl.ReloadingX509TrustManager 
> (ReloadingX509TrustManager.java:run(197)) - Could not load truststore (keep 
> using existing one) : java.io.EOFException
> java.io.EOFException
>   at java.io.DataInputStream.readInt(DataInputStream.java:375)
>   at sun.security.provider.JavaKeyStore.engineLoad(JavaKeyStore.java:628)
>   at 
> sun.security.provider.JavaKeyStore$JKS.engineLoad(JavaKeyStore.java:38)
>   at java.security.KeyStore.load(KeyStore.java:1185)
>   at 
> org.apache.hadoop.security.ssl.ReloadingX509TrustManager.loadTrustManager(ReloadingX509TrustManager.java:166)
>   at 
> org.apache.hadoop.security.ssl.ReloadingX509TrustManager.run(ReloadingX509TrustManager.java:195)
>   at java.lang.Thread.run(Thread.java:662)
> {quote}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13438) Optimize IPC server protobuf decoding

2016-08-08 Thread Wei-Chiu Chuang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412217#comment-15412217
 ] 

Wei-Chiu Chuang commented on HADOOP-13438:
--

Looks like TestTracing#testTracing in trunk is failing after this commit.

> Optimize IPC server protobuf decoding
> -
>
> Key: HADOOP-13438
> URL: https://issues.apache.org/jira/browse/HADOOP-13438
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Daryn Sharp
>Assignee: Daryn Sharp
> Fix For: 2.9.0, 3.0.0-alpha2
>
> Attachments: HADOOP-13438.patch, HADOOP-13438.patch.1
>
>
> The current use of the protobuf API uses an expensive code path.  The builder 
> uses the parser to instantiate a message, then copies the message into the 
> builder.  The parser is creating multi-layered internally buffering streams 
> that cause excessive byte[] allocations.
> Using the parser directly with a coded input stream backed by the byte[] from 
> the wire will take a fast-path straight to the pb message's ctor.  
> Substantially less garbage is generated.






[jira] [Commented] (HADOOP-13457) Remove hardcoded absolute path for shell executable

2016-08-08 Thread Arpit Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13457?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412210#comment-15412210
 ] 

Arpit Agarwal commented on HADOOP-13457:


+1 pending Jenkins.

> Remove hardcoded absolute path for shell executable
> ---
>
> Key: HADOOP-13457
> URL: https://issues.apache.org/jira/browse/HADOOP-13457
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Reporter: Arpit Agarwal
>Assignee: Chen Liang
> Attachments: HADOOP-13457.001.patch
>
>
> Shell.java has a hardcoded path to /bin/bash which is not correct on all 
> platforms. 
> Pointed out by [~aw] while reviewing HADOOP-13434.






[jira] [Updated] (HADOOP-13457) Remove hardcoded absolute path for shell executable

2016-08-08 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HADOOP-13457:

Status: Patch Available  (was: Open)

> Remove hardcoded absolute path for shell executable
> ---
>
> Key: HADOOP-13457
> URL: https://issues.apache.org/jira/browse/HADOOP-13457
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Reporter: Arpit Agarwal
>Assignee: Chen Liang
> Attachments: HADOOP-13457.001.patch
>
>
> Shell.java has a hardcoded path to /bin/bash which is not correct on all 
> platforms. 
> Pointed out by [~aw] while reviewing HADOOP-13434.






[jira] [Updated] (HADOOP-13457) Remove hardcoded absolute path for shell executable

2016-08-08 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang updated HADOOP-13457:

Attachment: HADOOP-13457.001.patch

> Remove hardcoded absolute path for shell executable
> ---
>
> Key: HADOOP-13457
> URL: https://issues.apache.org/jira/browse/HADOOP-13457
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Reporter: Arpit Agarwal
>Assignee: Chen Liang
> Attachments: HADOOP-13457.001.patch
>
>
> Shell.java has a hardcoded path to /bin/bash which is not correct on all 
> platforms. 
> Pointed out by [~aw] while reviewing HADOOP-13434.






[jira] [Updated] (HADOOP-10823) TestReloadingX509TrustManager is flaky

2016-08-08 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HADOOP-10823:
--
Fix Version/s: 2.8.0

> TestReloadingX509TrustManager is flaky
> --
>
> Key: HADOOP-10823
> URL: https://issues.apache.org/jira/browse/HADOOP-10823
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.3.0
> Environment: java version "1.6.0_31"
> Java(TM) SE Runtime Environment (build 1.6.0_31-b04)
> Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01, mixed mode)
> Linux [hostname] 2.6.32-279.14.1.el6.x86_64 #1 SMP Mon Oct 15 13:44:51 EDT 
> 2012 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Ratandeep Ratti
>Assignee: Mingliang Liu
>  Labels: BB2015-05-TBR
> Fix For: 2.8.0
>
> Attachments: HADOOP-10823.001.patch, HADOOP-10823.002.patch, 
> HADOOP-10823.003.patch, HADOOP-10823.004.patch, HADOOP-10823.005.patch, 
> HADOOP-10823.patch
>
>
> Pasting the log
> {quote}
> Error Message
> expected:<2> but was:<1>
> Stacktrace
> junit.framework.AssertionFailedError: expected:<2> but was:<1>
>   at junit.framework.Assert.fail(Assert.java:50)
>   at junit.framework.Assert.failNotEquals(Assert.java:287)
>   at junit.framework.Assert.assertEquals(Assert.java:67)
>   at junit.framework.Assert.assertEquals(Assert.java:199)
>   at junit.framework.Assert.assertEquals(Assert.java:205)
>   at 
> org.apache.hadoop.security.ssl.TestReloadingX509TrustManager.testReload(TestReloadingX509TrustManager.java:112)
> Standard Output
> 2014-07-06 06:12:21,170 WARN  ssl.ReloadingX509TrustManager 
> (ReloadingX509TrustManager.java:run(197)) - Could not load truststore (keep 
> using existing one) : java.io.EOFException
> java.io.EOFException
>   at java.io.DataInputStream.readInt(DataInputStream.java:375)
>   at sun.security.provider.JavaKeyStore.engineLoad(JavaKeyStore.java:628)
>   at 
> sun.security.provider.JavaKeyStore$JKS.engineLoad(JavaKeyStore.java:38)
>   at java.security.KeyStore.load(KeyStore.java:1185)
>   at 
> org.apache.hadoop.security.ssl.ReloadingX509TrustManager.loadTrustManager(ReloadingX509TrustManager.java:166)
>   at 
> org.apache.hadoop.security.ssl.ReloadingX509TrustManager.run(ReloadingX509TrustManager.java:195)
>   at java.lang.Thread.run(Thread.java:662)
> {quote}






[jira] [Updated] (HADOOP-10823) TestReloadingX509TrustManager is flaky

2016-08-08 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HADOOP-10823:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

I have committed this to trunk, branch-2 and branch-2.8. Thanks Ratandeep for 
an earlier patch, and thanks to [~liuml07] for taking it to completion.

> TestReloadingX509TrustManager is flaky
> --
>
> Key: HADOOP-10823
> URL: https://issues.apache.org/jira/browse/HADOOP-10823
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.3.0
> Environment: java version "1.6.0_31"
> Java(TM) SE Runtime Environment (build 1.6.0_31-b04)
> Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01, mixed mode)
> Linux [hostname] 2.6.32-279.14.1.el6.x86_64 #1 SMP Mon Oct 15 13:44:51 EDT 
> 2012 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Ratandeep Ratti
>Assignee: Mingliang Liu
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10823.001.patch, HADOOP-10823.002.patch, 
> HADOOP-10823.003.patch, HADOOP-10823.004.patch, HADOOP-10823.005.patch, 
> HADOOP-10823.patch
>
>






[jira] [Commented] (HADOOP-13403) AzureNativeFileSystem rename/delete performance improvements

2016-08-08 Thread Subramanyam Pattipaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13403?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412182#comment-15412182
 ] 

Subramanyam Pattipaka commented on HADOOP-13403:


[~cnauroth], Thanks. I will upload another patch after fixing this issue.

> AzureNativeFileSystem rename/delete performance improvements
> 
>
> Key: HADOOP-13403
> URL: https://issues.apache.org/jira/browse/HADOOP-13403
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: azure
>Affects Versions: 2.7.2
>Reporter: Subramanyam Pattipaka
>Assignee: Subramanyam Pattipaka
> Fix For: 2.9.0
>
> Attachments: HADOOP-13403-001.patch, HADOOP-13403-002.patch, 
> HADOOP-13403-003.patch, HADOOP-13403-004.patch, HADOOP-13403-005.patch
>
>
> WASB Performance Improvements
> Problem
> ---
> Azure Native File System operations like rename and delete on source 
> directories containing large numbers of files and/or subdirectories are 
> experiencing performance issues. Possible reasons:
> a)We first list all files under the source directory hierarchically. This 
> is a serial operation.
> b)After collecting the entire list of files under a folder, we delete or 
> rename the files one by one, serially.
> c)There is no logging information available for these costly operations, 
> even in DEBUG mode, making it difficult to understand WASB performance 
> issues.
> Proposal
> -
> Step 1: Rename and delete operations will generate a list of all files under 
> the source folder. We need to use the Azure flat listing option to get the 
> list with a single request to the Azure store. We have introduced the config 
> fs.azure.flatlist.enable to enable this option. The default value is 'false', 
> which means flat listing is disabled.
> Step 2: Create thread pools and threads dynamically based on user 
> configuration. These thread pools will be deleted after the operation is 
> over. We are introducing two new configs:
>   a)  fs.azure.rename.threads : Config to set the number of rename 
> threads. The default value is 0, which means no threading.
>   b)  fs.azure.delete.threads : Config to set the number of delete 
> threads. The default value is 0, which means no threading.
>   We have provided debug log information on the number of threads not 
> used for the operation, which can be useful for tuning.
>   Failure Scenarios:
>   If we fail to create the thread pool for ANY reason (for example, 
> trying to create it with a large thread count such as 100), we fall back to 
> the serial operation. 
> Step 3: Blob operations can be done in parallel, with multiple threads each 
> executing the following snippet:
>   while ((currentIndex = fileIndex.getAndIncrement()) < files.length) {
>   FileMetadata file = files[currentIndex];
>   Rename/delete(file);
>   }
>   This strategy depends on the fact that all files are stored in a 
> final array and each thread atomically claims the next index to process. Its 
> advantage is that even if the user configures a large number of unusable 
> threads, we always ensure that work doesn't get serialized behind lagging 
> threads. 
>   We are logging the following information, which can be useful for 
> tuning the number of threads:
>   a) Number of unusable threads
>   b) Time taken by each thread
>   c) Number of files processed by each thread
>   d) Total time taken for the operation
>   Failure Scenarios:
>   Failure to queue a thread-execution request shouldn't be an issue as 
> long as at least one thread completes execution successfully. If we couldn't 
> schedule even one thread, we should take the serial path. Exceptions raised 
> while executing threads are still considered regular exceptions and are 
> returned to the client as operation failures. Exceptions raised while 
> stopping threads and deleting the thread pool can be ignored if the 
> operation completed on all files without any issue.
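The atomic-index work-claiming loop described in Step 3 can be sketched as below. This is a minimal illustration with hypothetical class and method names; the real WASB code differs (for example, it operates on FileMetadata and performs actual rename/delete calls):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class ParallelFileOps {
    // Each worker repeatedly claims the next unprocessed file index via
    // getAndIncrement(), so slow threads never leave a contiguous chunk of
    // work stranded behind them. Returns the number of files processed.
    static int processAll(String[] files, int threads) throws InterruptedException {
        AtomicInteger fileIndex = new AtomicInteger(0);
        AtomicInteger processed = new AtomicInteger(0);
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int t = 0; t < threads; t++) {
            pool.execute(() -> {
                int i;
                while ((i = fileIndex.getAndIncrement()) < files.length) {
                    // renameOrDelete(files[i]) would go here in the real code
                    processed.incrementAndGet();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        return processed.get();
    }
}
```

Because the index is claimed atomically, each file is processed exactly once regardless of how the work is distributed across threads.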






[jira] [Assigned] (HADOOP-10823) TestReloadingX509TrustManager is flaky

2016-08-08 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10823?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu reassigned HADOOP-10823:
--

Assignee: Mingliang Liu  (was: Ratandeep Ratti)

> TestReloadingX509TrustManager is flaky
> --
>
> Key: HADOOP-10823
> URL: https://issues.apache.org/jira/browse/HADOOP-10823
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.3.0
> Environment: java version "1.6.0_31"
> Java(TM) SE Runtime Environment (build 1.6.0_31-b04)
> Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01, mixed mode)
> Linux [hostname] 2.6.32-279.14.1.el6.x86_64 #1 SMP Mon Oct 15 13:44:51 EDT 
> 2012 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Ratandeep Ratti
>Assignee: Mingliang Liu
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10823.001.patch, HADOOP-10823.002.patch, 
> HADOOP-10823.003.patch, HADOOP-10823.004.patch, HADOOP-10823.005.patch, 
> HADOOP-10823.patch
>
>






[jira] [Commented] (HADOOP-10823) TestReloadingX509TrustManager is flaky

2016-08-08 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10823?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412158#comment-15412158
 ] 

Jitendra Nath Pandey commented on HADOOP-10823:
---

+1, I will commit shortly.

> TestReloadingX509TrustManager is flaky
> --
>
> Key: HADOOP-10823
> URL: https://issues.apache.org/jira/browse/HADOOP-10823
> Project: Hadoop Common
>  Issue Type: Bug
>Affects Versions: 2.3.0
> Environment: java version "1.6.0_31"
> Java(TM) SE Runtime Environment (build 1.6.0_31-b04)
> Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01, mixed mode)
> Linux [hostname] 2.6.32-279.14.1.el6.x86_64 #1 SMP Mon Oct 15 13:44:51 EDT 
> 2012 x86_64 x86_64 x86_64 GNU/Linux
>Reporter: Ratandeep Ratti
>Assignee: Ratandeep Ratti
>  Labels: BB2015-05-TBR
> Attachments: HADOOP-10823.001.patch, HADOOP-10823.002.patch, 
> HADOOP-10823.003.patch, HADOOP-10823.004.patch, HADOOP-10823.005.patch, 
> HADOOP-10823.patch
>
>






[jira] [Assigned] (HADOOP-13457) Remove hardcoded absolute path for shell executable

2016-08-08 Thread Chen Liang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13457?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chen Liang reassigned HADOOP-13457:
---

Assignee: Chen Liang  (was: Arpit Agarwal)

> Remove hardcoded absolute path for shell executable
> ---
>
> Key: HADOOP-13457
> URL: https://issues.apache.org/jira/browse/HADOOP-13457
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Reporter: Arpit Agarwal
>Assignee: Chen Liang
>
> Shell.java has a hardcoded path to /bin/bash, which is not correct on all 
> platforms. 
> Pointed out by [~aw] while reviewing HADOOP-13434.
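One portable alternative to a hardcoded /bin/bash is to resolve the shell from the PATH environment variable, falling back to a conventional location. A hedged sketch (hypothetical helper, not the actual Shell.java fix):

```java
import java.io.File;

public class ShellPath {
    // Search each PATH entry for an executable with the given name instead
    // of hardcoding an absolute path; return the fallback if none is found.
    static String findExecutable(String name, String pathEnv, String fallback) {
        if (pathEnv != null) {
            for (String dir : pathEnv.split(File.pathSeparator)) {
                File candidate = new File(dir, name);
                if (candidate.isFile() && candidate.canExecute()) {
                    return candidate.getAbsolutePath();
                }
            }
        }
        return fallback;
    }
}
```

This keeps behavior unchanged on systems where bash lives in the conventional place, while working on platforms (e.g. some BSDs) where it is installed elsewhere.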






[jira] [Commented] (HADOOP-13323) Downgrade stack trace on FS load from Warn to debug

2016-08-08 Thread Chen Liang (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13323?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412143#comment-15412143
 ] 

Chen Liang commented on HADOOP-13323:
-

Thanks for the reply, [~ste...@apache.org]! Actually, never mind: I was working 
on HADOOP-13439, which tries to fix that test, and I figured it out later on. 

> Downgrade stack trace on FS load from Warn to debug
> ---
>
> Key: HADOOP-13323
> URL: https://issues.apache.org/jira/browse/HADOOP-13323
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: fs
>Affects Versions: 2.7.3
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13323-branch-2-001.patch
>
>
> HADOOP-12636 catches exceptions on FS creation, but prints a stack trace at 
> WARN every time. This is noisy and irrelevant if the installation doesn't 
> need connectivity to a specific filesystem or object store.
> I propose: only print the toString values of the exception chain at WARN; 
> the full stack comes out at DEBUG.
> We could do some more tuning: 
> * have a specific log for this exception, which allows installations to turn 
> even the warnings off.
> * add a link to a wiki page listing the dependencies of the shipped 
> filesystems
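The proposed split, a one-line exception chain at WARN and the full stack only at DEBUG, can be sketched as below. This is a hypothetical illustration using java.util.logging (the real patch uses Hadoop's logging facade), with FINE standing in for DEBUG:

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class FsLoadLogging {
    private static final Logger LOG = Logger.getLogger("FileSystemLoader");

    // Flatten the cause chain into one line, e.g.
    // "java.lang.RuntimeException: outer; caused by java.lang.IllegalStateException: inner",
    // so the WARN message carries the diagnosis without a stack trace.
    static String chainToString(Throwable t) {
        StringBuilder sb = new StringBuilder(t.toString());
        for (Throwable c = t.getCause(); c != null; c = c.getCause()) {
            sb.append("; caused by ").append(c);
        }
        return sb.toString();
    }

    static void logLoadFailure(Throwable t) {
        LOG.warning("Failed to load filesystem: " + chainToString(t));
        LOG.log(Level.FINE, "Full stack trace", t); // full detail at debug level
    }
}
```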






[jira] [Updated] (HADOOP-13382) remove unneeded commons-httpclient dependencies from POM files in Hadoop and sub-projects

2016-08-08 Thread Matt Foley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-13382:

Description: 
In branch-2.8 and later, the patches for various child and related bugs listed 
in HADOOP-10105, most recently including HADOOP-11613, HADOOP-12710, 
HADOOP-12711, HADOOP-12552, and HDFS-10623, eliminate all use of 
"commons-httpclient" from Hadoop and its sub-projects (except for 
hadoop-tools/hadoop-openstack; see HADOOP-11614).

However, after incorporating these patches, "commons-httpclient" is still 
listed as a dependency in these POM files:
* hadoop-project/pom.xml
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/pom.xml

We wish to remove these, but since commons-httpclient is still used in many 
files in hadoop-tools/hadoop-openstack, we'll need to _add_ the dependency to
* hadoop-tools/hadoop-openstack/pom.xml
(We'll add a note to HADOOP-11614 to undo this when commons-httpclient is 
removed from hadoop-openstack.)
In 2.8, this was mostly done by HADOOP-12552, but the version info formerly 
inherited from hadoop-project/pom.xml also needs to be added, so that is in the 
branch-2.8 version of the patch.

Other projects with undeclared transitive dependencies on commons-httpclient, 
previously provided via hadoop-common or hadoop-client, may find this to be an 
incompatible change.  Of course that also means such project is exposed to the 
commons-httpclient CVE, and needs to be fixed for that reason as well.


  was:
In branch-2.8 and later, the patches for various child and related bugs listed 
in HADOOP-10105, most recently including HADOOP-11613, HADOOP-12710, 
HADOOP-12711, HADOOP-12552, and HDFS-10623, eliminate all use of 
"commons-httpclient" from Hadoop and its sub-projects (except for 
hadoop-tools/hadoop-openstack; see HADOOP-11614).

However, after incorporating these patches, "commons-httpclient" is still 
listed as a dependency in these POM files:
* hadoop-project/pom.xml
* hadoop-yarn-project/hadoop-yarn/hadoop-yarn-registry/pom.xml

We wish to remove these, but since commons-httpclient is still used in many 
files in hadoop-tools/hadoop-openstack, we'll need to _add_ the dependency to
* hadoop-tools/hadoop-openstack/pom.xml
(We'll add a note to HADOOP-11614 to undo this when commons-httpclient is 
removed from hadoop-openstack.)
In 2.8, this was mostly done by HADOOP-12552, but the version info formerly 
inherited from hadoop-project/pom.xml also needs to be added, so that is in the 
branch-2.8 version of the patch.



> remove unneeded commons-httpclient dependencies from POM files in Hadoop and 
> sub-projects
> -
>
> Key: HADOOP-13382
> URL: https://issues.apache.org/jira/browse/HADOOP-13382
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Matt Foley
>Assignee: Matt Foley
> Fix For: 2.8.0
>
> Attachments: HADOOP-13382-branch-2.000.patch, 
> HADOOP-13382-branch-2.8.000.patch, HADOOP-13382.000.patch
>
>






[jira] [Commented] (HADOOP-13382) remove unneeded commons-httpclient dependencies from POM files in Hadoop and sub-projects

2016-08-08 Thread Matt Foley (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13382?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15412097#comment-15412097
 ] 

Matt Foley commented on HADOOP-13382:
-

[~steve_l]: sorry.  Fixed in 2.8.0.
[~gsaha]: It is true that other projects with undeclared transitive 
dependencies on commons-httpclient, previously provided via hadoop-common or 
hadoop-client, may find this to be an incompatible change.  Of course that also 
means such project is exposed to the commons-httpclient CVE, and needs to be 
fixed for that reason as well.  Will update the Description to note this.  
Thanks for setting the appropriate flags in the jira.

> remove unneeded commons-httpclient dependencies from POM files in Hadoop and 
> sub-projects
> -
>
> Key: HADOOP-13382
> URL: https://issues.apache.org/jira/browse/HADOOP-13382
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Matt Foley
>Assignee: Matt Foley
> Fix For: 2.8.0
>
> Attachments: HADOOP-13382-branch-2.000.patch, 
> HADOOP-13382-branch-2.8.000.patch, HADOOP-13382.000.patch
>
>






[jira] [Updated] (HADOOP-13382) remove unneeded commons-httpclient dependencies from POM files in Hadoop and sub-projects

2016-08-08 Thread Matt Foley (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13382?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Matt Foley updated HADOOP-13382:

Fix Version/s: 2.8.0

> remove unneeded commons-httpclient dependencies from POM files in Hadoop and 
> sub-projects
> -
>
> Key: HADOOP-13382
> URL: https://issues.apache.org/jira/browse/HADOOP-13382
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Affects Versions: 2.8.0
>Reporter: Matt Foley
>Assignee: Matt Foley
> Fix For: 2.8.0
>
> Attachments: HADOOP-13382-branch-2.000.patch, 
> HADOOP-13382-branch-2.8.000.patch, HADOOP-13382.000.patch
>
>






[jira] [Commented] (HADOOP-13061) Refactor erasure coders

2016-08-08 Thread Kai Sasaki (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15411729#comment-15411729
 ] 

Kai Sasaki commented on HADOOP-13061:
-

[~drankye] Could you review this? And what do you think about the checkstyle 
issue above?

> Refactor erasure coders
> ---
>
> Key: HADOOP-13061
> URL: https://issues.apache.org/jira/browse/HADOOP-13061
> Project: Hadoop Common
>  Issue Type: Sub-task
>Reporter: Rui Li
>Assignee: Kai Sasaki
> Attachments: HADOOP-13061.01.patch, HADOOP-13061.02.patch, 
> HADOOP-13061.03.patch, HADOOP-13061.04.patch, HADOOP-13061.05.patch
>
>







[jira] [Commented] (HADOOP-11588) Benchmark framework and test for erasure coders

2016-08-08 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11588?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15411711#comment-15411711
 ] 

Hadoop QA commented on HADOOP-11588:


| (/) *{color:green}+1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
14s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 1 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
18s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
44s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 23s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 0 unchanged - 0 fixed = 1 total (was 0) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
52s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
44s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
59s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
21s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 37m 57s{color} | 
{color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:9560f25 |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12822553/HADOOP-11588.6.patch |
| JIRA Issue | HADOOP-11588 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux ea0991238603 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 4d3af47 |
| Default Java | 1.8.0_101 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10200/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10200/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/10200/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Benchmark framework and test for erasure coders
> ---
>
> Key: HADOOP-11588
> URL: https://issues.apache.org/jira/browse/HADOOP-11588
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>   

[jira] [Updated] (HADOOP-11588) Benchmark framework and test for erasure coders

2016-08-08 Thread Rui Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11588?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Rui Li updated HADOOP-11588:

Attachment: HADOOP-11588.6.patch

Update patch now that we have the ISA-L and new Java coders.

> Benchmark framework and test for erasure coders
> ---
>
> Key: HADOOP-11588
> URL: https://issues.apache.org/jira/browse/HADOOP-11588
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: io
>Reporter: Kai Zheng
>Assignee: Rui Li
> Attachments: HADOOP-11588-HDFS-7285.2.patch, HADOOP-11588-v1.patch, 
> HADOOP-11588.3.patch, HADOOP-11588.4.patch, HADOOP-11588.5.patch, 
> HADOOP-11588.6.patch
>
>
> Given that more than one erasure coder is implemented for a code scheme, we 
> need a benchmark and tests to help evaluate which one performs best in a 
> given environment. This is to implement the benchmark framework.
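The core of such a benchmark framework is a harness that times each coder over many repetitions and reports a comparable throughput figure. A minimal sketch, where EncodeOp is a hypothetical stand-in for the real Hadoop coder API (e.g. a RawErasureEncoder encode call), not the actual interface:

```java
public class CoderBench {
    // Stand-in for one coder operation; a real benchmark would wrap an
    // encode/decode call over prepared data buffers.
    interface EncodeOp { void encode(); }

    // Warm up first so the JIT settles, then time `reps` operations and
    // return operations per second. Comparing this figure for two coders of
    // the same scheme shows which one outperforms in the current environment.
    static double opsPerSecond(EncodeOp op, int warmup, int reps) {
        for (int i = 0; i < warmup; i++) op.encode();
        long start = System.nanoTime();
        for (int i = 0; i < reps; i++) op.encode();
        long elapsed = Math.max(1, System.nanoTime() - start);
        return reps * 1_000_000_000.0 / elapsed;
    }
}
```

Running the same harness against the ISA-L coder and the pure-Java coder on identical inputs is what makes the numbers directly comparable.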






[jira] [Commented] (HADOOP-13472) Inconsistencies for s3a timeouts in description and default values

2016-08-08 Thread Sebastian Nagel (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13472?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15411446#comment-15411446
 ] 

Sebastian Nagel commented on HADOOP-13472:
--

Ok, thanks!

> Inconsistencies for s3a timeouts in description and default values
> --
>
> Key: HADOOP-13472
> URL: https://issues.apache.org/jira/browse/HADOOP-13472
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Reporter: Sebastian Nagel
>Priority: Trivial
>  Labels: newbie
> Attachments: HADOOP-13472.patch
>
>
> * the description (code comments) of the properties 
> fs.s3a.connection.establish.timeout and fs.s3a.connection.timeout states that 
> these are in seconds while the core-default.xml says "milliseconds"
> * the default for fs.s3a.connection.establish.timeout is defined as 5000 in 
> core-default.xml. The value given in o.a.h.fs.s3a.Constants is 50 and 
> should be the same as in core-default.xml


