[jira] [Updated] (HADOOP-13337) Update maven-enforcer-plugin version to 1.4.1

2016-07-01 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-13337:
---
Summary: Update maven-enforcer-plugin version to 1.4.1  (was: Update 
maven-enforcer-plugin versioin to 1.4.1)

> Update maven-enforcer-plugin version to 1.4.1
> -
>
> Key: HADOOP-13337
> URL: https://issues.apache.org/jira/browse/HADOOP-13337
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: HADOOP-13337.001.patch
>
>







[jira] [Commented] (HADOOP-13283) Support reset operation for new global storage statistics and per FS storage stats

2016-07-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15359981#comment-15359981
 ] 

Hadoop QA commented on HADOOP-13283:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 3 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
43s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
24s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
27s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m 
40s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
56s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  4m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
21s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
13s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  2m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
26s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  3m 
19s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 7s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 1s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  6m 
18s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  2m 
37s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  7m 
45s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  1m  
0s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 56m 43s{color} 
| {color:red} hadoop-hdfs in the patch failed. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
22s{color} | {color:green} hadoop-aws in the patch passed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
28s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}139m 17s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:85209cc |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12815850/HADOOP-13283.002.patch
 |
| JIRA Issue | HADOOP-13283 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux 7fbfdf369e0d 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0a5def1 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9919/artifact/patchprocess/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt
 |
|  Test Results | 

[jira] [Updated] (HADOOP-13283) Support reset operation for new global storage statistics and per FS storage stats

2016-07-01 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13283?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-13283:
---
Attachment: HADOOP-13283.002.patch

The v2 patch addresses the previous comments.

> Support reset operation for new global storage statistics and per FS storage 
> stats
> --
>
> Key: HADOOP-13283
> URL: https://issues.apache.org/jira/browse/HADOOP-13283
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-13283.000.patch, HADOOP-13283.001.patch, 
> HADOOP-13283.002.patch
>
>
> Applications may reuse the file system object across jobs, so its storage 
> statistics should be resettable. Specifically, {{FileSystem.Statistics}} supports 
> reset, and [HADOOP-13032] needs to keep that use case valid.
> This jira is for supporting reset operations for storage statistics.
> Thanks [~hitesh] for reporting this.






[jira] [Updated] (HADOOP-12912) Add LOG.isDebugEnabled() guard in Progress.set method

2016-07-01 Thread Mingliang Liu (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12912?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Mingliang Liu updated HADOOP-12912:
---
Resolution: Not A Problem
Status: Resolved  (was: Patch Available)

I'm going to resolve this, as we don't need the guard here.

> Add LOG.isDebugEnabled() guard in Progress.set method
> -
>
> Key: HADOOP-12912
> URL: https://issues.apache.org/jira/browse/HADOOP-12912
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
> Attachments: HADOOP-12912.001.patch
>
>







[jira] [Commented] (HADOOP-12747) support wildcard in libjars argument

2016-07-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15359881#comment-15359881
 ] 

Hadoop QA commented on HADOOP-12747:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
20s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 2 new or modified test 
files. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
47s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
52s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  0m 
26s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
57s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
12s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
22s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  0m 
38s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
51s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  6m 
51s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
0m 26s{color} | {color:orange} hadoop-common-project/hadoop-common: The patch 
generated 1 new + 181 unchanged - 11 fixed = 182 total (was 192) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
13s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  1m 
32s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
46s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red}  7m 42s{color} 
| {color:red} hadoop-common in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
20s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black} 38m 27s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | hadoop.metrics2.impl.TestGangliaMetrics |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:85209cc |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12815841/HADOOP-12747.07.patch 
|
| JIRA Issue | HADOOP-12747 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  findbugs  checkstyle  |
| uname | Linux d9eaeec9e357 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 0a5def1 |
| Default Java | 1.8.0_91 |
| findbugs | v3.0.0 |
| checkstyle | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9918/artifact/patchprocess/diff-checkstyle-hadoop-common-project_hadoop-common.txt
 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9918/artifact/patchprocess/patch-unit-hadoop-common-project_hadoop-common.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9918/testReport/ |
| modules | C: hadoop-common-project/hadoop-common U: 
hadoop-common-project/hadoop-common |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9918/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> support wildcard in libjars argument
> 
>
> Key: HADOOP-12747
> 

[jira] [Updated] (HADOOP-12747) support wildcard in libjars argument

2016-07-01 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated HADOOP-12747:
-
Attachment: HADOOP-12747.07.patch

Posted patch v.7.

Fixed javadoc on a few methods in {{GenericOptionsParser}}.

> support wildcard in libjars argument
> 
>
> Key: HADOOP-12747
> URL: https://issues.apache.org/jira/browse/HADOOP-12747
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: util
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Attachments: HADOOP-12747.01.patch, HADOOP-12747.02.patch, 
> HADOOP-12747.03.patch, HADOOP-12747.04.patch, HADOOP-12747.05.patch, 
> HADOOP-12747.06.patch, HADOOP-12747.07.patch
>
>
> There is a problem when a user job adds too many dependency jars in their 
> command line. The HADOOP_CLASSPATH part can be addressed, including using 
> wildcards (\*). But the same cannot be done with the -libjars argument. Today 
> it takes only fully specified file paths.
> We may want to consider supporting wildcards as a way to help users in this 
> situation. The idea is to handle it the same way the JVM does it: \* expands 
> to the list of jars in that directory. It does not traverse into any child 
> directory.
> Also, it probably would be a good idea to do it only for libjars (i.e. don't 
> do it for -files and -archives).






[jira] [Commented] (HADOOP-12747) support wildcard in libjars argument

2016-07-01 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15359859#comment-15359859
 ] 

Sangjin Lee commented on HADOOP-12747:
--

For some reason, jenkins didn't post the results. It's here: 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9917/console

The unit test failure is the only -1 on the result, and it is unrelated to this 
patch. Checkstyle is -0 because a unit test class that I touched is lacking 
javadoc at the class level. I think it is OK to not worry about that one.

Regarding the javadoc issue you mentioned, Daniel: jenkins returned +1 because 
javadoc for private methods/fields is not required and does not generate 
warnings; it is more informational. Would you still prefer that I fix them? 
Let me know... Thanks!

> support wildcard in libjars argument
> 
>
> Key: HADOOP-12747
> URL: https://issues.apache.org/jira/browse/HADOOP-12747
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: util
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Attachments: HADOOP-12747.01.patch, HADOOP-12747.02.patch, 
> HADOOP-12747.03.patch, HADOOP-12747.04.patch, HADOOP-12747.05.patch, 
> HADOOP-12747.06.patch
>
>
> There is a problem when a user job adds too many dependency jars in their 
> command line. The HADOOP_CLASSPATH part can be addressed, including using 
> wildcards (\*). But the same cannot be done with the -libjars argument. Today 
> it takes only fully specified file paths.
> We may want to consider supporting wildcards as a way to help users in this 
> situation. The idea is to handle it the same way the JVM does it: \* expands 
> to the list of jars in that directory. It does not traverse into any child 
> directory.
> Also, it probably would be a good idea to do it only for libjars (i.e. don't 
> do it for -files and -archives).






[jira] [Commented] (HADOOP-12747) support wildcard in libjars argument

2016-07-01 Thread Sangjin Lee (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15359824#comment-15359824
 ] 

Sangjin Lee commented on HADOOP-12747:
--

Ugh, thanks. I'll wait for the jenkins result to collect any remaining issues, 
and fix them together.

> support wildcard in libjars argument
> 
>
> Key: HADOOP-12747
> URL: https://issues.apache.org/jira/browse/HADOOP-12747
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: util
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Attachments: HADOOP-12747.01.patch, HADOOP-12747.02.patch, 
> HADOOP-12747.03.patch, HADOOP-12747.04.patch, HADOOP-12747.05.patch, 
> HADOOP-12747.06.patch
>
>
> There is a problem when a user job adds too many dependency jars in their 
> command line. The HADOOP_CLASSPATH part can be addressed, including using 
> wildcards (\*). But the same cannot be done with the -libjars argument. Today 
> it takes only fully specified file paths.
> We may want to consider supporting wildcards as a way to help users in this 
> situation. The idea is to handle it the same way the JVM does it: \* expands 
> to the list of jars in that directory. It does not traverse into any child 
> directory.
> Also, it probably would be a good idea to do it only for libjars (i.e. don't 
> do it for -files and -archives).






[jira] [Commented] (HADOOP-12747) support wildcard in libjars argument

2016-07-01 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12747?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15359813#comment-15359813
 ] 

Daniel Templeton commented on HADOOP-12747:
---

Just a quick comment from a first pass: both {{validateFiles()}} methods are 
missing the {{@param}} tag for {{files}} and a {{@return}} tag in the javadoc.
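
For illustration, this is the kind of javadoc being asked for. This is a hedged 
sketch only; the signature below is a guess at the shape of {{validateFiles()}}, 
not the actual method in {{GenericOptionsParser}}:

{code}
/**
 * Validates (and, with this patch, expands wildcards in) the given
 * comma-separated list of paths.
 *
 * @param files comma-separated list of file paths to validate
 * @return the validated, comma-separated list of paths
 */
private String validateFiles(String files) throws IOException {
  // ... implementation omitted in this sketch ...
  return files;
}
{code}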

> support wildcard in libjars argument
> 
>
> Key: HADOOP-12747
> URL: https://issues.apache.org/jira/browse/HADOOP-12747
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: util
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Attachments: HADOOP-12747.01.patch, HADOOP-12747.02.patch, 
> HADOOP-12747.03.patch, HADOOP-12747.04.patch, HADOOP-12747.05.patch, 
> HADOOP-12747.06.patch
>
>
> There is a problem when a user job adds too many dependency jars in their 
> command line. The HADOOP_CLASSPATH part can be addressed, including using 
> wildcards (\*). But the same cannot be done with the -libjars argument. Today 
> it takes only fully specified file paths.
> We may want to consider supporting wildcards as a way to help users in this 
> situation. The idea is to handle it the same way the JVM does it: \* expands 
> to the list of jars in that directory. It does not traverse into any child 
> directory.
> Also, it probably would be a good idea to do it only for libjars (i.e. don't 
> do it for -files and -archives).






[jira] [Updated] (HADOOP-12747) support wildcard in libjars argument

2016-07-01 Thread Sangjin Lee (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12747?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Sangjin Lee updated HADOOP-12747:
-
Attachment: HADOOP-12747.06.patch

Posted patch v.6.

Minor update to handle the case where "\*" or "./\*" is passed into libjars. 
Rebased the patch to trunk.
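
For readers following along, here is a minimal sketch (not the actual 
{{GenericOptionsParser}} code) of the expansion described in this JIRA: 
{{dir/*}} expands to the jar files directly inside {{dir}}, without recursing 
into child directories, and {{*}} or {{./*}} means the current directory.

{code}
import java.io.File;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class LibjarsWildcardSketch {
  /** Expands a single -libjars entry; non-wildcard entries are returned as-is. */
  static List<String> expand(String entry) {
    if (!entry.endsWith("*")) {
      return Collections.singletonList(entry);
    }
    String dirName = entry.substring(0, entry.length() - 1);
    File dir = new File(dirName.isEmpty() ? "." : dirName);
    List<String> jars = new ArrayList<>();
    File[] children = dir.listFiles();
    if (children != null) {
      for (File f : children) {
        if (f.isFile() && f.getName().endsWith(".jar")) {
          jars.add(f.getPath());   // only this directory; no recursion into children
        }
      }
    }
    return jars;
  }

  public static void main(String[] args) {
    System.out.println(expand("lib/*"));
  }
}
{code}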

> support wildcard in libjars argument
> 
>
> Key: HADOOP-12747
> URL: https://issues.apache.org/jira/browse/HADOOP-12747
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: util
>Reporter: Sangjin Lee
>Assignee: Sangjin Lee
> Attachments: HADOOP-12747.01.patch, HADOOP-12747.02.patch, 
> HADOOP-12747.03.patch, HADOOP-12747.04.patch, HADOOP-12747.05.patch, 
> HADOOP-12747.06.patch
>
>
> There is a problem when a user job adds too many dependency jars in their 
> command line. The HADOOP_CLASSPATH part can be addressed, including using 
> wildcards (\*). But the same cannot be done with the -libjars argument. Today 
> it takes only fully specified file paths.
> We may want to consider supporting wildcards as a way to help users in this 
> situation. The idea is to handle it the same way the JVM does it: \* expands 
> to the list of jars in that directory. It does not traverse into any child 
> directory.
> Also, it probably would be a good idea to do it only for libjars (i.e. don't 
> do it for -files and -archives).






[jira] [Commented] (HADOOP-13337) Update maven-enforcer-plugin versioin to 1.4.1

2016-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15359790#comment-15359790
 ] 

Hudson commented on HADOOP-13337:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #10045 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10045/])
HADOOP-13337. Update maven-enforcer-plugin versioin to 1.4.1. (ozawa) (ozawa: 
rev 36cd0bce83b285d1f01a6c16f8f2b9284ac14cfc)
* pom.xml


> Update maven-enforcer-plugin versioin to 1.4.1
> --
>
> Key: HADOOP-13337
> URL: https://issues.apache.org/jira/browse/HADOOP-13337
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: HADOOP-13337.001.patch
>
>







[jira] [Commented] (HADOOP-13283) Support reset operation for new global storage statistics and per FS storage stats

2016-07-01 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15359789#comment-15359789
 ] 

Mingliang Liu commented on HADOOP-13283:


Thanks [~hitesh] for your comment. I think your point is very valid. The v2 
patch will address this, along with unit tests.

I'll revise the NPE problem as well. Thanks, [~jnp].

> Support reset operation for new global storage statistics and per FS storage 
> stats
> --
>
> Key: HADOOP-13283
> URL: https://issues.apache.org/jira/browse/HADOOP-13283
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-13283.000.patch, HADOOP-13283.001.patch
>
>
> Applications may reuse the file system object across jobs, so its storage 
> statistics should be resettable. Specifically, {{FileSystem.Statistics}} supports 
> reset, and [HADOOP-13032] needs to keep that use case valid.
> This jira is for supporting reset operations for storage statistics.
> Thanks [~hitesh] for reporting this.






[jira] [Commented] (HADOOP-13305) Define common statistics names across schemes

2016-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15359791#comment-15359791
 ] 

Hudson commented on HADOOP-13305:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #10045 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10045/])
HADOOP-13305. Define common statistics names across schemes. Contributed 
(jitendra: rev aa42c7a6dda23f9dd686cc844b31a5aeebe7e088)
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/test/java/org/apache/hadoop/hdfs/TestDFSOpsCountStatistics.java
* 
hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DFSOpsCountStatistics.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/FileSystemStorageStatistics.java
* hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/Statistic.java
* 
hadoop-common-project/hadoop-common/src/main/java/org/apache/hadoop/fs/StorageStatistics.java
* 
hadoop-tools/hadoop-aws/src/main/java/org/apache/hadoop/fs/s3a/S3AStorageStatistics.java


> Define common statistics names across schemes
> -
>
> Key: HADOOP-13305
> URL: https://issues.apache.org/jira/browse/HADOOP-13305
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-13305.000.patch, HADOOP-13305.001.patch
>
>
> The {{StorageStatistics}} class provides a pretty general interface, i.e. 
> {{getLong(name)}} and {{getLongStatistics()}}. There are no shared or standard 
> names for the storage statistics, so what {{getLong(name)}} means is up to each 
> storage statistics implementation. The problems:
> # For the common statistics, downstream applications expect the same 
> statistics name across different storage statistics and/or file system 
> schemes. Chances are they have to use 
> {{DFSOpsCountStorageStatistics#getLong("getStatus")}} and 
> {{S3A.Statistics#getLong("get_status")}} to retrieve the getStatus 
> operation stat.
> # Moreover, probing per-operation stats is hard if there are no 
> standard/shared common names.
> It makes a lot of sense for different schemes to issue per-operation 
> stats under the same name. Meanwhile, every FS will have its own internal things 
> to count, which can't be centrally defined or managed. But there are some 
> common ones which would be easier to manage if they all had the same name.
> Another motivation is that having a common set of names here will encourage 
> uniform instrumentation of all filesystems; it will also make it easier to 
> analyze the output of runs, were the stats to be published to a "performance 
> log" similar to the audit log. See Steve's work for S3 (e.g. [HADOOP-13171]).
> This jira is to track the effort of defining common StorageStatistics entry 
> names. Thanks to [~cmccabe], [~ste...@apache.org], [~hitesh] and [~jnp] for 
> offline discussion.
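
As a hypothetical usage sketch of what shared names buy downstream code: with a 
common name, the same counter can be read from any scheme's statistics. The 
statistic name used here is illustrative, not necessarily the exact constant the 
patch defines.

{code}
import org.apache.hadoop.fs.StorageStatistics;

final class CommonStatSketch {
  /** Reads the same per-operation counter from any scheme's statistics. */
  static long getFileStatusOps(StorageStatistics stats) {
    // With a shared name this works for hdfs, s3a, etc. alike, instead of
    // "getStatus" for DFS and "get_status" for S3A.
    Long value = stats.getLong("op_get_file_status");   // illustrative name
    return value == null ? 0L : value;
  }
}
{code}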






[jira] [Commented] (HADOOP-13283) Support reset operation for new global storage statistics and per FS storage stats

2016-07-01 Thread Hitesh Shah (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15359782#comment-15359782
 ] 

Hitesh Shah commented on HADOOP-13283:
--

Any reason why FileSystem.clearStatistics() is not being changed to call a 
reset on GlobalStorageStats? 
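
For concreteness, a minimal sketch of the change this question implies, assuming 
the patch adds a {{reset()}} to the global storage statistics. The field and 
method names below follow the upstream {{FileSystem}} code only approximately 
and are not quoted from the patch.

{code}
public static synchronized void clearStatistics() {
  // existing behavior: reset the per-scheme FileSystem.Statistics counters
  for (Statistics stat : statisticsTable.values()) {
    stat.reset();
  }
  // suggested addition: also reset the new global storage statistics
  GlobalStorageStatistics.INSTANCE.reset();
}
{code}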

> Support reset operation for new global storage statistics and per FS storage 
> stats
> --
>
> Key: HADOOP-13283
> URL: https://issues.apache.org/jira/browse/HADOOP-13283
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-13283.000.patch, HADOOP-13283.001.patch
>
>
> Applications may reuse the file system object across jobs, so its storage 
> statistics should be resettable. Specifically, {{FileSystem.Statistics}} supports 
> reset, and [HADOOP-13032] needs to keep that use case valid.
> This jira is for supporting reset operations for storage statistics.
> Thanks [~hitesh] for reporting this.






[jira] [Commented] (HADOOP-13305) Define common statistics names across schemes

2016-07-01 Thread Mingliang Liu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13305?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15359781#comment-15359781
 ] 

Mingliang Liu commented on HADOOP-13305:


Thank you [~jnp] for your review and commit!

> Define common statistics names across schemes
> -
>
> Key: HADOOP-13305
> URL: https://issues.apache.org/jira/browse/HADOOP-13305
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-13305.000.patch, HADOOP-13305.001.patch
>
>
> The {{StorageStatistics}} class provides a pretty general interface, i.e. 
> {{getLong(name)}} and {{getLongStatistics()}}. There are no shared or standard 
> names for the storage statistics, so what {{getLong(name)}} means is up to each 
> storage statistics implementation. The problems:
> # For the common statistics, downstream applications expect the same 
> statistics name across different storage statistics and/or file system 
> schemes. Chances are they have to use 
> {{DFSOpsCountStorageStatistics#getLong("getStatus")}} and 
> {{S3A.Statistics#getLong("get_status")}} to retrieve the getStatus 
> operation stat.
> # Moreover, probing per-operation stats is hard if there are no 
> standard/shared common names.
> It makes a lot of sense for different schemes to issue per-operation 
> stats under the same name. Meanwhile, every FS will have its own internal things 
> to count, which can't be centrally defined or managed. But there are some 
> common ones which would be easier to manage if they all had the same name.
> Another motivation is that having a common set of names here will encourage 
> uniform instrumentation of all filesystems; it will also make it easier to 
> analyze the output of runs, were the stats to be published to a "performance 
> log" similar to the audit log. See Steve's work for S3 (e.g. [HADOOP-13171]).
> This jira is to track the effort of defining common StorageStatistics entry 
> names. Thanks to [~cmccabe], [~ste...@apache.org], [~hitesh] and [~jnp] for 
> offline discussion.






[jira] [Commented] (HADOOP-13283) Support reset operation for new global storage statistics and per FS storage stats

2016-07-01 Thread Jitendra Nath Pandey (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13283?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15359779#comment-15359779
 ] 

Jitendra Nath Pandey commented on HADOOP-13283:
---

Please check for null stats, particularly in UnionStorageStatistics. In case of 
null stats, reset should be a no-op, I think.
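
A minimal sketch of the null guard being requested follows. The class shape and 
field name ({{stats}}) are assumptions for illustration, not the patch itself.

{code}
@Override
public void reset() {
  if (stats == null) {
    return;                       // no underlying statistics: reset is a no-op
  }
  for (StorageStatistics s : stats) {
    if (s != null) {
      s.reset();                  // delegate to each non-null member
    }
  }
}
{code}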

> Support reset operation for new global storage statistics and per FS storage 
> stats
> --
>
> Key: HADOOP-13283
> URL: https://issues.apache.org/jira/browse/HADOOP-13283
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-13283.000.patch, HADOOP-13283.001.patch
>
>
> Applications may reuse the file system object across jobs, so its storage 
> statistics should be resettable. Specifically, {{FileSystem.Statistics}} supports 
> reset, and [HADOOP-13032] needs to keep that use case valid.
> This jira is for supporting reset operations for storage statistics.
> Thanks [~hitesh] for reporting this.






[jira] [Updated] (HADOOP-13337) Update maven-enforcer-plugin versioin to 1.4.1

2016-07-01 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-13337:

   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 3.0.0-alpha1
   2.9.0
   Status: Resolved  (was: Patch Available)

Thanks Akira for the review. Committed this to trunk and branch-2.

> Update maven-enforcer-plugin versioin to 1.4.1
> --
>
> Key: HADOOP-13337
> URL: https://issues.apache.org/jira/browse/HADOOP-13337
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
> Fix For: 2.9.0, 3.0.0-alpha1
>
> Attachments: HADOOP-13337.001.patch
>
>







[jira] [Updated] (HADOOP-13337) Update maven-enforcer-plugin to 1.4.1

2016-07-01 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-13337:

Summary: Update maven-enforcer-plugin to 1.4.1  (was: Upgrading 
maven-enforcer-plugin to 1.4.1)

> Update maven-enforcer-plugin to 1.4.1
> -
>
> Key: HADOOP-13337
> URL: https://issues.apache.org/jira/browse/HADOOP-13337
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
> Attachments: HADOOP-13337.001.patch
>
>







[jira] [Updated] (HADOOP-13337) Update maven-enforcer-plugin versioin to 1.4.1

2016-07-01 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-13337:

Summary: Update maven-enforcer-plugin versioin to 1.4.1  (was: Update 
maven-enforcer-plugin to 1.4.1)

> Update maven-enforcer-plugin versioin to 1.4.1
> --
>
> Key: HADOOP-13337
> URL: https://issues.apache.org/jira/browse/HADOOP-13337
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
> Attachments: HADOOP-13337.001.patch
>
>







[jira] [Updated] (HADOOP-13305) Define common statistics names across schemes

2016-07-01 Thread Jitendra Nath Pandey (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jitendra Nath Pandey updated HADOOP-13305:
--
Resolution: Fixed
Status: Resolved  (was: Patch Available)

I have committed this to trunk, branch-2 and branch-2.8. Thanks Mingliang!

> Define common statistics names across schemes
> -
>
> Key: HADOOP-13305
> URL: https://issues.apache.org/jira/browse/HADOOP-13305
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs
>Affects Versions: 2.8.0
>Reporter: Mingliang Liu
>Assignee: Mingliang Liu
> Fix For: 2.8.0
>
> Attachments: HADOOP-13305.000.patch, HADOOP-13305.001.patch
>
>
> The {{StorageStatistics}} class provides a pretty general interface, i.e. 
> {{getLong(name)}} and {{getLongStatistics()}}. There are no shared or standard 
> names for the storage statistics, so what {{getLong(name)}} means is up to each 
> storage statistics implementation. The problems:
> # For the common statistics, downstream applications expect the same 
> statistics name across different storage statistics and/or file system 
> schemes. Chances are they have to use 
> {{DFSOpsCountStorageStatistics#getLong("getStatus")}} and 
> {{S3A.Statistics#getLong("get_status")}} to retrieve the getStatus 
> operation stat.
> # Moreover, probing per-operation stats is hard if there are no 
> standard/shared common names.
> It makes a lot of sense for different schemes to issue per-operation 
> stats under the same name. Meanwhile, every FS will have its own internal things 
> to count, which can't be centrally defined or managed. But there are some 
> common ones which would be easier to manage if they all had the same name.
> Another motivation is that having a common set of names here will encourage 
> uniform instrumentation of all filesystems; it will also make it easier to 
> analyze the output of runs, were the stats to be published to a "performance 
> log" similar to the audit log. See Steve's work for S3 (e.g. [HADOOP-13171]).
> This jira is to track the effort of defining common StorageStatistics entry 
> names. Thanks to [~cmccabe], [~ste...@apache.org], [~hitesh] and [~jnp] for 
> offline discussion.






[jira] [Updated] (HADOOP-12847) hadoop daemonlog should support https and SPNEGO for Kerberized cluster

2016-07-01 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12847?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HADOOP-12847:
---
Component/s: security

> hadoop daemonlog should support https and SPNEGO for Kerberized cluster
> ---
>
> Key: HADOOP-12847
> URL: https://issues.apache.org/jira/browse/HADOOP-12847
> Project: Hadoop Common
>  Issue Type: New Feature
>  Components: security
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
>  Labels: supportability
> Fix For: 2.9.0
>
> Attachments: HADOOP-12847.001.patch, HADOOP-12847.002.patch, 
> HADOOP-12847.003.patch, HADOOP-12847.004.patch, HADOOP-12847.005.patch, 
> HADOOP-12847.006.patch, HADOOP-12847.008.patch, HADOOP-12847.009.patch, 
> HADOOP-12847.010.branch-2.8.patch, HADOOP-12847.010.branch-2.patch, 
> HADOOP-12847.010.patch
>
>
> {{hadoop daemonlog}} is a simple, yet useful tool for debugging.
> However, it does not support HTTPS, nor does it support a Kerberized Hadoop 
> cluster.
> Using {{AuthenticatedURL}}, it will be able to support SPNEGO negotiation 
> with a Kerberized NameNode web UI. It will also fall back to simple 
> authentication if the cluster is not Kerberized.
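
For context, a minimal sketch of how {{AuthenticatedURL}} is typically used 
against a SPNEGO-protected endpoint such as the NameNode's {{/logLevel}} 
servlet. The host, port and logger name are placeholders, not values from the 
patch.

{code}
import java.net.HttpURLConnection;
import java.net.URL;
import org.apache.hadoop.security.authentication.client.AuthenticatedURL;

public class DaemonLogSketch {
  public static void main(String[] args) throws Exception {
    AuthenticatedURL.Token token = new AuthenticatedURL.Token();
    URL url = new URL(
        "https://namenode.example.com:50470/logLevel"
        + "?log=org.apache.hadoop.hdfs.server.namenode&level=DEBUG");
    // Performs SPNEGO negotiation using the caller's Kerberos credentials.
    HttpURLConnection conn = new AuthenticatedURL().openConnection(url, token);
    System.out.println("HTTP " + conn.getResponseCode());
  }
}
{code}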






[jira] [Commented] (HADOOP-13337) Upgrading maven-enforcer-plugin to 1.4.1

2016-07-01 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15359767#comment-15359767
 ] 

Tsuyoshi Ozawa commented on HADOOP-13337:
-

Checking this in.

> Upgrading maven-enforcer-plugin to 1.4.1
> 
>
> Key: HADOOP-13337
> URL: https://issues.apache.org/jira/browse/HADOOP-13337
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
> Attachments: HADOOP-13337.001.patch
>
>







[jira] [Commented] (HADOOP-13337) Upgrading maven-enforcer-plugin to 1.4.1

2016-07-01 Thread Akira Ajisaka (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15359764#comment-15359764
 ] 

Akira Ajisaka commented on HADOOP-13337:


+1, the test failure is not related to the patch. This is tracked by HDFS-10572.

> Upgrading maven-enforcer-plugin to 1.4.1
> 
>
> Key: HADOOP-13337
> URL: https://issues.apache.org/jira/browse/HADOOP-13337
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
> Attachments: HADOOP-13337.001.patch
>
>







[jira] [Updated] (HADOOP-12782) Faster LDAP group name resolution with ActiveDirectory

2016-07-01 Thread Yongjun Zhang (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12782?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yongjun Zhang updated HADOOP-12782:
---
Component/s: security

> Faster LDAP group name resolution with ActiveDirectory
> --
>
> Key: HADOOP-12782
> URL: https://issues.apache.org/jira/browse/HADOOP-12782
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: security
>Reporter: Wei-Chiu Chuang
>Assignee: Wei-Chiu Chuang
> Fix For: 2.8.0
>
> Attachments: HADOOP-12782.001.patch, HADOOP-12782.002.patch, 
> HADOOP-12782.003.patch, HADOOP-12782.004.patch, HADOOP-12782.005.patch, 
> HADOOP-12782.006.patch, HADOOP-12782.007.patch, HADOOP-12782.008.patch, 
> HADOOP-12782.009.patch, HADOOP-12782.branch-2.010.patch
>
>
> Typical LDAP group name resolution works well in common scenarios. 
> However, we have seen cases where a user is mapped to many groups (in an 
> extreme case, a user is mapped to more than 100 groups). The way it is 
> implemented now makes resolving groups from ActiveDirectory very slow in 
> this case.
> The current LDAP group resolution implementation sends two queries to an 
> ActiveDirectory server. The first query returns a user object, which contains 
> the DN (distinguished name). The second query looks for groups where the user DN 
> is a member. If a user is mapped to many groups, the second query returns all 
> group objects associated with the user, and is thus very slow.
> After studying a user object in ActiveDirectory, I found that a user object 
> actually contains a "memberOf" field, which lists the DNs of all group objects 
> the user belongs to. Assuming that an organization has no recursive 
> group relations (that is, a user A is a member of group G1, and group G1 is a 
> member of group G2), we can use this property to avoid the second query, 
> which can potentially run very slowly.
> I propose that we add a configuration to enable this feature only for users 
> who want to reduce group resolution time and who do not have recursive 
> groups, so that existing behavior will not be broken.
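
To illustrate the proposal, a standalone JNDI sketch of reading the 
{{memberOf}} attribute in a single query. This is not the {{LdapGroupsMapping}} 
code; the server URL, base DN and user name are placeholders.

{code}
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingEnumeration;
import javax.naming.directory.*;

public class MemberOfSketch {
  public static void main(String[] args) throws Exception {
    Hashtable<String, String> env = new Hashtable<>();
    env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
    env.put(Context.PROVIDER_URL, "ldap://ad.example.com:389");
    DirContext ctx = new InitialDirContext(env);

    SearchControls controls = new SearchControls();
    controls.setSearchScope(SearchControls.SUBTREE_SCOPE);
    controls.setReturningAttributes(new String[] {"memberOf"});

    // One query: the user entry already carries the DNs of its groups.
    NamingEnumeration<SearchResult> results = ctx.search(
        "dc=example,dc=com",
        "(&(objectClass=user)(sAMAccountName={0}))",
        new Object[] {"alice"},
        controls);
    if (results.hasMore()) {
      Attribute groups = results.next().getAttributes().get("memberOf");
      for (int i = 0; groups != null && i < groups.size(); i++) {
        System.out.println(groups.get(i));   // group DN; no second "member=" query needed
      }
    }
    ctx.close();
  }
}
{code}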






[jira] [Commented] (HADOOP-13337) Upgrading maven-enforcer-plugin to 1.4.1

2016-07-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13337?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15359657#comment-15359657
 ] 

Hadoop QA commented on HADOOP-13337:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
22s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:red}-1{color} | {color:red} test4tests {color} | {color:red}  0m  
0s{color} | {color:red} The patch doesn't appear to include any new or modified 
tests. Please justify why no new tests are needed for this patch. Also please 
list what manual steps were performed to verify this patch. {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  7m 
36s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m  
3s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  8m 
49s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
48s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
42s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  8m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  8m 
21s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 12m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  1m 
 8s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  
2s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  4m 
59s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} unit {color} | {color:red} 93m 43s{color} 
| {color:red} root in the patch failed. {color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
22s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}157m 34s{color} | 
{color:black} {color} |
\\
\\
|| Reason || Tests ||
| Failed junit tests | 
hadoop.hdfs.tools.offlineEditsViewer.TestOfflineEditsViewer |
\\
\\
|| Subsystem || Report/Notes ||
| Docker |  Image:yetus/hadoop:85209cc |
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12815799/HADOOP-13337.001.patch
 |
| JIRA Issue | HADOOP-13337 |
| Optional Tests |  asflicense  compile  javac  javadoc  mvninstall  mvnsite  
unit  xml  |
| uname | Linux e26d3d98fe3b 3.13.0-36-lowlatency #63-Ubuntu SMP PREEMPT Wed 
Sep 3 21:56:12 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /testptch/hadoop/patchprocess/precommit/personality/provided.sh 
|
| git revision | trunk / 4009fa3 |
| Default Java | 1.8.0_91 |
| unit | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9916/artifact/patchprocess/patch-unit-root.txt
 |
|  Test Results | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9916/testReport/ |
| modules | C: . U: . |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9916/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> Upgrading maven-enforcer-plugin to 1.4.1
> 
>
> Key: HADOOP-13337
> URL: https://issues.apache.org/jira/browse/HADOOP-13337
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
> Attachments: HADOOP-13337.001.patch
>
>







[jira] [Created] (HADOOP-13338) Incompatible change to SortedMapWritable

2016-07-01 Thread Siddharth Seth (JIRA)
Siddharth Seth created HADOOP-13338:
---

 Summary: Incompatible change to SortedMapWritable
 Key: HADOOP-13338
 URL: https://issues.apache.org/jira/browse/HADOOP-13338
 Project: Hadoop Common
  Issue Type: Bug
Affects Versions: 2.8.0
Reporter: Siddharth Seth
Priority: Critical


Hive does not compile against Hadoop-2.8.0-SNAPSHOT

{code}
[ERROR] Failed to execute goal 
org.apache.maven.plugins:maven-compiler-plugin:3.1:compile (default-compile) on 
project hive-contrib: Compilation failure
[ERROR] 
/Users/sseth/work2/projects/hive/dev/forMvnInstall/contrib/src/java/org/apache/hadoop/hive/contrib/util/typedbytes/TypedBytesWritableOutput.java:[215,70]
 incompatible types: java.lang.Object cannot be converted to 
java.util.Map.Entry
{code}

Looks like the change in HADOOP-10465 causes this.
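
For readers hitting the same error, here is a small self-contained illustration 
of the mechanism; it does not use the real Hadoop or Hive classes. Once a 
previously non-generic class is generified, callers that keep using the raw type 
get erased return types, so iterating {{entrySet()}} yields {{Object}} rather 
than {{Map.Entry}}.

{code}
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;

// Stand-in for a class that used to be non-generic and is now generic.
class GenerifiedMap<K extends Comparable<K>> {
  private final TreeMap<K, String> inner = new TreeMap<>();
  Set<Map.Entry<K, String>> entrySet() { return inner.entrySet(); }
  void put(K key, String value) { inner.put(key, value); }
}

public class RawTypeBreakage {
  public static void main(String[] args) {
    GenerifiedMap raw = new GenerifiedMap();   // raw type, like old callers
    raw.put("a", "1");
    // Would no longer compile against the generified class:
    //   for (Map.Entry<String, String> e : raw.entrySet()) { ... }
    // -> "incompatible types: java.lang.Object cannot be converted to java.util.Map.Entry"
    for (Object o : raw.entrySet()) {          // raw type: elements come back as Object
      Map.Entry<?, ?> e = (Map.Entry<?, ?>) o; // callers must cast, or parameterize the type
      System.out.println(e.getKey() + "=" + e.getValue());
    }
  }
}
{code}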






[jira] [Commented] (HADOOP-13332) Remove jackson 1.9.13 and switch all jackson code to 2.x code line

2016-07-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13332?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15359644#comment-15359644
 ] 

Hadoop QA commented on HADOOP-13332:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
23s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 16 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
12s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
37s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  6m 
54s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
45s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  9m 
42s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  4m 
40s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-mapreduce-project/hadoop-mapreduce-client {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 12m 
44s{color} | {color:green} trunk passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  7m 
28s{color} | {color:green} trunk passed {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
18s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 
33s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
10s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
51s{color} | {color:green} root: The patch generated 0 new + 1576 unchanged - 6 
fixed = 1576 total (was 1582) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green} 11m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  5m 
39s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m 
 0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m 
24s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m  
0s{color} | {color:blue} Skipped patched modules with no Java source: 
hadoop-project hadoop-mapreduce-project/hadoop-mapreduce-client {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green} 16m  
1s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} hadoop-project in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
16s{color} | {color:green} hadoop-maven-plugins in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
52s{color} | {color:green} hadoop-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} hadoop-kms in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
26s{color} | {color:green} hadoop-hdfs-client in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m  
6s{color} | {color:green} hadoop-hdfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
18s{color} | {color:green} hadoop-hdfs-nfs in the patch passed. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  0m 
34s{color} | {color:green} hadoop-yarn-common in the patch passed. {color} |
| {color:green}+1{color} | {color:green} 

[jira] [Updated] (HADOOP-13290) Appropriate use of generics in FairCallQueue

2016-07-01 Thread Konstantin Shvachko (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13290?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Konstantin Shvachko updated HADOOP-13290:
-
Assignee: Jonathan Hung

> Appropriate use of generics in FairCallQueue
> 
>
> Key: HADOOP-13290
> URL: https://issues.apache.org/jira/browse/HADOOP-13290
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: ipc
>Affects Versions: 2.6.0
>Reporter: Konstantin Shvachko
>Assignee: Jonathan Hung
>  Labels: newbie++
>
> # {{BlockingQueue}} is intermittently used with and without generic 
> parameters in the {{FairCallQueue}} class. It should be parameterized.
> # The same applies to {{FairCallQueue}} itself. It should be parameterized, 
> which could be a bit trickier.
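
For illustration, the kind of change being asked for: declare the queues with 
their element type rather than the raw type. The type parameter {{E}} and the 
field names here are placeholders, not the actual {{FairCallQueue}} code.

{code}
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Raw form (what the JIRA flags):
//   BlockingQueue queue = new LinkedBlockingQueue();
// Parameterized form:
final class QueueHolder<E> {
  private final BlockingQueue<E> queue = new LinkedBlockingQueue<>();

  void add(E element) throws InterruptedException {
    queue.put(element);           // no unchecked warnings, no casts on take()
  }

  E take() throws InterruptedException {
    return queue.take();
  }
}
{code}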






[jira] [Updated] (HADOOP-13337) Upgrading maven-enforcer-plugin to 1.4.1

2016-07-01 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-13337:

Attachment: HADOOP-13337.001.patch

Attaching first patch.

> Upgrading maven-enforcer-plugin to 1.4.1
> 
>
> Key: HADOOP-13337
> URL: https://issues.apache.org/jira/browse/HADOOP-13337
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
> Attachments: HADOOP-13337.001.patch
>
>







[jira] [Updated] (HADOOP-13337) Upgrading maven-enforcer-plugin to 1.4.1

2016-07-01 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13337?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-13337:

Status: Patch Available  (was: Open)

> Upgrading maven-enforcer-plugin to 1.4.1
> 
>
> Key: HADOOP-13337
> URL: https://issues.apache.org/jira/browse/HADOOP-13337
> Project: Hadoop Common
>  Issue Type: Improvement
>  Components: build
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
> Attachments: HADOOP-13337.001.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13337) Upgrading maven-enforcer-plugin to 1.4.1

2016-07-01 Thread Tsuyoshi Ozawa (JIRA)
Tsuyoshi Ozawa created HADOOP-13337:
---

 Summary: Upgrading maven-enforcer-plugin to 1.4.1
 Key: HADOOP-13337
 URL: https://issues.apache.org/jira/browse/HADOOP-13337
 Project: Hadoop Common
  Issue Type: Improvement
  Components: build
Reporter: Tsuyoshi Ozawa
Assignee: Tsuyoshi Ozawa






--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12064) [JDK8] Update guice version to 4.0

2016-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15359440#comment-15359440
 ] 

Hudson commented on HADOOP-12064:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #10043 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10043/])
HADOOP-12064. [JDK8] Update guice version to 4.0. (ozawa) (ozawa: rev 
4009fa3a9272ddfe3825b1bd61b3ab9dc0124050)
* hadoop-project/pom.xml


> [JDK8] Update guice version to 4.0
> --
>
> Key: HADOOP-12064
> URL: https://issues.apache.org/jira/browse/HADOOP-12064
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
>  Labels: UpgradeKeyLibrary
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-12064.001.patch, HADOOP-12064.002.WIP.patch, 
> HADOOP-12064.002.patch
>
>
> guice 3.0 doesn't work with lambda statements. 
> https://github.com/google/guice/issues/757
> We should upgrade it to 4.0 which includes the fix.
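
As a hedged illustration of the failure mode (a hypothetical repro, not code from the patch): Guice 3.0 bundles older cglib/asm that may fail to read Java 8 bytecode, for example when the injected class uses a lambda, while Guice 4.0 handles it.

{code}
// Hypothetical repro sketch, not from the patch: a module whose bound
// instance is created from a Java 8 lambda. Guice 3.0's bundled asm/cglib
// may fail on Java 8 class files; Guice 4.0 works.
import com.google.inject.AbstractModule;
import com.google.inject.Guice;
import com.google.inject.Injector;

public class LambdaInjectionExample {
  interface Greeter { String greet(String name); }

  static class GreeterModule extends AbstractModule {
    @Override
    protected void configure() {
      // The bound instance is built from a lambda expression.
      bind(Greeter.class).toInstance(name -> "hello " + name);
    }
  }

  public static void main(String[] args) {
    Injector injector = Guice.createInjector(new GreeterModule());
    System.out.println(injector.getInstance(Greeter.class).greet("hadoop"));
  }
}
{code}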



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Assigned] (HADOOP-11993) maven enforcer plugin to ban java 8 incompatible dependencies

2016-07-01 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa reassigned HADOOP-11993:
---

Assignee: Tsuyoshi Ozawa

> maven enforcer plugin to ban java 8 incompatible dependencies
> -
>
> Key: HADOOP-11993
> URL: https://issues.apache.org/jira/browse/HADOOP-11993
> Project: Hadoop Common
>  Issue Type: Sub-task
>Affects Versions: 2.7.0
>Reporter: Steve Loughran
>Assignee: Tsuyoshi Ozawa
>Priority: Minor
>
> It's possible to use maven-enforcer to ban dependencies; this can be used to 
> reject those known to be incompatible with Java 8
> [example|https://gist.github.com/HiJon89/65e34552c18e5ac9fd31]
> If we set maven enforcer to do this checking, it can ensure that the 2.7+ 
> codebase isn't pulling in any incompatible binaries.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12064) [JDK8] Update guice version to 4.0

2016-07-01 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12064:

Release Note: 
Upgrading the following dependencies:
* Guice from 3.0 to 4.0
* cglib from 2.2 to 3.2.0
* asm from 3.2 to 5.0.4

> [JDK8] Update guice version to 4.0
> --
>
> Key: HADOOP-12064
> URL: https://issues.apache.org/jira/browse/HADOOP-12064
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
>  Labels: UpgradeKeyLibrary
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-12064.001.patch, HADOOP-12064.002.WIP.patch, 
> HADOOP-12064.002.patch
>
>
> guice 3.0 doesn't work with lambda statements. 
> https://github.com/google/guice/issues/757
> We should upgrade it to 4.0 which includes the fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12064) [JDK8] Update guice version to 4.0

2016-07-01 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12064:

Hadoop Flags: Incompatible change, Reviewed  (was: Incompatible change)

> [JDK8] Update guice version to 4.0
> --
>
> Key: HADOOP-12064
> URL: https://issues.apache.org/jira/browse/HADOOP-12064
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
>  Labels: UpgradeKeyLibrary
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-12064.001.patch, HADOOP-12064.002.WIP.patch, 
> HADOOP-12064.002.patch
>
>
> guice 3.0 doesn't work with lambda statements. 
> https://github.com/google/guice/issues/757
> We should upgrade it to 4.0 which includes the fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-12064) [JDK8] Update guice version to 4.0

2016-07-01 Thread Tsuyoshi Ozawa (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-12064?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tsuyoshi Ozawa updated HADOOP-12064:

   Resolution: Fixed
Fix Version/s: 3.0.0-alpha1
   Status: Resolved  (was: Patch Available)

Committed this to trunk. Again, thanks for your review, Akira!

> [JDK8] Update guice version to 4.0
> --
>
> Key: HADOOP-12064
> URL: https://issues.apache.org/jira/browse/HADOOP-12064
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
>  Labels: UpgradeKeyLibrary
> Fix For: 3.0.0-alpha1
>
> Attachments: HADOOP-12064.001.patch, HADOOP-12064.002.WIP.patch, 
> HADOOP-12064.002.patch
>
>
> guice 3.0 doesn't work with lambda statements. 
> https://github.com/google/guice/issues/757
> We should upgrade it to 4.0 which includes the fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-12064) [JDK8] Update guice version to 4.0

2016-07-01 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15359425#comment-15359425
 ] 

Tsuyoshi Ozawa commented on HADOOP-12064:
-

Thanks [~ajisakaa] for your review. Checking this in based on your review and 
discussion on the mailing list.

> [JDK8] Update guice version to 4.0
> --
>
> Key: HADOOP-12064
> URL: https://issues.apache.org/jira/browse/HADOOP-12064
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
>  Labels: UpgradeKeyLibrary
> Attachments: HADOOP-12064.001.patch, HADOOP-12064.002.WIP.patch, 
> HADOOP-12064.002.patch
>
>
> guice 3.0 doesn't work with lambda statements. 
> https://github.com/google/guice/issues/757
> We should upgrade it to 4.0 which includes the fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-12064) [JDK8] Update guice version to 4.0

2016-07-01 Thread Tsuyoshi Ozawa (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-12064?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15359425#comment-15359425
 ] 

Tsuyoshi Ozawa edited comment on HADOOP-12064 at 7/1/16 6:23 PM:
-

Thanks [~ajisakaa] for your review. Checking this in trunk based on your review 
and discussion on the mailing list.


was (Author: ozawa):
Thanks [~ajisakaa] for your review. Checking this in based on your review and 
discussion on the mailing list.

> [JDK8] Update guice version to 4.0
> --
>
> Key: HADOOP-12064
> URL: https://issues.apache.org/jira/browse/HADOOP-12064
> Project: Hadoop Common
>  Issue Type: Improvement
>Affects Versions: 3.0.0-alpha1
>Reporter: Tsuyoshi Ozawa
>Assignee: Tsuyoshi Ozawa
>Priority: Blocker
>  Labels: UpgradeKeyLibrary
> Attachments: HADOOP-12064.001.patch, HADOOP-12064.002.WIP.patch, 
> HADOOP-12064.002.patch
>
>
> guice 3.0 doesn't work with lambda statements. 
> https://github.com/google/guice/issues/757
> We should upgrade it to 4.0 which includes the fix.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-10724) `hadoop fs -du -h` incorrectly formatted

2016-07-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-10724?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15359385#comment-15359385
 ] 

Hadoop QA commented on HADOOP-10724:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m  
0s{color} | {color:blue} Docker mode activated. {color} |
| {color:red}-1{color} | {color:red} patch {color} | {color:red}  0m  4s{color} 
| {color:red} HADOOP-10724 does not apply to trunk. Rebase required? Wrong 
Branch? See https://wiki.apache.org/hadoop/HowToContribute for help. {color} |
\\
\\
|| Subsystem || Report/Notes ||
| JIRA Patch URL | 
https://issues.apache.org/jira/secure/attachment/12728878/0001-HADOOP-10724-do-not-insert-a-space-between-number-an.patch
 |
| JIRA Issue | HADOOP-10724 |
| Console output | 
https://builds.apache.org/job/PreCommit-HADOOP-Build/9915/console |
| Powered by | Apache Yetus 0.4.0-SNAPSHOT   http://yetus.apache.org |


This message was automatically generated.



> `hadoop fs -du -h` incorrectly formatted
> 
>
> Key: HADOOP-10724
> URL: https://issues.apache.org/jira/browse/HADOOP-10724
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0-alpha1
>Reporter: Sam Steingold
>Assignee: Sam Steingold
> Attachments: 
> 0001-HADOOP-10724-do-not-insert-a-space-between-number-an.patch
>
>
> {{hadoop fs -du -h}} prints sizes with a space between the number and the 
> unit:
> {code}
> $ hadoop fs -du -h . 
> 91.7 G   
> 583.1 M  
> 97.6 K   .
> {code}
> The standard unix {{du -h}} does not:
> {code}
> $ du -h
> 400K...
> 404K
> 480K.
> {code}
> The result is that the output of {{du -h}} is properly sorted by {{sort -h}} 
> while the output of {{hadoop fs -du -h}} is *not* properly sorted by it.
> Please see 
> * [sort|http://linux.die.net/man/1/sort]: "-h --human-numeric-sort
> compare human readable numbers (e.g., 2K 1G) "
> * [du|http://linux.die.net/man/1/du]: "-h, --human-readable
> print sizes in human readable format (e.g., 1K 234M 2G) "
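
For illustration, a hedged sketch of the formatting difference being reported (a hypothetical helper, not the attached patch): dropping the space between the number and the unit makes the output look like GNU du and therefore sortable with sort -h.

{code}
// Hypothetical helper, not the attached patch: format a byte count with no
// space before the unit, matching the GNU `du -h` style shown above.
public class HumanReadable {
  private static final String[] UNITS = {"", "K", "M", "G", "T", "P"};

  static String format(long bytes) {
    double value = bytes;
    int unit = 0;
    while (value >= 1024 && unit < UNITS.length - 1) {
      value /= 1024;
      unit++;
    }
    // "97.6K" instead of "97.6 K", so `sort -h` orders the lines correctly.
    return unit == 0 ? Long.toString(bytes) : String.format("%.1f%s", value, UNITS[unit]);
  }

  public static void main(String[] args) {
    System.out.println(format(100L * 1024 * 1024 * 1024));  // prints 100.0G
  }
}
{code}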



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-10724) `hadoop fs -du -h` incorrectly formatted

2016-07-01 Thread Allen Wittenauer (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-10724?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Allen Wittenauer updated HADOOP-10724:
--
 Hadoop Flags: Incompatible change
Affects Version/s: 3.0.0-alpha1
 Target Version/s: 3.0.0-alpha1
   Status: Patch Available  (was: Open)

> `hadoop fs -du -h` incorrectly formatted
> 
>
> Key: HADOOP-10724
> URL: https://issues.apache.org/jira/browse/HADOOP-10724
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs
>Affects Versions: 3.0.0-alpha1
>Reporter: Sam Steingold
>Assignee: Sam Steingold
> Attachments: 
> 0001-HADOOP-10724-do-not-insert-a-space-between-number-an.patch
>
>
> {{hadoop fs -du -h}} prints sizes with a space between the number and the 
> unit:
> {code}
> $ hadoop fs -du -h . 
> 91.7 G   
> 583.1 M  
> 97.6 K   .
> {code}
> The standard unix {{du -h}} does not:
> {code}
> $ du -h
> 400K...
> 404K
> 480K.
> {code}
> The result is that the output of {{du -h}} is properly sorted by {{sort -h}} 
> while the output of {{hadoop fs -du -h}} is *not* properly sorted by it.
> Please see 
> * [sort|http://linux.die.net/man/1/sort]: "-h --human-numeric-sort
> compare human readable numbers (e.g., 2K 1G) "
> * [du|http://linux.die.net/man/1/du]: "-h, --human-readable
> print sizes in human readable format (e.g., 1K 234M 2G) "



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13320) Fix arguments check in documentation for WordCount v2.0

2016-07-01 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15359298#comment-15359298
 ] 

Daniel Templeton commented on HADOOP-13320:
---

LGTM.  +1 (non-binding).  [~rchiang], here's a commit for you.

> Fix arguments check in documentation for WordCount v2.0
> ---
>
> Key: HADOOP-13320
> URL: https://issues.apache.org/jira/browse/HADOOP-13320
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: niccolo becchi
>Priority: Trivial
>
> This issue is affecting the documentation page, so the code is not covered by 
> any tests. It's actually visible on the page:
> https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html#Example:_WordCount_v2.0
> On the Example: WordCount v2.0 
> The actual arguments check is wrong, as it's never printing the message of 
> the correct usage. So, running the code with no parameters, as in the 
> following example:
> {code}
> yarn jar /var/tmp/WordCount.jar task0.WordCount2
> {code}
>  
> I have got the following exception message in output:
> {code}
> Exception in thread "main" java.lang.IndexOutOfBoundsException: Index: 0, 
> Size: 0
> at java.util.ArrayList.rangeCheck(ArrayList.java:635)
> at java.util.ArrayList.get(ArrayList.java:411)
> at task0.WordCount2.main(WordCount2.java:131)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> {code}
> Instead of the expected friendly message:
> {code}
>  Usage: wordcount <in> <out> [-skip skipPatternFile]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13320) Fix arguments check in documentation for WordCount v2.0

2016-07-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15359293#comment-15359293
 ] 

ASF GitHub Bot commented on HADOOP-13320:
-

Github user pippobaudos commented on the issue:

https://github.com/apache/hadoop/pull/108
  
Thanks @templedf, I have updated the pull request following the suggestion.


> Fix arguments check in documentation for WordCount v2.0
> ---
>
> Key: HADOOP-13320
> URL: https://issues.apache.org/jira/browse/HADOOP-13320
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: niccolo becchi
>Priority: Trivial
>
> This issue is affecting the documentation page, so the code is not covered by 
> any tests. It's actually visible on the page:
> https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html#Example:_WordCount_v2.0
> On the Example: WordCount v2.0 
> The actual arguments check is wrong, as it's never printing the message of 
> the correct usage. So, running the code with no parameters, as in the 
> following example:
> {code}
> yarn jar /var/tmp/WordCount.jar task0.WordCount2
> {code}
>  
> I have got the following exception message in output:
> {code}
> Exception in thread "main" java.lang.IndexOutOfBoundsException: Index: 0, 
> Size: 0
> at java.util.ArrayList.rangeCheck(ArrayList.java:635)
> at java.util.ArrayList.get(ArrayList.java:411)
> at task0.WordCount2.main(WordCount2.java:131)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> {code}
> Instead of the expected friendly message:
> {code}
>  Usage: wordcount <in> <out> [-skip skipPatternFile]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13320) Fix arguments check in documentation for WordCount v2.0

2016-07-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15359273#comment-15359273
 ] 

ASF GitHub Bot commented on HADOOP-13320:
-

GitHub user pippobaudos reopened a pull request:

https://github.com/apache/hadoop/pull/108

HADOOP-13320. Fix arguments check in the WordCount v2.0 in Documentation



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/pippobaudos/hadoop HADOOP-13320

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/108.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #108


commit 469bc02c6f932bc55fced96de33884cebbe92242
Author: Niccolo Becchi 
Date:   2016-06-24T13:50:24Z

HADOOP-13320. Fix arguments check in the WordCount v2.0 in Doc. Contributed 
by Niccolo Becchi




> Fix arguments check in documentation for WordCount v2.0
> ---
>
> Key: HADOOP-13320
> URL: https://issues.apache.org/jira/browse/HADOOP-13320
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: niccolo becchi
>Priority: Trivial
>
> This issue is affecting the documentation page, so the code is not covered by 
> any tests. It's actually visible on the page:
> https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html#Example:_WordCount_v2.0
> On the Example: WordCount v2.0 
> The actual arguments check is wrong, as it's never printing the message of 
> the correct usage. So, running the code with no parameters, as in the 
> following example:
> {code}
> yarn jar /var/tmp/WordCount.jar task0.WordCount2
> {code}
>  
> I have got the following exception message in output:
> {code}
> Exception in thread "main" java.lang.IndexOutOfBoundsException: Index: 0, 
> Size: 0
> at java.util.ArrayList.rangeCheck(ArrayList.java:635)
> at java.util.ArrayList.get(ArrayList.java:411)
> at task0.WordCount2.main(WordCount2.java:131)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> {code}
> Instead of the expected friendly message:
> {code}
>  Usage: wordcount <in> <out> [-skip skipPatternFile]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13320) Fix arguments check in documentation for WordCount v2.0

2016-07-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15359272#comment-15359272
 ] 

ASF GitHub Bot commented on HADOOP-13320:
-

Github user pippobaudos commented on the issue:

https://github.com/apache/hadoop/pull/108
  
Hi @templedf, I have updated the commit. Now it should be the simplest 
expression to read...


> Fix arguments check in documentation for WordCount v2.0
> ---
>
> Key: HADOOP-13320
> URL: https://issues.apache.org/jira/browse/HADOOP-13320
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: niccolo becchi
>Priority: Trivial
>
> This issue is affecting the documentation page, so the code is not covered by 
> any tests. It's actually visible on the page:
> https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html#Example:_WordCount_v2.0
> On the Example: WordCount v2.0 
> The actual arguments check is wrong, as it's never printing the message of 
> the correct usage. So, running the code with no parameters, as in the 
> following example:
> {code}
> yarn jar /var/tmp/WordCount.jar task0.WordCount2
> {code}
>  
> I have got the following exception message in output:
> {code}
> Exception in thread "main" java.lang.IndexOutOfBoundsException: Index: 0, 
> Size: 0
> at java.util.ArrayList.rangeCheck(ArrayList.java:635)
> at java.util.ArrayList.get(ArrayList.java:411)
> at task0.WordCount2.main(WordCount2.java:131)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> {code}
> Instead of the expected friendly message:
> {code}
>  Usage: wordcount <in> <out> [-skip skipPatternFile]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13254) Make Diskchecker Pluggable

2016-07-01 Thread Yufei Gu (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13254?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15359253#comment-15359253
 ] 

Yufei Gu commented on HADOOP-13254:
---

Thanks a lot, [~templedf].

> Make Diskchecker Pluggable
> --
>
> Key: HADOOP-13254
> URL: https://issues.apache.org/jira/browse/HADOOP-13254
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: util
>Reporter: Yufei Gu
>Assignee: Yufei Gu
> Attachments: HADOOP-13254.001.patch, HADOOP-13254.002.patch, 
> HADOOP-13254.003.patch, HADOOP-13254.004.patch, HADOOP-13254.005.patch, 
> HADOOP-13254.006.patch, HADOOP-13254.007.patch
>
>




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13320) Fix arguments check in documentation for WordCount v2.0

2016-07-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15359249#comment-15359249
 ] 

ASF GitHub Bot commented on HADOOP-13320:
-

Github user pippobaudos closed the pull request at:

https://github.com/apache/hadoop/pull/108


> Fix arguments check in documentation for WordCount v2.0
> ---
>
> Key: HADOOP-13320
> URL: https://issues.apache.org/jira/browse/HADOOP-13320
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: niccolo becchi
>Priority: Trivial
>
> This issue is affecting the documentation page, so the code is not covered by 
> any tests. It's actually visible on the page:
> https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html#Example:_WordCount_v2.0
> On the Example: WordCount v2.0 
> The actual arguments check is wrong, as it's never printing the message of 
> the correct usage. So, running the code with no parameters, as in the 
> following example:
> {code}
> yarn jar /var/tmp/WordCount.jar task0.WordCount2
> {code}
>  
> I have got the following exception message in output:
> {code}
> Exception in thread "main" java.lang.IndexOutOfBoundsException: Index: 0, 
> Size: 0
> at java.util.ArrayList.rangeCheck(ArrayList.java:635)
> at java.util.ArrayList.get(ArrayList.java:411)
> at task0.WordCount2.main(WordCount2.java:131)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> {code}
> Instead of the expected friendly message:
> {code}
>  Usage: wordcount <in> <out> [-skip skipPatternFile]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13320) Fix arguments check in documentation for WordCount v2.0

2016-07-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15359238#comment-15359238
 ] 

ASF GitHub Bot commented on HADOOP-13320:
-

Github user templedf commented on the issue:

https://github.com/apache/hadoop/pull/108
  
Looks like the right fix, but can you simplify the logic a little, i.e. 
{{(len != 2) && (len != 4)}}?
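
A hedged sketch of the check being suggested (the usage text and variable name follow the tutorial example, not the pull request itself):

{code}
// Hedged sketch of the suggested check, not the pull request itself:
// accept exactly 2 or 4 remaining arguments, otherwise print usage and exit.
public class UsageCheckSketch {
  public static void main(String[] remainingArgs) {
    if (remainingArgs.length != 2 && remainingArgs.length != 4) {
      System.err.println("Usage: wordcount <in> <out> [-skip skipPatternFile]");
      System.exit(2);
    }
    System.out.println("arguments look valid");
  }
}
{code}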


> Fix arguments check in documentation for WordCount v2.0
> ---
>
> Key: HADOOP-13320
> URL: https://issues.apache.org/jira/browse/HADOOP-13320
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: niccolo becchi
>Priority: Trivial
>
> This issue is affecting the documentation page, so the code is not covered by 
> any tests. It's actually visible on the page:
> https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html#Example:_WordCount_v2.0
> On the Example: WordCount v2.0 
> The actual arguments check is wrong, as it's never printing the message of 
> the correct usage. So, running the code with no parameters, as in the 
> following example:
> {code}
> yarn jar /var/tmp/WordCount.jar task0.WordCount2
> {code}
>  
> I have got the following exception message in output:
> {code}
> Exception in thread "main" java.lang.IndexOutOfBoundsException: Index: 0, 
> Size: 0
> at java.util.ArrayList.rangeCheck(ArrayList.java:635)
> at java.util.ArrayList.get(ArrayList.java:411)
> at task0.WordCount2.main(WordCount2.java:131)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> {code}
> Instead of the expected friendly message:
> {code}
>  Usage: wordcount <in> <out> [-skip skipPatternFile]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13320) Fix arguments check in documentation for WordCount v2.0

2016-07-01 Thread Daniel Templeton (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15359232#comment-15359232
 ] 

Daniel Templeton commented on HADOOP-13320:
---

Gah.  Sorry, missed the pull request.

> Fix arguments check in documentation for WordCount v2.0
> ---
>
> Key: HADOOP-13320
> URL: https://issues.apache.org/jira/browse/HADOOP-13320
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: niccolo becchi
>Priority: Trivial
>
> This issue is affecting the documentation page, so the code is not covered by 
> any tests. It's actually visible on the page:
> https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html#Example:_WordCount_v2.0
> On the Example: WordCount v2.0 
> The actual arguments check is wrong, as it's never printing the message of 
> the correct usage. So, running the code with no parameters, as in the 
> following example:
> {code}
> yarn jar /var/tmp/WordCount.jar task0.WordCount2
> {code}
>  
> I have got the following exception message in output:
> {code}
> Exception in thread "main" java.lang.IndexOutOfBoundsException: Index: 0, 
> Size: 0
> at java.util.ArrayList.rangeCheck(ArrayList.java:635)
> at java.util.ArrayList.get(ArrayList.java:411)
> at task0.WordCount2.main(WordCount2.java:131)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> {code}
> Instead of the expected friendly message:
> {code}
>  Usage: wordcount <in> <out> [-skip skipPatternFile]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13320) Fix arguments check in documentation for WordCount v2.0

2016-07-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15359233#comment-15359233
 ] 

ASF GitHub Bot commented on HADOOP-13320:
-

GitHub user pippobaudos opened a pull request:

https://github.com/apache/hadoop/pull/108

HADOOP-13320. Fix arguments check in the WordCount v2.0 in Documentation



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/pippobaudos/hadoop HADOOP-13320

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/108.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #108


commit 469bc02c6f932bc55fced96de33884cebbe92242
Author: Niccolo Becchi 
Date:   2016-06-24T13:50:24Z

HADOOP-13320. Fix arguments check in the WordCount v2.0 in Doc. Contributed 
by Niccolo Becchi




> Fix arguments check in documentation for WordCount v2.0
> ---
>
> Key: HADOOP-13320
> URL: https://issues.apache.org/jira/browse/HADOOP-13320
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: niccolo becchi
>Priority: Trivial
>
> This issue is affecting the documentation page, so the code is not covered by 
> any tests. It's actually visible on the page:
> https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html#Example:_WordCount_v2.0
> On the Example: WordCount v2.0 
> The actual arguments check is wrong, as it's never printing the message of 
> the correct usage. So, running the code with no parameters, as in the 
> following example:
> {code}
> yarn jar /var/tmp/WordCount.jar task0.WordCount2
> {code}
>  
> I have got the following exception message in output:
> {code}
> Exception in thread "main" java.lang.IndexOutOfBoundsException: Index: 0, 
> Size: 0
> at java.util.ArrayList.rangeCheck(ArrayList.java:635)
> at java.util.ArrayList.get(ArrayList.java:411)
> at task0.WordCount2.main(WordCount2.java:131)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> {code}
> Instead of the expected friendly message:
> {code}
>  Usage: wordcount <in> <out> [-skip skipPatternFile]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13320) Fix arguments check in documentation for WordCount v2.0

2016-07-01 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13320?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15359226#comment-15359226
 ] 

ASF GitHub Bot commented on HADOOP-13320:
-

Github user pippobaudos closed the pull request at:

https://github.com/apache/hadoop/pull/105


> Fix arguments check in documentation for WordCount v2.0
> ---
>
> Key: HADOOP-13320
> URL: https://issues.apache.org/jira/browse/HADOOP-13320
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: documentation
>Reporter: niccolo becchi
>Priority: Trivial
>
> This issue is affecting the documentation page, so the code is not covered by 
> any tests. It's actually visible on the page:
> https://hadoop.apache.org/docs/current/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html#Example:_WordCount_v2.0
> On the Example: WordCount v2.0 
> The actual arguments check is wrong, as it's never printing the message of 
> the correct usage. So, running the code with no parameters, as in the 
> following example:
> {code}
> yarn jar /var/tmp/WordCount.jar task0.WordCount2
> {code}
>  
> I have got the following exception message in output:
> {code}
> Exception in thread "main" java.lang.IndexOutOfBoundsException: Index: 0, 
> Size: 0
> at java.util.ArrayList.rangeCheck(ArrayList.java:635)
> at java.util.ArrayList.get(ArrayList.java:411)
> at task0.WordCount2.main(WordCount2.java:131)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:606)
> at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
> at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
> {code}
> Instead of the expected friendly message:
> {code}
>  Usage: wordcount <in> <out> [-skip skipPatternFile]
> {code}



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13271) Intermittent failure of TestS3AContractRootDir.testListEmptyRootDirectory

2016-07-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15359200#comment-15359200
 ] 

Steve Loughran commented on HADOOP-13271:
-

This has stopped happening for me recently; not sure what is up. I think whatever 
consistency condition existed has "gone away" with changes to the code.

Maybe close as cannot-reproduce until it comes back.

> Intermittent failure of TestS3AContractRootDir.testListEmptyRootDirectory
> -
>
> Key: HADOOP-13271
> URL: https://issues.apache.org/jira/browse/HADOOP-13271
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3, test
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Priority: Minor
>
> I'm seeing an intermittent failure of 
> {{TestS3AContractRootDir.testListEmptyRootDirectory}}
> The sequence of deleteFiles(listStatus(Path("/"))) is failing because the 
> file to delete is root... yet the code is passing in the children of /, not / 
> itself.
> Hypothesis: when you call listStatus on an empty root dir, you get a file 
> entry back that says isFile, not isDirectory.
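
For illustration, a hedged sketch of the kind of assertion that would confirm or refute that hypothesis, meant to sit inside a filesystem contract test where {{fs}} is the FileSystem under test (helper names assumed; not the actual test code):

{code}
// Hypothetical fragment: on an empty root, listStatus("/") should return no
// entry that is the root itself, and any entry it does return should be a
// directory rather than a file.
FileStatus[] statuses = fs.listStatus(new Path("/"));
for (FileStatus st : statuses) {
  assertFalse("entry must not be the root itself: " + st.getPath(),
      st.getPath().isRoot());
  assertTrue("entry should be a directory: " + st, st.isDirectory());
}
{code}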



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13208) S3A listFiles(recursive=true) to do a bulk listObjects instead of walking the pseudo-tree of directories

2016-07-01 Thread Hadoop QA (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13208?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15358999#comment-15358999
 ] 

Hadoop QA commented on HADOOP-13208:


| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
| {color:blue}0{color} | {color:blue} reexec {color} | {color:blue}  0m 
28s{color} | {color:blue} Docker mode activated. {color} |
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  
0s{color} | {color:green} The patch does not contain any @author tags. {color} |
| {color:green}+1{color} | {color:green} test4tests {color} | {color:green}  0m 
 0s{color} | {color:green} The patch appears to include 8 new or modified test 
files. {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
39s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  6m 
50s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
25s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
44s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  1m 
36s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  1m 
54s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
48s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} findbugs {color} | {color:green}  3m 
13s{color} | {color:green} branch-2 passed {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
28s{color} | {color:green} branch-2 passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
38s{color} | {color:green} branch-2 passed with JDK v1.7.0_101 {color} |
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 
17s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  1m 
25s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
16s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
16s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  7m 
16s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green}  7m 
16s{color} | {color:green} the patch passed {color} |
| {color:orange}-0{color} | {color:orange} checkstyle {color} | {color:orange}  
1m 31s{color} | {color:orange} root: The patch generated 9 new + 42 unchanged - 
52 fixed = 51 total (was 94) {color} |
| {color:green}+1{color} | {color:green} mvnsite {color} | {color:green}  2m  
3s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} mvneclipse {color} | {color:green}  0m 
55s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} whitespace {color} | {color:red}  0m  
1s{color} | {color:red} The patch has 49 line(s) that end in whitespace. Use 
git apply --whitespace=fix. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 
59s{color} | {color:red} hadoop-tools/hadoop-aws generated 2 new + 0 unchanged 
- 0 fixed = 2 total (was 0) {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
24s{color} | {color:green} the patch passed with JDK v1.8.0_91 {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  1m 
41s{color} | {color:green} the patch passed with JDK v1.7.0_101 {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  9m 
16s{color} | {color:green} hadoop-common in the patch passed with JDK 
v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
21s{color} | {color:green} hadoop-openstack in the patch passed with JDK 
v1.7.0_101. {color} |
| {color:green}+1{color} | {color:green} unit {color} | {color:green}  0m 
23s{color} | {color:green} hadoop-aws in the patch passed with JDK v1.7.0_101. 
{color} |
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 
29s{color} | {color:green} The patch does not generate ASF License warnings. 
{color} |
| {color:black}{color} | {color:black} {color} | {color:black}107m 51s{color} | 
{color:black} {color} |
\\
\\
|| 

[jira] [Updated] (HADOOP-13332) Remove jackson 1.9.13 and switch all jackson code to 2.x code line

2016-07-01 Thread Akira Ajisaka (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13332?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Akira Ajisaka updated HADOOP-13332:
---
Attachment: HADOOP-13332.02.patch

02 patch
* Fixes javac, javadoc, checkstyle failures
* Fixes TestSLSRunner

> Remove jackson 1.9.13 and switch all jackson code to 2.x code line
> --
>
> Key: HADOOP-13332
> URL: https://issues.apache.org/jira/browse/HADOOP-13332
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: build
>Affects Versions: 2.8.0
>Reporter: PJ Fanning
>Assignee: Akira Ajisaka
> Attachments: HADOOP-13332.00.patch, HADOOP-13332.01.patch, 
> HADOOP-13332.02.patch
>
>
> This jackson 1.9 code line is no longer maintained. Upgrade to 2.x.
> Most changes from jackson 1.9 to 2.x just involve changing the package name.
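
For illustration, a minimal before/after sketch of the typical mechanical change (assuming the common ObjectMapper usage; not taken from the patch):

{code}
// Before (jackson 1.9):
//   import org.codehaus.jackson.map.ObjectMapper;
// After (jackson 2.x):
import com.fasterxml.jackson.databind.ObjectMapper;

public class JacksonMigrationSketch {
  public static void main(String[] args) throws Exception {
    ObjectMapper mapper = new ObjectMapper();
    // The API call itself is unchanged; only the package moved.
    System.out.println(mapper.writeValueAsString(new int[] {1, 2, 3}));
  }
}
{code}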



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Created] (HADOOP-13336) support cross-region operations in S3a

2016-07-01 Thread Steve Loughran (JIRA)
Steve Loughran created HADOOP-13336:
---

 Summary: support cross-region operations in S3a
 Key: HADOOP-13336
 URL: https://issues.apache.org/jira/browse/HADOOP-13336
 Project: Hadoop Common
  Issue Type: Sub-task
  Components: fs/s3
Affects Versions: 2.8.0
Reporter: Steve Loughran
Priority: Minor


S3a now supports different regions, by way of declaring the endpoint, but you 
can't do things like read in one region and write back in another (e.g. a distcp 
backup), because only one region can be specified in a configuration.

If s3a supported region declaration in the URL, e.g. s3a://b1.frankfurt 
s3a://b2.seol, then this would be possible. 

Swift does this with a full filesystem binding/config: endpoints, username, 
etc., in the XML file. Would we need to do that much? It'd be simpler initially 
to use a domain suffix of the URL to set the region of a bucket from the domain 
and have the aws library sort the details out itself, maybe with some config 
options for working with non-AWS infra.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13208) S3A listFiles(recursive=true) to do a bulk listObjects instead of walking the pseudo-tree of directories

2016-07-01 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13208?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-13208:

Status: Patch Available  (was: Open)

> S3A listFiles(recursive=true) to do a bulk listObjects instead of walking the 
> pseudo-tree of directories
> 
>
> Key: HADOOP-13208
> URL: https://issues.apache.org/jira/browse/HADOOP-13208
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs/s3
>Affects Versions: 2.8.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-13208-branch-2-001.patch, 
> HADOOP-13208-branch-2-007.patch
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> A major cost in split calculation against object stores turns out to be listing 
> the directory tree itself. That's because against S3, it takes S3A two HEADs 
> and two lists to list the content of any directory path (2 HEADs + 1 list for 
> getFileStatus(); the next list to query the contents).
> Listing a directory could be improved slightly by combining the final two 
> listings. However, a listing of a directory tree will still be 
> O(directories). In contrast, a recursive {{listFiles()}} operation should be 
> implementable by a bulk listing of all descendant paths; one List operation 
> per thousand descendants. 
> As the result of this call is an iterator, the ongoing listing can be 
> implemented within the iterator itself
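
As a hedged illustration of the bulk listing idea (a standalone sketch against the AWS SDK for Java, with a hypothetical bucket and prefix; not the attached patch):

{code}
// Hypothetical sketch, not the attached patch: list every descendant of a
// "directory" prefix in pages of up to 1000 keys, without walking the tree.
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.ListObjectsRequest;
import com.amazonaws.services.s3.model.ObjectListing;
import com.amazonaws.services.s3.model.S3ObjectSummary;

public class BulkListingSketch {
  public static void main(String[] args) {
    AmazonS3 s3 = new AmazonS3Client();            // credentials from the environment
    ListObjectsRequest request = new ListObjectsRequest()
        .withBucketName("example-bucket")          // hypothetical bucket
        .withPrefix("data/")                       // no delimiter => recursive listing
        .withMaxKeys(1000);
    ObjectListing listing = s3.listObjects(request);
    while (true) {
      for (S3ObjectSummary summary : listing.getObjectSummaries()) {
        System.out.println(summary.getKey() + " " + summary.getSize());
      }
      if (!listing.isTruncated()) {
        break;
      }
      listing = s3.listNextBatchOfObjects(listing);  // one List call per 1000 keys
    }
  }
}
{code}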



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11270) Seek behavior difference between NativeS3FsInputStream and DFSInputStream

2016-07-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15358829#comment-15358829
 ] 

Steve Loughran commented on HADOOP-11270:
-

The findbugs warning isn't to be ignored. The skip() result should be used to check 
the range actually skipped, and some policy considered if it != the desired amount. Warn? 
Repeat the seek? {{S3AInputStream}} does a warning.
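
A hedged sketch of the kind of check being suggested (variable names like {{in}}, {{LOG}}, {{bytesToSkip}} and {{targetPos}} are assumed; this is not the attached patch):

{code}
// Hypothetical fragment: act on the value returned by skip() instead of
// discarding it, e.g. warn (as S3AInputStream does) or fall back to seek().
long skipped = in.skip(bytesToSkip);
if (skipped != bytesToSkip) {
  LOG.warn("Requested to skip " + bytesToSkip + " bytes but only skipped "
      + skipped + "; falling back to an explicit seek()");
  in.seek(targetPos);   // one possible policy; retrying the skip is another
}
{code}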

> Seek behavior difference between NativeS3FsInputStream and DFSInputStream
> -
>
> Key: HADOOP-11270
> URL: https://issues.apache.org/jira/browse/HADOOP-11270
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.5.1
>Reporter: Venkata Puneet Ravuri
>Assignee: Venkata Puneet Ravuri
> Attachments: HADOOP-11270.02.patch, HADOOP-11270.03.patch, 
> HADOOP-11270.04.patch, HADOOP-11270.patch
>
>
> There is a difference in behavior while seeking a given file present
> in S3 using NativeS3FileSystem$NativeS3FsInputStream and a file present in 
> HDFS using DFSInputStream.
> If we seek to the end of the file in the case of NativeS3FsInputStream, it fails 
> with the exception "java.io.EOFException: Attempted to seek or read past the end 
> of the file". That is because a getObject request is issued on the S3 object 
> with the range start set to the length of the file.
> This is the complete exception stack:-
> Caused by: java.io.EOFException: Attempted to seek or read past the end of 
> the file
> at 
> org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.processException(Jets3tNativeFileSystemStore.java:462)
> at 
> org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.handleException(Jets3tNativeFileSystemStore.java:411)
> at 
> org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.retrieve(Jets3tNativeFileSystemStore.java:234)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:601)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> at org.apache.hadoop.fs.s3native.$Proxy17.retrieve(Unknown Source)
> at 
> org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsInputStream.seek(NativeS3FileSystem.java:205)
> at 
> org.apache.hadoop.fs.BufferedFSInputStream.seek(BufferedFSInputStream.java:96)
> at 
> org.apache.hadoop.fs.BufferedFSInputStream.skip(BufferedFSInputStream.java:67)
> at java.io.DataInputStream.skipBytes(DataInputStream.java:220)
> at org.apache.hadoop.hive.ql.io.RCFile$ValueBuffer.readFields(RCFile.java:739)
> at 
> org.apache.hadoop.hive.ql.io.RCFile$Reader.currentValueBuffer(RCFile.java:1720)
> at org.apache.hadoop.hive.ql.io.RCFile$Reader.getCurrentRow(RCFile.java:1898)
> at 
> org.apache.hadoop.hive.ql.io.RCFileRecordReader.next(RCFileRecordReader.java:149)
> at 
> org.apache.hadoop.hive.ql.io.RCFileRecordReader.next(RCFileRecordReader.java:44)
> at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:339)
> ... 15 more



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11270) Seek behavior difference between NativeS3FsInputStream and DFSInputStream

2016-07-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11270?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15358813#comment-15358813
 ] 

Steve Loughran commented on HADOOP-11270:
-

Usual S3x patch question: which S3 installation have you run the full 
hadoop-aws test suite against? Jenkins doesn't test that bit, see.

Also: is there a seek test that we need? I've done a lot of extra work on seek 
tests on S3A, and actually hoped that I'd fixed this issue there. If S3n still 
has it, then the other S3 and object store clients may still have it too. 

Could you see what you can add to {{AbstractContractSeekTest}} in branch-2 or 
trunk to reproduce the problem before your patch goes in, and make it go away after? 
And, if s3a, s3, swift and azure have the issue, have their subclasses skip 
that test for now ... that'd be extra patches.

> Seek behavior difference between NativeS3FsInputStream and DFSInputStream
> -
>
> Key: HADOOP-11270
> URL: https://issues.apache.org/jira/browse/HADOOP-11270
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.5.1
>Reporter: Venkata Puneet Ravuri
>Assignee: Venkata Puneet Ravuri
> Attachments: HADOOP-11270.02.patch, HADOOP-11270.03.patch, 
> HADOOP-11270.04.patch, HADOOP-11270.patch
>
>
> There is a difference in behavior while seeking a given file present
> in S3 using NativeS3FileSystem$NativeS3FsInputStream and a file present in 
> HDFS using DFSInputStream.
> If we seek to the end of the file in the case of NativeS3FsInputStream, it fails 
> with the exception "java.io.EOFException: Attempted to seek or read past the end 
> of the file". That is because a getObject request is issued on the S3 object 
> with the range start set to the length of the file.
> This is the complete exception stack:-
> Caused by: java.io.EOFException: Attempted to seek or read past the end of 
> the file
> at 
> org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.processException(Jets3tNativeFileSystemStore.java:462)
> at 
> org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.handleException(Jets3tNativeFileSystemStore.java:411)
> at 
> org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.retrieve(Jets3tNativeFileSystemStore.java:234)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at 
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at 
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:601)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
> at 
> org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> at org.apache.hadoop.fs.s3native.$Proxy17.retrieve(Unknown Source)
> at 
> org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsInputStream.seek(NativeS3FileSystem.java:205)
> at 
> org.apache.hadoop.fs.BufferedFSInputStream.seek(BufferedFSInputStream.java:96)
> at 
> org.apache.hadoop.fs.BufferedFSInputStream.skip(BufferedFSInputStream.java:67)
> at java.io.DataInputStream.skipBytes(DataInputStream.java:220)
> at org.apache.hadoop.hive.ql.io.RCFile$ValueBuffer.readFields(RCFile.java:739)
> at 
> org.apache.hadoop.hive.ql.io.RCFile$Reader.currentValueBuffer(RCFile.java:1720)
> at org.apache.hadoop.hive.ql.io.RCFile$Reader.getCurrentRow(RCFile.java:1898)
> at 
> org.apache.hadoop.hive.ql.io.RCFileRecordReader.next(RCFileRecordReader.java:149)
> at 
> org.apache.hadoop.hive.ql.io.RCFileRecordReader.next(RCFileRecordReader.java:44)
> at 
> org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:339)
> ... 15 more



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11270) Seek behavior difference between NativeS3FsInputStream and DFSInputStream

2016-07-01 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11270?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11270:

Labels:   (was: BB2015-05-TBR fs)

> Seek behavior difference between NativeS3FsInputStream and DFSInputStream
> -
>
> Key: HADOOP-11270
> URL: https://issues.apache.org/jira/browse/HADOOP-11270
> Project: Hadoop Common
>  Issue Type: Bug
>  Components: fs/s3
>Affects Versions: 2.5.1
>Reporter: Venkata Puneet Ravuri
>Assignee: Venkata Puneet Ravuri
> Attachments: HADOOP-11270.02.patch, HADOOP-11270.03.patch, 
> HADOOP-11270.04.patch, HADOOP-11270.patch
>
>
> There is a difference in behavior while seeking a given file present
> in S3 using NativeS3FileSystem$NativeS3FsInputStream and a file present in 
> HDFS using DFSInputStream.
> If we seek to the end of the file in the case of NativeS3FsInputStream, it fails 
> with the exception "java.io.EOFException: Attempted to seek or read past the end 
> of the file". That is because a getObject request is issued on the S3 object 
> with the range start set to the length of the file.
> This is the complete exception stack:-
> Caused by: java.io.EOFException: Attempted to seek or read past the end of the file
> at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.processException(Jets3tNativeFileSystemStore.java:462)
> at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.handleException(Jets3tNativeFileSystemStore.java:411)
> at org.apache.hadoop.fs.s3native.Jets3tNativeFileSystemStore.retrieve(Jets3tNativeFileSystemStore.java:234)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
> at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:601)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
> at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
> at org.apache.hadoop.fs.s3native.$Proxy17.retrieve(Unknown Source)
> at org.apache.hadoop.fs.s3native.NativeS3FileSystem$NativeS3FsInputStream.seek(NativeS3FileSystem.java:205)
> at org.apache.hadoop.fs.BufferedFSInputStream.seek(BufferedFSInputStream.java:96)
> at org.apache.hadoop.fs.BufferedFSInputStream.skip(BufferedFSInputStream.java:67)
> at java.io.DataInputStream.skipBytes(DataInputStream.java:220)
> at org.apache.hadoop.hive.ql.io.RCFile$ValueBuffer.readFields(RCFile.java:739)
> at org.apache.hadoop.hive.ql.io.RCFile$Reader.currentValueBuffer(RCFile.java:1720)
> at org.apache.hadoop.hive.ql.io.RCFile$Reader.getCurrentRow(RCFile.java:1898)
> at org.apache.hadoop.hive.ql.io.RCFileRecordReader.next(RCFileRecordReader.java:149)
> at org.apache.hadoop.hive.ql.io.RCFileRecordReader.next(RCFileRecordReader.java:44)
> at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:339)
> ... 15 more
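
For illustration, here is a minimal sketch of how a seek to the exact end of the file could be accepted without issuing a ranged getObject request, which is the behaviour DFSInputStream exhibits. This is not the actual NativeS3FsInputStream code; the class, field, and method names below are assumptions.

{code}
// A minimal sketch, assuming a stream that tracks its position and knows the
// object's length from its metadata. Illustrative names only.
class EofTolerantS3InputSketch {
  private final long fileLength;  // length reported by the object metadata
  private long pos;

  EofTolerantS3InputSketch(long fileLength) {
    this.fileLength = fileLength;
  }

  public synchronized void seek(long targetPos) throws java.io.IOException {
    if (targetPos < 0 || targetPos > fileLength) {
      throw new java.io.EOFException(
          "Attempted to seek or read past the end of the file");
    }
    pos = targetPos;
    if (targetPos == fileLength) {
      // Exactly at EOF: do not issue a ranged getObject, since a range
      // starting at the file length is what triggers the failure above.
      // The next read() should simply return -1.
      return;
    }
    // ...otherwise reopen the underlying S3 stream at 'pos' (ranged GET)...
  }
}
{code}

With a check like this, a read at end-of-file returns -1 instead of surfacing an EOFException from the S3 request.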



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-11601) Enhance FS spec & tests to mandate FileStatus.getBlocksize() >0 for non-empty files

2016-07-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15358713#comment-15358713
 ] 

Steve Loughran commented on HADOOP-11601:
-

I see, and I see the fix. I've edited the relevant comment.

> Enhance FS spec & tests to mandate FileStatus.getBlocksize() >0 for non-empty 
> files
> ---
>
> Key: HADOOP-11601
> URL: https://issues.apache.org/jira/browse/HADOOP-11601
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, test
>Affects Versions: 2.6.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-11601-001.patch, HADOOP-11601-002.patch, 
> HADOOP-11601-003.patch, HADOOP-11601-004.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> HADOOP-11584 has shown that the contract tests are not validating that
> {{FileStatus.getBlocksize()}} is >0, which analytics jobs need in order to
> partition their workload correctly.
> Clarify this in the spec text and add a test for it. The test MUST be
> designed to work against eventually consistent filesystems, where a newly
> written file may not be immediately visible to {{getFileStatus()}}, by
> retrying the operation if the FS declares it is an object store.
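
As a rough illustration of the kind of check being asked for (this is not the attached patch; the helper name, retry count, and sleep interval are assumptions), a contract-style assertion might look like:

{code}
import java.io.FileNotFoundException;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public final class BlockSizeContractSketch {
  /**
   * Assert that a non-empty file reports a block size greater than zero,
   * retrying getFileStatus() a few times so the check also works against
   * eventually consistent object stores.
   */
  static void assertBlockSizePositive(FileSystem fs, Path path)
      throws Exception {
    FileStatus status = null;
    for (int attempt = 0; attempt < 5 && status == null; attempt++) {
      try {
        status = fs.getFileStatus(path);
      } catch (FileNotFoundException notYetVisible) {
        Thread.sleep(1000L);  // file not visible yet; back off and retry
      }
    }
    if (status == null) {
      throw new AssertionError("File never became visible: " + path);
    }
    if (status.getLen() > 0 && status.getBlockSize() <= 0) {
      throw new AssertionError("Expected a block size > 0 for non-empty file "
          + path + " but was " + status.getBlockSize());
    }
  }
}
{code}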



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11601) Enhance FS spec & tests to mandate FileStatus.getBlocksize() >0 for non-empty files

2016-07-01 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11601:

Status: Open  (was: Patch Available)

> Enhance FS spec & tests to mandate FileStatus.getBlocksize() >0 for non-empty 
> files
> ---
>
> Key: HADOOP-11601
> URL: https://issues.apache.org/jira/browse/HADOOP-11601
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, test
>Affects Versions: 2.6.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-11601-001.patch, HADOOP-11601-002.patch, 
> HADOOP-11601-003.patch, HADOOP-11601-004.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> HADOOP-11584 has shown that the contract tests are not validating that
> {{FileStatus.getBlocksize()}} is >0, which analytics jobs need in order to
> partition their workload correctly.
> Clarify this in the spec text and add a test for it. The test MUST be
> designed to work against eventually consistent filesystems, where a newly
> written file may not be immediately visible to {{getFileStatus()}}, by
> retrying the operation if the FS declares it is an object store.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-11601) Enhance FS spec & tests to mandate FileStatus.getBlocksize() >0 for non-empty files

2016-07-01 Thread Steve Loughran (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-11601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Steve Loughran updated HADOOP-11601:

Status: Patch Available  (was: Open)

> Enhance FS spec & tests to mandate FileStatus.getBlocksize() >0 for non-empty 
> files
> ---
>
> Key: HADOOP-11601
> URL: https://issues.apache.org/jira/browse/HADOOP-11601
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, test
>Affects Versions: 2.6.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-11601-001.patch, HADOOP-11601-002.patch, 
> HADOOP-11601-003.patch, HADOOP-11601-004.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> HADOOP-11584 has shown that the contract tests are not validating that
> {{FileStatus.getBlocksize()}} is >0, which analytics jobs need in order to
> partition their workload correctly.
> Clarify this in the spec text and add a test for it. The test MUST be
> designed to work against eventually consistent filesystems, where a newly
> written file may not be immediately visible to {{getFileStatus()}}, by
> retrying the operation if the FS declares it is an object store.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Comment Edited] (HADOOP-11601) Enhance FS spec & tests to mandate FileStatus.getBlocksize() >0 for non-empty files

2016-07-01 Thread Steve Loughran (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-11601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15021105#comment-15021105
 ] 

Steve Loughran edited comment on HADOOP-11601 at 7/1/16 9:43 AM:
-

GitHub user steveloughran opened a pull request:

  



was (Author: githubbot):
GitHub user steveloughran opened a pull request:

https://github.com/apache/hadoop/pull/50

HADOOP-11601 

Enhance FS spec & tests to mandate FileStatus.getBlocksize() >0 for 
non-empty files

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/steveloughran/hadoop stevel/HADOOP-11601-min-blocksize

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/hadoop/pull/50.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #50


commit 19296717b8f33d9897c71a993319c0a90236fd00
Author: Steve Loughran 
Date:   2015-02-16T11:46:12Z

HADOOP-11601 Enhance FS spec & tests to mandate FileStatus.getBlocksize() >0 for non-empty files




> Enhance FS spec & tests to mandate FileStatus.getBlocksize() >0 for non-empty 
> files
> ---
>
> Key: HADOOP-11601
> URL: https://issues.apache.org/jira/browse/HADOOP-11601
> Project: Hadoop Common
>  Issue Type: Sub-task
>  Components: fs, test
>Affects Versions: 2.6.0
>Reporter: Steve Loughran
>Assignee: Steve Loughran
>Priority: Minor
> Attachments: HADOOP-11601-001.patch, HADOOP-11601-002.patch, 
> HADOOP-11601-003.patch, HADOOP-11601-004.patch
>
>   Original Estimate: 1h
>  Remaining Estimate: 1h
>
> HADOOP-11584 has shown that the contract tests are not validating that
> {{FileStatus.getBlocksize()}} is >0, which analytics jobs need in order to
> partition their workload correctly.
> Clarify this in the spec text and add a test for it. The test MUST be
> designed to work against eventually consistent filesystems, where a newly
> written file may not be immediately visible to {{getFileStatus()}}, by
> retrying the operation if the FS declares it is an object store.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Commented] (HADOOP-13314) Remove 'package-info.java' from 'test\java\org\apache\hadoop\fs\shell\' to remove eclipse compile error

2016-07-01 Thread Hudson (JIRA)

[ 
https://issues.apache.org/jira/browse/HADOOP-13314?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15358528#comment-15358528
 ] 

Hudson commented on HADOOP-13314:
-

SUCCESS: Integrated in Hadoop-trunk-Commit #10042 (See 
[https://builds.apache.org/job/Hadoop-trunk-Commit/10042/])
HADOOP-13314. Remove 'package-info.java' from (vinayakumarb: rev 
c25021fb7196f498ccf1319dbd0c7f948f8518c1)
* 
hadoop-common-project/hadoop-common/src/test/java/org/apache/hadoop/fs/shell/package-info.java


> Remove 'package-info.java' from 'test\java\org\apache\hadoop\fs\shell\' to 
> remove eclipse compile error
> ---
>
> Key: HADOOP-13314
> URL: https://issues.apache.org/jira/browse/HADOOP-13314
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: HADOOP-13314-01.patch
>
>
> HADOOP-13079 added package-info.java in test\java\org\apache\hadoop\fs\shell\
> to avoid a checkstyle warning.
> But this resulted in an Eclipse compile error, "The type package-info is
> already defined", because a package-info.java for the same package is
> already present in the src folder.
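
For context, a package-info.java is just a package declaration with optional Javadoc and annotations; the file below is a hypothetical example, not the actual contents of either file. Having one such file under both src and test for the same package is what triggers the Eclipse "already defined" error.

{code}
/**
 * Hypothetical package documentation; the real files' Javadoc differs.
 */
package org.apache.hadoop.fs.shell;
{code}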



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org



[jira] [Updated] (HADOOP-13314) Remove 'package-info.java' from 'test\java\org\apache\hadoop\fs\shell\' to remove eclipse compile error

2016-07-01 Thread Vinayakumar B (JIRA)

 [ 
https://issues.apache.org/jira/browse/HADOOP-13314?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Vinayakumar B updated HADOOP-13314:
---
   Resolution: Fixed
 Hadoop Flags: Reviewed
Fix Version/s: 2.8.0
   Status: Resolved  (was: Patch Available)

Committed to trunk, branch-2 and branch-2.8.
Thanks [~ajisakaa]

> Remove 'package-info.java' from 'test\java\org\apache\hadoop\fs\shell\' to 
> remove eclipse compile error
> ---
>
> Key: HADOOP-13314
> URL: https://issues.apache.org/jira/browse/HADOOP-13314
> Project: Hadoop Common
>  Issue Type: Bug
>Reporter: Vinayakumar B
>Assignee: Vinayakumar B
>Priority: Trivial
> Fix For: 2.8.0
>
> Attachments: HADOOP-13314-01.patch
>
>
> HADOOP-13079 added package-info.java in test\java\org\apache\hadoop\fs\shell\
> to avoid a checkstyle warning.
> But this resulted in an Eclipse compile error, "The type package-info is
> already defined", because a package-info.java for the same package is
> already present in the src folder.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

-
To unsubscribe, e-mail: common-issues-unsubscr...@hadoop.apache.org
For additional commands, e-mail: common-issues-h...@hadoop.apache.org